\section{Introduction}
The currently accepted paradigm in observational astronomy is that the universe in which we live is undergoing an accelerated expansion. Recent determinations of the Hubble constant from CMB data \cite{planck2016astronomy} and from local measurements \cite{riess20162} support this. The simplest theoretical model incorporating such an accelerating universe is the $\Lambda$CDM model; see e.g. \cite{buchert2016observational} for a review and open tensions. Despite the fact that there exist alternative explanations for the accelerated expansion, such as \cite{racz2017concordance}, we will adopt in the present paper the view that $\Lambda$CDM is a correct description of the expansion of the universe, and that the Einstein-de Sitter equations are the fundamental equations describing gravity.
In the present paper we will argue that the cosmological constant can be assigned a similar function in the gravitational realm to the one $\hbar$ plays for matter. The correspondence principle in quantum mechanics is the notion that when the scales of the action in a quantum mechanical system become large compared to $\hbar$, the system approximates a corresponding classical system. This quantum-classical correspondence gives a heuristic for recovering a classical system from a quantum system: simply take the limit $\hbar \to 0$. In the present paper we will study the limits of Schwarzschild-de Sitter black holes as the cosmological constant goes to zero, and we will argue that this allows one to define a scale of validity for the Einstein equations of an isolated gravitational system in a de Sitter universe. To do so, we will employ Geroch's notion for the limits of a family of spacetimes \cite{geroch_limits_1969} applied to Schwarzschild-de Sitter.
To study the way in which the limit is approached in detail, we use an embedding of the quotient of Schwarzschild-de Sitter space over the sphere into $AdS_3$ space.\footnote{That is, $(2+1)$-dimensional anti-de Sitter space.} This embedding was first introduced in \cite{bengtsson_classics_2014} for the case of Reissner-Nordstr\"{o}m.
For calculations on the scale of astrophysical systems the cosmological constant is usually dropped and the spacetime is assumed to be asymptotically flat. This approximation is often employed with little justification, other than a brief citation of the small value $\Lambda \sim 10^{-52} \, m^{-2}$ for the cosmological constant. Recent work that takes the cosmological setting into consideration suggests that the effects are by no means trivial. Prominent examples are the quadrupole formula for gravitational energy loss \cite{ashtekar2014asymptotics,ashtekar2015asymptotics,ashtekar2015asymptotics2,ashtekar2016gravitational}, as well as recent work on the gravitational memory effect in de Sitter spacetimes \cite{bieri2016gravitational}. Note that even when one assumes that the universe is spatially flat, asymptotic flatness has to be employed with care, since the definition of asymptotic flatness includes the requirement that the matter density falls off sufficiently fast towards infinity. This condition is obviously violated for a spacelike slice in a homogeneous, spatially flat FLRW universe.\footnote{This was pointed out to the authors by Beatrice Bonga in private communication.} Interestingly, the problem of global non-linear stability for black holes has recently been solved for slowly rotating black holes in a de Sitter universe \cite{hintz2016global},\footnote{This is arguably the physically relevant case if the cosmological constant is, in fact, positive.} while the corresponding problem for asymptotically flat spacetimes remains one of the big challenges in the field of mathematical relativity; see \cite{ma2017uniform,ma2017uniform2,aksteiner2016new,andersson2017morawetz,dafermos2017boundedness,finster2016linear} for recent progress on the linearised problem and \cite{klainerman2017global} for the full non-linear problem under strong constraints. In this paper we will use the qualitative properties of how the $\Lambda \to 0$ limit is approached to give a heuristic argument that the Einstein equations are a legitimate approximation to the fundamental Einstein-de Sitter equations for calculations in the short-range regime. For gravitational memory this was recently worked out in \cite{bieri2017gravitational}, where the authors found that for low redshift, i.e. for nearby sources, and high frequencies the gravitational memory in a $\Lambda$CDM background is equivalent to that in flat space, while for large redshift there is a significant deviation.
\subsection*{Overview of the paper}
The paper is organized in the following way. In Section \ref{sec:ssds} we will introduce and review the relevant background, including the Schwarzschild-de Sitter spacetimes. Then, in Section \ref{sec:math}, we discuss Geroch's notion for the limits of spacetimes. In Section \ref{sec:embedding} we discuss how the embedding of Schwarzschild-de Sitter into $AdS_3$ is performed. The resulting embeddings are then presented in Section \ref{sec:pics}. Finally in Section \ref{sec:phys}, we give a possible physical interpretation of our findings.
\section{The Schwarzschild-de Sitter Spacetime}
\label{sec:ssds}
The Schwarzschild-de Sitter spacetime is the spherically symmetric solution to the vacuum Einstein-de Sitter equations\footnote{Note that, until section \ref{sec:phys}, we will use units such that $\hbar = G = c = 1$.}
\begin{equation}
R_{\mu \nu} - \frac{1}{2}R g_{\mu \nu} + \Lambda g_{\mu \nu} = 0
\end{equation}
with $\Lambda >0$. In Schwarzschild coordinates the metric is given by
\begin{equation}\label{eq:ssds}
ds^2 = -f(r) dt^2 + \frac{1}{f(r)} dr^2 + r^2 d \Omega^2
\end{equation}
with
\begin{equation}
f(r) = 1-\frac{2M}{r} - \frac{\Lambda}{3}r^2,
\end{equation}
where $M$ and $\Lambda$ are regarded as free parameters. The spacetime is spherically symmetric and static. We define the domain of outer communication as the region where the Killing vector field $\partial_t$, whose orbits are open (non-closed) curves, is timelike. The metric \eqref{eq:ssds} has a coordinate singularity when
\begin{equation} \label{eq:radialfunc}
1-\frac{2M}{r} - \frac{\Lambda}{3} r^2 = 0,
\end{equation}
where the norm of the Killing vector $\partial_t$ switches sign, indicating the location of a horizon. Note that this equation always has at least one real solution independent of the choice of parameters. For parameters in the subextremal range there are three real solutions to equation \eqref{eq:radialfunc}. They can be written explicitly as
\begin{align}
\label{eq:rH}
r_H &= \frac{2}{\sqrt{\Lambda}} \cos \left[\frac{1}{3} \arccos (3M\sqrt{\Lambda}) + \frac{\pi}{3} \right] \\
r_C &= \frac{2}{\sqrt{\Lambda}} \cos \left[\frac{1}{3} \arccos (3M\sqrt{\Lambda}) - \frac{\pi}{3} \right] \\
r_U &= -(r_H + r_C).
\end{align}
In this work we are only interested in the coordinate range where $r\in(0,\infty)$ and, since $r_U$ is always negative, it will not be relevant to our discussion.
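As a quick sanity check, these expressions can be evaluated numerically. The following minimal sketch (assuming standard NumPy, geometric units, and illustrative parameter values) verifies that $r_H$ and $r_C$ are indeed roots of $f$ in the subextremal range:
\begin{verbatim}
import numpy as np

# Illustrative subextremal parameters in geometric units (G = c = 1).
M, Lam = 1.0, 0.01                       # 9 * Lam * M**2 = 0.09 < 1

def f(r):
    """Radial function of the Schwarzschild-de Sitter metric."""
    return 1.0 - 2.0 * M / r - Lam * r**2 / 3.0

theta = np.arccos(3.0 * M * np.sqrt(Lam)) / 3.0
r_H = 2.0 / np.sqrt(Lam) * np.cos(theta + np.pi / 3.0)   # black hole horizon
r_C = 2.0 / np.sqrt(Lam) * np.cos(theta - np.pi / 3.0)   # cosmological horizon

assert 9.0 * Lam * M**2 < 1.0
assert abs(f(r_H)) < 1e-10 and abs(f(r_C)) < 1e-10
print(r_H, r_C)                          # approximately 2.03 and 16.2
\end{verbatim}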
In the subextremal case, $r_H$ is the location of the black hole horizon and $r_C$ is the location of the cosmological horizon. It is the region between those two where the Killing vector field $\partial_t$ is timelike. Note that Schwarzschild-de Sitter becomes extremal when $r_H$ and $r_C$ coincide, which is the case when $9\Lambda M^2=1$. We will primarily restrict ourselves to the subextremal case, where $0< \Lambda < \frac{1}{9M^2}$. Note that the photon sphere in Schwarzschild-de Sitter is located at
\begin{equation}
r_{ph}=3M,
\end{equation}
independent of the value of $\Lambda$.\footnote{See \cite{geometryphotonsurfaces} for a derivation.}
The conformal diagram for Schwarzschild-de Sitter is given in Figure \ref{fig:ssds} from which we can see immediately, by gluing two consecutive cosmological horizons together, that its topology is given by $S^1\times S^2\times \Reals$.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/ssds.png}
\caption{Conformal diagram for the maximal extension of the subextremal Schwarzschild-de Sitter space-time. The blue lines correspond to hypersurfaces of constant $t$, the red lines to hypersurfaces of constant $r$. $\mathcal{H^\pm}$ are the future and past event horizons located at $r=r_H$, while $\mathcal{CH}^\pm$ are the future/past cosmological horizons located at $r=r_C$. Timelike future and past infinity is indicated by $i^\pm$. The singularity is located at $r=0$. Here $r=\infty$ is a spacelike conformal boundary.}
\label{fig:ssds}
\end{figure}
In the following when we speak about limits of spacetime properties, we are simply discussing the properties of coordinate functions. This is not to be confused with the limits of spacetimes that we consider later on in the paper, although some intuitive results do carry over.
Since the location of the photon sphere is constant for a fixed $M$, it is not surprising that in the limit $\Lambda \to \frac{1}{9M^2}$, the two relevant horizons approach this value:
\begin{align*}
\lim_{\Lambda \to \frac{1}{9M^2}} r_H &= 3M \\
\lim_{\Lambda \to \frac{1}{9M^2}} r_C &= 3M.
\end{align*}
On the other hand, the limit $\Lambda \to 0$ for these functions is
\begin{align*}
\lim_{\Lambda \to 0} r_H &= 2M \\
\lim_{\Lambda \to 0} r_C &= \infty.
\end{align*}
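The rate at which these limits are approached can be made explicit. Expanding the exact expressions for small $\Lambda M^2$ gives, to leading order,
\begin{align*}
r_H &= 2M\left(1 + \frac{4}{3}\Lambda M^2 + \mathcal{O}(\Lambda^2 M^4)\right), \\
r_C &= \sqrt{\frac{3}{\Lambda}} - M + \mathcal{O}(M^2\sqrt{\Lambda}),
\end{align*}
so $r_H$ approaches $2M$ linearly in $\Lambda$, while $r_C$ diverges like $\Lambda^{-1/2}$.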
In this limit, the radius of the black hole horizon takes the same value as the black hole horizon from the Schwarzschild metric, for which the function $f(r)$ in the metric \eqref{eq:ssds} is given by
\begin{equation}
f(r)=1-\frac{2M}{r}.
\end{equation}
The domain of outer communication for Schwarzschild stretches out an infinite distance from the black hole horizon, consistent with the cosmological horizon extending to infinity. The Schwarzschild metric solves the Einstein vacuum equations
\begin{equation}
R_{\mu \nu} = 0,
\end{equation}
and is asymptotically flat. Its conformal diagram is given in Figure \ref{fig:ss}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{img/schwarzschild.png}
\caption{Conformal diagram for the maximal extension of the Schwarzschild space-time. The blue lines correspond to hypersurfaces of constant $t$, the red lines to hypersurfaces of constant $r$. $\mathcal{H^\pm}$ are the future and past event horizons located at $r=2M$, while $\mathcal{I}^\pm$ are the future/past null infinities. Timelike future and past infinity is indicated by $i^\pm$, while $i^0$ indicates spacelike infinity. The singularity is located at $r=0$.}
\label{fig:ss}
\end{figure}
\section{Limits of spacetimes}
\label{sec:math}
Lorentzian metrics appearing in general relativity often come in families parameterised by one or more constants, whose values are not fixed by the Einstein field equations. Consider, for example, the Kerr family of solutions. In this family, there are two free parameters, corresponding to the mass $M$ and the rotation parameter $a$. It is a natural question to ask what type of spacetime we obtain if we reduce, say, the rotation parameter $a$ to 0.
Na\"{i}vely, the answer to this question consists of simply setting $a = 0$ in the coordinate description of the metric.\footnote{Or taking the limit $a \to 0$ if required.} This approach has significant issues however, since one can first perform a coordinate transformation and then take the same limit to obtain a completely different spacetime! This fact seems at odds with the notion that coordinate changes in general relativity aren't supposed to affect anything.
Geroch provides the resolution to this paradox by asserting that it is only meaningful to take limits if we first introduce a method of comparing points in different spacetimes \cite{geroch_limits_1969}. That is, we need a way of deciding which points are `the same' in spacetimes which have different values for the chosen parameter. There is no canonical way of doing this, and so any such limit will implicitly involve a choice.
Let us now describe Geroch's prescription in a little detail. We begin with a one-parameter family of spacetimes $M_{\lambda}$, and wish to assign a sensible limiting spacetime to this family as we take the parameter to some fixed value, say $\lambda \to 0$. We assemble the family of spacetimes into a smooth 5-dimensional manifold, $\mathcal{M}$, where each $M_{\lambda}$ is a smooth 4-dimensional submanifold of $\mathcal{M}$.\footnote{Note that unless otherwise specified, we assume all manifolds are Hausdorff.} The manifold $\mathcal{M}$ is foliated by these submanifolds, and the parameter $\lambda$ defines a scalar field on $\mathcal{M}$ which is constant on each leaf of the foliation. We assume the metric tensors $g_{ab} (\lambda)$ combine to form a smooth metric $\mathcal{G}$ on $\mathcal{M}$ with signature $(0, -, +, +, +)$. The data defined by $(\mathcal{M}, \mathcal{G})$ is equivalent to the data defined by the family $(M_{\lambda}, g(\lambda))$. A limiting spacetime is then obtained by defining a suitable boundary $\partial \mathcal{M}$ for $\mathcal{M}$, see Figure \ref{fig:foliation}. More specifically, a limit space is a 5-dimensional manifold $\overline{\mathcal{M}}$ with boundary $\partial \overline{\mathcal{M}}$, a metric $\overline{\mathcal{G}}$ and a scalar field $\overline{\lambda}$ on $\overline{\mathcal{M}}$, and a smooth injective map $\Psi$ from $\mathcal{M}$ into the interior of $\overline{\mathcal{M}}$ satisfying:
\begin{itemize}
\item $\Psi$ takes $\mathcal{G}$ into $\overline{\mathcal{G}}$, and $\lambda$ into $\overline{\lambda}$.
\item $\partial \overline{\mathcal{M}}$ is connected, non-empty, and $\overline{\lambda} = 0$ when restricted to $\partial \overline{\mathcal{M}}$.
\item $\overline{\mathcal{G}}$ has signature $(0,-,+,+,+)$ on $\partial \overline{\mathcal{M}}$.
\end{itemize}
Geroch goes on to define a \emph{family of frames} - that is, for each leaf of the foliation, one chooses a fiducial point $p_{\lambda}$ and an orthonormal frame $\omega(\lambda)$ at $p_{\lambda}$, and identifies such points and frames for each $\lambda$. Then, by calculating geodesics from the fiducial point to any other point, we have a way of comparing points in the different spacetimes. Geroch then states that such a choice of a family of frames either defines no limit space, or else determines a unique maximal limit space.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{img/foliation.png}
\caption{A cartoon depiction of the Geroch foliation.}
\label{fig:foliation}
\end{figure}
How does this connect to our intuitive notion of simply taking the limits in the coordinate representation of the metric? Choosing a coordinate system is implicitly choosing a point,\footnote{Actually, it implicitly chooses any one of the points in the open set on which the coordinates are defined.} and an orthonormal frame at that point, for each value of the parameter $\lambda$. Such a choice of coordinates therefore determines a family of frames, and by Geroch's theorem, a limiting spacetime. There is no guarantee that a different choice of coordinates will result in the same limiting spacetime.
To illustrate, let us look at the limit of Schwarzschild-de Sitter as the value of the cosmological constant goes to zero, and take as our fiducial point a point $p_H$ on the bifurcation sphere, shown in Figure \ref{fig:ssdstoss}. A natural question to ask is whether points in block VI exist in the limit. Geodesics from $p_H$ to a point $p_6$ in region VI must first pass through $r = r_C$. That is,
\begin{align*}
d(p_H,p_6) = d(p_H,r_C) + d(r_C,p_6).
\end{align*}
But the first term diverges in the limit, so
\begin{align*}
\lim_{\Lambda \to 0} d(p_H,p_6) = \infty,
\end{align*}
and it follows that $p_6$ cannot survive in the limit. A similar argument shows that region V cannot survive either.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{img/ssdstoss.png}
\caption{The conformal diagram of Schwarzschild-de Sitter and the conformal diagram of the limiting Schwarzschild spacetime.}
\label{fig:ssdstoss}
\end{figure}
Note that choosing $p_C$ as the fiducial point could result in a completely different limiting spacetime; however, we will not investigate this question in the present work.
\section{An embedding into anti-de Sitter space}
\label{sec:embedding}
Geroch's notion of limits of spacetime is somewhat abstract, so we shall use the formalism of \cite{bengtsson_classics_2014} to implement the Geroch procedure and describe the associated limits. Following \cite{bengtsson_classics_2014}, we embed the entire one-parameter family of spacetimes into a fixed ambient space, which we take to be $AdS_3$. Each spacetime touches the ambient space at a definite point, the origin of the $AdS_3$ space, and the tangent spaces (and therefore an orthonormal frame) coincide at that point. It follows that the conditions of Geroch's limit theorem are met, and we can therefore uniquely assign a limiting spacetime. Of course, the limiting spacetime will depend on the points we are identifying, that is, on the embedding. There is, in general, no canonical procedure for selecting points in the different spacetimes which we may regard as ``the same''. We will choose this fiducial point to be a point on the bifurcation sphere, $p_H$.
Since our family of spacetimes is spherically symmetric, it is enough to embed the 1+1 dimensional spacetime $\Sigma$, described by the metric
\begin{align}
\label{eq:2DBHmetric}
ds^2 &= - f(r) dt^2 + \frac{1}{f(r)} dr^2.
\end{align}
The embedding of $\Sigma$ into $AdS_3$ is determined by the following equations
\begin{subequations}
\label{eq:AdS3embedding}
\begin{align}
X &= \sqrt{1+a^2 f(r)} \, \sinh{\left( g(r) \right)} \\
Y &= a \sqrt{f(r)} \cosh{\left( \frac{t}{a} \right)} \\
U &= a \sqrt{f(r)} \sinh{\left( \frac{t}{a} \right)} \\
V &= \sqrt{1+a^2 f(r)} \, \cosh{ \left( g(r) \right)}.
\end{align}
\end{subequations}
The parameter $a$ is a constant which we choose for convenience to be $\frac{1}{\kappa}$, where $\kappa$ is the surface gravity of the black hole. The functions $(X,Y,U,V)$ are coordinates for the $AdS_3$ space, thought of as the hypersurface $X^2 +Y^2 - U^2-V^2 = -1$ in $\mathbb{R}^4$ (note that the embedding (\ref{eq:AdS3embedding}) satisfies this constraint identically, since $-(1+a^2 f)+a^2 f=-1$), endowed with the metric
\begin{align}
\label{eq:AdS3metric}
ds^2 &= dX^2 + dY^2 - dU^2 - dV^2.
\end{align}
Since we want this embedding to be an isometric embedding of our spacetime into $AdS_3$, we insist that the induced metric, determined by the ambient $AdS_3$ metric (\ref{eq:AdS3metric}) and the embedding (\ref{eq:AdS3embedding}), matches the black hole metric (\ref{eq:2DBHmetric}). This will occur when the function $g(r)$ satisfies the differential equation
\begin{align}
\label{eq:embeddingDE}
\big( g'(r) \big)^2 &= \frac{1 + a^2 f - \frac{a^2 (f')^2}{4}}{f \big( 1+a^2f \big)^2}.
\end{align}
Note that, so far, the only difference between the setup here and the setup in \cite{bengtsson_classics_2014} is the form of the function $f(r)$. Determining the embedding therefore amounts to solving the differential equation (\ref{eq:embeddingDE}) for the function $g(r)$, which we will do numerically. By choosing $g(r_H) = 0$, we are able to ensure that the black hole horizon for each embedding touches the point $(X,Y,U,V) = (0,0,0,1)$ in the ambient $AdS_3$ space.
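To make the numerical step concrete, the following is a minimal sketch (assuming standard NumPy/SciPy, geometric units, and illustrative parameter values) of integrating (\ref{eq:embeddingDE}) outwards from the black hole horizon; the identification $\kappa = \frac{1}{2}f'(r_H)$ is the standard surface gravity for metrics of the form \eqref{eq:ssds}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M, Lam = 1.0, 0.1 / 9.0                  # subextremal: 9 * Lam * M**2 = 0.1

def f(r):
    return 1.0 - 2.0 * M / r - Lam * r**2 / 3.0

def fp(r):
    return 2.0 * M / r**2 - 2.0 * Lam * r / 3.0

theta = np.arccos(3.0 * M * np.sqrt(Lam)) / 3.0
r_H = 2.0 / np.sqrt(Lam) * np.cos(theta + np.pi / 3.0)
r_C = 2.0 / np.sqrt(Lam) * np.cos(theta - np.pi / 3.0)

a = 2.0 / fp(r_H)                        # a = 1/kappa with kappa = f'(r_H)/2

def gprime(r, g):
    fr, fpr = f(r), fp(r)
    num = 1.0 + a**2 * fr - a**2 * fpr**2 / 4.0
    den = fr * (1.0 + a**2 * fr)**2
    # Both num and den vanish at r = r_H (this is what a = 1/kappa achieves),
    # so the quotient stays finite there; clip small negative round-off.
    return [np.sqrt(max(num / den, 0.0))]

# Integrate outwards with g(r_H) = 0, stopping just inside the cosmological
# horizon, where g' diverges (cf. the cusp at the cosmological horizon
# visible in the figures).
sol = solve_ivp(gprime, [r_H + 1e-6, r_C - 1e-3], [0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)
\end{verbatim}
The embedding coordinates then follow by inserting the interpolant for $g(r)$ into (\ref{eq:AdS3embedding}).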
In order to visualise the embeddings, we will use the so-called sausage coordinates $(x,y,\tau)$ for $AdS_3$. These coordinates are related to the embedding coordinates $(X,Y,U,V)$ by:
\begin{equation*}
\begin{aligned}[c]
X &= \frac{2x}{1-\rho^2} \\[1em]
Y &= \frac{2y}{1-\rho^2}
\end{aligned}
\qquad
\begin{aligned}[c]
U &= \frac{1 + \rho^2}{1-\rho^2} \sin \tau \\[1em]
V &= \frac{1 + \rho^2}{1-\rho^2} \cos \tau,
\end{aligned}
\end{equation*}
where $\rho^2 = x^2 + y^2$ and $0\leq \rho < 1$. The sausage coordinates realise $AdS_3$ as a solid cylinder in $\mathbb{R}^3$. Slices of constant $\tau$ in this cylinder are Poincar\'{e} disks, and the embedding of $\Sigma$ into the $AdS_3$ space now appears as a two dimensional sheet inside the solid cylinder. We refer the reader to the appendix of \cite{bengtsson_classics_2014} for a nice discussion of the geometric properties of this embedding.
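For plotting one needs the inverse of these relations. A small helper (a sketch, assuming the input points lie on the $AdS_3$ hyperboloid $X^2+Y^2-U^2-V^2=-1$) reads:
\begin{verbatim}
import numpy as np

def to_sausage(X, Y, U, V):
    """Map hyperboloid coordinates (X, Y, U, V) to sausage coordinates."""
    tau = np.arctan2(U, V)
    s = np.sqrt(U**2 + V**2)        # equals (1 + rho^2)/(1 - rho^2) >= 1
    rho2 = (s - 1.0) / (s + 1.0)
    x = 0.5 * (1.0 - rho2) * X
    y = 0.5 * (1.0 - rho2) * Y
    return x, y, tau
\end{verbatim}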
\\
\section{Illustrations}
\label{sec:pics}
When we embed a Schwarzschild-de Sitter spacetime, we have to choose the values of $M$ and $\Lambda$ for a given embedding. A straightforward way to do this is to fix $M$ to some convenient value, say $M = 1$, and then study the embeddings as one varies $\Lambda$. A representative embedding is shown in Figures \ref{fig:basicssdsembeddinga} and \ref{fig:basicssdsembeddingb}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/Option1a.png}
\caption{An embedding of Schwarzschild-de Sitter, with $\Lambda M^2 = \frac{1}{10}$. The $AdS$ cylinder is being viewed from the left. One of the sheets has been made translucent to aid visualisation.}
\label{fig:basicssdsembeddinga}
\end{figure}
\begin{figure}
\begin{multicols}{2}
\includegraphics[width=0.5\textwidth]{img/Option1b.png}\par
\includegraphics[width=0.5\textwidth]{img/Option1c.png}
\end{multicols}
\caption{Views of the embedding in Figure \ref{fig:basicssdsembeddinga} from above (image on left) and the front (image on right). Note that these figures have been produced in Mathematica from a three-dimensional figure, and the pictures are stereographic projections from the described viewpoints.}
\label{fig:basicssdsembeddingb}
\end{figure}
In Figure \ref{fig:basiccircles}, we plot the $\tau = 0$ slice of this embedding, together with the $\tau = 0$ slice of the Schwarzschild embedding of the same mass.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/basicsandssds.pdf}
\caption{The $\tau = 0$ slice of the embedding in Figure \ref{fig:basicssdsembeddinga}, and the embedding of Schwarzschild of the same mass. The two embeddings touch at the origin of the ambient $AdS_3$ space.}
\label{fig:basiccircles}
\end{figure}
An unpleasant feature of this picture is the discrepancy between the embedding of the Schwarzschild-de Sitter domain of outer communication and the embedding of the Schwarzschild domain of outer communication. The physical interpretation discussed in Section \ref{sec:phys} involves a comparison between the near horizon geometry of Schwarzschild and Schwarzschild-de Sitter black holes. The key point is that when comparing these black holes, the near-horizon geometry only matches once we adjust the relative masses. To achieve this, we consider a mass parameter $M = M(\Lambda)$, varying with $\Lambda$ in such a way that the horizon area is kept constant. That is, we want to fix the radius of the black hole horizon to be $r = r_H = 2 \mu$, where $\mu$ is the mass of some reference Schwarzschild spacetime. Note that this mass-fixing procedure is equivalent to changing the mass of the reference Schwarzschild black hole. The $\tau = 0$ slice of the resulting embeddings provides a much cleaner comparison between the Schwarzschild and the Schwarzschild-de Sitter embeddings, as seen in Figure \ref{fig:masscorrectedcircles}. We shall employ this mass-correction for the remainder of the present work.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/masscorrectedsandssds.pdf}
\caption{The $\tau = 0$ slice of the embedding in Figure \ref{fig:basicssdsembeddinga}, and the embedding of the mass-corrected Schwarzschild. The two embeddings touch at the origin of the ambient $AdS_3$ space.}
\label{fig:masscorrectedcircles}
\end{figure}
Before we elaborate more on the physical interpretation, let us first make a few comments on how to view the embeddings we have already obtained. The two sheets of the embedding in Figures \ref{fig:basicssdsembeddinga} and \ref{fig:basicssdsembeddingb} correspond to regions I and II in the Schwarzschild-de Sitter conformal diagram in Figure \ref{fig:ssds}. In the $\tau = 0$ slice, the centre of the disk corresponds to the origin of the $AdS_3$ space, and to the event horizon of the embeddings. The circle $x^2 + y^2 = 1$, corresponding to the boundary of the solid cylinder in Figures \ref{fig:basicssdsembeddinga} and \ref{fig:basicssdsembeddingb}, is an infinite metric distance away from the origin. The blue line is the intersection of the embedding of the Schwarzschild-de Sitter spacetime with this plane, and the other intersection of the blue line with the $y=0$ line corresponds to the cosmological horizon, $r_C$. The fact that the Schwarzschild-de Sitter spacetime is a smooth manifold of topology $S^1\times S^2 \times \Reals$ and that the embedding is isometric suggests that the cuspy nature of this intersection is a numerical artefact. The red line is the intersection of the embedding of the Schwarzschild black hole with the plane $\tau = 0$. Note that the Schwarzschild spacetime reaches the edge of the $AdS_3$ space - points in the Schwarzschild domain of outer communication can be arbitrarily far from the event horizon.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/differentlambda.png}
\caption{A plot of the $\tau = 0$ slices for various embeddings. The values of $\Lambda$ are such that $9 \Lambda M^2$ is given by $\left[ \frac{9}{10} (\textrm{blue}), \frac{7}{10}, \frac{5}{10}, \frac{3}{10} ,\frac{1}{10} (\textrm{red}) \right] $. The point at which $f'(r) = 0$ is represented on each embedding by a solid dot (See Section \ref{subsec:hierarchy} for more details).}
\label{fig:differentlambda}
\end{figure}
\section{Physical interpretation}
\label{sec:phys}
In this section, we will use the illustrations of the previous section to establish a heuristic argument in favour of a hierarchy of validity between the Einstein-de Sitter equations and the Einstein equations.
\subsection{Schwarzschild mass correction in a de-Sitter Universe}
When embedding the Schwarzschild-de Sitter black holes into $AdS_3$, we had to choose the mass parameter of the black hole to be a function of $\Lambda$ to guarantee that the black hole horizon area remained constant. By identifying the radius of the black hole horizon $r = r_H$ with the radius of a reference Schwarzschild black hole horizon $r = 2 \mu$, we obtain a relation between the mass parameter of the Schwarzschild-de Sitter spacetime $M$ and the effective mass of the reference Schwarzschild black hole $\mu$. Doing this na\"{i}vely by using the expression (\ref{eq:rH}) for $r_H$, we obtain
\begin{equation}
\label{Mlambda}
M =\frac{1}{3\sqrt{\Lambda}}\cos \Big( 3 \arccos(\mu \sqrt{\Lambda})-\pi \Big)
\end{equation}
Note that since $M = M(\Lambda,\mu)$, the extremality condition $9\Lambda M^2 < 1$ changes, and now becomes
\begin{align*}
\Lambda < \frac{1}{4 \mu^2}.
\end{align*}
A much simpler expression can be obtained by noting that $f(r_H) = 0$, and so fixing the horizon at $r_H = 2 \mu$ means that we require
\begin{equation}
f(2\mu) = 1 - \frac{2M}{2\mu} - \frac{\Lambda}{3}\left(2\mu \right)^2 = 0
\end{equation}
Rearranging this expression for $M$ gives us
\begin{equation}
\label{eq:correctedmass}
M = \mu - \frac{4 \Lambda}{3} \mu^3,
\end{equation}
which is identical to the expression (\ref{Mlambda}).
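As a quick consistency check, the two expressions can be compared numerically (a sketch in geometric units, with $\mu = 1$ and illustrative values of $\Lambda$ below the bound $\frac{1}{4\mu^2}$):
\begin{verbatim}
import numpy as np

mu = 1.0
for Lam in [1e-4, 1e-2, 0.2]:            # all below 1 / (4 * mu**2)
    M_closed = np.cos(3.0 * np.arccos(mu * np.sqrt(Lam)) - np.pi) \
               / (3.0 * np.sqrt(Lam))
    M_simple = mu - 4.0 * Lam * mu**3 / 3.0
    assert np.isclose(M_closed, M_simple)
\end{verbatim}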
Until this point, we have been using natural units to simplify calculations and expressions. We find it prudent to now switch to S.I. units (meters, kilograms, seconds). Expression (\ref{eq:correctedmass}) for the corrected mass is, in S.I. units, given by
\begin{equation}
M = \mu - \frac{4 \Lambda G^2}{3c^4} \mu^3.
\end{equation}
For a system with a fixed Schwarzschild/Newtonian mass $\mu$, the Schwarzschild-de Sitter solution with corrected mass $M$ exhibits a similar near-field behaviour.
\subsection{Hierarchy of Validity}
\label{subsec:hierarchy}
In quantum mechanics the limit $\hbar \rightarrow 0$ serves to recover the equations governing the evolution of systems in classical mechanics from the equations that govern the same system in the quantum regime. This gives us two things:
\begin{itemize}
\item A compatibility of quantum mechanics and classical mechanics
\item A breakdown criterion for regimes in which classical mechanics is no longer valid.
\end{itemize}
These two things emphasise that the modeling of a system is scale dependent. Newtonian gravity emerges from Einstein's relativity in a similar fashion, namely as a static, small perturbation to a flat background spacetime. We will argue that the $\Lambda \to 0$ limit relates the Einstein-de Sitter equations and the Einstein equations in a similar fashion. We will be able to establish a heuristic hierarchy of validity between these systems describing gravity. The precise form in which the embeddings of the Schwarzschild-de Sitter spacetimes approach the asymptotically flat limit further serves to clarify the effect of a non-zero $\Lambda$.
We see from the illustrations in Section \ref{sec:pics} that a non-zero $\Lambda$ mainly affects the structure of the exterior region in the neighbourhood of infinity/the cosmological horizon, that is, regions far away from the massive body. In terms of interactions, a non-zero $\Lambda$ affects only the long-range interaction between a massive gravitating object and a test particle.
Let us now introduce the notion of a radius of validity - namely a radius outside of which the Schwarzschild-de Sitter solution starts to significantly differ from the Schwarzschild solution. We can identify a candidate for such a radius by investigating properties of the radial function $f(r)$. Outside the event horizon, the radial function for Schwarzschild-de Sitter agrees closely with the radial function for Schwarzschild. The point at which they begin to significantly differ is the maximum of the Schwarzschild-de Sitter radial function - that is, the point at which $f'(r) = 0$. This is
\begin{equation}\label{eq:rval}
r_{v}=\left(\frac{3GM}{c^2 \Lambda}\right)^{\frac{1}{3}}.
\end{equation}
These radii are shown in Figure \ref{fig:differentlambda} for various values of $\Lambda$. Their exact location in the embedding suggests that this is a sensible choice for a radius of validity.
It would be more satisfying to have a geometric characterisation of this radius - that is, a definition that is coordinate independent. We can obtain this by noting that at this radius, we have
\begin{align*}
\mathcal{R}^2 = 3 \mathcal{I}_1,
\end{align*}
where $\mathcal{R} = 4 \Lambda$ is the Ricci scalar curvature of the Schwarzschild-de Sitter metric, and $\mathcal{I}_1$ is a principal invariant of the Weyl tensor, $C_{abcd}$, defined by
\begin{align*}
\mathcal{I}_1 &= C_{abcd} C^{abcd}.
\end{align*}
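Indeed, using the standard curvature invariants of Schwarzschild-de Sitter in geometric units, $\mathcal{R} = 4\Lambda$ and $\mathcal{I}_1 = \frac{48 M^2}{r^6}$, the condition $\mathcal{R}^2 = 3 \mathcal{I}_1$ becomes
\begin{align*}
16 \Lambda^2 = \frac{144 M^2}{r^6} \quad \Longleftrightarrow \quad r = \left( \frac{3M}{\Lambda} \right)^{1/3},
\end{align*}
which reproduces \eqref{eq:rval} once the factors of $G$ and $c$ are restored.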
\subsection{Comparison of validity length scale with size of astronomical objects}
Since we have obtained a formula for the radius of validity of the Einstein equations in a de Sitter universe, let us now compare that radius of validity to the size of astrophysical objects. As this mostly boils down to an order of magnitude comparison, we have chosen to compare four systems that are roughly representative of their class and cover the various mass ranges. The Solar System, the globular cluster NGC 2419, the Milky Way and the Virgo Supercluster have masses of the order of $1$, $10^5$, $10^{11}$ and $10^{15}$ solar masses, respectively. Note that we will abstain from using any sort of astronomical units and will be working with SI units instead.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c | c | c | c|}
\hline
Object & Mass (kg) & Size (m) & $r_v$ (m) \\
\hline
Solar System & $2\times10^{30}$ & $7.5\times10^{12}$ & $3\times 10^{18}$ \\
\hline
NGC2419 Globular Cluster & $2\times10^{36}$ & $2.5\times 10^{18}$ & $3\times 10^{20}$ \\
\hline
Milky Way (without dark matter halo) & $2\times 10^{41}$ & $9.5\times10^{20}$ & $1.5 \times 10^{22}$\\
\hline
Milky Way (with dark matter halo) & $2\times 10^{42}$ & $1.5\times10^{21}$ & $3 \times 10^{22}$\\
\hline
Virgo Supercluster & $2\times10^{45}$ & $5.2\times10^{23}$ & $3 \times 10^{23}$ \\
\hline
Universe (present day) & $3 \times 10^{52}$ & $4.3\times10^{26}$ & $8 \times 10^{25}$ \\
\hline
\end{tabular}
\end{center}
\caption{Observational values of astronomical systems compared to the scale of validity calculated by formula \eqref{eq:rval}. A more extensive discussion is contained in the bulk of the text.}
\label{tab:rval}
\end{table}
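The $r_v$ column of Table \ref{tab:rval} can be reproduced directly from \eqref{eq:rval}; a minimal sketch, assuming $\Lambda \approx 1.1 \times 10^{-52} \, m^{-2}$ and the order-of-magnitude masses quoted in the table, is:
\begin{verbatim}
G, c, Lam = 6.674e-11, 2.998e8, 1.1e-52      # SI units

def r_v(M):
    return (3.0 * G * M / (c**2 * Lam))**(1.0 / 3.0)

for name, M in [("Solar System", 2e30), ("NGC 2419", 2e36),
                ("Milky Way (no halo)", 2e41), ("Milky Way (halo)", 2e42),
                ("Virgo Supercluster", 2e45), ("Universe", 3e52)]:
    print(f"{name:25s} r_v = {r_v(M):.1e} m")
\end{verbatim}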
First we observe that the solar system and the globular cluster both have radii of validity that extend well beyond their physical size. Secondly, since the mass of the system affects the radius of validity, we calculated the radius of validity for the Milky Way with and without dark matter. Dark matter was originally introduced to compensate for deviations from a simple Newtonian calculation. Now since the Newtonian approximation is an approximation to the Einstein field equations, its application beyond the radius of validity for the Einstein field equations is delicate. The radius of validity depends on the total mass of the system, so if one adds in dark matter to fix deviations from Newtonian calculations, one artificially extends the radius of validity. If the radius of validity for a system without dark matter were smaller than the size of the system, such an artificial extension might cause one to wrongfully conclude that the system lies within the radius of validity.\\
Assuming for the sake of simplicity that dark matter makes up 90\% of the total mass of the Milky Way, it changes the radius of validity roughly by a factor of $2$. Given that the proposed dark matter halo of the Milky Way also extends significantly further out than just the edge of the disk, the ratio between the system's size and the radius of validity barely changes. However, in both cases the radius of validity is roughly one order of magnitude bigger than the physical size of the system, and thus we conclude that using the Einstein field equations, and hence Newtonian calculations, is adequate.\\
For the Virgo Supercluster, however, the radius of validity is of the same order of magnitude as the system. In fact, the radius of validity is roughly half the radius of the system itself. This implies that applying Newtonian or post-Newtonian calculations to that system has to be done with care. The fact that we are using here a point-particle, spherically symmetric approach for a system as extended as the Virgo Supercluster means that we cannot make strict statements on whether such calculations are actually invalid or not.\\
While the point-particle approach for the Virgo Supercluster is still somewhat justified, the same cannot be said about the Universe as a whole. There we see that for the present day universe the radius of validity is an order of magnitude smaller than its size. In that case, instead of using equation \eqref{eq:rval} for a given mass $M$, we replace $M$ by the mass contained in a sphere of radius $\textbf{R}$ with homogeneous matter density $\rho$.\footnote{In an abuse of notation, we will in the following compare a sphere of radius $\textbf{R}$ in Euclidean space with a coordinate sphere in Schwarzschild-de Sitter.} That is, we replace $M$ in \eqref{eq:rval} with
\begin{align}
M = \frac{4 \pi}{3}\textbf{R}^3 \rho .
\end{align}
Rearranging \eqref{eq:rval} we then obtain
\begin{equation}\label{eq:critdens}
\frac{r_v}{\textbf{R}}= \left(\frac{4\pi G \rho}{c^2 \Lambda}\right)^{1/3}
\end{equation}
which is bigger than $1$ whenever $\frac{4\pi G \rho}{c^2 \Lambda}>1$. In cosmology the different eras (radiation-dominated, matter-dominated, $\Lambda$-dominated) are distinguished by the type of energy (radiation, matter, $\Lambda$) that makes up the largest fraction of the total energy. We see then that \eqref{eq:critdens} tells us that the radius of validity of a system lies outside the radius of the system, and therefore that the Einstein field equations are a valid approximation, precisely when the system is matter-dominated. Hence we can arrive at a similar expression for the radius of validity by comparing the matter density $\rho_M=\frac{3M}{4\pi r^3}$ of a mass $M$ evenly distributed over a sphere of radius $r$ with the vacuum energy density $\rho_{vac}=\frac{\Lambda c^2}{8 \pi G}$; for the ratio we get
\begin{equation}
\frac{\rho_M}{\rho_{vac}}= \left(\frac{6 M G }{\Lambda c^2 r^3}\right).
\end{equation}
This is equal to $1$ precisely when
\begin{equation}
r=2^{1/3}r_v.
\end{equation}
Thus for a given mass $M$, the radius of validity is, up to a small numerical factor, the same radius at which the matter density and vacuum energy density are equal. For $r<r_v$, the matter density dominates and we are confident that the Einstein equations provide a good description. For $r>r_v$, the vacuum energy dominates, and we should be careful about applying Newtonian or post-Newtonian arguments. \\
Up to this point we have ignored the mass-correction formula because for all the compact systems under consideration so far it was sufficient to take the approximation $M = \mu$. It is only on mass scales that are on the order of the mass of the universe that we see a significant deviation, as can be seen in Figure \ref{fig:masscorrection}. Indeed, a mass deviation of $1$\% only occurs once the mass of the system reaches $10^{52}$ kg. For $\mu = 3 \times 10^{52}$ kg, which is roughly the mass of the observable universe, the mass-correction is roughly 10\%. This can be thought of as a secondary modification that takes effect already within the radius of validity and might thus be relevant for considerations on the scale of the universe. Here, of course, one has to keep in mind that the mass-correction formula originates from a point-particle consideration and is thus not necessarily applicable to the universe. Note also that the mass-correction becomes negligible when we only consider baryonic matter. On the other hand, in the early universe, when the total energy from electromagnetic radiation was significantly higher, the effect might be more prominent.
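The quoted thresholds follow directly from the S.I. form of \eqref{eq:correctedmass}; a short sketch of the fractional correction $(\mu - M)/\mu = \frac{4\Lambda G^2}{3 c^4}\mu^2$, again assuming $\Lambda \approx 1.1 \times 10^{-52} \, m^{-2}$, is:
\begin{verbatim}
G, c, Lam = 6.674e-11, 2.998e8, 1.1e-52      # SI units

def fractional_correction(mu):
    """Fractional mass correction (mu - M)/mu from the corrected-mass formula."""
    return 4.0 * Lam * G**2 * mu**2 / (3.0 * c**4)

print(fractional_correction(1e52))           # about 0.008, i.e. of order 1%
print(fractional_correction(3e52))           # about 0.07, i.e. of order 10%
\end{verbatim}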
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/mass-correction.pdf}
\caption{A plot of the Schwarzschild mass $M = \mu$ and a plot of the corrected mass, as determined by (\ref{eq:correctedmass}).}
\label{fig:masscorrection}
\end{figure}
\section{Conclusion}
We derived an isometric embedding for the Schwarzschild and Schwarzschild-de Sitter spacetimes into $AdS_3$. We used the detailed behaviour of the embedding in the $\Lambda \to 0$ limit to heuristically define a radius of validity for the Einstein equations in a de Sitter universe. One possible interpretation of this hierarchy of validity is that one can assign to $\Lambda$ a similar role in the context of gravity to the one $\hbar$ plays for quantum mechanics. This observation suggests that one can, in principle, interpret the cosmological constant as a fundamental energy scale for gravitational systems.\\
The considerations in Section \ref{sec:phys} show that for most scales in the universe, it is safe to ignore possible effects of the cosmological constant. For large systems, however, using an Einstein or Newtonian approximation may not be justified, despite the small value of the cosmological constant. In particular, for the largest structures such as superclusters, the Newtonian approximation might not be entirely valid. Note that these effects on long-range interactions could affect the interpretation of weak lensing observations, since most of the reconstruction is based on post-Newtonian approximations; see for example \cite{kilbinger2015cosmology} for an extensive review. Note in particular that beyond the radius of validity the sign of $f'(r)$ changes, and thus lensing might behave substantially differently beyond this point. Similar considerations for long-range interactions might be relevant for determining which black hole stability problem is actually the physically relevant one (i.e. Kerr or Kerr-de Sitter). Last but not least, the field of vision of LIGO spans far beyond the radius of validity of even the Virgo Supercluster \cite{abbott2016prospects}. Modifications of gravitational wave sources in the spirit of \cite{ashtekar2014asymptotics,ashtekar2015asymptotics,ashtekar2015asymptotics2,ashtekar2016gravitational} might therefore have an effect on observations. \\
For a homogeneous universe the cosmological constant becomes relevant when the matter density and the vacuum energy density are roughly equal. There is, however, a secondary effect due to the mass-correction that might play a role; it is at most of the order of 10\%, so it is certainly not a dramatic change.
One limitation of the present work is that, \emph{a priori}, it only holds true for the case of spherical symmetry investigated here. This is relevant to mention because preliminary calculations for an extension of the results in \cite{mars2017fingerprints} to the case with a positive cosmological constant suggest that in principle $\Lambda$ is detectable in the shape of the shadow of a black hole when $a>0$. As the shadow contains mostly near-horizon information, this suggests that the cosmological constant should affect the near-horizon geometry.
It would be interesting to try to elaborate in a quantitative manner on the features investigated in the present work. Further it would be interesting to investigate the role of the cosmological constant away from spherical symmetry.
\subsection*{Acknowledgements}This work was partially supported by the Australian Research Council grant DP170100630.
We would like to thank Hermine Boat for her essential support during the conceptual phase of this paper. We would further like to thank the Institut Henri Poincar\'{e} for its hospitality during the trimester on Mathematical Relativity, Paris, in fall 2015, and M.B. would like to thank Monash University, where part of this work was done. C.P. was supported by the Albert Einstein Institute during a part of this project. M.B. was supported by an Australian Postgraduate Award for part of this project.
\newpage
\bibliographystyle{plain}
\section{Introduction}
The currently accepted paradigm in observational astronomy is that the universe in which we live is undergoing an accelerated expansion. Recent data for the Hubble constant from CMB data \cite{planck2016astronomy} and for the Hubble constant from local data \cite{riess20162} are in support of this. The simplest theoretical model incorporating such an accelerating universe is the $\Lambda$CDM model, see e.g. \cite{buchert2016observational} for a review and open tensions. Despite the fact that there exist alternative explanations for the accelerated expansion, such as \cite{racz2017concordance}, we will adopt in the present paper the view that $\Lambda$CDM is a correct description of the expansion of the universe, and that the Einstein-de Sitter equations are the fundamental equations describing gravity.
In the present paper we will argue that the cosmological constant can be assigned a similar function for the gravitational realms that $\hbar$ plays for matter. The correspondence principle in quantum mechanics is the notion that when the scales of the action in a quantum mechanical system become large compared to $\hbar$, the system approximates a corresponding classical system. This quantum-classical correspondence gives a heuristic for recovering a classical system from a quantum system - simply take the limit $\hbar \to 0$. In the present paper we will study the limits of Schwarzschild-de Sitter black holes as the cosmological constant goes to zero, and we will argue that this allows one to define a scale of validity for the Einstein equations of an isolated gravitational system in a de Sitter universe. To do so, we will employ Geroch's notion for the limits of a family of spacetimes \cite{geroch_limits_1969} applied to Schwarzschild-de Sitter.
To study the way in which the limit is approached in detail, we use an embedding of the quotient of Schwarzschild-de Sitter space over the sphere into $AdS_3$ space.\footnote{That is, $(2+1)$-dimensional anti-de Sitter space.} This embedding was first introduced in \cite{bengtsson_classics_2014} for the case of Reissner-Nordstr\"{o}m.
For calculations on the scale of astrophysical systems the cosmological constant is usually dropped and the spacetime is assumed to be asymptotically flat. This approximation is often employed with little justification, other than a brief citation of the small value $\Lambda \sim 10^{-52} \, m^{-2}$ for the cosmological constant. Recent work that takes the cosmological setting into consideration suggests that the effects are by no means trivial. Prominent examples being the quadropole formula for gravitational energy loss \cite{ashtekar2014asymptotics,ashtekar2015asymptotics,ashtekar2015asymptotics2,ashtekar2016gravitational}, as well as recent work on the gravitational memory effect in de Sitter spacetimes \cite{bieri2016gravitational}. Note that even when one assumes that the universe is spatially flat, asymptotic flatness has to be employed with care, since the definition of asymptotic flatness incudes the requirement that the matter density falls off sufficiently fast towards infinity. This condition is obviously violated for a spacelike slice in a homogeneous, spatially flat FLRW universe.\footnote{This was pointed out to the authors by Beatrice Bonga in private communication} Interestingly, the problem of global non-linear stability for black holes has recently been solved for slow-rotating black holes in a de Sitter universe \cite{hintz2016global},\footnote{This is arguably the physically relevant case if the cosmological constant is, in fact, positive.} while the corresponding problem for asymptotically flat spacetimes remains one of the big challenges in the field of mathematical relativity, see \cite{ma2017uniform,ma2017uniform2,aksteiner2016new,andersson2017morawetz,dafermos2017boundedness,finster2016linear} for recent progress on the linearised problem and \cite{klainerman2017global} for the full non-linear problem under strong constraints. In this paper we will use the qualitative properties of how the $\Lambda \to 0$ limit is approached to give a heuristic argument that the Einstein equations are a legitimate approximation to the fundamental Einstein-de Sitter equations, for calculations in the short-range regime. For gravitational memory this was recently worked out in \cite{bieri2017gravitational}, where the authors found that for low redshift, i.e. for nearby sources, and high frequencies the gravitational memory in a $\Lambda$CDM background is equivalent to that in a flat space while for large redshift there is a significant deviation.
\subsection*{Overview of the paper}
The paper is organized in the following way. In Section \ref{sec:ssds} we will introduce and review the relevant background, including the Schwarzschild-de Sitter spacetimes. Then, in Section \ref{sec:math}, we discuss Geroch's notion for the limits of spacetimes. In Section \ref{sec:embedding} we discuss how the embedding of Schwarzschild-de Sitter into $AdS_3$ is performed. The resulting embeddings are then presented in Section \ref{sec:pics}. Finally in Section \ref{sec:phys}, we give a possible physical interpretation of our findings.
\section{The Schwarzschild-de Sitter Spacetime}
\label{sec:ssds}
The Schwarzschild-de Sitter spacetime is the spherically symmetric solution to the vacuum Einstein-de Sitter equations\footnote{Note that, until section \ref{sec:phys}, we will use units such that $\hbar = G = c = 1$.}
\begin{equation}
R_{\mu \nu} - \frac{1}{2}R g_{\mu \nu} + \Lambda g_{\mu \nu} = 0
\end{equation}
with $\Lambda >0$. In Schwarzschild coordinates the metric is given by
\begin{equation}\label{eq:ssds}
ds^2 = -f(r) dt^2 + \frac{1}{f(r)} dr^2 + r^2 d \Omega^2
\end{equation}
with
\begin{equation}
f(r) = 1-\frac{2M}{r} - \frac{\Lambda}{3}r^2,
\end{equation}
where $M$ and $\Lambda$ are regarded as free parameters. The spacetime is spherically symmetric and static. We define the domain of outer communication as the region where the Killing vector field $\partial_t$, for which the orbits of points under the diffeomorphism are open, is timelike. The metric \eqref{eq:ssds} has a coordinate singularity when
\begin{equation} \label{eq:radialfunc}
1-\frac{2M}{r} - \frac{\Lambda}{3} r^2 = 0,
\end{equation}
where the norm of the Killing vector $\partial_t$ switches sign, indicating the location of a horizon. Note that this equation always has at least one real solution independent of the choice of parameters. For parameters in the subextremal range there are three real solutions to equation \eqref{eq:radialfunc}. They can be written explicitly as
\begin{align}
\label{eq:rH}
r_H &= \frac{2}{\sqrt{\Lambda}} \cos \left[\frac{1}{3} \arccos (3M\sqrt{\Lambda}) + \frac{\pi}{3} \right] \\
r_C &= \frac{2}{\sqrt{\Lambda}} \cos \left[\frac{1}{3} \arccos (3M\sqrt{\Lambda}) - \frac{\pi}{3} \right] \\
r_U &= -(r_H + r_C).
\end{align}
In this work we are only interested in the coordinate range where $r\in(0,\infty)$ and, since $r_U$ is always negative, it will not be relevant to our discussion.
In the subextremal case, $r_H$ is the location of the black hole horizon and $r_C$ is the location of the cosmological horizon. It is the region between those two where the Killing vector field $\partial_t$ is timelike. Note that Schwarzschild-de Sitter becomes extremal when $r_H$ and $r_C$ coincide, which is the case when $9\Lambda M^2=1$. We will primarily restrict ourselves to the subextremal case, where $0< \Lambda < \frac{1}{9M^2}$. Note that the photon sphere in Schwarzschild-de Sitter is located at
\begin{equation}
r_{ph}=3M,
\end{equation}
independent of the value of $\Lambda$.\footnote{See \cite{geometryphotonsurfaces} for a derivation.}
The conformal diagram for Schwarzschild-de Sitter is given in Figure \ref{fig:ssds} from which we can see immediately, by gluing two consecutive cosmological horizons together, that its topology is given by $S^1\times S^2\times \Reals$.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/ssds.png}
\caption{Conformal diagram for the maximal extension of the subextremal Schwarzschild-de-Sitter space-time. The blue lines correspond to hypersurfaces of constant $t$ the red lines to hypersurfaces of constant $r$. $\mathcal{H^\pm}$ are the future and past event horizon located at $r=r_H$ while $\mathcal{CH}^\pm$ are the future/past cosmological horizons located at $r=r_C$. Time like future and past infinity is indicated by $i^\pm$. The singularity is located at $r=0$. Here $r=\infty$ is a spacelike conformal boundary }
\label{fig:ssds}
\end{figure}
In the following when we speak about limits of spacetime properties, we are simply discussing the properties of coordinate functions. This is not to be confused with the limits of spacetimes that we consider later on in the paper, although some intuitive results do carry over.
Since the location of the photon sphere is constant for a fixed $M$, it is not surprising that in the limit $\Lambda \to \frac{1}{9M^2}$, the two relevant horizons approach this value:
\begin{align*}
\lim_{\Lambda \to \frac{1}{9M^2}} r_H &= 3M \\
\lim_{\Lambda \to \frac{1}{9M^2}} r_C &= 3M.
\end{align*}
On the other hand, the limit $\Lambda \to 0$ for these functions is
\begin{align*}
\lim_{\Lambda \to 0} r_H &= 2M \\
\lim_{\Lambda \to 0} r_C &= \infty.
\end{align*}
In this limit, the radius of the black hole horizon takes the same value as the black hole horizon from the Schwarzschild metric, for which the function $f(r)$ in the metric \eqref{eq:ssds} is given by
\begin{equation}
f(r)=1-\frac{2M}{r}.
\end{equation}
The domain of outer communication for Schwarzschild stretches out an infinite distance from the black hole horizon, consistent with the cosmological horizon extending to infinity. The Schwarzschild metric solves the Einstein vacuum equations
\begin{equation}
R_{\mu \nu} = 0,
\end{equation}
and is asymptotically flat. Its conformal diagram is given in Figure \ref{fig:ss}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{img/schwarzschild.png}
\caption{Conformal diagram for the maximal extension of the Schwarzschild space-time. The blue lines correspond to hypersurfaces of constant $t$ the red lines to hypersurfaces of constant $r$. $\mathcal{H^\pm}$ are the future and past event horizon located at $r=2M$ while $\mathcal{I}^\pm$ are the future/past null infinity. Time like future and past infinity is indicated by $i^\pm$, while $i^0$ indicates space like infinity. The singularity is located at $r=0$.}
\label{fig:ss}
\end{figure}
\section{Limits of spacetimes}
\label{sec:math}
Lorentzian metrics appearing in general relativity often come in families parameterised by one or more constants, whose values are not fixed by the Einstein field equations. Consider, for example, the Kerr family of solutions. In this family, there are two free parameters, corresponding to the mass $M$ and the rotation parameter $a$. It is a natural question to ask what type of spacetime we obtain if we reduce, say, the rotation parameter $a$ to 0.
Na\"{i}vely, the answer to this question consists of simply setting $a = 0$ in the coordinate description of the metric.\footnote{Or taking the limit $a \to 0$ if required.} This approach has significant issues however, since one can first perform a coordinate transformation and then take the same limit to obtain a completely different spacetime! This fact seems at odds with the notion that coordinate changes in general relativity aren't supposed to affect anything.
Geroch provides the resolution to this paradox by asserting that it is only meaningful to take limits if we first introduce a method of comparing points in different spacetimes \cite{geroch_limits_1969}. That is, we need a way of deciding which points are `the same' in spacetimes which have different values for the chosen parameter. There is no canonical way of doing this, and so any such limit will implicitly involve a choice.
Let us now describe Geroch's prescription in a little detail. We begin with a one-parameter family of spacetimes $M_{\lambda}$, and wish to assign a sensible limiting spacetime to this family as we take the parameter to some fixed value, say $\lambda \to 0$. We assemble the family of spacetimes into a smooth 5-dimensional manifold, $\mathcal{M}$, where each $M_{\lambda}$ is a smooth 4-dimensional submanifold of $\mathcal{M}$.\footnote{Note that unless otherwise specified, we assume all manifolds are Hausdorff.} The manifold $\mathcal{M}$ is foliated by these submanifolds, and the parameter $\lambda$ defines a scalar field on $\mathcal{M}$ which is constant on each leaf of the foliation. We assume the metric tensors $g_{ab} (\lambda)$ combine to form a smooth metric $\mathcal{G}$ on $\mathcal{M}$ with signature $(0, -, +, +, +)$. The data defined by $(\mathcal{M}, \mathcal{G})$ is equivalent to the data defined by the family $(M_{\lambda}, g(\lambda))$. A limiting spacetime is then obtained by defining a suitable boundary $\partial \mathcal{M}$ for $\mathcal{M}$, see Figure \ref{fig:foliation}. More specifically, a limit space is a 5-dimensional manifold $\overline{\mathcal{M}}$ with boundary $\partial \overline{\mathcal{M}}$, a metric $\overline{\mathcal{G}}$ and a scalar field $\overline{\lambda}$ on $\overline{\mathcal{M}}$, and a smooth injective map $\Psi$ from $\mathcal{M}$ into the interior of $\overline{\mathcal{M}}$ satisfying:
\begin{itemize}
\item $\Psi$ takes $\mathcal{G}$ into $\overline{\mathcal{G}}$, and $\lambda$ into $\overline{\lambda}$ \\
\item $\partial \overline{\mathcal{M}}$ is connected, non-empty, and $\overline{\lambda} = 0$ when restricted to $\partial \overline{\mathcal{M}}$. \\
\item $\overline{\mathcal{G}}$ has signature $(0,-,+,+,+)$ on $\partial \overline{\mathcal{M}}$.
\end{itemize}
Geroch goes on to define a \emph{family of frames} - that is, for each leaf of the foliation, one chooses a fiducial point $p_{\lambda}$ and an orthonormal frame $\omega(\lambda)$ at $p_{\lambda}$, and identifies such points and frames for each $\lambda$. Then, by calculating geodesics from the fiducial point to any other point, we have a way of comparing points in the different spacetimes. Geroch then states that such a choice of a family of frames either defines no limit space, or else determines a unique maximal limit space.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{img/foliation.png}
\caption{A cartoon depiction of the Geroch foliation.}
\label{fig:foliation}
\end{figure}
How does this connect to our intuitive notion of simply taking the limits in the coordinate representation of the metric? Choosing a coordinate system is implicitly choosing a point,\footnote{Actually, it implicitly chooses any one of the points in the open set on which the coordinates are defined.} and an orthonormal frame at that point, for each value of the parameter $\lambda$. Such a choice of coordinates therefore determines a family of frames, and by Geroch's theorem, a limiting spacetime. There is no guarantee that a different choice of coordinates will result in the same limiting spacetime.
To illustrate, let us look at the limit of Schwarzschild-de Sitter as the value of the cosmological constant goes to zero, and take as our fiducial point a point $p_H$ on the bifurcation sphere, shown in Figure \ref{fig:ssdstoss}. A natural question to ask is whether points in block VI exist in the limit. Geodesics from $p_H$ to a point $p_6$ in region VI must first pass through $r = r_C$. That is,
\begin{align*}
d(p_H,p_6) = d(p_H,r_C) + d(r_C,p_6).
\end{align*}
But the first term diverges in the limit, so
\begin{align*}
\lim_{\Lambda \to 0} d(p_H,p_6) = \infty,
\end{align*}
and it follows that $p_6$ cannot survive in the limit. A similar argument shows that region V cannot survive either.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{img/ssdstoss.png}
\caption{The conformal diagram of Schwarzschild-de Sitter and the conformal diagram of the limiting Schwarzschild spacetime.}
\label{fig:ssdstoss}
\end{figure}
Note that choosing a point $p_C$ on the cosmological horizon as the fiducial point could result in a completely different limiting spacetime; however, we will not investigate this question in the present work.
\section{An embedding into anti-de Sitter space}
\label{sec:embedding}
Geroch's notion of limits of spacetime is somewhat abstract, so we shall use the formalism of \cite{bengtsson_classics_2014} to implement the Geroch procedure and describe the associated limits. Following \cite{bengtsson_classics_2014}, we embed the entire one-parameter family of spacetimes into a fixed ambient space, which we take to be $AdS_3$. Each spacetime touches the ambient space at a definite point, the origin of the $AdS_3$ space, and the tangent spaces (and therefore an orthonormal frame) coincide at that point. It follows that the conditions of Geroch's limit theorem are met, and we can therefore uniquely assign a limiting spacetime. Of course, the limiting spacetime will depend on the points we are identifying, that is, on the embedding. There is, in general, no canonical procedure for selecting points in the different spacetimes which we may regard as ``the same''. We will choose this fiducial point to be a point on the bifurcation sphere, $p_H$.
Since our family of spacetimes is spherically symmetric, it is enough to embed the 1+1 dimensional spacetime $\Sigma$, described by the metric
\begin{align}
\label{eq:2DBHmetric}
ds^2 &= - f(r) dt^2 + \frac{1}{f(r)} dr^2.
\end{align}
The embedding of $\Sigma$ into $AdS_3$ is determined by the following equations
\begin{subequations}
\label{eq:AdS3embedding}
\begin{align}
X &= \sqrt{1+a^2 f(r)} \, \sinh{\left( g(r) \right)} \\
Y &= a \sqrt{f(r)} \cosh{\left( \frac{t}{a} \right)} \\
U &= a \sqrt{f(r)} \sinh{\left( \frac{t}{a} \right)} \\
V &= \sqrt{1+a^2 f(r)} \, \cosh{ \left( g(r) \right)}.
\end{align}
\end{subequations}
The parameter $a$ is a constant which we choose for convenience to be $\frac{1}{\kappa}$, where $\kappa$ is the surface gravity of the black hole. The functions $(X,Y,U,V)$ are coordinates for the $AdS_3$ space, thought of as the hypersurface $X^2 +Y^2 - U^2-V^2 = -1$ in $\mathbb{R}^4$, endowed with the metric
\begin{align}
\label{eq:AdS3metric}
ds^2 &= dX^2 + dY^2 - dU^2 - dV^2.
\end{align}
Since we want this embedding to be an isometric embedding of our spacetime into $AdS_3$, we insist that the induced metric, determined by the ambient $AdS_3$ metric (\ref{eq:AdS3metric}) and the embedding (\ref{eq:AdS3embedding}), matches the black hole metric (\ref{eq:2DBHmetric}). This will occur when the function $g(r)$ satisfies the differential equation
\begin{align}
\label{eq:embeddingDE}
\big( g'(r) \big)^2 &= \frac{1 + a^2 f - \frac{a^2 f'}{4}}{f \big( 1+a^2f \big)^2}.
\end{align}
Note that, so far, the only difference between the setup here and the setup in \cite{bengtsson_classics_2014} is the form of the function $f(r)$. Determining the embedding therefore amounts to solving the differential equation (\ref{eq:embeddingDE}) for the function $g(r)$, which we will do numerically. By choosing $g(r_H) = 0$, we are able to ensure that the black hole horizon for each embedding touches the point $(X,Y,U,V) = (0,0,0,1)$ in the ambient $AdS_3$ space.
In order to visualise the embeddings, we will use the so-called sausage coordinates $(x,y,\tau)$ for $AdS_3$. These coordinates are related to the embedding coordinates $(X,Y,U,V)$ by:
\begin{equation*}
\begin{aligned}[c]
X &= \frac{2x}{1-\rho^2} \\[1em]
Y &= \frac{2y}{1-\rho^2}
\end{aligned}
\qquad
\begin{aligned}[c]
U &= \frac{1 + \rho^2}{1-\rho^2} \sin \tau \\[1em]
V &= \frac{1 + \rho^2}{1-\rho^2} \cos \tau,
\end{aligned}
\end{equation*}
where $\rho^2 = x^2 + y^2$ and $0\leq \rho < 1$. The sausage coordinates realise $AdS_3$ as a solid cylinder in $\mathbb{R}^3$. Slices of constant $\tau$ in this cylinder are Poincar\'{e} disks, and the embedding of $\Sigma$ into the $AdS_3$ space now appears as a two-dimensional sheet inside the solid cylinder. We refer the reader to the appendix of \cite{bengtsson_classics_2014} for a nice discussion of the geometric properties of this embedding.
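For plotting, it is convenient to have this change of coordinates in both directions. The following Python snippet is a minimal sketch of the map and its inverse, assuming $\rho^2 = x^2 + y^2$ as above; the function names are ours and the code is for illustration only.
\begin{verbatim}
import numpy as np

def to_hyperboloid(x, y, tau):
    # sausage coordinates (x, y, tau) -> embedding coordinates (X, Y, U, V)
    rho2 = x**2 + y**2
    w = (1 + rho2) / (1 - rho2)
    return 2*x/(1 - rho2), 2*y/(1 - rho2), w*np.sin(tau), w*np.cos(tau)

def to_sausage(X, Y, U, V):
    # inverse map; tau is recovered modulo 2*pi
    w = np.sqrt(U**2 + V**2)        # equals (1 + rho^2)/(1 - rho^2) >= 1
    return X/(w + 1), Y/(w + 1), np.arctan2(U, V)

p = to_hyperboloid(0.3, -0.2, 0.7)
assert np.isclose(p[0]**2 + p[1]**2 - p[2]**2 - p[3]**2, -1.0)
assert np.allclose(to_sausage(*p), (0.3, -0.2, 0.7))
\end{verbatim}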
\section{Illustrations}
\label{sec:pics}
When we embed a Schwarzschild-de Sitter spacetime, we have to choose the values of $M$ and $\Lambda$ for a given embedding. A straightforward way to do this is to fix $M$ to some convenient value, say $M = 1$, and then study the embeddings as $\Lambda$ is varied. A representative embedding is shown in Figures \ref{fig:basicssdsembeddinga} and \ref{fig:basicssdsembeddingb}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/Option1a.png}
\caption{An embedding of Schwarzschild-de Sitter, with $\Lambda M^2 = \frac{1}{10}$. The $AdS$ cylinder is being viewed from the left. One of the sheets has been made translucent to aid visualisation.}
\label{fig:basicssdsembeddinga}
\end{figure}
\begin{figure}
\begin{multicols}{2}
\includegraphics[width=0.5\textwidth]{img/Option1b.png}\par
\includegraphics[width=0.5\textwidth]{img/Option1c.png}
\end{multicols}
\caption{Views of the embedding in Figure \ref{fig:basicssdsembeddinga} from above (image on left) and the front (image on right). Note that these figures have been produced in Mathematica from a three-dimensional figure, and the pictures are stereographic projections from the described viewpoints.}
\label{fig:basicssdsembeddingb}
\end{figure}
In Figure \ref{fig:basiccircles}, we plot the $\tau = 0$ slice of this embedding, together with the $\tau = 0$ slice of the Schwarzschild embedding of the same mass.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/basicsandssds.pdf}
\caption{The $\tau = 0$ slice of the embedding in Figure \ref{fig:basicssdsembeddinga}, and the embedding of Schwarzschild of the same mass. The two embeddings touch at the origin of the ambient $AdS_3$ space.}
\label{fig:basiccircles}
\end{figure}
An unpleasant feature of this picture is the discrepancy between the embedding of the Schwarzschild-de Sitter domain of outer communication and the embedding of the Schwarzschild domain of outer communication. The physical interpretation discussed in Section \ref{sec:phys} involves a comparison between the near-horizon geometry of Schwarzschild and Schwarzschild-de Sitter black holes. The key point is that when comparing these black holes, the near-horizon geometry only matches once we adjust the relative masses. To achieve this, we consider a mass parameter $M = M(\Lambda)$, varying with $\Lambda$ in such a way that the horizon area is kept constant. That is, we want to fix the radius of the black hole horizon to be $r = r_H = 2 \mu$, where $\mu$ is the mass of some reference Schwarzschild spacetime. Note that this mass-fixing procedure is equivalent to changing the mass of the reference Schwarzschild black hole. The $\tau = 0$ slice of the resultant embeddings provides a much cleaner comparison between the Schwarzschild and the Schwarzschild-de Sitter embeddings, as seen in Figure \ref{fig:masscorrectedcircles}. We shall employ this mass-correction for the remainder of the present work.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/masscorrectedsandssds.pdf}
\caption{The $\tau = 0$ slice of the embedding in Figure \ref{fig:basicssdsembeddinga}, and the embedding of the mass-corrected Schwarzschild. The two embeddings touch at the origin of the ambient $AdS_3$ space.}
\label{fig:masscorrectedcircles}
\end{figure}
Before we elaborate more on the physical interpretation, let us first make a few comments on how to view the embeddings we have already obtained. The two sheets of the embedding in Figures \ref{fig:basicssdsembeddinga} and \ref{fig:basicssdsembeddingb} correspond to regions I and II in the Schwarzschild-de Sitter conformal diagram in Figure \ref{fig:ssds}. In the $\tau = 0$ slice, the centre of the disk corresponds to the origin of the $AdS_3$ space, and to the event horizon of the embeddings. The circle $x^2 + y^2 = 1$, corresponding to the boundary of the solid cylinder in Figures \ref{fig:basicssdsembeddinga} and \ref{fig:basicssdsembeddingb}, is an infinite metric distance away from the origin. The blue line is the intersection of the embedding of the Schwarzschild-de Sitter spacetime with this plane, and the other intersection of the blue line with the $y=0$ line corresponds to the cosmological horizon, $r_C$. The fact that the Schwarzschild-de Sitter spacetime is a smooth manifold of topology $S^1\times S^2 \times \Reals$ and the embedding is isometric suggests that the cuspy nature of this intersection is a numerical artefact. The red line is the intersection of the embedding of the Schwarzschild black hole with the plane $\tau = 0$. Note that the Schwarzschild spacetime reaches the edge of the $AdS_3$ space - points in the Schwarzschild domain of outer communication can be arbitrarily far from the event horizon.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{img/differentlambda.png}
\caption{A plot of the $\tau = 0$ slices for various embeddings. The values of $\Lambda$ are such that $9 \Lambda M^2$ is given by $\left[ \frac{9}{10} (\textrm{blue}), \frac{7}{10}, \frac{5}{10}, \frac{3}{10} ,\frac{1}{10} (\textrm{red}) \right] $. The point at which $f'(r) = 0$ is represented on each embedding by a solid dot (See Section \ref{subsec:hierarchy} for more details).}
\label{fig:differentlambda}
\end{figure}
\section{Physical interpretation}
\label{sec:phys}
In the following section, we will use the illustrations of the previous section to establish a heuristic argument in favour of a hierarchy of validity between the Einstein-de Sitter equations and the Einstein equations.
\subsection{Schwarzschild mass correction in a de-Sitter Universe}
When embedding the Schwarzschild-de Sitter black holes into $AdS_3$, we had to choose the mass parameter of the black hole to be a function of $\Lambda$ to guarantee that the black hole horizon area remained constant. By identifying the radius of the black hole horizon $r = r_H$ with the radius of a reference Schwarzschild black hole horizon $r = 2 \mu$, we obtain a relation between the mass parameter of the Schwarzschild-de Sitter spacetime $M$ and the effective mass of the reference Schwarzschild black hole $\mu$. Doing this na\"{i}vely by using the expression (\ref{eq:rH}) for $r_H$, we obtain
\begin{equation}
\label{Mlambda}
M =\frac{1}{3\sqrt{\Lambda}}\cos \Big( 3 \arccos(\mu \sqrt{\Lambda})-\pi \Big)
\end{equation}
Note that since $M = M(\Lambda,\mu)$, the extremality condition $9 \Lambda M^2 < 1$ changes, and now becomes
\begin{align*}
\Lambda < \frac{1}{4 \mu^2}.
\end{align*}
A much simpler expression can be obtained by noting that $f(r_H) = 0$, and so fixing the horizon at $r_H = 2 \mu$ means that we require
\begin{equation}
f(2\mu) = 1 - \frac{2M}{2\mu} - \frac{\Lambda}{3}\left(2\mu \right)^2 = 0
\end{equation}
Rearranging this expression for $M$ gives us
\begin{equation}
\label{eq:correctedmass}
M = \mu - \frac{4 \Lambda}{3} \mu^3,
\end{equation}
which is identical to the expression (\ref{Mlambda}).
Until this point, we have been using natural units to simplify calculations and expressions. We find it prudent to now switch to S.I. units (meters, kilograms, seconds). Expression (\ref{eq:correctedmass}) for the corrected-mass is, in S.I. units, given by
\begin{equation}
M = \mu - \frac{4 \Lambda G^2}{3c^4} \mu^3.
\end{equation}
For a system with a fixed Schwarzschild/Newtonian mass $\mu$, the Schwarzschild-de Sitter solution with corrected mass $M$ exhibits a similar near field behaviour.
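As a quick numerical sanity check, the closed form (\ref{Mlambda}) and the cubic solution (\ref{eq:correctedmass}) can be compared directly. The following Python snippet does so in geometric units $G=c=1$; the test values of $\mu$ and $\Lambda$ are arbitrary, subject only to $\Lambda < 1/(4\mu^2)$.
\begin{verbatim}
import numpy as np

def M_arccos(mu, lam):   # the trigonometric closed form
    return np.cos(3*np.arccos(mu*np.sqrt(lam)) - np.pi) / (3*np.sqrt(lam))

def M_cubic(mu, lam):    # the solution of f(2 mu) = 0
    return mu - 4*lam*mu**3/3

for mu, lam in [(1.0, 1e-2), (1.0, 1e-4), (2.0, 1e-3)]:
    assert np.isclose(M_arccos(mu, lam), M_cubic(mu, lam))
\end{verbatim}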
\subsection{Hierarchy of Validity}
\label{subsec:hierarchy}
In quantum mechanics the limit $\hbar \rightarrow 0$ serves to recover the equations governing the evolution of systems in classical mechanics from the equations that govern the same system in the quantum regime. This gives us two things:
\begin{itemize}
\item A compatibility of quantum mechanics and classical mechanics
\item A breakdown criterion for regimes in which classical mechanics is no longer valid.
\end{itemize}
These two things emphasise that the modeling of a system is scale dependent. Newtonian Gravity emerges from Einstein's Relativity in a similar fashion, namely as a static, small perturbation to a flat background spacetime. We will argue that the $\Lambda \to 0$ limit relates the Einstein-de Sitter equations and the Einstein equations in a similar fashion, and we will be able to establish a heuristic hierarchy of validity between these systems describing gravity. The precise form in which the embeddings of the Schwarzschild-de Sitter spacetimes approach the asymptotically flat limit further serves to clarify the effect of a non-zero $\Lambda$.
We see from the illustrations in Section \ref{sec:pics} that a non-zero $\Lambda$ mainly affects the structure of the exterior region in the neighbourhood of infinity/the cosmological horizon, that is, regions far away from the massive body. Speaking in interaction terms, a non-zero $\Lambda$ affects only the long-range interaction between a massive gravitating object and a test particle.
Let us now introduce the notion of a radius of validity - namely a radius outside of which the Schwarzschild-de Sitter solution starts to significantly differ from the Schwarzschild solution. We can identify a candidate for such a radius by investigating properties of the radial function $f(r)$. Outside the event horizon, the radial function for Schwarzschild-de Sitter agrees closely with the radial function for Schwarzschild. The point at which they begin to significantly differ is the maximum of the Schwarzschild-de Sitter radial function - that is, the point at which $f'(r) = 0$. This is
\begin{equation}\label{eq:rval}
r_{v}=\left(\frac{3GM}{c^2 \Lambda}\right)^{\frac{1}{3}}.
\end{equation}
These radii are shown in Figure \ref{fig:differentlambda} for various values of $\Lambda$. Their exact location in the embedding suggests that this is a sensible choice for a radius of validity.
It would be more satisfying to have a geometric characterisation of this radius - that is, a definition that was coordinate independent. We can obtain this by noting that at this radius, we have
\begin{align*}
\mathcal{R}^2 = 3 \mathcal{I}_1,
\end{align*}
where $\mathcal{R} = 4 \Lambda$ is the Ricci scalar curvature of the Schwarzschild-de Sitter metric, and $\mathcal{I}_1$ is a principal invariant of the Weyl scalar, $C_{abcd}$, defined by
\begin{align*}
\mathcal{I}_1 &= C_{abcd} C^{abcd}.
\end{align*}
\subsection{Comparison of validity length scale with size of astronomical objects}
Since we have obtained a formula for the radius of validity of the Einstein equations in a de Sitter universe, let us now compare that radius of validity to the size of astrophysical objects. As it mostly boils down to an order-of-magnitude comparison, we have chosen to compare four systems that are roughly representative of their class and cover the various mass ranges. The Solar system, the Globular Cluster NGC 2419, the Milky Way and the Virgo Super Cluster have masses of the order of $1$, $10^5$, $10^{11}$, $10^{15}$ solar masses. Note that we will abstain from using any sort of astronomical units and will be working with SI units instead.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c | c | c | c|}
\hline
Object & Mass (kg) & Size (m) & $r_v$ (m) \\
\hline
Solar System & $2\times10^{30}$ & $7.5\times10^{12}$ & $3\times 10^{18}$ \\
\hline
NGC2419 Globular Cluster & $2\times10^{36}$ & $2.5\times 10^{18}$ & $3\times 10^{20}$ \\
\hline
Milky Way (without dark matter halo) & $2\times 10^{41}$ & $9.5\times10^{20}$ & $1.5 \times 10^{22}$\\
\hline
Milky Way (with dark matter halo) & $2\times 10^{42}$ & $1.5\times10^{21}$ & $3 \times 10^{22}$\\
\hline
Virgo Supercluster & $2\times10^{45}$ & $5.2\times10^{23}$ & $3 \times 10^{23}$ \\
\hline
Universe (present day) & $3 \times 10^{52}$ & $4.3\times10^{26}$ & $8 \times 10^{25}$ \\
\hline
\end{tabular}
\end{center}
\caption{Observational values of astronomical systems compared to the scale of validity calculated by formula \eqref{eq:rval}. A more extensive discussion is contained in the bulk of the text.}
\label{tab:rval}
\end{table}
First we observe that the solar system and the globular cluster both have radii of validity that extend well beyond their physical size. Secondly, since the mass of the system affects the radius of validity, we calculated the radius of validity for the Milky Way with and without dark matter. Dark matter was originally introduced to compensate for deviations from a simple Newtonian calculation. Now since the Newtonian approximation is an approximation to the Einstein field equations, its application beyond the radius of validity for the Einstein field equation is delicate. The radius of validity depends on the total mass of the system, so if one adds in dark matter to fix deviations from Newtonian calculations, one artificially extends the radius of validity. If the radius of validity for a system without dark matter were smaller than the size of the system, such an artificial extension might cause one to wrongfully conclude that the system lies within the radius of validity.\\
Assuming for the sake of simplicity that dark matter makes up 90\% of the total mass of the Milky Way, it changes the radius of validity roughly by a factor of $2$. Given that the proposed dark matter halo of the Milky Way also extends significantly further out than just the edge of the disk, the ratio between the system's size and the radius of validity barely changes. However, in both cases the radius of validity is roughly one order of magnitude bigger than the physical size of the system, and thus we conclude that using the Einstein field equations, and hence Newtonian calculations, is adequate.\\
For the Virgo Super Cluster, however, the radius of validity is of the same order of magnitude as the system. In fact, the radius of validity is roughly half the radius of the system itself. This implies that applying Newtonian or post-Newtonian calculations to that system has to be done with care. The fact that we are using a point-particle, spherically symmetric approach for a system as extended as the Virgo Super Cluster means that we cannot make strict statements on whether such calculations are actually invalid or not.\\
While the point-particle approach for the Virgo Super Cluster is still somewhat justified, the same cannot be said about the Universe as a whole. There we see that for the present-day universe the radius of validity is an order of magnitude smaller than its size. In that case, instead of using equation \eqref{eq:rval} for a given mass $M$, we replace $M$ by the mass contained in a sphere of radius $\textbf{R}$ with homogeneous matter density $\rho$.\footnote{In an abuse of notation, we will in the following compare a sphere of radius $\textbf{R}$ in Euclidean space with a coordinate sphere in Schwarzschild-de Sitter.} That is, we replace $M$ in \eqref{eq:rval} with
\begin{align}
M = \frac{4 \pi}{3}\textbf{R}^3 \rho .
\end{align}
Rearranging \eqref{eq:rval} we then obtain
\begin{equation}\label{eq:critdens}
\frac{r_v}{\textbf{R}}= \left(\frac{4\pi G \rho}{c^2 \Lambda}\right)^{1/3}
\end{equation}
which is bigger than $1$ whenever $\frac{4\pi G \rho}{c^2 \Lambda}>1$. In cosmology the different eras (radiation-dominated, matter-dominated, $\Lambda$-dominated) are distinguished by the type of energy (radiation, matter, $\Lambda$) that makes up the largest fraction of the total energy. We see then that (\ref{eq:critdens}) tells us that the radius of validity for a system lies outside the radius of the system, and therefore that the Einstein field equations are a valid approximation, precisely when the system is matter-dominated. Indeed, a similar expression for the radius of validity can be obtained by looking at the ratio $\rho_M/\rho_{vac}$ between the matter density $\rho_M=\frac{3M}{4\pi r^3}$ of a mass $M$ evenly distributed over a sphere of radius $r$ and the vacuum energy density $\rho_{vac}=\frac{\Lambda c^2}{8 \pi G}$: we get
\begin{equation}
\frac{\rho_M}{\rho_{vac}}= \left(\frac{6 M G }{\Lambda c^2 r^3}\right).
\end{equation}
This is equal to $1$ precisely when
\begin{equation}
r=2^{1/3}r_v.
\end{equation}
Thus for a given mass $M$, the radius of validity is, up to a small numerical factor, the same radius at which the matter density and vacuum energy density are equal. For $r<r_v$, the matter density dominates and we are confident that the Einstein equations provide a good description. For $r>r_v$, the vacuum energy dominates, and we should be careful about applying Newtonian or post-Newtonian arguments. \\
Up to this point we have ignored the mass-correction formula because for all the compact systems under consideration so far it was sufficient to take the approximation $M = \mu$. It is only on mass scales that are on the order of the mass of the universe that we see a significant deviation, as can be seen in Figure \ref{fig:masscorrection}. Indeed, a mass deviation of $1$\% only occurs once the mass of the system reaches $10^{52}$ kg. For $\mu = 3 \times 10^{52}$ kg, which is roughly the mass of the observable universe, the mass-correction is roughly 10\%. This can be thought of as a secondary modification that takes effect already within the radius of validity and might thus be relevant for considerations on the scale of the universe. Here of course one has to keep in mind that the mass-correction formula originates from a point-particle consideration and is thus not necessarily applicable to the universe. Note also that the mass-correction becomes negligible when we only consider baryonic matter. On the other hand, in the early universe, when the total energy from the electromagnetic radiation was significantly higher, the effect might be more prominent.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{img/mass-correction.pdf}
\caption{A plot of the Schwarzschild mass $M = \mu$ and a plot of the corrected mass, as determined by (\ref{eq:correctedmass}).}
\label{fig:masscorrection}
\end{figure}
\section{Conclusion}
We derived an isometric embedding for the Schwarzschild and Schwarzschild-de Sitter spacetimes into $AdS_3$. We used the detailed behaviour of the embedding in the $\Lambda \to 0$ limit to heuristically define a radius of validity for the Einstein Equations in a de Sitter universe. One possible interpretation of this hierarchy of validity is that one can assign to $\Lambda$ a similar role in the context of gravity that $\hbar$ plays for quantum mechanics. This observation suggests that one can, in principle, interpret the cosmological constant as a fundamental energy scale for gravitational systems.\\
The considerations in Section \ref{sec:phys} show that for most scales in the universe, it is safe to ignore possible effects of the cosmological constant. For large systems, however, using an Einstein or Newtonian approximation may not be justified, despite the low value of the cosmological constant. In particular for the largest structures such as superclusters, the Newtonian approximation might not be entirely valid. Note that these effects on long-range interactions could affect the interpretation of weak lensing observations, since most of the reconstruction is based on post-Newtonian approximations; see for example \cite{kilbinger2015cosmology} for an extensive review. Note in particular that beyond the radius of validity the sign of $f'(r)$ changes and thus the lensing might behave substantially differently beyond this point. Similar considerations for the long-range interactions might be relevant to figure out which black hole stability problem is actually the physically relevant one (i.e., Kerr or Kerr-de Sitter). Last but not least, the field of vision for LIGO spans far beyond the radius of validity of even the Virgo Supercluster \cite{abbott2016prospects}. Modifications of gravitational wave sources in the spirit of \cite{ashtekar2014asymptotics,ashtekar2015asymptotics,ashtekar2015asymptotics2,ashtekar2016gravitational} might therefore have an effect on observations. \\
For a homogeneous universe the cosmological constant becomes relevant when the matter density and the vacuum energy density are roughly equal. There is, however, a secondary effect due to the mass-correction that might play a role; it is at most of the order of 10\%, so it is certainly no dramatic change.
One limitation of the present work is that \emph{a priori} it only holds true for the case of spherical symmetry investigated here. This is relevant to mention because preliminary calculations for an extension of the results in \cite{mars2017fingerprints} to the case with a positive cosmological constant suggest that in principle $\Lambda$ is detectable in the shape of the shadow of a black hole when $a>0$. As the shadow contains mostly near-horizon information, this suggests that the cosmological constant should affect the near-horizon geometry.
It would be interesting to try to elaborate in a quantitative manner on the features investigated in the present work. Further it would be interesting to investigate the role of the cosmological constant away from spherical symmetry.
\subsection*{Acknowledgements}This work was partially supported by the Australian Research Council grant DP170100630.
We would like to thank Hermine Boat for her essential support during the conceptional phase of this paper. We would further like to thank the Institute Henri Poincar\'{e} for hospitality during the trimester on Mathematical Relativity, Paris, during fall 2015, and M.B. would like to thank Monash University where part of this work was done. C.P. was supported by the Albert Einstein Institute during a part of this project. M.B. was supported by an Australian Postgraduate Award for part of this project.
\newpage
\bibliographystyle{plain}
{\em Property directed reachability (PDR)} (also called \emph{IC3}) introduced in~\cite{Bradley11,EenMB11}
is a model checking algorithm for
proving/disproving safety problems. It has been successfully applied
to software and hardware model checking, and later it has been
extended in several directions, including {\em fbPDR}
\cite{SeufertS18,SeufertS19} that uses both forward and backward
predicate transformers and {\em PrIC3} \cite{BatzJKKMS20} for the
quantitative safety problem for probabilistic systems. See~\cite{Gurfinkel2015IC3PA} for a concise overview.
The original PDR assumes that systems are given by binary
predicates representing transition relations. The PDR
algorithm maintains data structures called {\em
frames} and {\em proof obligations}---these are collections of
predicates over states---and updates them.
While this logic-based description immediately yields automated tools using SAT/SMT solvers, it limits
target systems to qualitative and
nondeterministic ones. This limitation was first overcome
by PrIC3 \cite{BatzJKKMS20} whose target is probabilistic systems. This suggests room for
further generalization of PDR.
In this paper, we propose the first lattice theory-based generalization
of the PDR algorithm; we call it \emph{LT-PDR}.
This makes the PDR algorithm apply to a wider class of safety problems, including qualitative and quantitative. We also derive a new concrete extension of PDR, namely one for Markov reward models.
We implemented the general algorithm LT-PDR in Haskell, in a way that maintains the theoretical abstraction and clarity. Deriving concrete instances for various types of systems is easy (for Kripke structures, probabilistic systems, etc.). We conducted an experimental evaluation, which shows that these easily-obtained instances have at least reasonable performance.
\myparagraph{Preview of the Theoretical Contribution}
We generalize the PDR algorithm so that it operates over an
arbitrary complete lattice $L$. This generalization recasts the PDR
algorithm to solve a general problem $\mu F\leq^{?}\alpha$ of
over-approximating the least fixed point of an $\omega$-continuous function $F\colon L\to L$ by a
safety property $\alpha$. This lattice-theoretic generalization
signifies the relationship between the PDR algorithm and the theory of
fixed points. This also allows us to incorporate quantitative
predicates suited for probabilistic verification.
More specifically,
we reconstruct the original PDR algorithm as a combination of
two constituent parts. They are called {\em positive LT-PDR} and {\em negative LT-PDR}. Positive LT-PDR comes
from a witness-based proof method by the \emph{Knaster--Tarski fixed point
theorem}, and aims to \emph{verify} $\mu F\leq^{?}\alpha$.
In contrast, negative LT-PDR comes from the \emph{Kleene fixed point theorem} and aims to \emph{refute} $\mu F\leq^{?}\alpha$.
The two algorithms build up witnesses in an iterative and
nondeterministic manner, where nondeterminism
accommodates guesses and heuristics. We identify the essence of PDR
to be an ingenious combination of these two algorithms, in which
intermediate results on one side (positive or negative) give
informed guesses on the other side.
This is how we formulate LT-PDR in~\S{}\ref{sec:int}.
We discuss several instances of our general theory of PDR in three concrete settings:
Kripke structures (where we obtain two instances of LT-PDR),
Markov decision
processes (MDPs), and Markov reward models. The two in the first setting essentially subsume many existing PDR
algorithms, such as the original PDR~\cite{Bradley11,EenMB11} and Reverse PDR~\cite{SeufertS18,SeufertS19}, and the one for MDPs
resembles PrIC3~\cite{BatzJKKMS20}. The last one (Markov reward models) is a new
algorithm that fully exploits the generality of our
framework.
In fact, there is another dimension of theoretical generalization: the derivation of the above concrete instances follows a \emph{structural theory of state-based dynamics and predicate transformers}. We formulate the structural theory in the language of \emph{category theory}~\cite{MacLane71,Awodey06}---using especially \emph{coalgebras}~\cite{Jacobs16coalgBook} and \emph{fibrations}~\cite{CLTT}---following works such as~\cite{HermidaJ98,SprungerKDH18,KoriHK21,BonchiKP18}. The structural theory tells us which safety problems arise under what conditions; it can therefore suggest that certain safety problems are unlikely to be formulatable, too. The structural theory is important because it builds a mathematical order in the PDR literature, in which theoretical developments tend to be closely tied to implementation and thus theoretical essences are often not very explicit. For example, the theory is useful in classifying a plethora of PDR-like algorithms for Kripke structures (the original, Reverse PDR, fbPDR, etc.). See \S\ref{sec:LTPDRsForKripke}.
We present the above structural theory in \S\ref{sec:strTh} and briefly discuss its use in the derivation of concrete instances in \S\ref{sec:instances}. We note, however, that this categorical theory is not needed for reading and using the other parts of the paper.
There are other works on generalization of PDR~\cite{HoderB12,RinetzkyS16}, but
our identification of the interplay of Knaster--Tarski and Kleene is new. They do not accommodate probabilistic verification, either. See Appendix~\ref{appendix:relatedWorkOnGen} for further discussions.
\myparagraph{Preliminaries}
Let $(L,\le)$ be a poset. $(L,\le)^\mathrm{op}$ denotes the opposite poset
$(L,\ge)$. Note that if $(L,\leq)$ is a complete lattice then so is
$(L,\le)^\mathrm{op}$.
An $\omega$-chain (resp. $\omega^{op}$-chain) in $L$ is an
$\mathbb{N}$-indexed family of increasing (resp. decreasing) elements
in $L$. A monotone function $F:L\to L$ is {\em $\omega$-continuous}
(resp. $\omega^{op}$-continuous) if $F$ preserves existing suprema of
$\omega$-chains (resp. infima of $\omega^\mathrm{op}$-chains).
\section{Fixed-points in Complete Lattices}
Let $(L, \leq)$ be a complete lattice and $F: L \to L$ be a monotone
function. When we analyze
fixed points of $F$, pre/postfixed points play important
roles.
\begin{definition}
A \emph{prefixed point} of $F$ is an element $x \in L$ satisfying
$Fx \leq x$. A \emph{postfixed point} of $F$ is an element
$x \in L$ satisfying $x \leq Fx$. We write $\Pref F$ and $\Postf F$
for the set of prefixed points and postfixed points of $F$,
respectively.
\end{definition}
The following
results are central in fixed point theory. They allow us to
under/over-approximate the least/greatest fixed points.
\begin{theorem} \label{thm:kt_cc}
A monotone endofunction $F$ on a complete lattice $(L, \leq)$ has
the least fixed point $\mu F$ and the greatest fixed point $\nu
F$. Moreover,
\begin{enumerate}
\item\label{item:thm:kt_cc1} (Knaster--Tarski~\cite{Tarski55}) The
set of fixed points forms a complete lattice. Furthermore,
$\mu F = \bigwedge \{x \in L \mid Fx \leq x\}$ and
$\nu F = \bigvee \{x \in L \mid x \leq Fx\}$.
\item\label{item:thm:kt_cc2} (Kleene, see e.g.~\cite{Baranga91}) If
$F$ is $\omega$-continuous,
$\mu F=\bigvee_{n \in \mathbb{N}} F^n\bot$. Dually, if $F$ is
$\omega^\mathrm{op}$-continuous,
$\nu F = \bigwedge_{n \in \mathbb{N}} F^n\top$.
\qed
\end{enumerate}
\end{theorem}
Thm.~\ref{thm:kt_cc}.\ref{item:thm:kt_cc2} is known to hold for
arbitrary $\omega$-cpos (complete lattices are their special case).
A generalization of Thm.~\ref{thm:kt_cc}.\ref{item:thm:kt_cc2} is the
Cousot--Cousot characterization~\cite{Cousot79}, where $F$ is assumed
to be monotone (but not necessarily $\omega$-continuous) and we have
$\mu F=F^\kappa\bot$ for a sufficiently large, possibly transfinite,
ordinal $\kappa$. In this paper, for the algorithmic study of PDR, we
assume the $\omega$-continuity of $F$. Note that $\omega$-continuous
$F$ on a complete lattice is necessarily monotone.
We call the $\omega$-chain $\bot \leq F\bot \leq \cdots $ \emph{the
initial chain of $F$} and the $\omega^\mathrm{op}$-chain
$\top \geq F\top \geq \cdots $ \emph{the final chain of $F$}.
These appear in Thm.~\ref{thm:kt_cc}.\ref{item:thm:kt_cc2}.
Thm.~\ref{thm:kt_cc}.\ref{item:thm:kt_cc1} and
Thm.~\ref{thm:kt_cc}.\ref{item:thm:kt_cc2} yield the following witness
notions for \emph{proving} and \emph{disproving} $\mu F \leq\alpha$,
respectively.
\begin{corollary} \label{cor:kt_kleene} Let $(L,\leq)$ be a complete
lattice and $F:L\to L$ be $\omega$-continuous.
\begin{enumerate}
\item\label{item:cor:kt_kleene1} (KT) $\mu F \leq\alpha$ if and only
if there is $x\in L$ such that $Fx \leq x\leq \alpha$.
\item\label{item:cor:kt_kleene2} (Kleene) $\mu F \not\leq\alpha$ if
and only if there is $n\in\mathbb{N}$ and $x \in L$ such that
$x \leq F^n \bot$ and $x \not \leq\alpha$. \qed
\end{enumerate}
\end{corollary}
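For later reference, the two witness notions translate directly into executable checks. The following Python sketch is ours (it is not tied to any particular instance, nor to the Haskell implementation mentioned in the introduction); it assumes the lattice is given by its order \texttt{leq}, its bottom element \texttt{bot}, and the function \texttt{F}.
\begin{verbatim}
def is_kt_witness(F, leq, x, alpha):
    """KT (positive) witness: F x <= x <= alpha proves mu F <= alpha."""
    return leq(F(x), x) and leq(x, alpha)

def is_kleene_witness(F, leq, bot, n, x, alpha):
    """Kleene (negative) witness: x <= F^n bot, x not<= alpha refutes it."""
    fn_bot = bot
    for _ in range(n):
        fn_bot = F(fn_bot)
    return leq(x, fn_bot) and not leq(x, alpha)
\end{verbatim}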
By Cor.~\ref{cor:kt_kleene}.\ref{item:cor:kt_kleene1}, proving
$\mu F \leq\alpha$ is reduced to searching for $x\in L$ such that
$Fx \leq x\leq \alpha$. We call such $x$ a \emph{KT (positive)
witness}. In contrast, by
Cor.~\ref{cor:kt_kleene}.\ref{item:cor:kt_kleene2}, disproving
$\mu F \leq\alpha$ is reduced to searching for $n\in\mathbb{N}$ and
$x \in L$ such that $x \leq F^n \bot$ and $x \not \leq\alpha$. We call
such $x$ a \emph{Kleene (negative) witness}.
\begin{notation} We shall use lowercase
(Roman and Greek) letters for elements of $L$ (such as
$\alpha, x\in L$), and uppercase letters for (finite or infinite)
sequences of $L$ (such as $X\in L^{*}$ or $L^{\omega}$). The $i$-th element of a sequence $X$ (or the $(i-j)$-th element, when indices start from $j$) is designated by a subscript:
$X_{i}\in L$.
\end{notation}
\section{Lattice-Theoretic Reconstruction of PDR }
\label{sec:seqcon}
Towards the LT-PDR algorithm, we first introduce two simpler
algorithms, called positive LT-PDR (\S{}\ref{sec:pos}) and negative
LT-PDR (\S{}\ref{sec:neg}). The target problem of the LT-PDR
algorithm is the following:
\begin{definition}[the LFP-OA
problem $\mu F \leq^{?} \alpha$]\label{def:lfpOverapprox}
Let $L$ be a complete lattice, $F: L \to L$ be $\omega$-continuous,
and $\alpha \in L$. The \emph{lfp over-approximation (LFP-OA)
problem} asks if $\mu F \leq \alpha$ holds; the problem is denoted
by $\mu F \leq^{?} \alpha$.
\end{definition}
\begin{example}\label{ex:forward}
Consider a transition system, where $S$ is the set of states,
$\iota \subseteq S$ is the set of initial states,
$\delta: S \to \mathcal{P} S$ is the transition relation, and
$\alpha \subseteq S$ is the set of safe states. Then letting
$L\coloneqq \mathcal{P} S$ and
$F \coloneqq \iota \cup \bigcup_{s \in (-)}\delta(s)$, the lfp
over-approximation problem $\mu F \leq^{?} \alpha$ asks
whether all reachable states are safe. It is equal to the problem
studied by the conventional IC3/PDR~\cite{Bradley11,EenMB11}.
\end{example}
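For a finite state space, the initial chain $\bot \leq F\bot \leq \cdots$ stabilizes after finitely many steps, so $\mu F$---and hence the answer to $\mu F \leq^{?} \alpha$---can be computed by brute force (Thm.~\ref{thm:kt_cc}.\ref{item:thm:kt_cc2}). The following Python sketch of this instance (the example data and names are ours) serves as a reference point for the PDR-style algorithms developed below.
\begin{verbatim}
def reachable(S, iota, delta):
    # mu F for F(X) = iota | post(X), computed as the limit of F^n(bot)
    F = lambda X: frozenset(iota) | {t for s in X for t in delta.get(s, ())}
    X = frozenset()
    while True:
        X_next = F(X)
        if X_next == X:
            return X
        X = X_next

S, iota, delta = {0, 1, 2, 3}, {0}, {0: [1], 1: [2], 2: [1]}
alpha = {0, 1, 2}                            # safe states; state 3 is unreachable
print(reachable(S, iota, delta) <= alpha)    # True, i.e. mu F <= alpha
\end{verbatim}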
Positive LT-PDR iteratively builds a KT witness in a bottom-up manner
that positively answers the LFP-OA problem, while negative LT-PDR
iteratively builds a Kleene witness for the same LFP-OA
problem. We shall present these two algorithms as clear reflections of
two proof principles
(Cor.~\ref{cor:kt_kleene}), each of which comes from the fundamental
Knaster--Tarski and Kleene theorems.
The two algorithms build up witnesses in an iterative and
nondeterministic manner. The nondeterminism is there for accommodating
guesses and heuristics. We identify the essence of PDR to be an
ingenious combination of these two algorithms, in which intermediate
results on one side (positive or negative) give informed guesses on
the other side. This way, each of the positive and negative algorithms
provides heuristics in resolving the nondeterminism in the execution of
the other. This is how we formulate the LT-PDR algorithm
in~\S{}\ref{sec:int}.
The dual of LFP-OA problem is called the \emph{gfp-under-approximation
problem} (GFP-UA): the GFP-UA problem for a complete lattice $L$,
an $\omega^\mathrm{op}$-continuous function $F:L\to L$ and
$\alpha\in L$
is whether the inequality $\alpha\leq \nu F$ holds or not,
and is denoted by $\alpha\le^{?}\nu F$. It is evident that the GFP-UA
problem for $(L, F, \alpha)$ is equivalent to the LFP-OA problem for
$(L^\mathrm{op},F,\alpha)$. This suggests the dual algorithm called LT-OpPDR
for GFP-UA problem. See Rem.~\ref{rem:LTOpPDR} later.
\subsection{Positive LT-PDR: Sequential Positive Witnesses}\label{sec:pos}
We introduce the notion of KT$^\omega$ witness---a KT witness
(Cor.~\ref{cor:kt_kleene}) constructed in a sequential manner. Positive LT-PDR searches for a KT$^\omega$ witness by growing its finitary
approximations (called KT sequences).
Let $L$ be a complete lattice. We regard each element $x\in L$ as
an abstract presentation of a predicate on states. The inequality $x\le y$ means
that the predicate $x$ is stronger than the predicate $y$. We introduce
the complete lattice $[n, L]$ of increasing chains of length
$n\in\mathbb{N}$, whose elements are $(X_0 \leq \cdots \leq X_{n-1})$
in $L$ equipped with the element-wise order. We similarly introduce
the complete lattice $[\omega, L]$ of $\omega$-chains in $L$. We lift
$F:L\to L$ to $F^\# : [\omega, L] \to [\omega, L]$ and
$F^\#_n: [n, L] \to [n, L]$ (for $n \geq 2$) as follows. Note that the
entries are shifted.
\begin{equation}
\begin{aligned}
F^\#(X_0 \leq X_1 \leq \cdots)&\;:=\; (\bot \leq FX_0 \leq FX_1 \leq\cdots)\\[-.3em]
F^\#_n(X_0 \leq \cdots \leq X_{n-1})\;&:=\; (\bot \leq FX_0 \leq \cdots\leq FX_{n-2})
\end{aligned}
\end{equation}
\begin{definition}[$\text{KT}^{\omega}${} witness]\label{def:KTseqwitness}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
Define
$\Delta \alpha := (\alpha \leq \alpha \leq \cdots)$.
A \emph{$\text{KT}^{\omega}${} witness} is $X \in [\omega, L]$ such that $F^\# X \leq X \leq \Delta \alpha$.
\end{definition}
\begin{theorem} \label{thm:safe_witness}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
There exists a KT witness (Cor.~\ref{cor:kt_kleene})
if and only if there exists a $\text{KT}^{\omega}$ witness.
\qed
\end{theorem}
Concretely, a KT witness $x$ yields a $\text{KT}^{\omega}$\ witness $x\le x\le\cdots$; conversely, a $\text{KT}^{\omega}$\ witness $X$ yields a KT witness $\bigvee_{n \in \omega} X_{n}$.
A full proof (via Galois connections) is given in the appendix.
The initial sequence $\bot \le F\bot \le \cdots$ is always a $\text{KT}^{\omega}$\ witness for $\mu F \leq \alpha$.
There are other $\text{KT}^{\omega}$\ witnesses whose growth is accelerated by some heuristic guesses---an extreme example is $x\le x\le\cdots$ with a KT witness $x$.
$\text{KT}^{\omega}$\ witnesses embrace the spectrum of such different sequential witnesses for $\mu F\le \alpha$, those which mix
routine constructions (i.e.\ application of $F$) and heuristic guesses.
\begin{definition}[KT sequence] \label{def:kt_sequence}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
A \emph{KT sequence} for
$\mu F \leq^{?} \alpha$ is a finite increasing sequence
$(X_0\le \cdots\le X_{n-1})$, for $n\geq 2$, satisfying
\begin{enumerate}
\item \label{item:xn} $X_{n-2} \leq \alpha$; and
\item \label{item:fn-alg} $X$ is a prefixed point of $F^\#_n$, that is, $FX_{i}\le X_{i+1}$ for each $i\in [0, n-2]$.
\end{enumerate}
A KT sequence $(X_{0} \leq \cdots \leq X_{n-1})$ is
\emph{conclusive} if $X_{j+1} \le X_{j}$ for some $j$.
\end{definition}
KT sequences are finite by definition. Note that the upper bound $\alpha$ is imposed on all $X_{i}$ but $X_{n-1}$. This freedom in the choice of $X_{n-1}$ offers room for heuristics, one that is exploited in the combination with negative LT-PDR (\S{}\ref{sec:int}).
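Both defining conditions, as well as conclusiveness, can be checked directly. The following Python sketch does so for the powerset instance of Example~\ref{ex:forward}; the function names are ours and the code is purely illustrative.
\begin{verbatim}
def is_kt_sequence(F, X, alpha):
    # X is a list of frozensets X_0, ..., X_{n-1} with n >= 2
    n = len(X)
    if n < 2:
        return False
    increasing = all(X[i] <= X[i+1] for i in range(n-1))
    cond1 = X[n-2] <= alpha                               # condition 1
    cond2 = all(F(X[i]) <= X[i+1] for i in range(n-1))    # condition 2
    return increasing and cond1 and cond2

def is_conclusive_kt(X):
    return any(X[j+1] <= X[j] for j in range(len(X)-1))
\end{verbatim}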
We take KT sequences as finite approximations of $\text{KT}^{\omega}$\ witnesses.
This view shall be justified by the partial order
$(\preceq)$ between KT sequences defined below.
\begin{definition}[order $\preceq$ between KT sequences]
We define a partial order relation $\preceq$ on KT sequences
as follows: $(X_0, \dots, X_{n-1}) \preceq (X'_0, \dots, X'_{m-1})$ if
$n \leq m$ and $X_j \geq X_j'$ for each $0 \leq j \leq n-1$.
\end{definition}
The order $X_j\geq X_j'$ represents that $X_j'$ is
a stronger predicate (on states) than $X_j$. Therefore $X\preceq X'$ expresses
that $X'$ is a longer and stronger / more determined chain than $X$. We obtain $\text{KT}^{\omega}$\
witnesses as their $\omega$-suprema.
\begin{theorem}\label{thm:KTseqCPO}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}. The set
of KT sequences, augmented with the set of $\text{KT}^{\omega}$\
witnesses $\{X \in [\omega, L] \mid F^\# X \leq X \leq \Delta \alpha \}$ and
ordered by the natural extension of $\preceq$, is an $\omega$-cpo.
In this $\omega$-cpo, each $\text{KT}^{\omega}$ witness $X$ is represented as
the suprema of an $\omega$-chain of KT sequences, namely
$X = \bigvee_{n \geq 2} X|_n$ where $X|_n \in [n, L]$ is the
length $n$ prefix of $X$. \qed
\end{theorem}
\begin{proposition} \label{prop:safe}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}. There exists a $\text{KT}^{\omega}$\ witness
if and only if there exists a conclusive KT sequence.
\end{proposition}
\begin{proof}
($\Rightarrow$): If there exists a $\text{KT}^{\omega}$\ witness,
$\mu F \leq \alpha$ holds by Cor.~\ref{cor:kt_kleene} and
Thm.~\ref{thm:safe_witness}. Therefore, the ``informed guess'' $(\mu F \leq \mu F)$ gives a
conclusive KT sequence. ($\Leftarrow$): When $X$ is a conclusive KT sequence with $X_j = X_{j+1}$,
$X_0 \leq \cdots \leq X_j = X_{j+1} = \cdots$ is a $\text{KT}^{\omega}$\ witness. \qed
\end{proof}
\noindent
The proposition above yields the following partial algorithm that aims to answer the LFP-OA problem positively. It searches for a conclusive KT sequence.
\begin{definition}[positive LT-PDR]
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}. \emph{Positive LT-PDR} is
the algorithm shown in Alg.~\ref{alg:posi_pdr}, which says
`True' to the LFP-OA problem $\mu F \leq^{?} \alpha$ if successful.
\end{definition}
\begin{figure}[p]
\begin{minipage}{\textwidth}
\begin{algorithm}[H]
\caption{positive LT-PDR}\label{alg:posi_pdr}
\SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output}
\SetKwInOut{Initially}{Initially} \Input{An instance
($\mu F \leq^? \alpha$) of the LFP-OA problem in $L$}
\SetKwRepeat{Repeat}{repeat (do
one of the following)}{until}
\Output{`True' with a conclusive KT sequence} \KwData{a KT
sequence $X = (X_0 \leq \dots \leq X_{n-1})$} \Initially{$X\coloneqq(\bot \leq F \bot)$}
\Repeat{any return value is obtained}{ \textbf{Valid}
If $X_{j+1} \leq X_j$ for some $j < n-1$, return `True' with the conclusive KT sequence $X$. \\
\textbf{Unfold} If $X_{n-1} \leq \alpha$,
let $X\coloneqq(X_0 \leq \cdots \leq X_{n-1} \leq \top)$, appending $\top$ \\
\textbf{Induction} If some $k \geq 2$ and $x\in L$ satisfy
$X_{k} \not \leq x$ and $F(X_{k-1} \land x) \leq x$,
let $X \coloneqq X[X_j := X_j \land x]_{2 \leq j \leq k}$. \\
}
\end{algorithm}
\end{minipage}
\vspace{.5em}
\begin{minipage}{\textwidth}
\begin{algorithm}[H]
\caption{negative LT-PDR}\label{alg:nega_pdr}
\SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output}
\SetKwInOut{Initially}{Initially} \SetKwRepeat{Repeat}{repeat (do
one of the following)}{until} \Input{An instance
($\mu F \leq^? \alpha$) of the LFP-OA problem in $L$}
\Output{`False' with a conclusive Kleene sequence} \KwData{a
Kleene sequence $C = (C_0, \dots, C_{n-1})$}
\Initially{$C\coloneqq()$} \Repeat{any return value is obtained}{
\textbf{Candidate} Choose $x\in L$ such that
$x \not \leq \alpha$,
and let $C\coloneqq(x)$. \\
\textbf{Model}
If $C_0 = \bot$, return `False' with the conclusive Kleene sequence $C$. \\
\textbf{Decide} If there exists $x$ such that $C_0 \leq Fx$,
then let $C \coloneqq (x, C_0, \dots, C_{n-1})$. \\
}
\end{algorithm}
\end{minipage}
\vspace{.5em}
\begin{minipage}{\textwidth}
\begin{algorithm}[H]
\caption{LT-PDR}\label{alg:pdr}
\SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output}
\SetKwInOut{Initially}{Initially}
\SetKwRepeat{Repeat}{repeat (do
one of the following)}{until}
\Input{An instance
($\mu F \leq^? \alpha$) of the LFP-OA problem in $L$}
\Output{`True' with a conclusive KT sequence, or `False' with a
conclusive Kleene sequence} \KwData{$(X; C)$ where $X$ is a KT
sequence $(X_0 \leq \cdots \leq X_{n-1})$, and $C$ is a Kleene
sequence $(C_i, C_{i+1}, \dots, C_{n-1})$ ($C$ is empty if
$n=i$).} \Initially{$(X; C)\coloneqq(\bot \leq F \bot;\; ()\,)$}
\Repeat{any return value is obtained}{ \textbf{Valid}
If $X_{j+1} \leq X_j$ for some $j < n-1$, return `True' with the conclusive KT sequence $X$. \\
\textbf{Unfold} If $X_{n-1} \leq \alpha$,
let $(X; C)\coloneqq(X_0 \leq \cdots \leq X_{n-1} \leq \top; ())$. \\
\textbf{Induction} If some $k \geq 2$ and $x\in L$ satisfy
$X_{k} \not \leq x$ and $F(X_{k-1} \land x) \leq x$,
let $(X; C) \coloneqq (X[X_j := X_j \land x]_{2 \leq j \leq k}; C)$. \\
\textbf{Candidate} If $C=()$ and $X_{n-1} \not \leq \alpha$,
choose
$x\in L$ such that $x \leq X_{n-1}$ and $x \not \leq \alpha$,
and let $(X;C)\coloneqq(X; (x))$. \\
\textbf{Model}
If $C_1$ is defined, return `False' with the conclusive Kleene sequence $(\bot, C_1, \dots, C_{n-1})$. \\
\textbf{Decide} If $C_i \leq FX_{i-1}$, choose $x \in L$
satisfying $x \leq X_{i-1}$ and $C_i \leq Fx$,
and let $(X; C) \coloneqq (X; (x, C_i, \dots, C_{n-1}))$. \\
\textbf{Conflict} If $C_i \not \leq FX_{i-1}$, choose $x \in L$
satisfying $C_i \not \leq x$ and $F(X_{i-1} \land x) \leq x$, and let
$(X; C) \coloneqq (X[X_j := X_j \land x]_{2 \leq j \leq i};
(C_{i+1}, \dots, C_{n-1}))$. }
\end{algorithm}
\end{minipage}
\end{figure}
The rules are designed by the following principles.
\textbf{Valid} is applied when the current $X$ is
conclusive.
\textbf{Unfold} extends $X$ with $\top$. In fact, we
can use any element $x$ satisfying $X_{n-1} \leq x$ and $FX_{n-1} \leq x$ in place of
$\top$ (by the application of \textbf{Induction} with $x$). The condition $X_{n-1} \leq \alpha$ is checked to ensure
that the extended $X$ satisfies the condition in
Def.~\ref{def:kt_sequence}.\ref{item:xn}.
\textbf{Induction} strengthens $X$, replacing the $j$-th element with its meet with $x$.
The first condition $X_{k} \not \leq x$ ensures that this rule indeed strengthens $X$,
and the second condition
$F(X_{k-1} \land x) \leq x$ ensures that the strengthened $X$ satisfies
the condition in Def.~\ref{def:kt_sequence}.\ref{item:fn-alg}, that is,
$F^\#_n X \leq X$ (see the proof in Appendix~\ref{ap:config}).
\begin{theorem}\label{thm:positive_sound_terminate}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
Then positive LT-PDR is sound, i.e.~if it outputs `True' then $\mu F \leq \alpha$ holds.
Moreover, assume $\mu F\le \alpha$ is true. Then positive LT-PDR is weakly
terminating (meaning that suitable choices make the algorithm
terminate).
\qed
\end{theorem}
The last ``optimistic termination'' is realized by the informed guess
$\mu F$ as $x$ in \textbf{Induction}.
To guarantee the termination of LT-PDR,
we assume that the complete lattice $L$ is well-founded
(no infinite decreasing chain exists in $L$), although we cannot hope for this assumption in every instance (\S{} \ref{sec:LTPDRsForMDP},
\ref{sec:LTPDRsForMRM}).
\begin{lemma} \label{lem:kt_order}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
If $\mu F \leq \alpha$, then for any KT sequence $X$, at least one of the three rules in Algorithm~\ref{alg:posi_pdr} is enabled.
Moreover, for any KT sequence $X$, let $X'$ be obtained by applying either \textbf{Unfold} or \textbf{Induction}. Then $X\preceq X'$ and $X\neq X'$.
\qed
\end{lemma}
\begin{theorem} \label{thm:posi_kt}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
Assume that $\le$ in $L$ is well-founded and $\mu F \leq \alpha$.
Then, any non-terminating run of positive LT-PDR converges to a $\text{KT}^{\omega}$\ witness.
Moreover, if there is no strictly increasing $\omega$-chain bounded by $\alpha$ in $L$, then positive LT-PDR is strongly terminating.
\qed
\end{theorem}
\subsection{Negative LT-PDR: Sequential Negative Witnesses}\label{sec:neg}
We next introduce \emph{Kleene sequences} as a
lattice-theoretic counterpart of
\emph{proof obligations} in the standard PDR. Kleene sequences
represent a chain of sufficient conditions to conclude that certain unsafe states are reachable.
\begin{definition}[Kleene sequence] \label{def:kleene_sequence}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}. A \emph{Kleene sequence}
for the LFP-OA problem $\mu F \leq^? \alpha$ is a finite sequence
$(C_0, \dots, C_{n-1})$, for $n \geq 0$ ($C$ is empty if $n=0$), satisfying
\begin{enumerate}
\item \label{item:cfc} $C_j \leq FC_{j-1}$ for each
$1 \leq j \leq n-1$;
\item \label{item:cn} $C_{n-1} \not \leq \alpha$.
\end{enumerate}
A Kleene sequence $(C_0, \dots, C_{n-1})$ is \emph{conclusive} if
$C_0 = \bot$.
We may use $i \ (0 \leq i \leq n)$ instead of $0$ as the starting
index of the Kleene sequence $C$.
\end{definition}
When we have a Kleene sequence $C=(C_{0},\dots,C_{n-1})$, the
implication $(C_{j}\le F^{j}\bot) \implies (C_{j+1}\le F^{j+1}\bot)$ holds for each $0 \leq j < n-1$.
Therefore, when $C$ is conclusive, $C_{n-1}$ is a
Kleene witness (Cor.~\ref{cor:kt_kleene}.\ref{item:cor:kt_kleene2}).
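As with KT sequences, the defining conditions and conclusiveness are directly checkable; a Python sketch for the powerset instance follows (names are ours, and we assume a non-empty $C$).
\begin{verbatim}
def is_kleene_sequence(F, C, alpha):
    # C is a non-empty list of frozensets C_0, ..., C_{n-1}
    chain = all(C[j] <= F(C[j-1]) for j in range(1, len(C)))
    return chain and not (C[-1] <= alpha)

def is_conclusive_kleene(C):
    return C[0] == frozenset()
\end{verbatim}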
\begin{proposition} \label{prop:unsafe}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
There exists a Kleene (negative)
witness if and only if there exists a conclusive Kleene sequence.
\end{proposition}
\begin{proof}
($\Rightarrow$): If there exists a Kleene witness $x$ such that
$x \leq F^n \bot$ and $x \not \leq \alpha$,
$(\bot, F\bot, \dots, F^n \bot)$ is a conclusive Kleene sequence.
($\Leftarrow$): Assume there exists a conclusive Kleene sequence
$C$. Then $C_{n-1}$ satisfies $C_{n-1} \leq F^{n-1}\bot$ and
$C_{n-1} \not \leq \alpha$ because of
$C_{n-1} \leq FC_{n-2} \leq \cdots \leq F^{n-1}C_0 = F^{n-1}\bot$ and Def.~\ref{def:kleene_sequence}.\ref{item:cn}.
\qed
\end{proof}
This proposition suggests the following algorithm to answer the LFP-OA problem negatively. It searches for a conclusive
Kleene sequence. The algorithm updates a Kleene sequence until its
first component becomes $\bot$.
\begin{definition}[negative LT-PDR]
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
\emph{Negative LT-PDR} is
the algorithm shown in Alg.~\ref{alg:nega_pdr},
which says `False' to the LFP-OA problem $\mu F \leq^? \alpha$ if successful.
\end{definition}
The rules are designed by the following principles.
\textbf{Candidate} initializes $C$ with only one
element $x$. The element $x$ has to be chosen such that
$x \not \leq \alpha$ to ensure
Def.~\ref{def:kleene_sequence}.\ref{item:cn}.
\textbf{Model} is applied when the current Kleene
sequence $C$ is conclusive.
\textbf{Decide} prepends $x$ to $C$. The element $x$
has to be chosen such that $C_0 \leq Fx$ to ensure
Def.~\ref{def:kleene_sequence}.\ref{item:cfc}.
\begin{theorem} \label{thm:negative}
Let $L,F,\alpha$ be as in Def.~\ref{def:lfpOverapprox}.
\begin{enumerate}
\item Negative LT-PDR is sound, i.e.\ if it outputs `False' then $\mu F \not\leq \alpha$.
\item Assume $\mu F \not \le \alpha$ is true. Then negative LT-PDR
is weakly terminating
(meaning that suitable choices make the algorithm terminate).
\qed
\end{enumerate}
\end{theorem}
\subsection{LT-PDR: Integrating Positive and Negative}\label{sec:int}
We have introduced two simple PDR algorithms, called positive LT-PDR
(\S{}\ref{sec:pos}) and negative LT-PDR (\S{}\ref{sec:neg}). They are
so simple that they have potential inefficiencies. Specifically, in positive LT-PDR, it is unclear how we should choose $x \in L$ in \textbf{Induction}, while negative LT-PDR may diverge because the rules \textbf{Candidate} and \textbf{Decide} may choose $x\in L$ that would not lead to a conclusive Kleene sequence. We resolve these inefficiencies by
combining positive LT-PDR and negative LT-PDR. The combined PDR
algorithm is called LT-PDR, and it is a lattice-theoretic
generalization of conventional PDR.
Note that negative LT-PDR is only weakly terminating. Even worse, it is easy to make it diverge---after a choice of $x$ in \textbf{Candidate} or \textbf{Decide} such that $x\not\le \mu F$, no continued execution of the algorithm can lead to a conclusive Kleene sequence. For deciding $\mu F \leq^? \alpha$ efficiently, therefore, it is crucial to detect such useless Kleene sequences.
The core fact that underlies the efficiency of PDR is the following proposition, which says that a KT sequence (in positive LT-PDR) can quickly tell that a Kleene sequence (in negative LT-PDR) is useless. This fact is crucially used for many rules in LT-PDR (Def.~\ref{def:lt-pdr}).
\begin{proposition} \label{prop:C_X} Let $C=(C_i, \dots, C_{n-1})$ be a
Kleene sequence $(2 \leq n, 0 < i \leq n-1)$ and
$X=(X_0 \leq \cdots \leq X_{n-1})$ be a KT sequence. Then
\begin{enumerate}
\item \label{item:c_x} $C_i \not \leq X_i$ implies that $C$ cannot be extended to a
conclusive one,
that is, there does not exist
$C_0, \dots, C_{i-1}$ such that $(C_0, \dots, C_{n-1})$ is
conclusive.
\item \label{item:c_fx} $C_i \not \leq F X_{i-1}$ implies that $C$
cannot be extended to a conclusive one.
\item \label{item:n-step} There is no conclusive Kleene sequence with length
$n-1$. \qed
\end{enumerate}
\end{proposition}
(Item~\ref{item:n-step} holds because a conclusive Kleene sequence of length $n-1$ would satisfy $C_{n-2} \leq F^{n-2}\bot \leq X_{n-2} \leq \alpha$ by Lem.~\ref{lem:x_init} below, contradicting $C_{n-2} \not\leq \alpha$.)
The proof relies on the following lemmas.
\begin{lemma} \label{lem:x_init} Any KT sequence
$(X_0 \leq \cdots \leq X_{n-1})$ over-approximates the initial
sequence: $ F^i\bot\le X_i$ holds for any $i$ such that $0\le i\le n-1$.
\qed
\end{lemma}
\begin{lemma} \label{lem:C_X}
Let $C=(C_i, \dots, C_{n-1})$ be a
Kleene sequence $(0 < i \leq n-1)$ and
$(X_0 \leq \cdots \leq X_{n-1})$ be a KT sequence. The following
satisfy $1 \Leftrightarrow 2 \Rightarrow 3$.
\begin{enumerate}
\item \label{item:possible} The Kleene sequence $C$
can be extended to a conclusive one.
\item \label{item:fibot}$C_i \leq F^i\bot$.
\item \label{item:fjx}$C_i \leq F^j X_{i-j}$ for each $j$ with $0 \leq j \leq i$.
\qed
\end{enumerate}
\end{lemma}
Using the above lattice-theoretic properties, we combine positive and
negative LT-PDRs into the following {\em LT-PDR} algorithm. It is
also a lattice-theoretic generalization of the original PDR
algorithm. The combination
exploits the mutual
relationship between KT sequences and Kleene sequences,
exhibited as Prop.~\ref{prop:C_X}, for narrowing down choices in positive and negative LT-PDRs.
\begin{definition}[LT-PDR] \label{def:lt-pdr}
Given a complete lattice $L$,
an $\omega$-continuous function $F: L \to L$,
and an element $\alpha \in L$,
\emph{LT-PDR} is the algorithm
shown in Alg.~\ref{alg:pdr} for the LFP-OA problem
$\mu F \leq^? \alpha$.
\end{definition}
The rules are designed according to the following principles.
(\textbf{Valid}, \textbf{Unfold}, and \textbf{Induction}): These rules
are almost the same as in positive LT-PDR.
In \textbf{Unfold},
we reset the Kleene sequence because of
Prop.~\ref{prop:C_X}.\ref{item:n-step}.
Occurrences of \textbf{Unfold} punctuate an execution of the algorithm: between two occurrences of \textbf{Unfold}, a main goal (towards a negative conclusion) is to construct a conclusive Kleene sequence with the same length as the $X$.
(\textbf{Candidate}, \textbf{Model}, and \textbf{Decide}): These rules
have many similarities to those in negative LT-PDR. Differences are as follows:
the \textbf{Candidate} and \textbf{Decide} rules impose $x \leq X_i$
on the new element $x$ in $(x, C_{i+1}, \dots, C_{n-1})$ because
Prop.~\ref{prop:C_X}.\ref{item:c_x} tells us that other choices are useless. In \textbf{Model}, we only need to check
whether $C_1$ is defined instead of $C_0 = \bot$. Indeed, since $C_1$ is
added in \textbf{Candidate} or \textbf{Decide},
$C_1 \leq X_1 = F\bot$ always holds.
Therefore, $2 \Rightarrow 1$ in Lem.~\ref{lem:C_X} shows that $(\bot, C_1, \dots, C_{n-1})$ is
conclusive.
(\textbf{Conflict}): This new rule emerges from the combination of
positive and negative LT-PDRs. This rule is applied when
$C_i \not \leq FX_{i-1}$, which confirms that the current $C$ cannot
be extended to a conclusive one (Prop.~\ref{prop:C_X}.\ref{item:c_fx}).
Therefore, we eliminate $C_i$ from $C$ and strengthen $X$ so that we
cannot choose $C_i$ again, that is, so that $C_i \not \leq (X_i \land x)$.
Let us explain how $X$ is strengthened. The element $x$ has to be
chosen so that $C_i \not \leq x$ and $F(X_{i-1} \land x) \leq x$.
The former condition ensures that
the strengthened $X$ satisfies $C_i \not \leq (X_i \land x)$,
while
the latter
inequality guarantees that the strengthened $X$ is again a KT sequence (cf.\ Lem.~\ref{lem:config}).
One can see that \textbf{Conflict} is \textbf{Induction} with the additional condition $C_i \not \leq x$; this condition narrows down the search space for $x$ using the Kleene sequence $C$.
Canonical choices
of $x\in L$ in the \textbf{Candidate}, \textbf{Decide} and \textbf{Conflict} rules
are $x:=X_{n-1}$, $x:=X_{i-1}$ and $x:=FX_{i-1}$, respectively.
However, there can be cleverer choices that accelerate LT-PDR. For example, when $L=\mathcal{P} S$, we can let $x:= S\setminus(C_i\setminus FX_{i-1})$ in \textbf{Conflict}.
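To make the last choice concrete, the following Haskell fragment spells out the canonical and the cleverer \textbf{Conflict} choices for $L=\mathcal{P} S$, with predicates represented as finite sets and the ambient set $S$ passed explicitly. It is an illustrative sketch only, independent of the implementation discussed in \S\ref{sec:implEval}.
\begin{lstlisting}
import qualified Data.Set as Set

type Pred a = Set.Set a

-- canonical Conflict choice: x := F X_{i-1}
conflictCanonical :: (Pred a -> Pred a) -> Pred a -> Pred a
conflictCanonical f xPrev = f xPrev

-- cleverer Conflict choice: x := S \ (C_i \ F X_{i-1})
conflictClever :: Ord a => Pred a -> (Pred a -> Pred a) -> Pred a -> Pred a -> Pred a
conflictClever s f xPrev ci = s `Set.difference` (ci `Set.difference` f xPrev)
\end{lstlisting}
By monotonicity of $F$, both choices satisfy the side conditions $C_i\not\le x$ and $F(X_{i-1}\land x)\le x$ of \textbf{Conflict} whenever the rule is applicable, i.e.\ when $C_i\not\le FX_{i-1}$.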
\begin{lemma} \label{lem:config} Each rule of LT-PDR, when applied to
a pair of a KT and a Kleene sequence, yields a pair of a
KT and a Kleene sequence. \qed
\end{lemma}
\begin{theorem}[correctness] \label{thm:pdrcor}
LT-PDR is sound, i.e.~if it outputs `True'
then $\mu F \leq \alpha$ holds, and if it outputs `False' then
$\mu F \not \leq \alpha$ holds. \qed
\end{theorem}
Many existing PDR algorithms ensure termination if the state
space is finite. The following proposition generalizes it.
\begin{proposition}[termination] \label{prop:term}
LT-PDR terminates regardless of the order of the rule-applications
if the following conditions are satisfied.
\begin{enumerate}
\item\label{item:prop:term0}
\textbf{Valid} and \textbf{Model} rules are
immediately applied if applicable.
\item\label{item:prop:term1}
$(L,\leq)$ is well-founded.
\item\label{item:prop:term2} Either of the following is
satisfied: a) $\mu F \leq \alpha$ and $(L, \leq)$ has
no strictly increasing $\omega$-chain bounded by $\alpha$,
or b) $\mu F \not \leq \alpha$. \qed
\end{enumerate}
\end{proposition}
\noindent
Cond.~\ref{item:prop:term0} is natural: it just requires LT-PDR to immediately conclude `True' or `False' if it can.
Cond.~\ref{item:prop:term1}--\ref{item:prop:term2} are always satisfied when $L$ is finite.
Thm.~\ref{thm:pdrcor} and Prop.~\ref{prop:term} still hold if the \textbf{Induction} rule is dropped. However, the rule can accelerate the convergence of KT sequences and improve efficiency.
\begin{remark}[LT-OpPDR]\label{rem:LTOpPDR}
The GFP-UA problem $\alpha\le^{?}\nu F$ is the dual of LFP-OA, obtained by opposing the order $\le$ in $L$. We can dualize the LT-PDR algorithm (Alg.~\ref{alg:pdr}), too, obtaining what we call the \emph{LT-OpPDR} algorithm for GFP-UA. Moreover, we can express LT-OpPDR as LT-PDR if a suitable \emph{involution} $\neg\colon L\to L^{\mathrm{op}}$ is present. See Appendix~\ref{appendix:LTOpPDR} for further details; see also Prop.~\ref{prop:corresponds_inv}.
\end{remark}
\section{Structural Theory of PDR by Category Theory}\label{sec:strTh}
Before we discuss concrete instances of LT-PDR in~\S{}\ref{sec:instances},
we develop a structural theory of transition systems and predicate transformers as a basis of LT-PDR.
The theory is formulated in the language of \emph{category theory}~\cite{MacLane71,Awodey06,Jacobs16coalgBook,CLTT}.
We use category theory
because 1) categorical modeling of relevant notions
is well established in the community (see e.g.~\cite{Jacobs16coalgBook,CLTT,BonchiKP18,AguirreK20,Sokolova11}), and 2) it gives us the right level of abstraction that accommodates a variety of instances. In particular, qualitative and quantitative settings are described in a uniform manner.
\begin{table}[tbp]
\caption{Categorical modeling of state-based dynamics and predicate transformers}
\label{table:categoricalNotions}
\begin{tabular}{m{16em}l}
\toprule
\rowcolor[gray]{.87}
\multicolumn{2}{c}{a transition system as a \emph{coalgebra}~\cite{Jacobs16coalgBook} in
the base category $\mathbb{B}$ of sets and functions}
\\ \cmidrule{1-2}
objects $X,Y,\dotsc$ in
$\mathbb{B}$~~~
&
sets (in our examples where $\mathbb{B}=\mathbf{Set}$)
\\
\rowcolor[gray]{.95}
an arrow $f\colon X\to Y$ in $\mathbb{B}$
&
a function (in our examples where $\mathbb{B}=\mathbf{Set}$)
\\
a functor $G\colon\mathbb{B}\to \mathbb{B}$
&
\parbox{20em}{ a transition type \\
$\left(\parbox{18em}{$G=\mathcal{P}$ for Kripke structures (\S\ref{sec:LTPDRsForKripke}), \\$G=(\mathcal{D}(-)+1)^\mathrm{Act}$ for MDPs (\S\ref{sec:LTPDRsForMDP}), etc.}\right)$
}
\\
\rowcolor[gray]{.95}
a \emph{coalgebra} $\delta\colon S\to GS$ in $\mathbb{B}$~\cite{Jacobs16coalgBook}
&
a transition system (Kripke structure, MDP, etc.)
\\
\midrule
\rowcolor[gray]{.87}
\multicolumn{2}{c}{a \emph{fibration} $p\colon \mathbb{E}\to \mathbb{B}$~\cite{CLTT} that equips sets in $\mathbb{B}$ with \emph{predicates}}
\\ \cmidrule{1-2}
the fiber category $\mathbb{E}_{S}$ over $S$ in $\mathbb{B}$
&
the lattice of predicates over a set $S$
\\
\rowcolor[gray]{.95}
\parbox{15em}{ the \emph{pullback} functor $l^{*}\colon \mathbb{E}_{Y}\to \mathbb{E}_{X}$
\\\phantom{hoge} for $l\colon X\to Y$ in $\mathbb{B}$
}
&
\parbox{20em}{ substitution $P(y)\mapsto P(l(x))$ in
\\\phantom{hoge}predicates $P\in \mathbb{E}_{Y}$ over $Y$}
\\[+.7em]
a \emph{lifting} $\dot{G}\colon \mathbb{E}\to\mathbb{E}$ of $G$ along $p$
&
\parbox{20em}{ logical interpretation of the transition type $G$
\\
(specifies e.g.\ the may vs.\ must modalities)
}
\\\midrule
\rowcolor[gray]{.87}
\multicolumn{2}{c}{the \emph{predicate transformer}, whose fixed points are of our interest}
\\ \cmidrule{1-2}
the composite $\delta^{*}\dot{G}\colon \mathbb{E}_{S}\to\mathbb{E}_{S}$
&
\parbox{20em}{ the predicate transformer associated with
\\\phantom{hoge} the transition system $\delta$
} \\\bottomrule
\end{tabular}
\end{table}
Strictly speaking, the fixed points of our interest in \S\ref{sec:structuralTheoryofPDR} are those of predicate transformers built from this composite, such as $x\mapsto \alpha\land\delta^{*}\dot{G}x$, rather than of $\delta^{*}\dot{G}$ itself.
Our structural theory (\S\ref{sec:strTh}) serves as a backend, not a frontend. That is,
\begin{itemize}
\item the theory in \S\ref{sec:strTh} is important in that it explains how the instances in \S\ref{sec:instances} arise and why others do not, but
\item the instances in \S\ref{sec:instances} are described in non-categorical terms, so readers who skipped \S\ref{sec:strTh} will have no difficulties following \S\ref{sec:instances} and using those instances.
\end{itemize}
\subsection{Categorical Modeling of Dynamics and Predicate Transformers}\label{sec:categoricalModeling}
Our interests are in instances of the LFP-OA problem $\mu F \leq^{?} \alpha$ (Def.~\ref{def:lfpOverapprox}) that appear in \emph{model checking}. In this context, 1) the underlying lattice $L$ is that of \emph{predicates} over a state space, and 2) the function $F\colon L\to L$ arises from the dynamic/transition structure, specifically as a \emph{predicate transformer}. The categorical notions in Table~\ref{table:categoricalNotions} model these ideas (state-based dynamics, predicate transformers). This modeling is well-established in the community.
Our introduction of Table~\ref{table:categoricalNotions} here is minimal, due to the limited space.
See Appendix~\ref{appendix:categorical} and the references therein for more details.
A \emph{category} consists of \emph{objects} and \emph{arrows} between them. In Table~\ref{table:categoricalNotions}, categories occur twice: 1) a \emph{base category} $\mathbb{B}$ where objects are typically sets and arrows are typically functions; and 2) \emph{fiber categories} $\mathbb{E}_{S}$, defined for each object $S$ of $\mathbb{B}$, that are identified with the lattices of \emph{predicates}. Specifically, objects $P,Q,\dotsc$ of $\mathbb{E}_{S}$ are predicates over $S$, and an arrow $P\to Q$ represents logical implication. A general fact behind the last is that every preorder is a category---see e.g.~\cite{Awodey06}.
\myparagraph{Transition Systems as Coalgebras}
State-based transition systems are modeled as \emph{coalgebras} in the base category $\mathbb{B}$~\cite{Jacobs16coalgBook}. We use a \emph{functor} $G\colon\mathbb{B}\to\mathbb{B}$ to represent a transition type. A \emph{$G$-coalgebra} is an arrow $\delta\colon S\to GS$, where $S$ is a state space and $\delta$ describes the dynamics. For example, a Kripke structure can be identified with a pair $(S,\delta)$ of a set $S$ and a function $\delta\colon S\to \mathcal{P} S$, where $\mathcal{P} S$ denotes the powerset. The powerset construction $\mathcal{P}$ is known to be a functor $\mathcal{P}\colon \mathbf{Set}\to\mathbf{Set}$; therefore Kripke structures are $\mathcal{P}$-coalgebras. For other choices of $G$, $G$-coalgebras become different types of transition systems, such as MDPs (\S\ref{sec:LTPDRsForMDP}) and Markov Reward Models (\S\ref{sec:LTPDRsForMRM}).
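In Haskell, this coalgebraic reading can be sketched as follows. The type names below (such as \lstinline{Coalg} and \lstinline{GMDP}) are ours, chosen for illustration; they are not part of the implementation in \S\ref{sec:implEval}.
\begin{lstlisting}
import qualified Data.Set as Set

-- a G-coalgebra with carrier s is a function s -> G s
type Coalg g s = s -> g s

-- Kripke structures: G = powerset (finite sets here)
type Kripke s = Coalg Set.Set s

-- MDPs: G = (D(-) + 1)^Act, read as a function Act -> Maybe (Dist s)
newtype Dist s = Dist [(s, Rational)]            -- naive finite distributions
newtype GMDP act s = GMDP (act -> Maybe (Dist s))
type MDP act s = Coalg (GMDP act) s
\end{lstlisting}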
\myparagraph{Predicates Form a Fibration }
Fibrations are powerful categorical constructs that can model various indexed entities; see e.g.~\cite{CLTT} for their general theory. Our use of them is for organizing the lattices $\mathbb{E}_{S}$ of \emph{predicates} over a set $S$, indexed by the choice of $S$. For example, $\mathbb{E}_{S}=2^{S}$---the lattice of subsets of $S$---is used for modeling qualitative predicates. For quantitative reasoning (e.g.\ for MDPs), we use $\mathbb{E}_{S}=[0,1]^{S}$, where $[0,1]$ is the unit interval. This way, qualitative and quantitative reasonings are mathematically unified in the language of fibrations.
A \emph{fibration} is a functor $p\colon \mathbb{E}\to\mathbb{B}$ with suitable properties; it can be thought of as a collection $(\mathbb{E}_{S})_{S\in \mathbb{B}}$ of \emph{fiber categories} $\mathbb{E}_{S}$---indexed by objects $S$ of $\mathbb{B}$---suitably organized as a single category $\mathbb{E}$. Notable in this organization is that we obtain the \emph{pullback} functor $l^{*}\colon \mathbb{E}_{Y}\to \mathbb{E}_{X}$
for each arrow $l\colon X\to Y$ in $\mathbb{B}$. In our examples, $l^{*}$ is a \emph{substitution} along $l$ in predicates---$l^{*}$ is the monotone map that carries a predicate $P(y)$ over $Y$ to the predicate $P(l(x))$ over $X$.
In this paper, we restrict to a subclass of fibrations (called \emph{\CLatw fibrations}) in which every fiber category $\mathbb{E}_{S}$ is a complete lattice, and each pullback functor preserves all meets. We therefore write $P\le Q$ for arrows in $\mathbb{E}_{S}$; this represents logical implication, as announced above. Notice that each pullback functor $l^{*}$ has a left adjoint (the lower adjoint in terms of Galois connections), which exists by Freyd's adjoint functor theorem. The left adjoint is denoted by $l_{*}$.
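For instance, for $\mathbb{B}=\mathbf{Set}$ and $\mathbb{E}_{S}=2^{S}$, the pullback functor along $l\colon X\to Y$ is the inverse-image map $l^{*}(B)=\{x\mid l(x)\in B\}$, and its left adjoint is the direct-image map $l_{*}(A)=\{l(x)\mid x\in A\}$; indeed $l_{*}(A)\subseteq B$ holds if and only if $A\subseteq l^{*}(B)$.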
\begin{wrapfigure}[4]{r}{0pt}
\begin{math}
\xymatrix@R=1em{
\mathbb{E}
\ar[r]^-{\dot{G}}
\ar[d]_-{p}
&
\mathbb{E}
\ar[d]_-{p}
\\
\mathbb{B}
\ar[r]^-{G}
&
\mathbb{B}
}
\end{math}
\end{wrapfigure}
We also consider a \emph{lifting} $\dot{G}\colon \mathbb{E}\to\mathbb{E}$ of $G$ along $p$; it is a functor $\dot{G}$ such that $p\dot{G}=Gp$. See the diagram on the right. It specifies the \emph{logical interpretation} of the transition type $G$. For example, for $G=\mathcal{P}$ (the powerset functor) from the above, two choices of $\dot{G}$ are for the \emph{may} and \emph{must} modalities. See e.g.~\cite{HermidaJ98,KoriHK21,AguirreK20,KomoridaKHKH19}.
\myparagraph{Categorical Predicate Transformer} The above constructs allow us to model predicate transformers---$F$ in our examples of the LFP-OA problem $\mu F \leq^{?} \alpha$---in categorical terms. A \emph{predicate transformer} along a coalgebra $\delta\colon S\to GS$ with respect to the lifting $\dot{G}$ is simply the composite
\begin{math}
\mathbb{E}_{S}\xrightarrow{\dot{G}}
\mathbb{E}_{GS}\xrightarrow{\delta^{*}}
\mathbb{E}_{S}
\end{math}, where the first $\dot{G}$ is the restriction of $\dot{G}\colon \mathbb{E}\to\mathbb{E}$ to $\mathbb{E}_{S}$. Intuitively, 1) given a \emph{postcondition} $P$ in $\mathbb{E}_{S}$, 2) it is first interpreted as the predicate $\dot{G}P$ over $GS$, and then 3) it is pulled back along the dynamics $\delta$ to yield a \emph{precondition} $\delta^{*}\dot{G}P$. Such (backward) predicate transformers are fundamental in a variety of model checking problems.
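For instance, for a Kripke structure $\delta\colon S\to\mathcal{P} S$ and the must-modality lifting $\dot{\mathcal{P}}$ used in \S\ref{sec:LTPDRsForKripke}, the composite carries a postcondition $P\subseteq S$ to the precondition $\delta^{*}\dot{\mathcal{P}}P=\{s\in S\mid \forall s'\in\delta(s).\, s'\in P\}$, the set of states all of whose successors satisfy $P$.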
\subsection{Structural Theory of PDR from Transition Systems}\label{sec:structuralTheoryofPDR}
We formulate a few general \emph{safety} problems. We show how they are amenable to the LT-PDR (Def.~\ref{def:lt-pdr}) and LT-OpPDR (Rem.~\ref{rem:LTOpPDR}) algorithms.
\begin{definition}[\backwardGFP, BSP] \label{def:safe_prob_bd} Let $p$ be a \CLatw
fibration, $\delta: S \to GS$ be a coalgebra in $\mathbb{B}$, and $\dot{G}: \mathbb{E} \to \mathbb{E}$ be a lifting of
$G$ along $p$ such that $\dot G_X:\mathbb{E}_X\to\mathbb{E}_{GX}$ is
$\omega^\mathrm{op}$-continuous for each $X\in\mathbb{B}$. The
\emph{\backwardGFP for
$(\iota \in \mathbb{E}_S, \delta,
\alpha \in \mathbb{E}_S)$ in $(p, G, \dot{G})$} is
the GFP-UA problem for $(\mathbb{E}_S, \alpha \land \delta^*\dot{G}, \iota)$, that is,
\begin{equation}
\label{eq:bd_gfp}
\iota \;\leq^?\; \nu x.\, \alpha \land \delta^*\dot{G} x.
\end{equation}
\end{definition}
Here, $\iota$ represents the initial states and $\alpha$ represents the safe states.
The predicate transformer $x\mapsto \alpha \land \delta^*\dot{G}x$ in (\ref{eq:bd_gfp}) is the standard one for modeling safety---currently safe ($\alpha$), and the next time $x$ ($\delta^*\dot{G}x$). Its gfp is the safety property; (\ref{eq:bd_gfp}) asks if all initial states ($\iota$) satisfy the safety property.
Since
the \backwardGFP is a GFP-UA problem, we can solve it by LT-OpPDR (Rem.~\ref{rem:LTOpPDR}).
\begin{wrapfigure}[5]{r}{0pt}
\begin{math}
\xymatrix@R=2em@C=3.3em{
\text{BSP} \ar[r]^-{\text{as-is}} \ar@/_1ex/[dr]_-{\text{involution }\neg} \ar@/^1ex/[dr]^(.7){\text{ suitable adjoints}
} &\text{GFP-UA} \ar[r]^-{\text{LT-OpPDR}} &\text{True/False} \\
&\text{LFP-OA} \ar[r]^-{\text{LT-PDR}} &\text{True/False}
}
\end{math}
\end{wrapfigure}
Additional assumptions allow us to
reduce the \backwardGFP to LFP-OA problems, which are solvable by LT-PDR, as shown on the right.
The first case requires the existence of the \emph{left adjoint} to the predicate transformer
$\delta^*\dot G_S:\mathbb{E}_S\to \mathbb{E}_S$.
Then we can translate BSP to the following LFP-OA problem.
It directly asks
whether all reachable states are safe.
\begin{proposition}[\forwardLFP, FSP] \label{prop:safe_prob_fd} In the setting of
Def.~\ref{def:safe_prob_bd}, assume that each
$\dot{G}_X: \mathbb{E}_{X} \to \mathbb{E}_{GX}$ preserves all meets.
Then by letting $\dot{H}_S: \mathbb{E}_{GS} \to \mathbb{E}_{S}$ be
the left adjoint of $\dot{G}_S$, the BSP \eqref{eq:bd_gfp} is
equivalent to the LFP-OA problem for
$(\mathbb{E}_S,\iota\vee\dot{H}_S\delta_*,\alpha)$:
\begin{equation} \label{eq:fd_lfp}
\mu x.\,\iota \lor \dot{H}_S\delta_*x\; \leq^?\; \alpha.
\end{equation}
This problem
is called the \emph{\forwardLFP} for $(\iota, \delta, \alpha)$ in $(p, G, \dot{G})$.
\qed
\end{proposition}
The second case assumes that the complete
lattice $\mathbb{E}_S$ of predicates admits an involution operator
$\neg:\mathbb{E}_S\to\mathbb{E}_S^\mathrm{op}$ (cf.\ Appendix.~\ref{appendix:LTOpPDR}).
\begin{proposition}[\backwardLFP, IBSP] \label{prop:corresponds_inv} In the setting of
Def.~\ref{def:safe_prob_bd}, assume further that there is a monotone
function $\neg: \mathbb{E}_S \to \mathbb{E}^\mathrm{op}_S$ satisfying
$\neg \circ \neg = \mathrm{id}$. Then the \backwardGFP \eqref{eq:bd_gfp}
is equivalent to the LFP-OA problem for
$(\mathbb{E}_S,(\neg\alpha)\lor(\neg\circ\delta^*\dot
G\circ\neg),\neg\iota)$, that is,
\begin{equation} \label{eq:bd_lfp}
\mu x.\,(\neg \alpha) \lor (\neg\circ \delta^*\dot{G}\circ\neg x)\; \leq^{?}\; \neg \iota.
\end{equation}
We call
\eqref{eq:bd_lfp} the \emph{\backwardLFP} for $(\iota, \delta, \alpha)$ in $(p, G, \dot{G})$.
Here $(\neg \alpha) \lor (\neg\circ \delta^*\dot{G}\circ\neg (-))$ is the
\emph{inverse backward predicate transformer}.
\qed
\end{proposition}
When both additional assumptions are fulfilled (in Prop.~\ref{prop:safe_prob_fd} \&~\ref{prop:corresponds_inv}), we obtain two LT-PDR algorithms to
solve BSP.
One can even simultaneously run these two algorithms---this is done in
fbPDR~\cite{SeufertS18, SeufertS19}. See also \S\ref{sec:LTPDRsForKripke}.
\begin{auxproof}
Such a setting does not seem common, however: the only example we know is
for Kripke structures~\cite{SeufertS18, SeufertS19}.
We will illustrate this special case in
\S\ref{sec:p_monad}.
\end{auxproof}
\section{Known and New PDR Algorithms as Instances}\label{sec:instances}
We present several concrete instances of our LT-PDR algorithms. The one for Markov reward models is new (\S\ref{sec:LTPDRsForMRM}). We also sketch how those instances can be systematically derived by the theory in \S\ref{sec:strTh}; details are in Appendix~\ref{appendix:strDerivDetails}.
\subsection{LT-PDRs for Kripke Structures: $\textbf{PDR}^{\textbf{F-Kr}}$ and $\textbf{PDR}^{\textbf{IB-Kr}}$}\label{sec:LTPDRsForKripke}
In most of the PDR literature, the target system is a Kripke structure that arises from a program's operational semantics. A \emph{Kripke structure} consists of a set $S$ of states
and a transition relation
$\delta\subseteq S\times S$ (here we ignore initial states and atomic propositions). The basic problem formulation is as follows.
\begin{definition}[backward safety problem (BSP) for Kripke structures]\label{def:BSPKripke}
The \emph{BSP} for a Kripke structure $(S,\delta)$, a set $\iota\in 2^S$ of initial states, and a set $\alpha\in 2^S$ of safe states, is the GFP-UA problem
\begin{math}\label{eq:BSPKripke}
\iota \,\leq^?\, \nu x.\, \alpha \land F' x,
\end{math}
where $F'\colon 2^{S}\to 2^{S}$ is defined by $F'(A)\triangleq\{s\mid \forall s'.\, ((s,s')\in \delta \Rightarrow s'\in A)\}$.
\end{definition}
It is clear that the GFP in Def.~\ref{def:BSPKripke} represents the set of states from which all reachable states are in $\alpha$. Therefore the BSP is the usual safety problem.
The above BSP is easily seen to be equivalent to the following problems.
\begin{proposition}[forward safety problem (FSP) for Kripke structures]\label{prop:FSPKripke}
The BSP in Def.~\ref{def:BSPKripke} is equivalent to the LFP-OA problem
\begin{math}\label{eq:FSPKripke}
\mu x.\, \iota \lor F'' x\, \leq^? \,\alpha
\end{math},
where $F''\colon 2^{S}\to 2^{S}$ is defined by $F''(A)\triangleq \bigcup_{s\in A}\{s'\mid (s,s')\in \delta\}$.
\qed
\end{proposition}
\begin{proposition}[inverse backward safety problem (IBSP) for Kripke structures]\label{prop:IBSPKripke}
The BSP in Def.~\ref{def:BSPKripke} is equivalent to the LFP-OA problem
\begin{math}\label{eq:IBSPKripke}
\mu x.\, \neg \alpha\lor \neg F'(\neg x) \,\leq^?\, \neg \iota
\end{math},
where $\neg\colon 2^{S}\to 2^{S}$ is the complement function $A\mapsto S\setminus A$.
\qed
\end{proposition}
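For concreteness, the three predicate transformers above can be transcribed into Haskell as follows. This is an illustrative, executable sketch for finite state spaces, with the transition relation given as a successor function; it is independent of the SAT-based implementation discussed in \S\ref{sec:implEval}.
\begin{lstlisting}
import qualified Data.Set as Set

type Pred s = Set.Set s

-- F' A: states all of whose successors are in A   (the BSP)
bwd :: Ord s => Pred s -> (s -> Pred s) -> Pred s -> Pred s
bwd states delta a = Set.filter (\s -> delta s `Set.isSubsetOf` a) states

-- F'' A: successors of states in A                (the FSP)
fwd :: Ord s => (s -> Pred s) -> Pred s -> Pred s
fwd delta a = Set.unions [ delta s | s <- Set.toList a ]

-- complement, used in the IBSP
compl :: Ord s => Pred s -> Pred s -> Pred s
compl states a = states `Set.difference` a
\end{lstlisting}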
\myparagraph{Instances of LT-PDR}
The FSP and IBSP (Prop.~\ref{prop:FSPKripke}--\ref{prop:IBSPKripke}), being LFP-OA, are amenable to the LT-PDR algorithm (Def.~\ref{def:lt-pdr}). Thus we obtain two instances of LT-PDR; we call them \emph{$\textbf{PDR}^{\textbf{F-Kr}}$} and \emph{$\textbf{PDR}^{\textbf{IB-Kr}}$}. $\textbf{PDR}^{\textbf{IB-Kr}}${} is a step-by-step dual to the application of LT-OpPDR to the BSP (Def.~\ref{def:BSPKripke})---see Rem.~\ref{rem:LTOpPDR}.
We compare these two instances of LT-PDR
with algorithms in the literature.
If we impose $|C_i|=1$
on each element $C_i$ of Kleene sequences, the $\textbf{PDR}^{\textbf{F-Kr}}${} instance of LT-PDR coincides with
the conventional IC3/PDR~\cite{Bradley11, EenMB11}. In contrast, $\textbf{PDR}^{\textbf{IB-Kr}}${} coincides with \emph{Reverse PDR} in~\cite{SeufertS18,SeufertS19}.
The parallel execution of $\textbf{PDR}^{\textbf{F-Kr}}${} and $\textbf{PDR}^{\textbf{IB-Kr}}${} roughly corresponds to
fbPDR~\cite{SeufertS18, SeufertS19}.
\begin{auxproof}
All these algorithms---including LT-PDR---represent predicates (i.e.\ subsets) by logical formulas in their implementation, in order to exploit efficient SAT solvers.
\end{auxproof}
\myparagraph{Structural Derivation} The equivalent problems (Prop.~\ref{prop:FSPKripke}--\ref{prop:IBSPKripke})
are derived systematically from the categorical theory in \S\ref{sec:structuralTheoryofPDR}. Indeed, using a lifting
$\dot \mathcal{P}\colon 2^{S} \to 2^{\mathcal{P} S}$ such that $A\mapsto\{A'\mid A'\subseteq A\}$ (the \emph{must modality} $\Box$), $F'$ in Def.~\ref{def:BSPKripke} coincides with $\delta^{*}\dot\mathcal{P}$ in (\ref{eq:bd_gfp}). The above $\dot \mathcal{P}$ preserves meets (cf.\ the modal axiom $\Box(\varphi\land\psi)\cong\Box\varphi\land\Box\psi$, see e.g.~\cite{BlackburnRV01}); thus Prop.~\ref{prop:safe_prob_fd} derives the FSP. Finally, $\neg$ in Prop.~\ref{prop:IBSPKripke} allows the use of Prop.~\ref{prop:corresponds_inv}. More details are in Appendix~\ref{appendix:strDerivDetails}.
\subsection{LT-PDR for MDPs: $\textbf{PDR}^{\textbf{IB-MDP}}$}\label{sec:LTPDRsForMDP}
The only known PDR-like algorithm for \emph{quantitative} verification is \emph{PrIC3}~\cite{BatzJKKMS20} for Markov decision processes (MDPs). Here we instantiate LT-PDR for MDPs and compare it with PrIC3.
An \emph{MDP} consists of a set $S$
of states,
a set $\mathrm{Act}$ of actions
and a transition function $\delta$ mapping $s \in S$ and $a \in \mathrm{Act}$
to either $\ast$ (``the action $a$ is unavailable at $s$'') or a probability distribution $\delta(s)(a)$ over $S$.
\begin{definition}[IBSP for MDPs]\label{def:IBSPMDP}
The \emph{inverse backward safety problem (IBSP)} for an MDP $(S,\delta)$, an initial state $s_{\iota}\in S$, a real number $\lambda\in[0,1]$, and a set $\alpha\subseteq S$ of safe states, is the LFP-OA problem
\begin{math}\label{eq:IBSPMDP}
\mu x.\, F'( x) \;\leq^?\; d_{\iota,\lambda}
\end{math}.
Here $d_{\iota,\lambda}\colon S\to [0,1]$ is the predicate such that $d_{\iota,\lambda}(s_{\iota})=\lambda$ and $d_{\iota,\lambda}(s)=1$ otherwise. $F'\colon [0,1]^{S}\to [0,1]^{S}$ is defined by $ F' (d)(s) =1$ if $s\not\in\alpha$, and $ F' (d)(s) = \max\{\sum_{s' \in S} d(s') \cdot \delta(s)(a)(s') \mid a \in
\mathrm{Act}, \delta (s)(a) \neq \ast \}$ if $s\in\alpha$.
\end{definition}
The function $F'$ in Def.~\ref{def:IBSPMDP} is a \emph{Bellman operator} for MDPs---it takes the average of $d$ over $\delta(s)(a)$ and takes the maximum over $a$. Therefore the lfp in Def.~\ref{def:IBSPMDP} is the maximum reachability probability to $S\setminus \alpha$; the problem asks if it is $\le \lambda$. In other words, it asks whether the \emph{safety} probability---of staying in $\alpha$ henceforth, under any choices of actions---is $\ge 1-\lambda$. This problem is the same as in~\cite{BatzJKKMS20}.
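As an illustration, the Bellman operator $F'$ of Def.~\ref{def:IBSPMDP} can be transcribed as follows. This is a sketch only, independent of the implementation in \S\ref{sec:implEval}: the interval $[0,1]$ is approximated by \lstinline{Double}, and we assume that at least one action is available at every state in $\alpha$.
\begin{lstlisting}
bellman :: [act]                                -- the action set Act
        -> (s -> act -> Maybe [(s, Double)])    -- delta; Nothing = unavailable
        -> (s -> Bool)                          -- membership in alpha
        -> (s -> Double) -> (s -> Double)
bellman acts delta inAlpha d s
  | not (inAlpha s) = 1
  | otherwise       = maximum
      [ sum [ p * d s' | (s', p) <- dist ]      -- average of d over delta(s)(a)
      | a <- acts, Just dist <- [delta s a] ]   -- maximum over available actions
\end{lstlisting}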
\myparagraph{Instance of PDR} The IBSP (Def.~\ref{def:IBSPMDP}) is LFP-OA and thus amenable to LT-PDR. We call this instance \emph{$\textbf{PDR}^{\textbf{IB-MDP}}$}; it is spelled out in Appendix~\ref{ap:mdp}.
$\textbf{PDR}^{\textbf{IB-MDP}}${}
shares many essences with PrIC3
\cite{BatzJKKMS20}. It uses the operator $F'$ in Def.~\ref{def:IBSPMDP}, which coincides with the one in \cite[Def.~2]{BatzJKKMS20}.
PrIC3 maintains \emph{frames}; they coincide with KT sequences in $\textbf{PDR}^{\textbf{IB-MDP}}$.
Our Kleene sequences correspond to
\emph{obligations} in PrIC3, modulo the following difference. Kleene sequences aim at a negative witness (\S\ref{sec:neg}), but they happen to help the positive proof efforts too (\S\ref{sec:int}); obligations in PrIC3 are solely for accelerating the positive proof efforts.
Thus, when these positive proof efforts fail in PrIC3, one additionally needs to check whether the obligations yield a negative witness.
\myparagraph{Structural Derivation} One can derive the IBSP (Def.~\ref{def:IBSPMDP}) from the categorical theory in \S\ref{sec:structuralTheoryofPDR}. Specifically, we first formulate the \emph{BSP}
\begin{math}
\neg d_{\lambda} \;\leq^?\; \nu x.\, d_\alpha \land \delta^*\dot{\Gmdp }x
\end{math},
where $\dot{G}$ is a suitable lifting (of $G$ for MDPs, Table~\ref{table:categoricalNotions}) that combines average and minimum,
$\neg \colon [0,1]^{S}\to[0,1]^{S}$ is defined by $(\neg d)(s)\triangleq 1-d(s)$,
and $d_{\alpha}$ is such that $d_{\alpha}(s)=1$ if $s\in\alpha$ and $d_{\alpha}(s)=0$ otherwise. Using
$\neg \colon [0,1]^{S}\to[0,1]^{S}$ in the above as an involution, we apply Prop.~\ref{prop:corresponds_inv} and obtain the IBSP (Def.~\ref{def:IBSPMDP}).
Another benefit of the categorical theory is that it can tell us that a forward instance of LT-PDR (much like $\textbf{PDR}^{\textbf{F-Kr}}${} in \S\ref{sec:LTPDRsForKripke}) is unlikely for MDPs. Indeed, we showed in Prop.~\ref{prop:safe_prob_fd} that the preservation of meets by $\dot{G}$ is essential (the existence of a left adjoint is equivalent to meet preservation). We can easily show that our $\dot{G}$ for MDPs does not preserve meets. See Appendix~\ref{appendix:profwedge}.
\subsection{LT-PDR for Markov Reward Models: $\textbf{PDR}^{\textbf{MRM}}$}
\label{sec:LTPDRsForMRM}
We present a PDR-like algorithm for \emph{Markov reward models (MRMs)}, which seems to be new, as an instance of LT-PDR.
An MRM consists of a set $S$ of states
and a transition function $\delta$
that maps $s \in S$ (the current state) and $c \in \mathbb{N}$ (the reward) to a function
$\delta(s)(c): S \to [0, 1]$; the last represents the probability distribution
of next states.
We solve the following problem. We use $[0,\infty]$-valued predicates---representing accumulated rewards---where $[0,\infty]$ is the set of extended nonnegative reals.
\begin{definition}[SP for MRMs]\label{def:SPMRM}
The \emph{safety problem (SP)} for an MRM $(S,\delta)$, an initial state $s_{\iota}\in S$, $\lambda\in[0,\infty]$, and a set $\alpha\subseteq S$ of safe states, is
\begin{math}\label{eq:IBSPMRM}
\mu x.\, F'( x) \,\leq^?\, d_{\iota,\lambda}
\end{math}.
Here $d_{\iota,\lambda}\colon S\to [0,\infty]$ maps $s_{\iota}$ to $\lambda$ and others to $\infty$, and $F'\colon [0,\infty]^{S}\to [0,\infty]^{S}$ is defined by
\begin{math}
F'(d)(s) = 0
\end{math} if $s\not\in\alpha$, and
\begin{math}
F'(d)(s) =
\sum_{s' \in S,c \in \mathbb{N}} (c+d(s')) \cdot
\delta(s)(c)(s')
\end{math} if $s\in\alpha$.
\end{definition}
The function $F'$ accumulates expected reward in $\alpha$. Thus the problem asks if the expected accumulated reward, starting from $s_{\iota}$ and until leaving $\alpha$, is $\le \lambda$.
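In code, $F'$ of Def.~\ref{def:SPMRM} reads as follows---again an illustrative sketch: $[0,\infty]$ is approximated by \lstinline{Double}, and the reward support of each state is assumed finite so that the sum below is finite.
\begin{lstlisting}
mrmStep :: (s -> [(Int, [(s, Double)])])    -- delta: (reward, distribution) pairs
        -> (s -> Bool)                      -- membership in alpha
        -> (s -> Double) -> (s -> Double)
mrmStep delta inAlpha d s
  | not (inAlpha s) = 0
  | otherwise       = sum [ (fromIntegral c + d s') * p
                          | (c, dist) <- delta s, (s', p) <- dist ]
\end{lstlisting}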
\myparagraph{Instance of PDR} The SP (Def.~\ref{def:SPMRM}) is LFP-OA and thus amenable to LT-PDR. We call this instance \emph{$\textbf{PDR}^{\textbf{MRM}}$}. It seems new. It is spelled out in Appendix~\ref{ap:mrm}.
\myparagraph{Structural Derivation} The function $F'$ in Def.~\ref{def:SPMRM} can be expressed categorically as $F'(x)=d_\alpha \land \delta^*\dot{G}(x)$, where $d_{\alpha}\colon S\to [0,\infty]$ carries $s\in\alpha$ to $\infty$ and $s\not\in\alpha$ to $0$, and $\dot{G}$ is a suitable lifting that accumulates expected reward. However, the SP (Def.~\ref{def:SPMRM}) is \emph{not} an instance of the three general safety problems in \S\ref{sec:structuralTheoryofPDR}.
\begin{auxproof}
---the combination of $\mu$ and $\land$ in Def.~\ref{def:SPMRM} does not occur in \S\ref{sec:structuralTheoryofPDR}.
\end{auxproof}
Consequently, we expect that instances of LT-PDR other than $\textbf{PDR}^{\textbf{MRM}}${} (such as analogues of $\textbf{PDR}^{\textbf{F-Kr}}${} and $\textbf{PDR}^{\textbf{IB-Kr}}${} in \S\ref{sec:LTPDRsForKripke}) are hard to obtain for MRMs.
\section{Implementation and Evaluation}
\label{sec:implEval}
\myparagraph{Implementation \lstinline{LTPDR}}
We implemented LT-PDR in Haskell. Exploiting Haskell's language features, it is succinct ($\sim$50 lines) and almost a literal translation of Alg.~\ref{alg:pdr} to Haskell. Its main part is presented in Appendix~\ref{appendix:code}. In particular, using suitable type classes, the code is as abstract and generic as Alg.~\ref{alg:pdr}.
Specifically, our implementation is a Haskell module named \lstinline{LTPDR}. It has two interfaces, namely the type class \lstinline{CLat $\tau$} (the lattice of predicates) and the type \lstinline{Heuristics $\tau$} (the definitions of \textbf{Candidate}, \textbf{Decide}, and \textbf{Conflict}). The main function for LT-PDR is
\lstinline{ltPDR :: CLat $\tau$ => Heuristics $\tau$ -> ($\tau$ -> $\tau$) -> $\tau$ -> IO (PDRAnswer $\tau$)}, where the second argument is for a monotone function $F$ of type \lstinline{$\tau$ -> $\tau$} and the last is for the safety predicate $\alpha$.
Obtaining concrete instances is easy by fixing $\tau$ and \lstinline{Heuristics $\tau$}. A simple implementation of $\textbf{PDR}^{\textbf{F-Kr}}${} takes $15$ lines; a more serious SAT-based one for $\textbf{PDR}^{\textbf{F-Kr}}${} takes $\sim$130 lines; $\textbf{PDR}^{\textbf{IB-MDP}}${} and $\textbf{PDR}^{\textbf{MRM}}${} take $\sim$80 lines each.
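For illustration, the following fragment shows the shape of such an instantiation for the FSP of a small Kripke structure (Prop.~\ref{prop:FSPKripke}). Everything except \lstinline{ltPDR}, \lstinline{CLat}, \lstinline{Heuristics}, and \lstinline{PDRAnswer} is a hypothetical placeholder: in particular, a \lstinline{CLat} instance for set-based predicates and a value \lstinline{kripkeHeuristics} implementing \textbf{Candidate}, \textbf{Decide}, and \textbf{Conflict} (e.g.\ via the canonical choices of \S\ref{sec:int}) are assumed to be defined separately. The monotone function passed to \lstinline{ltPDR} is $x\mapsto\iota\lor F''x$, and the last argument is $\alpha$.
\begin{lstlisting}
import qualified Data.Set as Set

type Pred s = Set.Set s

delta :: Int -> Pred Int                     -- a 4-state toy Kripke structure
delta s = Set.fromList [min (s + 1) 3]

fwd :: Pred Int -> Pred Int                  -- F'' of the FSP
fwd a = Set.unions [ delta s | s <- Set.toList a ]

checkFSP :: Heuristics (Pred Int) -> IO (PDRAnswer (Pred Int))
checkFSP kripkeHeuristics =
    ltPDR kripkeHeuristics (\x -> iota `Set.union` fwd x) alpha
  where
    iota  = Set.fromList [0 :: Int]          -- initial states
    alpha = Set.fromList [0, 1, 2]           -- safe states
\end{lstlisting}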
\myparagraph{Experiment Setting}
We conducted experiments to assess the performance of instances of \lstinline{LTPDR}.
The settings are as follows: a 1.2GHz Quad-Core Intel Core i7 with 10 GB memory, run in Docker, for $\textbf{PDR}^{\textbf{IB-MDP}}${};
an Apple M1 chip with 16 GB memory
for the other instances. The settings differ because we needed Docker to run PrIC3~\cite{BatzJKKMS20}.
\myparagraph{Experiments with $\textbf{PDR}^{\textbf{MRM}}${}}
Table~\ref{table:expMRM}
shows the results. We observe that $\textbf{PDR}^{\textbf{MRM}}${} answered correctly, and that the execution time is reasonable. Further performance analysis (e.g.\ comparison with~\cite{KatoenKZ05}) and improvement is future work; the point here, nevertheless, is the fact that we obtained a reasonable MRM model checker by adding $\sim$80 lines to the generic solver \lstinline{LTPDR}.
\myparagraph{Experiments with $\textbf{PDR}^{\textbf{IB-MDP}}${}}
Table~\ref{table:expMDP} shows the results.
Both PrIC3 and our $\textbf{PDR}^{\textbf{IB-MDP}}${} solve a linear programming (LP) problem in
\textbf{Decide}.
PrIC3 uses Z3 for this; $\textbf{PDR}^{\textbf{IB-MDP}}${} uses GLPK.
PrIC3 represents an MDP symbolically, while $\textbf{PDR}^{\textbf{IB-MDP}}${} does so concretely. Symbolic representation in $\textbf{PDR}^{\textbf{IB-MDP}}${} is also possible---it is future work.
PrIC3 can use four different
\emph{interpolation generalization} methods, leading to different performance (Table~\ref{table:expMDP}).
We observe that $\textbf{PDR}^{\textbf{IB-MDP}}${} outperforms PrIC3 for some benchmarks with smaller state spaces.
We believe that the failure of $\textbf{PDR}^{\textbf{IB-MDP}}${} in many instances can be attributed to our current choice of a generalization method (it is the closest to the linear one for PrIC3).
Table~\ref{table:expMDP} suggests that use of \emph{polynomial} or \emph{hybrid} can enhance the performance.
\myparagraph{Experiments with $\textbf{PDR}^{\textbf{F-Kr}}${}}
Table~\ref{table:expFKripke} shows the results. The benchmarks are mostly
from the HWMCC'15 competition~\cite{HWMCC15},
except for \texttt{latch0.smv}\footnote{\url{https://github.com/arminbiere/aiger}} and \texttt{counter.smv} (our own).
IC3ref vastly outperforms $\textbf{PDR}^{\textbf{F-Kr}}${} in many instances.
This is hardly a surprise---IC3ref was developed towards superior performance, while $\textbf{PDR}^{\textbf{F-Kr}}$'s emphasis is on its theoretical simplicity and genericity.
We nevertheless see that $\textbf{PDR}^{\textbf{F-Kr}}${} solves some benchmarks of substantial size, such as \texttt{power2bit8.smv}.
This demonstrates the practical potential of LT-PDR, especially in view of the following improvement opportunities (we will pursue them as future work): 1) use of well-developed SAT solvers (we currently use \texttt{toysolver}\footnote{\url{https://github.com/msakai/toysolver}} for its good interface but we could use Z3); 2) allowing $|C_i| > 1$, a technique discussed in \S\ref{sec:LTPDRsForKripke} and implemented in IC3ref but not in $\textbf{PDR}^{\textbf{F-Kr}}${}; and 3) other small improvements, e.g.\ in our CNF-based handling of propositional formulas.
\myparagraph{Ablation Study}
To assess the value of the key concept of PDR (namely the \emph{positive-negative interplay} between the Knaster--Tarski and Kleene theorems (\S\ref{sec:int})), we compared $\textbf{PDR}^{\textbf{F-Kr}}${} with the instances of positive and negative LT-PDR (\S\ref{sec:pos}--\ref{sec:neg}) for Kripke structures.
Table~\ref{table:expAblation} shows the results.
Note that
the value of the positive-negative interplay is already theoretically established; see e.g.\ Prop.~\ref{prop:C_X} (the interplay detects executions that lead to nowhere).
This value was also experimentally witnessed: see \texttt{power2bit8.smv} and \texttt{simpleTrans.smv}, where the one-sided methods made wrong choices and timed out.
One-sided methods can be efficient when they get lucky (e.g.\ in \texttt{counter.smv}).
LT-PDR may be slower because of the overhead of running two sides, but that is a trade-off for the increased chance of termination.
\myparagraph{Discussion}
We observe that all of the studied instances exhibited at least reasonable performance. We note again that detailed
performance analysis and improvement is out of our current scope. Being able to derive these model checkers, with such a small effort as $\sim$100 lines of Haskell code each, demonstrates the value of our abstract theory and its generic Haskell implementation \lstinline{LTPDR}.
\begin{table}[tbp!]\footnotesize\centering
\caption{experimental results for our $\textbf{PDR}^{\textbf{F-Kr}}${}, $\textbf{PDR}^{\textbf{IB-MDP}}${}, and $\textbf{PDR}^{\textbf{MRM}}${}}
\begin{subtable}[t]{0.48\textwidth}
\centering
\caption{Results with $\textbf{PDR}^{\textbf{MRM}}${}. The MRM is from \cite[Example 10.72]{BaierK}, whose ground truth expected reward is $\frac{4}{3}$. The benchmarks ask if the expected reward (not known to the solver) is $\le 1.5$ or $\le 1.3$.
}\label{table:expMRM}
\scalebox{.85}{ \begin{tabular}{ccc}
\toprule
Benchmark & Result & Time \\
\midrule
$\textsc{DieByCoin}^{{\leq^?}1.5}$ & True & \SI{6.01}{ms} \\
$\textsc{DieByCoin}^{{\leq^?}1.3}$ & False & \SI{43.1}{\micro s} \\
\bottomrule
\end{tabular}
} \end{subtable}
\hfill
\begin{subtable}[t]{0.48\textwidth}
\centering
\caption{Results with $\textbf{PDR}^{\textbf{F-Kr}}${}, in comparison with IC3ref. Both solvers answered correctly.
Timeout (TO) is 600 sec.
}\label{table:expFKripke}
\scalebox{.85}{
\begin{tabular}{ccccc}
\toprule
Benchmark &$|S|$ & Result & $\textbf{PDR}^{\textbf{F-Kr}}$ & IC3ref \\
\midrule
latch0.smv &$2^3$ & True & \SI{317}{\micro s} & \SI{270}{\micro s} \\
counter.smv &$2^5$ & False & \SI{1.620}{s} & \SI{3.27}{ms} \\
power2bit8.smv &$2^{15}$ & True & \SI{1.516}{s} & \SI{4.13}{ms} \\
ndista128.smv &$2^{17}$ & True & TO & \SI{73.1}{ms} \\
shift1add256.smv &$2^{21}$ & True & TO & \SI{174}{ms} \\
\bottomrule
\end{tabular}
} \end{subtable}
\vspace{\baselineskip}
\begin{subtable}{\textwidth}
\centering
\caption{Results with $\textbf{PDR}^{\textbf{IB-MDP}}$ (an excerpt of Table~\ref{table:expMDPextended}). Comparison is against PrIC3~\cite{BatzJKKMS20} with four
different interpolation generalization methods (none, linear, polynomial, hybrid). The benchmarks are from~\cite{BatzJKKMS20}.
$|S|$ is the number of states of the benchmark MDP. ``GT pr.'' is for the \emph{ground truth probability}, that is the reachability probability $\mathit{Pr}^{\mathit{max}}(s_\iota \models \diamond (S \setminus\alpha))$ computed outside the solvers under experiments. The solvers were asked whether the GT pr.\ (which they do not know) is $\le \lambda$ or not; they all answered correctly. The last five columns show the average execution time in seconds. -- is for ``did not finish,'' for out of memory or timeout (600 sec.)
}\label{table:expMDP}
\scalebox{.85}{ \begin{tabular}{ccccccccc}
\toprule
Benchmark & $|S|$ &
GT pr.\
& $\lambda$ & $\textbf{PDR}^{\textbf{IB-MDP}}${} &
\multicolumn{4}{c}{PrIC3}
\\\cmidrule{6-9}
&&&&& none & lin. & pol. & hyb.
\\
\midrule
\multirow{2}{*}{Grid} & \multirow{2}{*}{$10^2$} & \multirow{2}{*}{$1.2E^{-3}$} & 0.3 &0.31 & 1.31 & 19.34 & -- & -- \\
& & & 0.2 &0.48 & 1.75 & 24.62 & -- & --\\
\midrule
\multirow{2}{*}{Grid} & \multirow{2}{*}{$10^3$} & \multirow{2}{*}{$4.4E^{-10}$} & 0.3 &122.29 & -- & -- & -- & -- \\
& & & 0.2 &136.46 & -- & -- & -- & --\\
\midrule
\multirow{3}{*}{BRP} & \multirow{3}{*}{$10^3$} & \multirow{3}{*}{0.035} & 0.1 &-- & -- & -- & -- & -- \\
& & & 0.01 &18.52 & 56.55 & 594.89 & -- & 722.38 \\
& & & 0.005 &1.36 & 11.68 & 238.09 & -- & -- \\
\midrule
\multirow{4}{*}{ZeroConf} & \multirow{4}{*}{$10^4$} & \multirow{4}{*}{0.5} & 0.9 &-- & -- & -- & 0.58 & 0.51 \\
& & & 0.75 & -- & -- & -- & 0.55 & 0.46 \\
& & & 0.52 & -- & -- & -- & 0.48 & 0.46 \\
& & & 0.45 &$<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 \\
\midrule
\multirow{4}{*}{Chain} & \multirow{4}{*}{$10^3$} & \multirow{4}{*}{0.394} & 0.9 & -- & 72.37 & -- & 0.91 & 0.70 \\
& & & 0.4 & -- & 80.83 & -- & 0.93 & -- \\
& & & 0.35 & 177.12 & 115.98 & -- & -- & -- \\
& & & 0.3 & 88.27 & 66.89 & 557.68 & -- & -- \\
\midrule
\multirow{4}{*}{DoubleChain} & \multirow{4}{*}{$10^3$} & \multirow{4}{*}{0.215} & 0.9 & -- & -- & -- & 1.83 & 1.99 \\
& & & 0.3 & -- & -- & -- & 1.88 & 1.96 \\
& & & 0.216 & -- & -- & -- & 139.76 & -- \\
& & & 0.15 & 7.46 & -- & -- & -- & -- \\
\bottomrule
\end{tabular}}
\end{subtable}
\vspace{\baselineskip}
\begin{subtable}{\textwidth}
\centering
\caption{Ablation experiments: LT-PDR ($\textbf{PDR}^{\textbf{F-Kr}}${}) vs.~positive and negative LT-PDRs, implemented for the FSP for Kripke structures. The benchmarks are as in Table~\ref{table:expFKripke}, except for a new micro benchmark \texttt{simpleTrans.smv}.
Timeout (TO) is 600 sec.}
\label{table:expAblation}
\centering
\scalebox{.85}{ \begin{tabular}{ccccc}
\toprule
Benchmark & Result & LT-PDR & positive & negative \\
\midrule
latch0.smv & True & \SI{317}{\micro s} & \SI{1.68}{ms} & TO \\
power2bit8.smv & True & \SI{1.516}{s} & TO & TO \\
counter.smv & False & \SI{1.620}{s} & TO & \SI{2.88}{\micro s} \\
simpleTrans.smv & False & \SI{295}{\micro s} & TO & TO \\
\bottomrule
\end{tabular}
} \end{subtable}
\vspace{\baselineskip}
\end{table}
\clearpage
\begin{auxproof}
\section{Implementation}
We implemented LT-PDR using Haskell.
Our implementation separates the generic definition of LT-PDR from the definitions specific to each instantiation as explained below.
This design decision makes our implementation easy to extend.
The main part of our implementation is the module named \lstinline{LTPDR}.
This module defines a type class \lstinline{CLat $\tau$} that represents a complete lattice whose underlying set is expressed by type $\tau$.
An instance of this type class has to implement the following.
\begin{itemize}
\item Type \lstinline{Info $\tau$}, which is returned by the operation \lstinline{leq} explained below and represents auxiliary information to be passed to an implementation of \textbf{Candidate}, \textbf{Decide}, and \textbf{Conflict} in Algorithm~\ref{alg:pdr}.
\item Operation \lstinline{leq :: $\tau$ -> $\tau$ -> IO(Bool, Info $\tau$)}, which corresponds to the predicate ${\le}$. The first element of the pair returned by \lstinline{leq $x$ $y$} expresses whether $x \le y$ holds or not; the second element is the auxiliary information to be passed to the body of LT-PDR.
\item Constant functions \lstinline{bot :: $\tau$ -> $\tau$} and \lstinline{top :: $\tau$ -> $\tau$} that always return a value corresponding to $\bot$ and $\top$, respectively.
\item Operation \lstinline{meet :: $\tau$ -> $\tau$ -> $\tau$} that corresponds to ${\land}$.
\end{itemize}
Another main ingredient defined in this module is the type \lstinline{Heuristics $\tau$}, which expresses the definitions of \textbf{Candidate}, \textbf{Decide}, and \textbf{Conflict} in Algorithm~\ref{alg:pdr}; in its definition, type $\tau$ is forced to be an instance of \lstinline{CLat $\tau$}.
The type \lstinline{Heuristics $\tau$} is a record with the following fields.
\begin{itemize}
\item Field \lstinline{f_candidate :: $\tau$ -> $\tau$ -> Info $\tau$ -> IO $\tau$}, which is supposed to implement \textbf{Candidate}:
The expression \lstinline{f_candidate $X$ $\alpha$ $i$} returns a value $x$ of type $\tau$ such that $x \le X$ and $x \not\le \alpha$.
This function may assume that $X \not\le \alpha$.
\item Field \lstinline{f_decide :: $\tau$ -> $\tau$ -> ($\tau$ -> $\tau$) -> Info $\tau$ -> IO $\tau$}, which is supposed to implement \textbf{Decide}:
The expression \lstinline{f_decide $X$ $C$ $F$ $i$} returns a value $x$ of type $\tau$ such that $x \le X$ and $C \le F x$.
This function may assume $C \le F X$.
\item Field \lstinline{f_conflict :: $\tau$ -> $\tau$ -> ($\tau$ -> $\tau$) -> Info $\tau$ -> IO $\tau$}, which is supposed to implement \textbf{Conflict}:
The expression \lstinline{f_conflict X C F i} returns a value $x$ of type $\tau$ such that $C \not\leq x$ and $F (X \land x) \le x$.
This function may assume $C \not\le F X$.
\end{itemize}
On top of these definitions, the module \lstinline{LTPDR} defines a function \lstinline{ltPDR :: CLat $\tau$ => Heuristics $\tau$ -> ($\tau$ -> $\tau$) -> $\tau$ -> IO (PDRAnswer $\tau$)}.
The function \lstinline{ltPDR} takes a value $H$ of type \lstinline{Heuristic $\tau$}, a value $F$ of type \lstinline{$\tau$ -> $\tau$}, and a value $\alpha$ of type $\tau$ as arguments and executes Algorithm~\ref{alg:pdr}.
The genericity of Algorithm~\ref{alg:pdr} enables the implementation of \lstinline{ltPDR} to be abstracted by the definitions of \lstinline{CLat $\tau$} and \lstinline{Heuristic $\tau$}; its definition is independent of the implementation of \lstinline{CLat $\tau$}, \lstinline{f_candidate}, \lstinline{f_decide}, and \lstinline{f_conflict}.
Its implementation is almost a literal translation of Algorithm~\ref{alg:pdr} to Haskell.
Its implementation is fairly simple; it consists of 42 lines of Haskell code.
One can instantiate LT-PDR to a new domain by (1) defining the type $\tau$ of a complete lattice; (2) making it an instance of the type class \lstinline{CLat $\tau$} by defining \lstinline{Info $\tau$}, \lstinline{leq}, \lstinline{bot}, \lstinline{top}, and \lstinline{meet}; and (3) implementing a value of type \lstinline{Heuristic $\tau$} by definition functions \lstinline{f_candidate}, \lstinline{f_decide}, and \lstinline{f_conflict}.
In a simple case, one can implement an instance whose code consists of less than 15 lines.
We implemented the instantiation of LT-PDR to a Kripke structure (Section~\ref{sec:LTPDRsForKripke}),
to an MDP (Section~\ref{sec:LTPDRsForMDP}),
and to an MRM (Section~\ref{sec:LTPDRsForMRM}).
The heuristics in the first instantiation are defined by solving SAT problems,
and those in the other two by solving LP problems.
Each of the above instantiations is implemented as a Haskell module consisting of only 80--140 lines of code;
this demonstrates that our implementation is easy to instantiate to a new domain.
\end{auxproof}
\begin{auxproof}
\section{Experiments}
We implemented several instantiations of LT-PDR and conducted experiments.
The purposes of the experiments are (1) to measure the performance of
the instantiations mentioned in \S\ref{sec:implementation} and (2) to
conduct ablation studies of LT-PDR.
Experimental settings are as follows: for experiments in
\S{}\ref{exp:mdp}, 1.2GHz Quad-Core Intel Core i7 with the limitation to 10 GB memory;
for the other experiments, we used Apple M1 Chip limited with the limitation to 16GB memory.
\todoil{Make it clear that we don't intend to outperform the existing solvers.}
\subsection{LT-PDR for Markov Reward Models}
\begin{table}[htp]
\caption{Empirical results of LT-PDR for MRM.}
\label{table:mrm}
\centering
\begin{tabular}{ccc}
\toprule
Benchmark & Result & Time \\
\midrule
$\textsc{DieByCoin}^{{\leq^?}1.5}$ & True & \SI{6.01}{ms} \\
$\textsc{DieByCoin}^{{\leq^?}1.3}$ & False & \SI{43.1}{\micro s} \\
\bottomrule
\end{tabular}
\end{table}
We measured the performance of $\textbf{PDR}^{\textbf{MRM}}$ described in \S\ref{sec:mrm}.
We used the model presented in~\cite[Example 10.72]{BaierK} in the benchmarks, which models a die by a coin.
The benchmark $\textsc{DieByCoin}^{{\leq^?}1.5}$ tries to verify that the expected reward is less than or equal to $1.5$;
$\textsc{DieByCoin}^{{\leq^?}1.3}$ tries to verify the expected reward is less than or equal to $1.3$.
Their expected reward of the model is $\frac{4}{3}$; therefore, the former is supposed to be successfully verified, whereas the latter is not.
The timeout is set to 600s.
Table~\ref{table:mrm} shows the result.
We can observe that our implementation returns True for $\textsc{DieByCoin}^{{\leq^?}1.5}$ and False for $\textsc{DieByCoin}^{{\leq^?}1.3}$ as expected.
The execution time is also reasonable.
The comparison with other tools such as MRMC~\cite{...} is future work.
\todoil{Possibilities for performance improvement.}
\begin{table}[thp]
\caption{$\textbf{PDR}^{\textbf{IB-MDP}}$ vs.~PrIC3 (extract of Table~\ref{table:full_mdp}). The column $|S|$ is the numbers of the states of MDPs in the benchmarks. The column $\mathit{Pr}^{\mathit{max}}(s_\iota \models \diamond (S \setminus\alpha))$ shows the probabilities of reaching an unsafe state if an MDP is executed with an adversarial scheduler; a verifier is supposed to show that $\mathit{Pr}^{\mathit{max}}(s_\iota \models \diamond (S \setminus\alpha)) \le \lambda$. All the time is presented in seconds. Timeout is set to 600 sec. A cell with -- represents that the execution does not finish.}
\label{table:mdp}
\centering
\begin{tabular}{ccccccccc}
\toprule
Benchmark & $|S|$ & $\mathit{Pr}^{\mathit{max}}(s_\iota \models \diamond (S \setminus\alpha))$ & $\lambda$ & LT-PDR & none & lin. & pol. & hyb. \\
\midrule
\multirow{2}{*}{Grid} & \multirow{2}{*}{$10^2$} & \multirow{2}{*}{$1.2E^{-3}$} & 0.3 &0.31 & 1.31 & 19.34 & -- & -- \\
& & & 0.2 &0.48 & 1.75 & 24.62 & -- & --\\
\midrule
\multirow{2}{*}{Grid} & \multirow{2}{*}{$10^3$} & \multirow{2}{*}{$4.4E^{-10}$} & 0.3 &122.29 & -- & -- & -- & -- \\
& & & 0.2 &136.46 & -- & -- & -- & --\\
\midrule
\multirow{3}{*}{BRP} & \multirow{3}{*}{$10^3$} & \multirow{3}{*}{0.035} & 0.1 &-- & -- & -- & -- & -- \\
& & & 0.01 &18.52 & 56.55 & 594.89 & -- & 722.38 \\
& & & 0.005 &1.36 & 11.68 & 238.09 & -- & -- \\
\midrule
\multirow{4}{*}{ZeroConf} & \multirow{4}{*}{$10^4$} & \multirow{4}{*}{0.5} & 0.9 &-- & -- & -- & 0.58 & 0.51 \\
& & & 0.75 & -- & -- & -- & 0.55 & 0.46 \\
& & & 0.52 & -- & -- & -- & 0.48 & 0.46 \\
& & & 0.45 &$<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 \\
\midrule
\multirow{4}{*}{Chain} & \multirow{4}{*}{$10^3$} & \multirow{4}{*}{0.394} & 0.9 & -- & 72.37 & -- & 0.91 & 0.70 \\
& & & 0.4 & -- & 80.83 & -- & 0.93 & -- \\
& & & 0.35 & 177.12 & 115.98 & -- & -- & -- \\
& & & 0.3 & 88.27 & 66.89 & 557.68 & -- & -- \\
\midrule
\multirow{4}{*}{DoubleChain} & \multirow{4}{*}{$10^3$} & \multirow{4}{*}{0.215} & 0.9 & -- & -- & -- & 1.83 & 1.99 \\
& & & 0.3 & -- & -- & -- & 1.88 & 1.96 \\
& & & 0.216 & -- & -- & -- & 139.76 & -- \\
& & & 0.15 & 7.46 & -- & -- & -- & -- \\
\bottomrule
\end{tabular}
\end{table}
\subsection{LT-PDR for Markov Decision Processes} \label{exp:mdp}
We compared the performance of $\textbf{PDR}^{\textbf{IB-MDP}}$ described in \S\ref{sec:mdp} and PrIC3~\cite{BatzJKKMS20}.
In this experiments, we conducted the experiments on Docker by limiting the available memory to 10GB.
We used the IBSPs presented in the PrIC3 paper~\cite{BatzJKKMS20} and bundled with the PrIC3 implementation.\footnote{\url{https://github.com/moves-rwth/PrIC3.git}} as benchmarks.
Each benchmark comes with its own $\alpha$ and $\lambda$ in Definition~\ref{def:IBSPMDP}.
\todoil{Add explanation of each model.}
Both PrIC3 and our implementation reduces an execution of \textbf{Decide} to a linear programming (LP) problem.
PrIC3 uses Z3 to solve an LP problem; our implementation uses GLPK.
PrIC3 uses heuristics called \emph{interpolation generalization} in verification.
PrIC3 internally represents an MDP using a symbolic transition system.
Our implementation uses a primitive transition system to represent an MDP.
\todoil{Remark that the table is excerpt; the complete table is in the appendix.}
Table~\ref{table:expMDP} shows the result.
PrIC3 uses four interpolation generalization methods---\emph{none}, \emph{linear}, \emph{polynomial}, and \emph{hybrid}; Table~\ref{expMDP} presents execution time of each method in the columns none, lin., pol., hyb., respectively.
We can observe that LT-PDR outperforms PrIC3 for benchmarks with small state spaces; the trend becomes the opposite as state spaces become larger.
The failure of LT-PDR in many benchmark instances can be attributed to our current choice of a generalization method---it is the closest to the linear one for PrIC3.
Table~\ref{expMDP} suggests that the use of \emph{polynomial} or \emph{hybrid} can enhance the performance.
\todoil{Possibilities for performance improvement.}
\subsection{LT-PDR for Kripke Structures} \label{ex:kripke}
\begin{table}[htp]
\caption{$\textbf{PDR}^{\textbf{F-Kr}}$ vs.~IC3ref. Timeout is set to 600s.}
\label{table:kripke}
\centering
\begin{tabular}{cccc}
\toprule
Benchmark & Result & $\textbf{PDR}^{\textbf{F-Kr}}$ & IC3ref \\
\midrule
latch0.smv & True & \SI{317}{\micro s} & \SI{270}{\micro s} \\
counter.smv & False & \SI{1.620}{s} & \SI{3.27}{ms} \\
power2bit8.smv & True & \SI{1.516}{s} & \SI{4.13}{ms} \\
ndista128.smv & True & TO & \SI{73.1}{ms} \\
shift1add256.smv & True & TO & \SI{174}{ms} \\
\bottomrule
\end{tabular}
\end{table}
We compared the performance of the instantiation $\textbf{PDR}^{\textbf{F-Kr}}$ of LT-PDR to Kripke structure described in \S\ref{sec:p_monad} with IC3ref, the reference implementation of IC3.\footnote{\url{https://github.com/arbrad/IC3ref}}
We used the benchmarks used in HWMCC'15 competition\footnote{\label{ft:hwmcc}\url{http://fmv.jku.at/hwmcc15/}} with two models \texttt{latch0.smv}\footnote{\url{https://github.com/arminbiere/aiger}} and \texttt{counter.smv}.
In each model, we tried to verify that an unsafe state is unreachable.
Table~\ref{table:kripke} shows the result.
We observe that IC3ref outperforms LT-PDR by magnitudes.
This is hardly a surprise: IC3ref was developed aimed at superior performance itself, while LT-PDR's emphasis is on its theoretical simplicity, abstraction and genericity.
We nevertheless observe that LT-PDR does solve some benchmarks of substantial size.
For example, \texttt{power2bit8.smv} is a benchmark from the HWMCC'15 competition\footnote{\url{http://fmv.jku.at/hwmcc15/}} whose state is represented by $15$ Boolean values; hence, the model consists of $2^{15}$ states.
This demonstrates the practical potential of LT-PDR, especially in view of the following improvement opportunities.
Possibilities for performance improvement:
\begin{itemize}
\item We currently use a \texttt{toysolver}\footnote{https://github.com/msakai/toysolver/} for SAT solving because of its good interface. Use of other well-developed SAT solvers such as Z3 can improve performance.
\item There are many small possible improvements that can nevertheless impact the performance a lot. For example, our current handling of propositional formulas is based on CNF---perhaps overly so, which may be hampering the performance.
\item IC3ref implements a technique in~\cite{EenMB11} that allows $|C_i| > 1$. Incorporating it in LT-PDR is possible, and it may improve the performance.
\end{itemize}
\subsection{Ablation Studies}
\begin{table}[htp]
\caption{LT-PDR vs.~positive LT-PDR vs.~negative LT-PDR. Timeout is set to 600s.}
\label{table:ablation}
\centering
\begin{tabular}{ccccc}
\toprule
Benchmark & Result & LT-PDR & positive & negative \\
\midrule
latch0.smv & True & \SI{317}{\micro s} & \SI{1.68}{ms} & TO \\
power2bit8.smv & True & \SI{1.516}{s} & TO & TO \\
counter.smv & False & \SI{1.620}{s} & TO & \SI{2.88}{\micro s} \\
simpleTrans.smv & False & \SI{295}{\micro s} & TO & TO \\
\bottomrule
\end{tabular}
\end{table}
We conducted experiments to assess the value of the theoretical core of PDR (in our opinion), namely the \emph{positive-negative interplay} between the Knaster--Tarski and Kleene theorems (\S\ref{sec:int}).
For this purpose, we implemented positive LT-PDR (\S\ref{sec:pos}) and negative LT-PDR (\S\ref{sec:neg}).
For ablation studies, we would compare LT-PDR with the parallel execution of positive and negative LT-PDRs (without their interplay); however, we compared LT-PDR with positive LT-PDR for those whose answer is true; we used negative LT-PDR for the other benchmarks for simplicity.
Benchmarks are the same as \S\ref{ex:kripke} except for the micro benchmark \texttt{simpleTrans.smv}, which contains only $8$ states.
Table~\ref{table:ablation} shows the result.
The value of the positive-negative interplay is already theoretically developed e.g.\ in Prop.~\ref{prop:C_X}, namely that it detects executions that lead to nowhere.
This is experimentally witnessed: see \texttt{power2bit8.smv} and \texttt{simpleTrans.smv}, where the one-sided methods made wrong choices and timed out.
One-sided methods can be quite efficient when they are lucky: see \texttt{counter.smv}, where the first candidate Kleene sequence happened to be conclusive.
LT-PDR is slower because of the overhead of its positive side, but we would say that's a price worth well paying.
\subsection{Discussions}
Overall, we observed that our generic and abstract algorithm---derived purely from lattice-theoretic considerations---is amenable to implementation that is generic and abstract as well (here we exploit the language features of Haskell).
Serious performance analysis, fine-tuning, and performance comparison with other tools is out of the scope of the current paper.
Nevertheless, we saw that the generic implementation has at least reasonable performance, even in its original shape (i.e.\ without fine-tuning for enhanced performance).
As we discussed in the above, we see a number of concrete opportunities for further performance improvement for different concrete instances (example1, example2, etc.).
\todoil{Fill the above.}
\todoil{below is Kohel's original writing}
We conducted experiments with the implementation described in Section~\ref{sec:implementation}.
The purpose of the experiments are (1) to compare the performance of the current implementation with the existing PDR-based verifiers and (2) to analyze which and how features of LT-PDR affect the performance.
\todoil{We decide whether to use the following paragraph or not later.}
Notice that the aim of the current study is a generic PDR, from which the instantiations to various domains are obtained, and hence, our purpose is \emph{not} to outperform the existing verifiers.
The main purpose of our experiments is instead to confirm that the PDR instantiated from LT-PDR indeed works and to analyze the features that affect the performance, which we expect will serve further studies aimed at making our procedure faster in the future.
All the experiments are conducted under the following environment.
\todoil{Write the environment.}
To compare the performance of LT-PDR with
\end{auxproof}
\begin{auxproof}
\section{Conclusions and Future Work}
We have presented a lattice-theoretic generalization of the PDR
algorithm called LT-PDR. This involves the decomposition of the PDR
algorithm into positive and negative ones, which are tightly connected
to the Knaster--Tarski and Kleene fixed point theorems, respectively. We then combined it
with the coalgebraic and fibrational theory for modeling transition
systems with predicates. We instantiated it with several transition
systems, deriving existing PDR algorithms as well as a new one over
Markov reward models.
We leave instantiating our LT-PDR and categorical safety problems to
derive other PDR-like algorithms, such as PDR for hybrid
systems~\cite{SuenagaI20}, for future work. We also plan to
implement the LT-PDR algorithm---use of generic programming will allow it.
\end{auxproof}
\begin{auxproof}
\paragraph*{Acknowledgement}
The authors are supported by
ERATO HASUO Metamathematics for Systems Design Project
(No.~JPMJER1603), JST.
KS is supported by JST CREST Grant (No.~JPMJCR2012), Japan and JSPS KAKENHI Grant (No.~19H04084).
\end{auxproof}
\bibliographystyle{splncs04}
This paper is a sequel to \cite{Diao/Lan/Liu/Zhu:lrhrv}, in which a $p$-adic Riemann--Hilbert functor was constructed as an analogue of Deligne's Riemann--Hilbert correspondence over $\mathbb{C}$ \Ref{see \cite{Deligne:1970-EDR}}. We refer to \cite{Diao/Lan/Liu/Zhu:lrhrv} for the general introduction and backgrounds. In this paper, we further investigate the properties of the $p$-adic Riemann--Hilbert functor. We establish the de Rham comparison isomorphisms for the cohomology with compact support under the $p$-adic Riemann--Hilbert correspondences, and show that they are compatible with duality. In particular, we obtain the following theorem \Ref{see Theorems \ref{thm-comp-alg-dR-cpt} and \ref{thm-comp-alg-dR-int} for more complete statements}:
\begin{thm}\label{thm-intro}
Let $U$ be a $d$-dimensional smooth algebraic variety over a finite extension $k$ of $\mathbb{Q}_p$, and let $\mathbb{L}$ be a de Rham $p$-adic \'etale local system on $U$. Then there is a canonical comparison isomorphism
\begin{equation}\label{eq-thm-intro}
H^i_{\et, \cpt}(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} B_\dR \cong H^i_{\dR, \cpt}\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k B_\dR
\end{equation}
compatible with the canonical filtrations and the actions of $\Gal(\AC{k} / k)$ on both sides. Here $D_\dR^\alg$ is the \Pth{above-mentioned} $p$-adic Riemann--Hilbert functor constructed in \cite{Diao/Lan/Liu/Zhu:lrhrv}, and $H^i_{\et, \cpt}$ \Pth{\resp $H^i_{\dR, \cpt}$} denotes the usual \'etale \Pth{\resp de Rham} cohomology with compact support.
In addition, the above comparison isomorphism \Refeq{\ref{eq-thm-intro}} is compatible with the one in \cite[\aThm 1.1]{Diao/Lan/Liu/Zhu:lrhrv} \Pth{for varying $\mathbb{L}$} in the following sense:
\begin{enumerate}
\item\label{enum-compat-int} The following diagram
\[
\xymatrix{ {H^i_{\et, \cpt}(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} B_\dR} \ar^-\sim[r] \ar[d] & {H^i_{\dR, \cpt}\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k B_\dR} \ar[d] \\
{H^i_\et(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} B_\dR} \ar^-\sim[r] & {H^i_\dR\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k B_\dR} }
\]
is commutative, where the horizontal isomorphisms are the comparison isomorphisms, and where the vertical morphisms are the canonical ones. The vertical morphisms are strictly compatible with the filtrations.
\item\label{enum-compat-dual} The following diagram
\[
\xymatrix{ {H^i_{\et, \cpt}(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} B_\dR} \ar_-\wr[d] \ar^-\sim[r] & {H^i_{\dR, \cpt}\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k B_\dR} \ar^-\wr[d] \\
{\dual{\Bigl(H^{2d - i}_\et\bigl(U_{\AC{k}}, \dual{\mathbb{L}}(d)\bigr) \otimes_{\mathbb{Q}_p} B_\dR\Bigr)}} \ar^-\sim[r] & {\dual{\Bigl(H^{2d - i}_\dR\bigl(U, D_\dR^\alg(\dual{\mathbb{L}}(d))\bigr) \otimes_k B_\dR\Bigr)}} }
\]
is commutative, where the horizontal isomorphisms are given by the comparison isomorphisms \Pth{and where the duals are with respect to the base field $B_\dR$}, and where the vertical isomorphisms are given by the usual Poincar\'e duality of \'etale and de Rham cohomology.
\end{enumerate}
\end{thm}
Although it might seem that a comparison isomorphism as in \Refeq{\ref{eq-thm-intro}} could be easily constructed using the comparison isomorphism in \cite[\aThm 1.1]{Diao/Lan/Liu/Zhu:lrhrv} and the Poincar\'e duality for the \'etale and de Rham cohomology of algebraic varieties, in which case the compatibility \Refeq{\ref{enum-compat-dual}} would be tautological, the compatibility \Refeq{\ref{enum-compat-int}} would not be clear. Therefore, we need a different approach. We shall first prove such a comparison theorem for \Pth{appropriately defined} cohomology with compact support in the rigid analytic setting \Ref{see Theorems \ref{thm-L-!-coh-comp} and \ref{thm-int-coh-comp}}, using the log Riemann--Hilbert correspondence introduced in \cite{Diao/Lan/Liu/Zhu:lrhrv} and further developed in this paper, and show that the comparison isomorphisms indeed satisfy the desired compatibilities \Refeq{\ref{enum-compat-int}} and \Refeq{\ref{enum-compat-dual}}. After that, we obtain the comparison theorem in the algebraic setting by GAGA \cite{Kopf:1974-efava} and the comparison results in \cite{Huber:1996-ERA}.
Given the general theory developed in \cite{Diao/Lan/Liu/Zhu:lrhrv}, the main new ingredient is the definition of a period sheaf that works for the cohomology with compact support. To give a flavor of what it looks like, consider the simplest situation where $U$ is a smooth rigid analytic variety that admits a smooth compactification $X$ such that $U = X - D$ for some smooth divisor $D$. Then we equip $X$ with the log structure defined by $D$ \Ref{as in \cite[\aEx \logadicexlogadicspncd]{Diao/Lan/Liu/Zhu:laske}}, and equip $D$ with the pullback of the log structure of $X$ along the closed immersion $\imath: D \to X$ \Ref{as in \cite[\aEx \logadicexlogadicspncdstrictclimm]{Diao/Lan/Liu/Zhu:laske}}. We emphasize that the log structure of $D$ is nontrivial, and that it is crucial to equip $D$ with such a log structure. For this reason, we denote $D$ with this nontrivial log structure by $D^\partial$. Then the \Qtn{correct} period sheaf for our purpose is the sheaf
\[
\ker\bigl(\OBdlX{X} \to \imath_{\proket, *}(\OBdlX{D^\partial})\bigr)
\]
on $X_\proket$, the pro-Kummer \'etale site of $X$, where $\OBdlX{X}$ and $\OBdlX{D^\partial}$ are the period sheaves on $X_\proket$ and $D^\partial_\proket$, respectively, as in \cite[\aDef \logRHdefOBdl]{Diao/Lan/Liu/Zhu:lrhrv}. Note that this is \emph{not} the same as the naive $!$-pushforward to $X_\proket$ of the period sheaf on $U_\proet$. Once the period sheaf is constructed, the remaining arguments follow similar strategies as in \cite{Diao/Lan/Liu/Zhu:lrhrv}, sometimes with generalizations.
As an application of Theorem \ref{thm-intro}, we obtain a version of Poincar\'e duality for the \Pth{rational} $p$-adic \'etale cohomology of smooth rigid analytic varieties \Ref{see Theorem \ref{thm-trace-et} for more complete statements}:
\begin{thm}\label{thm-intro-trace-et}
Suppose that $U$ is a smooth rigid analytic variety over $k$ of pure dimension $d$ that is of the form $U = X - Z$, where $X$ is a proper rigid analytic variety over $k$, and where $Z$ is a closed rigid analytic subvariety of $X$. Then there is a canonical trace morphism
\[
t_\et: H_{\et, \cpt}^{2d}\bigl(U_{\AC{k}}, \mathbb{Q}_p(d)\bigr) \to \mathbb{Q}_p,
\]
whose formation is compatible with the excision and Gysin isomorphisms defined by complements of smooth divisors. In addition, for each $\mathbb{Z}_p$-local system $\mathbb{L}$ on $U_\et$ \Pth{which is not necessarily de Rham}, with $\mathbb{L}_{\mathbb{Q}_p} := \mathbb{L} \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$, we have a canonical perfect pairing
\[
H^i_{\et, \cpt}(U_{\AC{k}}, \mathbb{L}_{\mathbb{Q}_p}) \otimes_{\mathbb{Q}_p} H^{2d - i}_\et\bigl(U_{\AC{k}}, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to \mathbb{Q}_p,
\]
which we call the \emph{Poincar\'e duality pairing}, defined by pre-composing $t_\et$ with the cup product pairing $H^i_{\et, \cpt}(U_{\AC{k}}, \mathbb{L}_{\mathbb{Q}_p}) \otimes_{\mathbb{Q}_p} H^{2d - i}_\et\bigl(U_{\AC{k}}, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to H_{\et, \cpt}^{2d}\bigl(U_{\AC{k}}, \mathbb{Q}_p(d)\bigr)$.
\end{thm}
We refer to Definition \ref{def-H-c} for our definition of the $p$-adic \'etale cohomology with compact support for rigid analytic varieties over $k$. We remark that the Poincar\'e duality we obtained is, essentially by construction, compatible with all the de Rham comparison isomorphisms in \cite{Scholze:2013-phtra}, \cite{Diao/Lan/Liu/Zhu:lrhrv}, and this article.
We note that the question of Poincar\'e duality for \emph{proper} smooth rigid analytic varieties \Pth{in which case the cohomology with compact support coincides with the usual cohomology} was raised earlier by Scholze in \cite{Scholze:2013-phtra}, and Gabber has announced a proof for such a result using a different method \Ref{see \cite[Appendix to Lecture X, footnote 11]{Scholze/Weinstein:2017-blpag}}. Nevertheless, even in the original proper smooth setting in \cite{Scholze:2013-phtra}, our approach makes essential use of the excision and Gysin isomorphisms defined by complements of smooth divisors, and hence crucially depends on the de Rham comparison results in the nonproper setting in \cite{Diao/Lan/Liu/Zhu:lrhrv} and this paper.
We will also study the cohomology with \emph{partial compact support}, as in \cite[\aSec 4.2]{Deligne/Illusie:1987-rdcdr}, \cite[\aSec III]{Faltings:1989-ccpgr}, and \cite{Faltings:2002-aee}, as well as some generalized interior cohomology, namely the image of a morphism between cohomology groups with different partial compact support conditions, and we will construct de Rham comparison isomorphisms for such cohomology that are also compatible with Poincar\'e duality.
\subsection*{Outline of this paper}
Let us briefly describe the organization of this paper, and highlight the main topics in each section.
In Section \ref{sec-bd}, we work with a rigid analytic variety $U$ that is the open complement in a smooth rigid analytic variety $X$ of a normal crossings divisor $D$ whose intersections of irreducible components define a stratification of $X$ with smooth \Pth{closed} strata, and use such a stratification to study the \'etale cohomology of $U$ with partial compact support. More specifically, in Section \ref{sec-log-str-bd}, we equip the smooth strata as above with several different log structures. In Section \ref{sec-loc-syst-bd}, we study the torsion Kummer \'etale local systems on such strata which are pullbacks of torsion Kummer \'etale local systems on $X$, and prove a form of primitive comparison theorem for $\mathbb{F}_p$-local systems of this kind \Ref{see Proposition \ref{prop-L-J-dir-im-et} and Theorem \ref{thm-prim-comp-bd}}. In Section \ref{sec-ket-coh-cpt}, we define the Kummer \'etale cohomology of torsion local systems over $U$ with partial compact support along some subdivisor $D^\Utext{$\star$-c}$ of $D$, and prove \Pth{using results in Section \ref{sec-loc-syst-bd}} a primitive comparison theorem for such cohomology \Ref{see Theorem \ref{thm-L-!-prim-comp}}. In Section \ref{sec-proket-coh-cpt}, we define the pro-Kummer \'etale cohomology of $\widehat{\mathbb{Z}}_p$-local systems over $U$ with partial compact support along $D^\Utext{$\star$-c}$, and relate it to the Kummer \'etale cohomology. In Section \ref{sec-period-bd}, we introduce some variants supported on the boundary strata \Pth{with log structures pulled back from $X$} of the period sheaves in \cite[\aSec \logRHsecOBdl]{Diao/Lan/Liu/Zhu:lrhrv}, and provide some variants of the Poincar\'e lemma for them.
In Section \ref{sec-dR-comp-cpt}, we generalize the results in \cite[\aSec \logRHseclogRHthm]{Diao/Lan/Liu/Zhu:lrhrv} to the \'etale, de Rham, Higgs, and Hodge cohomology with partial compact support. More specifically, in Section \ref{sec-dR-comp-cpt-main}, we introduce the de Rham, Higgs, and Hodge cohomology with partial compact support, and state the comparison theorem for such cohomology \Ref{see Theorem \ref{thm-L-!-coh-comp}}. In Sections \ref{sec-period-A-inf-B-inf-cpt}, \ref{sec-period-B-dR-cpt}, and \ref{sec-period-OB-dR-cpt}, we introduce more variants of the period sheaves introduced in \cite[\aSec \logRHsecOBdl]{Diao/Lan/Liu/Zhu:lrhrv}, which are useful for studying the pro-Kummer \'etale cohomology with partial compact support by taking limits and by using the primitive comparison theorem established earlier, and prove the Poincar\'e lemma for such variants of period sheaves. In Section \ref{sec-dR-comp-cpt-proof}, we prove the desired comparison theorem, and provide some criteria for cohomology with different partial compact support conditions to be isomorphic to each other.
In Section \ref{sec-trace}, we construct some trace morphisms for the \'etale and de Rham cohomology with compact support, and show that they define Poincar\'e duality pairings for the \'etale and de Rham cohomology with partial compact support that are compatible with the comparison isomorphisms in Section \ref{sec-dR-comp-cpt}. More specifically, in Section \ref{sec-trace-coh}, as a foundation for later constructions, we review the trace morphisms and Serre duality for the coherent cohomology of proper smooth rigid analytic varieties. In Section \ref{sec-trace-dR}, we establish a perfect pairing between Higgs cohomology with complementary partial compact supports \Ref{see Theorem \ref{thm-Higgs-pairing}}; and we construct some trace morphisms for de Rham \Pth{\resp Hodge} cohomology with compact support using the trace morphisms for coherent cohomology, and show that they induce perfect pairings between de Rham \Pth{\resp Hodge} cohomology with complementary partial compact supports, when the coefficients of cohomology are associated with \emph{de Rham} \'etale $\mathbb{Z}_p$-local systems \Ref{see Theorem \ref{thm-trace-dR-Hdg}}. In Section \ref{sec-exc-Gysin}, we show that the \'etale and de Rham excision and Gysin isomorphisms defined by complements of smooth divisors are compatible with the de Rham comparison isomorphisms \Ref{see Propositions \ref{prop-exc-dR-comp} and \ref{prop-Gysin-dR-comp}}. In Section \ref{sec-trace-et}, by using the compatibility results in Section \ref{sec-exc-Gysin}, we construct some trace morphisms for \'etale cohomology with compact support using the trace morphisms for de Rham cohomology constructed in Section \ref{sec-trace-dR}, and show \Pth{by comparison with the above perfect duality for Higgs cohomology} that these trace morphisms induce perfect duality pairings between \'etale cohomology with complementary partial compact supports, when the coefficients are $\mathbb{Q}_p$-base extensions of $\mathbb{Z}_p$-local systems \Ref{see Theorem \ref{thm-trace-et}}, which are compatible with the above perfect duality for de Rham cohomology \Pth{via comparison isomorphisms} when the coefficients are de Rham. In Section \ref{sec-int-coh}, we introduce the notion of generalized interior cohomology, which is the image of a morphism between cohomology with different partial compact support conditions, and deduce from the results in Sections \ref{sec-dR-comp-cpt} and \ref{sec-trace-et} the de Rham comparison and the compatibility with Poincar\'e duality for such cohomology.
In Section \ref{sec-comp-alg}, we deduce the de Rham comparison and the compatibility with Poincar\'e duality for the cohomology with partial compact support and the generalized interior cohomology similarly defined over \emph{algebraic varieties}, by showing that the various constructions are compatible with the analytification functors.
In Section \ref{sec-Sh-var}, we apply the results in Section \ref{sec-comp-alg} to Shimura varieties, in the setting of \cite[\aSec \logRHsecShvar]{Diao/Lan/Liu/Zhu:lrhrv}, and obtain the de Rham comparison and the dual Bernstein--Gelfand--Gelfand \Pth{BGG} decomposition for the cohomology with partial compact support of Shimura varieties with coefficients in the automorphic local systems \Pth{on the \'etale side} and automorphic bundles \Pth{on the de Rham and coherent sides}. As a byproduct, we can compute the Hodge--Tate weights of the \'etale cohomology with partial compact support in terms of the dual BGG decomposition. We also obtained corresponding results for the generalized interior cohomology and, when the coefficients have regular weights, for the intersection cohomology as well.
\subsection*{Acknowledgements}
We would like to thank Shizhang Li and Yoichi Mieda for some helpful discussions, and to thank the Beijing International Center for Mathematical Research, the Morningside Center of Mathematics, the California Institute of Technology, and the Academia Sinica for their hospitality.
\subsection*{Notation and conventions}
We shall follow the notation and conventions of \cite{Diao/Lan/Liu/Zhu:lrhrv}, unless otherwise specified. In particular, we shall denote by $k$ a nonarchimedean local field \Pth{\ie, a field complete with respect to the topology induced by a nontrivial nonarchimedean multiplicative norm $|\cdot|: k \to \mathbb{R}_{\geq 0}$} with residue field $\kappa$ of characteristic $p > 0$, and by $\mathcal{O}_k$ its ring of integers. Since we will be mainly working with rigid analytic varieties, we shall work with $k^+ = \mathcal{O}_k$ and regard rigid analytic varieties over $k$ as adic spaces locally topologically of finite type over $\OP{Spa}(k, \mathcal{O}_k)$ \Ref{as in \cite{Huber:1996-ERA}}. All rigid analytic varieties will be separated. Group cohomology will always mean continuous group cohomology.
\numberwithin{equation}{subsection}
\section{Boundary stratification and cohomology with compact support}\label{sec-bd}
In this section, let $X$ be a smooth rigid analytic variety over $k$, and $D$ a normal crossings divisor \Ref{see \cite[\aEx \logRHexlogadicspncd]{Diao/Lan/Liu/Zhu:lrhrv}} with \Pth{finitely many} irreducible components $\{ D_j \}_{j \in I}$ \Ref{\ie, the images of the connected components of the normalization of $D$, as in \cite{Conrad:1999-icrs}} satisfying the condition that all the intersections
\begin{equation*}\label{eq-XJ}
X_J := \cap_{j \in J} \, D_j,
\end{equation*}
where $J \subset I$, are also \emph{smooth}.
\subsection{Log structures on smooth boundary strata}\label{sec-log-str-bd}
Let us denote by
\[
\imath_J: X_J \to X
\]
the canonical closed immersion of adic spaces. Let
\[
D_J := \cup_{J \subsetneq J' \subset I} \, X_{J'}
\]
\Pth{with its canonical reduced closed subspace structure} and
\[
U_J := X_J - D_J,
\]
as adic spaces. Note that $X_\emptyset = X$, $D_\emptyset = D$, and $U_\emptyset = U$. Then we also have a canonical open immersion of adic spaces
\[
\jmath_J: U_J \to X_J
\]
For any $I^\Utext{$\star$-c} \subset I$, with $I^\Utext{$\star$-nc} := I - I^\Utext{$\star$-c}$, let
\[
D^\Utext{$\star$-c} := \cup_{j \in I^\Utext{$\star$-c}} \, D_j
\]
and
\[
D^\Utext{$\star$-nc} := \cup_{j \in I^\Utext{$\star$-nc}} \, D_j,
\]
\Pth{with their canonical reduced closed subspace structures}, and let $U^\Utext{$\star$-c} := X - D^\Utext{$\star$-c}$ and $U^\Utext{$\star$-nc} := X - D^\Utext{$\star$-nc}$. Let $\jmath_\Utext{$\star$-c}: U \to U^\Utext{$\star$-c}$, $\jmath^\Utext{$\star$-c}: U^\Utext{$\star$-c} \to X$, $\jmath_\Utext{$\star$-nc}: U \to U^\Utext{$\star$-nc}$, and $\jmath^\Utext{$\star$-nc}: U^\Utext{$\star$-nc} \to X$ denote the canonical open immersions of adic spaces. \Ref{In Sections \ref{sec-ket-coh-cpt} and \ref{sec-proket-coh-cpt} below, we will use $\jmath_\Utext{$\star$-c}$ and $\jmath^\Utext{$\star$-c}$ to define the Kummer \'etale and pro-Kummer \'etale cohomology of $U$ with partial compact support along $D^\Utext{$\star$-c}$.}
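To fix ideas, consider the simplest nontrivial configuration of this notation; the choices below are purely illustrative. Suppose that $I = \{ 1, 2 \}$, so that $D = D_1 \cup D_2$, and take $I^\Utext{$\star$-c} = \{ 1 \}$. Then
\[
X_{\{1\}} = D_1, \quad X_{\{2\}} = D_2, \quad X_{\{1, 2\}} = D_1 \cap D_2, \quad U_{\{1\}} = D_1 - (D_1 \cap D_2), \quad U_{\{2\}} = D_2 - (D_1 \cap D_2),
\]
while $D^\Utext{$\star$-c} = D_1$, $D^\Utext{$\star$-nc} = D_2$, $U^\Utext{$\star$-c} = X - D_1$, and $U^\Utext{$\star$-nc} = X - D_2$; the cohomology introduced in Sections \ref{sec-ket-coh-cpt} and \ref{sec-proket-coh-cpt} below is then compactly supported along $D_1$ but not along $D_2$.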
We shall view $X$ as a log adic space by equipping it with the log structure $\alpha_X: \mathcal{M}_X \to \mathcal{O}_X$ defined by $D$ as in \cite[\aEx \logRHexlogadicspncd]{Diao/Lan/Liu/Zhu:lrhrv}, together with a canonical morphism of sites
\[
\varepsilon_\et: X_\ket \to X_\et.
\]
For each $J \subset I$, the smooth rigid analytic variety $X_J$ can be equipped with several natural log structures:
\begin{itemize}
\item the trivial log structure $\alpha_{X_J}^{\Utext{triv}}: \mathcal{M}_{X_J}^{\Utext{triv}} = \mathcal{O}_{X_{J, \et}}^\times \to \mathcal{O}_{X_{J, \et}}$;
\item the log structure $\alpha_{X_J}^{\Utext{std}}: \mathcal{M}_{X_J}^{\Utext{std}} \to \mathcal{O}_{X_{J, \et}}$ defined by the normal crossings divisor $D_J$ as in \cite[\aEx \logRHexlogadicspncd]{Diao/Lan/Liu/Zhu:lrhrv}; and
\item the log structure associated with the pre-log structure $\imath_J^{-1}(\mathcal{M}_X) \to \mathcal{O}_{X_{J, \et}}$ induced by the composition of $\alpha_X$ and $\imath_J^\#: \mathcal{O}_{X_\et} \to \imath_{J, *}(\mathcal{O}_{X_{J, \et}})$, which we shall denote by $\alpha_{X_J}^\partial: \mathcal{M}_{X_J}^\partial \to \mathcal{O}_{X_{J, \et}}$.
\end{itemize}
By abuse of notation, we shall denote these log adic spaces by $X_J^\times$, $X_J$, and $X_J^\partial$, respectively. For the sake of clarity, let us introduce the following:
\begin{defn}\label{def-imm}
We say that a morphism of log adic spaces is a \emph{closed immersion} \Pth{\resp an open immersion} if it is \emph{strict} as in \cite[\aDef \logadicdeflogstr(7)]{Diao/Lan/Liu/Zhu:laske}---\ie, if the log structure on the source space is canonically isomorphic to the pullback as above \Pth{\resp the restriction} of the one on the target space---and if the underlying morphism of adic spaces is a closed immersion \Pth{\resp an open immersion}.
\end{defn}
\begin{rk}\label{rem-cl-imm}
Definition \ref{def-imm} is more restrictive than the one in \cite[\aDef \logadicdefimm]{Diao/Lan/Liu/Zhu:laske}, where closed immersions that are not necessarily strict were also introduced. However, we do not need such a generality in this paper.
\end{rk}
As explained in \cite[\aEx \logadicexlogadicspncdstrictclimm]{Diao/Lan/Liu/Zhu:laske}, we have the following commutative diagram of canonical morphisms between log adic spaces
\[
\xymatrix{ {U_J^\partial} \ar[d]_-{\varepsilon_J^\partial|_{U_J^\partial}} \ar[r]^-{\jmath_J^\partial} & {X_J^\partial} \ar[d]_-{\varepsilon_J^\partial} \ar[r]^-{\imath_J^\partial} & {X} \\
{U_J} \ar[r]^-{\jmath_J} & {X_J} }
\]
in which $\jmath_J^\partial$ and $\jmath_J$ are open immersions, $\imath_J^\partial$ is a closed immersion, and the underlying morphisms of adic spaces of $\varepsilon_J^\partial|_{U_J^\partial}$ and $\varepsilon_J^\partial$ are isomorphisms. Moreover, $U_J$ is equipped with the trivial log structure, while $U_J^\partial$ is equipped with the log structure pulled back from $X_J^\partial$ and hence $X$. Note that there is no natural morphism of log adic spaces from $X_J$ to $X$, and this is the main reason to introduce $X_J^\partial$.
For each $a \geq 0$, we define the log adic space
\[
X_{(a)}^\partial := \coprod_{J \subset I^\Utext{$\star$-c}, \, |J| = a} X_J^\partial,
\]
a disjoint union, which admits a canonical finite morphism of log adic spaces
\[
\imath_{(a)}^\partial: X_{(a)}^\partial \to X.
\]
\Pth{Note that the definition of $X_{(a)}^\partial$ only involves the irreducible components of $D^\Utext{$\star$-c}$.}
\begin{rk}\label{rem-ket-local-geom-pattern}
In what follows, we will sometimes use Kummer \'etale localizations $X' \to X$ to reduce the proofs of various statements for torsion local systems to the analogous statements for constant ones, and the assertions to prove will often be equivalent to assertions concerning direct images and direct images with compact support from open complements of closed subspaces of the forms $X_J$ and $D_J$ above, or $D^J := \cup_{j \in I - J} \, D_j$. This is justified because, by \cite[\aProp \logadicpropAbhyankar{} and \aLem \logadiclemAbhyankarbasic]{Diao/Lan/Liu/Zhu:laske}, locally over $X'$, the underlying reduced subspaces of the preimages of the irreducible components of $D$ still form normal crossings divisors of the same pattern.
\end{rk}
We will make use of the following notation and conventions in the remainder of the article. Let $Y$ be a locally noetherian fs log adic space over $k$. Let $\imath: Z \to Y$ be a closed immersion of log adic spaces, and $\jmath: W \to Y$ an open immersion of log adic spaces, over $k$. \Pth{By Definition \ref{def-imm}, this means that the log structures on $Z$ and $W$ are the pullbacks of the log structure on $Y$}. For $? = \an$, $\et$, $\ket$, $\proet$, or $\proket$ \Pth{referring to the analytic, \'etale, Kummer \'etale, pro-\'etale, or pro-Kummer \'etale topology, respectively, on these spaces}, let $(\imath_{?, *}, \imath_?^{-1})$ and $(\jmath_{?, *}, \jmath_?^{-1})$ denote the associated morphisms of topoi. For a sheaf $F$ on $Y_?$, we shall sometimes denote $\imath^{-1}(F)$ \Pth{\resp $\jmath^{-1}(F)$} by $F|_Z$ \Pth{\resp $F|_W$}. Note that $\jmath_?^{-1}$ admits a left adjoint, denoted by $\jmath_{?, !}$, which is an exact functor on the category of abelian sheaves.
\begin{lem}\label{lem-exc-ex-seq}
In the above setting, assume moreover that $W = Y - Z$. Then, for every abelian sheaf $F$ on $Y_?$, where $? = \an$, $\et$, or $\ket$, we have the excision short exact sequence $0 \to \jmath_{?, !} \, \jmath_?^{-1}(F) \to F \to \imath_{?, *} \, \imath_?^{-1}(F) \to 0$. Moreover, the functor $\imath_{?, *}$ \Pth{\resp $\jmath_{?, !}$} from the category of abelian sheaves on $Z$ \Pth{\resp $W$} to the category of abelian sheaves on $Y$ is exact and fully faithful.
\end{lem}
\begin{proof}
See \cite[\aLem \logadiclemexc]{Diao/Lan/Liu/Zhu:laske}.
\end{proof}
\begin{lem}\label{lem-!-resol}
Let $F$ be an abelian sheaf on $X_?$, where $? = \an$, $\et$, or $\ket$. For each $a \geq 0$, let us denote $\imath_{(a), ?}^{-1}(F)$ by $F_{(a)}$, which is a sheaf on $X_{(a), ?}^\partial$. Let us choose any total order of the finite set $I^\Utext{$\star$-c}$, which induces total orders on any subset $J$ of $I^\Utext{$\star$-c}$. Then there is an exact complex
\[
0 \to \jmath_{?, !}(F|_{U^\Utext{$\star$-c}_?}) \to \imath_{(0), ?, *}^\partial(F_{(0)}) \to \imath_{(1), ?, *}^\partial(F_{(1)}) \to \cdots \to \imath_{(a), ?, *}^\partial(F_{(a)}) \to \cdots
\]
over $X_?$, where the morphism $\imath_{(a), ?, *}^\partial(F_{(a)}) \to \imath_{(a + 1), ?, *}^\partial(F_{(a + 1)})$, for each $a \geq 0$, is the direct sum of morphisms $\imath_{J, ?, *}^\partial(F|_{X_J^\partial}) \to \imath_{J', ?, *}^\partial(F|_{X_{J'}^\partial})$ indexed by pairs $(J, J')$ with $J \subsetneq J' \subset I^\Utext{$\star$-c}$, $|J| = a$, and $J' = J \cup \{ j_0 \}$ for some $j_0$; each being the canonical one induced by the closed immersion $X_{J'}^\partial \to X_J^\partial$ multiplied by $(-1)^{|\{ j \in J' : j < j_0 \}|}$. \Pth{This is probably well known, but we included some details, at least, to set up the convention, because such complexes will appear repeatedly in our arguments.}
\end{lem}
\begin{proof}
By Lemma \ref{lem-exc-ex-seq}, it suffices to check the exactness of the complex after pulling back to $U^\Utext{$\star$-c} \subset X$ and to $U_J^\partial \subset X$, for each $J \subset I$ with $J \cap I^\Utext{$\star$-c} \neq \emptyset$ \Pth{these subsets cover $X$}. The resulting complex can be identified with $0 \to F|_{U^\Utext{$\star$-c}_?} \to F|_{U^\Utext{$\star$-c}_?} \to 0$ in the former case, where the morphism in the middle is the identity morphism; and with a complex
\[
0 \to 0 \to (F|_{U_{J, ?}^\partial})^{\binom{a}{0}} \to (F|_{U_{J, ?}^\partial})^{\binom{a}{1}} \to \cdots \to (F|_{U_{J, ?}^\partial})^{\binom{a}{a - 1}} \to (F|_{U_{J, ?}^\partial})^{\binom{a}{a}} \to 0 \to \cdots
\]
in the latter case, where $a = |J \cap I^\Utext{$\star$-c}| \geq 1$ and the exponents are the binomial coefficients \Pth{only the $X_{J''}^\partial$ with $J'' \subset J \cap I^\Utext{$\star$-c}$ meet $U_J^\partial$}. In both cases, the sequences are exact by construction, as desired.
\end{proof}
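For instance, when $I^\Utext{$\star$-c} = \{ 1, 2 \}$ consists of two elements \Pth{ordered by $1 < 2$}, the complex in Lemma \ref{lem-!-resol} takes the explicit form
\[
0 \to \jmath_{?, !}(F|_{U^\Utext{$\star$-c}_?}) \to F \to \imath_{\{1\}, ?, *}^\partial(F|_{X_{\{1\}}^\partial}) \oplus \imath_{\{2\}, ?, *}^\partial(F|_{X_{\{2\}}^\partial}) \to \imath_{\{1, 2\}, ?, *}^\partial(F|_{X_{\{1, 2\}}^\partial}) \to 0,
\]
\Pth{the term for $a = 0$, namely $\imath_{(0), ?, *}^\partial(F_{(0)})$, being simply $F$}, in which the morphism out of $F$ is given by the two canonical restrictions, and the last nontrivial morphism sends $(s_1, s_2)$ to $s_2|_{X_{\{1, 2\}}^\partial} - s_1|_{X_{\{1, 2\}}^\partial}$, in accordance with the sign convention above. We spell this out only for the reader's convenience; it is a direct instantiation of the definitions.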
\subsection{Kummer \'etale local systems on smooth boundary strata}\label{sec-loc-syst-bd}
Let $\mathbb{L}$ be a torsion local system on $X_\ket$. For each $J \subset I$, let
\[
\mathbb{L}_J := \imath_{J, \ket}^{\partial, -1}(\mathbb{L}).
\]
We have the following primitive comparison theorem for $X_{J, \ket}^\partial$ and $\mathbb{L}_J$:
\begin{thm}\label{thm-prim-comp-bd}
Suppose that $k$ is algebraically closed of characteristic zero, $X_J$ is proper, and $\mathbb{L}$ is an $\mathbb{F}_p$-local system. Then there is a natural almost isomorphism
\begin{equation}\label{eq-thm-prim-comp-bd}
H^i\bigl(X_{J, \ket}^\partial, \mathbb{L}_J \bigr) \otimes_{\mathbb{F}_p} (k^+ / p) \Mi H^i\bigl(X_{J, \ket}^\partial, \mathbb{L}_J \otimes_{\mathbb{F}_p} (\mathcal{O}_{X_{J, \ket}^\partial}^+ / p)\bigr)
\end{equation}
of almost finitely generated $k^+$-modules, for each $i \geq 0$. Both $k^+$-modules are almost zero when $i > 2 \dim(X_J) + |J| = 2 \dim(X) - |J|$.
\end{thm}
The remainder of this subsection will be devoted to the proof of Theorem \ref{thm-prim-comp-bd}. Along the way, we will establish several other facts that will be needed in the remainder of this paper. We shall temporarily drop the assumptions that $k$ is algebraically closed, that $X_J$ is proper, and that $\mathbb{L}$ is $p$-torsion. The first step is the following proposition:
\begin{prop}\label{prop-L-J-dir-im-et}
The sheaf $R^i\varepsilon_{J, \ket, *}^\partial( \mathbb{L}_J )$ is a torsion local system on $X_{J, \ket}$, for each $i \geq 0$, and it vanishes when $i > |J|$. Moreover, for each $m \geq 1$, the canonical morphism $R^i\varepsilon_{J, \ket, *}^\partial(\mathbb{L}_J) \to R^i\varepsilon_{J, \ket, *}^\partial(\mathbb{L}_J / m)$ is surjective.
\end{prop}
We need some preparations before presenting the proof of Proposition \ref{prop-L-J-dir-im-et}.
\begin{lem}\label{lem-L-J-op}
Let $D^J = \cup_{j \in I - J} \, D_j$ as before, so that $D_J = X_J \cap D^J$ as subsets of $X$. Let $\widetilde{\jmath}_J: X - D^J \to X$ denote the canonical open immersion of log adic spaces, whose pullback under $\imath_J^\partial: X_J^\partial \to X$ is $\jmath_J^\partial: U_J^\partial \to X_J^\partial$. Let $\jmath^J: X - X_J^\partial \to X$ denote the complementary open immersion. Then the adjunction morphism
\begin{equation}\label{eq-lem-L-J-op}
\jmath_{\ket, !}^J \, \jmath_\ket^{J, -1} \, R\widetilde{\jmath}_{J, \ket, *} \, \widetilde{\jmath}_{J, \ket}^{-1}(\mathbb{L}) \to R\widetilde{\jmath}_{J, \ket, *} \, \widetilde{\jmath}_{J, \ket}^{-1} \, \jmath_{\ket, !}^J \, \jmath_\ket^{J, -1}(\mathbb{L}),
\end{equation}
induced by $\widetilde{\jmath}_{J, \ket}^{-1} \, \jmath_{\ket, !}^J \, \jmath_\ket^{J, -1} \, R\widetilde{\jmath}_{J, \ket, *} \, \widetilde{\jmath}_{J, \ket}^{-1}(\mathbb{L}) \cong \widetilde{\jmath}_{J, \ket}^{-1} \, \jmath_{\ket, !}^J \, \jmath_\ket^{J, -1}(\mathbb{L})$ is an isomorphism.
\end{lem}
\begin{proof}
As explained in Remark \ref{rem-ket-local-geom-pattern}, we may work locally on $X_\ket$, and assume that $\mathbb{L} = \mathbb{Z} / m$ for some integer $m \geq 1$. By the same argument as in the proof of \cite[\aThm 2.72]{Diao/Lan/Liu/Zhu:laske}, it suffices to show that the analogue of \Refeq{\ref{eq-lem-L-J-op}} over $X_\et$ \Pth{with subscripts \Qtn{$\ket$} replaced with \Qtn{$\et$}} is an isomorphism. Since $D$ is a normal crossings divisor \Ref{again, see \cite[\aEx \logRHexlogadicspncd]{Diao/Lan/Liu/Zhu:lrhrv}}, up to \'etale localization, we may reduce \Ref{by \cite[\aProp 2.1.4 and \aThms 3.8.1 and 5.7.2]{Huber:1996-ERA}} to the case of schemes, and assume that $X$ is a fiber product of two varieties $X_1$ and $X_2$ over $k$, with the morphisms $\jmath^J$ and $\widetilde{\jmath}_J$ being pullbacks of some open immersions to $X_1$ and $X_2$, respectively. Then the desired assertion follows from the K\"unneth isomorphisms as in \cite[\aSec 4.2.7]{Beilinson/Bernstein/Deligne/Gabber:2018-FP(2)} \Ref{\Refcf{} \cite[\aLem 4.3.23 and its proof]{Lan/Stroh:2018-csisv}}.
\end{proof}
\begin{rk}\label{rem-!-*-switch}
A similar argument shows that there is a canonical isomorphism
\[
\jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L}|_U) \Mi R\jmath_{\et, *}^\Utext{$\star$-nc} \, \jmath_{\Utext{$\star$-nc}, \et, !}(\mathbb{L}|_U).
\]
\end{rk}
\begin{lem}\label{lem-L-J-pure}
The adjunction morphism
\begin{equation}\label{eq-lem-L-J-pure}
\mathbb{L}_J \to R\jmath_{J, \ket, *}^\partial \, \jmath_{J, \ket}^{\partial, -1}(\mathbb{L}_J)
\end{equation}
is an isomorphism.
\end{lem}
\begin{proof}
Let us retain the setting of Lemma \ref{lem-L-J-op}. By Lemma \ref{lem-exc-ex-seq}, it suffices to apply $\imath_{J, \ket, *}^\partial$ to the morphism \Refeq{\ref{eq-lem-L-J-pure}}, and show that the morphism
\[
\begin{split}
& \imath_{J, \ket, *}^\partial \, \imath_{J, \ket}^{\partial, -1}(\mathbb{L}) \to \imath_{J, \ket, *}^\partial \, R\jmath_{J, \ket, *}^\partial \, \jmath_{J, \ket}^{\partial, -1} \, \imath_{J, \ket}^{\partial, -1}(\mathbb{L}) \\
& \cong R\widetilde{\jmath}_{J, \ket, *} \, (\imath_J^\partial|_{U_J})_{\ket, *} \, \jmath_{J, \ket}^{\partial, -1} \, \imath_{J, \ket}^{\partial, -1}(\mathbb{L}) \cong R\widetilde{\jmath}_{J, \ket, *} \, \widetilde{\jmath}_{J, \ket}^{-1} \, \imath_{J, \ket, *}^\partial \, \imath_{J, \ket}^{\partial, -1}(\mathbb{L}),
\end{split}
\]
which can be identified with the adjunction morphism for the sheaf $\imath_{J, \ket, *}^\partial \, \imath_{J, \ket}^{\partial, -1}(\mathbb{L})$ and the morphism $\widetilde{\jmath}_J$, is an isomorphism. By \cite[\aThm \logadicthmpurity]{Diao/Lan/Liu/Zhu:laske}, the adjunction morphism $\mathbb{L} \to R\widetilde{\jmath}_{J, \ket, *} \, \widetilde{\jmath}_{J, \ket}^{-1}(\mathbb{L})$ is an isomorphism over $X_\ket$. Hence, we have a canonical isomorphism $\jmath_{\ket, !}^J \, \jmath_\ket^{J, -1}(\mathbb{L}) \Mi \jmath_{\ket, !}^J \, \jmath_\ket^{J, -1} \, R\widetilde{\jmath}_{J, \ket, *} \, \widetilde{\jmath}_{J, \ket}^{-1}(\mathbb{L})$, whose composition with \Refeq{\ref{eq-lem-L-J-op}} is the adjunction morphism for the sheaf $\jmath_{\ket, !}^J \, \jmath_\ket^{J, -1}(\mathbb{L})$ and the morphism $\widetilde{\jmath}_J$. Thus, the desired assertion follows from Lemmas \ref{lem-exc-ex-seq} and \ref{lem-L-J-op}.
\end{proof}
Let $\mathcal{M}_X$ be as in Section \ref{sec-log-str-bd}. By \cite[\aLems \logadiclemclimmketmor{} and \logadiclemkettoetconst]{Diao/Lan/Liu/Zhu:laske}, we have
\begin{equation}\label{eq-L-J-dir-im-et-J-log-str}
R^i(\varepsilon_J^\partial|_{U_J^\partial})_{\et, *}(\mathbb{Z} / n) \cong \bigl(\Ex^i (\overline{\mathcal{M}}_X^\gp / n)\bigr)(-i)|_{U_J}
\end{equation}
over $U_{J, \et}$, for each $n \in \mathbb{Z}_{\geq 1}$. Now we are ready for the following:
\begin{proof}[Proof of Proposition \ref{prop-L-J-dir-im-et}]
By Lemma \ref{lem-L-J-pure}, and by applying \cite[\aThm \logadicthmpurity]{Diao/Lan/Liu/Zhu:laske} to torsion local systems over $X_J$, we may replace $X_J^\partial$ \Pth{\resp $X$} with $U_J^\partial$ \Pth{\resp $X - D^J$}. By \cite[\aLem \logadiclemclimmketmor]{Diao/Lan/Liu/Zhu:laske} and Remark \ref{rem-ket-local-geom-pattern}, and by the same argument as in the proof of \cite[\aThm \logadicthmpurity]{Diao/Lan/Liu/Zhu:laske}, we may work locally on $X_\ket$, and assume that $\mathbb{L} = \mathbb{Z} / n$ for some $n \in \mathbb{Z}_{\geq 1}$. Then Proposition \ref{prop-L-J-dir-im-et} reduces to the isomorphism \Refeq{\ref{eq-L-J-dir-im-et-J-log-str}}, which is clearly compatible with reduction mod $m$ on both sides.
\end{proof}
\begin{cor}\label{cor-L-J-coh-fin}
Let $\mathbb{L}$ be a $\mathbb{Z} / p^m$-local system on $X_\ket$. Then we have the Leray spectral sequence
\begin{equation}\label{eq-cor-L-J-coh-fin}
E_2^{a, b} := H^a\bigl(X_{J, \ket}, R^b\varepsilon_{J, \ket, *}^\partial(\mathbb{L}_J)\bigr) \Rightarrow H^{a + b}(X_{J, \ket}^\partial, \mathbb{L}_J).
\end{equation}
In particular, the $\mathbb{Z} / p^m$-module $H^i(X_{J, \ket}^\partial, \mathbb{L}_J)$ is finitely generated, for any $i \geq 0$ and $m \geq 0$, and vanishes when $i > 2 \dim(X_J) + |J| = 2 \dim(X) - |J|$.
\end{cor}
\begin{proof}
This follows from Proposition \ref{prop-L-J-dir-im-et} and \cite[\aThm \logadicthmprimcomp]{Diao/Lan/Liu/Zhu:laske}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm-prim-comp-bd}]
Consider the Leray spectral sequence
\[
\begin{split}
E_2^{a, b} & := H^a\bigl(X_{J, \ket}, \bigl(R^b\varepsilon_{J, \ket, *}^\partial(\mathbb{L}_J )\bigr) \otimes_{\mathbb{F}_p} (\mathcal{O}_{X_{J, \ket}}^+ / p)\bigr) \\
& \cong H^a\bigl(X_{J, \ket}, R^b\varepsilon_{J, \ket, *}^\partial\bigl(\mathbb{L}_J \otimes_{\mathbb{F}_p} (\mathcal{O}_{X_{J, \ket}^\partial}^+ / p)\bigr)\bigr) \\
& \Rightarrow H^{a + b}\bigl(X_{J, \ket}^\partial, \mathbb{L}_J \otimes_{\mathbb{F}_p} (\mathcal{O}_{X_{J, \ket}^\partial}^+ / p)\bigr),
\end{split}
\]
where the first isomorphism is based on \cite[\aLem \logadiclemketmorOplusp]{Diao/Lan/Liu/Zhu:laske}, which admits a morphism from the following spectral sequence, given by the base change of \Refeq{\ref{eq-cor-L-J-coh-fin}}:
\[
E_2^{a, b} := H^a\bigl(X_{J, \ket}, R^b\varepsilon_{J, \ket, *}^\partial(\mathbb{L}_J)\bigr) \otimes_{\mathbb{F}_p} (k^+ / p) \Rightarrow H^{a + b}(X_{J, \ket}^\partial, {\mathbb{L}}_J) \otimes_{\mathbb{F}_p} (k^+ / p).
\]
By Proposition \ref{prop-L-J-dir-im-et} and \cite[\aThm \logadicthmprimcomp]{Diao/Lan/Liu/Zhu:laske}, this morphism is given by almost isomorphisms between the $E_2$ terms, which are almost finitely generated $k^+$-modules that are almost zero except when $a, b \geq 0$ and $a + b \leq 2 \dim(X_J) + |J| = 2 \dim(X) - |J|$ \Ref{as in Corollary \ref{cor-L-J-coh-fin}}. Thus, the theorem follows.
\end{proof}
\subsection{Kummer \'etale cohomology with partial compact support}\label{sec-ket-coh-cpt}
In this subsection, let us fix $I^\Utext{$\star$-c} \subset I$ and define $U^\Utext{$\star$-c}$, etc., as in Section \ref{sec-log-str-bd}. Let $\mathbb{L}$ be a torsion local system on $X_\ket$ as before. As in Lemma \ref{lem-!-resol}, for each $a \geq 0$, let
\begin{equation}\label{eq-def-L-a}
\mathbb{L}_{(a)} := \imath_{(a), \ket}^{\partial, -1}(\mathbb{L}).
\end{equation}
We shall deduce from Theorem \ref{thm-prim-comp-bd} a finiteness result and a generalization of the \emph{primitive comparison theorem} for the cohomology with compact support. In order to state them, let us first introduce the relevant cohomology groups.
\begin{defn}\label{def-H-c-torsion}
Assume that $k$ is algebraically closed and of characteristic zero, and that $X$ is proper over $k$. For any torsion local system $\mathbb{L}$ on $X_\ket$, we abusively define
\[
H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) := H_\Utext{$\star$-c}^i(U_\et, \mathbb{L}) := H^i\bigl(X_\ket, \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})\bigr).
\]
\Pth{We introduce both $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L})$ and $H_\Utext{$\star$-c}^i(U_\et, \mathbb{L})$ for the sake of flexibility.}
\end{defn}
The following lemma shows that $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L})$ can be interpreted as the cohomology of $\mathbb{L}|_U$ with a partial compact support condition along $D^\Utext{$\star$-c} \subset X$, which justifies our choice of notation.
\begin{lem}\label{lem-L-!}
We have canonical isomorphisms
\begin{equation}\label{eq-L-!}
\jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L}|_U) \Mi \jmath_{\et, !}^\Utext{$\star$-c} \, R\varepsilon_{\et, *}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket}) \Mi R\varepsilon_{\et, *} \, \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})
\end{equation}
\Ref{\Refcf{} Remark \ref{rem-!-*-switch}}. Therefore, if $k$ is algebraically closed and of characteristic zero and $X$ is proper, then we have
\[
H_{\et,\Utext{$\star$-c}}^i(U, \mathbb{L}) \cong H^i\bigl(X_\et, \jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L}|_U)\bigr) \cong H_\cpt^i\bigl(U^\Utext{$\star$-c}_\et, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L}|_U)\bigr).
\]
In particular,
\begin{itemize}
\item if $I^\Utext{$\star$-c} = \emptyset$, then $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) \cong H^i(U_\et, \mathbb{L}|_U)$;
\item if $I^\Utext{$\star$-c} = I$, then $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) \cong H_\cpt^i(U_\et, \mathbb{L}|_U)$,
\end{itemize}
where $H_\cpt^i(U_\et, \mathbb{L}|_U)$ is the \'etale cohomology with compact support of the \'etale local system $\mathbb{L}|_U$ on $U_\et$, as defined in \cite[\aSec 5]{Huber:1996-ERA}.
\end{lem}
\begin{proof}
The first isomorphism in \Refeq{\ref{eq-L-!}} follows from \cite[\aThm \logadicthmpurity]{Diao/Lan/Liu/Zhu:laske} \Pth{and its proof}, and the second isomorphism follows from \cite[\aLem \logadiclemkettoetconst]{Diao/Lan/Liu/Zhu:laske} and the definition of these sheaves. The rest of the lemma follows immediately.
\end{proof}
Now we are ready to state the following primitive comparison theorem for the cohomology with partial compact support \Ref{\Refcf{} the analogous results \cite[\aThm 5.1]{Scholze:2013-phtra} and \cite[\aThm \logadicthmprimcomp]{Diao/Lan/Liu/Zhu:laske} for the usual cohomology}:
\begin{thm}\label{thm-L-!-prim-comp}
Assume that $k$ is algebraically closed and of characteristic zero, that $X$ is proper over $k$, and that $\mathbb{L}$ is an $\mathbb{F}_p$-local system over $X_\ket$. Then:
\begin{enumerate}
\item\label{thm-L-!-prim-comp-fg} $H^i\bigl(X_\ket, \bigl(\jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\mathbb{F}_p} (\mathcal{O}_X^+ / p)\bigr)$ is an almost finitely generated $k^+$-module for each $i \geq 0$, and is almost zero if $i > 2 \dim(X)$.
\item\label{thm-L-!-prim-comp-isom} There is a canonical almost isomorphism
\begin{equation}\label{eq-thm-L-!-prim-comp}
H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) \otimes_{\mathbb{F}_p} (k^+ / p) \Mi H^i\bigl(X_\ket, \bigl(\jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\mathbb{F}_p} (\mathcal{O}_X^+ / p)\bigr)
\end{equation}
of $k^+$-modules, for each $i \geq 0$. In particular, $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L})$ is a finite-dimensional $\mathbb{F}_p$-vector space for each $i \geq 0$, and vanishes for $i > 2 \dim(X)$.
\end{enumerate}
\end{thm}
\begin{proof}
By Lemma \ref{lem-!-resol} and \cite[\aLem \logadiclemclimmOplusp]{Diao/Lan/Liu/Zhu:laske}, we have an exact complex
\begin{equation}\label{eq-thm-L-!-prim-comp-resol}
\begin{split}
0 & \to \bigl(\jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\mathbb{F}_p} (\mathcal{O}_X^+ / p) \\
& \to \imath_{(0), \ket, *}^\partial\bigl(\mathbb{L}_{(0)} \otimes_{\mathbb{F}_p} (\mathcal{O}_{X^\partial_{(0)}}^+ / p)\bigr) \to \imath_{(1), \ket, *}^\partial\bigl(\mathbb{L}_{(1)} \otimes_{\mathbb{F}_p} (\mathcal{O}_{X^\partial_{(1)}}^+ / p)\bigr) \\
& \to \cdots \to \imath_{(a), \ket, *}^\partial\bigl(\mathbb{L}_{(a)} \otimes_{\mathbb{F}_p} (\mathcal{O}_{X^\partial_{(a)}}^+ / p)\bigr) \to \cdots
\end{split}
\end{equation}
over $X_\ket$, which admits a morphism from the complex
\[
0 \to \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket}) \to \imath_{(0), \ket, *}^\partial(\mathbb{L}_{(0)}) \to \imath_{(1), \ket, *}^\partial(\mathbb{L}_{(1)}) \to \cdots \to \imath_{(a), \ket, *}^\partial(\mathbb{L}_{(a)}) \to \cdots
\]
\Ref{as in Lemma \ref{lem-!-resol}}. Therefore, we obtain a \Pth{filtration} spectral sequence
\[
\begin{split}
E_1^{a, b} & := H^{a + b}\bigl(X_{(a), \ket}^\partial, \mathbb{L}_{(a)} \otimes_{\mathbb{F}_p} (\mathcal{O}_{X^\partial_{(a)}}^+ / p)\bigr) \\
& \cong \oplus_{J \subset I^\Utext{$\star$-c}, \, |J| = a} \; H^{a + b}\bigl(X_{J, \ket}^\partial, \mathbb{L}_J \otimes_{\mathbb{F}_p} (\mathcal{O}_{X^\partial_J}^+ / p)\bigr) \\
& \Rightarrow H^{a + b}\bigl(X_\ket, \bigl(\jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\mathbb{F}_p} (\mathcal{O}_X^+ / p)\bigr),
\end{split}
\]
which admits a morphism from the spectral sequence
\[
\begin{split}
E_1^{a, b} & := H^{a + b}(X_{(a), \ket}^\partial, \mathbb{L}_{(a)}) \otimes_{\mathbb{F}_p} (k^+ / p) \\
& \cong \oplus_{J \subset I^\Utext{$\star$-c}, \, |J| = a} \; \bigl(H^{a + b}(X_{J, \ket}^\partial, \mathbb{L}_J) \otimes_{\mathbb{F}_p} (k^+ / p)\bigr) \\
& \Rightarrow H^{a + b}\bigl(X_\ket, \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\mathbb{F}_p} (k^+ / p).
\end{split}
\]
By Theorem \ref{thm-prim-comp-bd}, this morphism is given by almost isomorphisms between the $E_1$ terms, which are almost finitely generated $k^+$-modules that are almost zero except when $a, b \geq 0$ and $a + b \leq 2 \dim(X)$. Thus, the morphism induces the almost isomorphism \Refeq{\ref{eq-thm-L-!-prim-comp}} in \Refeq{\ref{thm-L-!-prim-comp-isom}} and justifies \Refeq{\ref{thm-L-!-prim-comp-fg}}, as desired.
\end{proof}
\subsection{Pro-Kummer \'etale cohomology with partial compact support}\label{sec-proket-coh-cpt}
Recall that a $\mathbb{Z}_p$-local system $\mathbb{L}$ is an inverse system $\{ \mathbb{L}_n \}_{n \geq 1}$, where each $\mathbb{L}_n$ is a $\mathbb{Z} / p^n$-local system, satisfying $\mathbb{L}_m / p^n \cong \mathbb{L}_n$ for all $m \geq n \geq 1$. \Ref{See \cite[\aDef \logadicdefketlisse]{Diao/Lan/Liu/Zhu:laske}.} Since we will need to deal with inverse systems of sheaves on $X_\ket$ such as $\{ \jmath_!(\mathbb{L}_n) \}_{n \geq 1}$, it is convenient to introduce the following definitions:
\begin{defn}\label{def-adic-formalism}
A Kummer \'etale $\mathbb{Z}_p$-sheaf $F$ on a locally noetherian fs log adic space $Y$ \Pth{over $k$} is an inverse system $\{ F_n \}_{n \geq 1}$ of sheaves on $Y_\ket$, where $F_n$ is a $\mathbb{Z} / p^n$-module, for each $n \geq 1$. Let $\Shv_{\mathbb{Z}_p}(Y_\ket)$ denote the abelian category of $\mathbb{Z}_p$-sheaves on $Y_\ket$. If $f: Y' \to Y$ is a morphism between such log adic spaces, let
\[
f_\ket^{-1}: \Shv_{\mathbb{Z}_p}(Y_\ket) \rightleftarrows \Shv_{\mathbb{Z}_p}(Y'_\ket): f_{\ket, *}
\]
be the pair of adjoint functors, namely, the inverse and direct image functors of $\mathbb{Z}_p$-sheaves, given by applying the usual $f_\ket^{-1}$ and $f_{\ket, *}$ \Pth{for torsion sheaves} to each component of the inverse system. If $f = \jmath: W \to Y$ is an open immersion, let
\[
\jmath_{\ket, !}: \Shv_{\mathbb{Z}_p}(W_\ket) \to \Shv_{\mathbb{Z}_p}(Y_\ket)
\]
be the left adjoint of $\jmath_\ket^{-1}$, again given by applying the usual $\jmath_{\ket, !}$ \Pth{for torsion sheaves} to each component of the inverse system.
When $k$ is algebraically closed of characteristic zero, we define the $i$-th cohomology $H^i(Y_\ket, \cdot)$ as the $i$-th right derived functor of the functor
\[
\Shv_{\mathbb{Z}_p}(Y_\ket) \to \Mod_{\mathbb{Z}_p}: \; \{ F_n \}_{n \geq 1} \mapsto \Gamma(Y_\ket, \varprojlim_n F_n) \cong \varprojlim_n \Gamma(Y_\ket, F_n).
\]
\end{defn}
\begin{defn}\label{def-H-c}
Let $X$ be as before. Assume that $k$ is algebraically closed of characteristic zero and that $X$ is proper over $k$. For each $\mathbb{Z}_p$-local system $\mathbb{L}$ on $X_\ket$, we define
\[
H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) := H_\Utext{$\star$-c}^i(U_\et, \mathbb{L}) := H^i\bigl(X_\ket, \jmath_{\ket, !}(\mathbb{L}|_{U^\Utext{$\star$-c}_\ket})\bigr).
\]
\Pth{Again, we introduce both $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L})$ and $H_\Utext{$\star$-c}^i(U_\et, \mathbb{L})$ for the sake of flexibility.}
\end{defn}
\begin{lem}\label{lem-def-H-c-fin-Z-p}
In the setting of Definition \ref{def-H-c}, there is a canonical isomorphism $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) \cong \varprojlim_n H^i_{\et, \Utext{$\star$-c}}(U, \mathbb{L}_n)$ as finite $\mathbb{Z}_p$-modules.
\end{lem}
\begin{proof}
By definition, $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L})$ can be written as
\[
H^i\bigl(R\Gamma\bigl(X_\ket, R\varprojlim_n \, \jmath_{\ket, !}(\mathbb{L}_n|_{U^\Utext{$\star$-c}_\ket})\bigr)\bigr) \cong H^i\bigl(R\varprojlim_n R\Gamma\bigl(X_\ket, \jmath_{\ket, !}(\mathbb{L}_n|_{U^\Utext{$\star$-c}_\ket})\bigr)\bigr).
\]
Since each $H^i_{\et, \Utext{$\star$-c}}(U, \mathbb{L}_n)$ is finite \Pth{under our assumptions} by Theorem \ref{thm-L-!-prim-comp}, the right-hand side is equal to $\varprojlim_n H^i\bigl(R\Gamma\bigl(X_\ket, \jmath_{\ket, !}(\mathbb{L}_n|_{U^\Utext{$\star$-c}_\ket})\bigr)\bigr) \cong \varprojlim_n H^i_{\et, \Utext{$\star$-c}}(U, \mathbb{L}_n)$, which is a finite $\mathbb{Z}_p$-module by standard arguments.
\end{proof}
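For the reader's convenience, the standard arguments at the end of the proof can be made explicit as follows \Pth{this is a general fact about derived inverse limits over $\mathbb{Z}_{\geq 1}$, not specific to our situation}: there is a Milnor exact sequence
\[
0 \to R^1\varprojlim_n H^{i - 1}_{\et, \Utext{$\star$-c}}(U, \mathbb{L}_n) \to H^i\bigl(R\varprojlim_n R\Gamma\bigl(X_\ket, \jmath_{\ket, !}(\mathbb{L}_n|_{U^\Utext{$\star$-c}_\ket})\bigr)\bigr) \to \varprojlim_n H^i_{\et, \Utext{$\star$-c}}(U, \mathbb{L}_n) \to 0,
\]
and the finiteness of the groups $H^{i - 1}_{\et, \Utext{$\star$-c}}(U, \mathbb{L}_n)$ ensures that the inverse system $\{ H^{i - 1}_{\et, \Utext{$\star$-c}}(U, \mathbb{L}_n) \}_{n \geq 1}$ satisfies the Mittag--Leffler condition, so that the $R^1\varprojlim_n$ term vanishes.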
\begin{rk}\label{rem-def-H-c-conv}
When $I^\Utext{$\star$-c} = I$, we have $H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) \cong \varprojlim_n H_\cpt^i(U_\et, \mathbb{L}_n|_U)$, by Lemma \ref{lem-L-!}. It is an intriguing question whether this is isomorphic to the $H_\cpt^i(U_\et, \mathbb{L}|_U)$ as defined in \cite{Huber:1998-ctlac}. \Ref{Recall that, for a $\mathbb{Z}_p$-sheaf $F = \{ F_n \}_{n \geq 1}$ on $U_\et$, where $U$ is partially proper over $k$, the cohomology with compact support $H_\cpt^i(U_\et, F)$ is defined in \cite{Huber:1998-ctlac} as the $i$-th derived functor of the functor $F = \{ F_n \}_{n \geq 1} \mapsto \Gamma_\cpt(U_\et, \varprojlim_n F_n)$, where $\Gamma_\cpt$ is the functor of sections with proper support, as in \cite[\aDef 5.2.1]{Huber:1996-ERA}.} Nevertheless, despite this issue, we shall abusively \emph{define} \Pth{or rather \emph{denote}}
\[
H_\cpt^i(U_\et, \mathbb{L}|_U) := \varprojlim_n H_\cpt^i(U_\et, \mathbb{L}_n|_U).
\]
Again by Lemma \ref{lem-L-!}, when $I^\Utext{$\star$-c} = \emptyset$, which is another extremal case, we have
\[
H_{\et, \Utext{$\star$-c}}^i(U, \mathbb{L}) \cong \varprojlim_n H^i(U_\et, \mathbb{L}_n|_U) \cong H^i(U_\et, \mathbb{L}|_U).
\]
\end{rk}
\begin{rk}\label{rem-def-H-c-alt}
We shall denote the objects defined by any subset $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$ with subscripts \Qtn{$\Utext{$\circ$-c}$}. Then the objects with subscripts \Qtn{$\Utext{$\star$-c}$} admit compatible canonical morphisms to those with subscripts \Qtn{$\Utext{$\circ$-c}$}. We shall also denote with subscripts \Qtn{$\Utext{$\star$-nc}$} the objects defined with the complementary subset $I^\Utext{$\star$-nc} \subset I$ replacing $I^\Utext{$\star$-c} \subset I$. \Pth{This is consistent with the previous definitions of $\jmath_\Utext{$\star$-nc}$ and $\jmath^\Utext{$\star$-nc}$, although we will not explicitly use them.}
\end{rk}
For each locally noetherian fs log adic space $Y$, the pro-Kummer \'etale site $Y_\proket$ was introduced in \cite[\aSec \logadicsecproket]{Diao/Lan/Liu/Zhu:laske}. Let $\nu_Y: Y_\proket \to Y_\ket$ denote the natural projection of sites. \Pth{We shall omit the subscript \Qtn{$Y$} when the context is clear.}
\begin{lem}\label{lem-from-ket-to-proket}
For each morphism $f: Z \to Y$ of locally noetherian fs log adic spaces, $\nu_Z^{-1} \, f^{-1}_\ket \cong f^{-1}_\proket \, \nu_Y^{-1}$. If $f$ is quasi-compact, then $\nu_Y^{-1} \, f_{\ket, *} \cong f_{\proket, *} \, \nu_Z^{-1}$.
\end{lem}
\begin{proof}
The first statement is clear. As for the second, we may assume that $Y$ is affinoid. Let $U = \varprojlim_i U_i$ be any qcqs object of $Y_\proket$. Then $f^{-1}(U) = \varprojlim_i f^{-1}(U_i)$ is a qcqs object in $Z_\proket$. By \cite[\aProp \logadicpropproketvsket]{Diao/Lan/Liu/Zhu:laske}, we have $\bigl(\nu_Y^{-1} \, f_{\ket, *}(F)\bigr)(U) \cong \varinjlim_i F\bigl(f^{-1}(U_i)\bigr) \cong \bigl(\nu_Z^{-1}(F)\bigr)\bigl(f^{-1}(U)\bigr) \cong \bigl(f_{\proket, *} \, \nu_Z^{-1}(F)\bigr)(U)$, as desired.
\end{proof}
\begin{rk}\label{rem-compat-!-bc}
The above basic results in this subsection are compatible with base changes from $k$ to other nonarchimedean local fields.
\end{rk}
Finally, let $\widehat{\mathbb{Z}}_p := \varprojlim_n (\mathbb{Z} / p^n)$, and let $\Shv_{\widehat{\mathbb{Z}}_p}(Y_\proket)$ denote the category of $\widehat{\mathbb{Z}}_p$-sheaves on $Y_\proket$, understood in the naive sense. Then there is a natural functor
\begin{equation}\label{eq-ket-to-proket}
\nu_Y^{-1}: \Shv_{\mathbb{Z}_p}(Y_\ket) \to \Shv_{\widehat{\mathbb{Z}}_p}(Y_\proket): F = \{ F_n \}_{n \geq 1} \mapsto \widehat{F} := \varprojlim_n \bigl(\nu_Y^{-1}(F_n)\bigr).
\end{equation}
\subsection{Period sheaves on the boundary strata}\label{sec-period-bd}
Let us begin with some notational preparation, which will also be used in some later subsections. Consider the perfectoid field $K := \widehat{\AC{k}}$, the $p$-adic completion of some fixed algebraic closure $\AC{k}$ of $k$, with $K^+ = \mathcal{O}_K$. Let $(K^\flat, K^{\flat+})$ denote the tilt of $(K, K^+)$, as usual. As in \cite[\aSec \logRHsecOBdlexplicit]{Diao/Lan/Liu/Zhu:lrhrv}, let $\xi \in A_{\inf} = W(K^{\flat+})$ be given by \cite[\aLem 6.3]{Scholze:2013-phtra}, which generates the kernel of the surjective canonical homomorphism $\theta: A_{\inf} \to K^+$. Let $\varpi \in K^{\flat+}$ be such that $\varpi^\sharp = p$. Then $p^m A_{\inf} / p^{m + 1} A_{\inf} \cong A_{\inf} / p \cong K^{\flat+}$ and $\varpi^n K^{\flat+} / \varpi^{n + 1} K^{\flat+} \cong K^{\flat+} / \varpi \cong K^+ / p$, for all $m , n \geq 0$.
\begin{defn}\label{def-AAinf-etc-bd}
For each $J \subset I$, by applying $\imath_{J, \proket, *}^\partial$ to the sheaves on $X_{J, \proket}^\partial$ defined in \cite[\aDef \logadicdefproketsheaves]{Diao/Lan/Liu/Zhu:laske} and \cite[\aSec \logRHsecOBdl]{Diao/Lan/Liu/Zhu:laske}, we obtain the sheaves $\widehat{\mathcal{O}}^\partial_{X_J^\partial}$, $\widehat{\mathcal{O}}^{+, \partial}_{X_J^\partial}$, $\widehat{\mathcal{O}}^{\flat, \partial}_{X_J^\partial}$, $\widehat{\mathcal{O}}^{\flat+, \partial}_{X_J^\partial}$, $\AAinfX{X_J^\partial}^\partial$, $\BBinfX{X_J^\partial}^\partial$, $\BBdRX{X_J^\partial}^{+, \partial}$, $\BBdRX{X_J^\partial}^\partial$, $\OBdlX{X_J^\partial}^{+, \partial}$, $\OBdlX{X_J^\partial}^\partial$, their filtered pieces, and $\OClX{X_J^\partial}^\partial := \OP{gr}^0 \OBdlX{X_J^\partial}^\partial$, together with the homomorphisms $\theta^\partial: \AAinfX{X_J^\partial}^\partial \to \widehat{\mathcal{O}}^{+, \partial}_{X_J^\partial}$ and $\theta^\partial: \BBinfX{X_J^\partial}^\partial \to \widehat{\mathcal{O}}^\partial_{X_J^\partial}$, over $X_\proket$, denoted with an additional superscript \Qtn{$\partial$}. For each $a \geq 0$, we define similar sheaves $\widehat{\mathcal{O}}^\partial_{X_{(a)}^\partial}$, $\widehat{\mathcal{O}}^{+, \partial}_{X_{(a)}^\partial}$, $\widehat{\mathcal{O}}^{\flat, \partial}_{X_{(a)}^\partial}$, $\widehat{\mathcal{O}}^{\flat+, \partial}_{X_{(a)}^\partial}$, $\AAinfX{X_{(a)}^\partial}^\partial$, $\BBinfX{X_{(a)}^\partial}^\partial$, $\BBdRX{X_{(a)}^\partial}^{+, \partial}$, $\BBdRX{X_{(a)}^\partial}^\partial$, $\OBdlX{X_{(a)}^\partial}^{+, \partial}$, $\OBdlX{X_{(a)}^\partial}^\partial$, their filtered pieces, and $\OClX{X_{(a)}^\partial}^\partial$ over $X_\proket$ by direct sums.
\end{defn}
\begin{lem}\label{lem-O-flat+-cl}
For each $J \subset I$, and for each log affinoid perfectoid object $U = \varprojlim_{i \in I} U_i$ in $X_\proket$ with associated perfectoid space $\widehat{U}$, the pullback of $U$ to $X_{J, \proket}^\partial$ defined by $V = \varprojlim_{i \in I} (U_i \times_X X_J^\partial)$ is a log affinoid perfectoid object in $X_{J, \proket}^\partial$, with an associated perfectoid space $\widehat{V}$ and a closed immersion $\widehat{V} \to \widehat{U}$ of adic spaces compatible with $\imath_J^\partial$ \Pth{but is generally \emph{not} the pullback of $\imath_J^\partial$ under $\widehat{U} \to X$}. Suppose that $\widehat{V} = \OP{Spa}(\overline{R}, \overline{R}^+)$ for some perfectoid $(\overline{R}, \overline{R}^+)$ with tilt $(\overline{R}^\flat, \overline{R}^{\flat+})$. Then:
\begin{enumerate}
\item $\bigl(\widehat{\mathcal{O}}^\partial_{X_{J, \proket}^\partial}(U), \widehat{\mathcal{O}}^{+, \partial}_{X_{J, \proket}^\partial}(U)\bigr) \cong \bigl(\widehat{\mathcal{O}}_{X_{J, \proket}^\partial}(V), \widehat{\mathcal{O}}^+_{X_{J, \proket}^\partial}(V)\bigr) \cong \bigl(\overline{R}, \overline{R}^+\bigr)$;
\item $\bigl(\widehat{\mathcal{O}}^{\flat, \partial}_{X_{J, \proket}^\partial}(U), \widehat{\mathcal{O}}^{\flat+, \partial}_{X_{J, \proket}^\partial}(U)\bigr) \cong \bigl(\widehat{\mathcal{O}}^\flat_{X_{J, \proket}^\partial}(V), \widehat{\mathcal{O}}^{\flat+}_{X_{J, \proket}^\partial}(V)\bigr) \cong \bigl(\overline{R}^\flat, \overline{R}^{\flat+}\bigr)$;
\end{enumerate}
\end{lem}
\begin{proof}
These follow from \cite[\aLem \logadiclemlogaffperfclimm{} and \aThm \logadicthmalmostvanhat]{Diao/Lan/Liu/Zhu:laske}.
\end{proof}
\begin{cor}\label{cor-O-flat+-cl}
For each $J \subset I$, let $\mathcal{F}$ be one of the following sheaves on $X_{J, \proket}^\partial$: $\widehat{\mathcal{O}}_{X_J^\partial}$, $\widehat{\mathcal{O}}^+_{X_J^\partial}$, $\widehat{\mathcal{O}}^\flat_{X_J^\partial}$, $\widehat{\mathcal{O}}^{\flat+}_{X_J^\partial}$, $\AAinfX{X_J^\partial}$, $\BBinfX{X_J^\partial}$, $\BBdRpX{X_J^\partial}$, and $\BBdRX{X_J^\partial}$. Then the canonical morphisms $\imath_\proket^{-1}(\mathcal{F}^\partial) = \imath_\proket^{-1} \, \imath_{\proket, *}(\mathcal{F}) \to \mathcal{F}$ and $\mathcal{F}^\partial \to \imath_{\proket, *} \, \imath_\proket^{-1}(\mathcal{F}^\partial)$ defined by adjunction are isomorphisms. If $U$ and $V$ are as in Lemma \ref{lem-O-flat+-cl}, then $\mathcal{F}^\partial(U) \cong \mathcal{F}(V)$. Moreover, we have the following:
\begin{enumerate}
\item $\AAinfX{X_J^\partial}^\partial \cong W(\widehat{\mathcal{O}}^{\flat+, \partial}_{X_{J, \proket}^\partial})$ and $\BBinfX{X_J^\partial}^\partial \cong \AAinfX{X_J^\partial}^\partial[\frac{1}{p}]$.
\item The kernels of $\theta^\partial: \AAinfX{X_J^\partial}^\partial \to \widehat{\mathcal{O}}^{+, \partial}_{X_J^\partial}$ and $\theta^\partial: \BBinfX{X_J^\partial}^\partial \to \widehat{\mathcal{O}}^\partial_{X_J^\partial}$ are locally principal over $X_\proket$, and are generated by the above $\xi$ over $X_{K, \proket}$.
\item $\BBdRX{X_J^\partial}^{+, \partial} \cong \varprojlim_r (\BBinfX{X_J^\partial}^\partial / \xi^r)$ and $\BBdRX{X_J^\partial}^\partial \cong \BBdRX{X_J^\partial}^{+, \partial}[\frac{1}{\xi}]$, where $\xi$ is any local generator of $\ker \theta^\partial$, which can be the above $\xi$ over $X_{K, \proket}$.
\end{enumerate}
\end{cor}
\begin{proof}
The assertions for $\widehat{\mathcal{O}}_{X_J^\partial}$, $\widehat{\mathcal{O}}^+_{X_J^\partial}$, $\widehat{\mathcal{O}}^\flat_{X_J^\partial}$, and $\widehat{\mathcal{O}}^{\flat+}_{X_J^\partial}$ follow from Lemma \ref{lem-O-flat+-cl}. Since $\mathcal{F}^\partial = \imath_{\proket, *}(\mathcal{F})$, and since $\imath_{\proket, *}$ is compatible with limits and colimits \Ref{by \cite[\aProp \logadicpropproketsiteqcqs]{Diao/Lan/Liu/Zhu:laske}}, the remaining assertions also follow.
\end{proof}
\begin{lem}\label{lem-property-AAinf-bd}
Over ${X_\proket}_{/X_K}$, we have the following, for each $J \subset I$:
\begin{enumerate}
\item\label{lem-property-AAinf-bd-mod-pi} $\AAinfX{X_J^\partial}^\partial / (p, [\varpi]) \cong \widehat{\mathcal{O}}^{+, \partial}_{X_{J, \proket}^\partial} / p \cong \imath_{\proket,*}(\mathcal{O}^+_{X_{J, \proket}^\partial} / p)$.
\item\label{lem-property-AAinf-bd-coh} For every log affinoid perfectoid object $U$ in ${X_\proket}_{/X_K}$ and all $m, n \geq 1$, the $K^+$-module $H^j\bigl(U_\proket, \AAinfX{X_J^\partial}^\partial / (p^m, [\varpi^n])\bigr)$ is almost zero, when $j > 0$; and is almost isomorphic to $\AAinfX{X_J^\partial}^\partial(U) / (p^m, [\varpi^n])$, when $j = 0$.
\item\label{lem-property-AAinf-bd-lim} $\AAinfX{X_J^\partial}^\partial \cong \varprojlim_{m, n} \bigl(\AAinfX{X_J^\partial}^\partial / (p^m, [\varpi^n])\bigr)$; and $R^j\varprojlim_{m, n} \bigl(\AAinfX{X_J^\partial}^\partial / (p^m, [\varpi^n])\bigr)$ is almost zero, for all $j > 0$.
\end{enumerate}
\end{lem}
\begin{proof}
The assertion \Refeq{\ref{lem-property-AAinf-bd-mod-pi}} follows from Lemmas \ref{lem-from-ket-to-proket} and \ref{lem-O-flat+-cl}, and Definition \ref{def-AAinf-etc-bd}. Since $H^j(U_\proket, \widehat{\mathcal{O}}^{+, \partial}_{X_{J, \proket}^\partial} / p^m) \cong H^j(V_\proket, \widehat{\mathcal{O}}^+_{X_{J, \proket}^\partial} / p^m)$, where $\widehat{V}$ is the perfectoid space associated with the pullback of $U$ as in Lemma \ref{lem-from-ket-to-proket}, the assertion \Refeq{\ref{lem-property-AAinf-bd-coh}} follows \Pth{by induction} from \cite[\aProp 7.13]{Scholze:2012-ps} and \cite[\aThm \logadicthmalmostvanhat]{Diao/Lan/Liu/Zhu:laske}. Finally, the assertion \Refeq{\ref{lem-property-AAinf-bd-lim}} follows from \cite[\aLem 3.18]{Scholze:2013-phtra}, \cite[\aProp \logadicproplogaffperfbasis]{Diao/Lan/Liu/Zhu:laske}, and the previous two assertions.
\end{proof}
Essentially by definition, and by Lemmas \ref{lem-from-ket-to-proket} and \ref{lem-O-flat+-cl}, we have the following lemmas:
\begin{lem}\label{lem-AAinf-bd-mor}
For $J' \subset J \subset I$, we have canonical morphisms $\widehat{\mathcal{O}}^\partial_{X_J^\partial} \to \widehat{\mathcal{O}}^\partial_{X_{J'}^\partial}$, $\widehat{\mathcal{O}}^{+, \partial}_{X_J^\partial} \to \widehat{\mathcal{O}}^{+, \partial}_{X_{J'}^\partial}$, $\widehat{\mathcal{O}}^{\flat, \partial}_{X_J^\partial} \to \widehat{\mathcal{O}}^{\flat, \partial}_{X_{J'}^\partial}$, $\widehat{\mathcal{O}}^{\flat+, \partial}_{X_J^\partial} \to \widehat{\mathcal{O}}^{\flat+, \partial}_{X_{J'}^\partial}$, $\AAinfX{X_J^\partial}^\partial \to \AAinfX{X_{J'}^\partial}^\partial$, $\BBinfX{X_J^\partial}^\partial \to \BBinfX{X_{J'}^\partial}^\partial$, $\BBdRX{X_J^\partial}^{+, \partial} \to \BBdRX{X_{J'}^\partial}^{+, \partial}$, and $\BBdRX{X_J^\partial}^\partial \to \BBdRX{X_{J'}^\partial}^\partial$ over $X_\proket$. For $a \geq a' \geq 0$, we have similar morphisms for the analogous sheaves for $X_{(a), \proket}^\partial$ and $X_{(a'), \proket}^\partial$.
\end{lem}
\begin{lem}\label{lem-BdR-bd-gr}
For each $J \subset I$, both $\BBdRX{X_J^\partial}^{+, \partial}$ and $\BBdRX{X_J^\partial}^\partial$ admit filtrations induced by powers of $\ker(\theta^\partial: \BBinfX{X_J^\partial}^\partial \to \widehat{\mathcal{O}}^\partial_{X_J^\partial})$, which are also induced by those of $\BBdRpX{X}$ and $\BBdRX{X}$. Over $X_{K, \proket}$, the filtrations are given by multiplication by powers of $\xi$, and induce canonical isomorphisms $\OP{gr}^r \BBdRX{X_J^\partial}^{+, \partial} \cong \widehat{\mathcal{O}}^\partial_{X_J^\partial}(r)$, for $r \geq 0$; and $\OP{gr}^r \BBdRX{X_J^\partial}^\partial \cong \widehat{\mathcal{O}}^\partial_{X_J^\partial}(r)$, for all $r \in \mathbb{Z}$, where $(r)$ denotes Tate twists as usual. For each $a \geq 0$, we have similar facts for $\BBdRX{X_{(a)}^\partial}^{+, \partial}$ and $\BBdRX{X_{(a)}^\partial}^\partial$.
\end{lem}
\begin{lem}\label{lem-OBdRp-bd-loc}
Let us temporarily assume that $X$ is affinoid and admits a toric chart $X \to \mathbb{D}_k^n := \OP{Spa}(k\Talg{T_1, \ldots, T_n}, k^+\Talg{T_1, \ldots, T_n})$ as in \cite[\aSec \logRHsecOBdlexplicit]{Diao/Lan/Liu/Zhu:lrhrv}, with $D$ defined by $\{ T_1 \cdots T_n = 0 \}$. Let $\widetilde{X} \to X$ denote the log affinoid perfectoid object defined in \cite[\aSec \logRHsecOBdlexplicit]{Diao/Lan/Liu/Zhu:lrhrv}, with associated perfectoid space $\widehat{\widetilde{X}}$, and let $\xi$ be the same element introduced there. Suppose that $X_J^\partial$ is defined by $\{ T_1 = \cdots = T_a = 0 \}$. Then, for each $U \in {X_\proket}_{/\widetilde{X}}$, we have a canonical surjective morphism
\begin{equation}\label{eq-lem-OBdRp-bd-loc-BdR}
\BBdRpX{X}(U) \Surj \BBdRX{X_J^\partial}^{+, \partial}(U)
\end{equation}
inducing, for each $r \geq 1$, an isomorphism
\begin{equation}\label{eq-lem-OBdRp-bd-loc-BdR-mod-xi-r}
\bigl(\BBdRpX{X}(U) \big/ \xi^r\bigr) \big/ ([T_1^{s\flat}], \ldots, [T_a^{s\flat}])^\wedge_{s \in \mathbb{Q}_{> 0}} \Mi \bigl(\BBdRX{X_J^\partial}^{+, \partial}(U) \big/ \xi^r\bigr),
\end{equation}
where $([T_1^{s\flat}], \ldots, [T_a^{s\flat}])^\wedge_{s \in \mathbb{Q}_{> 0}}$ denotes the $p$-adic completion of the ideal generated by $\{ [T_1^{s\flat}], \ldots, [T_a^{s\flat}] \}_{s \in \mathbb{Q}_{> 0}}$; and we have a canonical $\BBdRX{X_J^\partial}^{+, \partial}|_{\widetilde{X}}$-linear isomorphism
\begin{equation}\label{eq-lem-OBdRp-bd-loc-OBdR-bd}
\OBdlX{X_J^\partial}^{+, \partial}|_{\widetilde{X}} \cong \BBdRX{X_J^\partial}^{+, \partial}|_{\widetilde{X}}[[y_1, \ldots, y_n]],
\end{equation}
compatible with the canonical $\BBdRpX{X}|_{\widetilde{X}}$-linear isomorphism
\begin{equation}\label{eq-lem-OBdRp-bd-loc-OBdR}
\OBdlpX{X}|_{\widetilde{X}} \cong \BBdRpX{X}|_{\widetilde{X}}[[y_1, \ldots, y_n]],
\end{equation}
in \cite[\aProp \logRHpropOBdlploc]{Diao/Lan/Liu/Zhu:lrhrv}.
\end{lem}
\begin{proof}
Combine Corollary \ref{cor-O-flat+-cl} and \cite[\aCor \logRHcorOBdRplocclimm]{Diao/Lan/Liu/Zhu:lrhrv}.
\end{proof}
\begin{cor}\label{cor-OBdRp-cplx-bd}
For each $a \geq 0$, we have an exact complex
\begin{equation}\label{eq-cor-OBdRp-cplx-bd}
\begin{split}
0 \to \BBdRX{X_{(a)}^\partial}^{+, \partial} \to \OBdlX{X_{(a)}^\partial}^{+, \partial} & \Mapn{\nabla} \OBdlX{X_{(a)}^\partial}^{+, \partial} \otimes_{\mathcal{O}_{X_\proket}} \Omega^{\log, 1}_X \\
& \Mapn{\nabla} \OBdlX{X_{(a)}^\partial}^{+, \partial} \otimes_{\mathcal{O}_{X_\proket}} \Omega^{\log, 2}_X \to \cdots
\end{split}
\end{equation}
over $X_\proket$. The statement also holds with $\BBdRX{X_{(a)}^\partial}^{+, \partial}$ and $\OBdlX{X_{(a)}^\partial}^{+, \partial}$ replaced with $\BBdRX{X_{(a)}^\partial}^\partial$ and $\OBdlX{X_{(a)}^\partial}^\partial$, respectively.
\end{cor}
\begin{proof}
Combine Lemma \ref{lem-OBdRp-bd-loc}, \cite[\aCor \logRHcorlogdRcplx]{Diao/Lan/Liu/Zhu:lrhrv}, and \cite[\aEx \logadicexlogdiffsheafncd]{Diao/Lan/Liu/Zhu:laske}.
\end{proof}
\begin{cor}\label{cor-OBdRp-cplx-bd-strict}
Both the canonical morphisms $\OBdlpX{X} \to \OBdlX{X_J^\partial}^{+, \partial}$ and $\OBdlX{X} \to \OBdlX{X_J^\partial}^\partial$ are strictly compatible with the filtrations on both sides.
\end{cor}
\begin{proof}
The assertion for $\OBdlpX{X} \to \OBdlX{X_J^\partial}^{+, \partial}$, which is local in nature, follows from Lemma \ref{lem-OBdRp-bd-loc}. Then the assertion for $\OBdlX{X} \to \OBdlX{X_J^\partial}^\partial$ follows from the definition of both sides as limits with respect to their \Pth{strictly compatible} filtrations, as in \cite[\aDef \logRHdefOBdl]{Diao/Lan/Liu/Zhu:lrhrv}.
\end{proof}
\section{Comparison theorems for cohomology with compact support}\label{sec-dR-comp-cpt}
\subsection{Statements of main results}\label{sec-dR-comp-cpt-main}
In this section, we shall retain the setting of Section \ref{sec-bd}, but assume that $k$ is a finite extension of $\mathbb{Q}_p$ and that $X$ is \emph{proper} over $k$. As in Section \ref{sec-period-bd}, let $K = \widehat{\AC{k}}$ and $K^+ = \mathcal{O}_K$, and let $\xi \in A_{\inf} = W(K^{\flat+})$ be given by \cite[\aLem 6.3]{Scholze:2013-phtra}. Let $\mathbb{L}$ be a $\mathbb{Z}_p$-local system on $X_\ket$, as in \cite[\aDef \logadicdefketlisse]{Diao/Lan/Liu/Zhu:laske}. Fix $I^\Utext{$\star$-c} \subset I$ as before. As usual, we shall denote by $(-D^\Utext{$\star$-c})$ the tensor product with \Pth{pullbacks of} the invertible ideal defining the divisor $D^\Utext{$\star$-c} \subset X$.
We will freely use the notation and constructions in \cite[\aSec \logRHseclogRH]{Diao/Lan/Liu/Zhu:lrhrv}. In particular, we have the ringed spaces $\mathcal{X}^+ = (X_\an, \mathcal{O}_X \widehat{\otimes} B_\dR^+)$ and $\mathcal{X} = (X_\an, \mathcal{O}_X \widehat{\otimes} B_\dR)$, as in \cite[(\logRHeqdefcX)]{Diao/Lan/Liu/Zhu:lrhrv}. Moreover, we have the notions of \emph{log connections and their log de Rham complexes} on $\mathcal{X}$ and on $X$, and of \emph{log Higgs bundles and their log Higgs complexes} on $X_K$, as in \cite[\aDef \logRHdeflogconnetc]{Diao/Lan/Liu/Zhu:lrhrv}. Note that, given a log connection or a log Higgs bundle, its tensor product with \Pth{the pullback of} the invertible ideal defining $D^\Utext{$\star$-c} \subset X$ is still a log connection or a log Higgs bundle.
\begin{defn}\label{def-dR-Hi-Hdg-coh-cpt}
For a log connection $\mathcal{E}$ on $\mathcal{X}$, we define
\begin{equation}\label{eq-def-dR-coh-cpt-geom}
H_{\dR, \Utext{$\star$-c}}^i(\mathcal{U}, \mathcal{E}) := H^i\bigl(\mathcal{X}, \DRl\bigl(\mathcal{E}(-D^\Utext{$\star$-c})\bigr)\bigr).
\end{equation}
Similarly, for a log connection $E$ on $X$, we define
\begin{equation}\label{eq-def-dR-coh-cpt}
H_{\dR, \Utext{$\star$-c}}^i(U_\an, E) := H^i\bigl(X_\an, \DRl\bigl(E(-D^\Utext{$\star$-c})\bigr)\bigr).
\end{equation}
For a log Higgs bundle $E$ on $X_K$, we define
\begin{equation}\label{eq-def-Hi-coh-cpt}
H_{\Hi, \Utext{$\star$-c}}^i(U_{K, \an}, E) := H^i\bigl(X_{K, \an}, \Hil\bigl(E(-D^\Utext{$\star$-c})\bigr)\bigr).
\end{equation}
Finally, for a log connection $E$ on $X$ equipped with a decreasing filtration by coherent subsheaves $\Fil^\bullet E$ satisfying the \Pth{usual} Griffiths transversality condition, we define
\begin{equation}\label{eq-def-Hdg-coh-cpt}
H_{\Hdg, \Utext{$\star$-c}}^{a, i - a}(U_\an, E) := H^i\bigl(X_\an, \OP{gr}^a \DRl\bigl(E(-D^\Utext{$\star$-c})\bigr)\bigr).
\end{equation}
Then there is also the Hodge--de Rham spectral sequence
\begin{equation}\label{eq-def-Hdg-dR-coh-cpt-spec-seq}
E_1^{a, i - a} := H_{\Hdg, \Utext{$\star$-c}}^{a, i - a}(U_\an, E) \Rightarrow H_{\dR, \Utext{$\star$-c}}^i(U_\an, E).
\end{equation}
When the eigenvalues of the residues of $\mathcal{E}$ and $E$ \Pth{along the irreducible components of $D$} are in $\mathbb{Q} \cap [0, 1)$, in which case $\mathcal{E}$ and $E$ are the \emph{canonical extensions} of $\mathcal{E}|_\mathcal{U}$ and $E|_U$, respectively \Ref{see the discussion in \cite[\aCh 1, \aSec 4]{Andre/Baldassarri:2001-DDA}}, we shall also write $H_{\dR, \Utext{$\star$-c}}^i(\mathcal{U}, \mathcal{E}|_\mathcal{U})$, $H_{\dR, \Utext{$\star$-c}}^i(U_\an, E|_U)$, and $H_{\Hdg, \Utext{$\star$-c}}^i(U_\an, E|_U)$, when the meaning of such notation is clear in the context. In particular, for each $\mathbb{Z}_p$-local system $\mathbb{L}$, we shall write $H_{\dR, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \RH(\mathbb{L})\bigr)$, $H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$, and $H_{\Hdg, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$, which is justified by \cite[\aThms \logRHthmlogRHgeom(\logRHthmlogRHgeomres) and \logRHthmlogRHarith(\logRHthmlogRHarithres)]{Diao/Lan/Liu/Zhu:lrhrv}. We shall also abusively write $H_{\Hi, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \Hc(\mathbb{L})\bigr)$ instead of $H_{\Hi, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \Hl(\mathbb{L})\bigr)$. \Pth{For simplicity, we shall denote $\mathbb{L}|_U$ simply by $\mathbb{L}$, when applying $\RH(\,\cdot\,)$ etc.\ to it.}
\end{defn}
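For orientation \Pth{this special case is recorded only as an illustration}, suppose that $E = \mathcal{O}_X$ with the trivial log connection $d$ and the trivial filtration, and that $I^\Utext{$\star$-c} = I$, so that $D^\Utext{$\star$-c} = D$. Then \Refeq{\ref{eq-def-dR-coh-cpt}} unwinds to
\[
H_{\dR, \Utext{$\star$-c}}^i(U_\an, \mathcal{O}_X) = H^i\bigl(X_\an, \mathcal{O}_X(-D) \otimes_{\mathcal{O}_X} \Omega^{\log, \bullet}_X\bigr),
\]
the hypercohomology of the log de Rham complex twisted by the ideal of $D$, which is the familiar complex computing compactly supported de Rham cohomology in the classical settings; with the conventions of Remark \ref{rem-def-dR-Hi-Hdg-coh-cpt-conv} below, this is denoted $H_{\dR, \cpt}^i(U_\an, \mathcal{O}_X)$.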
\begin{rk}\label{rem-def-dR-Hi-Hdg-coh-cpt-abuse}
The definitions above are rather serious abuses of notation, because, a priori, they do depend on $\mathcal{E}$ and $E$ over the \emph{whole} $X_\an$. Nevertheless, we will mainly apply the definitions to $\mathcal{E} = \RHl(\mathbb{L})$, $E = \Hl(\mathbb{L})$, and $E = D_{\dR, \log}(\mathbb{L})$, for $\mathbb{Z}_p$-local systems $\mathbb{L}$. Since the eigenvalues of the residues of $\RHl(\mathbb{L})$ and $D_{\dR, \log}(\mathbb{L})$ are in $\mathbb{Q} \cap [0, 1)$, the definition of their de Rham cohomology \Pth{with support conditions} is compatible with their analogues in the complex analytic setting using canonical extensions, as in \cite[II, 6]{Deligne:1970-EDR} and \cite[\aSec 2.11 and \aCor 2.12]{Esnault/Viehweg:1992-LVT-B}.
\end{rk}
\begin{rk}\label{rem-def-dR-Hi-Hdg-coh-cpt-conv}
If $I^\Utext{$\star$-c} = I$ and hence $D^\Utext{$\star$-c} = D$ in the above, we shall abusively denote $H_{\dR, \Utext{$\star$-c}}^i(U_\an, E)$ by $H_{\dR, \cpt}^i(U_\an, E)$. If $I^\Utext{$\star$-c} = \emptyset$ and hence $D^\Utext{$\star$-c} = \emptyset$, we shall abusively denote $H_{\dR, \Utext{$\star$-c}}^i(U_\an, E)$ by $H_\dR^i(U_\an, E)$, even though $H_\dR^i(U_\an, E)$ is defined using $E$ over the whole compactification $X$. Nevertheless, for simplicity, we shall still write $H_{\dR, \cpt}^i(U_\an, E|_U)$ and $H_\dR^i(U_\an, E|_U)$, as in the last part of Definition \ref{def-dR-Hi-Hdg-coh-cpt}, when the meaning is clear in the context. This abusive choice of notation is consistent with our previous choice for the \'etale cohomology \Ref{see Remark \ref{rem-def-H-c-conv}}. We shall use similar notation for the other cohomology in Definition \ref{def-dR-Hi-Hdg-coh-cpt}.
\end{rk}
\begin{rk}[{\Refcf{} Remark \ref{rem-def-H-c-alt}}]\label{rem-def-dR-Hi-Hdg-coh-cpt-alt}
We shall denote the objects defined by any subset $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$ with subscripts \Qtn{$\Utext{$\circ$-c}$}. Then the objects with subscripts \Qtn{$\Utext{$\star$-c}$} admit compatible canonical morphisms to those with subscripts \Qtn{$\Utext{$\circ$-c}$}. Also, we shall denote with subscripts \Qtn{$\Utext{$\star$-nc}$} the objects defined with the complementary subset $I^\Utext{$\star$-nc} \subset I$ replacing $I^\Utext{$\star$-c} \subset I$.
\end{rk}
The main result of this section is the following:
\begin{thm}\label{thm-L-!-coh-comp}
For each $i \geq 0$, we have a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-thm-L-!-coh-comp-dR-RH}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR \cong H_{\dR, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \RH(\mathbb{L})\bigr),
\end{equation}
compatible with the filtrations on both sides, and also \Pth{by taking $\OP{gr}^0$} a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-thm-L-!-coh-comp-Hi}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} K \cong H_{\Hi, \Utext{$\star$-c}}^i\bigl(U_{K, \an}, \Hc(\mathbb{L})\bigr).
\end{equation}
Suppose that $\mathbb{L}|_U$ is a \emph{de Rham} $\mathbb{Z}_p$-local system over $U_\et$. Then we also have a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-thm-L-!-coh-comp-dR}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR \cong H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k B_\dR,
\end{equation}
compatible with the filtrations on both sides, and also \Pth{by taking $\OP{gr}^0$} a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-thm-L-!-coh-comp-Hdg}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} K \cong \oplus_{a + b = i} \, \Bigl(H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k K(-a)\Bigr).
\end{equation}
Moreover, the Hodge--de Rham spectral sequence
\begin{equation}\label{eq-thm-L-!-coh-comp-spec-seq}
E_1^{a, b} := H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \Rightarrow H_{\dR, \Utext{$\star$-c}}^{a + b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr)
\end{equation}
degenerates on the $E_1$ page.
\end{thm}
The proof of Theorem \ref{thm-L-!-coh-comp} will be carried out in the following subsections. We shall freely use the notation introduced in Section \ref{sec-period-bd}. For simplicity, the pullbacks of various sheaves from $X_\proket$ to $X_{K, \proket}$ will be denoted by the same symbols.
\subsection{Period sheaves {$\mathbb{A}_{\inf}^\Utext{$\star$-c}$} and {$\mathbb{B}_{\inf}^\Utext{$\star$-c}$}}\label{sec-period-A-inf-B-inf-cpt}
\begin{defn}\label{def-AAinf-cpt}
Let
\[
\AAinfX{X}^\Utext{$\star$-c} := \ker(\AAinfX{X_{(0)}^\partial}^\partial \to \AAinfX{X_{(1)}^\partial}^\partial)
\]
\Ref{see Lemma \ref{lem-AAinf-bd-mor}}, and
\[
\BBinfX{X}^\Utext{$\star$-c} := \AAinfX{X}^\Utext{$\star$-c}[\tfrac{1}{p}] \cong \AAinfX{X}^\Utext{$\star$-c} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathbb{Q}}_p.
\]
We shall omit the subscripts \Qtn{$X$} when the context is clear.
\end{defn}
\begin{rk}\label{rem-def-AAinf-cpt}
By definition, we have $\AAinfX{X_{(0)}^\partial}^\partial \cong \AAinfX{X}$, $\BBinfX{X_{(0)}^\partial}^\partial \cong \BBinfX{X} \cong \AAinfX{X}[\frac{1}{p}] \cong \AAinfX{X} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathbb{Q}}_p$, and $\BBinfX{X}^\Utext{$\star$-c} \cong \ker(\BBinfX{X_{(0)}^\partial}^\partial \to \BBinfX{X_{(1)}^\partial}^\partial)$. Moreover, we could have defined $\AAinfX{X}^\Utext{$\star$-c}$ as a derived limit as in \Refeq{\ref{eq-cplx-AAinf-deg-zero}} below \Pth{with $\widehat{\mathbb{L}} = \widehat{\mathbb{Z}}_p$ there}, without using the boundary stratification.
\end{rk}
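To unwind Definition \ref{def-AAinf-cpt} in the simplest situation \Pth{included only as an illustration, and assuming, as the role of $|I^\Utext{$\star$-c}|$ in the proofs below suggests, that the boundary strata $X_{(a)}^\partial$ relevant here are indexed by the $a$-element subsets of $I^\Utext{$\star$-c}$}, suppose that $I^\Utext{$\star$-c} = \{ j \}$ consists of a single element. Then the definition reads
\[
\AAinfX{X}^\Utext{$\star$-c} = \ker\bigl(\AAinfX{X} \to \imath_{\{ j \}, \proket, *}^\partial(\AAinfX{X_{\{ j \}}^\partial})\bigr),
\]
so that $\AAinfX{X}^\Utext{$\star$-c}$ is the subsheaf of sections of $\AAinfX{X}$ whose pullback to the boundary stratum vanishes, in analogy with the extension by zero $\jmath_{\ket, !}^\Utext{$\star$-c}$ appearing in Lemma \ref{lem-cplx-AAinf} below.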
\begin{lem}\label{lem-fil-AAinf-cpt}
Both $\AAinfX{X}^\Utext{$\star$-c}$ and $\BBinfX{X}^\Utext{$\star$-c}$ are equipped with filtrations induced by those of $\AAinfX{X}$ and $\BBinfX{X}$, respectively. Over $X_{K, \proket}$, they agree with the filtrations defined more directly by multiplication by powers of $\xi$, where $\xi$ is as in Section \ref{sec-period-bd}, and we have compatible canonical isomorphisms $\AAinfX{X}^\Utext{$\star$-c} \otimes_{A_{\inf}} (A_{\inf} / \xi^r) \Mi \AAinfX{X}^\Utext{$\star$-c} / \xi^r$ and $\BBinfX{X}^\Utext{$\star$-c} \otimes_{B_{\inf}} (B_{\inf} / \xi^r) \Mi \BBinfX{X}^\Utext{$\star$-c} / \xi^r$, for each $r \geq 1$.
\end{lem}
\begin{proof}
By Definition \ref{def-AAinf-cpt} and Remark \ref{rem-def-AAinf-cpt}, $\AAinfX{X}^\Utext{$\star$-c}$ and $\BBinfX{X}^\Utext{$\star$-c}$ are subsheaves of $\AAinfX{X}$ and $\BBinfX{X}$, respectively, and the first assertion follows. Over $X_{K, \proket}$, by \cite[\aLem 6.3]{Scholze:2013-phtra}, the filtrations on $\AAinfX{X}$ and $\BBinfX{X}$ are defined by multiplication by powers of $\xi$, and the same is true for all the sheaves $\AAinfX{X_J^\partial}^\partial$ and $\BBinfX{X_J^\partial}^\partial$ \Ref{see Definition \ref{def-AAinf-etc-bd} and Corollary \ref{cor-O-flat+-cl}}. Hence, $\xi^r$ acts with zero kernels on $\AAinfX{X}^\Utext{$\star$-c}$ and $\BBinfX{X}^\Utext{$\star$-c}$, for each $r \geq 1$, and the second assertion also follows.
\end{proof}
The goal of this subsection is to prove the following:
\begin{prop}\label{prop-coh-AAinf-cpt}
We have a canonical $\Gal(\AC{k} / k)$-equivariant almost isomorphism
\begin{equation}\label{eq-prop-coh-AAinf-cpt-comp-Ainf}
H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} A_{\inf} \cong H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \AAinfX{X}^\Utext{$\star$-c}\bigr),
\end{equation}
which induces by inverting $p$ a canonical $\Gal(\AC{k} / k)$-equivariant almost isomorphism
\begin{equation}\label{eq-prop-coh-AAinf-cpt-comp-Binf}
H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_{\inf} \cong H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c}\bigr).
\end{equation}
Moreover, the isomorphism \Refeq{\ref{eq-prop-coh-AAinf-cpt-comp-Binf}} is compatible with the filtrations defined by multiplication by powers of $\xi$ \Ref{\Refcf{} Lemma \ref{lem-fil-AAinf-cpt}}; and, for all $r \geq 1$, we have compatible canonical $\Gal(\AC{k} / k)$-equivariant isomorphisms
\begin{equation}\label{eq-prop-coh-AAinf-cpt-comp-Binf-gr}
H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} (B_{\inf} / \xi^r) \cong H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr).
\end{equation}
\end{prop}
Let $\varpi \in K^{\flat+}$ be as in Section \ref{sec-period-bd}, so that $\varpi^\sharp = p$. We begin with the following consequence of the primitive comparison isomorphism \Ref{see Theorem \ref{thm-L-!-prim-comp}}:
\begin{lem}\label{lem-coh-Ainf}
For each $i \geq 0$ and all $m, n \geq 1$, we have canonical $\Gal(\AC{k} / k)$-equivariant almost isomorphisms
\begin{equation}\label{eq-lem-coh-Ainf}
\begin{split}
& H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} A_{\inf} \Mi R\varprojlim_{m, n}\Bigl(H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}_m) \otimes_{\mathbb{Z}_p} \bigl(A_{\inf} / (p^m, [\varpi^n])\bigr) \Bigr) \\
& \Mi H^i\Bigl(X_{K, \proket}, R\varprojlim_{m, n} \Bigl(\bigl(\nu_X^{-1} \, \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}_m|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X} / (p^m, [\varpi^n])\bigr)\Bigr)\Bigr).
\end{split}
\end{equation}
\end{lem}
\begin{proof}
By Lemma \ref{lem-def-H-c-fin-Z-p}, $H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \cong \varprojlim_m H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}_m)$ is a finitely generated $\mathbb{Z}_p$-module, and $H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) / p^m \Mi H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}_m)$ for all sufficiently large $m$. Therefore, since $A_{\inf} \cong \varprojlim_m (A_{\inf} / p^m) \cong \varprojlim_{m, n} \bigl(A_{\inf} / (p^m, [\varpi^n])\bigr)$, we obtain
\[
\begin{split}
& H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} A_{\inf} \Mi R\varprojlim_m \bigl(H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} (A_{\inf} / p^m)\bigr) \\
& \Mi R\varprojlim_m \bigl(H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}_m) \otimes_{\mathbb{Z}_p} (A_{\inf} / p^m)\bigr) \\
& \Mi R\varprojlim_{m, n} \Bigl(H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}_m) \otimes_{\mathbb{Z}_p} \bigl(A_{\inf}/ (p^m, [\varpi^n])\bigr)\Bigr)
\end{split}
\]
\Pth{with vanishing higher limits}, whose composition is the first almost isomorphism in \Refeq{\ref{eq-lem-coh-Ainf}}. By using the almost isomorphism \Refeq{\ref{eq-thm-L-!-prim-comp}} in Theorem \ref{thm-L-!-prim-comp}, by Lemma
\ref{lem-property-AAinf-bd}\Refenum{\ref{lem-property-AAinf-bd-mod-pi}}, and by the same inductive argument as in the proof of \cite[\aThm 8.4]{Scholze:2013-phtra}, we obtain the second almost isomorphism in \Refeq{\ref{eq-lem-coh-Ainf}}. By their very constructions, both the almost isomorphisms in \Refeq{\ref{eq-lem-coh-Ainf}} are canonical and independent of the choices, and hence $\Gal(\AC{k} / k)$-equivariant, as desired.
\end{proof}
\begin{lem}\label{lem-Scholze-cplx}
Let $\{ \mathcal{F}_i \}_{i \in \mathbb{Z}_{\geq 1}}$ be an inverse system of sheaves on a site $T$, and let $\{ 0 \to \mathcal{F}_i \to \mathcal{F}_{i, 0} \to \mathcal{F}_{i, 1} \to \cdots \to \mathcal{F}_{i, a} \to \cdots \}_{i \in \mathbb{Z}_{\geq 1}}$ be an inverse system of exact complexes. Assume that there exists a basis $\mathcal{B}$ of the site $T$ such that, for each $U \in \mathcal{B}$, the following conditions hold:
\begin{enumerate}
\item\label{lem-Scholze-cplx-coh-van} $H^b(U, \mathcal{F}_{i, a}) = 0$, for all $a \geq 0$, $b > 0$, and $i \geq 1$.
\item\label{lem-Scholze-cplx-ex} The complex $0 \to \mathcal{F}_i(U) \to \mathcal{F}_{i, 0}(U) \to \mathcal{F}_{i, 1}(U) \to \cdots \to \mathcal{F}_{i, a}(U) \to \cdots$ is exact, for each $i \geq 1$.
\item\label{lem-Scholze-cplx-lim-van} $\mathcal{F}_{i + 1, a}(U) \to \mathcal{F}_{i, a}(U)$ is surjective, for all $a \geq 0$ and $i \geq 1$.
\end{enumerate}
Then, for $? = \emptyset$ or any $a \geq 0$ \Pth{with the convention that $\mathcal{F}_{i, \emptyset} := \mathcal{F}_i$}, we have $R^j \varprojlim_i \mathcal{F}_{i, ?} = 0$ and $H^j(U, \varprojlim_i \mathcal{F}_{i, ?}) = 0$, for $j > 0$; and $\bigl(\varprojlim_i \mathcal{F}_{i, ?}\bigr)(U) \cong \varprojlim_i\bigl(\mathcal{F}_{i, ?}(U)\bigr)$. Moreover, the complex $0 \to \varprojlim_i \mathcal{F}_i \to \varprojlim_i \mathcal{F}_{i, 0} \to \varprojlim_i \mathcal{F}_{i, 1} \to \cdots$ is also exact.
\end{lem}
\begin{proof}
Since $\mathcal{F}_{i, \bullet}$ is a resolution of $\mathcal{F}_i$, we have a \Pth{filtration} spectral sequence $E_1^{a, b} := H^b(U, \mathcal{F}_{i, a}) \Rightarrow H^{a + b}(U, \mathcal{F}_i)$, which is concentrated on the terms $E_1^{a, 0}$, by assumption \Refenum{\ref{lem-Scholze-cplx-coh-van}}. Then the spectral sequence degenerates on the $E_2$ page, and
\begin{equation}\label{eq-lem-Scholze-cplx-coh-van}
H^j(U, \mathcal{F}_i) = 0,
\end{equation}
for all $i \geq 1$ and $j > 0$. Similarly, by assumption \Refenum{\ref{lem-Scholze-cplx-ex}}, we have a spectral sequence $E_1^{a, b} := R^b\varprojlim_i\bigl(\mathcal{F}_{i, a}(U)\bigr) \Rightarrow R^{a + b}\varprojlim_i\bigl(\mathcal{F}_i(U)\bigr)$, which is concentrated on the terms $E_1^{a, 0}$, because $R^b\varprojlim_i\bigl(\mathcal{F}_{i, a}(U)\bigr) = 0$ for all $b > 0$, by assumption \Refenum{\ref{lem-Scholze-cplx-lim-van}}. Then the spectral sequence degenerates on the $E_2$ page, and
\begin{equation}\label{eq-lem-Scholze-cplx-lim-van}
R^j\varprojlim_i\bigl(\mathcal{F}_i(U)\bigr) = 0
\end{equation}
for all $j > 0$. Hence, by \Refeq{\ref{eq-lem-Scholze-cplx-coh-van}} and \Refeq{\ref{eq-lem-Scholze-cplx-lim-van}}, by assumptions \Refenum{\ref{lem-Scholze-cplx-coh-van}} and \Refenum{\ref{lem-Scholze-cplx-lim-van}}, and by \cite[\aLem 3.18]{Scholze:2013-phtra}, the first assertion of the lemma follows. Consequently, by \cite[\aProp 1.12.4]{Kashiwara/Shapira:1990-SM}, we have an exact complex $0 \to (\varprojlim_i \mathcal{F}_i)(U) \to (\varprojlim_i \mathcal{F}_{i, 0})(U) \to (\varprojlim_i \mathcal{F}_{i, 1})(U) \to \cdots$. Since $U$ is an arbitrary object in the basis $\mathcal{B}$ of $T$, the second assertion of the lemma also follows, as desired.
\end{proof}
\begin{lem}\label{lem-cplx-AAinf}
For each $m \geq 1$ and each $n \geq 1$, we have a canonical $\Gal(\AC{k} / k)$-equivariant exact complex
\begin{equation}\label{eq-lem-cplx-AAinf-tor}
\begin{split}
0 & \to \bigl(\nu_X^{-1} \, \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}_m|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X} / (p^m, [\varpi^n])\bigr) \\
& \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X_{(0)}^\partial}^\partial / (p^m, [\varpi^n])\bigr) \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X_{(1)}^\partial}^\partial / (p^m, [\varpi^n])\bigr) \\
& \to \cdots \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X_{(a)}^\partial}^\partial / (p^m, [\varpi^n])\bigr) \to \cdots
\end{split}
\end{equation}
over $X_{K, \proket}$. Consequently, we have a canonical $\Gal(\AC{k} / k)$-equivariant almost quasi-isomorphism between
\[
R\varprojlim_{m, n} \Bigl(\bigl(\nu_X^{-1} \, \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}_m|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X} / (p^m, [\varpi^n])\bigr)\Bigr)
\]
\Pth{with almost vanishing higher limits} and the complex
\[
\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \AAinfX{X_{(0)}^\partial}^\partial \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \AAinfX{X_{(1)}^\partial}^\partial \to \cdots \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \AAinfX{X_{(a)}^\partial}^\partial \to \cdots
\]
\Pth{which is almost exact except in degree $0$} over $X_{K, \proket}$. Since $\widehat{\mathbb{L}}$ is a local system, by Definition \ref{def-AAinf-cpt}, we obtain a canonical $\Gal(\AC{k} / k)$-equivariant almost isomorphism
\begin{equation}\label{eq-cplx-AAinf-deg-zero}
\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \AAinfX{X}^\Utext{$\star$-c} \Mi R\varprojlim_{m, n} \Bigl(\bigl(\nu_X^{-1} \, \jmath_{\ket, !}^\Utext{$\star$-c}(\mathbb{L}_m|_{U^\Utext{$\star$-c}_\ket})\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X} / (p^m, [\varpi^n])\bigr)\Bigr).
\end{equation}
\end{lem}
\begin{proof}
The first assertion follows inductively from the exactness of \Refeq{\ref{eq-thm-L-!-prim-comp-resol}}, by using Lemma \ref{lem-property-AAinf-bd}\Refenum{\ref{lem-property-AAinf-bd-mod-pi}} and the canonical morphisms induced by the short exact sequence $0 \to p \mathbb{L}_m \to \mathbb{L}_m \to \mathbb{L}_m / p \to 0$. Let $U$ be any log affinoid perfectoid object in ${X_\proket}_{/X_K}$, which we may assume to trivialize $\widehat{\mathbb{L}}$, because such objects form a basis, by \cite[\aProp \logadicproplogaffperfbasis]{Diao/Lan/Liu/Zhu:laske}. By induction on $m$ and $n$, by \cite[\aLem \logadiclemclimmOplusp]{Diao/Lan/Liu/Zhu:laske} and Lemmas \ref{lem-!-resol} and \ref{lem-property-AAinf-bd}, and by downward induction on $a$ using the finiteness of $|I^\Utext{$\star$-c}|$, we see that \Refeq{\ref{eq-lem-cplx-AAinf-tor}} is almost exact when evaluated on $U$, so that Lemma \ref{lem-Scholze-cplx} applies, from which the second assertion follows, as desired.
\end{proof}
Thus, we are ready for the following:
\begin{proof}[Proof of Proposition \ref{prop-coh-AAinf-cpt}]
By combining Lemma \ref{lem-coh-Ainf} and \Refeq{\ref{eq-cplx-AAinf-deg-zero}}, we obtain the two almost isomorphisms \Refeq{\ref{eq-prop-coh-AAinf-cpt-comp-Ainf}} and \Refeq{\ref{eq-prop-coh-AAinf-cpt-comp-Binf}}, which are naturally compatible with the multiplication by powers of $\xi$ on both sides. As explained in the proof of \cite[\aThm 6.5]{Scholze:2013-phtra}, multiplication by $[\varpi] \in B_{\inf} = W(K^{\flat+})[\frac{1}{p}]$ is invertible \Pth{and so almost isomorphisms become isomorphisms} after reduction modulo $\xi$, because $[\varpi]$ is mapped to $\varpi^\sharp = p$ in $K$, by the choice of $\varpi \in K^{\flat+}$. For each $r \geq 1$, since
\[
H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} (B_{\inf} / \xi^r) \cong \bigl(H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} \mathbb{Q}_p\bigr) \otimes_{\mathbb{Q}_p} (B_{\inf} / \xi^r),
\]
and since $H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$ is a finite-dimensional $\mathbb{Q}_p$-vector space, by using the canonical isomorphism \Refeq{\ref{eq-prop-coh-AAinf-cpt-comp-Binf}} we have just established, we see that $\xi^r$ acts with zero kernel on $H^i(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c})$, and we obtain a canonical isomorphism
\[
H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c}\bigr) \otimes_{B_{\inf}} (B_{\inf} / \xi^r) \Mi H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c}\bigr) / \xi^r.
\]
Therefore, the connecting morphisms in the long exact sequence associated with the short exact sequence $0 \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c} \Mapn{\xi^r} \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r) \to 0$ over $X_{K, \proket}$ \Ref{see Lemma \ref{lem-fil-AAinf-cpt}} are all zero, and we obtain a canonical isomorphism
\[
H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c}\bigr) / \xi^r \Mi H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr).
\]
It follows that \Refeq{\ref{eq-prop-coh-AAinf-cpt-comp-Binf}} is compatible with the filtrations and induces the desired isomorphism \Refeq{\ref{eq-prop-coh-AAinf-cpt-comp-Binf-gr}}, which is just the composition of the last two isomorphisms.
\end{proof}
\subsection{Period sheaves {$\mathbb{B}_\dR^{\Utext{$\star$-c}, +}$} and {$\mathbb{B}_\dR^\Utext{$\star$-c}$}}\label{sec-period-B-dR-cpt}
\begin{defn}\label{def-O-hat-cpt}
For $? = \emptyset$, $+$, $\flat$, or $\flat+$, let $\widehat{\mathcal{O}}^{\Utext{$\star$-c}, ?}_X := \ker(\widehat{\mathcal{O}}^{?, \partial}_{X_{(0)}^\partial} \to \widehat{\mathcal{O}}^{?, \partial}_{X_{(1)}^\partial})$.
\end{defn}
\begin{defn}\label{def-BBdRp-cpt}
Let
\[
\BBdRX{X}^{\Utext{$\star$-c}, +} := \ker(\BBdRX{X_{(0)}^\partial}^{+, \partial} \to \BBdRX{X_{(1)}^\partial}^{+, \partial})
\]
and
\[
\BBdRX{X}^\Utext{$\star$-c} := \ker(\BBdRX{X_{(0)}^\partial}^\partial \to \BBdRX{X_{(1)}^\partial}^\partial)
\]
\Ref{see Lemma \ref{lem-AAinf-bd-mor}}. We shall omit the subscripts \Qtn{$X$} when the context is clear.
\end{defn}
\begin{rk}\label{rem-def-BBdRp-cpt}
By definition, we have $\BBdRX{X_{(0)}^\partial}^{+, \partial} \cong \BBdRpX{X}$; and we have $\BBdRX{X}^\Utext{$\star$-c} \cong \BBdRX{X}^{\Utext{$\star$-c}, +}[\tfrac{1}{\xi}] \cong \BBdRX{X}^{\Utext{$\star$-c}, +} \otimes_{B_\dR^+} B_\dR$ over $X_{K, \proket}$. Moreover, we could have defined $\BBdRX{X}^{\Utext{$\star$-c}, +}$ as a derived limit as in \Refeq{\ref{eq-lem-BBdRp-cpt}} below \Pth{with $\widehat{\mathbb{L}} = \widehat{\mathbb{Z}}_p$ there}, without reference to the boundary stratification.
\end{rk}
The goal of this subsection is to prove the following generalization of \cite[\aLem \logRHlemketBdR]{Diao/Lan/Liu/Zhu:lrhrv}:
\begin{prop}\label{prop-L-!-coh-comp-proket}
For each $i \geq 0$, we have a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-prop-BBdRp-cpt-comp-dR}
H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR^+ \cong H^i(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^{\Utext{$\star$-c}, +}),
\end{equation}
compatible with the filtrations on both sides, and also \Pth{by taking $\OP{gr}^0$} a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-prop-BBdRp-cpt-comp-Higgs}
H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} K \cong H^i(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\star$-c}_{X_{K, \proket}}).
\end{equation}
\end{prop}
\begin{lem}\label{lem-BBdRp-cpt}
Over $X_{K, \proket}$, we have a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-lem-BBdRp-cpt}
\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^{\Utext{$\star$-c}, +} \cong R\varprojlim_r \bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr),
\end{equation}
\Pth{with vanishing higher limits} and a canonical $\Gal(\AC{k} / k)$-equivariant exact complex
\begin{equation}\label{eq-lem-BBdRp-cpt-cplx}
\begin{split}
0 & \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^{\Utext{$\star$-c}, +} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(0)}^\partial}^{+, \partial} \\
& \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(1)}^\partial}^{+, \partial} \to \cdots \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(a)}^\partial}^{+, \partial} \to \cdots,
\end{split}
\end{equation}
which is strictly compatible with the filtrations defined by multiplication by powers of $\xi$, and induces, for each $r \in \mathbb{Z}$, a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-lem-BBdRp-cpt-gr}
\OP{gr}^r(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\Utext{$\star$-c}) \cong \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OP{gr}^r(\BBdRX{X}^\Utext{$\star$-c}) \cong \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\star$-c}_{X_{K, \proket}}(r)
\end{equation}
and a canonical $\Gal(\AC{k} / k)$-equivariant exact complex
\begin{equation}\label{eq-lem-BBdRp-cpt-cplx-gr}
\begin{split}
0 & \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\star$-c}_{X_{K, \proket}} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}_{X_{(0), K, \proket}^\partial} \\
& \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}_{X_{(1), K, \proket}^\partial} \to \cdots \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}_{X_{(a), K, \proket}^\partial} \to \cdots.
\end{split}
\end{equation}
\end{lem}
\begin{proof}
Since $\widehat{\mathbb{L}}$ is a local system, by forming the tensor product of the short exact sequence $0 \to B_{\inf} \Mapn{\xi^r} B_{\inf} \to B_{\inf} / \xi^r \to 0$ with the complex $\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X_{(\bullet)}^\partial}^\partial$ \Ref{which is almost exact except in degree $0$, by Lemma \ref{lem-cplx-AAinf}}, we obtain a short exact sequence of complexes $0 \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X_{(\bullet)}^\partial}^\partial \Mapn{\xi^r} \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X_{(\bullet)}^\partial}^\partial \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X_{(\bullet)}^\partial}^\partial / \xi^r) \to 0$, inducing an almost long exact sequence with only three nonzero terms in the beginning $0 \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r) \to 0 \to \cdots$, showing that we have a canonical isomorphism $\bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c}\bigr) / \xi^r \Mi \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)$ \Ref{\Refcf{} Lemma \ref{lem-fil-AAinf-cpt}} and a canonical $\Gal(\AC{k} / k)$-equivariant exact complex
\begin{equation}\label{eq-lem-BBdRp-cpt-cplx-BBinf-xi-r}
\begin{split}
0 & \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r) \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X_{(0)}^\partial}^\partial / \xi^r) \\
& \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X_{(1)}^\partial}^\partial / \xi^r) \to \cdots \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X_{(a)}^\partial}^\partial / \xi^r) \to \cdots.
\end{split}
\end{equation}
When $r = 1$, this gives the exact complex \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-gr}}, because $\BBinfX{X_{(a)}^\partial}^\partial / \xi \cong \widehat{\mathcal{O}}^\partial_{X_{(a)}^\partial}$. \Pth{Alternatively, we can obtain the exact complex \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-gr}} more directly from the exact complex \Refeq{\ref{eq-thm-L-!-prim-comp-resol}}.} More generally, let $U$ be any log affinoid perfectoid object in ${X_\proket}_{/X_K}$, which we may assume to trivialize $\widehat{\mathbb{L}}$, because such objects form a basis, by \cite[\aProp \logadicproplogaffperfbasis]{Diao/Lan/Liu/Zhu:laske}. By induction on $r$, by the exactness of \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-BBinf-xi-r}}, by \cite[\aThm \logadicthmalmostvanhat]{Diao/Lan/Liu/Zhu:laske}, and by downward induction on $a$ using the finiteness of $|I^\Utext{$\star$-c}|$, we see that \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-BBinf-xi-r}} is exact when evaluated on $U$, so that Lemma \ref{lem-Scholze-cplx} applies, from which we obtain that $R^j\varprojlim_r \bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr) = 0$, for all $j > 0$, and that the canonical $\Gal(\AC{k} / k)$-equivariant complex
\begin{equation}\label{eq-lem-BBdRp-cpt-cplx-pre}
\begin{split}
0 & \to \varprojlim_r \bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr) \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(0)}^\partial}^{+, \partial} \\
& \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(1)}^\partial}^{+, \partial} \to \cdots \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(a)}^\partial}^{+, \partial} \to \cdots
\end{split}
\end{equation}
is exact. Since $\widehat{\mathbb{L}}$ is a local system, by Definition \ref{def-BBdRp-cpt}, we obtain an exact sequence
\begin{equation}\label{eq-lem-BBdRp-cpt-cplx-init}
0 \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^{\Utext{$\star$-c}, +} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(0)}^\partial}^{+, \partial} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(1)}^\partial}^{+, \partial}
\end{equation}
as in the first few terms of \Refeq{\ref{eq-lem-BBdRp-cpt-cplx}}. Hence, we obtain both \Refeq{\ref{eq-lem-BBdRp-cpt}} and \Refeq{\ref{eq-lem-BBdRp-cpt-cplx}} by comparing \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-pre}} and \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-init}}, which are strictly compatible with filtrations because \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-pre}} and \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-init}} are, by their very constructions above. Since
\[
\OP{gr}^r(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(a)}^\partial}^{+, \partial}) \cong \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\partial_{X_{(a), K, \proket}^\partial}(r)
\]
by Lemma \ref{lem-BdR-bd-gr}, for all $a \geq 0$; and since
\[
\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\star$-c}_{X_{K, \proket}} \cong \ker(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\partial_{X_{(0), K, \proket}^\partial} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\partial_{X_{(1), K, \proket}^\partial}),
\]
by Definition \ref{def-O-hat-cpt}, we also obtain \Refeq{\ref{eq-lem-BBdRp-cpt-gr}} and \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-gr}}, as desired.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop-L-!-coh-comp-proket}]
Since $H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$ is a finite-dimensional $\mathbb{Q}_p$-vector space \Ref{see Lemma \ref{lem-def-H-c-fin-Z-p}}, and since $H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR^+ \cong \bigl((H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} \mathbb{Q}_p) \otimes_{\mathbb{Q}_p} B_{\inf}\bigr) \otimes_{B_{\inf}} B_\dR^+$ and $B_\dR^+ \cong \varprojlim_r (B_{\inf} / \xi^r)$, by Proposition \ref{prop-coh-AAinf-cpt}, we obtain
\[
\begin{split}
H^i_{\et, \Utext{$\star$-c}}(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR^+ & \cong R\varprojlim_r \bigl(H^i(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBinfX{X}^\Utext{$\star$-c}) \otimes_{B_{\inf}} (B_{\inf} / \xi^r)\bigr) \\
& \cong R\varprojlim_r H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr)
\end{split}
\]
\Pth{with vanishing higher limits}, which are compatible with the filtrations defined by multiplication by powers of $\xi$. Thus, the proposition follows from Lemma \ref{lem-BBdRp-cpt} and the standard isomorphism $R\varprojlim_r R\Gamma\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr) \cong R\Gamma\bigl(X_{K, \proket}, R\varprojlim_r \bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} (\BBinfX{X}^\Utext{$\star$-c} / \xi^r)\bigr)\bigr)$, as desired.
\end{proof}
\subsection{Period sheaves {$\mathcal{O}\mathbb{B}_{\dR, \log}^{\Utext{$\star$-c}, +}$} and {$\mathcal{O}\mathbb{B}_{\dR, \log}^\Utext{$\star$-c}$}, and Poincar\'e lemma}\label{sec-period-OB-dR-cpt}
\begin{defn}\label{def-OBdRp-cpt}
Let
\[
\OBdlX{X}^{\Utext{$\star$-c}, +} := \ker(\OBdlX{X_{(0)}^\partial}^{+, \partial} \to \OBdlX{X_{(1)}^\partial}^{+, \partial});
\]
\[
\OBdlX{X}^\Utext{$\star$-c} := \ker(\OBdlX{X_{(0)}^\partial}^\partial \to \OBdlX{X_{(1)}^\partial}^\partial);
\]
\[
\Fil^r \OBdlX{X}^{\Utext{$\star$-c}, +} := \ker(\Fil^r \OBdlX{X_{(0)}^\partial}^{+, \partial} \to \Fil^r \OBdlX{X_{(1)}^\partial}^{+, \partial}),
\]
for $r \geq 0$;
\[
\Fil^r \OBdlX{X}^\Utext{$\star$-c} := \ker(\Fil^r \OBdlX{X_{(0)}^\partial}^\partial \to \Fil^r \OBdlX{X_{(1)}^\partial}^\partial),
\]
for $r \in \mathbb{Z}$; and
\[
\OClX{X}^\Utext{$\star$-c} := \OP{gr}^0\bigl(\OBdlX{X}^\Utext{$\star$-c}\bigr) \cong \ker(\OClX{X_{(0)}^\partial}^\partial \to \OClX{X_{(1)}^\partial}^\partial).
\]
\Ref{See Lemma \ref{lem-AAinf-bd-mor}. The isomorphism above is justified by Corollary \ref{cor-OBdRp-cplx-bd-strict}.}
\end{defn}
\begin{cor}\label{cor-OBdRp-cplx-bd-cplx}
The morphisms in Lemma \ref{lem-AAinf-bd-mor} induce an exact complex
\begin{equation}\label{eq-cor-OBdRp-cplx-bd-cplx-sh-OBdl}
\begin{split}
0 & \to \OBdlX{X}^{\Utext{$\star$-c}, +} \to \OBdlX{X_{(0)}^\partial}^{+, \partial} \\
& \to \OBdlX{X_{(1)}^\partial}^{+, \partial} \to \cdots \to \OBdlX{X_{(a)}^\partial}^{+, \partial} \to \cdots
\end{split}
\end{equation}
strictly compatible with the filtrations. Moreover, by forming the tensor product of \Refeq{\ref{eq-cor-OBdRp-cplx-bd-cplx-sh-OBdl}} with the finite locally free $\mathcal{O}_X$-module $\Omega^{\log, \bullet}_X$, we obtain an exact complex of log de Rham complexes \Ref{\Refcf{} \cite[\aCor \logRHcorlogdRcplx]{Diao/Lan/Liu/Zhu:lrhrv}}
\begin{equation}\label{eq-cor-OBdRp-cplx-bd-cplx}
\begin{split}
0 & \to \OBdlX{X}^{\Utext{$\star$-c}, +} \otimes_{\mathcal{O}_X} \Omega^{\log, \bullet}_X \to \OBdlX{X_{(0)}^\partial}^{+, \partial} \otimes_{\mathcal{O}_X} \Omega^{\log, \bullet}_X \\
& \to \OBdlX{X_{(1)}^\partial}^{+, \partial} \otimes_{\mathcal{O}_X} \Omega^{\log, \bullet}_X \to \cdots \to \OBdlX{X_{(a)}^\partial}^{+, \partial} \otimes_{\mathcal{O}_X} \Omega^{\log, \bullet}_X \to \cdots
\end{split}
\end{equation}
strictly compatible with the filtrations. The above statements hold with $\OBdlX{X_{(a)}^\partial}^{+, \partial}$ and $\BBdRX{X_{(a)}^\partial}^{+, \partial}$ replaced with $\OBdlX{X_{(a)}^\partial}^\partial$ and $\BBdRX{X_{(a)}^\partial}^\partial$, respectively, for all $a \geq 0$. Consequently, we have an exact complex
\begin{equation}\label{eq-cor-OBdRp-cplx-bd-cplx-sh-OCl}
0 \to \OClX{X}^\Utext{$\star$-c} \to \OClX{X_{(0)}^\partial}^\partial \to \OClX{X_{(1)}^\partial}^\partial \to \cdots \to \OClX{X_{(a)}^\partial}^\partial \to \cdots.
\end{equation}
\end{cor}
\begin{proof}
The assertions for $\OBdlX{X_{(a)}^\partial}^{+, \partial}$ follow from Lemma \ref{lem-OBdRp-bd-loc}, because power series algebras are direct products as modules; because the filtrations are defined by multiplication by powers of $\xi \in A_{\inf}$ and hence strictly compatible; and because the complex \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-gr}} \Pth{with $\widehat{\mathbb{L}} = \widehat{\mathbb{Z}}_p$} is exact. Then the assertions for $\OBdlX{X_{(a)}^\partial}^\partial$, $\BBdRX{X_{(a)}^\partial}^\partial$, and $\OClX{X}^\Utext{$\star$-c}$ also follow, by strict compatibility with filtrations.
\end{proof}
We have the following variant of the \emph{Poincar\'e lemma}:
\begin{prop}\label{prop-Poin-lem}
We have the following convenient facts over $X_{K, \proket}$:
\begin{enumerate}
\item\label{prop-Poin-lem-1} The exact complex in \cite[\aCor \logRHcorlogdRcplx(1)]{Diao/Lan/Liu/Zhu:lrhrv} induces an exact complex \begin{equation}\label{eq-prop-Poin-lem-1}
\begin{split}
0 & \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^{\Utext{$\star$-c}, +} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^{\Utext{$\star$-c}, +} \\
& \Mapn{\nabla} (\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^{\Utext{$\star$-c}, +}) \otimes_{\mathcal{O}_X} \Omega^{\log, 1}_X \\
& \Mapn{\nabla} (\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^{\Utext{$\star$-c}, +}) \otimes_{\mathcal{O}_X} \Omega^{\log, 2}_X \to \cdots.
\end{split}
\end{equation}
\item\label{prop-Poin-lem-2} The above statement holds with \cite[\aCor \logRHcorlogdRcplx(1)]{Diao/Lan/Liu/Zhu:lrhrv} replaced with \cite[\aCor \logRHcorlogdRcplx(2)]{Diao/Lan/Liu/Zhu:lrhrv}, and with $\BBdRX{X}^{\Utext{$\star$-c}, +}$ and $\OBdlX{X}^{\Utext{$\star$-c}, +}$ replaced with $\BBdRX{X}^\Utext{$\star$-c}$ and $\OBdlX{X}^\Utext{$\star$-c}$, respectively.
\item\label{prop-Poin-lem-3} As in \cite[\aCor \logRHcorlogdRcplx(3)]{Diao/Lan/Liu/Zhu:lrhrv}, for each $r \in \mathbb{Z}$, the subcomplex
\[
\begin{split}
0 & \to \Fil^r(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\Utext{$\star$-c}) \to \Fil^r(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^\Utext{$\star$-c}) \\
& \Mapn{\nabla} \Fil^{r - 1}(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^\Utext{$\star$-c}) \otimes_{\mathcal{O}_X} \Omega^{\log, 1}_X \\
& \Mapn{\nabla} \Fil^{r - 2}(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^\Utext{$\star$-c}) \otimes_{\mathcal{O}_X} \Omega^{\log, 2}_X \to \cdots
\end{split}
\]
of the complex for $\BBdRX{X}^\Utext{$\star$-c}$ and $\OBdlX{X}^\Utext{$\star$-c}$ is also exact.
\item\label{prop-Poin-lem-4} For each $r \in \mathbb{Z}$, the quotient complex
\[
\begin{split}
0 & \to \OP{gr}^r(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\Utext{$\star$-c}) \to \OP{gr}^r(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^\Utext{$\star$-c}) \\
& \Mapn{\nabla} \OP{gr}^{r - 1}(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^\Utext{$\star$-c}) \otimes_{\mathcal{O}_X} \Omega^{\log, 1}_X \\
& \Mapn{\nabla} \OP{gr}^{r - 2}(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X}^\Utext{$\star$-c}) \otimes_{\mathcal{O}_X} \Omega^{\log, 2}_X \to \cdots
\end{split}
\]
of the previous complex is exact, and can be $\Gal(\AC{k} / k)$-equivariantly identified with the complex
\[
\begin{split}
0 & \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\star$-c}_X(r) \to \bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OClX{X}^\Utext{$\star$-c}(r)\bigr) \\
& \Mapn{\nabla} \bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OClX{X}^\Utext{$\star$-c}(r - 1)\bigr) \otimes_{\mathcal{O}_X} \Omega^{\log, 1}_X \\
& \Mapn{\nabla} \bigl(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OClX{X}^\Utext{$\star$-c}(r - 2)\bigr) \otimes_{\mathcal{O}_X} \Omega^{\log, 2}_X \to \cdots.
\end{split}
\]
\end{enumerate}
\end{prop}
\begin{proof}
Let $\mathcal{R}^\bullet$ denote the complex \Refeq{\ref{eq-prop-Poin-lem-1}}, which we would like to show to be exact. Since $\widehat{\mathbb{L}}$ is a local system, by forming its tensor product with the exact complex \Refeq{\ref{eq-cor-OBdRp-cplx-bd}} in Corollary \ref{cor-OBdRp-cplx-bd}, we obtain an exact complex
\[
\begin{split}
0 & \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X_{(a)}^\partial}^{+, \partial} \to \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X_{(a)}^\partial}^{+, \partial} \\
& \Mapn{\nabla} \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X_{(a)}^\partial}^{+, \partial} \otimes_{\mathcal{O}_X} \Omega^{\log, 1}_X \Mapn{\nabla} \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OBdlX{X_{(a)}^\partial}^{+, \partial} \otimes_{\mathcal{O}_X} \Omega^{\log, 2}_X \to \cdots,
\end{split}
\]
which we denote by $\mathcal{R}_{(a)}^\bullet$, for each $a \geq 0$; and we obtain a canonical exact complex of complexes
\begin{equation}\label{prop-Poin-lem-cplx-cplx}
0 \to \mathcal{R}^\bullet \to \mathcal{R}_{(0)}^\bullet \to \mathcal{R}_{(1)}^\bullet \to \cdots \to \mathcal{R}_{(a)}^\bullet \to \cdots,
\end{equation}
by Lemma \ref{lem-BBdRp-cpt} and Corollary \ref{cor-OBdRp-cplx-bd-cplx}. Since \Refeq{\ref{prop-Poin-lem-cplx-cplx}} contains only finitely many nonzero terms, we can break it into finitely many short exact sequences of complexes by taking kernels and cokernels, and argue by taking the associated long exact sequences of cohomology and by downward induction that the complex $\mathcal{R}^\bullet$ is exact when all the other complexes $\mathcal{R}_{(a)}^\bullet$ are. This shows that the complex \Refeq{\ref{eq-prop-Poin-lem-1}} in \Refenum{\ref{prop-Poin-lem-1}} is exact, as desired. The remaining assertions then follow from this, from the strict compatibility with filtrations in Corollary \ref{cor-OBdRp-cplx-bd-cplx}, and from the corresponding assertions in \cite[\aCor \logRHcorlogdRcplx]{Diao/Lan/Liu/Zhu:lrhrv}.
\end{proof}
\subsection{Comparison of cohomology}\label{sec-dR-comp-cpt-proof}
For simplicity, we shall omit the subscripts \Qtn{$X$} from the period sheaves. As in \cite[\aSec \logRHseclogRH]{Diao/Lan/Liu/Zhu:lrhrv}, let $\mu: X_\proket \to X_\an$ and $\mu': {X_\proket}_{/X_K} \to X_\an$ denote the canonical morphisms of sites. Recall that $\RHl(\mathbb{L}) = R\mu'_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{B}_{\dR, \log})$, $\Hl(\mathbb{L}) = \OP{gr}^0\bigl(\RHl(\mathbb{L})\bigr) \cong R\mu'_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{C}_{\log})$, and $D_{\dR, \log}(\mathbb{L}) = \mu_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{B}_{\dR, \log})$.
\begin{defn}\label{def-unip-twist}
Let
\[
\RHl^\Utext{$\star$-c}(\mathbb{L}) := \ker\Bigl( \RHl(\mathbb{L}) \to \oplus_{j \in I^\Utext{$\star$-c}} \, \bigl(\RHl(\mathbb{L})|_{\mathcal{D}_j}^0\bigr)\Bigr)
\]
and
\[
D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L}) := \ker\Bigl( D_{\dR, \log}(\mathbb{L}) \to \oplus_{j \in I^\Utext{$\star$-c}} \, \bigl(D_{\dR, \log}(\mathbb{L})|_{D_j}^0\bigr)\Bigr),
\]
which are equipped with the induced log connections and filtrations, where \Qtn{$|_{\mathcal{D}_j}$} and \Qtn{$|_{D_j}$} denote pullbacks \Pth{as coherent sheaves} to $\mathcal{D}_j$ and $D_j$, respectively, and where the superscripts \Qtn{$0$} denote the maximal quotient sheaves on which the residue endomorphisms act nilpotently \Ref{\Refcf{} \cite[(\logRHeqresgeneigen)]{Diao/Lan/Liu/Zhu:lrhrv}}, with induced quotient filtrations. For simplicity, by pushforward, we shall abusively consider such sheaves as coherent sheaves over the ambient spaces $\mathcal{X}$ and $X$. Accordingly, let
\[
\Hl^\Utext{$\star$-c}(\mathbb{L}) := \OP{gr}^0\bigl(\RHl^\Utext{$\star$-c}(\mathbb{L})\bigr),
\]
which is equipped with a canonically induced log Higgs field.
\end{defn}
\begin{rk}\label{rem-unip-twist}
While the eigenvalues of the residues of $\RHl(\mathbb{L})$ along \Pth{the irreducible components of} $D$ are all in $[0, 1)$, the eigenvalues of the residues of $\RHl^\Utext{$\star$-c}(\mathbb{L})$ along $D^\Utext{$\star$-c}$ and $D^\Utext{$\star$-nc}$ are in $(0, 1]$ and $[0, 1)$, respectively; and the analogous statement is true for $D_{\dR, \log}(\mathbb{L})$ and $D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L})$. By definition, we always have the canonical inclusion $\RHl(\mathbb{L})(-D^\Utext{$\star$-c}) \Em \RHl^\Utext{$\star$-c}(\mathbb{L})$ \Pth{\resp $\Hl(\mathbb{L})(-D^\Utext{$\star$-c}) \Em \Hl^\Utext{$\star$-c}(\mathbb{L})$, \resp $D_{\dR, \log}(\mathbb{L})(-D^\Utext{$\star$-c}) \Em D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L})$}, which is an isomorphism when the residues of $\RHl(\mathbb{L})$ \Pth{\resp $\Hl(\mathbb{L})$, \resp $D_{\dR, \log}(\mathbb{L})$} along the irreducible components of $D^\Utext{$\star$-c}$ are all nilpotent. \Ref{By \cite[\aThm \logRHthmunipvsnilp]{Diao/Lan/Liu/Zhu:lrhrv}, such a nilpotence holds when $\mathbb{L}_{\mathbb{Q}_p}$ has \emph{unipotent} geometric monodromy along all irreducible components of $D^\Utext{$\star$-c}$.}
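For instance \Pth{a simple illustration, which will not be needed in what follows}, for the trivial local system $\mathbb{L} = \mathbb{Z}_p$, the geometric monodromy along every irreducible component of $D^\Utext{$\star$-c}$ is trivial and hence unipotent, so all three inclusions above are isomorphisms; that is, $\RHl^\Utext{$\star$-c}(\mathbb{Z}_p) \cong \RHl(\mathbb{Z}_p)(-D^\Utext{$\star$-c})$, and similarly for $\Hl^\Utext{$\star$-c}(\mathbb{Z}_p)$ and $D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{Z}_p)$.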
\end{rk}
\begin{lem}\label{lem-unip-twist-qis}
The canonical morphisms of log de Rham complexes
\begin{equation}\label{eq-lem-unip-twist-qis-RHl}
\DRl\bigl(\RHl(\mathbb{L})(-D^\Utext{$\star$-c})\bigr) \to \DRl\bigl(\RHl^\Utext{$\star$-c}(\mathbb{L})\bigr)
\end{equation}
and
\begin{equation}\label{eq-lem-unip-twist-qis-Ddl}
\DRl\bigl(D_{\dR, \log}(\mathbb{L})(-D^\Utext{$\star$-c})\bigr) \to \DRl\bigl(D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L})\bigr),
\end{equation}
which are strictly compatible with the filtrations by construction, are quasi-isomorphisms. Hence, the canonical morphism of log Higgs complexes
\begin{equation}\label{eq-lem-unip-twist-qis-Hil}
\Hil\bigl(\Hl(\mathbb{L})(-D^\Utext{$\star$-c})\bigr) \to \Hil\bigl(\Hl^\Utext{$\star$-c}(\mathbb{L})\bigr)
\end{equation}
is also a quasi-isomorphism.
\end{lem}
\begin{proof}
By definition of $\RHl^\Utext{$\star$-c}(\mathbb{L})$, the residues induce automorphisms of the pullback of $\bigl(\RHl^\Utext{$\star$-c}(\mathbb{L})\bigr) / \bigl(\RHl(\mathbb{L})(-D^\Utext{$\star$-c})\bigr)$ to $D_j$, for all $j \in I^\Utext{$\star$-c}$. Hence, \Refeq{\ref{eq-lem-unip-twist-qis-RHl}} is a quasi-isomorphism, by the same argument as in the proof of \cite[\aLem 2.10]{Esnault/Viehweg:1992-LVT-B}; and so is \Refeq{\ref{eq-lem-unip-twist-qis-Hil}} by taking $\OP{gr}^0$. Similarly, \Refeq{\ref{eq-lem-unip-twist-qis-Ddl}} is also a quasi-isomorphism.
\end{proof}
\begin{prop}\label{prop-RHl-Hl-Ddl-cpt}
The canonical morphisms $R\mu'_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{B}_{\dR, \log}^\Utext{$\star$-c}) \to \RHl(\mathbb{L})$, $R\mu'_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{C}_{\log}^\Utext{$\star$-c}) \to \Hl(\mathbb{L})$, and $\mu_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{B}_{\dR, \log}^\Utext{$\star$-c}) \to D_{\dR, \log}(\mathbb{L})$ factor through canonical isomorphisms $R\mu'_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{B}_{\dR, \log}^\Utext{$\star$-c}) \Mi \RHl^\Utext{$\star$-c}(\mathbb{L})$, $R\mu'_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{C}_{\log}^\Utext{$\star$-c}) \Mi \Hl^\Utext{$\star$-c}(\mathbb{L})$, and $\mu_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{B}_{\dR, \log}^\Utext{$\star$-c}) \Mi D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L})$, respectively.
\end{prop}
\begin{proof}
It suffices to establish the assertion for $\Hl^\Utext{$\star$-c}(\mathbb{L})$, after which the assertions for $\RHl^\Utext{$\star$-c}(\mathbb{L})$ and $D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L})$ follow. Since the assertions are local in nature, we may suppose as in \cite[\aSec \logRHseccoh]{Diao/Lan/Liu/Zhu:lrhrv} that $X = \OP{Spa}(R, R^+)$ is an affinoid log adic space over $k$, equipped with a strictly \'etale morphism
\[
X \to \mathbb{D}^n = \OP{Spa}(k\Talg{T_1, \ldots, T_n}, k^+\Talg{T_1, \ldots, T_n})
\]
\Pth{with $P = \mathbb{Z}_{\geq 0}^n$ and $Q = 0$ there} and with $D^\Utext{$\star$-c} \Em X$ given by the preimage of $\{ T_1 \cdots T_r = 0 \} \Em \mathbb{D}^n$, so that we have a log perfectoid affinoid covering $\widetilde{X} \to X$ as defined there such that $\widetilde{X}_K \to X_K$ is a Galois pro-Kummer \'etale covering with Galois group $\Gamma_\geom \cong (\widehat{\mathbb{Z}}(1))^n$. For each $m \geq 1$, let us write $X_{K, m} = \OP{Spa}(R_{K, m}, R_{K, m}^+) := X_K \times_{\mathbb{D}^n_K} \mathbb{D}^n_{K, m}$, and denote by $(\widehat{R}_{K, \infty}, \widehat{R}^+_{K, \infty})$ the $p$-adic completion of $\varinjlim_m (R_{K, m}, R^+_{K, m})$, so that $\widehat{\mathcal{O}}(\widetilde{X}_K) = \widehat{R}_{K, \infty}$. For each subset $J$ of $\{ 1, \ldots, r \}$, let $R_{J, K, m}$ denote the quotient of $R_{K, m}$ by the ideal generated by $\{ T_j \}_{j \in J}$, and let $R^+_{J, K, m}$ denote the integral closure in $R_{J, K, m}$ of the image of $R^+_{K, m}$. Note that the nilpotent elements in $R^+_{J, K, m}$ are necessarily $p$-divisible. Therefore, if we denote by $(\widehat{R}_{J, K, \infty}, \widehat{R}^+_{J, K, \infty})$ the $p$-adic completion of $\varinjlim_m (R_{J, K, m}, R^+_{J, K, m})$, then we have a canonical isomorphism $\widehat{R}_{K, \infty} / (T_j^s)^\wedge_{j \in J, \, s \in \mathbb{Q}_{> 0}} \Mi \widehat{R}_{J, K, \infty}$, and we have $\widehat{\mathcal{O}}^\partial_{X_J^\partial}(\widetilde{X}_K) = \widehat{R}_{J, K, \infty}$, as in Lemma \ref{lem-O-flat+-cl}, where $X_J^\partial \subset D^\Utext{$\star$-c}$ is defined by $\{ T_j = 0 \}_{j \in J}$ \Pth{with its log structure pulled back from $X$}. When $m = 1$, we shall drop the subscript $m$ in the above notation.
Let $\mathcal{L} := \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}$ and $\mathcal{L}^\Utext{$\star$-c} := \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\star$-c}$, so that $(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{C}_{\log})|_{\widetilde{X}_K} \cong \mathcal{L}|_{\widetilde{X}_K}[W_1, \ldots, W_n]$ and $(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{C}_{\log}^\Utext{$\star$-c})|_{\widetilde{X}_K} \cong \mathcal{L}^\Utext{$\star$-c}|_{\widetilde{X}_K}[W_1, \ldots, W_n]$, by Lemma \ref{lem-OBdRp-bd-loc} and \cite[\aCor \logRHcorOBdlplocgr]{Diao/Lan/Liu/Zhu:lrhrv}. Let $L_\infty := \mathcal{L}(\widetilde{X}_K)$ and $L^\Utext{$\star$-c}_\infty := \mathcal{L}^\Utext{$\star$-c}(\widetilde{X}_K)$. For each $J \subset \{ 1, \ldots, r \}$, let $\mathcal{L}_J := \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\partial_{X_J^\partial}$ and $L_{J, \infty} := \mathcal{L}_J(\widetilde{X}_K)$. Then $L_\infty$ is a finite projective $\widehat{R}_{K, \infty}$-module, and $L_{J, \infty} \cong L_\infty \otimes_{\widehat{R}_{K, \infty}} \widehat{R}_{J, K, \infty}$, for all $J$. By evaluating the exact complexes \Refeq{\ref{eq-lem-BBdRp-cpt-cplx-gr}} and \Refeq{\ref{eq-cor-OBdRp-cplx-bd-cplx-sh-OCl}} on $\widetilde{X}$, and by \cite[\aThm \logadicthmalmostvanhat]{Diao/Lan/Liu/Zhu:laske}, we obtain an exact complex
\begin{equation}\label{eq-prop-RHl-Hl-Ddl-cpt-infty}
\begin{split}
0 & \to L^\Utext{$\star$-c}_\infty[W_1, \ldots, W_n] \to L_\infty[W_1, \ldots, W_n] \to \oplus_{|J| = 1} \, L_{J, \infty}[W_1, \ldots, W_n] \\
& \to \oplus_{|J| = 2} \, L_{J, \infty}[W_1, \ldots, W_n] \to \cdots \to \oplus_{|J| = r} \, L_{J, \infty}[W_1, \ldots, W_n] \to 0
\end{split}
\end{equation}
respecting the variables $W_1, \ldots, W_n$. By Corollary \ref{cor-O-flat+-cl}, Lemma \ref{lem-OBdRp-bd-loc}, and \cite[\aProp \logRHpropLOCl{} and \aLem \logRHlemLOClcoh]{Diao/Lan/Liu/Zhu:lrhrv},
\[
H^i({X_\proket}_{/X_K}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \OClX{X_J^\partial}^\partial) \cong H^i(\Gamma_\geom, L_{J, \infty}[W_1, \ldots, W_n])
\]
is zero, when $i > 0$; and is canonically isomorphic to a finite projective $R_K / (T_j)_{j \in J}$-module $L(X_{J, K}^\partial)$, when $i = 0$, whose formation is compatible with pullbacks under rational localizations or finite \'etale morphisms, by \cite[\aLem \logRHlemLdescent]{Diao/Lan/Liu/Zhu:lrhrv}. Concretely, in the notation of \cite[\aSec \logRHseccoh]{Diao/Lan/Liu/Zhu:lrhrv}, there exists some model $L_{m_0}(X_K)$ of $L_\infty$ over $R_{K, m_0}$, for some $m_0 \geq 1$, such that $L_{m_0}(X_{J, K}^\partial) := L_{m_0}(X_K) / (T_j^{\frac{1}{m_0}})_{j \in J}$ is a good model of $L_{J, \infty}$, for each $J$, and the $R_K / (T_j)_{j \in J}$-submodule $L(X_{J, K}^\partial)$ is the maximal $K$-subspace of $L_{m_0}(X_{J, K}^\partial)$ on which $\Gamma_\geom$ acts unipotently. Let
\begin{equation}\label{eq-def-L-X-K-cpt}
L(X_K)^\Utext{$\star$-c} := \ker\bigl(L(X_K) \to \oplus_{|J| = 1} \, L(X_{J, K}^\partial)\bigr).
\end{equation}
Since each $L_{m_0}(X_K)$ is finite projective and hence flat over $R_{K, m_0}$, by usual arguments \Ref{\Refcf{} the proof of \cite[\aLem 2.3]{Harris/Lan/Taylor/Thorne:2016-rccsv}}, we have an exact complex
\[
\begin{split}
& 0 \to (T_1^{\frac{1}{m_0}} \cdots T_r^{\frac{1}{m_0}}) \, L_{m_0}(X_K) \to L_{m_0}(X_K) \to \oplus_{|J| = 1} \, \bigl(L_{m_0}(X_K) / (T_j^{\frac{1}{m_0}})_{j \in J}\bigr) \\
& \to \oplus_{|J| = 2} \, \bigl(L_{m_0}(X_K) / (T_j^{\frac{1}{m_0}})_{j \in J}\bigr) \to \cdots \to \oplus_{|J| = r} \, \bigl(L_{m_0}(X_K) / (T_j^{\frac{1}{m_0}})_{j \in J}\bigr) \to 0,
\end{split}
\]
where $J$ in the above direct sums runs over subsets of $\{ 1, \ldots, r \}$. By taking the maximal $K$-subspaces on which $\Gamma_\geom$ acts unipotently \Ref{\Refcf{} \cite[\aRem \logRHremcompatLmzero]{Diao/Lan/Liu/Zhu:lrhrv}}, we obtain an exact complex
\begin{equation}\label{eq-prop-RHl-Hl-Ddl-cpt}
\begin{split}
& 0 \to L(X_K)^\Utext{$\star$-c} \to L(X_K) \to \oplus_{|J| = 1} \, L(X_{J, K}^\partial) \\
& \to \oplus_{|J| = 2} \, L(X_{J, K}^\partial) \to \cdots \to \oplus_{|J| = r} \, L(X_{J, K}^\partial) \to 0.
\end{split}
\end{equation}
Now, by the exactness of \Refeq{\ref{eq-prop-RHl-Hl-Ddl-cpt-infty}}, we have a spectral sequence
\[
E_1^{a, b} := H^b(\Gamma_\geom, \oplus_{|J| = a} \, L_{J, \infty}[W_1, \ldots, W_n]) \Rightarrow H^{a + b}(\Gamma_\geom, L^\Utext{$\star$-c}_\infty[W_1, \ldots, W_n]).
\]
By the above discussions, the $E_1$ page is concentrated on the terms $E_1^{a, 0}$. Hence, the spectral sequence degenerates on the $E_2$ page, and by the exactness of \Refeq{\ref{eq-prop-RHl-Hl-Ddl-cpt}},
\[
H^i({X_\proket}_{/X_K}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{C}_{\log}^\Utext{$\star$-c}) \cong H^i(\Gamma_\geom, L^\Utext{$\star$-c}_\infty[W_1, \ldots, W_n])
\]
is zero, when $i > 0$; and is canonically isomorphic to $L(X_K)^\Utext{$\star$-c}$, when $i = 0$, whose formation is compatible with pullbacks under rational localizations or finite \'etale morphisms. Thus, by comparing Definition \ref{def-unip-twist} and \Refeq{\ref{eq-def-L-X-K-cpt}} using \cite[\aRem \logRHremcompatLmzerores]{Diao/Lan/Liu/Zhu:lrhrv}, we obtain $R\mu'_*(\widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathcal{O}\mathbb{C}_{\log}^\Utext{$\star$-c}) \cong \Hl^\Utext{$\star$-c}(\mathbb{L})$, as desired.
\end{proof}
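To make the above identifications concrete in the simplest case \Pth{an illustration only, under the assumptions that $\mathbb{L} = \mathbb{Z}_p$ is trivial and that $r = 1$}, we may take $m_0 = 1$ and $L_{m_0}(X_K) = R_K$ with the trivial $\Gamma_\geom$-action, so that $L(X_K) = R_K$ and $L(X_{J, K}^\partial) = R_K / (T_1)$ for $J = \{ 1 \}$, and therefore
\[
L(X_K)^\Utext{$\star$-c} = \ker\bigl(R_K \to R_K / (T_1)\bigr) = T_1 \, R_K,
\]
which is consistent with the isomorphism $\Hl^\Utext{$\star$-c}(\mathbb{Z}_p) \cong \Hl(\mathbb{Z}_p)(-D^\Utext{$\star$-c})$ from Remark \ref{rem-unip-twist}.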
\begin{lem}\label{lem-L-!-DdR-to-RH}
The canonical morphism
\begin{equation}\label{eq-lem-L-!-DdR-to-RH}
D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L}) \widehat{\otimes}_k B_\dR \to \RHl^\Utext{$\star$-c}(\mathbb{L})
\end{equation}
induced by \cite[(\logRHeqlemDdltoRHlfilstrict)]{Diao/Lan/Liu/Zhu:lrhrv} is injective and strictly compatible with the filtrations on both sides. That is, the induced morphism
\begin{equation}\label{eq-lem-L-!-DdR-to-RH-gr}
\OP{gr}^r\bigl(D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L}) \widehat{\otimes}_k B_\dR\bigr) \to \OP{gr}^r\bigl(\RHl^\Utext{$\star$-c}(\mathbb{L})\bigr)
\end{equation}
is injective, for each $r$. If $\mathbb{L}|_U$ is a \emph{de Rham} $\mathbb{Z}_p$-local system over $U_\et$, then both \Refeq{\ref{eq-lem-L-!-DdR-to-RH}} and \Refeq{\ref{eq-lem-L-!-DdR-to-RH-gr}} are isomorphisms, and $\OP{gr} D_{\dR, \log}^\Utext{$\star$-c}(\mathbb{L})$ is a vector bundle of rank $\rank_{\mathbb{Q}_p}(\mathbb{L})$.
\end{lem}
\begin{proof}
These follow from \cite[\aLem \logRHlemDdltoRHlfilstrict, and \aCors \logRHcorDdltoRHlfilstrict{} and \logRHcorDdlgrvecbdl]{Diao/Lan/Liu/Zhu:lrhrv}, and from Proposition \ref{prop-RHl-Hl-Ddl-cpt} and its proof.
\end{proof}
\begin{lem}\label{lem-L-!-coh-comp-arith}
Suppose that $\mathbb{L}|_U$ is a \emph{de Rham} $\mathbb{Z}_p$-local system over $U_\et$. For each $i \geq 0$, we have a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-lem-L-!-coh-comp-arith-dR}
H_{\dR, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \RH(\mathbb{L})\bigr) \cong H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k B_\dR,
\end{equation}
which induces \Pth{by taking $\OP{gr}^0$} a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-lem-L-!-coh-comp-arith-Hdg}
H_{\Hi, \Utext{$\star$-c}}^i\bigl(U_{K, \an}, \Hc(\mathbb{L})\bigr) \cong \oplus_{a + b = i} \Bigl(H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k K(-a)\Bigr).
\end{equation}
\end{lem}
\begin{proof}
These follow from Proposition \ref{prop-RHl-Hl-Ddl-cpt}, Lemmas \ref{lem-unip-twist-qis} and \ref{lem-L-!-DdR-to-RH}, and the same arguments as in the proofs of \cite[\aLems \logRHlemlogRHarithcompdR{} and \logRHlemlogRHarithcompHT]{Diao/Lan/Liu/Zhu:lrhrv}.
\end{proof}
We are ready to complete the following:
\begin{proof}[Proof of Theorem \ref{thm-L-!-coh-comp}]
By applying $R\mu'_*$ to the exact sequences in Proposition \ref{prop-Poin-lem}, and by the projection formula, Lemma \ref{lem-unip-twist-qis}, and Proposition \ref{prop-RHl-Hl-Ddl-cpt}, we can replace the targets of the isomorphisms in Proposition \ref{prop-L-!-coh-comp-proket} with $H_{\dR, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \RH(\mathbb{L})\bigr)$ and $H_{\Hi, \Utext{$\star$-c}}^i\bigl(U_{K, \an}, \Hc(\mathbb{L})\bigr)$, respectively, and obtain the canonical isomorphisms \Refeq{\ref{eq-thm-L-!-coh-comp-dR-RH}} and \Refeq{\ref{eq-thm-L-!-coh-comp-Hi}}. Consequently, by Lemma \ref{lem-L-!-coh-comp-arith}, we also obtain the canonical isomorphisms \Refeq{\ref{eq-thm-L-!-coh-comp-dR}} and \Refeq{\ref{eq-thm-L-!-coh-comp-Hdg}}. Finally, these isomorphisms imply that
\[
\dim_k\bigl(H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)\bigr) = \sum_{a + b = i} \dim_k\bigl(H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr)\bigr),
\]
and hence the spectral sequence \Refeq{\ref{eq-thm-L-!-coh-comp-spec-seq}} degenerates on the $E_1$ page, as desired.
\end{proof}
In the remainder of this subsection, let us provide some criteria for cohomology with different partial compact support conditions to be isomorphic to each other.
\begin{lem}\label{lem-dR-qis-geom}
Let $I^+_\geom$ denote the subset of $I$ consisting of $j \in I$ such that the eigenvalues of the residue of $\RHl(\mathbb{L})$ along $D_j$ are all in $\mathbb{Q} \cap (0, 1)$ \Pth{\ie, nonzero}. Let $E = \sum_{j \in I} \, c_j D_j$, where $c_j \in \mathbb{Z}$, be a divisor satisfying the following conditions:
\begin{enumerate}
\item If $j \in I^+_\geom$, then there is no condition on $c_j$.
\item If $j \in I^\Utext{$\star$-c} - I^+_\geom$, then $c_j \leq 0$.
\item If $j \in I^\Utext{$\star$-nc} - I^+_\geom$, then $c_j \geq 0$.
\end{enumerate}
Let us write $E = E^+ - E^-$, where $E^+ := \sum_{j \in I^+_\geom, \, c_j \geq 0} \, c_j D_j + \sum_{j \in I^\Utext{$\star$-nc} - I^+_\geom} \, c_j D_j$ and $E^- := - \sum_{j \in I^+_\geom, \, c_j < 0} \, c_j D_j - \sum_{j \in I^\Utext{$\star$-c} - I^+_\geom} c_j D_j$. Then we have canonical quasi-isomorphisms
\begin{equation}\label{eq-lem-dR-qis-geom}
\begin{split}
\DRl\bigl(\bigl(\RHl(\mathbb{L})\bigr)(-D^\Utext{$\star$-c})\bigr) & \to \DRl\bigl(\bigl(\RHl(\mathbb{L})\bigr)(-D^\Utext{$\star$-c} + E^+)\bigr) \\
& \to \DRl\bigl(\bigl(\RHl(\mathbb{L})\bigr)(-D^\Utext{$\star$-c} + E)\bigr),
\end{split}
\end{equation}
which induce a canonical isomorphism
\[
H_{\dR, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \RH(\mathbb{L})\bigr) \cong H^i\bigl(\mathcal{X}, \DRl\bigl((\RHl(\mathbb{L}))(-D^\Utext{$\star$-c} + E)\bigr)\bigr).
\]
\end{lem}
\begin{proof}
By the same argument as in the proof of \cite[\aLem 2.7]{Esnault/Viehweg:1992-LVT-B}, for any $c \in \mathbb{Z}$, the eigenvalues of the residue of $\bigl(\RHl(\mathbb{L})\bigr)(c D_j)$ along $D_j$ are the corresponding eigenvalues of $\RHl(\mathbb{L})$ minus $c$. Since the eigenvalues of the residues of $\RHl(\mathbb{L})$ are all in $\mathbb{Q} \cap [0, 1)$, the first \Pth{\resp second} canonical morphism in \Refeq{\ref{eq-lem-dR-qis-geom}} is a quasi-isomorphism, by the same argument as in the proof of \cite[Properties 2.9 a)]{Esnault/Viehweg:1992-LVT-B} \Ref{\resp \cite[Properties 2.9 b)]{Esnault/Viehweg:1992-LVT-B}}, because none of the eigenvalues of the residues of $\bigl(\RHl(\mathbb{L})\bigr)(c_j D_j)$ are in $\mathbb{Z}_{\geq 1}$ \Pth{\resp $\mathbb{Z}_{\leq 0}$}, by the choice of $E^+$ \Pth{\resp $E^-$}.
\end{proof}
\begin{cor}\label{cor-dR-qis-geom}
Let $I^+_\geom$ be as in Lemma \ref{lem-dR-qis-geom}. Suppose
\begin{equation}\label{eq-cor-dR-qis-geom-cond}
I^\Utext{$\star$-c} - I^+_\geom \subset I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \cup I^+_\geom \subset I.
\end{equation}
Then, for each $i \geq 0$, we have a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-cor-dR-qis-geom}
H_{\dR, \Utext{$\star$-c}}^i\bigl(\mathcal{U}, \RH(\mathbb{L})\bigr) \cong H_{\dR, \Utext{$\circ$-c}}^i\bigl(\mathcal{U}, \RH(\mathbb{L})\bigr),
\end{equation}
which induces \Pth{by taking $\OP{gr}^0$} a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-cor-dR-qis-geom-Hdg}
H_{\Hi, \Utext{$\star$-c}}^i\bigl(U_{K, \an}, \Hc(\mathbb{L})\bigr) \cong H_{\Hi, \Utext{$\circ$-c}}^i\bigl(U_{K, \an}, \Hc(\mathbb{L})\bigr).
\end{equation}
Since $K$ is a field extension of $\mathbb{Q}_p$, by Theorem \ref{thm-L-!-coh-comp}, for $\mathbb{L}_{\mathbb{Q}_p} := \mathbb{L} \otimes_{\mathbb{Z}_p} \mathbb{Q}_p$, we also obtain a canonical $\Gal(\AC{k} / k)$-equivariant isomorphism
\begin{equation}\label{eq-cor-dR-qis-geom-et}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \cong H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}).
\end{equation}
\end{cor}
\begin{proof}
Since $I^\Utext{$\star$-c} - I^+_\geom = I^\Utext{$\circ$-c} - I^+_\geom$ and $I^\Utext{$\star$-c} \cup I^+_\geom = I^\Utext{$\circ$-c} \cup I^+_\geom$, we may assume that $I^\Utext{$\circ$-c} = I^\Utext{$\star$-c} - I^+_\geom \subset I^\Utext{$\star$-c}$, in which case there are compatible canonical morphisms from the cohomology with compact support condition defined by $\Utext{$\star$-c}$ to that defined by $\Utext{$\circ$-c}$, and apply Lemma \ref{lem-dR-qis-geom} and Theorem \ref{thm-L-!-coh-comp}.
\end{proof}
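For instance \Pth{two extreme special cases, recorded only for illustration}, if $I^\Utext{$\star$-c} \subset I^+_\geom$, then we may take $I^\Utext{$\circ$-c} = \emptyset$ in Corollary \ref{cor-dR-qis-geom}, so that the above cohomology groups with partial compact support along $D^\Utext{$\star$-c}$ agree with the corresponding usual cohomology groups \Pth{with no compact support condition at all}; similarly, if $I^\Utext{$\star$-nc} \subset I^+_\geom$, then we may take $I^\Utext{$\circ$-c} = I$, so that they agree with the corresponding cohomology groups with full compact support.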
\begin{lem}\label{lem-dR-qis-arith}
Let $I^+_\arith$ denote the subset of $I$ consisting of $j \in I$ such that the eigenvalues of the residue of $D_{\dR, \log}(\mathbb{L})$ along $D_j$ are all in $\mathbb{Q} \cap (0, 1)$ \Pth{\ie, nonzero}. Let $E = \sum_{j \in I} \, c_j D_j$, where $c_j \in \mathbb{Z}$, be a divisor satisfying the following conditions:
\begin{enumerate}
\item If $j \in I^+_\arith$, then there is no condition on $c_j$.
\item If $j \in I^\Utext{$\star$-c} - I^+_\arith$, then $c_j \leq 0$.
\item If $j \in I^\Utext{$\star$-nc} - I^+_\arith$, then $c_j \geq 0$.
\end{enumerate}
Let us write $E = E^+ - E^-$, where $E^+ := \sum_{j \in I^+_\arith, \, c_j \geq 0} \, c_j D_j + \sum_{j \in I^\Utext{$\star$-nc} - I^+_\arith} \, c_j D_j$ and $E^- := - \sum_{j \in I^+_\arith, \, c_j < 0} \, c_j D_j - \sum_{j \in I^\Utext{$\star$-c} - I^+_\arith} c_j D_j$. Then we have canonical quasi-isomorphisms
\begin{equation}\label{eq-lem-dR-qis-arith}
\begin{split}
\DRl\bigl(\bigl(D_{\dR, \log}(\mathbb{L})\bigr)(-D^\Utext{$\star$-c})\bigr) & \to \DRl\bigl(\bigl(D_{\dR, \log}(\mathbb{L})\bigr)(-D^\Utext{$\star$-c} + E^+)\bigr) \\
& \leftarrow \DRl\bigl(\bigl(D_{\dR, \log}(\mathbb{L})\bigr)(-D^\Utext{$\star$-c} + E)\bigr),
\end{split}
\end{equation}
which induce a canonical isomorphism
\[
H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \cong H^i\bigl(X_\an, \DRl\bigl((D_{\dR, \log}(\mathbb{L}))(-D^\Utext{$\star$-c} + E)\bigr)\bigr).
\]
\end{lem}
\begin{proof}
As in the proof of Lemma \ref{lem-dR-qis-geom}, these follow from the same arguments as in the proofs of \cite[\aLem 2.7 and Properties 2.9]{Esnault/Viehweg:1992-LVT-B}.
\end{proof}
\begin{cor}\label{cor-dR-qis-arith}
Let $I^+_\arith$ be as in Lemma \ref{lem-dR-qis-arith}. Suppose
\begin{equation}\label{eq-cor-dR-qis-arith-cond}
I^\Utext{$\star$-c} - I^+_\arith \subset I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \cup I^+_\arith \subset I.
\end{equation}
Then, for each $i \geq 0$, we have a canonical isomorphism
\begin{equation}\label{eq-cor-dR-qis-arith}
H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \cong H_{\dR, \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr),
\end{equation}
which induces, for each $a \in \mathbb{Z}$, \Pth{by taking $\OP{gr}^a$} a canonical isomorphism
\begin{equation}\label{eq-cor-dR-qis-arith-Hdg}
H_{\Hdg, \Utext{$\star$-c}}^{a, i - a}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \cong H_{\Hdg, \Utext{$\circ$-c}}^{a, i - a}\bigl(U_\an, D_\dR(\mathbb{L})\bigr).
\end{equation}
\end{cor}
\begin{proof}
Since $I^\Utext{$\star$-c} - I^+_\arith = I^\Utext{$\circ$-c} - I^+_\arith$ and $I^\Utext{$\star$-c} \cup I^+_\arith = I^\Utext{$\circ$-c} \cup I^+_\arith$, we may assume that $I^\Utext{$\circ$-c} = I^\Utext{$\star$-c} - I^+_\arith$, and apply Lemma \ref{lem-dR-qis-arith} and Theorem \ref{thm-L-!-coh-comp}.
\end{proof}
\section{Trace morphisms and Poincar\'e duality}\label{sec-trace}
In this section, we shall retain the setting of Section \ref{sec-dR-comp-cpt-main} \Ref{except in the review in Section \ref{sec-trace-coh}}. The goal of this section is to construct the trace morphisms for de Rham and \'etale cohomology with compact support in degree $2d$, and show that they are compatible with trace morphisms of lower dimensions via Gysin morphisms, and with each other under the de Rham comparison isomorphism in Theorem \ref{thm-L-!-coh-comp}.
Throughout the section, we shall denote by $\mathbb{L}$ a $\mathbb{Z}_p$-local system on $X_\ket$. Recall that, by \cite[\aCor \logadiccorpuritylisse]{Diao/Lan/Liu/Zhu:laske}, any $\mathbb{Z}_p$-local system over $U_\et$ uniquely extends over $X_\ket$ by pushforward. Thus, our results for $\mathbb{L}$ over $X_\ket$ are applicable to all $\mathbb{Z}_p$-local systems over $U_\et$, despite the notation.
\subsection{Serre duality for coherent cohomology}\label{sec-trace-coh}
In this subsection, we review the trace morphism and Serre duality for the coherent cohomology of proper smooth rigid analytic varieties, and record some of their basic properties.
Let $Y$ be a rigid analytic variety \Pth{regarded as an adic space} over $k$. Let $y$ be a classical point of $Y$ \Pth{defined by a finite extension of $k$} and $\mathcal{F}$ a coherent sheaf on $Y$. Let $(A, \ideal{m}, M)$ be the $\ideal{m}_y$-adic completion of $(\mathcal{O}_{Y, y}, \ideal{m}_y, \mathcal{F}_y)$, where $\ideal{m}_y$ is the maximal ideal of $\mathcal{O}_{Y, y}$. Then $A$ is a noetherian complete local ring with residue field $k(y) = A / \ideal{m}$ a finite extension of $k$, and $M$ is a finitely generated $A$-module.
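For instance \Pth{a simple illustration of this setup}, if $Y = \OP{Spa}(k\Talg{T}, k^+\Talg{T})$ is the closed unit disc and $y$ is the classical point defined by $T = 0$, then $k(y) = k$ and $A \cong k[[T]]$, with $\ideal{m} = (T)$.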
\begin{lem}\label{lem-alg-to-an-loc-coh}
Let $H^i_{\ideal{m}}(M)$ denote the usual algebraic local cohomology \Ref{see, for example, \cite[\aCh 1]{Brodmann/Sharp:1998-LC}}. Then there is a canonical morphism
\begin{equation}\label{eq-lem-alg-to-an-loc-coh}
H^i_{\ideal{m}}(M) \to H^i(Y, \mathcal{F}).
\end{equation}
\end{lem}
\begin{proof}
For any closed subset $Z \subset Y$, let $H_Z^i(Y_\an, \,\cdot\,)$ denote the usual sheaf cohomology with support in $Z$; \ie, the $i$-th derived functor of
\[
\mathcal{F} \mapsto \Gamma_Z(Y_\an, \mathcal{F}) := \ker\bigl(\Gamma(Y_\an, \mathcal{F}) \to \Gamma((Y - Z)_\an, \mathcal{F})\bigr).
\]
It suffices to construct a canonical morphism
\begin{equation}\label{eq-lem-alg-to-an-loc-coh-pre}
H^i_{\ideal{m}}(M) \to H^i_{\{ y \}}(Y_\an, \mathcal{F}).
\end{equation}
For this purpose, we may replace $Y$ with an open affinoid subspace $U = \OP{Spa}(R, R^\circ)$ containing $y$, because $H^i_{\{ y \}}(Y_\an, \mathcal{F}) \cong H^i_{\{ y \}}(U_\an, \mathcal{F}|_U)$. Let $N := \Gamma(Y_\an, \mathcal{F})$, which is a finite $R$-module because $\mathcal{F}$ is coherent. Let $V := \OP{Spec}(R)$, which is a noetherian scheme, and let $\pi: (Y, \mathcal{O}_Y) \to (V, \mathcal{O}_V)$ denote the morphism of ringed spaces. Note that $\pi^{-1}\bigl(\pi(y)\bigr) = \{ y \}$; that $\mathcal{O}_Y$ is flat over $\pi^{-1}(\mathcal{O}_V)$; and that, for the coherent $\mathcal{O}_V$-module $\widetilde{N}$ associated with the above $N$, we have $\mathcal{F} \cong \mathcal{O}_Y \otimes_{\pi^{-1}(\mathcal{O}_V)} \pi^{-1}(\widetilde{N})$. By \cite[\aThms 1.3.8 and 4.3.2]{Brodmann/Sharp:1998-LC}, the local cohomology for a finitely generated module over a noetherian local ring is torsion, and hence its formation is compatible with the formation of completions \Pth{with respect to powers of the maximal ideal, as usual}. Hence, by \cite[\aThm 2.8 and its proof]{Hartshorne:1967-LC}, we have $H^i_{\ideal{m}}(M) \cong H_{\{ \pi(y) \}}^i(V, \widetilde{N})$, and the desired morphism \Refeq{\ref{eq-lem-alg-to-an-loc-coh-pre}} is induced by the composition of canonical morphisms $H_{\{ \pi(y) \}}^i(V, \widetilde{N}) \to H_{\{ y \}}^i\bigl(Y_\an, \pi^{-1}(\widetilde N)\bigr) \to H_{\{ y \}}^i(Y_\an, \mathcal{F})$.
\end{proof}
Now let $Y$ be smooth of pure dimension $d$, with $\mathcal{F} = \Omega_Y^d$ the sheaf of top-degree differentials on $Y$. Then the $A$-module $M$ can be identified with the top-degree continuous K\"ahler differentials $\Omega_{A / k}^d$ of $A$ \Pth{with its $\ideal{m}$-adic topology} over $k$.
\begin{lem}\label{lem-Poin-res}
There is the canonical residue morphism
\begin{equation}\label{eq-lem-Poin-res}
\res_y: H^d_{\ideal{m}}(\Omega_{A / k}^d) \to k.
\end{equation}
\end{lem}
\begin{proof}
By choosing local coordinates of $Y$ near $y$, we have compatible isomorphisms $A \cong k(y)[[T_1, \ldots, T_d]]$ and $\Omega_{A / k}^d \cong k(y)[[T_1, \ldots, T_d]] \, dT_1 \wedge \cdots \wedge dT_d$. Accordingly, we have \Ref{\Refcf{} \cite[\aSec 4, \aEx 3]{Hartshorne:1967-LC}}
\[
H^d_{\ideal{m}}(\Omega_{A / k}^d) \cong \Bigl\{ \sum_\alpha \, a_\alpha T^\alpha \, dT_1 \wedge \cdots \wedge dT_d : \alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{Z}_{<0}^d, \, a_\alpha \in k(y) \Bigr\}
\]
\Pth{where the sum is finite}, and the desired morphism \Refeq{\ref{eq-lem-Poin-res}} is the composition of the \Pth{multiple} residue morphism $\sum_{\alpha \in \mathbb{Z}_{<0}^d} \, a_\alpha T^\alpha \, dT_1 \wedge \cdots \wedge dT_d \mapsto a_{(-1, \ldots, -1)}$ with the usual trace morphism $k(y) \to k$, which \Pth{by the chain rule} is independent of the choice of coordinates. \Pth{When $d = 0$, our convention is that $H^d_{\ideal{m}}(\Omega_{A / k}^d) \cong k(y)$ and that the residue morphism reduces to the identity morphism on $k(y)$.}
\end{proof}
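For instance \Pth{an illustration in the simplest nontrivial case, under the assumptions that $d = 1$ and $k(y) = k$}, we have $A \cong k[[T]]$ and
\[
\res_y\Bigl(\bigl(a_{-3} T^{-3} + a_{-2} T^{-2} + a_{-1} T^{-1}\bigr) \, dT\Bigr) = a_{-1},
\]
which is the familiar residue of a meromorphic differential at $y$.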
\begin{thm}[Serre duality]\label{thm-Serre-duality}
Let $Y$ be a smooth proper rigid analytic variety over $k$ of pure dimension $d$. Then there is a unique morphism
\begin{equation}\label{eq-trace-coh-gen}
t_\coh: H^d(Y_\an, \Omega_Y^d) \to k,
\end{equation}
whose pre-composition with \Refeq{\ref{eq-lem-alg-to-an-loc-coh}}, for any classical point $y$, gives the residue morphism \Refeq{\ref{eq-lem-Poin-res}}. We call $t_\coh$ the \emph{trace morphism}. Moreover, by pre-composition with the cup product pairing, it induces the usual Serre duality for coherent cohomology, \ie, a perfect pairing
\begin{equation}\label{eq-thm-Serre-duality-coh}
H^i(Y_\an, \mathcal{F}^\bullet) \times \OP{Ext}_{\mathcal{O}_Y}^{d - i}(\mathcal{F}^\bullet, \Omega_Y^d) \to k,
\end{equation}
for each bounded complex $\mathcal{F}^\bullet$ of coherent $\mathcal{O}_Y$-modules and each $i \in \mathbb{Z}$. As a special case, we have a perfect pairing
\begin{equation}\label{eq-thm-Serre-duality-loc-free}
H^i(Y_\an, \mathcal{F}^\bullet) \times H^{d - i}(Y_\an, \mathcal{F}^{\bullet, \dualsign} \otimes_{\mathcal{O}_Y} \Omega_Y^d) \to k,
\end{equation}
for each bounded complex $\mathcal{F}^\bullet$ of finite locally free $\mathcal{O}_Y$-modules and each $i \in \mathbb{Z}$, where $\mathcal{F}^{\bullet, \dualsign}$ denotes the complex whose $j$-th term is $\dual{(\mathcal{F}^{-j})}$, for each $j \in \mathbb{Z}$. In particular, if $Y$ is geometrically connected, then \Refeq{\ref{eq-trace-coh-gen}} is an isomorphism.
\end{thm}
\begin{proof}
When $\mathcal{F}^\bullet$ is concentrated in degree zero, the perfect pairing \Refeq{\ref{eq-thm-Serre-duality-coh}} follows from \cite[\aThms 5.1.1 and 5.1.2, \aDef 4.2.4, \aLem 4.2.9, and the explicit descriptions in \aSecs 1.2, 1.3, and 2.1]{Beyer:1997-sdcsr}. \Ref{An earlier construction of the Serre duality for proper rigid analytic varieties is in \cite{vanderPut:1992-sdras}, but the more explicit descriptions in \cite{Beyer:1997-sdcsr} allow us to relate the trace morphism there more directly to the residue morphism \Refeq{\ref{eq-lem-Poin-res}} here.} Therefore, by using \cite[\aProp 7.1 (\emph{Lemma on Way-Out Functors})]{Hartshorne:1966-RD} as usual, we also have the perfect pairing \Refeq{\ref{eq-thm-Serre-duality-coh}} for each bounded complex $\mathcal{F}^\bullet$ of coherent $\mathcal{O}_Y$-modules, which specializes to the perfect pairing \Refeq{\ref{eq-thm-Serre-duality-loc-free}} when $\mathcal{F}^\bullet$ is a bounded complex of finite locally free $\mathcal{O}_Y$-modules.
\end{proof}
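For instance \Pth{a basic illustration, not needed in what follows}, when $Y$ is the rigid analytification of $\mathbb{P}^1_k$ and $\mathcal{F}^\bullet = \mathcal{O}_Y$ is concentrated in degree zero, the pairing \Refeq{\ref{eq-thm-Serre-duality-loc-free}} with $i = 0$ is the perfect pairing
\[
H^0(Y_\an, \mathcal{O}_Y) \times H^1(Y_\an, \Omega_Y^1) \to k
\]
between two one-dimensional $k$-vector spaces, and $t_\coh: H^1(Y_\an, \Omega_Y^1) \to k$ is an isomorphism because $Y$ is geometrically connected.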
Let us record the following properties of the trace morphism $t_\coh$ for later use.
\begin{lem}\label{lem-property-trace}
Assume that $Y$ is proper smooth over $k$, of pure dimension $d$.
\begin{enumerate}
\item\label{lem-property-trace-exc} If $f: Y' \to Y$ is a morphism of proper smooth rigid analytic varieties which induces an isomorphism $f^{-1}(U) \to U$ for some open dense rigid analytic subvariety $U$ of $Y$, then the canonical morphism $H^d(Y_\an, \Omega_Y^d) \to H^d(Y'_\an, \Omega_{Y'}^d)$ induced by the canonical morphism $f^*(\Omega_Y^d) \to \Omega_{Y'}^d$ is an isomorphism compatible with the trace morphisms of $Y$ and $Y'$ as in \Refeq{\ref{eq-trace-coh-gen}}.
\item\label{lem-property-trace-Gysin} Let $\imath: Z \Em Y$ be a smooth divisor of $Y$. Then the canonical morphism $H^{d - 1}(Z_\an, \Omega_Z^{d - 1}) \to H^d(Y_\an, \Omega_Y^d)$ induced by the adjunction exact sequence
\begin{equation}\label{eq-lem-property-trace-Gysin}
0 \to \Omega_Y^d \to \Omega_Y^d(Z) \to \imath_*(\Omega_Z^{d - 1}) \to 0
\end{equation}
is compatible with the trace morphisms of $Y$ and $Z$ as in \Refeq{\ref{eq-trace-coh-gen}}.
\end{enumerate}
\end{lem}
\begin{proof}
The assertion \Refenum{\ref{lem-property-trace-exc}} follows from Theorem \ref{thm-Serre-duality} and Lemmas \ref{lem-alg-to-an-loc-coh} and \ref{lem-Poin-res}, because we can determine the trace morphisms as in \Refeq{\ref{eq-trace-coh-gen}} for $Y$ and $Y'$ by choosing sufficiently many $y \in f^{-1}(U) \Mi U$, and because the residue morphisms for the local cohomology of $Y$ and $Y'$ at $y$ can be canonically identified with each other.
As for the assertion \Refenum{\ref{lem-property-trace-Gysin}}, for each point $y \in Z \subset Y$, we may choose local coordinates such that, if $(A, \ideal{m})$ denotes the completion of $(\mathcal{O}_{Y, y}, \ideal{m}_y)$ as in the beginning of this subsection, and if $(B, \ideal{n})$ denotes the corresponding completion of $(\mathcal{O}_{Z, y}, \ideal{m}_y \mathcal{O}_{Z, y})$, then we have compatible isomorphisms $A \cong k(y)[[T_1, \ldots, T_d]]$ and $B \cong k(y)[[T_1, \ldots, T_{d - 1}]]$, with maximal ideals $\ideal{m}$ and $\ideal{n}$ generated by $T_1, \ldots, T_d$ and by $T_1, \ldots, T_{d - 1}$, respectively, together with the canonical short exact sequence $0 \to \Omega_{A / k}^d \to \frac{1}{T_d}\Omega_{A / k}^d \to \Omega_{B / k}^{d - 1} \to 0$ induced by \Refeq{\ref{eq-lem-property-trace-Gysin}}, which is given by
\[
\begin{split}
0 & \to k(y)[[T_1, \ldots, T_d]] \, dT_1 \wedge \cdots \wedge dT_d \to k(y)[[T_1, \ldots, T_d]] \, dT_1 \wedge \cdots \wedge \tfrac{dT_d}{T_d} \\
& \to k(y)[[T_1, \ldots, T_{d - 1}]] \, dT_1 \wedge \cdots \wedge dT_{d - 1} \to 0
\end{split}
\]
in explicit coordinates. Then the connecting morphism $H_{\ideal{n}}^{d - 1}(\Omega_{B / k}^{d - 1}) \to H_{\ideal{m}}^d(\Omega_{A / k}^d)$ in the associated long exact sequence is \Pth{by an explicit calculation} given by
\[
\begin{split}
& \Bigl\{ \sum_\alpha a_\alpha T^\alpha \, dT_1 \wedge \cdots \wedge dT_{d - 1} : \alpha = (\alpha_1, \ldots, \alpha_{d - 1}) \in \mathbb{Z}_{<0}^{d - 1}, \, a_\alpha \in k(y) \Bigr\} \\
& \to \Bigl\{ \sum_\alpha a_\alpha T^\alpha \, dT_1 \wedge \cdots \wedge dT_d : \alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{Z}_{<0}^d, \, a_\alpha \in k(y) \Bigr\} : f \mapsto f \wedge \tfrac{dT_d}{T_d},
\end{split}
\]
which is compatible with the trace morphisms, by the construction based on \Pth{multiple} residue morphisms in the proof of Lemma \ref{lem-Poin-res}.
\end{proof}
By applying the above results to our setting in the beginning of Section \ref{sec-trace}, we obtain a trace morphism
\begin{equation}\label{eq-trace-coh}
t_\coh: H^d(X_\an, \Omega_X^d) \to k,
\end{equation}
as in \Refeq{\ref{eq-trace-coh-gen}}, which induces by base change from $k$ to $K$ a trace morphism
\begin{equation}\label{eq-trace-coh-K}
t_{\coh, K}: H^d(X_{K, \an}, \Omega_{X_K}^d) \to K,
\end{equation}
which in turn induces the Serre duality for bounded complexes of coherent $\mathcal{O}_{X_K}$-modules \Ref{as in Theorem \ref{thm-Serre-duality}}.
\subsection{Poincar\'e duality for de Rham cohomology}\label{sec-trace-dR}
\begin{thm}\label{thm-Higgs-pairing}
For each $\mathbb{Z}_p$-local system $\mathbb{L}$ on $X_\ket$, the composition of the canonical cup product pairing with \Refeq{\ref{eq-trace-coh-K}} induces a perfect pairing
\begin{equation}\label{eq-thm-Higgs-pairing}
H_{\Hi, \Utext{$\star$-c}}^i\bigl(U_{K, \an}, \Hc(\mathbb{L})\bigr) \times H_{\Hi, \Utext{$\star$-nc}}^{2d - i}\bigl(U_{K, \an}, \Hc\bigl(\dual{\mathbb{L}}(d)\bigr)\bigr) \to K.
\end{equation}
\end{thm}
\begin{proof}
By \cite[\aThm 3.8(i)]{Liu/Zhu:2017-rrhpl}, $\RH\bigl(\dual{\mathbb{L}}(d)\bigr) \cong \dual{\bigl(\RH(\mathbb{L})(-d)\bigr)}$ as filtered vector bundles over $\mathcal{U}$. By the definition of $\RHl(\,\cdot\,)$, we have a canonical morphism $\bigl(\RHl(\mathbb{L})(-d)\bigr) \otimes_{\mathcal{O}_\mathcal{X}} \RHl\bigl(\dual{\mathbb{L}}(d)\bigr) \to \RHl(\mathbb{Q}_p) \cong \mathcal{O}_\mathcal{X}$. Since $\RHl(\mathbb{L})(-d)$ and $\RHl\bigl(\dual{\mathbb{L}}(d)\bigr)$ are filtered vector bundles over $\mathcal{X}$ extending $\RH(\mathbb{L}|_U)(-d)$ and $\RH\bigl(\dual{(\mathbb{L}|_U)}(d)\bigr)$, respectively, we obtain a canonical injective morphism
\begin{equation}\label{eq-thm-Higgs-pairing-mor}
\RHl\bigl(\dual{\mathbb{L}}(d)\bigr) \to \dual{\bigl(\RHl(\mathbb{L})(-d)\bigr)}
\end{equation}
of vector bundles over $\mathcal{X}$, which is compatible with the connections with log poles on both sides, whose cokernel is supported on the boundary $D$. Moreover, the filtration on $\RHl\bigl(\dual{\mathbb{L}}(d)\bigr)$ is induced by the one on $\RH\bigl(\dual{\mathbb{L}}(d)\bigr)$ via the canonical injective morphism $\RHl\bigl(\dual{\mathbb{L}}(d)\bigr) \to \jmath_*\bigl(\RH\bigl(\dual{\mathbb{L}}(d)\bigr)\bigr)$, where $\jmath$ denotes the open immersion $\mathcal{U} \to \mathcal{X}$; and the analogous assertions are true for $\RHl(\mathbb{L})(-d)$ and $\dual{\bigl(\RHl(\mathbb{L})(-d)\bigr)}$. Therefore, \Refeq{\ref{eq-thm-Higgs-pairing-mor}} is strictly compatible with the filtrations on both sides, and induces a canonical morphism
\begin{equation}\label{eq-thm-Higgs-pairing-qis-dR}
\DRl\bigl(\RHl\bigl(\dual{\mathbb{L}}(d)\bigr)(-D^\Utext{$\star$-nc})\bigr) \to \DRl\bigl(\dual{\bigl(\RHl(\mathbb{L})(-d)\bigr)}(-D^\Utext{$\star$-nc})\bigr)
\end{equation}
between the associated log de Rham complexes over $\mathcal{X}$, which is also strictly compatible with the filtrations on both sides.
By comparing the residues of the two sides of \Refeq{\ref{eq-thm-Higgs-pairing-qis-dR}} using \cite[\aThm \logRHthmlogRHgeom(\logRHthmlogRHgeomres) and \aProp \logRHpropconnisomres]{Diao/Lan/Liu/Zhu:lrhrv}, and by the same argument as in the proof of \cite[\aLem 2.10]{Esnault/Viehweg:1992-LVT-B}, we may factor \Refeq{\ref{eq-thm-Higgs-pairing-qis-dR}} as a composition of a series of inclusions $\mathcal{E} \to \mathcal{E}'$ of complexes each of whose cokernel is a two-term complex \Pth{in some degrees $a$ and $a + 1$}
\[
0 \to \Omega_{D_j}^a\bigl(\log (D - D_j)|_{D_j}\bigr) \otimes_{\mathcal{O}_{D_j}} \mathcal{F} \to \Omega_{D_j}^a\bigl(\log (D - D_j)|_{D_j}\bigr) \otimes_{\mathcal{O}_{D_j}} \mathcal{F} \to 0,
\]
where $\mathcal{F}$ is the maximal subsheaf of $\bigl(\dual{\bigl(\RHl(\mathbb{L})(-d)\bigr)}(-D^\Utext{$\star$-nc})\bigr)|_{D_j}$, for some $j$, on which the eigenvalues of residues differ from those of $\bigl(\RHl\bigl(\dual{\mathbb{L}}(d)\bigr)(-D^\Utext{$\star$-nc})\bigr)|_{D_j}$ and hence belong to $\mathbb{Q} \cap (-1, 0)$ \Pth{\resp $\mathbb{Q} \cap (0, 1)$} when $j \in I^\Utext{$\star$-c}$ \Pth{\resp $j \in I^\Utext{$\star$-nc}$}; and where the morphism between the two terms is induced by $(-1)^a$ times the residue morphism and hence is an isomorphism. Thus, \Refeq{\ref{eq-thm-Higgs-pairing-qis-dR}} is a \emph{quasi-isomorphism}, which induces, by taking cohomology and taking $\OP{gr}^0$, a canonical isomorphism
\begin{equation}\label{eq-thm-Higgs-pairing-isom}
H_{\Hi, \Utext{$\star$-nc}}^{2d - i}\bigl(U_{K, \an}, \Hc\bigl(\dual{\mathbb{L}}(d)\bigr)\bigr) \Mi H_{\Hi, \Utext{$\star$-nc}}^{2d - i}\bigl(U_{K, \an}, \dual{\Hc\bigl(\mathbb{L}(-d)\bigr)}\bigr).
\end{equation}
Since $\Omega_X^{\log, \bullet} \cong \dual{(\Omega_X^{\log, d - \bullet})} \otimes_{\mathcal{O}_X} \bigl(\Omega_X^d(D)(d)[-d]\bigr)$ and the same is true after base change, we have an isomorphism of filtered complexes $\Hil\bigl(\Hl(\mathbb{L})(-D^\Utext{$\star$-c})\bigr) \cong \dual{\bigl(\Hil\bigl(\dual{\bigl(\Hl(\mathbb{L})(-d)\bigr)}(-D^\Utext{$\star$-nc})\bigr)\bigr)} \otimes \bigl(\Omega_{X_K}^d(d)[-d]\bigr)$ over $X_K$. Hence, we obtain the desired perfect pairing \Refeq{\ref{eq-thm-Higgs-pairing}} by combining \Refeq{\ref{eq-thm-Higgs-pairing-isom}} with the duality for bounded complexes of finite locally free $\mathcal{O}_{X_K}$-modules \Ref{as in Theorem \ref{thm-Serre-duality}}.
\end{proof}
Since $\Omega_X^d \cong \Omega_X^{\log, d}(-D)$ \Pth{which we already used in the above proof} and hence
\begin{equation}\label{eq-diff-top-deg-coh}
H_{\dR, \cpt}^{2d}\bigl(U_\an, \mathcal{O}_U(d)\bigr) \cong H_{\Hdg, \cpt}^{d, d}\bigl(U_\an, \mathcal{O}_U(d)\bigr) \cong H^d(X_\an, \Omega_X^d)
\end{equation}
\Ref{by Theorem \ref{thm-L-!-coh-comp}, with $\mathbb{L} = \mathbb{Z}_p(d)$ and with $\mathcal{O}_U(d)$ denoting the same underlying $\mathcal{O}_U$-module with the trivial Hodge filtration shifted by $d$}, we obtain the following:
\begin{thm}\label{thm-trace-dR-Hdg}
The trace morphism $t_\coh$ as in \Refeq{\ref{eq-trace-coh}} induces compatible trace morphisms
\begin{equation}\label{eq-thm-trace-dR}
t_\dR: H_{\dR, \cpt}^{2d}\bigl(U_\an, \mathcal{O}_U(d)\bigr) \to k
\end{equation}
and
\begin{equation}\label{eq-thm-trace-Hdg}
t_\Hdg: H_{\Hdg, \cpt}^{d, d}\bigl(U_\an, \mathcal{O}_U(d)\bigr) \to k.
\end{equation}
When $d = 0$ and $X$ is geometrically connected \Pth{\ie, a $k$-point}, the trace morphisms $t_\dR: H_\dR^0(X_\an, \mathcal{O}_X) \to k$ and $t_\Hdg: H_\Hdg^0(X_\an, \mathcal{O}_X) \to k$ are just the canonical isomorphisms given by $H_\dR^0(X_\an, \mathcal{O}_X) \cong H_\Hdg^0(X_\an, \mathcal{O}_X) \cong H^0(X_\an, \mathcal{O}_X) \cong k$. For each $\mathbb{Z}_p$-local system $\mathbb{L}$ on $X_\ket$ such that $\mathbb{L}|_U$ is \emph{de Rham}, the composition of the cup product pairing with \Refeq{\ref{eq-thm-trace-dR}} induces a perfect pairing
\begin{equation}\label{eq-thm-trace-dR-pairing}
H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \times H_{\dR, \Utext{$\star$-nc}}^{2d - i}\bigl(U_\an, D_\dR\bigl(\dual{\mathbb{L}}(d)\bigr)\bigr) \to k,
\end{equation}
which is compatible \Pth{by taking graded pieces} with the perfect pairing
\begin{equation}\label{eq-thm-trace-Hdg-pairing}
H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \times H_{\Hdg, \Utext{$\star$-nc}}^{d - a, d - b}\bigl(U_\an, D_\dR\bigl(\dual{\mathbb{L}}(d)\bigr)\bigr) \to k
\end{equation}
defined by the composition of the cup product pairings with \Refeq{\ref{eq-thm-trace-Hdg}}.
\end{thm}
\begin{proof}
The first two assertions are clear. As for the third one, suppose that $\mathbb{L}|_U$ is de Rham. Since $K$ is a field extension of $k$, we obtain the desired perfect pairing \Refeq{\ref{eq-thm-trace-Hdg-pairing}}, by Theorem \ref{thm-Higgs-pairing} and the comparison isomorphisms as in \Refeq{\ref{eq-lem-L-!-coh-comp-arith-Hdg}}. Since the formation of cup products is compatible with the formation of the $E_1$ pages of the Hodge--de Rham spectral sequences for $H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$ and $H_{\dR, \Utext{$\star$-nc}}^{2d - i}\bigl(U_\an, D_\dR\bigl(\dual{\mathbb{L}}(d)\bigr)\bigr)$, and since these spectral sequences degenerate on the $E_1$ pages by Theorem \ref{thm-L-!-coh-comp}, we also obtain the desired perfect pairing \Refeq{\ref{eq-thm-trace-dR-pairing}}.
\end{proof}
\begin{lem}\label{lem-trace-dR}
The formation of $t_\dR$ in Theorem \ref{thm-trace-dR-Hdg} is compatible with restrictions to open rigid analytic subvarieties of the form $U - X_0 = X - D - X_0$ for some closed rigid analytic subvarieties $X_0$ of $X$. In particular, it is also compatible with any morphism between proper smooth rigid analytic varieties that is an isomorphism over some open dense rigid analytic subvariety \Pth{\eg, any blowup at closed rigid analytic subvarieties}.
\end{lem}
\begin{proof}
By using resolution of singularities \Ref{as in \cite{Bierstone/Milman:1997-cdbml}}, there exists a proper morphism $\pi: X' \to X$ such that $\bigl(\pi^{-1}(D \cup X_0)\bigr)_\red$ \Pth{with its canonical reduced closed subspace structure} is a simple normal crossings divisor, and such that $\pi$ is an isomorphism over $U' = U - X_0$. Thus, by \Refeq{\ref{eq-diff-top-deg-coh}}, it suffices to note that, by Lemma \ref{lem-property-trace}\Refenum{\ref{lem-property-trace-exc}}, the canonical morphism $H^d(X_\an, \Omega_X^d) \to H^d(X'_\an, \Omega_{X'}^d)$ is compatible with the trace morphisms for coherent cohomology as in \Refeq{\ref{eq-trace-coh}}.
\end{proof}
\subsection{Excision and Gysin isomorphisms}\label{sec-exc-Gysin}
In order to deduce Poincar\'e duality of \Pth{rational} \'etale cohomology from the Poincar\'e duality for de Rham and Higgs cohomology, we shall establish in this subsection some compatibilities between the comparison isomorphisms and the excision and Gysin isomorphisms defined by complements of smooth divisors.
Recall that $U = X - D$ with $D = \cup_{j \in I} \, D_j$. Let us begin with the excision isomorphisms between top-degree cohomology.
\begin{lem}\label{lem-exc-isom}
Let $Z = D_{j_0}$ for some $j_0 \in I$, let $D' := \cup_{j \in I - \{ j_0 \}} \, D_j$, and let $U' := X - D'$, so that $U = U' - W$ for some smooth closed rigid analytic subvariety $W = U' \cap Z$ of $U'$. Let $\jmath_U: U \to U'$ and $\imath_W: W \to U'$ denote the canonical open immersion and closed immersion \Pth{of underlying adic spaces, without log structures}, respectively. Then we have the excision short exact sequence
\begin{equation}\label{eq-lem-exc-isom-ex-seq}
0 \to \jmath_{U, \et, !}\bigl(\mathbb{Z}_p(d)\bigr) \to \mathbb{Z}_p(d) \to \imath_{W, \et, *}\bigl(\mathbb{Z}_p(d)\bigr) \to 0
\end{equation}
over $U'_{K, \et}$, which induces an isomorphism
\begin{equation}\label{eq-lem-exc-isom}
H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Z}_p(d)\bigr) \Mi H_{\et, \cpt}^{2d}\bigl(U'_K, \mathbb{Z}_p(d)\bigr).
\end{equation}
By composing such isomorphisms, we have $H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Z}_p(d)\bigr) \Mi H_\et^{2d}\bigl(X_K, \mathbb{Z}_p(d)\bigr)$.
\end{lem}
\begin{proof}
This is because $H_{\et, \cpt}^i\bigl(W_K, \mathbb{Z}_p(d)\bigr) = 0$ for $i > 2\dim(W) = 2d - 2$ in the long exact sequence associated with \Refeq{\ref{eq-lem-exc-isom-ex-seq}}, by Theorem \ref{thm-L-!-prim-comp} \Pth{which implies the analogous vanishing result for $\mathbb{Z}_p$-local systems, by standard arguments}.
\end{proof}
\begin{prop}\label{prop-exc-dR-comp}
With the same setting as in Lemma \ref{lem-exc-isom}, the isomorphism \Refeq{\ref{eq-lem-exc-isom}} extends to a commutative diagram
\begin{equation}\label{eq-prop-exc-dR-comp}
\xymatrix{ {H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Z}_p(d)\bigr)} \ar[r]^-\sim \ar@{^(->}[d] & {H_{\et, \cpt}^{2d}\bigl(U'_K, \mathbb{Z}_p(d)\bigr)} \ar@{^(->}[d] \\
{H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Z}_p(d)\bigr) \otimes_{\mathbb{Z}_p} B_\dR} \ar[r]^-\sim \ar[d] & {H_{\et, \cpt}^{2d}\bigl(U'_K, \mathbb{Z}_p(d)\bigr) \otimes_{\mathbb{Z}_p} B_\dR} \ar[d] \\
{H_{\dR, \cpt}^{2d}\bigl(U_\an, \mathcal{O}_U(d)\bigr) \otimes_k B_\dR} \ar[r]^-\sim & {H_{\dR, \cpt}^{2d}\bigl(U'_\an, \mathcal{O}_{U'}(d)\bigr) \otimes_k B_\dR} \\
{H_{\dR, \cpt}^{2d}\bigl(U_\an, \mathcal{O}_U(d)\bigr)} \ar[r]^-\sim \ar@{^(->}[u] & {H_{\dR, \cpt}^{2d}\bigl(U'_\an, \mathcal{O}_{U'}(d)\bigr)} \ar@{^(->}[u] }
\end{equation}
Here the fourth row \Pth{at the bottom} is the excision isomorphism induced by the long exact sequence associated with the excision short exact sequence
\begin{equation}\label{eq-prop-exc-dR-comp-ex-seq-dR}
\begin{split}
0 & \to \Omega_X^\bullet(\log D)(-D) \to \Omega_X^\bullet(\log D')(-D') \\
& \to \imath_{Z, \an, *}\bigl(\Omega_Z^\bullet\bigl(\log (D'|_Z)\bigr)(-D'|_Z)\bigr) \to 0
\end{split}
\end{equation}
for de Rham complexes over $X_\an$, where $\imath_Z: Z \to X$ denotes the canonical closed immersion of adic spaces, and where $D'|_Z := D' \cap Z = \cup_{j \in I - \{ j_0 \}} \, (D_j \cap Z)$; this induced morphism is an isomorphism because
\[
H_{\dR, \cpt}^i\bigl(W_\an, \mathcal{O}_W(d)\bigr) \cong H^i\bigl(Z_\an, \Omega_Z^\bullet\bigl(\log (D'|_Z)\bigr)(-D'|_Z)(d)\bigr) = 0
\]
for $i > 2\dim(W) = 2\dim(Z) = 2d - 2$. Moreover, this isomorphism at the bottom row is compatible with the trace morphisms, by Lemma \ref{lem-trace-dR}.
\end{prop}
\begin{proof}
Let $X'$ denote the log adic space with the same underlying space as $X$, but with the log structure induced by the normal crossings divisor $D'$ as in \cite[\aEx \logRHexlogadicspncd]{Diao/Lan/Liu/Zhu:lrhrv}. Let $Z$ be equipped with the log structure induced by $D'|_Z$, so that we have a canonical closed immersion of log adic spaces $\imath_Z': Z \to X'$. For simplicity, we shall still write $\imath_{Z, \an}: Z_\an \to X_\an$ instead of $\imath_{Z, \an}': Z_\an \to X'_\an$. Let $\varepsilon: X \to X'$ denote the canonical morphism. Let $\jmath_U: U \to X$, $\jmath_U': U \to X'$, $\jmath_{U'}: U' \to X'$, and $\jmath_W: W \to Z$ denote the canonical open immersions. Then the short exact sequence \Refeq{\ref{eq-lem-exc-isom-ex-seq}} induces \Pth{and is induced by} a short exact sequence
\begin{equation}\label{eq-prop-exc-dR-comp-ex-seq-ket}
0 \to \jmath'_{U, \ket, !}\bigl(\mathbb{Z}_p(d)\bigr) \to \jmath_{U', \ket, !}\bigl(\mathbb{Z}_p(d)\bigr) \to \imath_{Z, \ket, *}' \, \jmath_{W, \ket, !}\bigl(\mathbb{Z}_p(d)\bigr) \to 0
\end{equation}
over $X'_{K, \ket}$, which induces \Refeq{\ref{eq-lem-exc-isom}} and the first row of \Refeq{\ref{eq-prop-exc-dR-comp}}, and we have
\begin{equation}\label{eq-prop-exc-dR-comp-ex-seq-ket-ch}
\jmath'_{U, \ket, !}\bigl(\mathbb{Z}_p(d)\bigr) \Mi R\varepsilon_{\ket, *} \, \jmath_{U, \ket, !}\bigl(\mathbb{Z}_p(d)\bigr) \cong \varepsilon_{\ket, *} \, \jmath_{U, \ket, !}\bigl(\mathbb{Z}_p(d)\bigr),
\end{equation}
by Lemma \ref{lem-L-!} and Remark \ref{rem-def-H-c-conv}.
Let $\varpi \in K^{\flat+}$ be such that $\varpi^\sharp = p$. By Lemma \ref{lem-from-ket-to-proket} and \cite[\aProp \logadicpropproketvsketadj]{Diao/Lan/Liu/Zhu:laske}, the morphism
\[
\jmath'_{U, \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr) \to \jmath_{U', \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr)
\]
\Ref{induced by \Refeq{\ref{eq-prop-exc-dR-comp-ex-seq-ket}}} is the pushforward of the morphism
\begin{equation}\label{eq-prop-exc-dR-comp-mor-proket}
\nu_{X'}^{-1} \, \jmath'_{U, \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr) \to \nu_{X'}^{-1} \, \jmath_{U', \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr)
\end{equation}
over $X'_{K, \proket}$, which admits compatible morphisms to a morphism
\[
\begin{split}
& \Bigl(\nu_{X'}^{-1} \, \jmath'_{U, \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr)\Bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X'} / (p^m, [\varpi^n])\bigr) \\
& \to \Bigl(\nu_{X'}^{-1} \, \jmath_{U', \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr)\Bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X'} / (p^m, [\varpi^n])\bigr)
\end{split}
\]
over $X'_{K, \proket}$, for each $m \geq 1$ and each $n \geq 1$, and we have
\[
\begin{split}
& \Bigl(\nu_{X'}^{-1} \, \jmath'_{U, \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr)\Bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X'} / (p^m, [\varpi^n])\bigr) \\
& \Mi \Bigl(R\varepsilon_{\proket, *} \, \nu_X^{-1} \, \jmath_{U, \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr)\Bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X'} / (p^m, [\varpi^n])\bigr) \\
& \Mi R\varepsilon_{\proket, *}\Bigl(\nu_X^{-1} \, \jmath_{U, \ket, !}\bigl((\mathbb{Z} / p^m)(d)\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X} / (p^m, [\varpi^n])\bigr)\Bigr)
\end{split}
\]
over $X'_{K, \proket}$, by induction on $m$ and $n$ based on \cite[\aLem \logadiclemketmorOplusp]{Diao/Lan/Liu/Zhu:laske}. Therefore, by Lemmas \ref{lem-cplx-AAinf} and \ref{lem-BBdRp-cpt}, and by taking derived limits and inverting $p$, we see that the derived limit of \Refeq{\ref{eq-prop-exc-dR-comp-mor-proket}} also admits compatible morphisms to a morphism
\begin{equation}\label{eq-prop-exc-dR-comp-ex-seq-proket-BBdR}
R\varepsilon_{\proket, *}\bigl(\widehat{\mathbb{Z}}_p(d) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\Utext{$\star$-c}\bigr) \to \widehat{\mathbb{Z}}_p(d) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X'}^\Utext{$\circ$-c},
\end{equation}
where $\BBdRX{X}^\Utext{$\star$-c}$ and $\BBdRX{X'}^\Utext{$\circ$-c}$ are as in Definition \ref{def-BBdRp-cpt}, with $I^\Utext{$\star$-c} = I$ and $I^\Utext{$\circ$-c} = I - \{ j_0 \}$, respectively. Then \Refeq{\ref{eq-prop-exc-dR-comp-ex-seq-proket-BBdR}} induces the second row of \Refeq{\ref{eq-prop-exc-dR-comp}}, by Proposition \ref{prop-L-!-coh-comp-proket}, which is an isomorphism because the first row is.
Let $\mu'_X: {X_\proket}_{/X_K} \to X_\an$ and $\mu'_{X'}: {X'_\proket}_{/X'_K} \to X'_\an \cong X_\an$ denote the canonical morphisms of sites, so that $R\mu'_{X, *} \cong R\mu'_{X', *} \, R\varepsilon_{\proket, *}$. By Proposition \ref{prop-Poin-lem} and the projection formula, by applying $R\mu'_{X', *}$ to \Refeq{\ref{eq-prop-exc-dR-comp-ex-seq-proket-BBdR}}, and by Proposition \ref{prop-RHl-Hl-Ddl-cpt} and Remark \ref{rem-unip-twist}, we obtain the canonical morphism
\begin{equation}\label{eq-prop-exc-dR-comp-mor-dR-BdR}
\bigl(\Omega_X^\bullet(\log D)(-D)(d)\bigr) \widehat{\otimes}_k B_\dR \to \bigl(\Omega_X^\bullet(\log D')(-D')(d)\bigr) \widehat{\otimes}_k B_\dR
\end{equation}
of complexes over $\mathcal{X}$. Note that, by construction, this is part of the pullback of \Refeq{\ref{eq-prop-exc-dR-comp-ex-seq-dR}}. Therefore, this \Refeq{\ref{eq-prop-exc-dR-comp-mor-dR-BdR}} in turn induces the third row of \Refeq{\ref{eq-prop-exc-dR-comp}}, which can be compatibly identified with the second row by the comparison isomorphisms in Theorem \ref{thm-L-!-coh-comp}; and the whole diagram \Refeq{\ref{eq-prop-exc-dR-comp}} is commutative, with the fourth row given by the excision isomorphism for de Rham cohomology, as desired.
\end{proof}
Next, let us consider the Gysin isomorphisms between top-degree cohomology.
\begin{rk}\label{rem-Gysin-seq}
Suppose that $I = \{ j_0 \}$ is a singleton, so that $D = D_{j_0}$ is an irreducible smooth divisor by assumption. Suppose moreover that $X_K$ and $D_K$ are connected, in which case $X$ and $D$ are both geometrically connected. Let $\jmath_U: U \to X$ and $\imath_D: D \to X$ denote the canonical open and closed immersions \Pth{of underlying adic spaces, without log structures}. Consider the canonical distinguished triangle
\begin{equation}\label{eq-rem-Gysin-seq-pre}
\tau_{\leq 0} \, R\jmath_{U, \et, *}\bigl(\mathbb{Z}_p(d)\bigr) \to R\jmath_{U, \et, *}\bigl(\mathbb{Z}_p(d)\bigr) \to \tau_{\geq 1} \, R\jmath_{U, \et, *}\bigl(\mathbb{Z}_p(d)\bigr) \Mapn{+1}
\end{equation}
over $X_{K, \et}$. By canonically identifying the two truncations in \Refeq{\ref{eq-rem-Gysin-seq-pre}} using \cite[\aLem \logadiclemkettoetconst]{Diao/Lan/Liu/Zhu:laske}, we obtain a canonical distinguished triangle
\begin{equation}\label{eq-rem-Gysin-seq}
\mathbb{Z}_p(d) \to R\jmath_{U, \et, *}\bigl(\mathbb{Z}_p(d)\bigr) \to \imath_{D, \et, *}\bigl(\mathbb{Z}_p(d - 1)\bigr)[-1] \Mapn{+1}.
\end{equation}
Note that, because of the proof of \cite[\aLem \logadiclemkettoetconst]{Diao/Lan/Liu/Zhu:laske}, this is compatible \Ref{via \cite[\aProp 2.1.4 and \aThm 3.8.1]{Huber:1996-ERA}} with the algebraic construction in \cite[\aSec 4]{Faltings:2002-aee}. This is also consistent with the results in \cite[\aSec 3.9]{Huber:1996-ERA}.
\end{rk}
\begin{lem}\label{lem-Gysin-isom}
With the same setting as in Remark \ref{rem-Gysin-seq}, the distinguished triangle \Refeq{\ref{eq-rem-Gysin-seq}} induces the Gysin isomorphism
\begin{equation}\label{eq-lem-Gysin-isom}
H_\et^{2d - 2}\bigl(D_K, \mathbb{Q}_p(d - 1)\bigr) \Mi H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr).
\end{equation}
\end{lem}
\begin{proof}
Since $X$ and $D$ are geometrically connected, we have $H^0(X_\an, \mathcal{O}_X) \cong k$ and $H^0(D_\an, \mathcal{O}_D) \cong k$. Moreover, we have a long exact sequence
\[
0 \to H^0\bigl(X_\an, \mathcal{O}_X(-D)\bigr) \to H^0(X_\an, \mathcal{O}_X) \to H^0(D_\an, \mathcal{O}_D) \to \cdots,
\]
which forces $H^0\bigl(X_\an, \mathcal{O}_X(-D)\bigr) = 0$ because $H^0(X_\an, \mathcal{O}_X) \to H^0(D_\an, \mathcal{O}_D)$ maps $1$ to $1$ by definition. This shows that $H_\dR^0(X_\an, \mathcal{O}_X) \cong k$, $H_\dR^0(D_\an, \mathcal{O}_D) \cong k$, and $H_{\dR, \cpt}^0(U_\an, \mathcal{O}_U) = 0$. By using the perfect Poincar\'e duality pairing \Refeq{\ref{eq-thm-trace-dR-pairing}}, we obtain $H_\dR^{2d}\bigl(X_\an, \mathcal{O}_X(d)\bigr) \cong k$, $H_\dR^{2d - 2}\bigl(D_\an, \mathcal{O}_D(d - 1)\bigr) \cong k$, and $H_{\dR}^{2d}\bigl(U_\an, \mathcal{O}_U(d)\bigr) = 0$. By Theorem \ref{thm-L-!-coh-comp}, we obtain $H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr) \cong \mathbb{Q}_p$, $H_\et^{2d - 2}\bigl(D_K, \mathbb{Q}_p(d - 1)\bigr) \cong \mathbb{Q}_p$, and $H_{\et}^{2d}\bigl(U_K, \mathbb{Q}_p(d)\bigr) = 0$. Now the desired isomorphism \Refeq{\ref{eq-lem-Gysin-isom}} is just a connecting morphism in the long exact sequence associated with \Refeq{\ref{eq-rem-Gysin-seq}} \Pth{and with $p$ inverted}, which is an isomorphism by comparison of dimensions over $\mathbb{Q}_p$.
\end{proof}
\begin{prop}\label{prop-Gysin-dR-comp}
With the same setting as in Remark \ref{rem-Gysin-seq} and Lemma \ref{lem-Gysin-isom}, the isomorphism \Refeq{\ref{eq-lem-Gysin-isom}} extends to a commutative diagram
\begin{equation}\label{eq-prop-Gysin-dR-comp}
\xymatrix{ {H_\et^{2d - 2}\bigl(D_K, \mathbb{Q}_p(d - 1)\bigr)} \ar[r]^-\sim \ar@{^(->}[d] & {H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr)} \ar@{^(->}[d] \\
{H_\et^{2d - 2}\bigl(D_K, \mathbb{Q}_p(d - 1)\bigr) \otimes_{\mathbb{Q}_p} B_\dR} \ar[r]^-\sim \ar[d] & {H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr) \otimes_{\mathbb{Q}_p} B_\dR} \ar[d] \\
{H_\dR^{2d - 2}\bigl(D_\an, \mathcal{O}_D(d - 1)\bigr) \otimes_k B_\dR} \ar[r]^-\sim & {H_\dR^{2d}\bigl(X_\an, \mathcal{O}_X(d)\bigr) \otimes_k B_\dR} \\
{H_\dR^{2d - 2}\bigl(D_\an, \mathcal{O}_D(d - 1)\bigr)} \ar[r]^-\sim \ar@{^(->}[u] & {H_\dR^{2d}\bigl(X_\an, \mathcal{O}_X(d)\bigr)} \ar@{^(->}[u] }
\end{equation}
Here the fourth row \Pth{at the bottom} is the Gysin isomorphism induced by the long exact sequence associated with the adjunction exact sequence
\begin{equation}\label{eq-prop-Gysin-dR-comp-ex-seq-dR}
0 \to \Omega_X^\bullet(d) \to \Omega_X^\bullet(\log D)(d) \to \imath_{D, \an, *}\bigl(\Omega_D^\bullet(d - 1)\bigr)[-1] \to 0
\end{equation}
for de Rham complexes over $X_\an$, which is an isomorphism, as explained in the proof of Lemma \ref{lem-Gysin-isom}, because $H_\dR^{2d}\bigl(X_\an, \mathcal{O}_X(d)\bigr) \cong k$, $H_\dR^{2d - 2}\bigl(D_\an, \mathcal{O}_D(d - 1)\bigr) \cong k$, and $H_\dR^{2d}\bigl(U_\an, \mathcal{O}_U(d)\bigr) \cong H^{2d}\bigl(X_\an, \Omega_X^\bullet(\log D)(d)\bigr) = 0$. Moreover, this isomorphism in the bottom row is compatible with the trace morphisms.
\end{prop}
\begin{proof}
Let $X^\times$ denote the log adic space with the same underlying space as $X$, but equipped with the trivial log structure. In contrast, let $D^\partial := X_{\{ j_0 \}}^\partial$, as in Section \ref{sec-log-str-bd}, and let $D$ be equipped with the trivial log structure. Let $\jmath_U: U \to X$, $\jmath_U^\times: U \to X^\times$, $\imath_D^\partial: D^\partial \to X$, $\imath_D: D \to X^\times$, $\varepsilon: X \to X^\times$, $\varepsilon_D^\partial: D^\partial \to D$ denote the canonical morphisms of log adic spaces. Since
\[
R\jmath_{U, \ket, *}^\times\bigl(\mathbb{Z}_p(d)\bigr) \cong R\varepsilon_{\ket, *} \, R\jmath_{U, \ket, *}\bigl(\mathbb{Z}_p(d)\bigr) \cong R\varepsilon_{\ket, *}\bigl(\mathbb{Z}_p(d)\bigr)
\]
and
\[
\imath_{D, \ket, *} \, R\varepsilon_{D, \ket, *}^\partial\bigl(\mathbb{Z}_p(d - 1)\bigr) \cong R\varepsilon_{\ket, *} \, \imath_{D, \ket, *}^\partial\bigl(\mathbb{Z}_p(d - 1)\bigr),
\]
by \cite[\aLem \logadiclemclimmketmor, \aThm \logadicthmpurity, and \aCor \logadiccorpuritylisse]{Diao/Lan/Liu/Zhu:laske}, \Refeq{\ref{eq-rem-Gysin-seq}} induces \Pth{and is induced by} a distinguished triangle
\begin{equation}\label{eq-prop-Gysin-dR-comp-seq-ket}
\mathbb{Z}_p(d) \to R\varepsilon_{\ket, *}\bigl(\mathbb{Z}_p(d)\bigr) \to \imath_{D, \ket, *}\bigl(\mathbb{Z}_p(d - 1)\bigr)[-1] \Mapn{+1}
\end{equation}
over $X^\times_{K, \ket} \cong X_{K, \et}$, which induces \Refeq{\ref{eq-lem-Gysin-isom}} and the first row of \Refeq{\ref{eq-prop-Gysin-dR-comp}}. Let $\varpi \in K^{\flat+}$ be such that $\varpi^\sharp = p$. By \cite[\aProp \logadicpropproketvsketadj]{Diao/Lan/Liu/Zhu:laske}, the distinguished triangle
\[
(\mathbb{Z} / p^m)(d) \to R\varepsilon_{\ket, *}\bigl((\mathbb{Z} / p^m)(d)\bigr) \to
\imath_{D, \ket, *}\bigl((\mathbb{Z} / p^m)(d - 1)\bigr)[-1] \Mapn{+1}
\]
\Ref{induced by \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-ket}}} is the pushforward of the distinguished triangle
\begin{equation}\label{eq-prop-Gysin-dR-comp-seq-proket}
\begin{split}
& \nu_{X^\times}^{-1}\bigl((\mathbb{Z} / p^m)(d)\bigr) \to \nu_{X^\times}^{-1} \, R\varepsilon_{\ket, *}\bigl((\mathbb{Z} / p^m)(d)\bigr) \\
& \to
\nu_{X^\times}^{-1} \, \imath_{D, \ket, *}\bigl((\mathbb{Z} / p^m)(d - 1)\bigr)[-1] \Mapn{+1}
\end{split}
\end{equation}
over $X^\times_{K, \proket}$, which admits a morphism to the distinguished triangle
\[
\begin{split}
& \Bigl(\nu_{X^\times}^{-1} \, \bigl((\mathbb{Z} / p^m)(d)\bigr)\Bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X^\times} / (p^m, [\varpi^n])\bigr) \\
& \to \Bigl(\nu_{X^\times}^{-1} \, R\varepsilon_{\ket, *}\bigl((\mathbb{Z} / p^m)(d)\bigr)\Bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X^\times} / (p^m, [\varpi^n])\bigr) \\
& \to \Bigl(\nu_{X^\times}^{-1} \, \imath_{D, \ket, *}\bigl((\mathbb{Z} / p^m)(d - 1)\bigr)[-1]\Bigr) \otimes_{\widehat{\mathbb{Z}}_p} \bigl(\AAinfX{X^\times} / (p^m, [\varpi^n])\bigr) \Mapn{+1}
\end{split}
\]
over $X^\times_{K, \proket}$, for each $m \geq 1$ and each $n \geq 1$. Therefore, by induction on $m$ and $n$ based on \cite[\aLems \logadiclemketmorOplusp{} and \logadiclemclimmOplusp]{Diao/Lan/Liu/Zhu:laske}, and by taking derived limits and inverting $p$, we see that the derived limit of \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-proket}} also admits a morphism to a distinguished triangle
\begin{equation}\label{eq-prop-Gysin-dR-comp-seq-proket-BBdR}
\begin{split}
& \bigl(\widehat{\mathbb{Z}}_p(d)\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X^\times} \to \varepsilon_{\proket, *}\Bigl(\bigl(\widehat{\mathbb{Z}}_p(d)\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}\Bigr) \to \\
& \imath_{D, \proket, *}\Bigl(\bigl(\widehat{\mathbb{Z}}_p(d - 1)\bigr)[-1] \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{D}\Bigr) \Mapn{+1}
\end{split}
\end{equation}
over $X^\times_{K, \proket}$. Then \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-proket-BBdR}} induces the second row of \Refeq{\ref{eq-prop-Gysin-dR-comp}}, by Proposition \ref{prop-L-!-coh-comp-proket} \Ref{or \cite[\aThm 8.4]{Scholze:2013-phtra}}, which is an isomorphism because the first row is.
Let $\mu_X': {X_\proket}_{/X_K} \to X_\an$ and $\mu_{X^\times}': {X^\times_\proket}_{/X^\times_K} \to X^\times_\an \cong X_\an$ denote the canonical morphisms of sites, so that $\mu_X' = \mu_{X^\times}' \circ \varepsilon_{\proket, *}$. By Proposition \ref{prop-Poin-lem} and the projection formula, by applying $R\mu_{X^\times}'$ to \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-proket-BBdR}}, and by Proposition \ref{prop-RHl-Hl-Ddl-cpt} and Remark \ref{rem-unip-twist}, we obtain a distinguished triangle
\begin{equation}\label{eq-prop-Gysin-dR-comp-seq-dR-BdR}
\begin{split}
& \bigl(\Omega_X^\bullet(d)\bigr) \widehat{\otimes}_k B_\dR \to \bigl(\Omega_X^\bullet(\log D)(d)\bigr) \widehat{\otimes}_k B_\dR \to \\
& \Bigl(\imath_{D, \an, *}\bigl(\Omega_D^\bullet(d - 1)\bigr)[-1]\Bigr) \widehat{\otimes}_k B_\dR \Mapn{+1}
\end{split}
\end{equation}
of complexes over $\mathcal{X}^\times \cong \mathcal{X}$, and we have
\[
\Bigl(\imath_{D, \an, *}\bigl(\Omega_D^\bullet(d - 1)\bigr)[-1]\Bigr) \widehat{\otimes}_k B_\dR \Mi \imath_{\mathcal{D}, *}\Bigl(\bigl(\Omega_D^\bullet(d - 1)\bigr) \widehat{\otimes}_k B_\dR\Bigr)[-1].
\]
This \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-dR-BdR}} induces the third row of \Refeq{\ref{eq-prop-Gysin-dR-comp}}, which can be compatibly identified with the second row by the comparison isomorphisms in Theorem \ref{thm-L-!-coh-comp} \Pth{or rather \cite[\aThm 8.4]{Scholze:2013-phtra}}.
We claim that \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-dR-BdR}} is canonically isomorphic to the pullback of \Refeq{\ref{eq-prop-Gysin-dR-comp-ex-seq-dR}}. It is clear that the first morphism in \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-dR-BdR}} is the canonical one and hence coincides with the pullback of the first morphism in \Refeq{\ref{eq-prop-Gysin-dR-comp-ex-seq-dR}}. As for the second morphism, it suffices to show that it induces the pullback to $\mathcal{D}$ of the canonical morphism $\imath_D^*\bigl(\Omega_X^\bullet(\log D)(d)\bigr) \to \Omega_D^\bullet(d - 1)[-1]$ induced by adjunction. Once this is known, the third morphism must be zero, and the claim would follow.
We shall first show this locally, by adapting the arguments in \cite[\aSec \logRHseccoh]{Diao/Lan/Liu/Zhu:lrhrv}. Suppose that $X = \OP{Spa}(R, R^+)$ is affinoid and admits a strictly \'etale morphism
\[
X \to \mathbb{E} := \mathbb{T}^{n - 1} \times \mathbb{D} := \OP{Spa}(k\Talg{T_1^{\pm 1}, \ldots, T_{n - 1}^{\pm 1}, T_n}, k^+\Talg{T_1^{\pm 1}, \ldots, T_{n - 1}^{\pm 1}, T_n}),
\]
and that the underlying adic space $D$ is the pullback of $\{ T_n = 0 \}$, in which case $D = \OP{Spa}(\overline{R}, \overline{R}^+)$ with $\overline{R} := R / (T_n)$. Take finite extensions $k_m$ of $k$ in $\AC{k}$ such that $k_m$ contains all $m$-th roots of unity in $\AC{k}$, for each $m \geq 1$, and such that $\AC{k} = \cup_m \, k_m$. For each $m \geq 1$, let
\[
\mathbb{E}_m := \mathbb{T}^{n - 1}_m \times \mathbb{D}_m := \OP{Spa}(k_m\Talg{T_1^{\pm \frac{1}{m}}, \ldots, T_{n - 1}^{\pm \frac{1}{m}}, T_n^{\frac{1}{m}}}, k_m^+\Talg{T_1^{\pm \frac{1}{m}}, \ldots, T_{n - 1}^{\pm \frac{1}{m}}, T_n^{\frac{1}{m}}}),
\]
and let $X_m := X \times_\mathbb{E} \mathbb{E}_m$ and $D^\partial_m := D^\partial \times_\mathbb{E} \mathbb{E}_m$. Then $\widetilde{X} := \varprojlim_m X_m \to X_K$ and $\widetilde{D}^\partial := \varprojlim_m D^\partial_m \to D^\partial_K$ are Galois pro-Kummer \'etale covers with Galois group $\Gamma_\geom \cong (\widehat{\mathbb{Z}}(1))^n$, and we have $\widetilde{D}^\partial \cong \widetilde{X} \times_X D^\partial$. Similarly, we have a strictly \'etale morphism $D \to \mathbb{T}^{n - 1}$ \Pth{compatible with the above $D^\partial \to \mathbb{E} = \mathbb{T}^{n - 1} \times \mathbb{D}$}, with Kummer \'etale covers $\mathbb{T}^{n - 1}_m \to \mathbb{T}^{n - 1}$ inducing $D_m := D \times_{\mathbb{T}^{n - 1}} \mathbb{T}^{n - 1}_m$, and with a Galois pro-Kummer \'etale cover $\widetilde{D} := \varprojlim_m D_m \to D_K$ with Galois group $\overline{\Gamma}_\geom \cong (\widehat{\mathbb{Z}}(1))^{n - 1}$. As explained in \cite[\aSec \logadicsectoricchart]{Diao/Lan/Liu/Zhu:laske}, $\widetilde{X}$ and $\widetilde{D}$ are log affinoid perfectoid objects in $X_\proket$ and $D_\proket$, respectively. By \cite[\aLem \logadiclemlogaffperfclimm]{Diao/Lan/Liu/Zhu:laske}, $\widetilde{D}^\partial$ is also a log affinoid perfectoid object of $D^\partial_\proket$. By construction, the induced morphism $\widetilde{D}^\partial \to \widetilde{D}$ is Galois with Galois group
\[
\Gamma^\partial := \ker(\Gamma_\geom \to \overline{\Gamma}_\geom) \cong \widehat{\mathbb{Z}}(1).
\]
Let $\gamma_j \in \Gamma_\geom$ be topological generators such that $\gamma_j \, T_{j'}^{\frac{1}{m}} = \zeta_m^{\delta_{jj'}} T_{j'}^{\frac{1}{m}}$, as in \cite[(\logRHeqdefgammaj)]{Diao/Lan/Liu/Zhu:lrhrv}, for all $j, j' = 1, \ldots, n$, so that $\overline{\Gamma}_\geom$ \Pth{\resp $\Gamma^\partial$} is topologically generated by $\gamma_1, \ldots, \gamma_{n - 1}$ \Pth{\resp $\gamma_n$}. Note that such choices of topological generators force a choice of an isomorphism $\widehat{\mathbb{Z}}(1) \cong \widehat{\mathbb{Z}}$, which we will use to trivialize all powers of $\widehat{\mathbb{Z}}(1)$ in the following. Therefore, the pushforwards along the canonical morphisms of sites $\nu_X': {X_\proket}_{/X_K} \to X_\et$, $\nu_{D^\partial}': {D^\partial_\proket}_{/D^\partial_K} \to D^\partial_\et \cong D_\et$, $\varepsilon^\partial_{D_K, \proket, *}: {D^\partial_\proket}_{/D^\partial_K} \to {D_\proket}_{/D_K}$, and $\nu_D': {D_\proket}_{/D_K} \to D_\et$, when computed using the \v{C}ech cohomology of the pro-Kummer \'etale covers $\widetilde{X} \to X$, $\widetilde{D}^\partial \to D^\partial$, $\widetilde{D}^\partial \to \widetilde{D}$, and $\widetilde{D} \to D$, correspond to the group cohomology of $\Gamma_\geom$, $\Gamma_\geom$, $\Gamma^\partial$, and $\overline{\Gamma}_\geom$, respectively. Thus, as in the proof of Proposition \ref{prop-RHl-Hl-Ddl-cpt}, by the same arguments as in the proofs of \cite[\aThm 2.1(iii)]{Liu/Zhu:2017-rrhpl} and \cite[\aProp \logRHpropLOCl]{Diao/Lan/Liu/Zhu:lrhrv}, we may compute $\imath_D^*\bigl(\Omega_X^\bullet(\log D)(d)\bigr)$ by working with $\nu_{D^\partial}'$ instead of $\nu_X'$.
As usual, the group cohomology of $\Gamma^\partial$ for a topological $\Gamma^\partial$-module $L$ can be computed by the two-term complex $L \Mapn{\gamma_n - 1} L$. In particular, $\bigl(\varepsilon^\partial_{D_K, \proket, *}(\widehat{\mathbb{Z}}_p)\bigr)(\widetilde{D})$ can be represented by the complex $\widehat{\mathbb{Z}}_p \Mapn{\gamma_n - 1} \widehat{\mathbb{Z}}_p$, where $\gamma_n - 1$ acts by zero and hence the complex just splits \Ref{\Refcf{} \cite[\aLem \logadiclemclimmketmor]{Diao/Lan/Liu/Zhu:laske}}; and the pullback of the second morphism in \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-ket}} corresponds to the canonical morphism
\[
[\widehat{\mathbb{Z}}_p \Mapn{0} \widehat{\mathbb{Z}}_p] \to \widehat{\mathbb{Z}}_p[-1]
\]
given by the identity morphism on $\widehat{\mathbb{Z}}_p[-1]$. Compatibly with this, by Lemma \ref{lem-OBdRp-bd-loc} and \cite[\aCor \logRHcorOBdlplocgr]{Diao/Lan/Liu/Zhu:lrhrv}, the pullback of the second morphism of \Refeq{\ref{eq-prop-Gysin-dR-comp-seq-proket-BBdR}} induces the canonical morphism
\[
t^d [\BBdRX{D^\partial}^{+, \partial}(\widetilde{D}^\partial) \Mapn{\gamma_n - 1} \BBdRX{D^\partial}^{+, \partial}(\widetilde{D}^\partial)] \to t^{d - 1} \BBdRpX{D}(\widetilde{D})[-1],
\]
which is quasi-isomorphic to the canonical morphism
\[
\begin{split}
& t^d [\BBdRX{D^\partial}^{+, \partial}(\widetilde{D}^\partial)\{W_1, \ldots, W_n\} \otimes_R \Omega_{D^\partial}^{\log, \bullet}(D) \\
& \qquad \Mapn{\gamma_n - 1} \BBdRX{D^\partial}^{+, \partial}(\widetilde{D}^\partial)\{W_1, \ldots, W_n\} \otimes_R \Omega_{D^\partial}^{\log, \bullet}(D)] \\
& \to t^{d - 1} \BBdRpX{D}(\widetilde{D})\{W_1, \ldots, W_{n - 1}\} \otimes_R \Omega_D^\bullet(D)[-1],
\end{split}
\]
where $\Omega_D^\bullet(D) \cong \oplus_{j = 1}^{n - 1} (\overline{R} \, dT_j)$ and $\Omega_{D^\partial}^{\log, \bullet}(D) \cong \Omega_D^\bullet(D) \oplus (\overline{R} \, \frac{dT_n}{T_n}) \cong \Omega_X^{\log, \bullet}(X) \otimes_R \overline{R}$, and where the connection of $\BBdRX{D^\partial}^{+, \partial}(\widetilde{D}^\partial)\{W_1, \ldots, W_n\} \otimes_R \Omega_{D^\partial}^{\log, \bullet}(D)$ maps $W_j$ to $t^{-1} \frac{dT_j}{T_j}$, for each $j$, by \cite[(\logRHeqconnWi)]{Diao/Lan/Liu/Zhu:lrhrv}. Thus, after taking the group cohomology of $\overline{\Gamma}_\geom$, the last morphism induces the canonical morphism
\[
\Omega_{D^\partial}^{\log, \bullet}(d)(D) \widehat{\otimes}_k B_\dR \to \Omega_D^\bullet(d - 1)(D)[-1] \widehat{\otimes}_k B_\dR
\]
extracting the factor $\frac{dT_n}{T_n}$, which is \emph{exactly} the same morphism defined by the pullback of the adjunction morphism $\imath_D^*\bigl(\Omega_X^\bullet(\log D)(d)\bigr) \to \Omega_D^\bullet(d - 1)[-1]$. Since all the above identifications are canonical, they globalize and the claim follows.
Thus, the whole diagram \Refeq{\ref{eq-prop-Gysin-dR-comp}} is commutative, with the fourth row given by the Gysin isomorphism for de Rham cohomology, which is compatible with the trace morphisms \Pth{for de Rham cohomology} by Lemma \ref{lem-property-trace}\Refenum{\ref{lem-property-trace-Gysin}}, as desired.
\end{proof}
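\begin{rk}
For orientation, let us record the simplest instance of the adjunction morphism appearing in \Refeq{\ref{eq-prop-Gysin-dR-comp-ex-seq-dR}} \Pth{a classical computation, included only for the reader's convenience}. Suppose that $d = 1$ and that $T$ is a local coordinate with $D = \{ T = 0 \}$. Then, ignoring the Tate twists, the second morphism of \Refeq{\ref{eq-prop-Gysin-dR-comp-ex-seq-dR}} is zero in degree zero, and in degree one it is the Poincar\'e residue map
\[
\Omega_X^1(\log D) \to \imath_{D, \an, *}(\mathcal{O}_D): \quad f \, \frac{dT}{T} + g \, dT \mapsto f|_D,
\]
which is the local form of the morphism extracting the factor $\frac{dT_n}{T_n}$ at the end of the proof above.
\end{rk}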
\subsection{Poincar\'e duality for \'etale cohomology}\label{sec-trace-et}
\begin{thm}\label{thm-trace-et}
There exists a unique morphism
\begin{equation}\label{eq-trace-et}
t_\et: H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Q}_p(d)\bigr) \to \mathbb{Q}_p,
\end{equation}
which we shall call the \emph{trace morphism}, satisfying the following requirements:
\begin{enumerate}
\item\label{thm-trace-et-res} The formation of $t_\et$ is compatible with restrictions to open rigid analytic subvarieties of the form $U - Z = X - D - Z$ for some closed rigid analytic subvarieties $Z$ of $X$. \Pth{Such open rigid analytic subvarieties are allowed in our setting by resolution of singularities, as in \cite{Bierstone/Milman:1997-cdbml}, by the independence of the choice of compactifications in the definition of cohomology with compact support, based on Lemmas \ref{lem-L-!} and \ref{lem-def-H-c-fin-Z-p}, and on Remark \ref{rem-def-H-c-conv}.}
\item\label{thm-trace-et-comp} Suppose that $U_K$ is connected. Then $t_\et$ and $t_\dR$ are both isomorphisms, and the comparison isomorphism
\[
H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Q}_p(d)\bigr) \otimes_{\mathbb{Q}_p} B_\dR \cong H_{\dR, \cpt}^{2d}\bigl(U, \mathcal{O}_U(d)\bigr) \otimes_k B_\dR
\]
\Ref{see Theorem \ref{thm-L-!-coh-comp}} maps $\bigl(t_\et^{-1}(1)\bigr) \otimes 1$ to $\bigl(t_\dR^{-1}(1)\bigr) \otimes 1$. Consequently, the formation of $t_\et$ is compatible with the replacement of $k$ with a finite extension in $\AC{k}$ and with the Gysin isomorphism in the top row of \Refeq{\ref{eq-prop-Gysin-dR-comp}} \Ref{by Proposition \ref{prop-Gysin-dR-comp}, because the formation of $t_\dR$ is compatible with the Gysin isomorphism in the bottom row of \Refeq{\ref{eq-prop-Gysin-dR-comp}}}; and $t_\et$ is the trivial isomorphism $H_\et^0(U_K, \mathbb{Q}_p) \cong \mathbb{Q}_p$ when $d = \dim(U) = \dim(X) = 0$.
\end{enumerate}
Moreover, such a morphism \Refeq{\ref{eq-trace-et}} satisfies the following properties:
\begin{enumerate}[resume]
\item\label{thm-trace-et-pairing} By pre-composition with the canonical cup product pairing, $t_\et$ induces a perfect pairing
\begin{equation}\label{eq-thm-trace-et-pairing}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \times H_{\et, \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to \mathbb{Q}_p,
\end{equation}
for each $\mathbb{Z}_p$-local system $\mathbb{L}$ on $X_\ket$ \Pth{even when $\mathbb{L}|_U$ is not de Rham}.
\item\label{thm-trace-et-pairing-dR} When $\mathbb{L}|_U$ is \emph{de Rham}, we also have a commutative diagram
\begin{equation}\label{eq-thm-trace-et-pairing-dR-diag}
\xymatrix{ {H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})} \ar[r]^-\sim \ar@{^(->}[d] & {\OP{Hom}_{\mathbb{Q}_p}\bigl(H_{\et, \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr), \mathbb{Q}_p\bigr)} \ar@{^(->}[d] \\
{H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \otimes_{\mathbb{Q}_p} B_\dR} \ar[r]^-\sim \ar[d]_-\wr & {\OP{Hom}_{\mathbb{Q}_p}\bigl(H_{\et, \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr), B_\dR\bigr)} \ar[d]^-\wr \\
{H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L}_{\mathbb{Q}_p})\bigr) \otimes_k B_\dR} \ar[r]^-\sim & {\OP{Hom}_k\bigl(H_{\dR, \Utext{$\star$-nc}}^{2d - i}\bigl(U_\an, D_\dR\bigl(\dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)\bigr), B_\dR\bigr)} \\
{H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L}_{\mathbb{Q}_p})\bigr)} \ar[r]^-\sim \ar@{^(->}[u] & {\OP{Hom}_k\bigl(H_{\dR, \Utext{$\star$-nc}}^{2d - i}\bigl(U_\an, D_\dR\bigl(\dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)\bigr), k\bigr)} \ar@{^(->}[u] }
\end{equation}
in which the top \Pth{\resp bottom} two rows are induced by $t_\et$ \Pth{\resp $t_\dR$}.
\end{enumerate}
\end{thm}
\begin{proof}
For our purposes, we may replace $k$ with a finite extension over which the connected components of $X$ are geometrically connected, and replace $X$ with its geometrically connected components. Then we may assume that $X$ is geometrically connected. \Pth{Then the trace morphism to be constructed will be an isomorphism.} We may also assume that $X$ contains a $k$-point $\OP{Spa}(k, k^+) \cong Z \Em X$. Let us proceed by induction on $d = \dim(U) = \dim(X)$.
If $d = 0$, then $U_K$ is a single $K$-point, and $H_{\et, \cpt}^0(U_K, \mathbb{Q}_p) \cong H_\et^0(X_K, \mathbb{Q}_p)$ has a canonical element given by the identity section, which defines the trace isomorphism $t_\et: H_{\et, \cpt}^0(U_K, \mathbb{Q}_p) \Mi \mathbb{Q}_p$. The same identity section induces the identity section of $H^0(X_{K, \proket}, \mathbb{B}_\dR)$, which is also induced by the identity section of $H_\dR^0(X_\an, \mathcal{O}_X) \cong H^0(X_\an, \mathcal{O}_X)$. Hence, $t_\et$ satisfies the requirement \Refenum{\ref{thm-trace-et-comp}}. It is straightforward that it also satisfies \Refenum{\ref{thm-trace-et-res}}, \Refenum{\ref{thm-trace-et-pairing}}, and \Refenum{\ref{thm-trace-et-pairing-dR}}.
If $d > 0$, we first construct a trace morphism $t_\et: H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Q}_p(d)\bigr) \to \mathbb{Q}_p$ satisfying the requirement \Refenum{\ref{thm-trace-et-comp}}. By Lemma \ref{lem-trace-dR} and Proposition \ref{prop-exc-dR-comp}, we are reduced to the case where $U = X$ and $D = \emptyset$. Let $Y$ denote the blowup of $X$ along the $k$-point $Z$, and let $E$ denote the exceptional divisor. Since $Z$ is a $k$-point, both $Y$ and $E$ are smooth and geometrically connected. \Pth{Since $Y$ is \'etale locally isomorphic to $\mathbb{D}_k^n$ for some $n$, this can be seen by an explicit local construction.} Then we have a commutative diagram of canonical morphisms
\[
\xymatrix{ {H_{\et, \cpt}^{2d}\bigl((X - Z)_K, \mathbb{Z}_p(d)\bigr)} \ar[d]_-\wr \ar[r]^-\sim & {H_{\et, \cpt}^{2d}\bigl((Y - E)_K, \mathbb{Z}_p(d)\bigr)} \ar[d]^-\wr \\
{H_\et^{2d}\bigl(X_K, \mathbb{Z}_p(d)\bigr)} \ar[r] & {H_\et^{2d}\bigl(Y_K, \mathbb{Z}_p(d)\bigr)} }
\]
in which the two vertical morphisms are isomorphisms \Pth{by the long exact sequences relating the cohomology of $X_K$ and $Y_K$, the cohomology of the closed subspaces $Z_K$ and $E_K$, and the cohomology with compact support of their open complements} because $H_\et^i\bigl(Z_K, \mathbb{Z}_p(d)\bigr)$ and $H_\et^i\bigl(E_K, \mathbb{Z}_p(d)\bigr)$ are zero for $i > 2d - 2$, since both $Z$ and $E$ are smooth and proper of dimension at most $d - 1$; this forces the bottom row of the diagram to be an isomorphism as well. Then we have a commutative diagram of canonical morphisms
\begin{equation}\label{eq-thm-trace-et-blowup}
\xymatrix{ {H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr)} \ar@{^(->}[d] \ar[r]^-\sim & {H_\et^{2d}\bigl(Y_K, \mathbb{Q}_p(d)\bigr)} \ar@{^(->}[d] \\
{H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr) \otimes_{\mathbb{Q}_p} B_\dR} \ar[d]_-\wr \ar[r]^-\sim & {H_\et^{2d}\bigl(Y_K, \mathbb{Q}_p(d)\bigr) \otimes_{\mathbb{Q}_p} B_\dR} \ar[d]^-\wr \\
{H_\dR^{2d}\bigl(X_\an, \mathcal{O}_X(d)\bigr) \otimes_k B_\dR} \ar[r]^-\sim & {H_\dR^{2d}\bigl(Y_\an, \mathcal{O}_Y(d)\bigr) \otimes_k B_\dR} \\
{H_\dR^{2d}\bigl(X_\an, \mathcal{O}_X(d)\bigr)} \ar@{^(->}[u] \ar[r]^-\sim & {H_\dR^{2d}\bigl(Y_\an, \mathcal{O}_Y(d)\bigr)} \ar@{^(->}[u] }
\end{equation}
in which the bottom row is an isomorphism by Lemma \ref{lem-trace-dR}, and in which the middle square is commutative because both of the middle two rows are induced by the canonical morphism
\[
H^{2d}\bigl(X_{K, \proket}, \bigl(\widehat{\mathbb{Z}}_p(d)\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}\bigr) \to H^{2d}\bigl(Y_{K, \proket}, \bigl(\widehat{\mathbb{Z}}_p(d)\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{Y}\bigr).
\]
In order to construct $t_\et: H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr) \Mi \mathbb{Q}_p$ satisfying the requirement \Refenum{\ref{thm-trace-et-comp}}, it suffices to show that $\bigl(t_\dR^{-1}(1)\bigr) \otimes 1 \in H_\dR^{2d}\bigl(X_\an, \mathcal{O}_X(d)\bigr) \otimes_k B_\dR$ lies in the image of $H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr)$, in which case we can define $t_\et^{-1}(1)$ to be the preimage of $\bigl(t_\dR^{-1}(1)\bigr) \otimes 1$. \Pth{Note that this does not involve the choice of $Z$, and the compatibility with the replacement of $k$ with a finite extension in $\AC{k}$ is clear.} By using the commutative diagrams \Refeq{\ref{eq-thm-trace-et-blowup}} and \Refeq{\ref{eq-prop-Gysin-dR-comp}}, it suffices to note that, by the induction hypothesis, the analogous assertion holds for $\bigl(t_\dR^{-1}(1)\bigr) \otimes 1 \in H_\dR^{2d - 2}\bigl(E_\an, \mathcal{O}_E(d - 1)\bigr) \otimes_k B_\dR$.
Such a $t_\et: H_\et^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr) \Mi \mathbb{Q}_p$ satisfies the requirement \Refenum{\ref{thm-trace-et-res}} because, in the setting of Lemma \ref{lem-trace-dR}, we can choose to blowup at some $k$-point $Z$ of $U' \subset U$ \Pth{which exists up to replacing $k$ with a finite extension in $\AC{k}$}, so that we have canonical isomorphisms $H_{\et, \cpt}^{2d}\bigl(U'_K, \mathbb{Q}_p(d)\bigr) \cong H_{\et, \cpt}^{2d}\bigl(X_K, \mathbb{Q}_p(d)\bigr) \Mi H_{\et, \cpt}^{2d}\bigl(X'_K, \mathbb{Q}_p(d)\bigr) \cong H_{\et, \cpt}^{2d}\bigl(U'_K, \mathbb{Q}_p(d)\bigr)$ because they are all isomorphic to $H_\et^{2d - 2}\bigl(E_K, \mathbb{Q}_p(d - 1)\bigr)$ via compatible canonical morphisms, and these canonical isomorphisms extend to a commutative diagram \Ref{as in \Refeq{\ref{eq-thm-trace-et-blowup}} and \Refeq{\ref{eq-prop-Gysin-dR-comp}}} involving also their de Rham counterparts and their tensor products with $B_\dR$.
Finally, let us verify the properties \Refenum{\ref{thm-trace-et-pairing}} and \Refenum{\ref{thm-trace-et-pairing-dR}}. Since $K$ is a field extension of $\mathbb{Q}_p$, and since the duality pairings are defined by composition with cup product pairings, which are compatible with Higgs comparison isomorphisms as in \Refeq{\ref{eq-thm-L-!-coh-comp-Hi}}, the desired perfect pairing \Refeq{\ref{eq-thm-trace-et-pairing}} for \'etale cohomology follows from the perfect pairing \Refeq{\ref{eq-thm-Higgs-pairing}} for Higgs cohomology. When $\mathbb{L}|_U$ is \emph{de Rham}, again since the duality pairings are defined by composition with cup product pairings, and since the de Rham comparison isomorphisms are compatible with the Higgs ones by construction, we have the desired commutative diagram \Refeq{\ref{eq-thm-trace-et-pairing-dR-diag}}, in which the middle square is commutative because both of the middle two rows are induced by the same cup product pairing $H^i(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\Utext{$\star$-c}) \otimes_{B_\dR} H^{2d - i}\bigl(X_{K, \proket}, \bigl(\dual{\widehat{\mathbb{L}}}(d)\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\Utext{$\star$-nc}\bigr) \to H^{2d}\bigl(X_{K, \proket}, \bigl(\widehat{\mathbb{Z}}_p(d)\bigr) \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\cpt\bigr)$, where $\BBdRX{X}^\cpt$ is the analogue of $\BBdRX{X}^\Utext{$\star$-c}$ when $I^\Utext{$\star$-c}$ is replaced with $I$. \Ref{See Definition \ref{def-BBdRp-cpt}. Note that, since $I = I^\Utext{$\star$-c} \cup I^\Utext{$\star$-nc}$, the multiplication morphism $\BBdRX{X}^\Utext{$\star$-c} \otimes_{\widehat{\mathbb{Z}}_p} \BBdRX{X}^\Utext{$\star$-nc} \to \BBdRX{X}$ factors through $\BBdRX{X}^\cpt$.}
\end{proof}
\subsection{De Rham comparison for generalized interior cohomology}\label{sec-int-coh}
\begin{defn}\label{def-int-coh}
For any $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, we consider the \emph{generalized interior cohomology} \Ref{\Refcf{} Definitions \ref{def-H-c} and \ref{def-dR-Hi-Hdg-coh-cpt}}
\begin{equation}\label{eq-def-int-coh-et}
H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}) := \Image\bigl(H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}) \to H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L})\bigr)
\end{equation}
\begin{equation}\label{eq-def-int-coh-dR}
\begin{split}
& H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \\
& := \Image\Bigl(H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \to H_{\dR, \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)\Bigr),
\end{split}
\end{equation}
and
\begin{equation}\label{eq-def-int-coh-Hdg}
\begin{split}
& H_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a, i - a}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \\
& := \Image\Bigl(H_{\Hdg, \Utext{$\star$-c}}^{a, i - a}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \to H_{\Hdg, \Utext{$\circ$-c}}^{a, i - a}\bigl(U_\an, D_\dR(\mathbb{L})\bigr)\Bigr),
\end{split}
\end{equation}
for all $i \geq 0$ and $a \in \mathbb{Z}$. When $I^\Utext{$\star$-c} = I$ and $I^\Utext{$\circ$-c} = \emptyset$, we shall denote the objects with subscripts \Qtn{$\intcoh$} instead of \Qtn{$\Utext{$\star$-c} \to \Utext{$\circ$-c}$}, and call them the \emph{interior cohomology}.
\end{defn}
\begin{lem}\label{lem-int-coh-pairing-compat}
Suppose that $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$. The Poincar\'e duality pairings
\begin{equation}\label{eq-lem-int-coh-pairing-compat-c-nc}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \times H_{\et, \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to \mathbb{Q}_p
\end{equation}
and
\begin{equation}\label{eq-lem-int-coh-pairing-compat-c-alt-nc-alt}
H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \times H_{\et, \Utext{$\circ$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to \mathbb{Q}_p
\end{equation}
induce the same pairing
\begin{equation}\label{eq-lem-int-coh-pairing-compat-c-nc-alt}
H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \times H_{\et, \Utext{$\circ$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to \mathbb{Q}_p
\end{equation}
\Pth{which is defined because $I^\Utext{$\star$-c} \cup I^\Utext{$\circ$-nc} = I$ under the condition $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c}$}. Consequently, if $x_\Utext{$\star$-c} \in H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$ is mapped to $x_\Utext{$\circ$-c} \in H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$, and if $y_\Utext{$\circ$-nc} \in H_{\et, \Utext{$\circ$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$ is mapped to $y_\Utext{$\star$-nc} \in H_{\et, \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$, then we have
\[
\langle x_\Utext{$\star$-c}, y_\Utext{$\star$-nc} \rangle = \langle x_\Utext{$\star$-c}, y_\Utext{$\circ$-nc} \rangle = \langle x_\Utext{$\circ$-c}, y_\Utext{$\circ$-nc} \rangle.
\]
The analogous assertion for the Poincar\'e duality pairings on the de Rham cohomology of $D_\dR(\mathbb{L})$ and $D_\dR\bigl(\dual{\mathbb{L}}(d)\bigr)$ is also true.
\end{lem}
\begin{proof}
This is because the pairings \Refeq{\ref{eq-lem-int-coh-pairing-compat-c-nc}} and \Refeq{\ref{eq-lem-int-coh-pairing-compat-c-alt-nc-alt}} are both compatible with the cup product pairing $H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \times H_{\et, \Utext{$\circ$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to H_{\et, \cpt}^{2d}\bigl(U_K, \mathbb{Q}_p(d)\bigr)$ inducing \Refeq{\ref{eq-lem-int-coh-pairing-compat-c-nc-alt}}. \Pth{The assertion for de Rham cohomology is similar.}
\end{proof}
\begin{prop}\label{prop-int-coh-pairing-perf}
For any $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, the Poincar\'e duality pairing \Refeq{\ref{eq-lem-int-coh-pairing-compat-c-nc}} \Pth{based on \Refeq{\ref{eq-thm-trace-et-pairing}}} induces a canonical perfect pairing
\begin{equation}\label{eq-prop-int-coh-pairing}
H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \times H_{\et, \Utext{$\circ$-nc} \to \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to \mathbb{Q}_p,
\end{equation}
which we also call the Poincar\'e duality pairing, by setting
\begin{equation}\label{eq-prop-int-coh-pairing-def}
\langle x, y \rangle = \langle \widetilde{x}, y \rangle
\end{equation}
for $x \in H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$ and $y \in H_{\et, \Utext{$\circ$-nc} \to \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$, if $x$ is the image of some $\widetilde{x} \in H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$. When $I^\Utext{$\star$-c} = I$ and $I^\Utext{$\circ$-c} = \emptyset$, in which case $I^\Utext{$\star$-nc} = \emptyset$ and $I^\Utext{$\circ$-nc} = I$, this defines the Poincar\'e duality pairing
\begin{equation}\label{eq-prop-int-coh-pairing-spec}
H_{\et, \intcoh}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \times H_{\et, \intcoh}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr) \to \mathbb{Q}_p
\end{equation}
for interior cohomology. This pairing \Refeq{\ref{eq-prop-int-coh-pairing}} is well defined. When $\mathbb{L}|_U$ is \emph{de Rham}, the analogous assertion for the Poincar\'e duality pairings on the de Rham cohomology of $D_\dR(\mathbb{L})$ and $D_\dR\bigl(\dual{\mathbb{L}}(d)\bigr)$ is also true.
\end{prop}
\begin{proof}
To show that the pairing \Refeq{\ref{eq-prop-int-coh-pairing}} is well defined, suppose $x$ is lifted to another element $\widetilde{x}' \in H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$. By definition, $y$ is the image of some $\widetilde{y} \in H_{\et, \Utext{$\circ$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$. Then we have $\langle \widetilde{x} - \widetilde{x}', y \rangle = \langle 0, \widetilde{y} \rangle = 0$, by Lemma \ref{lem-int-coh-pairing-compat}, showing that we still have $\langle \widetilde{x}, y \rangle = \langle \widetilde{x}', y \rangle$.
To show that the pairing \Refeq{\ref{eq-prop-int-coh-pairing}} is perfect, let $\{ e_1, \ldots, e_r \}$ be any $\mathbb{Q}_p$-basis of $H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$, which can be extended to some $\mathbb{Q}_p$-basis $\{ e_1, \ldots, e_s \}$ of $H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$. Let $\{ \widetilde{f}_1, \ldots, \widetilde{f}_s \}$ denote the dual $\mathbb{Q}_p$-basis of $H_{\et, \Utext{$\circ$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$ under the perfect pairing \Refeq{\ref{eq-lem-int-coh-pairing-compat-c-alt-nc-alt}}. For each $j = 1, \ldots, s$, let $f_j$ denote the image of $\widetilde{f}_j$ in $H_{\et, \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$. For each $j = 1, \ldots, r$, let $\widetilde{e}_j$ denote some element of $H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$ lifting $e_j$. For each $j = r + 1, \ldots, s$, if $f_j \neq 0$, then there exists some $\widetilde{e}$ in $H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$, with image $e$ in $H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$, such that $1 = \langle \widetilde{e}, f_j \rangle$, by the perfectness of \Refeq{\ref{eq-lem-int-coh-pairing-compat-c-nc}}. But this contradicts $\langle \widetilde{e}, f_j \rangle = \langle e, \widetilde{f}_j \rangle = 0$, where the first equality holds by Lemma \ref{lem-int-coh-pairing-compat} and the second because $e$ lies in the span of $e_1, \ldots, e_r$ while $j > r$; hence $f_j = 0$ for all $j > r$. Moreover, if $\sum_{j = 1}^r a_j f_j = 0$ in $H_{\et, \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$, then, for each $j' = 1, \ldots, r$, Lemma \ref{lem-int-coh-pairing-compat} gives $a_{j'} = \langle e_{j'}, \sum_{j = 1}^r a_j \widetilde{f}_j \rangle = \langle \widetilde{e}_{j'}, \sum_{j = 1}^r a_j f_j \rangle = 0$. It follows that $\{ f_1, \ldots, f_r \}$ is a $\mathbb{Q}_p$-basis of $H_{\et, \Utext{$\circ$-nc} \to \Utext{$\star$-nc}}^{2d - i}\bigl(U_K, \dual{\mathbb{L}}_{\mathbb{Q}_p}(d)\bigr)$, which is dual to the $\mathbb{Q}_p$-basis $\{ e_1, \ldots, e_r \}$ of $H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$ under the induced pairing \Refeq{\ref{eq-prop-int-coh-pairing}}, as desired. \Pth{The assertion for de Rham cohomology is similar.}
\end{proof}
\begin{lem}\label{lem-fil-mor}
Let $(F_1, \Fil_{F_1}^\bullet)$ and $(F_2, \Fil_{F_2}^\bullet)$ be two filtered vector spaces \Pth{over some fixed base field, which we shall omit}, with a map $F_1 \to F_2$ compatible with filtrations such that $F_3 := \Image(F_1 \to F_2)$ is finite-dimensional. Suppose that $\dim\bigl(\Image(\OP{gr}_{F_1} \to \OP{gr}_{F_2})\bigr) = \dim(F_3)$. Then $\Fil_{F_1}^\bullet$ and $\Fil_{F_2}^\bullet$ are \emph{strictly compatible} in the sense that $\Image(\Fil_{F_1}^\bullet \to F_3)$ and $\Fil_{F_3}^\bullet := \Fil_{F_2}^\bullet \cap F_3$ coincide as filtrations on $F_3$, and we have an induced isomorphism $\Image(\OP{gr}_{F_1} \to \OP{gr}_{F_2}) \Mi \OP{gr}_{F_3}$.
\end{lem}
\begin{proof}
For each $a \in \mathbb{Z}$, the map $\OP{gr}_{F_1}^a = \Fil_{F_1}^a / \Fil_{F_1}^{a + 1} \to \OP{gr}_{F_2}^a = \Fil_{F_2}^a / \Fil_{F_2}^{a + 1}$ factors through $\OP{gr}_{F_3}^a = \Fil_{F_3}^a / \Fil_{F_3}^{a + 1}$ with image $\Image(\Fil_{F_1}^a \to F_3) / \Fil_{F_3}^{a + 1}$. Hence, the assumption that $\dim\bigl(\Image(\OP{gr}_{F_1} \to \OP{gr}_{F_2})\bigr) = \dim(F_3) = \dim(\OP{gr}_{F_3})$ implies the strict compatibility $\Image(\Fil_{F_1}^\bullet \to F_3) = \Fil_{F_3}^\bullet$ and induces $\Image(\OP{gr}_{F_1} \to \OP{gr}_{F_2}) \Mi \OP{gr}_{F_3}$.
\end{proof}
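\begin{rk}
The dimension hypothesis in Lemma \ref{lem-fil-mor} cannot be dropped. As a toy example \Pth{included only for illustration}, take $F_1 = F_2 = F_3$ to be a one-dimensional vector space $F$, with the map $F_1 \to F_2$ being the identity, with $\Fil_{F_1}^a = F$ for $a \leq 0$ and $\Fil_{F_1}^a = 0$ for $a \geq 1$, and with $\Fil_{F_2}^a = F$ for $a \leq 1$ and $\Fil_{F_2}^a = 0$ for $a \geq 2$. Then $\Image(\Fil_{F_1}^1 \to F_3) = 0$, while $\Fil_{F_3}^1 = \Fil_{F_2}^1 \cap F_3 = F_3$, so the two filtrations are not strictly compatible; and indeed $\dim\bigl(\Image(\OP{gr}_{F_1} \to \OP{gr}_{F_2})\bigr) = 0 < 1 = \dim(F_3)$.
\end{rk}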
\begin{thm}\label{thm-int-coh-comp}
When $\mathbb{L}|_U$ is \emph{de Rham}, the comparison isomorphisms in Theorem \ref{thm-L-!-coh-comp} are compatible with the canonical morphisms induced by any inclusions $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, and hence with the comparison isomorphisms in \cite[\aThm \logRHthmlogRHarith(\logRHthmlogRHarithcomp)]{Diao/Lan/Liu/Zhu:lrhrv} \Ref{corresponding to $I^\Utext{$\circ$-c} = \emptyset$; \Refcf{} the notation in Definition \ref{def-dR-Hi-Hdg-coh-cpt}}, in the sense that we have $\Gal(\AC{k} / k)$-equivariant commutative diagrams
\begin{equation}\label{eq-thm-int-coh-comp-diag-dR}
\xymatrix{ {H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR} \ar[r]^-\sim \ar[d] & {H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k B_\dR} \ar[d] \\
{H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR} \ar[r]^-\sim & {H_{\dR, \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k B_\dR} }
\end{equation}
and
\begin{equation}\label{eq-thm-int-coh-comp-diag-Hdg}
\xymatrix@C=3ex{ {H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} K} \ar[r]^-\sim \ar[d] & {\oplus_{a + b = i} \Bigl(H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k K(-a)\Bigr)} \ar[d] \\
{H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} K} \ar[r]^-\sim & {\oplus_{a + b = i} \Bigl(H_{\Hdg, \Utext{$\circ$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k K(-a)\Bigr)} }
\end{equation}
of canonical morphisms, which are compatible with the Hodge--de Rham spectral sequences, for each integer $i \geq 0$. Hence, we have $\Gal(\AC{k} / k)$-equivariant isomorphisms
\begin{equation}\label{eq-thm-int-coh-comp-isom-dR}
H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR \Mi H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k B_\dR
\end{equation}
and
\begin{equation}\label{eq-thm-int-coh-comp-isom-Hdg}
H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} K \Mi \oplus_{a + b = i} \Bigl(H_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \otimes_k K(-a)\Bigr)
\end{equation}
which are compatible with the perfect Poincar\'e duality pairings on both sides. Moreover, the Hodge filtrations on $H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$ and $H_{\dR, \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$ are \emph{strictly compatible} in the sense \Ref{as in Lemma \ref{lem-fil-mor}} that they induce the same filtration on $H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$, which we shall still call the \emph{Hodge filtration}, and we have a canonical graded isomorphism
\begin{equation}\label{eq-thm-int-coh-comp-gr-isom}
\OP{gr} H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr) \cong \oplus_{a + b = i} \, H_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr),
\end{equation}
\Pth{matching $\OP{gr}^a H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$ with $H_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a, i - a}\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$}.
\end{thm}
\begin{proof}
We have the commutative diagram \Refeq{\ref{eq-thm-int-coh-comp-diag-dR}} because, by Proposition \ref{prop-L-!-coh-comp-proket} and the proof of Theorem \ref{thm-L-!-coh-comp}, the morphism in both columns can be identified with the morphism
\[
H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathbb{B}_\dR^\Utext{$\star$-c}\bigr) \to H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \mathbb{B}_\dR^\Utext{$\circ$-c}\bigr)
\]
induced by the canonical morphism $\mathbb{B}_\dR^\Utext{$\star$-c} \to \mathbb{B}_\dR^\Utext{$\circ$-c}$ \Ref{which exists by the very construction of these sheaves in Definition \ref{def-BBdRp-cpt}}. Similarly, we have the commutative diagram \Refeq{\ref{eq-thm-int-coh-comp-diag-Hdg}} because, also by Proposition \ref{prop-L-!-coh-comp-proket} and the proof of Theorem \ref{thm-L-!-coh-comp}, the morphism in both columns can be identified with the morphism
\[
H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\star$-c}_{X_{K, \proket}}\bigr) \to H^i\bigl(X_{K, \proket}, \widehat{\mathbb{L}} \otimes_{\widehat{\mathbb{Z}}_p} \widehat{\mathcal{O}}^\Utext{$\circ$-c}_{X_{K, \proket}}\bigr)
\]
induced by the canonical morphism $\widehat{\mathcal{O}}^\Utext{$\star$-c}_{X_{K, \proket}} \to \widehat{\mathcal{O}}^\Utext{$\circ$-c}_{X_{K, \proket}}$. The commutative diagrams \Refeq{\ref{eq-thm-int-coh-comp-diag-dR}} and \Refeq{\ref{eq-thm-int-coh-comp-diag-Hdg}} are compatible with the Hodge--de Rham spectral sequences by Proposition \ref{prop-Poin-lem}\Refenum{\ref{prop-Poin-lem-4}}, and the resulting comparison isomorphisms \Refeq{\ref{eq-thm-int-coh-comp-isom-dR}} and \Refeq{\ref{eq-thm-int-coh-comp-isom-Hdg}} are compatible with the Poincar\'e duality pairings on generalized interior cohomology because they are induced by comparison isomorphisms respecting the original Poincar\'e duality pairings. Since the Hodge--de Rham spectral sequences for $H_{\dR, \Utext{$\star$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$ and $H_{\dR, \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)$ degenerate on the $E_1$ pages by Theorem \ref{thm-L-!-coh-comp}, and since \Refeq{\ref{eq-thm-int-coh-comp-isom-dR}} and \Refeq{\ref{eq-thm-int-coh-comp-isom-Hdg}} imply that
\[
\begin{split}
& \sum_{a + b = i} \dim_k\bigl(H_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a, b}\bigl(U_\an, D_\dR(\mathbb{L})\bigr)\bigr) \\
& = \dim_{\mathbb{Q}_p}\bigl(H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U_K, \mathbb{L}\bigr)\bigr) = \dim_k\bigr(H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U_\an, D_\dR(\mathbb{L})\bigr)\bigr),
\end{split}
\]
the last assertion of the theorem follows from Lemma \ref{lem-fil-mor}.
\end{proof}
\begin{cor}\label{cor-et-qis}
Let $I^+_\arith$ be as in Lemma \ref{lem-dR-qis-arith}, and let $I^\Utext{$\circ$-c}$ be as in \Refeq{\ref{eq-cor-dR-qis-arith-cond}}. Then we have a canonical isomorphism $H_{\et, \Utext{$\star$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \cong H_{\et, \Utext{$\circ$-c}}^i(U_K, \mathbb{L}_{\mathbb{Q}_p})$, for each $i \geq 0$ and each $a \in \mathbb{Z}$, which is compatible with \Refeq{\ref{eq-cor-dR-qis-arith}} and \Refeq{\ref{eq-cor-dR-qis-arith-Hdg}} via the comparison isomorphisms as in \Refeq{\ref{eq-thm-L-!-coh-comp-dR}} and \Refeq{\ref{eq-thm-L-!-coh-comp-Hdg}}.
\end{cor}
\begin{proof}
We may assume that $I^\Utext{$\circ$-c} = I^\Utext{$\star$-c} - I^+_\arith \subset I^\Utext{$\star$-c}$, as in the proof of Corollary \ref{cor-dR-qis-arith}. Since $B_\dR$ is a field extension of $\mathbb{Q}_p$, and since we have compatible canonical isomorphisms $H_{\et, ?}^i(U_K, \mathbb{L}) \otimes_{\mathbb{Z}_p} B_\dR \cong H_{\et, ?}^i(U_K, \mathbb{L}_{\mathbb{Q}_p}) \otimes_{\mathbb{Q}_p} B_\dR$, for $? = \Utext{$\star$-c}$ and $\Utext{$\circ$-c}$, this corollary follows from Theorem \ref{thm-int-coh-comp}, Lemma \ref{lem-dR-qis-arith}, and Corollary \ref{cor-dR-qis-arith}.
\end{proof}
\numberwithin{equation}{section}
\section{Comparison theorems for smooth algebraic varieties}\label{sec-comp-alg}
In this section, we let $U$ denote a smooth algebraic variety over a finite extension $k$ of $\mathbb{Q}_p$. Since $\chr(k) = 0$, by \cite{Nagata:1962-iavcv, Hironaka:1964-rsavz-1, Hironaka:1964-rsavz-2}, there exists a smooth compactification $X$ of $U$ such that the boundary $D = X - U$ \Pth{with its reduced subscheme structure} is a normal crossings divisor, and we may assume that the intersections of the irreducible components of $D$ are all smooth. We shall denote the analytification of these schemes, viewed as adic spaces over $\OP{Spa}(k, \mathcal{O}_k)$, by a superscript \Qtn{$\an$}, as usual. Then the analytifications of $U$, $X$, and $D$ satisfy the same setup as in Section \ref{sec-log-str-bd}, and we shall inherit most of the notation from there, the main difference being that objects and morphisms with no superscript \Qtn{$\an$} \Pth{\resp with superscripts \Qtn{$\an$}} are the algebraic \Pth{\resp analytic} ones. For any $I^\star \subset I$, we shall also consider $D^\Utext{$\star$-c} := \cup_{j \in I^\star} \, D_j$ and $D^\Utext{$\star$-nc} := \cup_{j \in I - I^\star} \, D_j$ \Pth{with their canonical reduced closed subscheme structures}, and the objects they define.
As in \cite[\aSec \logRHsecDdRalg]{Diao/Lan/Liu/Zhu:lrhrv}, for each $\mathbb{Z}_p$-local system $\mathbb{L}$ on $U_\et$, we denote by $\mathbb{L}^\an$ its analytification on $U^\an_\et$ as usual, and we write $\overline{\mathbb{L}}^\an := \jmath_{\ket, *}^\an(\mathbb{L}^\an)\cong R \jmath_{\ket, *}^\an(\mathbb{L}^\an)$, which is a $\mathbb{Z}_p$-local system on $X^\an_\ket$. We also consider $\jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L})$ \Pth{\resp $\jmath_{\et, !}^{\Utext{$\star$-c}, \an} \, R\jmath_{\Utext{$\star$-c}, \et, *}^\an(\mathbb{L}^\an)$} on $X_\et$ \Pth{\resp $X^\an_\et$}, and define
\[
\begin{split}
& H_{\et, \Utext{$\star$-c}}^i\bigl(U_{\AC{k}}, \mathbb{L} / p^m\bigr) := H^i\bigl(X_{\AC{k}, \ket}, \jmath_{\ket, !}^\Utext{$\star$-c}(\overline{\mathbb{L}} / p^m)\bigr) \\
& \cong H^i\bigl(X_{\AC{k}, \et}, \jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L} / p^m)\bigr) \cong H_\cpt^i\bigl(U^\Utext{$\star$-c}_{\AC{k}, \et}, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L} / p^m)\bigr)
\end{split}
\]
and
\[
H_{\et, \Utext{$\star$-c}}^i(U_{\AC{k}}, \mathbb{L}) := \varprojlim_m H_{\et, \Utext{$\star$-c}}^i(U_{\AC{k}}, \mathbb{L} / p^m)
\]
\Ref{\Refcf{} Definitions \ref{def-H-c-torsion} and \ref{def-H-c}, Lemmas \ref{lem-L-!} and \ref{lem-def-H-c-fin-Z-p}, and Remark \ref{rem-def-H-c-conv}}.
\begin{lem}\label{lem-comp-def-H-c}
There are canonical isomorphisms
\begin{equation}\label{eq-lem-comp-def-H-c-sh}
\begin{split}
& \bigl(\jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L})\bigr)^\an \cong \bigl(\varprojlim_m \, \jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L} / p^m)\bigr)^\an \\
& \cong \varprojlim_m \, \jmath_{\et, !}^{\Utext{$\star$-c}, \an} \, R\jmath_{\Utext{$\star$-c}, \et, *}^\an(\mathbb{L}^\an / p^m) \cong \jmath_{\et, !}^{\Utext{$\star$-c}, \an} \, R\jmath_{\Utext{$\star$-c}, \et, *}^\an(\mathbb{L}^\an) \\
& \cong \varprojlim_m \, R\varepsilon_{\et, *} \, \jmath_{\ket, !}^{\Utext{$\star$-c}, \an}(\overline{\mathbb{L}}^\an / p^m) \cong R\varepsilon_{\et, *} \, \jmath_{\ket, !}^{\Utext{$\star$-c}, \an}(\overline{\mathbb{L}}^\an).
\end{split}
\end{equation}
For all $i \geq 0$, we have
\begin{equation}\label{eq-lem-comp-def-H-c}
\begin{split}
& H_{\et, \Utext{$\star$-c}}^i(U_{\AC{k}}, \mathbb{L}) = \varprojlim_m H_{\et, \Utext{$\star$-c}}^i(U_{\AC{k}}, \mathbb{L} / p^m) \\
& \cong H_{\et, \Utext{$\star$-c}}^i(U^\an_{\AC{k}}, \mathbb{L}^\an) \cong \varprojlim_m H_{\et, \Utext{$\star$-c}}^i(U^\an_{\AC{k}}, \mathbb{L}^\an / p^m)
\end{split}
\end{equation}
\Ref{see Definition \ref{def-H-c} and Lemma \ref{lem-def-H-c-fin-Z-p}}, which can be identified with
\[
\begin{split}
& H^i\bigl(X_{\AC{k}, \et}, \jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L})\big) := \varprojlim_m H^i\bigl(X_{\AC{k}, \et}, \jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L} / p^m)\bigr) \\
& \cong H^i\bigl(X^\an_{\AC{k}, \et}, \jmath_{\et, !}^{\Utext{$\star$-c}, \an} \, R\jmath_{\Utext{$\star$-c}, \et, *}^\an(\mathbb{L}^\an)\bigr) \cong \varprojlim_m H^i\bigl(X^\an_{\AC{k}, \et}, \jmath_{\et, !}^{\Utext{$\star$-c}, \an} \, R\jmath_{\Utext{$\star$-c}, \et, *}^\an(\mathbb{L}^\an / p^m)\bigr) \\
& \cong H^i\bigl(X^\an_{\AC{k}, \ket}, \jmath_{\ket, !}^{\Utext{$\star$-c}, \an}(\overline{\mathbb{L}}^\an)\bigr) \cong \varprojlim_m H^i\bigl(X^\an_{\AC{k}, \ket}, \jmath_{\ket, !}^{\Utext{$\star$-c}, \an}(\overline{\mathbb{L}}^\an / p^m)\bigr)
\end{split}
\]
\Ref{see Lemma \ref{lem-L-!}}. For any $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, the isomorphisms in \Refeq{\ref{eq-lem-comp-def-H-c-sh}} and \Refeq{\ref{eq-lem-comp-def-H-c}} are compatible with the canonical morphisms from the analogous objects with subscripts \Qtn{$\cpt$} to those with subscripts \Qtn{$\Utext{$\star$-c}$}; from those with subscripts \Qtn{$\Utext{$\star$-c}$} to those with subscripts \Qtn{$\Utext{$\circ$-c}$}; and from those with subscripts \Qtn{$\Utext{$\circ$-c}$} to those without any of these subscripts \Ref{\Refcf{} Remark \ref{rem-def-H-c-alt}}.
\end{lem}
\begin{proof}
These are based on the various definitions and Lemmas \ref{lem-L-!} and \ref{lem-def-H-c-fin-Z-p}, and the compatible isomorphisms $\bigl(\jmath_{\et, !}^\Utext{$\star$-c} \, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L} / p^m)\bigr)^\an \cong \jmath_{\et, !}^{\Utext{$\star$-c}, \an} \, R\jmath_{\Utext{$\star$-c}, \et, *}^\an(\mathbb{L}^\an / p^m)$ and $H_\cpt^i\bigl(U^\Utext{$\star$-c}_{\AC{k}, \et}, R\jmath_{\Utext{$\star$-c}, \et, *}(\mathbb{L} / p^m)\bigr) \cong H_\cpt^i\bigl(U^{\Utext{$\star$-c}, \an}_{\AC{k}, \et}, R\jmath_{\Utext{$\star$-c}, \et, *}^\an(\mathbb{L}^\an / p^m)\bigr)$, for all $m \geq 1$, by \cite[\aProp 2.1.4, and \aThms 3.8.1 and 5.7.2]{Huber:1996-ERA}.
\end{proof}
\begin{lem}\label{lem-comp-def-H-c-trace}
The usual \Pth{algebraic} trace morphism $t_\dR: H_{\dR, \cpt}^{2d}\bigl(U, \mathcal{O}_U(d)\bigr) \to k$ is compatible with the \Pth{analytic} trace morphism $t_\dR^\an: H_{\dR, \cpt}^{2d}\bigl(U^\an_\an, \mathcal{O}_{U^\an}(d)\bigr) \to k$ defined in Theorem \ref{thm-trace-dR-Hdg} via GAGA \cite{Kopf:1974-efava}. Similarly, the usual \Pth{algebraic} trace morphism $t_\et: H_{\et, \cpt}^{2d}\bigl(U_{\AC{k}}, \mathbb{Q}_p(d)\bigr) \to \mathbb{Q}_p$ is compatible with the \Pth{analytic} trace morphism $t_\et^\an: H_{\et, \cpt}^{2d}\bigl(U^\an_{\AC{k}}, \mathbb{Q}_p(d)\bigr) \to \mathbb{Q}_p$ defined in Theorem \ref{thm-trace-et} under the canonical isomorphisms given by Lemma \ref{lem-comp-def-H-c}.
\end{lem}
\begin{proof}
By Lemma \ref{lem-trace-dR}, Theorem \ref{thm-trace-et}\Refenum{\ref{thm-trace-et-res}}, and the corresponding facts for algebraic trace morphisms, up to replacing $k$ with a finite extension in $\AC{k}$, we may assume that $U_K = X_K$ is connected. By the compatibility with Gysin isomorphisms in Proposition \ref{prop-Gysin-dR-comp} and Theorem \ref{thm-trace-et}\Refenum{\ref{thm-trace-et-comp}}, by the corresponding facts for algebraic trace morphisms, and by considering smooth divisors defined by blowing up $k$-points \Pth{which exists up to further replacing $k$ with a finite extension in $\AC{k}$} as in the proof of Theorem \ref{thm-trace-et}, we can proceed by induction and reduce to the case where $X_K$ is just a single $K$-point. In this case, the algebraic and analytic trace isomorphisms for de Rham cohomology are the canonical isomorphisms $H^0(X, \mathcal{O}_X) \cong k$ and $H^0(X^\an, \mathcal{O}_{X^\an}) \cong k$, respectively, which are compatible via the canonical morphism $H^0(X, \mathcal{O}_X) \to H^0(X^\an, \mathcal{O}_{X^\an})$. Also, the algebraic and analytic trace isomorphisms for \'etale cohomology are the canonical isomorphisms $H_\et^0(X_K, \mathbb{Q}_p) \cong \mathbb{Q}_p$ and $H_\et^0(X_K^\an, \mathbb{Q}_p) \cong \mathbb{Q}_p$, respectively, which are compatible via the canonical morphism $H_\et^0(X_K, \mathbb{Q}_p) \to H_\et^0(X_K^\an, \mathbb{Q}_p)$, as desired.
\end{proof}
\begin{thm}\label{thm-comp-alg-dR-cpt}
We have canonical $\Gal(\AC{k} / k)$-equivariant filtered isomorphisms
\begin{equation}\label{eq-comp-alg-dR-cpt}
H_{\et, \Utext{$\star$-c}}^i(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} B_\dR \cong H_{\dR, \Utext{$\star$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k B_\dR
\end{equation}
and
\begin{equation}\label{eq-comp-alg-Hdg-cpt}
H_{\et, \Utext{$\star$-c}}^i(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} \widehat{\AC{k}} \cong \oplus_{a + b = i} \, \Bigl(H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k \widehat{\AC{k}}(-a) \Bigr),
\end{equation}
where
\begin{equation}\label{eq-def-alg-dR-coh-cpt}
H_{\dR, \Utext{$\star$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) := H^i\bigl(X, \DRl\bigl(\bigl(D_{\dR, \log}^\alg(\mathbb{L})\bigr)(-D^\Utext{$\star$-c})\bigr)\bigr)
\end{equation}
and
\begin{equation}\label{eq-def-alg-Hdg-coh-cpt}
H_{\Hdg, \Utext{$\star$-c}}^{a, b}\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) := H^{a + b}\bigl(X, \OP{gr}^a \DRl\bigl(\bigl(D_{\dR, \log}^\alg(\mathbb{L})\bigr)(-D^\Utext{$\star$-c})\bigr)\bigr)
\end{equation}
are abusively defined as in Definition \ref{def-dR-Hi-Hdg-coh-cpt} \Ref{for the filtered log connection $D_{\dR, \log}^\alg(\mathbb{L})$ introduced in \cite[\aSec \logRHsecDdRalg]{Diao/Lan/Liu/Zhu:lrhrv}}. Moreover, the Hodge--de Rham spectral sequence for $H_{\dR, \Utext{$\star$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr)$, as in \Refeq{\ref{eq-def-Hdg-dR-coh-cpt-spec-seq}}, degenerates on the $E_1$ page and is compatible with the above isomorphisms in the sense that \Refeq{\ref{eq-comp-alg-dR-cpt}} induces \Refeq{\ref{eq-comp-alg-Hdg-cpt}} by taking the $0$-th graded pieces.
\end{thm}
\begin{proof}
This follows from Lemma \ref{lem-comp-def-H-c}, Theorem \ref{thm-L-!-coh-comp}, and GAGA \cite{Kopf:1974-efava}.
\end{proof}
\begin{thm}\label{thm-comp-alg-dR-int}
The comparison isomorphisms \Refeq{\ref{eq-comp-alg-dR-cpt}} and \Refeq{\ref{eq-comp-alg-Hdg-cpt}} are compatible with the canonical morphisms induced by any inclusions $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, and hence with the comparison isomorphisms in \cite[\aThms \logRHthmintromain{} and \logRHthmHTdegencomp]{Diao/Lan/Liu/Zhu:lrhrv} \Ref{corresponding to $I^\Utext{$\circ$-c} = \emptyset$; \Refcf{} the notation in Definition \ref{def-dR-Hi-Hdg-coh-cpt}}, via the canonical morphisms among them, and also via Poincar\'e duality. Consequently, we obtain $\Gal(\AC{k} / k)$-equivariant filtered isomorphisms
\begin{equation}\label{eq-comp-alg-dR-int}
H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} B_\dR \cong H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k B_\dR
\end{equation}
and
\begin{equation}\label{eq-comp-alg-Hdg-int}
H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(U_{\AC{k}}, \mathbb{L}) \otimes_{\mathbb{Q}_p} \widehat{\AC{k}} \cong \oplus_{a + b = i} \, \Bigl(H_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a, b}\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \otimes_k \widehat{\AC{k}}(-a) \Bigr),
\end{equation}
where each generalized interior cohomology is defined as the image of the corresponding cohomology with partial compact support along $D^\Utext{$\star$-c}$ in the corresponding cohomology with partial compact support along $D^\Utext{$\circ$-c}$, as before, which are compatible with Poincar\'e duality. Moreover, the canonical morphism $H_{\dR, \Utext{$\star$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \to H_{\dR, \Utext{$\circ$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr)$ is strictly compatible with the Hodge filtrations on both sides, which induces a Hodge filtration on $H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr)$, together with a canonical graded isomorphism
\[
\OP{gr} H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i\bigl(U, D_\dR^\alg(\mathbb{L})\bigr) \cong \oplus_{a + b = i} \, H_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a, b}\bigl(U, D_\dR^\alg(\mathbb{L})\bigr).
\]
When $\emptyset = I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} = I$, we obtain the result for the usual interior cohomology, with subscripts \Qtn{$\intcoh$} replacing \Qtn{$\Utext{$\star$-c} \to \Utext{$\circ$-c}$} in all of the above.
\end{thm}
\begin{proof}
This follows from \cite[\aProp 2.1.4, and \aThms 3.8.1 and 5.7.2]{Huber:1996-ERA}; Lemmas \ref{lem-comp-def-H-c} and \ref{lem-comp-def-H-c-trace}; Theorem \ref{thm-int-coh-comp}; and GAGA \cite{Kopf:1974-efava}.
\end{proof}
\numberwithin{equation}{subsection}
\section{Cohomology of Shimura varieties and Hodge--Tate weights}\label{sec-Sh-var}
In this section, we will freely use the notation and definitions of \cite[\aSec \logRHsecShvar]{Diao/Lan/Liu/Zhu:lrhrv}. Nevertheless, let us mention the following choices: We shall fix a Shimura datum $(\Grp{G}, \Shdom)$ and a neat open compact subgroup $\levcp \subset \Grp{G}(\bAi)$, which defines a Shimura variety $\Model_\levcp = \Sh_\levcp(\Grp{G}, \Shdom)$ over the reflex field $\ReFl = \ReFl(\Grp{G}, \Shdom)$. We shall also fix the choices of an algebraic closure $\AC{\mathbb{Q}}_p$ of $\mathbb{Q}_p$ and an isomorphism $\ACMap: \AC{\mathbb{Q}}_p \Mi \mathbb{C}$.
\subsection{Coherent cohomology and dual BGG decompositions}\label{sec-dual-BGG}
The goal of this subsection is to review the so-called \emph{dual BGG complexes} introduced by Faltings in \cite{Faltings:1983-clshs}, and apply them to the de Rham and Hodge cohomology \Pth{with support conditions} of automorphic vector bundles. \Pth{The abbreviation BGG refers to Bernstein, Gelfand, and Gelfand, because of their seminal work \cite{Bernstein/Gelfand/Gelfand:1975-doagm}.} In particular, we shall obtain a refined version of the Hodge--Tate decomposition, whose terms are given by the coherent cohomology of automorphic vector bundles.
Let us fix the choice of some $\hc_\hd$ as in \cite[(\logRHeqmuh)]{Diao/Lan/Liu/Zhu:lrhrv} which is induced by some homomorphism $\Gm{\AC{\mathbb{Q}}} \to \Grp{G}_{\AC{\mathbb{Q}}}$, which we abusively denote by the same symbols. Let $\Grp{P}$ \Pth{\resp $\Grp{P}^c$} denote the parabolic subgroup of $\Grp{G}_{\AC{\mathbb{Q}}}$ \Pth{\resp $\Grp{G}^c_{\AC{\mathbb{Q}}}$} defined by the choice of $\hc_\hd$ \Ref{\Refcf{} \cite[\aProp \logRHproplocsystinfty]{Diao/Lan/Liu/Zhu:lrhrv}}. Let $\Grp{M}$ \Pth{\resp $\Grp{M}^c$} denote the Levi subgroup of $\Grp{P}$ \Pth{\resp $\Grp{P}^c$} given by the centralizer of the image of $\hc_\hd$. As in the case of $\Grp{G}$ and $\Grp{G}^c$, for any field $F$ over $\AC{\mathbb{Q}}$, let us denote by $\Rep_F(\Grp{P}^c)$ \Pth{\resp $\Rep_F(\Grp{M}^c)$} the category of finite-dimensional algebraic representations of $\Grp{P}^c$ \Pth{\resp $\Grp{M}^c$} over $F$, which we view as an algebraic representation of $\Grp{P}$ \Pth{\resp $\Grp{M}$} by pullback. We shall also view the representations of $\Grp{M}^c$ \Pth{\resp $\Grp{M}$} as representations of $\Grp{P}^c$ \Pth{\resp $\Grp{P}$} by pullback via the canonical homomorphism $\Grp{P}^c \to \Grp{M}^c$ \Pth{\resp $\Grp{P} \to \Grp{M}$}.
As explained in \cite[\aSec 3]{Harris:1985-avafs-1} \Pth{or \cite[\aSec 2.2]{Lan:2016-vtcac}}, there is a tensor functor assigning to each $\repalt \in \Rep_{\AC{\mathbb{Q}}}(\Grp{P}^c)$ a vector bundle $\cohSh{\repalt}_\mathbb{C}$ over $\Model_{\levcp, \mathbb{C}}$, which is canonically isomorphic to $\dRSh{\rep}_\mathbb{C}$ when $\repalt_\mathbb{C} \cong \rep_\mathbb{C}$ for some $\rep \in \Rep_{\AC{\mathbb{Q}}}(\Grp{G}^c)$. We call $\cohSh{\repalt}_\mathbb{C}$ the \emph{automorphic vector bundle} associated with $\repalt_\mathbb{C}$. Moreover, as explained in \cite[\aSec 4]{Harris:1989-ftcls}, this tensor functor canonically extends to a tensor functor assigning to each $\repalt \in \Rep_{\AC{\mathbb{Q}}}(\Grp{P}^c)$ a vector bundle $\cohSh{\repalt}^\canext_\mathbb{C}$ over $\Torcpt{\Model}_{\levcp, \mathbb{C}}$, called the \emph{canonical extension} of $\cohSh{\repalt}$, which is canonically isomorphic to $\dRSh{\rep}^\canext_\mathbb{C}$ when $\repalt_\mathbb{C} \cong \rep_\mathbb{C}$ for some $\rep \in \Rep_{\AC{\mathbb{Q}}}(\Grp{G}^c)$. For $\repalt \in \Rep_{\AC{\mathbb{Q}}}(\Grp{M}^c)$, this $\cohSh{\repalt}^\canext_\mathbb{C}$ is canonically isomorphic to the canonical extensions defined as in \cite[Main \aThm 3.1]{Mumford:1977-hptnc}.
Consider $\NCD = \Torcpt{\Model}_{\levcp, \mathbb{C}} - \Model_{\levcp, \mathbb{C}}$ \Pth{with its reduced subscheme structure}, which is a normal crossings divisor. We shall also write $\NCD = \cup_{j \in I} \, \NCD_j$, where $\{ \NCD_j \}_{j \in I}$ are the irreducible components of $\NCD$, so that we can also consider $\NCD^\Utext{$\star$-c} \subset \NCD$ and $\NCD^\Utext{$\circ$-c} \subset \NCD$ for any subsets $I^\Utext{$\star$-c} \subset I$ and $I^\Utext{$\circ$-c} \subset I$. \Pth{The results below will be for the cohomology with compact support along $\NCD^\Utext{$\star$-c}$ and for the generalized interior cohomology, which specialize to results for the cohomology with compact support and the interior cohomology when $I^\Utext{$\star$-c} = I$ and $I^\Utext{$\circ$-c} = \emptyset$.}
\begin{defn}\label{def-int-coh-coh}
For each $\repalt \in \Rep_{\AC{\mathbb{Q}}}(\Grp{P}^c)$, we define the \emph{subcanonical extension}
\begin{equation}\label{eq-def-subext}
\cohSh{\repalt}_\mathbb{C}^\subext := \cohSh{\repalt}_\mathbb{C}^\canext(-\NCD)
\end{equation}
\Ref{as in \cite[\aSec 2]{Harris:1990-afcvs}} and the \emph{interior coherent cohomology}
\begin{equation}\label{eq-def-int-coh-coh}
\begin{split}
& H_\intcoh^i(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\canext) := \Image\bigl(H^i(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\subext) \to H^i(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\canext)\bigr).
\end{split}
\end{equation}
More generally, for any $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, by abuse of notation, we define
\begin{equation}\label{eq-def-coh-coh-gen}
H_\Utext{$\star$-c}^i(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\canext) := H^i\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\canext(-\NCD^\Utext{$\star$-c})\bigr)
\end{equation}
and
\begin{equation}\label{eq-def-int-coh-coh-gen}
\begin{split}
& H_{\Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\canext) \\
& := \Image\bigl(H_\Utext{$\star$-c}^i(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\canext) \to H_\Utext{$\circ$-c}^i(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \cohSh{\repalt}_\mathbb{C}^\canext)\bigr),
\end{split}
\end{equation}
the latter of which gives \Refeq{\ref{eq-def-int-coh-coh}} when $I^\Utext{$\star$-c} = I$ and $I^\Utext{$\circ$-c} = \emptyset$. We similarly define objects with $\mathbb{C}$ replaced with $\AC{\mathbb{Q}}_p$ or any field extension of $\ReFl$ over which $\repalt$ has a model, or with $\repalt$ replaced with $\dual{\repalt}$, or both.
\end{defn}
Let us fix the choice of a maximal torus $\Grp{T}^c$ of $\Grp{M}^c$, which is also a maximal torus of $\Grp{G}^c_{\AC{\mathbb{Q}}}$. With this choice, let us denote by $\RT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$, $\RT_{\Grp{M}^c}$, \etc the roots of $\Grp{G}^c_{\AC{\mathbb{Q}}}$, $\Grp{M}^c$, \etc, respectively; and by $\WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$, $\WT_{\Grp{M}^c}$, \etc the weights of $\Grp{G}^c_{\AC{\mathbb{Q}}}$, $\Grp{M}^c$, \etc, respectively. Then we have naturally $\RT_{\Grp{M}^c} \subset \RT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$ and $\WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}} = \WT_{\Grp{M}^c}$. Let us denote by $H$ the coweight induced by $\hc_\hd$. Let us also make compatible choices of positive roots $\RT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$ and $\RT_{\Grp{M}^c}^+$, and of dominant weights $\WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$ and $\WT_{\Grp{M}^c}^+$, so that $\RT_{\Grp{M}^c}^+ \subset \RT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$ and $\WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+ \subset \WT_{\Grp{M}^c}^+$. For an irreducible representation $\rep$ of highest weight $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$, we write $\rep = \rep_\wt$, $\rep_\mathbb{C} = \rep_{\wt, \mathbb{C}}$, $\dRSh{\rep}_\mathbb{C} = \dRSh{\rep}_{\wtalt, \mathbb{C}}$, \etc. Similarly, for an irreducible representation $\repalt$ of highest weight $\wtalt \in \WT_\Grp{M}^+$, we write $\repalt = \repalt_\wtalt$, $\repalt_\mathbb{C} = \repalt_{\wtalt, \mathbb{C}}$, $\cohSh{\repalt}_\mathbb{C} = \cohSh{\repalt}_{\wtalt, \mathbb{C}}$, \etc. Let $\hsum_{\Grp{G}^c_{\AC{\mathbb{Q}}}} := \frac{1}{2} \sum_{\wt \in \RT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+} \wt$ and $\hsum_{\Grp{M}^c} := \frac{1}{2} \sum_{\wtalt \in \RT_{\Grp{M}^c}^+} \wtalt$ denote the usual half-sums of positive roots, and let $\hsum^{\Grp{M}^c} := \hsum_{\Grp{G}^c_{\AC{\mathbb{Q}}}} - \hsum_{\Grp{M}^c}$. Let $\WG_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$ and $\WG_{\Grp{M}^c}$ denote the Weyl groups of $\Grp{G}^c_{\AC{\mathbb{Q}}}$ and $\Grp{M}^c$ with respect to the common maximal torus $\Grp{T}^c$, which allows us to identify $\WG_{\Grp{M}^c}$ as a subgroup of $\WG_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$. Given any element $w$ in the above Weyl groups, we shall denote its length by $\wl(w)$. In addition to the natural action of $\WG_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$ on $\WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$, there is also the dot action $w \cdot \wt = w( \wt + \hsum_{\Grp{G}^c_{\AC{\mathbb{Q}}}} ) - \hsum_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$, for all $w \in \WG_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$ and $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$. Let $\WG^{\Grp{M}^c}$ denote the subset of $\WG_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$ consisting of elements mapping $\WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$ into $\WT_{\Grp{M}^c}^+$, which are the minimal length representatives of $\WG_{\Grp{M}^c} \Lquot \WG_{\Grp{G}^c_{\AC{\mathbb{Q}}}}$.
As in Definition \ref{def-dR-Hi-Hdg-coh-cpt} and Theorem \ref{thm-comp-alg-dR-cpt}, consider the log de Rham complex,
\[
\begin{split}
\DRl\bigl(\dRSh{\rep}^\canext_\mathbb{C}(-\NCD^\Utext{$\star$-c})\bigr) & := \bigl(\dRSh{\rep}^\canext_\mathbb{C}(-\NCD^\Utext{$\star$-c}) \otimes_{\mathcal{O}_{\Torcpt{\Model}_{\levcp, \mathbb{C}}}} \Omega^\bullet_{\Torcpt{\Model}_{\levcp, \mathbb{C}}}(\log \NCD_\mathbb{C}), \nabla\bigr) \\
& \cong \DRl(\dRSh{\rep}^\canext_\mathbb{C}) \otimes_{\mathcal{O}_{\Torcpt{\Model}_{\levcp, \mathbb{C}}}} \mathcal{O}_{\Torcpt{\Model}_{\levcp, \mathbb{C}}}(-\NCD^\Utext{$\star$-c}),
\end{split}
\]
and consider the Hodge cohomology with partial compact support along $\NCD^\Utext{$\star$-c}$,
\begin{equation}\label{eq-def-aut-bdl-Hodge-coh-cpt}
H^{a, b}_{\Hdg, \Utext{$\star$-c}}(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C}) := H^{a + b}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \OP{gr}^a \DRl\bigl(\dRSh{\rep}^\canext_\mathbb{C}(-\NCD^\Utext{$\star$-c})\bigr)\bigr).
\end{equation}
As in Remark \ref{rem-def-dR-Hi-Hdg-coh-cpt-conv}, when $I^\Utext{$\star$-c} = \emptyset$, this gives the usual Hodge cohomology; and when $I^\Utext{$\star$-c} = I$, this gives the usual Hodge cohomology with compact support.
While it is difficult to compute hypercohomology in general, the miracle is that $\OP{gr}^a \DRl(\dRSh{\rep}^\canext_\mathbb{C})$ has a quasi-isomorphic direct summand, called the \emph{graded dual BGG complex}, whose differentials are \emph{zero} and whose terms are direct sums of $\cohSh{\repalt}^\canext_\mathbb{C}$ for some representations $\repalt$ determined explicitly by $\rep$. Then the hypercohomology of this graded dual BGG complex is just a direct sum of usual coherent cohomology of $\cohSh{\repalt}^\canext_\mathbb{C}$ up to degree shifting. More precisely, we have the following:
\begin{thm}[\emph{dual BGG complexes}; Faltings]\label{thm-dual-BGG}
There is a canonical filtered quasi-isomorphic direct summand $\BGGl(\dRSh{\rep}^\canext_\mathbb{C})$ of $\DRl(\dRSh{\rep}^\canext_\mathbb{C})$ \Pth{in the category of complexes of abelian sheaves over $\Torcpt{\Model}_{\levcp, \mathbb{C}}$ whose terms are coherent sheaves and whose differentials are differential operators} satisfying the following properties:
\begin{enumerate}
\item The formation of $\BGGl(\dRSh{\rep}^\canext_\mathbb{C})$ is functorial and exact in $\rep_\mathbb{C}$.
\item The differentials on $\OP{gr} \BGGl(\dRSh{\rep}^\canext_\mathbb{C})$ are all zero.
\item Suppose that $\rep \cong \dual{\rep}_\wt$ for some $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$. Then, for each $i\geq 0$ and each $a \in \mathbb{Z}$, the $i$-th term of $\OP{gr}^a \BGGl(\dRSh{\rep}^\canext_\mathbb{C})$ is given by
\begin{equation}\label{eq-thm-dual-BGG}
\OP{gr}^a \BGGl^i(\dRSh{\rep}^\canext_\mathbb{C}) \cong \oplus_{w \in \WG^{\Grp{M}^c}, \; \wl(w) = i, \; (w \cdot \wt)(H) = -a} \; \bigl( \cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign \bigr)^\canext
\end{equation}
\end{enumerate}
\end{thm}
\begin{proof}
See \cite[\aSecs 3 and 7]{Faltings:1983-clshs}, \cite[\aCh VI, \aSec 5]{Faltings/Chai:1990-DAV}, and \cite[\aThm 5.9]{Lan/Polo:2018-bggab}. \Pth{Although these references were written in less general settings, the methods of the constructions still generalize to our setting here.}
\end{proof}
\begin{rk}\label{rem-dual-BGG-descent}
The various automorphic vector bundles $\dRSh{\rep}_{\wt, \mathbb{C}}$, $\Fil^\bullet(\dRSh{\rep}_{\wt, \mathbb{C}})$, and $\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}$ \Pth{and their canonical extensions} in Theorem \ref{thm-dual-BGG} have models over $\AC{\mathbb{Q}}$, or even over a finite extension $\ReFl'$ of $\ReFl$ \Pth{depending on $\wt$} over which $\rep_\wt$, $\Fil^\bullet \, \rep_\wt$, and $\repalt_{w \cdot \wt}$ all have models. \Ref{The case of $\dRSh{\rep}_{\wt, \mathbb{C}}$ and its canonical extension follows from \cite[\aCh III, \aThm 5.1, and \aCh V, \aThm 6.2]{Milne:1990-cmsab}; and the cases of $\Fil^\bullet \, \dRSh{\rep}_{\wt, \mathbb{C}}$, $\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}$, and their canonical extensions follow from the same argument as in \cite[\aRem \logRHrempartialflag]{Diao/Lan/Liu/Zhu:lrhrv}, based on the models of the associated partial flag varieties over $\ReFl$. Note that $\Fil^\bullet(\dRSh{\rep}^\canext_{\wt, \mathbb{C}})$ did appear in Theorem \ref{thm-dual-BGG}, when we said there is a canonical filtered quasi-isomorphic direct summand $\BGGl(\dRSh{\rep}^\canext_\mathbb{C})$ of $\DRl(\dRSh{\rep}^\canext_\mathbb{C})$.} Then the statements of Theorem \ref{thm-dual-BGG} remain true if we replace $\mathbb{C}$ with $\AC{\mathbb{Q}}$ or $\ReFl'$, by the same descent argument as in \cite[\aSec 6]{Lan/Polo:2018-bggab}.
\end{rk}
\begin{thm}\label{thm-dual-BGG-cpt-int}
Suppose that $\rep \cong \dual{\rep}_\wt$ for some $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$. Then, given any $a, b \in \mathbb{Z}$ such that $a + b \geq 0$, we have the \emph{dual BGG decomposition}
\begin{equation}\label{eq-thm-dual-BGG-cpt}
\begin{split}
& H^{a, b}_{\Hdg, \Utext{$\star$-c}}(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C}) = H^{a + b}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, \OP{gr}^a \DRl\bigl(\dRSh{\rep}^\canext_\mathbb{C}(-\NCD^\Utext{$\star$-c})\bigr)\bigr) \\
& \cong \oplus_{w \in \WG^{\Grp{M}^c}, \; (w \cdot \wt)(H) = -a} \; H_\Utext{$\star$-c}^{a + b - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr),
\end{split}
\end{equation}
which is compatible with the canonical morphisms induced by any inclusions $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$ and induces a similar dual BGG decomposition
\begin{equation}\label{eq-thm-dual-BGG-int}
\begin{split}
& H^{a, b}_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C}) \\
& \cong \oplus_{w \in \WG^{\Grp{M}^c}, \; (w \cdot \wt)(H) = -a} \; H_{\Utext{$\star$-c} \to \Utext{$\circ$-c}}^{a + b - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr).
\end{split}
\end{equation}
Moreover, the Hodge--de Rham spectral sequence for $H^{a, b}_{\Hdg, \Utext{$\star$-c}}(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})$ induces the \emph{dual BGG spectral sequence}
\begin{equation}\label{eq-thm-dual-BGG-cpt-spec-seq}
\begin{split}
& E_1^{a, b} = \oplus_{w \in \WG^{\Grp{M}^c}, \, (w \cdot \wt)(H) = -a} \; H_\Utext{$\star$-c}^{a + b - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr) \\
& \Rightarrow H^{a + b}_{\dR, \Utext{$\star$-c}}(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C}),
\end{split}
\end{equation}
which degenerates on the $E_1$ page, and induces a \emph{dual BGG decomposition}
\begin{equation}\label{eq-thm-dual-BGG-spec-decomp-cpt}
\OP{gr} H^i_{\dR, \Utext{$\star$-c}}(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})
\cong \oplus_{w \in \WG^{\Grp{M}^c}} \; H_\Utext{$\star$-c}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr),
\end{equation}
for each $i \geq 0$, which is \Pth{strictly} compatible with the analogous decomposition for $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$ and therefore induces a dual BGG decomposition
\begin{equation}\label{eq-thm-dual-BGG-spec-decomp-int}
\begin{split}
& \OP{gr} H^i_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C}) \\
& \cong \oplus_{w \in \WG^{\Grp{M}^c}} \; H_{\Utext{$\star$-c} \to \Utext{$\circ$-c}}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr).
\end{split}
\end{equation}
\end{thm}
\begin{proof}
By tensoring the graded quasi-isomorphism between the log de Rham complex $\DRl(\dRSh{\rep}^\canext_\mathbb{C})$ and the \Pth{log} dual BGG complex $\BGGl(\dRSh{\rep}^\canext_\mathbb{C})$ in Theorem \ref{thm-dual-BGG} with the invertible $\mathcal{O}_{\Torcpt{\Model}_{\levcp, \mathbb{C}}}$-ideal $\mathcal{O}_{\Torcpt{\Model}_{\levcp, \mathbb{C}}}(-\NCD^\Utext{$\star$-c})$, and by taking graded pieces, we obtain a quasi-isomorphism between $\OP{gr}^a \DRl\bigl(\dRSh{\rep}^\canext_\mathbb{C}(-\NCD^\Utext{$\star$-c})\bigr)$ and $\bigl(\OP{gr}^a \BGGl(\dRSh{\rep}^\canext_\mathbb{C})\bigr)(-\NCD^\Utext{$\star$-c})$, and the differentials of the last complex are still all zero. Hence, by comparing the \Pth{algebraic} objects over $\mathbb{C}$ with their analogues over $\AC{\mathbb{Q}}_p$ as in \cite[\aRem \logRHremlocsystHodgedegen]{Diao/Lan/Liu/Zhu:lrhrv}, the desired isomorphism \Refeq{\ref{eq-thm-dual-BGG-cpt}} follows from \Refeq{\ref{eq-def-alg-Hdg-coh-cpt}}, and the remaining assertions follow from Theorems \ref{thm-comp-alg-dR-cpt} and \ref{thm-comp-alg-dR-int}.
\end{proof}
\begin{rk}\label{rem-dual-BGG-spec-seq}
Faltings first introduced the dual BGG spectral sequence associated with the \emph{stupid \Pth{\Qtn{b\^ete}} filtration} in \cite[\aSec 4, \apage 76, and \aSec 7, \aThm 11]{Faltings:1983-clshs}, whose degeneration on the $E_1$ page nevertheless implies \Pth{by comparison of total dimensions} the degeneration of the spectral sequence \Refeq{\ref{eq-thm-dual-BGG-cpt-spec-seq}} associated with the \emph{Hodge filtration}. The degeneracy on the $E_1$ page was first proved by Faltings himself in the compact case \Ref{see \cite[\aSec 4, \aThm 4]{Faltings:1983-clshs}}, later in the case of Siegel modular varieties by Faltings--Chai \Ref{see \cite[\aCh VI, \aThm 5.5]{Faltings/Chai:1990-DAV}} by reducing to the case of trivial coefficients over some toroidal compactifications of self-fiber products of universal abelian schemes \Pth{and this method can be generalized to the case of all PEL-type and Hodge-type Shimura varieties using \cite{Lan:2012-aatcs}, \cite{Lan:2012-tckf}, and \cite[\aSec 4.5]{Lan/Stroh:2018-ncaes}}, and in general by Harris--Zucker \Ref{see \cite[\aCor 4.2.3]{Harris/Zucker:2001-BCS-3}} using Saito's theory of mixed Hodge modules \Ref{see \cite[\aThm 2.14]{Saito:1990-mhm}}. Even when $I^\Utext{$\star$-c} = \emptyset$, our proof of the degeneration of the dual BGG spectral sequence \ref{eq-thm-dual-BGG-cpt-spec-seq}, which can be alternatively based on \cite[\aThms \logRHthmHTdegencomp{} and \logRHthmlocsystcomp]{Diao/Lan/Liu/Zhu:lrhrv}, is a new one.
\end{rk}
\subsection{Hodge--Tate weights}\label{sec-Sh-var-HT-wts}
The goal of this subsection is to describe the Hodge--Tate weights of $H^i_{\et, \Utext{$\star$-c}}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ and $H^i_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ in terms of the dimensions of the dual BGG pieces at the right-hand sides of \Refeq{\ref{eq-thm-dual-BGG-spec-decomp-cpt}} and \Refeq{\ref{eq-thm-dual-BGG-spec-decomp-int}}, respectively.
We first need to provide a definition for the Hodge--Tate weights of the cohomology of \'etale local systems over the infinite extension $\AC{\mathbb{Q}}_p$ of $\mathbb{Q}_p$. Let $\mathbb{C}_p$ denote the $p$-adic completion of $\AC{\mathbb{Q}}_p$ as usual. Let $(\dRSh{\rep}_{\AC{\mathbb{Q}}_p}, \nabla, \Fil^\bullet)$ denote the pullback of $(\dRSh{\rep}_\mathbb{C}, \nabla, \Fil^\bullet)$ under $\ACMap^{-1}: \mathbb{C} \Mi \AC{\mathbb{Q}}_p$. As in \cite[\aSec \logRHseclocsystconstr]{Diao/Lan/Liu/Zhu:lrhrv}, let $\Coef$ be a finite extension of $\mathbb{Q}_p$ in $\AC{\mathbb{Q}}_p$ such that $\rep_{\AC{\mathbb{Q}}_p}$ has a model $\rep_\Coef$ over $\Coef$, and let $\etSh{\rep}_\Coef$ be as in \cite[(\logRHeqlocsystetcoefbasech)]{Diao/Lan/Liu/Zhu:lrhrv}. Let $\BFp$ be a finite extension of the composite of $\ReFl$ and $\Coef$ in $\AC{\mathbb{Q}}_p$, so that we have $\ReFl \Emn{\can} \AC{\mathbb{Q}} \Emn{~~\ACMap^{-1}} \AC{\mathbb{Q}}_p$, and let $\CoefMap: \Coef \otimes_{\mathbb{Q}_p} \BFp \to \BFp$ be the multiplication homomorphism $a \otimes b \mapsto ab$, as in \cite[(\logRHeqcoefproj)]{Diao/Lan/Liu/Zhu:lrhrv}. By Theorem \ref{thm-comp-alg-dR-cpt}, $H^i_{\et, \Utext{$\star$-c}}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_\Coef)$ is a de Rham representation of $\Gal(\AC{\mathbb{Q}}_p/\BFp)$, and we have a canonical $\Gal(\AC{\mathbb{Q}}_p/\BFp)$-equivariant Hecke-equivariant isomorphism
\begin{equation}\label{eq-Sh-var-arith-log-RH-comp-dR-cpt}
H^i_{\et, \Utext{$\star$-c}}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_\Coef) \otimes_{\mathbb{Q}_p} B_\dR \cong H^i_{\dR, \Utext{$\star$-c}}\bigl(\Model_{\levcp, \BFp}, D_\dR^\alg(\etSh{\rep}_\Coef)\bigr) \otimes_\BFp B_\dR,
\end{equation}
which is compatible with the filtrations on both sides, whose $0$-th graded piece is a canonical $\Gal(\AC{\mathbb{Q}}_p/\BFp)$-equivariant Hecke-equivariant isomorphism
\begin{equation}\label{eq-Sh-var-arith-log-RH-comp-HT-cpt}
\begin{split}
& H^i_{\et, \Utext{$\star$-c}}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_\Coef) \otimes_{\mathbb{Q}_p} \mathbb{C}_p \\
& \cong \oplus_{a + b = i} \, \Bigl(H^{a, b}_{\Hdg, \Utext{$\star$-c}}\bigl(\Model_{\levcp, \BFp}, D_\dR^\alg(\etSh{\rep}_\Coef)\bigr) \otimes_\BFp \mathbb{C}_p(-a)\Bigr).
\end{split}
\end{equation}
By pushing out \Refeq{\ref{eq-Sh-var-arith-log-RH-comp-dR-cpt}} and \Refeq{\ref{eq-Sh-var-arith-log-RH-comp-HT-cpt}} via the projection $\CoefMap$, by \cite[\aThm \logRHthmlocsystcomp]{Diao/Lan/Liu/Zhu:lrhrv}, and by Theorem \ref{thm-comp-alg-dR-int}, we obtain the following:
\begin{thm}\label{thm-Sh-var-comp-dR-HT}
Suppose that $\rep \in \Rep_{\AC{\mathbb{Q}}}(\Grp{G}^c)$, and that $\BFp$ is a finite extension of the image of $\ReFl \Emn{\can} \AC{\mathbb{Q}} \Emn{~~\ACMap^{-1}} \AC{\mathbb{Q}}_p$ over which $\rep_{\AC{\mathbb{Q}}_p}$ has a model. Then there is a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant \emph{de Rham comparison} isomorphism
\begin{equation}\label{eq-Sh-var-comp-dR-cpt}
H_{\et, \Utext{$\star$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} B_\dR \cong H_{\dR, \Utext{$\star$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \dRSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} B_\dR,
\end{equation}
compatible with the filtrations on both sides, whose $0$-th graded piece is a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant \emph{Hodge--Tate comparison} isomorphism
\begin{equation}\label{eq-Sh-var-comp-HT-cpt}
\begin{split}
& H_{\et, \Utext{$\star$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p \\
& \cong \oplus_{a + b = i} \, \bigl(H^{a, b}_{\Hdg, \Utext{$\star$-c}}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \dRSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p(-a)\bigr).
\end{split}
\end{equation}
Moreover, \Refeq{\ref{eq-Sh-var-comp-dR-cpt}} and \Refeq{\ref{eq-Sh-var-comp-HT-cpt}} are compatible with the canonical morphisms defined by inclusions $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, and also with Poincar\'e and Serre duality, which induce a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant \emph{de Rham comparison} isomorphism
\begin{equation}\label{eq-Sh-var-comp-dR-int}
\begin{split}
& H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} B_\dR \\
& \cong H_{\dR, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \dRSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} B_\dR,
\end{split}
\end{equation}
which is compatible with the filtrations on both sides and with Poincar\'e duality, whose $0$-th graded piece is a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant \emph{Hodge--Tate comparison} isomorphism
\begin{equation}\label{eq-Sh-var-comp-HT-int}
\begin{split}
& H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p \\
& \cong \oplus_{a + b = i} \, \bigl(H^{a, b}_{\Hdg, \Utext{$\star$-c} \to \Utext{$\circ$-c}}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \dRSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p(-a)\bigr),
\end{split}
\end{equation}
which is compatible with Poincar\'e and Serre duality.
\end{thm}
\begin{defn}\label{def-Sh-var-HT-wts}
For $? = \Utext{$\star$-c}$ or $\Utext{$\star$-c} \to \Utext{$\circ$-c}$, we abusively define the multiset of \emph{Hodge--Tate weights} of $H_{\et, ?}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ to be the multiset of integers in which each $a \in \mathbb{Z}$ has multiplicity $\dim_{\AC{\mathbb{Q}}_p}\bigl(H_{\Hdg, ?}^{a, i - a}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \dRSh{\rep}_{\AC{\mathbb{Q}}_p})\bigr)$. We naturally extend the definition to $\AC{\mathbb{Q}}_p$-subspaces of $H_{\et, ?}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ cut out by $\AC{\mathbb{Q}}_p$-valued Hecke operators by replacing $H_{\Hdg, ?}^{a, i - a}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \dRSh{\rep}_{\AC{\mathbb{Q}}_p})$ with their corresponding $\AC{\mathbb{Q}}_p$-subspaces cut out by the same $\AC{\mathbb{Q}}_p$-valued Hecke operators.
\end{defn}
\begin{thm}\label{thm-HT-wts}
With the same setting as in Theorem \ref{thm-Sh-var-comp-dR-HT}, suppose $\rep \cong \dual{\rep}_\wt$ for some $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$. For any $\repalt$ in $\Rep_{\AC{\mathbb{Q}}}(\Grp{M}^c)$, let $\cohSh{\repalt}^\canext_{\AC{\mathbb{Q}}_p}$ be the pullback of $\cohSh{\repalt}^\canext_\mathbb{C}$ under $\ACMap^{-1}: \mathbb{C} \Mi \AC{\mathbb{Q}}_p$. Then we have a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant isomorphism
\begin{equation}\label{eq-Sh-var-HT-decomp-dual-BGG-cpt}
\begin{split}
& H_{\et, \Utext{$\star$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p \\
& \cong \oplus_{w\in \WG^{\Grp{M}^c}} \; \Bigl(H_\Utext{$\star$-c}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \AC{\mathbb{Q}}_p}, (\cohSh{\repalt}_{w \cdot \wt, \AC{\mathbb{Q}}_p}^\dualsign)^\canext\bigr) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p\bigl((w \cdot \wt)(H)\bigr)\Bigr),
\end{split}
\end{equation}
which is the dual BGG version of the Hodge--Tate decomposition \Refeq{\ref{eq-Sh-var-comp-HT-cpt}}. This isomorphism \Refeq{\ref{eq-Sh-var-HT-decomp-dual-BGG-cpt}} is compatible with the canonical morphisms defined by any inclusions $I^\Utext{$\circ$-c} \subset I^\Utext{$\star$-c} \subset I$, and with Poincar\'e and Serre duality; and induces a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant isomorphism
\begin{equation}\label{eq-Sh-var-HT-decomp-dual-BGG-int}
\begin{split}
& H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p \\
& \cong \oplus_{w\in \WG^{\Grp{M}^c}} \; \Bigl(H_{\Utext{$\star$-c} \to \Utext{$\circ$-c}}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \AC{\mathbb{Q}}_p}, (\cohSh{\repalt}_{w \cdot \wt, \AC{\mathbb{Q}}_p}^\dualsign)^\canext\bigr) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p\bigl((w \cdot \wt)(H)\bigr)\Bigr),
\end{split}
\end{equation}
compatible with Poincar\'e and Serre duality. The multiset of Hodge--Tate weights of any Hecke-invariant $\AC{\mathbb{Q}}_p$-subspace of $H_{\et, \Utext{$\star$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ cut out by some $\AC{\mathbb{Q}}_p$-valued Hecke operator \Ref{as in Definition \ref{def-Sh-var-HT-wts}} contains each $a \in \mathbb{Z}$ with multiplicity given by the $\mathbb{C}$-dimension of the corresponding Hecke-invariant $\mathbb{C}$-subspace of
\begin{equation}\label{eq-Sh-var-HT-decomp-dual-BGG-wt}
\oplus_{w\in \WG^{\Grp{M}^c}, \; (w \cdot \wt)(H) = -a} \; H_\Utext{$\star$-c}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr)
\end{equation}
cut out by the pullback of the same $\AC{\mathbb{Q}}_p$-valued Hecke operator under $\ACMap: \AC{\mathbb{Q}}_p \Mi \mathbb{C}$. The same holds with $H_{\et, \Utext{$\star$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ \Pth{\resp $H_\Utext{$\star$-c}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr)$} replaced with $H_{\et, \Utext{$\star$-c} \to \Utext{$\circ$-c}}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ \Pth{\resp $H_{\Utext{$\star$-c} \to \Utext{$\circ$-c}}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr)$}.
\end{thm}
\begin{proof}
This follows from Theorems \ref{thm-Sh-var-comp-dR-HT} and \ref{thm-dual-BGG-cpt-int}, and from the fact \Pth{which we have implicitly used several times} that the formation of coherent hypercohomology of qcqs schemes is compatible with arbitrary base field extensions.
\end{proof}
\begin{rk}\label{rem-HT-wts-Siegel}
All previously known special cases of \Refeq{\ref{eq-Sh-var-HT-decomp-dual-BGG-cpt}} \Ref{see, for example, \cite[\aThm 6.2]{Faltings/Chai:1990-DAV} and \cite[\aSec III.2]{Harris/Taylor:2001-GCS}} were proved using the Hodge--Tate comparison for the cohomology with trivial coefficients of some families of abelian varieties \Pth{and their smooth toroidal compactifications, in the noncompact case}. The novelty in Theorem \ref{thm-HT-wts} is that we can deal with nontrivial coefficients that are not at all related to families of abelian varieties.
\end{rk}
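For orientation, let us also recall, only as an informal illustration and with all normalizations \Pth{and support conditions} suppressed, the most classical special case. When $\Model_\levcp$ is a modular curve and $\rep$ is the coefficient system attached to weight-$k$ modular forms \Pth{with $k \geq 2$}, the set $\WG^{\Grp{M}^c}$ has exactly two elements, of lengths $0$ and $1$, and \Refeq{\ref{eq-Sh-var-HT-decomp-dual-BGG-cpt}} becomes a decomposition of $H^1_{\et} \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p$ into a piece built out of $H^0\bigl(\Torcpt{\Model}_{\levcp, \AC{\mathbb{Q}}_p}, \omega^{\otimes k}\bigr)$ and a piece built out of $H^1\bigl(\Torcpt{\Model}_{\levcp, \AC{\mathbb{Q}}_p}, \omega^{\otimes (2 - k)}\bigr)$, each tensored with a suitable Tate twist of $\mathbb{C}_p$, where $\omega$ denotes the usual line bundle whose sections are weight-one modular forms; the corresponding Hodge--Tate weights are $k - 1$ and $0$, in agreement with Faltings's classical Hodge--Tate decomposition for modular curves.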
\begin{rk}\label{rem-HT-wts-compute}
As in \cite[\aEx 4.6]{Harris:1990-afcvs}, we can often compute the dimension of $H_\intcoh^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr)$ and its Hecke-invariant $\mathbb{C}$-subspaces cut out by $\mathbb{C}$-valued Hecke operators in terms of relative Lie algebra cohomology. Thanks to the recent work \cite{Su:2018-ccsva}, it might also be possible to compute the dimension of $H_\Utext{$\star$-c}^{i - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr)$ and its Hecke-invariant $\mathbb{C}$-subspaces cut out by $\mathbb{C}$-valued Hecke operators in terms of relative Lie algebra cohomology when the image of $\NCD^\Utext{$\star$-c}$ in the minimal compactification $\Mincpt{\Model}_{\levcp, \mathbb{C}}$ of $\Model_{\levcp, \mathbb{C}}$ \Ref{as in \cite{Pink:1989-Ph-D-Thesis}} is stable under the Hecke action of $\Grp{G}(\bAi)$.
\end{rk}
\begin{rk}\label{ex-HT-wts-indep}
In the special \Pth{but still common} case where $\rep_{\AC{\mathbb{Q}}_p}$ has a model $\rep_{\mathbb{Q}_p}$ over $\mathbb{Q}_p$, we can take $\Coef = \mathbb{Q}_p$ in the above, and the choice of $\ACMap: \AC{\mathbb{Q}}_p \Mi \mathbb{C}$ corresponds to the choice of places $v$ of $\ReFl$ above $p$. Then $H^i_{\et, ?}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\mathbb{Q}_p})$ is a \emph{de Rham} representation of $\Gal(\AC{\mathbb{Q}}_p/\BFp)$, for $? = \Utext{$\star$-c}$ or $\Utext{$\star$-c} \to \Utext{$\circ$-c}$, and the de Rham comparison isomorphisms \Refeq{\ref{eq-Sh-var-comp-dR-cpt}} and \Refeq{\ref{eq-Sh-var-comp-dR-int}} can be rewritten as
\[
H^i_{\et, ?}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\mathbb{Q}_p}) \otimes_{\mathbb{Q}_p} B_\dR \cong H^i_{\dR, ?}(\Model_{\levcp, \BFp}, \dRSh{\rep}_\BFp) \otimes_\BFp B_\dR
\]
\Ref{\Refcf{} \Refeq{\ref{eq-Sh-var-arith-log-RH-comp-dR-cpt}}}. Moreover, the assertion in Theorem \ref{thm-HT-wts} that the Hodge--Tate weights of $H^i_{\et, ?}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ \Pth{as in Definition \ref{def-Sh-var-HT-wts}} depend only on the $\mathbb{C}$-dimension of \Refeq{\ref{eq-Sh-var-HT-decomp-dual-BGG-wt}}, but not on the choice of $v$, implies that the \Pth{usual} Hodge--Tate weights of $H^i_{\et, ?}(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\mathbb{Q}_p})$ \Pth{as a representation of $\Gal(\AC{\mathbb{Q}}_p/\BFp)$} are also independent of $v$.
\end{rk}
\subsection{Some application to intersection cohomology}\label{sec-Sh-var-IH}
In this final subsection, let us discuss an important special case where we can apply our results to the \emph{intersection cohomology} of Shimura varieties with nontrivial coefficients, simply because it coincides with the interior cohomology.
Let us begin with some review of definitions. Consider the interior cohomology
\begin{equation}\label{eq-def-int-coh-B}
H_\intcoh^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) := \Image\bigl(H_\cpt^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \to H^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C})\bigr),
\end{equation}
as usual. By \cite[XI, 4.4, and XVII, 5.3.3 and 5.3.5]{SGA:4} and \cite[\aSec 6]{Beilinson/Bernstein/Deligne/Gabber:2018-FP(2)}, for $? = \emptyset$, $\cpt$, and $\intcoh$, we have canonical Hecke-equivariant isomorphisms
\begin{equation}\label{eq-B-et-comp}
H_?^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \cong H_{\et, ?}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p, \ACMap} \mathbb{C}
\end{equation}
compatible with each other and with Poincar\'e duality. Also, by \cite[II, 6]{Deligne:1970-EDR}, \cite[\aSec 2.11 and \aCor 2.12]{Esnault/Viehweg:1992-LVT-B}, and GAGA \cite{Serre:1955-1956-gaga}, for $? = \emptyset$, $\cpt$, and $\intcoh$, we have canonical Hecke-equivariant isomorphisms
\begin{equation}\label{eq-B-dR-comp}
H_?^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \cong H_{\dR, ?}^i(\Model_{\levcp, \mathbb{C}}^\an, \dRSh{\rep}_\mathbb{C}^\an) \cong H_{\dR, ?}^i(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})
\end{equation}
compatible with each other and with Poincar\'e duality. By the same argument as in the proof of Theorem \ref{thm-int-coh-comp}, by using the degeneration of Hodge--de Rham spectral sequences on the $E_1$ pages, the Hodge filtrations on $H_{\dR, \cpt}^i(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})$ and $H_\dR^i(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})$ are strictly compatible with the canonical morphism between them, and induce the same Hodge filtration on $H_{\dR, \intcoh}^i(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})$.
Let $\Mincpt{\Model}_\levcp$ denote the minimal compactification of $\Model_\levcp$ over $\ReFl$, as in \cite{Pink:1989-Ph-D-Thesis}, and let $\Mincpt{\Model}_{\levcp, \mathbb{C}}$ and $\Mincpt{\Model}_{\levcp, \AC{\mathbb{Q}}_p}$ denote the pullbacks of $\Mincpt{\Model}_\levcp$ to $\mathbb{C}$ and $\AC{\mathbb{Q}}_p$, respectively. Let $\Mincpt{\jmath}: \Model_\levcp \to \Mincpt{\Model}_\levcp$ denote the canonical open immersion, whose pullbacks to $\mathbb{C}$ and $\AC{\mathbb{Q}}_p$ we shall denote by the same symbols, for simplicity. Let $d := \dim(\Model_\levcp)$. For each $\rep \in \Rep_{\AC{\mathbb{Q}}}(\Grp{G}^c)$, by abuse of notation, consider the \emph{intersection cohomology}
\begin{equation}\label{eq-IC-B}
\IH^i\bigl(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}\bigr) := H^{i - d}\bigl(\Model_{\levcp, \mathbb{C}}^{\Min, \an}, \jmath_{!*}^{\Min, \an}(\BSh{\rep}_\mathbb{C}[d])\bigr)
\end{equation}
and
\begin{equation}\label{eq-IC-et}
\IH_\et^i\bigl(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}\bigr) := H^{i - d}\bigl(\Mincpt{\Model}_{\levcp, \AC{\mathbb{Q}}_p}, \Mincpt{\jmath}_{\et, !*}(\etSh{\rep}_{\AC{\mathbb{Q}}_p}[d])\bigr).
\end{equation}
By \cite[\aSec 6]{Beilinson/Bernstein/Deligne/Gabber:2018-FP(2)}, we have a canonical Hecke-equivariant isomorphism
\begin{equation}\label{eq-IC-B-et-comp}
\IH^i\bigl(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}\bigr) \cong \IH_\et^i\bigl(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}\bigr) \otimes_{\AC{\mathbb{Q}}_p, \ACMap} \mathbb{C},
\end{equation}
which is compatible with \Refeq{\ref{eq-B-et-comp}} via canonical morphisms, and with Poincar\'e duality. By Zucker's conjecture \cite{Zucker:1982-lcwpa}, which has been proved \Pth{independently} by Looijenga \cite{Looijenga:1988-lclsv}, Saper--Stern \cite{Saper/Stern:1990-l2cav}, and Looijenga--Rapoport \cite{Looijenga/Rapoport:1991-wlcbc}, we have a canonical Hecke-equivariant isomorphism
\begin{equation}\label{eq-Zucker}
\IH^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \cong H_{(2)}^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}),
\end{equation}
where $H_{(2)}^i\big(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}\big)$ denotes the \emph{$L^2$-cohomology}, as in \cite[\aCh XIV, \aSec 3]{Borel/Wallach:2000-CDR(2)}, which is compatible with \Refeq{\ref{eq-B-dR-comp}} via canonical morphisms.
The left-hand side of \Refeq{\ref{eq-Zucker}} is equipped with the Hodge structure given by Saito's theory of mixed Hodge modules \Ref{see \cite{Saito:1990-mhm}}, while the right-hand side of \Refeq{\ref{eq-Zucker}} is equipped with the Hodge structure given by $L^2$ harmonic forms \Ref{which can be refined by a double dual BGG decomposition, as in \cite[\aSec 6]{Faltings:1983-clshs}}. But it is unclear that these two Hodge structures are compatible under the isomorphism \Refeq{\ref{eq-Zucker}} \Ref{\Refcf{} \cite[\aConj 5.3]{Harris/Zucker:2001-BCS-3}}. Nevertheless, the following is known:
\begin{thm}[{Harris--Zucker; see \cite[\aThm 5.4]{Harris/Zucker:2001-BCS-3}}]\label{thm-Harris-Zucker}
The canonical morphisms from both sides of \Refeq{\ref{eq-Zucker}} to $H_\dR^i(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})$ are strictly compatible with Hodge filtrations. In particular, the Hodge filtrations on both sides of \Refeq{\ref{eq-Zucker}} induce the same Hodge structure on their common image in $H_\dR^i(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})$.
\end{thm}
In general, we have Hecke-equivariant inclusions
\begin{equation}\label{eq-H-cusp-H-int-H-2}
H_{\Utext{cusp}}^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \subset H_\intcoh^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \subset H_{(2)}^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}),
\end{equation}
where $H_{\Utext{cusp}}^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C})$ denotes the \emph{cuspidal cohomology} \Ref{see \cite{Borel:1974-srcag, Borel:1981-srcag-2}}, which is compatible with \Refeq{\ref{eq-B-dR-comp}} and \Refeq{\ref{eq-Zucker}} via canonical morphisms.
We have the following useful results:
\begin{thm}[{Schwermer; see \cite[\aCor 2.3]{Schwermer:1994-escag}}]\label{thm-Schwermer}
Suppose that $\rep \cong \dual{\rep}_\wt$ for some $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$ that is \emph{regular} in the sense that $(\wt, \cort) > 0$ for every simple root $\rt \in \RT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$. Then all the containments in \Refeq{\ref{eq-H-cusp-H-int-H-2}} are equalities.
\end{thm}
\begin{thm}[{Li--Schwermer; see \cite[\aCor 5.6]{Li/Schwermer:2004-ecag}}]\label{thm-Li-Schwermer}
Suppose that $\rep \cong \dual{\rep}_\wt$ for some \emph{regular} $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$. Then $H_\cpt^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) = 0$ for $i > d$; $H^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) = 0$ for $i < d$; and $H_\intcoh^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) = 0$ for $i \neq d$.
\end{thm}
\begin{cor}\label{cor-IH}
Suppose that $\rep \cong \dual{\rep}_\wt$ for some \emph{regular} $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$. Then we have canonical Hecke-equivariant isomorphisms
\begin{equation}\label{eq-cor-IH-B}
\IH^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \cong H_\intcoh^i(\Model_{\levcp, \mathbb{C}}^\an, \BSh{\rep}_\mathbb{C}) \cong H_{\dR, \intcoh}^i(\Model_{\levcp, \mathbb{C}}, \dRSh{\rep}_\mathbb{C})
\end{equation}
compatible with Hodge filtrations and Poincar\'e duality, and also a canonical Hecke-equivariant isomorphism
\begin{equation}\label{eq-cor-IH-et}
\IH_\et^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \cong H_{\et, \intcoh}^i(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}),
\end{equation}
and the cohomology in either \Refeq{\ref{eq-cor-IH-B}} or \Refeq{\ref{eq-cor-IH-et}} can be nonzero only when $i = d$.
\end{cor}
\begin{proof}
This follows from \Refeq{\ref{eq-B-et-comp}} and \Refeq{\ref{eq-B-dR-comp}}; from Theorems \ref{thm-Harris-Zucker}, \ref{thm-Schwermer}, and \ref{thm-Li-Schwermer}; and from the compatibility of the Poincar\'e duality on intersection cohomology with the usual one.
\end{proof}
\begin{thm}\label{thm-Sh-var-IH}
Suppose that $\rep \cong \dual{\rep}_\wt$ for some \emph{regular} $\wt \in \WT_{\Grp{G}^c_{\AC{\mathbb{Q}}}}^+$. Let $\BFp$ be a finite extension of the image of $\ReFl \Emn{\can} \AC{\mathbb{Q}} \Emn{~~\ACMap^{-1}} \AC{\mathbb{Q}}_p$ over which $\rep_{\AC{\mathbb{Q}}_p}$ has a model. Then we have a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant isomorphism
\begin{equation}\label{eq-Sh-var-comp-dR-IH}
\IH_\et^d(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} B_\dR \cong H_{\dR, \intcoh}^d(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \dRSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} B_\dR,
\end{equation}
which is compatible with the filtrations on both sides and with Poincar\'e duality, whose $0$-th graded piece can be refined by a canonical $\Gal(\AC{\mathbb{Q}}_p / \BFp)$-equivariant Hecke-equivariant \emph{dual BGG decomposition}
\begin{equation}\label{eq-Sh-var-comp-HT-IH}
\begin{split}
& \IH_\et^d(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p}) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p \\
& \cong \oplus_{w\in \WG^{\Grp{M}^c}} \, \Bigl(H_\intcoh^{d - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \AC{\mathbb{Q}}_p}, (\cohSh{\repalt}_{w \cdot \wt, \AC{\mathbb{Q}}_p}^\dualsign)^\canext\bigr) \otimes_{\AC{\mathbb{Q}}_p} \mathbb{C}_p\bigl((w \cdot \wt)(H)\bigr)\Bigr),
\end{split}
\end{equation}
compatible with Poincar\'e and Serre duality. The multiset of Hodge--Tate weights of any Hecke-invariant $\AC{\mathbb{Q}}_p$-subspace of $\IH_\et^d(\Model_{\levcp, \AC{\mathbb{Q}}_p}, \etSh{\rep}_{\AC{\mathbb{Q}}_p})$ cut out by some $\AC{\mathbb{Q}}_p$-valued Hecke operator \Ref{as in Definition \ref{def-Sh-var-HT-wts}} contains each $a \in \mathbb{Z}$ with multiplicity given by the $\mathbb{C}$-dimension of the corresponding Hecke-invariant $\mathbb{C}$-subspace of
\begin{equation}\label{eq-Sh-var-comp-IH-HT-wt}
\oplus_{w\in \WG^{\Grp{M}^c}, \; (w \cdot \wt)(H) = -a} \; H_\intcoh^{d - \wl(w)}\bigl(\Torcpt{\Model}_{\levcp, \mathbb{C}}, (\cohSh{\repalt}_{w \cdot \wt, \mathbb{C}}^\dualsign)^\canext\bigr)
\end{equation}
cut out by the pullback of the same $\AC{\mathbb{Q}}_p$-valued Hecke operator under $\ACMap: \AC{\mathbb{Q}}_p \Mi \mathbb{C}$.
\end{thm}
\begin{proof}
This follows from Theorems \ref{thm-Sh-var-comp-dR-HT} and \ref{thm-HT-wts}, and Corollary \ref{cor-IH}.
\end{proof}
\begin{rk}\label{rem-Sh-var-IH}
We naturally expect the de Rham isomorphism to hold for the intersection cohomology in greater generality, which will be an interesting topic for a future project. But we would like to record the results in Theorem \ref{thm-Sh-var-IH} because regular weights already cover, depending on one's viewpoint, almost all weights.
\end{rk}
\subsection{Consequences of Theorem~\ref{thm:main}}
\label{sec:consequences}
A straightforward consequence of Theorem~\ref{thm:main} is that if $p$ has a
definite determinantal representation, and $e$ is a direction of hyperbolicity
for $p$, then the hyperbolicity cone associated with the directional derivative
$D_ep$ is spectrahedral.
\begin{corollary}
\label{cor:spec}
If $p(x) = \det(\sum_{i=1}^{n}A_ix_i)$ for symmetric $\ell\times \ell$
matrices $A_1,\ldots,A_n$, and $A_0 = \sum_{i=1}^{n}A_i e_i$ is positive
definite, then $\Lambda_+(D_ep,e)$ has a spectrahedral representation of size
$\binom{\ell+1}{2}-1$.
\end{corollary}
\begin{proof}
The hyperbolicity cone $\Lambda_+(D_ep,e)$ can be expressed as
\[ \Lambda_{+}(D_ep,e)= \bigg\{x\in \mathbb{R}^n\;:\;\sum_{i=1}^{n}A_0^{-1/2}A_iA_0^{-1/2}x_i \in \psdcone{\ell}{1}\bigg\}.\]
(see, e.g.,~\cite[Proposition 4]{saunderson2015polynomial}). Applying Theorem~\ref{thm:main} then gives
\[ \Lambda_{+}(D_ep,e) = \bigg\{x\in \mathbb{R}^n\;:\;\mathcal{B}\left(\sum_{i=1}^{n}A_0^{-1/2}A_iA_0^{-1/2}x_i\right)\succeq 0\bigg\}.\]
\end{proof}
Our main result also yields a spectrahedral representation of $\orthant{n}{2}$,
the second derivative relaxation of the non-negative orthant, of size
$\binom{n}{2}-1$. This is, in fact, a special case of Corollary~\ref{cor:spec}.
In the statement below, $V_n$ is any $n\times (n-1)$ matrix whose columns form an
orthonormal basis of $1_n^\perp$.
\begin{corollary}
\label{cor:e2}
The hyperbolicity cone $\orthant{n}{2}$ has a spectrahedral
representation of size $\binom{n}{2}-1$ given by
\[ \orthant{n}{2} = \{x\in \mathbb{R}^{n}\;:\; \mathcal{B}(V_n^T\diag(x)V_n) \succeq 0\}.\]
\end{corollary}
\begin{proof}
First, we use the fact that $\orthant{n}{2} = \Lambda_{+}(D_{1_n} e_{n-1},1_n)$.
Then, by Sanyal's result (Proposition~\ref{prop:sanyal}), we know that
$e_{n-1}(x)$ has the definite determinantal representation
$c\,e_{n-1}(x) = \det(V_n^T\diag(x)V_n)$, of size $n-1$. Since $V_n$ has
orthonormal columns, $A_0 = V_n^T\diag(1_n)V_n = V_n^TV_n = I_{n-1}$, so
applying Corollary~\ref{cor:spec} with polynomial $p = e_{n-1}$ and direction
$e = 1_n$ gives exactly the stated representation, of size $\binom{n}{2}-1$.
\end{proof}
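The following sketch (not part of the paper's arguments; it assumes only \texttt{numpy}, and the choice $n = 6$ and the randomly generated orthonormal $V_n$ are ours) numerically cross-checks Corollary~\ref{cor:e2} on random inputs: it compares the spectrahedral test $\mathcal{B}(V_n^T\diag(x)V_n)\succeq 0$ with a direct membership test for $\orthant{n}{2}$, obtained by checking that all roots of $t\mapsto e_{n-2}(x - t 1_n)$ are non-negative.
\begin{verbatim}
# Numerical sanity check of the spectrahedral representation of orthant{n}{2};
# a sketch assuming only numpy, not part of the paper's proofs.
import numpy as np

def esym(y, k):
    # np.poly(-y) lists the coefficients of prod_i (s + y_i), i.e.
    # [1, e_1(y), ..., e_n(y)], so entry k is e_k(y).
    return np.poly(-np.asarray(y, dtype=float))[k]

def in_second_derivative_cone(x, tol=1e-8):
    # x lies in the hyperbolicity cone of e_{n-2} w.r.t. 1_n iff all roots
    # of q(t) = e_{n-2}(x - t 1_n) are nonnegative (q is real-rooted).
    n = len(x)
    ts = np.linspace(-1.0, 1.0, n - 1)
    vals = [esym(x - t, n - 2) for t in ts]
    coeffs = np.polyfit(ts, vals, n - 2)     # exact fit: degree n-2, n-1 points
    return np.all(np.roots(coeffs).real >= -tol)

def bmap(Y):
    # B(Y)_{ij} = tr(B_i Y B_j) for a basis (B_i) of the trace-zero symmetric
    # matrices of the same size as Y.
    m = Y.shape[0]
    basis = []
    for i in range(m):
        for j in range(i, m):
            if i == j and i == m - 1:
                continue
            E = np.zeros((m, m))
            if i == j:
                E[i, i], E[m - 1, m - 1] = 1.0, -1.0   # traceless diagonal
            else:
                E[i, j] = E[j, i] = 1.0                # symmetric off-diagonal
            basis.append(E)
    d = len(basis)                                     # = binom(m+1, 2) - 1
    return np.array([[np.trace(basis[i] @ Y @ basis[j]) for j in range(d)]
                     for i in range(d)])

n, trials, rng = 6, 200, np.random.default_rng(0)
A = rng.standard_normal((n, n - 1))
A -= np.outer(np.ones(n), np.ones(n) @ A) / n      # push columns into 1_n^perp
V, _ = np.linalg.qr(A)                             # orthonormal basis of 1_n^perp
agree = 0
for _ in range(trials):
    x = rng.standard_normal(n)
    spectrahedral = np.min(np.linalg.eigvalsh(bmap(V.T @ np.diag(x) @ V))) >= -1e-8
    agree += (spectrahedral == in_second_derivative_cone(x))
print(f"{agree}/{trials} random points give matching answers")
\end{verbatim}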
\subsection{Questions}
\paragraph{Constructing spectrahedral representations}
It is natural to ask for which values of $k$ the cones $\psdcone{n}{k}$ are spectrahedral.
Our main result shows that $\psdcone{n}{1}$ has a spectrahedral representation of size $d = \binom{n+1}{2}-1$.
The only other cases for which spectrahedral representations are known are the straightforward cases $k=n-1$ and $k=n-2$.
If $k=n-1$ then
\[ \psdcone{n}{n-1} = \{X\in \mathcal{S}^n\;:\;\textup{tr}(X) \geq 0\}\]
is a spectrahedron (with a representation of size $1$).
Since $\psdcone{n}{n-2}$ is a quadratic cone, it is a spectrahedron. To give an explicit representation,
let $d = \binom{n+1}{2}-1$ and $B_1,B_2,\ldots,B_{d}$ be an \emph{orthonormal} basis
(with respect to the trace inner product) for the subspace $I_n^\perp$. Now
$X\in \psdcone{n}{n-2}$ if and only if (see, e.g.,~\cite[Section 5.1]{saunderson2015polynomial})
\begin{equation}
\label{eq:Snn2}
\textup{tr}(X) \geq 0\;\;\textup{and}\;\;
\textup{tr}(X)^2 - \textup{tr}(X^2) = \left[\sqrt{\frac{n-1}{n}}\textup{tr}(X)\right]^2 - \sum_{i=1}^{d}\textup{tr}(B_iX)^2 \geq 0.
\end{equation}
By a well-known spectrahedral representation of the second-order cone, \eqref{eq:Snn2} holds if and only if
\begin{equation}
\label{eq:quad} \sqrt{\frac{n-1}{n}}\textup{tr}(X)I_{d} + \begin{bmatrix} \textup{tr}(B_1X) & \textup{tr}(B_2X) & \textup{tr}(B_3X)& \cdots & \textup{tr}(B_dX)\\
\textup{tr}(B_2X) & - \textup{tr}(B_1X)& 0 &\cdots &0\\
\textup{tr}(B_3X)& 0 & - \textup{tr}(B_1X) & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\textup{tr}(B_d X) & 0 &0 &\cdots & - \textup{tr}(B_1X)\end{bmatrix} \succeq 0.
\end{equation}
So we see that $\psdcone{n}{n-2}$ has a spectrahedral representation of size $d= \binom{n+1}{2}-1$.
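One way to check this equivalence directly (a short verification, not needed elsewhere): write $\alpha = \sqrt{\tfrac{n-1}{n}}\,\textup{tr}(X)$ and $u_i = \textup{tr}(B_iX)$. The matrix in \eqref{eq:quad} has eigenvalues $\alpha \pm \big(\sum_{i=1}^{d}u_i^2\big)^{1/2}$, together with $\alpha - u_1$ repeated $d-2$ times, so it is positive semidefinite if and only if $\alpha \geq \big(\sum_{i=1}^{d}u_i^2\big)^{1/2}$, which is precisely~\eqref{eq:Snn2}.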
At this stage, it is unclear how to extend the approach in this paper to the remaining cases.
\begin{question}
Are the cones $\psdcone{n}{k}$ spectrahedral for $k=2,3,\ldots,n-3$?
\end{question}
At first glance, it may seem that Corollary~\ref{cor:spec} allows us to
construct a spectrahedral representation for $\psdcone{n}{2}$ from a
spectrahedral representation for $\psdcone{n}{1}$. However, this is not the
case. To apply Corollary~\ref{cor:spec} to this situation, we would need a
definite determinantal representation of $E_{n-1}(X)$, which our main result
(Theorem~\ref{thm:main}) does not provide.
\paragraph{Lower bounds on size}
Another natural question concerns the size of spectrahedral representations of
hyperbolicity cones. Given a hyperbolicity cone $K$, there is a unique (up to
scaling) hyperbolic polynomial $p$ of smallest degree $d$ that vanishes on the
boundary of $K$~(see, e.g.,~\cite{kummer2016two}). Clearly any spectrahedral
representation must have size at least $d$, but it seems that in some cases the
smallest spectrahedral representation (if it exists at all) must have larger
size.
\begin{question}
Is there a spectrahedral representation of $\psdcone{n}{1}$ with size smaller than $\binom{n+1}{2}-1$?
\end{question}
Recently, there has been considerable interest in developing methods for
producing lower bounds on the size of projected spectrahedral descriptions of
convex sets (see, e.g.,~\cite{fawzi2015positive}). There has been much
less development in the case of lower bounds on the size of spectrahedral
descriptions. The main work in this direction is due to
Kummer~\cite{kummer2016two}. For instance it follows from~\cite[Theorem
1]{kummer2016two} that any spectrahedral representation of the quadratic cone
$\psdcone{n}{n-2}$ must have size at least
$\frac{1}{2}\left[\binom{n+1}{2}-1\right]$. Furthermore, in the special case
that $\binom{n+1}{2} - 1 = 2^{k}+1$ for some $k$ (which occurs if $n=3$ and
$k=2$ or $n=4$ and $k=3$) then Kummer's work shows that any spectrahedral
representation of $\psdcone{n}{n-2}$ must have size at least
$\binom{n+1}{2}-1$. This establishes that the construction in~\eqref{eq:quad}
is optimal when $n=3$ and $n=4$. Furthermore, in the case $n=3$ we have that
$\psdcone{n}{1} = \psdcone{n}{n-2}$. Hence our spectrahedral representation
for $\psdcone{n}{1}$ is also optimal if $n=3$.
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{Proof of Theorem~\ref{thm:main}}
\label{sec:pf}
\input{proof}
\section{Discussion}
\label{sec:discussion}
\input{discussion}
\section*{Acknowledgments}
I would like to thank Hamza Fawzi for providing very helpful feedback on a draft of this paper.
\bibliographystyle{alpha}
\subsection{Preliminaries}
\paragraph{Hyperbolic polynomials, hyperbolicity cones, and spectrahedra}
A multivariate polynomial $p$, homogeneous of degree $d$ in $n$ variables, is
\emph{hyperbolic with respect to $e\in \mathbb{R}^n$} if $p(e) \neq 0$ and for all
$x$, the univariate polynomial $t\mapsto p(x-te)$ has only real roots.
Associated with such a polynomial is a cone
\[ \Lambda_{+}(p,e) = \{x\in \mathbb{R}^n: \textup{all roots of $t\mapsto p(x-te)$ are non-negative}\}.\]
A foundational result of G\r{a}rding~\cite{gaarding1959inequality} is that $\Lambda_{+}(p,e)$ is actually a
convex cone, called the \emph{closed hyperbolicity cone associated with $p$ and $e$}.
For example $p(x) = \prod_{i=1}^{n}x_i$ is hyperbolic with respect to
$1_n$, the vector of all ones, and the corresponding closed hyperbolicity cone is the
non-negative orthant, $\mathbb{R}_{+}^n$. Similarly $p(X) = \det(X)$ (where $X$ is a
symmetric $n\times n$ matrix), is hyperbolic with respect to the identity
matrix $I$, and the corresponding closed hyperbolicity cone is the positive
semidefinite cone $\mathcal{S}_{+}^n$.
If a polynomial $p$ has a representation of the form
\begin{equation}
\label{eq:def-det}
p(x) = \det\left(\textstyle{\sum_{i=1}^{n}}A_ix_i\right)
\end{equation}
for symmetric matrices $A_1,\ldots,A_n$, and there exists $e\in \mathbb{R}^n$ such
that $\sum_{i=1}^{n}A_ie_i$ is positive definite, we say that $p$ has a
\emph{definite determinantal representation}. In this case $p$ is hyperbolic
with respect to $e$. The associated closed hyperbolicity cone is
\begin{equation}
\label{eq:spectrahedral}
K = \bigg\{x\in \mathbb{R}^n\;:\; \sum_{i=1}^{n}A_i x_i \succeq 0\bigg\}
\end{equation}
where we write $X \succeq 0$ to indicate that $X$ is positive semidefinite (and $X
\succ 0$ to indicate that $X$ is positive definite). Such convex cones are
called \emph{spectrahedral cones}. If the matrices $A_1,A_2,\ldots,A_n$ are
$d\times d$ we call~\eqref{eq:spectrahedral} a \emph{spectrahedral
representation of size $d$}.
\paragraph{Derivative relaxations}
One way to produce new hyperbolic polynomials is to take directional
derivatives of hyperbolic polynomials in directions of
hyperbolicity~\cite[Section 3.10]{atiyah1970lacunas}, a construction emphasized in the context of optimization by
Renegar~\cite{renegar2006hyperbolic}. If $p$ has degree $d$ and is hyperbolic
with respect to $e$, then for $k=0,1,\ldots,d$, the $k$th directional
derivative in the direction $e$, i.e.,
\[ D_e^{(k)}p(x) = \left.\frac{d^k}{dt^k}p(x+te)\right|_{t=0},\]
is also hyperbolic with respect to $e$. Moreover
\[ \Lambda_{+}(D_e^{(k)}p,e) \supseteq \Lambda_{+}(D_e^{(k-1)}p,e) \supseteq \cdots \supseteq \Lambda_{+}(p,e)\]
so the hyperbolicity cones of the directional derivatives form a sequence of
\emph{relaxations} of the original hyperbolicity cone.
\begin{itemize}
\item Suppose $p(x) = \prod_{i=1}^{n}x_i$ and $e=1_n$. Then, for $k=0,1,\ldots,n$,
\[ D_{1_n}^{(k)}p(x) = {k!}e_{n-k}(x)\]
where $e_{n-k}$ is the elementary symmetric polynomial of degree $n-k$ in $n$ variables.
We use the notation $\orthant{n}{k}$ for $\Lambda_+(e_{n-k},1_n)$, the closed hyperbolicity cone corresponding to $e_{n-k}$.
\item Suppose $p(X) = \det(X)$ is the determinant restricted to $n\times n$ symmetric matrices,
and $e=I_n$ is the $n\times n$ identity matrix. Then, for $k=0,1,\ldots,n$,
\[ D_{I_n}^{(k)}p(X) = {k!}\,E_{n-k}(X) = {k!}\,e_{n-k}(\lambda(X))\]
where $E_{n-k}(X)$ is the elementary symmetric polynomial of degree $n-k$ in
the eigenvalues of $X$ or, equivalently, the coefficient of $t^k$ in
$\det(X+tI_n)$. We use the notation $\psdcone{n}{k}$ for $\Lambda_+(E_{n-k},I_n)$, the closed
hyperbolicity cone corresponding to $E_{n-k}$. We use the notation $\lambda(X)$ for the
eigenvalues of a symmetric matrix $X$ ordered so that $|\lambda_1(X)| \geq |\lambda_2(X)| \geq \cdots \geq |\lambda_n(X)|$.
We use this order so that $\lambda_i(X^2) = \lambda_i(X)^2$ for all $i$.
\end{itemize}
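For example, specializing the second item above to $n=3$ and $k=1$ (a small illustration that is not needed later): $E_2(X - tI_3) = 3t^2 - 2\,\textup{tr}(X)\,t + E_2(X)$ is real-rooted, and both of its roots are non-negative exactly when their sum and product are, so
\[ \psdcone{3}{1} = \big\{X\in \mathcal{S}^3\;:\;\textup{tr}(X) \geq 0\;\;\textup{and}\;\; E_2(X) = \tfrac{1}{2}\big(\textup{tr}(X)^2 - \textup{tr}(X^2)\big) \geq 0\big\},\]
a quadratic cone.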
The focus of this paper is the cone $\psdcone{n}{1}$, the hyperbolicity cone
associated with $E_{n-1}$. In particular, we consider whether
$\psdcone{n}{1}$ can be expressed as a `slice' of some higher dimensional
positive semidefinite cone. Such a description allows one
to reformulate hyperbolic programs with respect to $\psdcone{n}{1}$ (linear optimization over affine `slices' of $\psdcone{n}{1}$)
as
semidefinite programs.
\paragraph{Generalized Lax conjecture}
We have seen that every spectrahedral cone is a closed hyperbolicity cone.
The \emph{generalized Lax conjecture} asks whether the converse holds, i.e.,
whether every closed hyperbolicity cone is also a spectrahedral cone. The
original Lax conjecture, now a theorem due to Helton and
Vinnikov~\cite{helton2007linear} (see also~\cite{lewis2005lax}), states that if
$p$ is a trivariate polynomial, homogeneous of degree $d$, and hyperbolic with
respect to $e\in \mathbb{R}^3$, then $p$ has a definite determinantal representation.
While a direct generalization of this algebraic result does not hold in higher
dimensions~\cite{branden2011obstructions}, the following geometric conjecture
remains open.
\begin{conjecture}[Generalized Lax Conjecture (geometric version)]
\label{conj:geo}
Every closed hyperbolicity cone is spectrahedral.
\end{conjecture}
An equivalent algebraic formulation of this conjecture is as follows.
\begin{conjecture}[Generalized Lax Conjecture (algebraic version)]
\label{conj:alg}
If $p$ is hyperbolic with respect to $e\in \mathbb{R}^n$, then there exists a
polynomial $q$, hyperbolic with respect to $e\in \mathbb{R}^n$, such that $qp$ has a
definite determinantal representation and $\Lambda_{+}(q,e)\supseteq \Lambda_+(p,e)$.
\end{conjecture}
The algebraic version of the conjecture implies the geometric version because
it implies the existence of a multiplier $q$ such
that the hyperbolicity cone associated with $qp$ is spectrahedral and
$\Lambda_+(qp,e) = \Lambda_+(p,e)\cap \Lambda_+(q,e) = \Lambda_+(p,e)$.
To see that the geometric version implies the algebraic version requires
more algebraic machinery, and is discussed, for instance, in~\cite[Section 2]{vinnikov2012lmi}.
\subsection{Main result: a spectrahedral representation of $\psdcone{n}{1}$}
In this paper, we show that $\psdcone{n}{1}$, the first derivative relaxation
of the positive semidefinite cone, is spectrahedral. We give an explicit
spectrahedral representation of $\psdcone{n}{1}$ (see Theorem~\ref{thm:main} to follow).
Moreover, in Theorem~\ref{thm:main-alg} in Section~\ref{sec:pf} we find an explicit
hyperbolic polynomial $q$ such that $q(X)E_{n-1}(X)$ has a definite
determinantal representation and $\Lambda_{+}(q,I) \supseteq \psdcone{n}{1}$.
\begin{theorem}
\label{thm:main}
Let $d = \binom{n+1}{2}-1$ and let $B_1,\ldots,B_d$ be any basis for
the $d$-dimensional space of real symmetric $n\times n$ matrices with trace
zero. If $\mathcal{B}(X)$ is the $d\times d$ symmetric matrix with $i,j$ entry
equal to $\textup{tr}(B_i X B_j)$ then
\begin{equation}
\label{eq:main-thm} \psdcone{n}{1} = \{X\in \S^n: \mathcal{B}(X) \succeq 0\}.
\end{equation}
\end{theorem}
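As a quick numerical sanity check (an illustration only, assuming \texttt{Python} with \texttt{numpy}; the tolerances and the random sampling below are arbitrary choices), the representation in Theorem~\ref{thm:main} can be compared, for small $n$, against the root characterization of the cone: $X\in\psdcone{n}{1}$ exactly when every root of $t\mapsto E_{n-1}(X-tI_n)$ is nonnegative, and these roots are the critical points of the characteristic polynomial of $X$.
\begin{verbatim}
# Sanity check of Theorem thm:main for small n (illustration only).
# Membership in the derivative cone is tested via the roots of
# E_{n-1}(X - tI), i.e. the critical points of the characteristic polynomial.
import numpy as np

rng = np.random.default_rng(0)
n = 4

basis = []                       # a basis of the traceless symmetric matrices
for i in range(n):
    for j in range(i, n):
        B = np.zeros((n, n))
        if i == j:
            if i == n - 1:
                continue
            B[i, i], B[n - 1, n - 1] = 1.0, -1.0
        else:
            B[i, j] = B[j, i] = 1.0
        basis.append(B)

def in_cone_spectrahedron(X):    # right-hand side of Theorem thm:main
    M = np.array([[np.trace(Bi @ X @ Bj) for Bj in basis] for Bi in basis])
    return np.linalg.eigvalsh(M).min() >= -1e-8

def in_cone_roots(X):            # definition of the hyperbolicity cone
    crit = np.roots(np.polyder(np.poly(X)))   # roots of (char. polynomial)'
    return np.all(np.real(crit) >= -1e-8)

disagree, inside = 0, 0
for _ in range(500):
    A = rng.standard_normal((n, n))
    X = (A + A.T) / 2 + rng.uniform(0.0, 4.0) * np.eye(n)
    a, b = in_cone_spectrahedron(X), in_cone_roots(X)
    disagree += (a != b)
    inside += a
print(disagree, inside)          # expect 0 disagreements, with both cases sampled
\end{verbatim}
On such samples the two membership tests agree, with both members and non-members occurring; this is, of course, no substitute for the proof.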
Section~\ref{sec:pf} is devoted to the proof of this result. At this stage we
make a few remarks about the statement and some of its consequences.
\begin{itemize}
\item The spectrahedral representation of $\psdcone{n}{1}$ in
Theorem~\ref{thm:main} has size $d = \binom{n+1}{2}-1 = \frac{1}{2}(n+2)(n-1)$.
This is about half the size of the smallest previously known \emph{projected}
spectrahedral representation of $\psdcone{n}{1}$, i.e., representation
as the image of a spectrahedral cone under a linear map~\cite{saunderson2015polynomial}.
\item A straightforward extension of this result shows that if $p$ has a definite determinantal representation and $e$ is a direction of
hyperbolicity for $p$, then the hyperbolicity cone associated with the directional derivative $D_ep$ is spectrahedral.
We discuss this in Section~\ref{sec:consequences}.
\item It also follows from Theorem~\ref{thm:main} that $\orthant{n}{2}$, the second
derivative relaxation of the orthant in the direction $1_n$, has a spectrahedral
representation of size $\binom{n}{2}-1$. We discuss this in
Section~\ref{sec:consequences}. This representation is significantly smaller
than the size $O(n^{n-3})$ representation constructed by
Br\"and\'en~\cite{branden2014hyperbolicity}, and about half the size of
the smallest previously known projected spectrahedral
representation of $\orthant{n}{2}$~\cite{saunderson2015polynomial}.
\end{itemize}
\subsection{Related work}
\label{sec:related}
We briefly summarize related work on spectrahedral and projected spectrahedral
representations of the hyperbolicity cones $\orthant{n}{k}$ and
$\psdcone{n}{k}$. Sanyal~\cite{sanyal2013derivative} showed that
$\orthant{n}{1}$ is spectrahedral by giving the following explicit definite
determinantal representation of $e_{n-1}(x)$, which we use repeatedly in the paper.
\begin{proposition}
\label{prop:sanyal}
If $1_n^\perp = \{x\in \mathbb{R}^n\;:\; 1_n^Tx = 0\}$, and $V_n$ is an
$n\times (n-1)$ matrix with columns spanning $1_n^\perp$, then there is a
positive constant $c$ such that
\[ c\,e_{n-1}(x) = \det(V_n^T\diag(x)V_n) \;\;\textup{and so}\;\;
\orthant{n}{1} = \{x\in \mathbb{R}^n: V_n^T\diag(x)V_n \succeq 0\}.\]
\end{proposition}
This representation is also implicit in the work of Choe, Oxley, Sokal, and
Wagner~\cite{choe2004homogeneous}.
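As a small illustration of Proposition~\ref{prop:sanyal} (a numerical sketch only, assuming \texttt{numpy}; the particular difference basis below is an arbitrary choice), one can check that the ratio $\det(V_n^T\diag(x)V_n)/e_{n-1}(x)$ is the same positive constant for different choices of $x$ once the basis $V_n$ is fixed.
\begin{verbatim}
# Illustration of Proposition prop:sanyal: det(V^T diag(x) V) is a fixed
# positive multiple of e_{n-1}(x) once a basis V of 1_n^perp is chosen.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 5

V = np.zeros((n, n - 1))         # columns v_i = e_i - e_{i+1} span 1_n^perp
for i in range(n - 1):
    V[i, i], V[i + 1, i] = 1.0, -1.0

def e_nm1(x):                    # elementary symmetric polynomial of degree n-1
    return sum(np.prod(x[list(S)]) for S in combinations(range(n), n - 1))

for _ in range(5):
    x = rng.uniform(0.5, 2.0, size=n)
    print(np.linalg.det(V.T @ np.diag(x) @ V) / e_nm1(x))   # constant ratio c
\end{verbatim}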
Zinchenko~\cite{zinchenko2008hyperbolicity} gave a projected spectrahedral
representation of $\orthant{n}{1}$.
Br\"and\'en~\cite{branden2014hyperbolicity} established that each of the cones
$\orthant{n}{k}$ is spectrahedral by constructing graphs $G$ with edges
weighted by linear forms in $x$, such that the edge weighted Laplacian $L_G(x)$
is positive semidefinite if and only if $x\in \orthant{n}{k}$. Amini
showed that the hyperbolicity cones associated with
certain multivariate matching polynomials are
spectrahedral~\cite{amini2016spectrahedrality}, and used these to find
new spectrahedral representations of the cones $\orthant{n}{k}$ of size $\frac{(n-1){!}}{(k-1){!}}+1$.
Explicit projected spectrahedral representations of the cones $\psdcone{n}{k}$
of size $O(n^2\min\{k,n-k\})$ were given by Saunderson and
Parrilo~\cite{saunderson2015polynomial}, leaving open (except in the cases $k=n-2,n-1$)
the question of whether these cones
are spectrahedra. The main result of this paper is that $\psdcone{n}{1}$ is a
spectrahedron.
\subsection{Geometric argument}
We begin by stating a slight reformulation of Sanyal's spectrahedral representation (Proposition~\ref{prop:sanyal}).
\begin{proposition}
\label{prop:sanyal-geo}
Let $1_n^\perp = \{y\in \mathbb{R}^n\;:\; 1_n^Ty = 0\}$ be the subspace of $\mathbb{R}^n$ orthogonal to $1_n$. Then
\[ \orthant{n}{1} = \{x\in \mathbb{R}^n\;:\; y^T\diag(x)y \geq 0\;\;\textup{for all $y\in 1_n^\perp$}\}.\]
\end{proposition}
\begin{proof} This follows from Proposition~\ref{prop:sanyal} since $V_n^T\diag(x)V_n \succeq 0$ holds if and only if $u^TV_n^T\diag(x)V_nu \geq 0$ for all $u\in \mathbb{R}^{n-1}$ which
holds if and only if $y^T\diag(x)y \geq 0$ for all $y\in 1_n^\perp$.
\end{proof}
In this section we establish a `matrix' analogue of Proposition~\ref{prop:sanyal-geo}.
\begin{theorem}
\label{thm:main-geo}
Let $I_n^\perp = \{Y\in \mathcal{S}^n\;:\; \textup{tr}(Y) = 0\}$ be the subspace of $n\times n$ symmetric matrices with trace zero. Then
\begin{equation}
\label{eq:main-geo} \psdcone{n}{1} = \{X\in \mathcal{S}^n\;:\; \textup{tr}(YXY)\geq 0,\;\;\textup{for all $Y\in I_n^\perp$}\}.
\end{equation}
\end{theorem}
The concrete spectrahedral description given in Theorem~\ref{thm:main} follows immediately from Theorem~\ref{thm:main-geo}.
Indeed if $B_1,B_2,\ldots,B_d$ are a basis for $I_n^\perp$ then
an arbitrary $Y\in I_n^\perp$ can be written as $Y = \sum_{i=1}^{d} y_i B_i$.
The condition $\textup{tr}(YXY) \geq 0$ for all $Y\in I_n^\perp$ is equivalent to
\[ \sum_{i,j=1}^{d}y_iy_j\textup{tr}(B_iXB_j) \geq 0\;\;\textup{for all $y\in \mathbb{R}^{d}$}\;\;\textup{which holds if and only if}\;\; \mathcal{B}(X) \succeq 0.\]
\begin{proof}[{of Theorem~\ref{thm:main-geo}}]
The convex cone $\psdcone{n}{1}$ is invariant under the action
of the orthogonal group on $n\times n$ symmetric matrices by congruence transformations.
Similarly, the convex cone
\[ \{X\in \mathcal{S}^n\;:\; \textup{tr}(YXY) \geq 0 \quad\textup{for all $Y\in I_n^\perp$}\}\]
is invariant under the same action of the orthogonal group. This is because
$Y\in I_n^\perp$ if and only if $Q^TYQ\in I_n^\perp$ for any orthogonal matrix $Q$, and $\textup{tr}(Y\,QXQ^T\,Y) = \textup{tr}\big((Q^TYQ)\,X\,(Q^TYQ)\big)$.
Because of these invariance properties, the following (straightforward) result tells us
that we can establish Theorem~\ref{thm:main-geo}
by showing that the diagonal `slices' of these two convex cones agree.
\begin{lemma}
\label{lem:orth-inv}
Let $K_1,K_2\subset \mathcal{S}^n$ be such that $QK_1Q^T = K_1$ for all $Q\in O(n)$
and $QK_2Q^T = K_2$ for all $Q\in O(n)$. If
$\{x\in \mathbb{R}^n\;:\; \diag(x)\in K_1\} = \{x\in \mathbb{R}^n\;:\;\diag(x)\in K_2\}$
then $K_1 = K_2$.
\end{lemma}
\begin{proof}
Assume that
$X\in K_1$. Then there exists $Q$ such that
$QXQ^T = \diag(\lambda(X))$. Since $K_1$ is invariant under orthogonal congruence, $\diag(\lambda(X)) \in K_1$.
By assumption, it follows that $\diag(\lambda(X))\in K_2$. Since $K_2$ is invariant under orthogonal congruence,
$X = Q^T\diag(\lambda(X))Q \in K_2$. This establishes that $K_1 \subseteq K_2$. Reversing the roles of $K_1$ and $K_2$ completes the argument.
\end{proof}
\paragraph{Relating the diagonal slices} To complete the proof of
Theorem~\ref{thm:main-geo}, it suffices (by Lemma~\ref{lem:orth-inv})
to show that the diagonal slices of the
left- and right-hand sides of~\eqref{eq:main-geo} are equal. Since the diagonal
slice of $\psdcone{n}{1}$ is $\orthant{n}{1}$, it is enough (by Proposition~\ref{prop:sanyal-geo}) to establish the following
result.
\begin{lemma}
\label{lem:orthantn1alt}
\begin{multline*}
\{x\in \mathbb{R}^{n}\;:\; \textup{tr}(Y\diag(x)Y) \geq 0 \;\;\textup{for all $Y\in I_n^\perp$}\} = \\
\{x\in \mathbb{R}^{n}\;:\; y^T\diag(x)y \geq 0 \;\;\textup{for all $y\in 1_n^\perp$}\}.
\end{multline*}
\end{lemma}
\begin{proof}
Suppose that $\textup{tr}(Y\diag(x)Y) \geq 0$ for all $Y\in I_n^\perp$. Let
$y\in 1_n^\perp$. Then $\diag(y)\in I_n^\perp$ and so it follows that
$\textup{tr}(\diag(y)\diag(x)\diag(y)) = y^T\diag(x)y \geq 0$. This shows that the
left hand side is a subset of the right hand side.
For the reverse inclusion suppose that $y^T\diag(x)y \geq 0$ for all $y\in
1_n^\perp$. Let $Y\in I_n^\perp$. Suppose the symmetric group on $n$ symbols, $S_n$, acts on $\mathbb{R}^n$ by permutations.
Then for every $\sigma \in S_n$, we have that $\sigma \cdot \lambda(Y)\in 1_n^\perp$ and thus
\[ \textup{tr}(\diag(\sigma \cdot \lambda(Y^2))\diag(x)) = (\sigma \cdot\lambda(Y))^T\diag(x)(\sigma \cdot \lambda(Y)) \geq 0.\]
(Here we have used $\lambda_i(Y^2) = \lambda_i(Y)^2$, by our definition of $\lambda(\cdot)$.)
The diagonal of a symmetric matrix is a convex combination of permutations of
its eigenvalues, a result due to Schur~\cite{schur1923uber} (see also,
e.g.,~\cite{marshall1979inequalities}). Hence $\diag(Y^2)$ is a convex
combination of permutations of $\lambda(Y^2)$, i.e.,
\[ \diag(Y^2) = \sum_{\sigma\in S_n} \eta_{\sigma}\, (\sigma\cdot \lambda(Y^2))\]
where the $\eta_\sigma$ satisfy
$\eta_{\sigma} \geq 0$ and $\sum_{\sigma \in S_n}\eta_\sigma =1$. It then
follows that
\[\textup{tr}(Y\diag(x)Y) = \textup{tr}(\diag(Y^2)\diag(x)) = \sum_{\sigma\in S_n}\eta_{\sigma}\textup{tr}(\diag(\sigma\cdot\lambda(Y^2))\diag(x)) \geq 0.\]
This shows that the right hand side is a subset of the left hand side.
\end{proof}
This completes the proof of Theorem~\ref{thm:main-geo}.
\end{proof}
\subsection{Algebraic argument}
In this section, we establish the following algebraic version of Theorem~\ref{thm:main}.
\begin{theorem}
\label{thm:main-alg}
Let $n\geq 2$ and $B_1,\ldots,B_d$ be a basis for $I_n^\perp$, the subspace of $n\times n$ symmetric matrices with trace zero. Then there is a
positive constant $c$ (depending on the choice of basis) such that
\begin{enumerate}
\item $q(X) = \prod_{1\leq i<j\leq n} (\lambda_i(X)+\lambda_j(X))$ is hyperbolic with respect to $I_n$;
\item the hyperbolicity cone associated with $q$ satisfies
\[ \Lambda_{+}(q,I_n) = \{X\in \mathcal{S}^n\;:\; \lambda_i(X) + \lambda_j(X) \geq 0\;\;\textup{for all $1\leq i<j\leq n$}\} \supseteq \psdcone{n}{1};\]
\item $q(X)E_{n-1}(X)$ has a definite determinantal representation as
\[ c\,q(X)E_{n-1}(X) = \det(\mathcal{B}(X)).\]
\end{enumerate}
\end{theorem}
We remark that $q(X)$ is defined as a symmetric polynomial in the eigenvalues
of $X$, and so can be expressed as a polynomial in the entries of $X$.
Although our argument does not use this fact, it can be shown that $q(X) =
\det(\mathcal{L}_2(X))$ where $\mathcal{L}_2(X)$ is the \emph{second additive
compound matrix} of $X$~\cite{fiedler1974additive}. This means that $q$ is not
only hyperbolic with respect to $I_n$, but also has a definite determinantal
representation.
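As with Theorem~\ref{thm:main}, the determinantal identity in Theorem~\ref{thm:main-alg} can be checked numerically for small $n$ before giving the proof. The following sketch (an illustration only, assuming \texttt{numpy}) verifies that $\det(\mathcal{B}(X))/\big(q(X)E_{n-1}(X)\big)$ is constant over random symmetric matrices $X$ for a fixed basis of $I_n^\perp$.
\begin{verbatim}
# Illustration of the identity c q(X) E_{n-1}(X) = det(B(X)) for small n.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 4

basis = []                       # a basis of the traceless symmetric matrices
for i in range(n):
    for j in range(i, n):
        B = np.zeros((n, n))
        if i == j:
            if i == n - 1:
                continue
            B[i, i], B[n - 1, n - 1] = 1.0, -1.0
        else:
            B[i, j] = B[j, i] = 1.0
        basis.append(B)

def det_B(X):
    M = np.array([[np.trace(Bi @ X @ Bj) for Bj in basis] for Bi in basis])
    return np.linalg.det(M)

def q_times_E(X):
    lam = np.linalg.eigvalsh(X)
    q = np.prod([lam[i] + lam[j] for i, j in combinations(range(n), 2)])
    E = sum(np.prod([lam[k] for k in S]) for S in combinations(range(n), n - 1))
    return q * E

for _ in range(5):
    A = rng.standard_normal((n, n))
    X = (A + A.T) / 2
    print(det_B(X) / q_times_E(X))    # the same positive constant c each time
\end{verbatim}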
\begin{proof}[{of Theorem~\ref{thm:main-alg}}]
{}
The three items in the statement of Theorem~\ref{thm:main-alg} are established
in the following three Lemmas (Lemmas~\ref{lem:alg1},~\ref{lem:alg2}, and~\ref{lem:alg3}).
\begin{lemma}
\label{lem:alg1}
If $q(X) = \prod_{1\leq i<j\leq n}(\lambda_i(X)+\lambda_j(X))$ then $q$ is hyperbolic with respect to $I_n$.
\end{lemma}
\begin{proof}
First observe that $q(I_n) = 2^{\binom{n}{2}}\neq 0$. Moreover, for any real $t$,
\[ q(X-tI_n) = \prod_{1\leq i<j\leq n}(\lambda_i(X-tI_n) + \lambda_j(X-tI_n)) = \prod_{1\leq i<j\leq n}(\lambda_i(X) + \lambda_j(X) - 2t)\]
which has $\binom{n}{2}$ real roots given by $\frac{1}{2}(\lambda_i(X)
+ \lambda_j(X))$ for $1\leq i<j\leq n$. Hence $q$ is hyperbolic with respect to
$I_n$.
\end{proof}
\begin{lemma}
\label{lem:alg2}
If $n\geq 2$ then
\[ \Lambda_{+}(q,I_n) = \{X\in \mathcal{S}^n\;:\; \lambda_i(X) + \lambda_j(X) \geq 0\;\;\textup{for all $1\leq i<j\leq n$}\}\supseteq \psdcone{n}{1}.\]
\end{lemma}
\begin{proof}
Since the roots of $t\mapsto q(X-tI_n)$ are $\frac{1}{2}(\lambda_i(X) +
\lambda_j(X))$, the description of $\Lambda_+(q,I_n)$ is immediate. Both sides
of the inclusion are invariant under congruence by orthogonal matrices. By
Lemma~\ref{lem:orth-inv} it is enough to show that the inclusion holds
for the diagonal slices of both sides. Note that
\[ \{x\in \mathbb{R}^n\;:\; \diag(x)\in \Lambda_{+}(q,I_n)\} = \{x\in \mathbb{R}^{n}\;:\; x_i+x_j \geq 0\;\;\textup{for all $1\leq i<j\leq n$}\}.\]
Hence it is enough to establish that
\begin{equation}
\label{eq:incl}
\{x\in \mathbb{R}^n\;:\; x_i+x_j \geq 0\;\;\textup{for all $1\leq i<j\leq n$}\} \supseteq \orthant{n}{1}.
\end{equation}
To do so, we use the characterization of $\orthant{n}{1}$ from Proposition~\ref{prop:sanyal-geo}.
This tells us that if $x\in \orthant{n}{1}$ then $v^T\diag(x)v = \sum_{\ell=1}^{n}x_\ell v_\ell^2 \geq 0$
for all $v\in 1_n^\perp$.
In particular, let $v$ be the element of $1_n^\perp$ with $v_i=1$ and $v_j=-1$ and $v_k = 0$ for $k\notin\{i,j\}$.
Then, if $x\in \orthant{n}{1}$ it follows that
$\sum_{\ell=1}^{n}x_\ell v_\ell^2 = x_i + x_j \geq 0$. This completes the proof.
\end{proof}
\begin{lemma}
\label{lem:alg3}
If $B_1,\ldots,B_d$ is a basis for $I^\perp_n$, then there is a positive
constant $c$ (depending on the choice of basis) such that
\[ c\,q(X)E_{n-1}(X) = \det(\mathcal{B}(X)).\]
\end{lemma}
\begin{proof}
Since both sides are invariant under orthogonal congruence, it is enough to
show that the identity holds for diagonal matrices. In other words, it is
enough to show that
\[ c\prod_{1\leq i<j\leq n}(x_i+x_j) e_{n-1}(x) = \det(\mathcal{B}(\diag(x))).\]
Since a change of basis for the subspace of symmetric matrices with trace zero
only changes $\det(\mathcal{B}(X))$ by a positive constant (which is one if the
change of basis is orthogonal with respect to the trace inner product), it is
enough to choose a particular basis for the subspace of symmetric matrices with
trace zero, and show that the identity holds for a particular constant.
Let $v_1,v_2,\ldots,v_{n-1}$ be a basis for $1_n^\perp = \{x\in \mathbb{R}^{n}\;:\;
\sum_{i=1}^{n}x_i = 0\}$. Let $M_{ij}$ be the $n\times n$ matrix with a one in the $(i,j)$ and the $(j,i)$ entry, and zeros elsewhere.
Clearly the $M_{ij}$ for $1\leq i<j\leq n$ form a basis for the subspace of symmetric matrices with zero
diagonal. Together $\diag(v_1),\diag(v_2),\ldots,\diag(v_{n-1})$ and $M_{ij}$
for $1\leq i<j\leq n$ form a basis for the subspace of symmetric matrices with
trace zero.
Using this basis we evaluate the matrix $\mathcal{B}(\diag(x))$. We note that
\begin{align*}
\textup{tr}(\diag(v_i)\diag(x)\diag(v_j)) & = v_i^T\diag(x)v_j\quad\textup{for $1\leq i,j\leq n-1$}\\
\textup{tr}(\diag(v_i)\diag(x)M_{jk}) & = 0\quad\textup{for all $1\leq i\leq n-1$ and $1\leq j<k\leq n$}
\end{align*}
since $M_{jk}$ has zero diagonal, and that
\[
\textup{tr}(M_{ij}\diag(x)M_{k\ell}) = \begin{cases} x_i+x_j & \textup{if $i=k$ and $j=\ell$}\\ 0 & \textup{otherwise}\end{cases}\]
for all $1\leq i<j\leq n$ and $1\leq k<\ell\leq n$.
This means that $\mathcal{B}(\diag(x))$ is block diagonal, and so
\begin{equation}
\label{eq:block-det}
\det(\mathcal{B}(\diag(x))) = \prod_{1\leq i<j\leq n}(x_i+x_j) \det(V_n^T\diag(x)V_n)
\end{equation}
where $V_n$ is the $n\times (n-1)$ matrix with columns $v_1,v_2,\ldots,v_{n-1}$. By
Proposition~\ref{prop:sanyal}, there is a positive constant $c$ such that
\begin{equation}
\label{eq:sanyal} \det(V_n^T\diag(x)V_n) = c\,e_{n-1}(x).
\end{equation}
Combining~\eqref{eq:block-det} and~\eqref{eq:sanyal} gives the stated result.
\end{proof}
This completes the proof of Theorem~\ref{thm:main-alg}.
\end{proof}
Recent observational data suggest that the expansion of the universe is accelerating. These data are confirmed by different sources, such as observations of Type Ia Supernovae \cite{expansao_ace, expansao_ace2}, the Cosmic Microwave Background (CMB) radiation \cite{WMAP2007, WMAP2009, WMAP2011}, Large Scale Structure (LSS) \cite{LSS_evidence} and weak lensing \cite{weak_lensing}, among others. These observations are not properly explained by General Relativity (GR). A natural question then arises: how can this acceleration be explained? To reconcile the observational data with a theory that describes gravity, two main approaches are considered: either the observed accelerated expansion of the universe is due to some kind of extra fluid-like contribution, known as dark energy \cite{dark_energy1,dark_energy2,k_essece_1}, or GR itself is modified \cite{review1}. The modified theories represent a generalization of GR in which some combination of curvature invariants (such as the Riemann tensor, the Weyl tensor, the Ricci tensor, among others) replaces or is added to the classical Einstein-Hilbert action, which consists of the Ricci scalar term. For a review of modified theories of gravity, see \cite{review1, review2, review3, review4}. In this paper, the $f(R,T)$ gravity theory is explored.
There are several attempts to modify GR. One possibility is the $f(R,L_m)$ gravity theory, in which both the geometric and the matter terms in the Einstein-Hilbert action are modified. This theory was the first model to consider a coupling between geometry and matter: in essence, it consists of replacing the Einstein-Hilbert Lagrangian with an arbitrary function of the Ricci scalar $R$ and of the matter Lagrangian $L_m$ \cite{RL, RL1}. Another generalization is the $f(R,T)$ gravity \cite{Harko}. In this model, the field equations are obtained from an arbitrary function of the Ricci scalar $R$ and of $T$, the trace of the energy-momentum tensor $T_{\mu\nu}$. The addition of $T$ in the gravitational Lagrangian leads to possible investigations of a quantum description of gravity from the $f(R,T)$ theory \cite{QT}. In recent years, this modified gravitational theory has been intensively studied \cite{fRT1, fRT2, fRT3, Houndjo, Momeni, Shabani, Sharif, Kiani, Sharif2, Jamil, Sharif3, Azizi, Alvaro, Reddy, Moraes}.
The gravitational field equations can be obtained through two different approaches, namely the metric and the Palatini formalisms. The metric approach takes the Levi-Civita connection as the connection, and the action is varied with respect to the metric. The Palatini formalism was introduced by A. Einstein \cite{Einstein1, Einstein2}. In this approach, the metric and the connection are treated as independent fields. In GR both formalisms lead to the same field equations. However, in modified gravity theories, different field equations are obtained. It is important to note that the order of the field equations is also different: in the metric formalism the field equations contain higher-order derivatives, while in the Palatini approach they are of second order. For a review of the Palatini formalism applied to modified gravity, see \cite{palatini_motivation1}. The main objective of this paper is to explore the question of causality in the $f(R,T)$ gravity theory formulated in the Palatini formalism \cite{fRT_Palatini_primeiro, palatini_fRT1}.
Causality and chronology are fundamental elements in the theory of special relativity. In this theory, the chronology is preserved and causality is respected. From a local point of view, GR has the same causal structure as special relativity, since GR space-times are locally Minkowskian. However, on a global scale, the field equations of GR do not impose non-local constraints on the space-times, and interesting differences can arise. There are solutions of the GR field equations that present violation of causality in the form of Closed Timelike Curves (CTCs). As an example, the G\"{o}del solution can be considered. This cosmological model shows that, although GR is locally Lorentzian, which guarantees the local validity of the causality principle, it admits solutions with CTCs. It is known that there are other solutions of the GR field equations that lead to the violation of causality \cite{godel,ctc1, ctc2}. Here, gravity is governed by the $f(R,T)$ gravity theory constructed in the Palatini approach. Various issues must then be reexamined in its framework, including the question of whether this gravity theory permits the violation of causality, which is permitted in general relativity. In this paper, the G\"{o}del solution is considered. This cosmological solution of GR was proposed by K. G\"{o}del \cite{godel}. It is an exact solution of GR with cosmological constant $\Lambda$ which leads to the possibility of CTCs and, therefore, allows the violation of causality.
Some years after the original solution, a G\"{o}del-type metric was developed \cite{tipo_godel1}. This metric provides more information about the existence of CTCs. From this solution, a critical radius $r_c$, beyond which causality is violated, can be defined. The causality problem has been analyzed in various modified theories of gravity \cite{fr_and_godel, kessence_and_godel, chersimon_and_godel1, chersimon_and_godel2, ft_and_godel, frt_and_godel, bumblebeee_and_godel, horava_and_godel, brans_and_godel, frq_and_godel, godel_fRT, tipo_godel_fR, palatini_fR}. Considering different matter contents, such as a perfect fluid and a scalar field, the question of the breakdown of causality in the Palatini $f(R, T)$ gravity theory is examined.
The present paper is organized as follows. In section II, the field equations of Palatini $f(R,T)$ gravity are derived. In section III, the G\"{o}del solution is considered. The field equations show that the violation of causality is permitted in this gravitational model. In section IV, the G\"{o}del-type solution is introduced. The gravitational equations are solved for three different matter contents: (i) perfect fluid; (ii) perfect fluid plus scalar field; (iii) only scalar field. In addition, the critical radius is analyzed for different situations. In section V, some remarks and conclusions are presented.
\section{Palatini formulation of $f(R,T)$ gravity}
In this section, field equations for Palatini $f(R,T)$ gravity theory are obtained. In the Palatini formalism, the curvature scalar is regarded as a function of the metric tensor and the connection, i.e. $R(g,\tilde{\Gamma})$. Then the gravitational action of Palatini $f(R,T)$ gravity is given as
\begin{equation}\label{action_1}
S=\frac{1}{2\kappa^2}\int d^4x \sqrt{-g}\, f\left(R(g,\tilde{\Gamma}), T\right) + \int d^4x \sqrt{-g}\, \mathcal{L}_m(g,\psi),
\end{equation}
where $\kappa^2 = 8\pi$, $g=\det(g_{\mu\nu})$ and $f$ is a function of the Ricci scalar and of the trace of the energy-momentum tensor $T_{\mu\nu}$. The matter Lagrangian $\mathcal{L}_m(g,\psi)$ is a function of $g$ and of the physical fields $\psi$. The Ricci scalar, which depends on $g$ and on the Palatini connection $\tilde{\Gamma}$, is written as
\begin{equation}
R\left(g,\tilde{\Gamma}\right) =g^{\mu\nu} \tilde{R}_{\mu\nu}\left(\tilde{\Gamma}\right),
\end{equation}
with $\tilde{R}_{\mu\nu}\left(\tilde{\Gamma}\right)$ being the Ricci tensor expressed only in terms of the Palatini connection. It is defined as
\begin{equation}
\tilde{R}_{\mu\nu} = \partial_{\lambda} \tilde{\Gamma}_{\mu\nu}^\lambda - \partial_{\nu} \tilde{\Gamma}_{\mu\lambda}^\lambda + \tilde{\Gamma}_{\mu\nu}^\lambda \tilde{\Gamma}_{\lambda \alpha}^\alpha - \tilde{\Gamma}_{\mu\lambda}^\alpha \tilde{\Gamma}_{\nu \alpha}^\lambda,
\end{equation}
where $\tilde{\Gamma}$ is a quantity to be determined.
By taking the matter Lagrangian to be independent of $\partial_\lambda g_{\mu\nu}$, the energy momentum tensor is obtained as
\begin{equation}\label{tensor energia}
T_{\mu\nu} = \frac{-2}{\sqrt{-g}} \frac{\partial \pc{\sqrt{-g}\mathcal{L}_m}}{\partial g^{\mu\nu}} = -2 \frac{\partial \mathcal{L}_m}{\partial g^{\mu\nu}} + g_{\mu\nu} \mathcal{L}_m.
\end{equation}
In order to calculate the variation of the energy-momentum tensor with respect to the metric, a new tensor is defined, i.e.
\begin{equation}
\Theta_{\mu\nu} \equiv \frac{\delta T_{\alpha\beta}}{\delta g^{\mu\nu}} g^{\alpha\beta}.
\end{equation}
Using eq.(\ref{tensor energia}), this tensor becomes
\begin{equation}
\Theta_{\mu\nu}= -2T_{\mu\nu} + g_{\mu\nu} \mathcal{L}_m -2g^{\alpha\beta} \frac{\partial^2 \mathcal{L}_m}{\partial g^{\mu\nu}\partial g^{\alpha\beta}}.
\end{equation}
Varying the action (\ref{action_1}) with respect to the metric $g^{\mu\nu}$, and noting that $\tilde{R}_{\mu\nu}\pc{\tilde{\Gamma}}$ depends only on the connection, so that $\delta \tilde{R}_{\mu\nu}\pc{\tilde{\Gamma}}=0$ under this variation, the field equations are given as
\begin{eqnarray}\label{field_1}
\tilde{R}_{\mu\nu}f_R = \kappa^2 T_{\mu\nu} - \pc{T_{\mu\nu} + \Theta_{\mu\nu}} f_T + \frac{g_{\mu\nu}}{2} f,
\end{eqnarray}
where $f_R \equiv \frac{\partial f}{\partial R}$ and $f_T \equiv \frac{\partial f}{\partial T}$.
An interesting constraint, that simplifies the field equations, comes from the trace of the equation. The trace of eq. (\ref{field_1}) is
\begin{equation}\label{field_2}
R\left(g,\tilde{\Gamma}\right) f_R = \kappa^2 T - \pc {T + \Theta} f_T + 2f,
\end{equation}
with $\Theta=\Theta^\mu\,_\mu$. Combining eq. (\ref{field_1}) and eq. (\ref{field_2}), field equations are written as
\begin{equation}\label{conteudo_materia}
f_RG_{\mu\nu} \pc{g, \tilde{\Gamma}} = \kappa^2 T_{\mu\nu} - f_T\pc{T_{\mu\nu} + \Theta_{\mu\nu}} - \frac{1}{2} \pr{f+\kappa^2 T - f_T\pc{ T+\Theta}}g_{\mu\nu},
\end{equation}
where $G_{\mu\nu} \pc{g, \tilde{\Gamma}}$ is the Einstein tensor in the Palatini formalism, which is defined as
\begin{eqnarray}
G_{\mu\nu} \pc{g, \tilde{\Gamma}} = \tilde{R}_{\mu\nu} \pc{\tilde{\Gamma}} - \frac{1}{2} g_{\mu\nu} \tilde{R} \pc{g,\tilde{\Gamma}}.
\end{eqnarray}
As the action depends on the metric and the Palatini connection, now varying it with respect to the connection $\tilde{\Gamma}$, keeping the metric constant, leads to
\begin{equation}
\delta S = \frac{1}{\kappa ^2} \int \sqrt{-g} f_R g^{\mu\nu} \pr{\tilde{\nabla}_\lambda \pc{\delta \tilde{\Gamma}_{\mu\nu}^{\lambda}} - \tilde{\nabla}_\nu \pc{\delta \tilde{\Gamma}_{\mu\lambda}^{\lambda}}} d^4x,
\end{equation}
where it was used that
\begin{align}
\delta f\left(R(\tilde{\Gamma})\right) &= f_R g^{\mu\nu} \pr{\tilde{\nabla}_\lambda \pc{\delta \tilde{\Gamma}_{\mu\nu}^{\lambda}} - \tilde{\nabla}_\nu \pc{\delta \tilde{\Gamma}_{\mu\lambda}^{\lambda}}},
\end{align}
with $\tilde{\nabla}$ being the covariant derivative associated with $\tilde{\Gamma}$.
Defining $A^{\mu\nu}\equiv f_Rg^{\mu\nu}$ and integrating by parts, the action variation becomes
\begin{equation}
\kappa^2 \delta S = \int \tilde{\nabla}_\lambda \pr{\sqrt{-g} \pc{A^{\mu\nu} \delta \tilde{\Gamma}_{\mu\nu}^{\lambda} -A^{\mu\lambda} \delta \tilde{\Gamma}_{\mu\alpha}^{\alpha}}}d^4x -
\int \tilde{\nabla}_\lambda \pr{\sqrt{-g} \pc{A^{\mu\nu} \delta^\lambda_\alpha - A^{\mu\lambda} \delta^\nu_\alpha}} \delta \tilde{\Gamma}_{\mu\nu}^{\alpha} d^4x.\label{Gauss}
\end{equation}
Note that the first term in eq. (\ref{Gauss}) is a total derivative. Using Gauss' theorem, it can be transformed into a surface integral, which vanishes. Thus, the variation of the action yields
\begin{equation}\label{integral}
\tilde{\nabla}_\lambda \pr{ \sqrt{-g} \pc { A^{\mu\nu} \delta^\lambda_\alpha - A^{\mu\lambda} \delta^\nu_\alpha}} = 0.
\end{equation}
Considering the case $\lambda \neq \alpha$, eq.(\ref{integral}) is written as
\begin{equation}\label{conformal_1}
\tilde{\nabla}_\lambda \pr{ \sqrt{-g} f_R g^{\mu\nu}} =0.
\end{equation}
Eq. (\ref{conformal_1}) shows that the connection $\tilde{\Gamma}$ is compatible with the conformal metric $\tilde{g}_{\mu\nu}=f_R g_{\mu\nu}$. This means that $\tilde{\Gamma}$ is the Levi-Civita connection of $\tilde{g}_{\mu\nu}$ and may be written as
\begin{equation}
\tilde{\Gamma}_{\mu\nu}^{\lambda} = \frac{1}{2} \tilde{g}^{\lambda\rho} \pc{\partial_\nu\tilde{g}_{\rho\mu} + \partial_\mu\tilde{g}_{\rho\nu} - \partial_\rho\tilde{g}_{\mu\nu}}.
\end{equation}
The connection $\tilde{\Gamma}$ can be written in terms of the Levi-Civita connection $\Gamma$, since the metrics $g_{\mu\nu}$ and $\tilde{g}_{\mu\nu}$ are conformally related. Then
\begin{equation}
\tilde{\Gamma}_{\mu\nu}^{\lambda} = \Gamma_{\mu\nu}^{\lambda} + \frac{1}{2f_R} \pc{ \delta^\lambda_\mu \partial_\nu + \delta^\lambda_\nu \partial_\mu - g_{\mu\nu} \partial^\lambda } f_R.
\end{equation}
Using this relation and the properties of a conformal metric, the Ricci tensor is given as
\begin{equation}
\tilde{R}_{\mu\nu}=R_{\mu\nu} + \frac{1}{f_R} \pr{\frac{3}{2f_R} \nabla_\mu f_R \nabla_\nu f_R - \pc{\nabla_\mu \nabla_\nu + \frac{g_{\mu\nu}}{2} \Box } f_R}.
\end{equation}
Note that, the tilde quantities are associated with the conformal metric. In a similar way the Ricci scalar and the Einstein tensor in the conformally related frames are given, respectively, as
\begin{equation}
\tilde{R} = R - \frac{3}{f_R} \Box f_R + \frac{3}{2f_R^2} \pc{\nabla f_R}^2,
\end{equation}
and
\begin{equation}
\tilde{G}_{\mu\nu} = G_{\mu\nu} + \frac{1}{f_R} \chav { \pc{g_{\mu\nu} \Box - \nabla_\mu \nabla_\nu} f_R + \frac{3}{2f_R} \pr{\nabla_\mu f_R \nabla_\nu f_R - \frac{g_{\mu\nu}}{2} \pc{\nabla f_R}^2 }}.
\end{equation}
Therefore, the field equations, eq. (\ref{conteudo_materia}), for Palatini $f(R,T)$ gravity theory becomes
\begin{align}\label{eq_campo1}
G_{\mu\nu} + \frac{1}{f_R} \chav { \pc{g_{\mu\nu} \Box - \nabla_\mu \nabla_\nu} f_R + \frac{3}{2f_R} \pr{\nabla_\mu f_R \nabla_\nu f_R - \frac{g_{\mu\nu}}{2} \pc{\nabla f_R}^2 }} = \\ \nonumber \ \frac{1}{f_R} \chav {8\pi \pc{T_{\mu\nu} -\frac{g_{\mu\nu}}{2} T} + \pr{\pc{T_{\mu\nu} + \Theta_{\mu\nu} } -\frac{g_{\mu\nu}}{2} \pc{T + \Theta} }f_T - \frac{g_{\mu\nu}}{2} }.
\end{align}
It is important to note that, eq. (\ref{eq_campo1}) is expressed only in the $g$ frame, i.e. it is written in terms of the metric $g_{\mu\nu}$, its derivatives, and the matter fields.
In the next section, this field equation is considered and the causality violation for different matter contents is studied.
\section{G\"{o}del Metric in Palatini $f(R,T)$ gravity theory}
To investigate a possible violation of causality in the Palatini $f(R,T)$ gravity theory, the G\"{o}del metric is considered. This solution was proposed by Kurt G\"{o}del in 1949 \cite{godel}. It is an exact solution of the Einstein equations for a homogeneous rotating universe. As a consequence, the possibility of Closed Timelike Curves (CTCs) emerges. CTCs allow the violation of causality and, theoretically, permit time travel in this space-time. Its line element is given by
\begin{equation}\label{godel_metric}
ds^2 = a^2 \left(dt^2 - dx^2 + \frac{e^{2x}}{2}dy^2 - dz^2 + 2e^xdt \ dy \right),
\end{equation}
where $a$ is a positive constant.
In order to study the field equations of Palatini $f(R,T)$ gravity, some tensor quantities associated with G\"{o}del space-time are calculated. The non-zero Ricci tensor components are
\begin{align}
R_{00} = 1, \ \ \ R_{02} = R_{20} = e^x, \ \ \ R_{22} = e^{2x},
\end{align}
and the Ricci scalar is
\begin{equation}
R = \frac{1}{a^2}.
\end{equation}
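These components can be reproduced with a short symbolic computation. The following \texttt{Python}/\texttt{sympy} sketch (an independent check only, using the Ricci convention written above) computes the Levi-Civita connection and the Ricci tensor of the metric (\ref{godel_metric}) and returns the values quoted here.
\begin{verbatim}
# Symbolic check of the Ricci tensor and Ricci scalar of the Goedel metric
# (illustration only; uses the Ricci convention stated in the text).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.symbols('a', positive=True)
X = [t, x, y, z]

ex = sp.exp(x)
g = a**2 * sp.Matrix([[1, 0, ex,       0],
                      [0, -1, 0,       0],
                      [ex, 0, ex**2/2, 0],
                      [0, 0, 0,       -1]])
ginv = g.inv()

def Gamma(l, m, n):   # Levi-Civita connection
    return sp.Rational(1, 2) * sum(
        ginv[l, r] * (sp.diff(g[r, m], X[n]) + sp.diff(g[r, n], X[m])
                      - sp.diff(g[m, n], X[r])) for r in range(4))

Gam = [[[sp.simplify(Gamma(l, m, n)) for n in range(4)]
        for m in range(4)] for l in range(4)]

def Ricci(m, n):      # R_mn = d_l G^l_mn - d_n G^l_ml + G^l_mn G^a_la - G^a_ml G^l_na
    expr = sum(sp.diff(Gam[l][m][n], X[l]) - sp.diff(Gam[l][m][l], X[n])
               for l in range(4))
    expr += sum(Gam[l][m][n] * Gam[al][l][al] - Gam[al][m][l] * Gam[l][n][al]
                for l in range(4) for al in range(4))
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, Ricci)
Rs = sp.simplify(sum(ginv[m, n] * Ric[m, n] for m in range(4) for n in range(4)))
print(Ric)   # expected nonzero entries (as quoted in the text): 1, exp(x), exp(2x)
print(Rs)    # expected: 1/a**2
\end{verbatim}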
Another important ingredient in this study is the matter content. Let us take the perfect fluid as a matter content. Its energy-momentum tensor is
\begin{equation}
T_{\mu\nu} = \pc {\rho + p}u_\mu u_\nu -pg_{\mu\nu},
\end{equation}
where $u_\mu$ is the four-velocity, whose covariant components are $u_\mu= (a,0,ae^x,0)$, while $\rho$ and $p$ are the energy density and the pressure, respectively. Here, the case $p=0$ is considered, i.e. the energy-momentum tensor describes dust as matter content. Then
\begin{align}
T_{\mu\nu} &= \rho u_\mu u_\nu,
\end{align}
and the trace is
\begin{align}
T &= \rho.
\end{align}
In the same way, the tensor $\Theta_{\mu\nu}$ for a pressureless perfect fluid becomes
\begin{equation}
\Theta_{\mu\nu} = -2T_{\mu\nu}.\label{Theta}
\end{equation}
Then field equations, eq. (\ref{eq_campo1}), are written as
\begin{equation}\label{eq_campo_2}
G_{\mu\nu} + J_{\mu\nu} = \ \frac{1}{f_R} \chav {8\pi \pc{T_{\mu\nu} -\frac{g_{\mu\nu}}{2} T} - \pr{\pc{T_{\mu\nu} + \Theta_{\mu\nu} } -\frac{g_{\mu\nu}}{2} \pc{T + \Theta} }f_T - \frac{g_{\mu\nu}}{2}f },
\end{equation}
where
\begin{equation} \label{tensor_J}
J_{\mu\nu} = \frac{1}{f_R} \chav { \pc{g_{\mu\nu} \Box - \nabla_\mu \nabla_\nu} f_R + \frac{3}{2f_R} \pr{\nabla_\mu f_R \nabla_\nu f_R - \frac{g_{\mu\nu}}{2} \pc{\nabla f_R}^2 }}.
\end{equation}
Note that the Ricci scalar for the G\"{o}del metric takes a constant value. This implies $J_{\mu\nu} = 0$. Then the field equations for this metric are given as
\begin{align}
\frac{1}{f_R} \chav{4\pi \rho a^2 + \frac{1}{2} \rho a^2 f_T -\frac{a^2 f}{2} } - \frac{1}{2} -a^2\Lambda & = 0, \label{resolvendo1}\\
\frac{1}{f_R} \chav{4\pi \rho a^2 + \frac{1}{2} \rho a^2 f_T +\frac{a^2 f}{2} } - \frac{1}{2} +a^2\Lambda & = 0, \label{resolvendo2}\\
\frac{1}{f_R} \chav{6\pi \rho a^2 + \frac{3}{4} \rho a^2 f_T -\frac{a^2 f}{4} } - \frac{3}{4} -\frac{a^2\Lambda}{2} & = 0.
\end{align}
From eq. (\ref{resolvendo1}) and eq. (\ref{resolvendo2}) we get
\begin{equation}
\rho = \frac{f_R}{a^2 \pc {8\pi + f_T}}
\end{equation}
and
\begin{equation}
\Lambda = -\frac{f}{2f_R}.
\end{equation}
These results show that G\"{o}del metric is a solution of Palatini $f(R,T)$ gravity theory. It means that this theory allows causality violation. So, the possibility of CTCs is real and time travel to the past is theoretically possible. In addition, this is a generalization of GR solution that is recovered in the limit $f = R$ ($f_R \rightarrow 1$) and $f_T = 0$. In the next section, this phenomenon is investigated in more detail.
\section{G\"{o}del-type Metric in Palatini $f(R,T)$ gravity theory}
A generalization of the G\"{o}del solution, called G\"{o}del-type solution, has been developed \cite{tipo_godel1}. Its line element is
\begin{equation}\label{type_godel}
ds^2 = dt^2 + 2H(r) dtd\phi - dr^2 - G(r) d\phi^2 - dz^2,
\end{equation}
where $H(r)$ and $D(r)$ are functions defined as \cite{frt_and_godel}
\begin{align}
H(r) &= \frac{4\omega}{m^2} \sinh^2 \left(\frac{mr}{2} \right),\label{condition1} \\
D(r) &= \frac{1}{m} \sinh(mr)\label{condition2},
\end{align}
and $G(r) = D^2(r) - H^2(r)$. The parameters $\omega$ and $m$ are such that $\omega^2 > 0$ and $-\infty \le m^2 \le +\infty$. The G\"{o}del-type metrics are characterized by these two parameters, and identical pairs $(\omega, m)$ specify isometric space-times \cite{tipo_godel1}. The standard G\"{o}del solution is the particular case of the $m^2>0$ class in which $m^2=2\omega^2$. Depending on the values of the parameters, causal and non-causal regions are allowed. These regions are determined by the free parameters $m$ and $\omega$ of the metric and are separated by a critical radius $r_c$, beyond which causality is violated. It is defined as
\begin{equation}
\sinh^2 \left(\frac{mr_c}{2}\right) \ = \ \left(\frac{4\omega^2}{m^2} -1 \right)^{-1}.
\end{equation}
It is important to note that, for $m^2=4\omega^2$ the critical radius becomes infinite $(r_c=\infty)$, this implies a causal universe. For $m^2 \ge 4\omega^2$, there are no G\"{o}del-type CTCs, and the breakdown of causality is avoided. Therefore, the G\"{o}del-type solution brings more details to the problem of causality.
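For concreteness, the critical radius can be evaluated numerically for representative values of the pair $(m^2, \omega^2)$ in the class $0 < m^2 < 4\omega^2$; the short sketch below (an illustration only, with arbitrary parameter values) shows how $r_c$ grows and eventually diverges as $m^2 \rightarrow 4\omega^2$.
\begin{verbatim}
# Critical radius r_c for Goedel-type metrics with 0 < m^2 < 4 omega^2
# (illustration only); r_c diverges as m^2 -> 4 omega^2.
import numpy as np

def r_critical(m2, omega2):
    """sinh^2(m r_c / 2) = (4 omega^2 / m^2 - 1)^(-1), for 0 < m^2."""
    if m2 >= 4 * omega2:
        return np.inf                     # no Goedel-type CTCs
    m = np.sqrt(m2)
    return (2.0 / m) * np.arcsinh(np.sqrt(1.0 / (4.0 * omega2 / m2 - 1.0)))

omega2 = 1.0
for m2 in [2.0, 3.0, 3.9, 4.0]:           # m^2 = 2 omega^2 is the Goedel case
    print(m2, r_critical(m2, omega2))
\end{verbatim}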
In order to solve the field equations, a new basis, for simplicity, is chosen \cite{tipo_godel1}. The metric becomes
\begin{equation}
ds^2 = \eta_{AB} \theta^A \theta^B = (\theta^0)^2 - (\theta^1)^2 - (\theta^2)^2 - (\theta^3)^2, \label{frame}
\end{equation}
where capital Latin letters denote the flat (local) space. The one-forms $\theta^A$ are defined as
\begin{equation}
\theta^A = e^A\ _{\mu}\ dx^\mu,
\end{equation}
and its components are
\begin{align}
\theta^{(0)} &= dt + H(r)d\phi, \\ \theta^{(1)} &= dr, \\ \theta^{(2)} &= D(r)d\phi, \\ \theta^{(3)} &= dz,
\end{align}
with $e^A\ _{\mu}$ being the tetrads, such that $ e^A\,_\mu e^\mu\,_B=\delta^A_B$. The non-null components of the tetrads are
\begin{equation}
e^{0}\ _{(0)} = e^{1}\ _{(1)} = e^{3}\ _{(3)} = 1, \quad e^{0}\ _{(2)} = - \frac{H(r)}{D(r)}, \quad e^{2}\ _{(2)} = D^{-1}(r).
\end{equation}
Using that $G_{AB}=e^\mu_A e^\nu_B G_{\mu\nu}$, the non-vanishing components of the Einstein tensor in the flat (local) space-time take the form
\begin{align}
G_{(0)(0)} &= 3\omega^2-m^2, \\
G_{(1)(1)} &= \omega^2,\\
G_{(2)(2)} &= \omega^2, \\
G_{(3)(3)} &= m^2-\omega^2.
\end{align}
Assuming that the content of matter is a perfect fluid, whose its energy-momentum tensor is given by
\begin{equation}
T_{AB}=(\rho+p)u_A u_B-p\eta_{AB},
\end{equation}
where $u_A = (1,0,0,0)$, the field equations for Palatini $f(R,T)$ gravity theory, in the tangent (flat) space, i.e.
\begin{equation}\label{eq_campo_tipo_godel}
G_{AB} + J_{AB} = \ \frac{1}{f_R} \chav {8\pi \pc{T_{AB} -\frac{\eta_{AB}}{2} T} - \pr{\pc{T_{AB} + \Theta_{AB} } -\frac{\eta_{AB}}{2} \pc{T + \Theta} }f_T - \frac{\eta_{AB}}{2}f },
\end{equation}
provide the set of equations
\begin{align}
2f_R\pc{3\omega^2 - m^2} +f &= 8\pi \pc {\rho + 3p} + \pc{\rho +p}f_T, \\
2f_R\omega^2 - f &= 8\pi \pc {\rho - p} + \pc{\rho +p}f_T, \label{resolvendo_tipo1}\\
2f_R\pc{m^2 - \omega^2} -f &= 8\pi \pc {\rho - p} + \pc{\rho +p}f_T. \label{resolvendo_tipo2}
\end{align}
Here $J_{AB}=0$, since the Ricci scalar for the G\"{o}del-type metrics assumes the constant value $R=2(m^2-\omega^2)$ and the matter variables are homogeneous, so that $f_R$ is constant. From eq. (\ref{resolvendo_tipo1}) and eq. (\ref{resolvendo_tipo2}) we get,
\begin{equation}
m^2=2\omega^2.\label{GC}
\end{equation}
This condition defines the original G\"{o}del universe. Using this result, the remaining equations become
\begin{align}
f_R m^2 + f &= 8\pi \pc{\rho +3p } + \pc{\rho +p} f_T,\\
f_R m^2 - f &=8\pi \pc{\rho -p } + \pc{\rho +p} f_T.
\end{align}
Taking these equations, the $m$ parameter is written as
\begin{equation}
m^2 = \frac{1}{16\pi f_R}\left[(8\pi+f_T)(16\pi\rho+f)\right].
\end{equation}
Then, the critical radius (beyond which the causality is violated) becomes
\begin{equation}\label{raio_critico1}
r_c = 2\sinh^{-1}(1)\sqrt{\frac{16\pi f_R}{(8\pi+f_T)(16\pi\rho+f)}}.
\end{equation}
This result leads to the following understanding: (i) the G\"{o}del-type metric is a solution of the Palatini $f(R,T)$ gravity theory; (ii) for the case $f_T = 0$, eq. (\ref{raio_critico1}) yields the result obtained for $f(R)$ gravity \cite{tipo_godel_fR}, and for $f(R,T)\rightarrow f(R)=R$ the standard GR result is recovered; (iii) the critical radius depends on the gravity theory (that is, on the function $f$ and its derivatives) and on the matter density $\rho$. Note that, since the G\"{o}del condition, eq. (\ref{GC}), is satisfied, there is always a non-causal region beyond the critical radius: the solution with a perfect fluid as matter content is necessarily isometric to the G\"{o}del geometry and unavoidably exhibits closed timelike curves. A question then arises: what are the conditions for obtaining a causal universe in this gravitational theory? In order to answer it, different types of matter content are investigated.
\subsection{Perfect fluid with scalar field}
In this subsection, in order to obtain more information about causality in the Palatini $f(R,T)$ gravity theory, the scalar field and the perfect fluid are considered as matter content. The total energy-momentum tensor is composed by two parts, the scalar field $T_{AB}^S$ and the perfect fluid $T_{AB}^M$. Then
\begin{align}
T_{AB} &= T_{AB}^M + T_{AB}^S, \\
&= \pc{\rho + p} u_Au_B -p\eta_{AB} + \nabla_A \phi \nabla_B \phi - \frac{1}{2} \eta_{AB}\eta^{CD}\nabla_C \phi \nabla_D \phi,
\end{align}
where $\nabla_A$ denotes the covariant derivative referred to the local basis $\theta^A = e^A_\beta dx^\beta$. The scalar field is taken as $\phi = \epsilon z + \epsilon $, with $\epsilon = const$. This choice satisfies the scalar field equation $\Box \phi = \eta^{AB} \nabla_A\nabla_B \phi = 0$. The non-zero components of the energy-momentum tensor associated with the scalar field are
\begin{equation}
T_{00}^S = -T_{11}^S = -T_{22}^S = T_{33}^S = \frac{\epsilon^2}{2}.
\end{equation}
The trace of the energy-momentum tensor is
\begin{equation}
T=T^M+T^S=\rho -3p+\epsilon^2.
\end{equation}
Considering the scalar field contributions, the tensor $\Theta_{AB}$ is written as
\begin{equation}
\Theta_{AB} = \Theta_{AB}^M + \Theta_{AB}^S,
\end{equation}
where $\Theta_{AB}^M$ is given in eq. (\ref{Theta}). By taking the free Lagrangian from the scalar field
\begin{equation}
\mathcal{L}^S = \eta^{AB} \nabla_A \phi \nabla_B \phi,
\end{equation}
the tensor $\Theta_{AB}^S$ is obtained as
\begin{equation}
\Theta_{AB}^S = - T_{AB}^S + \frac{1}{2} T^S \eta_{AB}.
\end{equation}
Then the field equation eq.(\ref{eq_campo_tipo_godel}) becomes
\begin{align}
f_R G_{AB} &= 8\pi \pr {\pc{\rho + p}u_Au_B -p\eta_{AB} + T_{AB}^S }\\
&-\frac{1}{2} \pr{8\pi \pc{\rho - 3p + \epsilon^2 } +f +f_T \pc{\rho + p - 2\epsilon^2 } }\eta_{AB}\\
& +f_T \pr{\pc{\rho + p }u_Au_B - \frac{1}{2} \epsilon^2 \eta_{AB}}.
\end{align}
This equation leads to the set of equations
\begin{align}
8\pi \epsilon^2 &= \pc{m^2 - 2\omega^2} f_R,\\
8\pi p + \frac{1}{2}\epsilon^2 f_T &= \frac{1}{2} \pc{2\omega^2 - m^2}f_R +\frac{1}{2} f,\\
8\pi \rho + f_T \pc{ \rho + p -\frac{1}{2}\epsilon^2} &= \frac{1}{2} \pc{6\omega^2 - m^2} f_R -\frac{1}{2}f,
\end{align}
where $f_R=\partial f / \partial R > 0$ and $f_T = \partial f / \partial T > 0$ have been considered. These equations allow a causal solution that is given as
\begin{eqnarray}
m^2&=&4\omega^2\\
f_R &=& \frac{8\pi \epsilon^2}{2\omega^2}.
\end{eqnarray}
This implies a critical radius $r_c \rightarrow \infty$. As a consequence, for this combination of matter fields, the breakdown of causality can be avoided. In other words, a completely causal universe is possible in this gravitational model, and whether it is realized depends on the matter content.
\subsection{Scalar field}
Here only the scalar field $\phi(z) = \epsilon z + \epsilon $ is considered to be matter content. In this case, the field equations are
\begin{align}
f_R\pc{3\omega^2 - m^2} + \frac{f}{2} &= \frac{1}{2}\epsilon^2 f_T,\\
f_R\omega^2 - \frac{f}{2} &= - \frac{1}{2} \epsilon^2 f_T,\\
f_R\pc{m^2 - \omega^2} -\frac{f}{2} &= 8\pi \epsilon^2 -\frac{1}{2} f_T \epsilon^2.
\end{align}
Adding the first two of these equations gives $f_R\pc{4\omega^2 - m^2} = 0$. Therefore, under the conditions $f_R > 0$ and $f_T > 0$, the causal condition $m^2= 4\omega^2$ is obtained for any function $f(R,T)$. Then, a causal universe with a single scalar field as matter content is allowed. These results show that the causality problem in this formalism depends on two main ingredients: the gravitational theory and the matter source.
To place the study developed here in context, it is important to note that cosmological solutions in the $f(R,T)$ gravity theory have been investigated in both formalisms, i.e. metric and Palatini. In the Palatini formulation, the field equations contain extra terms generated by the coupling between the trace of the energy-momentum tensor and the geometry. In this formalism, as in the metric case, the energy-momentum tensor of matter is not conserved and the motion of test particles is not geodesic; due to the matter-geometry coupling, an extra force arises. Then, no new physics is expected in the motion of massive test particles in the Palatini formulation of the $f(R, T)$ gravity. However, for the Friedmann-Robertson-Walker universe the cosmological equations in the Palatini formalism are different, due to the presence of dynamical terms associated with $f_R$ \cite{fRT_Palatini_primeiro,palatini_fRT1}. Therefore, in this case, the Palatini $f(R, T)$ theory allows a much richer cosmological dynamics, as compared to the metric formulation. In this paper other cosmological models are examined: the G\"{o}del and G\"{o}del-type models are investigated, and the study of causality violation in Palatini $f(R, T)$ gravity generalizes the results obtained in Refs. \cite{godel_fRT, frt_and_godel}.
\section{Conclusion}
In the present paper the $f(R,T)$ gravity theory is considered. This theory generalizes GR by introducing a geometry-matter coupling, with the trace of the energy-momentum tensor included as a field variable in the gravitational action. Here, the Palatini formalism is introduced and the $f(R,T)$ gravity theory is formulated in this approach. In the Palatini formulation, the metric and the affine connection are taken as independent field variables. Based on observational data, there are motivations for studying modified theories of gravity such as $f(R,T)$, since they provide an alternative way to explain the acceleration of the universe and may lead to a possible quantum description of gravity. If $f(R,T)$ is a model that describes gravity, a number of questions should be reexamined in this framework. In this paper, the causality problem is investigated in the Palatini $f(R,T)$ gravity theory. Our results show that the G\"{o}del metric is a solution of this gravitational model, so that the violation of causality is permitted. Furthermore, to generalize the problem of causality, the G\"{o}del-type metric with a perfect fluid as matter content is explored. In this case, the usual condition for the G\"{o}del universe is obtained. In addition, a critical radius is calculated. The result shows that the modification of the critical radius depends on the gravitational theory $f$, its derivatives and the matter content. In order to find a causal solution, a combination of perfect fluid and scalar field as matter content has been considered. In this case, there is a solution for which the critical radius becomes infinite, and therefore a causal solution is allowed. By considering a single scalar field as matter source, a similar causal solution has been obtained. Then, for well-motivated matter sources, causal and non-causal solutions in Palatini $f(R,T)$ gravity theory are allowed. Furthermore, it is important to note that, if gravity is governed by the Palatini $f(R,T)$ theory, various issues of both observational and theoretical nature ought to be reexamined in this framework, including the question as to whether these theories permit the breakdown
of causality at a non-local scale. Therefore, our result is an important theoretical test for this gravitational theory. Since it is a generalization of GR, it must contain all the standard and exact solutions of GR.
\section*{Acknowledgments}
This work by A. F. S. is supported by CNPq projects 308611/2017-9 and 430194/2018-8; J. S. Gon\c{c}alves thanks CAPES for financial support.
\usepackage{float}
\usepackage{subcaption}
\usepackage{makecell}
\makeatletter
\newcommand{\labitem}[2]{%
\def\@itemlabel{\textbf{#1}}
\item
\def\@currentlabel{#1}\label{#2}}
\makeatother
\makeatletter
\newcommand{\labitemc}[2]{%
\def\@itemlabel{\textbf{#1}}
\item
\def\@currentlabel{#1}\label{#2}}
\makeatother
\setlist{nolistsep}
\def\textcolor{red}{\textcolor{red}}
\def\textcolor{blue}{\textcolor{blue}}
\def\spacingset#1{\renewcommand{\baselinestretch}%
{#1}\small\normalsize} \spacingset{1}
\usepackage[final]{neurips_2020}
\usepackage[T1]{fontenc}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\newcommand{\mbox{Var}}{\mbox{Var}}
\def\varepsilon{\varepsilon}
\def \mathbb{R}{\mathbb{R}}
\def \mathbb{E}{\mathbb{E}}
\def \mathbb{N}{\mathbb{N}}
\def \mathbb{Z}{\mathbb{Z}}
\def \mathbb{Q}{\mathbb{Q}}
\def \mbox{Cov}{\mbox{Cov}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\nonumber}{\nonumber}
\newcommand{ \stackrel{d}{=} }{ \stackrel{d}{=} }
\newcommand{\displaystyle}{\displaystyle}
\newcommand{ \mbox{\sl Var} \ }{ \mbox{\sl Var} \ }
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\lfloor ns\rfloor}{\lfloor ns\rfloor}
\newcommand{n \rightarrow\infty}{n \rightarrow\infty}
\newcommand{\mathop{\mbox{\Huge{$\pi$}}}}{\mathop{\mbox{\Huge{$\pi$}}}}
\newcommand{\mathop{\rm tr}\nolimits}{\mathop{\rm tr}\nolimits}
\newcommand{\mathop{\rm E}\nolimits}{\mathop{\rm E}\nolimits}
\newcommand{\mathop{\rm MSE}\nolimits}{\mathop{\rm MSE}\nolimits}
\newcommand{\mathop{\rm Bias}\nolimits}{\mathop{\rm Bias}\nolimits}
\newcommand{\mathop{\rm P}\nolimits}{\mathop{\rm P}\nolimits}
\newcommand{\stackrel{\mathcal{D}}{\rightarrow}}{\stackrel{\mathcal{D}}{\rightarrow}}
\newcommand{\stackrel{P}{\rightarrow}}{\stackrel{P}{\rightarrow}}
\newcommand{\stackrel{e}{\rightarrow}}{\stackrel{e}{\rightarrow}}
\newcommand{\stackrel{a.s.}{\rightarrow}}{\stackrel{a.s.}{\rightarrow}}
\newcommand{\stackrel{d}{=}}{\stackrel{d}{=}}
\newcommand{\stackrel{d}{\approx}}{\stackrel{d}{\approx}}
\newcommand{\stackrel{a.s.}{=}}{\stackrel{a.s.}{=}}
\newcommand{\rightsquigarrow}{\rightsquigarrow}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\Kern}[3]{#1\left(\frac{#2}{#3}\right)}
\newcommand{\Kerns}[4]{#1^{#4}\left(\frac{#2}{#3}\right)}
\newcommand{\sum_{i=1}^n}{\sum_{i=1}^n}
\newcommand{\ddx}[1]{\partial_1^{#1}}
\newcommand{\Ind}[1]{I{\{#1\}}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{Y}}{\mathcal{Y}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{D}}{\mathcal{D}}
\newcommand{\mathcal{H}}{\mathcal{H}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\mathcal{M}}{\mathcal{M}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{Z}}{\mathcal{Z}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\mathcal{J}}{\mathcal{J}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{I}}{\mathcal{I}}
\newcommand{\mathcal{E}}{\mathcal{E}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathbf{a}}{\mathbf{a}}
\newcommand{\mathbf{b}}{\mathbf{b}}
\newcommand{\mathbf{c}}{\mathbf{c}}
\newcommand{\mathbf{e}}{\mathbf{e}}
\newcommand{\mathbf{h}}{\mathbf{h}}
\newcommand{\mbox{\boldmath $j$}}{\mbox{\boldmath $j$}}
\newcommand{\mbox{\scriptsize\boldmath $j$}}{\mbox{\scriptsize\boldmath $j$}}
\newcommand{\mathbf{l}}{\mathbf{l}}
\newcommand{\mathbf{x}}{\mathbf{x}}
\newcommand{\mathbf{y}}{\mathbf{y}}
\newcommand{\mathbf{t}}{\mathbf{t}}
\newcommand{\mathbf{v}}{\mathbf{v}}
\newcommand{\mathbf{u}}{\mathbf{u}}
\newcommand{\mathbf{w}}{\mathbf{w}}
\newcommand{\mathbf{z}}{\mathbf{z}}
\newcommand{\mathbf{r}}{\mathbf{r}}
\newcommand{\mathbf{s}}{\mathbf{s}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\newcommand{\mathbf{D}}{\mathbf{D}}
\newcommand{\mathbf{R}}{\mathbf{R}}
\newcommand{\mathbf{X}}{\mathbf{X}}
\newcommand{\mathbf{Z}}{\mathbf{Z}}
\newcommand{\mathbf{Y}}{\mathbf{Y}}
\newcommand{\mathbf{V}}{\mathbf{V}}
\newcommand{\mathbf{W}}{\mathbf{W}}
\newcommand{\mathbf{K}}{\mathbf{K}}
\newcommand{\mathbf{H}}{\mathbf{H}}
\newcommand{\mathbf{U}}{\mathbf{U}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\newcommand{\mathbf{F}}{\mathbf{F}}
\newcommand{\boldsymbol\alpha}{\boldsymbol\alpha}
\newcommand{\boldsymbol\beta}{\boldsymbol\beta}
\newcommand{\boldsymbol\delta}{\boldsymbol\delta}
\newcommand{\boldsymbol\gamma}{\boldsymbol\gamma}
\newcommand{\boldsymbol\psi}{\boldsymbol\psi}
\newcommand{\boldsymbol\varphi}{\boldsymbol\varphi}
\newcommand{\boldsymbol\lambda}{\boldsymbol\lambda}
\newcommand{\bm{\Sigma}}{\bm{\Sigma}}
\newcommand{\bm{\Delta}}{\bm{\Delta}}
\newcommand{\bm{\Theta}}{\bm{\Theta}}
\newcommand{\mathcal G}{\mathcal G}
\newcommand{\tilde\Psi}{\tilde\Psi}
\newcommand{\tilde\Phi}{\tilde\Phi}
\newcommand{\tilde\Psi^{*}}{\tilde\Psi^{*}}
\newcommand{\bm{U}}{\bm{U}}
\newcommand{\bm{V}}{\bm{V}}
\newcommand{\bm{W}}{\bm{W}}
\newcommand{\bm{X}}{\bm{X}}
\newcommand{\bm{h}}{\bm{h}}
\newcommand{\bm{Y}}{\bm{Y}}
\newcommand{\bm{g}}{\bm{g}}
\newcommand{\bm{u}}{\bm{u}}
\newcommand{\mathbb{W}}{\mathbb{W}}
\newcommand{\mathbb{V}}{\mathbb{V}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\mathbb{G}}{\mathbb{G}}
\newcommand{\pmb{\mathbb{G}}}{\pmb{\mathbb{G}}}
\newcommand{\mathbb{S}}{\mathbb{S}}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\widetilde{\mathbf{Z}}}{\widetilde{\mathbf{Z}}}
\newcommand{\IO}{\boldsymbol{0}}
\newcommand{\IF}{\boldsymbol{1}}
\newcommand{\dom}{\mbox{dom}\,}
\newcommand{\itg}{\lfloor t/\gamma \rfloor}
\DeclareMathOperator{\sgn}{sign}
\DeclareMathOperator{\diag}{diag}
\renewcommand{\tilde}{\widetilde}
\theoremstyle{definition}
\newtheorem{theo}{Theorem}[section]
\newtheorem{cor}[theo]{Corollary}
\newtheorem{prop}[theo]{Proposition}
\newtheorem{lem}[theo]{Lemma}
\newtheorem{example}[theo]{Example}
\newtheorem{defin}[theo]{Definition}
\newtheorem{rem}[theo]{Remark}
\author{%
Shih--Kang Chao\thanks{Corresponding author.} \\
Department of Statistics\\
University of Missouri\\
Columbia, MO 65211 \\
\texttt{[email protected]} \\
\And
Zhanyu Wang \\
Department of Statistics\\
Purdue University \\
West Lafayette, IN 47907 \\
\texttt{[email protected]}\\
\AND
Yue Xing\\
Department of Statistics\\
Purdue University \\
West Lafayette, IN 47907 \\
\texttt{[email protected]}\\
\And
Guang Cheng\\
Department of Statistics\\
Purdue University \\
West Lafayette, IN 47907 \\
\texttt{[email protected]}\\
}
\renewenvironment{proof}[1][\proofname]{{\noindent\bfseries Proof of #1.}}{\qed}
\begin{document}
\title{Directional Pruning of Deep Neural Networks}
\maketitle
\begin{abstract}
In the light of the fact that the stochastic gradient descent (SGD) often finds a flat minimum valley in the training loss, we propose a novel directional pruning method which searches for a sparse minimizer in or close to that flat region. The proposed pruning method does not require retraining or the expert knowledge on the sparsity level. To overcome the computational formidability of estimating the flat directions, we propose to use a carefully tuned $\ell_1$ proximal gradient algorithm which can provably achieve the directional pruning with a small learning rate after sufficient training. The empirical results demonstrate the promising results of our solution in highly sparse regime (92\% sparsity) among many existing pruning methods on the ResNet50 with the ImageNet, while using only a slightly higher wall time and memory footprint than the SGD. Using the VGG16 and the wide ResNet 28x10 on the CIFAR-10 and CIFAR-100, we demonstrate that our solution reaches the same minima valley as the SGD, and the minima found by our solution and the SGD do not deviate in directions that impact the training loss. The code that reproduces the results of this paper is available at \url{https://github.com/donlan2710/gRDA-Optimizer/tree/master/directional_pruning}.
\end{abstract}
\section{Introduction}\label{sec:intro}
Deep neural networks (DNNs), after properly trained, provide the state-of-the-art performance in various domains. Overparameterization is a common practice in modern deep learning, which facilitates better expressive power and faster convergence. On the other hand, overparameterization makes DNN exceedingly large, especially for large-scale tasks. For example, the ImageNet \citep{imagenet09,imagenet15} may need billions of parameters \cite{BHMM19} to become sufficiently overparameterized. {As the number of parameters in DNN is growing fast, the cost to deploy and process large DNNs can be prohibitive on devices with low memory/processing resources or with strict latency requirements}, such as mobile phones, augmented reality devices and autonomous cars. Many achievements have been made in shrinking the DNN while maintaining accuracy, and the MIT Technological Review lists the ``tiny AI'' as one of the breakthroughs in 2020 \cite{mitreview20}.
Among many methods for shrinking DNN, sparse DNN has attracted much attention. {Here, sparsity refers to the situation that most model parameters are zero in a DNN}. Sparse DNN not only requires less memory and storage capacity, but also reduces inference time \citep{CWZZ18}. One of the popular ways to get sparse DNNs is magnitude pruning \citep{HPTD15,han2015compression,Molchanov17,ZG17,LSTHD19,FC19,FDRC19,GEH19}. Magnitude pruning first learns the model parameters with an optimizer, e.g. stochastic gradient descent (SGD), and then prunes based on the learned magnitude of parameters with an a priori threshold. However, determining a threshold requires some expert knowledge and trial-and-error, as a principle for setting the threshold is not available. In addition, na\"ively masking parameters usually worsens the training loss and testing accuracy. Hence, retraining is needed for the pruned network to regain a similar performance as the dense network \citep{HPTD15}. Unfortunately, retraining as an additional step requires some care \cite{FC19} and additional computation.
\subsection{Directional pruning}
In this paper, we try to answer when a coefficient can be pruned without paying the price of increasing the training loss, and how we can prune based on this. These answers rely on the local geometry of the DNN loss function $\ell(\mathbf{w})$, where $\mathbf{w}$ denotes the parameters.
Suppose that $\mathbf{w}^{SGD}\in\mathbb{R}^d$, the parameter trained by the SGD, has reached a valley of minima. Hence, $\nabla \ell(\mathbf{w}^{SGD})\approx 0$. The Hessian $\nabla^2 \ell(\mathbf{w}^{SGD})$ has multiple nearly zero eigenvalues \citep{Sagun16,Sagun18,GKX19,P19}, and the directions associated with these eigenvalues are the flat directions on the loss landscape. Perturbation in these directions causes little change in the training loss by the second order Taylor expansion of $\ell(\mathbf{w})$ around $\mathbf{w}^{SGD}$. We denote the subspace generated by these directions as $\mathcal{P}_0$.
Following \cite{LDS90,BS93}, pruning $\mathbf{w}^{SGD}$ can be viewed as a perturbation of $\mathbf{w}^{SGD}$:
\begin{align}
\mathbf{w}^{SGD} - A \cdot \sgn(\mathbf{w}^{SGD}).\label{eq:prune}
\end{align}
Here, $\sgn(\mathbf{w}^{SGD})\in\{-1,1\}^{d}$ is the sign vector of $\mathbf{w}^{SGD}$ and $A$ is a diagonal matrix with $0 \leq A_{jj} \leq |w_j^{SGD}|$ for $j=1,\ldots,d$. The $j$th coefficient is pruned if $A_{jj} =|w_j^{SGD}|$. For example, in a 2D illustration in the left panel of Figure \ref{fig:dp}, \eqref{eq:prune} is a vector starting from the origin to a point in the orange rectangle.
Retraining is needed if $A \cdot \sgn(\mathbf{w}^{SGD})\not\in\mathcal{P}_0$. Some empirical studies even suggest that $\mathcal{P}_0$ is nearly orthogonal to $\mathbf{w}^{SGD}$ \cite{GRD18,GKX19}, so generally $A \cdot \sgn(\mathbf{w}^{SGD})\not\in\mathcal{P}_0$. Therefore, we instead consider $\mathbf{w}^{SGD}-\lambda\cdot\bm{\Theta}$, where the perturbation direction $\bm{\Theta}\in\mathcal{P}_0$ and $\lambda>0$. We maximize the number of $j$ such that $\sgn(\Theta_j)=\sgn(w_j^{SGD})$ for $j=1,\ldots,d$, in order to decay as many coefficients in $\mathbf{w}^{SGD}$ as possible. Specifically, we select $\bm{\Theta}$ as
\begin{align*}
\bm{\Theta} = \arg\min_{\mathbf{u}\in\mathcal{P}_0}\big\|\mathbf{u}-\sgn(\mathbf{w}^{SGD})\big\|_2^2,
\end{align*}
i.e. $\bm{\Theta} = \Pi_0\{\sgn(\mathbf{w}^{SGD})\}$, where $\Pi_0$ denotes the projection on the subspace $\mathcal{P}_0$. The vector $\bm{\Theta}$ does not always decrease the magnitude of $\mathbf{w}^{SGD}$, and it does whenever $\sgn(w_j^{SGD})\cdot \Theta_j>0$, or
\begin{align}
s_j:=\sgn(w_j^{SGD})\cdot \big(\Pi_0\{\sgn(\mathbf{w}^{SGD})\}\big)_j>0.\label{eq:s}
\end{align}
Decreasing the magnitude of the coefficients with $s_j>0$ in $\mathbf{w}^{SGD}$ would cause little change in the training loss, as long as we simultaneously increase the magnitude of the coefficients $j'\neq j$ with $s_{j'}<0$ in proportion to $|s_{j'}|$. As illustrated in the left panel of Figure \ref{fig:dp}, the adverse effect due to decreasing the magnitude of $w_2$ ($s_2>0$) can be compensated by increasing the magnitude of $w_1$, so that the net change is the red vector in $\mathcal{P}_0$. Note that this argument has a similar spirit as the ``optimal brain surgeon''\cite{BS93}, and it is the key to removing the need for retraining. The score $s_j$ can thus be understood as indicating whether pruning the $j$th coefficient causes a redeemable or an irredeemable change in the training loss. We propose the novel ``directional pruning'' using the score $s_j$ in \eqref{eq:s}.
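As a concrete (and deliberately simplified) illustration of the score \eqref{eq:s}, the following NumPy sketch computes $s_j$ assuming an orthonormal basis of $\mathcal{P}_0$ is available; in practice this basis is unknown, which is exactly the computational obstacle addressed in Section \ref{sec:alg}.
\begin{verbatim}
import numpy as np

def pruning_scores(w_sgd, U0):
    """s_j = sgn(w_j^SGD) * (Pi_0 sgn(w^SGD))_j, where the columns of U0
    form an orthonormal basis of the flat subspace P_0 (assumed known here)."""
    sgn_w = np.sign(w_sgd)
    proj = U0 @ (U0.T @ sgn_w)   # Pi_0 { sgn(w^SGD) }
    return sgn_w * proj          # s_j > 0: can be decayed; s_j < 0: should grow

# toy example: a random 2-dimensional "flat" subspace in R^10
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((10, 2)))
s = pruning_scores(rng.standard_normal(10), U0)
\end{verbatim}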
\begin{figure}[!h]
\centering
\includegraphics[width=0.47\textwidth,valign=t]{dp2d1_v3.pdf}
\includegraphics[width=0.38\textwidth,valign=t]{intro_cifar100_conn.pdf}
\caption{ {\bf Left:} a 2D graphical illustration of the directional pruning. The orange region contains all possible locations of the vector $\mathbf{w}^{SGD} - A \cdot \sgn(\mathbf{w}^{SGD})$. The directional pruning with different $\lambda$ takes solutions on the red dashed line. {\bf Right:} training loss contour of the wide ResNet28$\times 10$ (WRN28x10 \cite{zagoruyko2016wide}) on the CIFAR-100 around the minimal loss path (the white curve) between minimizers found by the SGD and \eqref{eq:grda} \cite{CC19} (the algorithm we propose to use) using \cite{Garipov18}. While no coefficient of the SGD minimizer is zero, our solution has only 9.7\% active parameters. Testing accuracy is 76.6\% for the SGD and 76.81\% for our solution.}\label{fig:dp}
\end{figure}
\begin{defin}[Directional pruning based on SGD]\label{def:dp} Suppose $\ell(\mathbf{w})$ is the training loss, and $\nabla\ell(\mathbf{w}^{SGD})=0$ where $\mathbf{w}^{SGD}$ is the minimizer found by SGD. Suppose none of the coefficients in $\mathbf{w}^{SGD}$ is zero. With $\lambda>0$ and $s_j$ defined in \eqref{eq:s}, the directional pruning solves
\begin{align}
\arg\min_{\mathbf{w}\in\mathbb{R}^d} \frac{1}{2}\|\mathbf{w}^{SGD}-\mathbf{w}\|_2^2 + \lambda\sum_{j=1}^d s_j |w_j|. \label{eq:grdamin}
\end{align}
\end{defin}
In \eqref{eq:grdamin}, the coefficients with $s_j>0$ are pruned with a sufficiently large $\lambda$ by the absolute value penalization, but the magnitude of $w_{j'}$ with $s_{j'}\leq 0$ is un-penalized, and is even encouraged to increase. For a 2D illustration, the solution path for different $\lambda>0$ is the dashed red curve in the left panel of Figure \ref{fig:dp}. If $\lambda$ is too large, the coefficients $j$ with $s_j<0$ may overshoot, illustrated as the flat part on the dashed red line extended to the right of the red point.
\begin{rem}[Solution of \eqref{eq:grdamin}] \label{rem:def_dp}
The objective function in \eqref{eq:grdamin} is separable across coefficients. The part with $s_j>0$ is solved by the $\ell_1$ proximal operator. The part with $s_j<0$ is non-convex, but it still has a unique global minimizer if $w_j^{SGD}\neq 0$. The solution of \eqref{eq:grdamin} is $$\widehat{w}_j = \sgn(w^{SGD}_j) \big[|w^{SGD}_j| - \lambda s_j\big]_+,$$ where $[a]_+ = \max\{0,a\}$. See Proposition \ref{prop:uniq_sol_def} in the appendix for a proof.
\end{rem}
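A one-line NumPy sketch of this closed-form solution (with $\mathbf{w}^{SGD}$, the scores $s_j$ and $\lambda$ assumed given) is:
\begin{verbatim}
import numpy as np

def directional_pruning_solution(w_sgd, s, lam):
    """Closed-form minimizer of the directional pruning objective:
    w_j = sgn(w_j^SGD) * max(|w_j^SGD| - lam * s_j, 0).
    Entries with s_j > 0 are shrunk (pruned for large lam); entries
    with s_j < 0 have their magnitude increased."""
    return np.sign(w_sgd) * np.maximum(np.abs(w_sgd) - lam * s, 0.0)
\end{verbatim}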
Implementing the directional pruning is very challenging due to the high dimensionality. Specifically, the Hessian $\nabla^2\ell$ of a modern deep neural network is often very large, so that estimating $\mathcal{P}_0$ is computationally formidable. Perhaps surprisingly, we will show that there is a very simple algorithm, \eqref{eq:grda} presented in Section \ref{sec:alg}, that can asymptotically solve \eqref{eq:grdamin} without explicitly estimating the Hessian. The right panel of Figure \ref{fig:dp} shows that if $\lambda$ is selected appropriately, our method achieves a similar training loss as the dense network with $\mathbf{w}^{SGD}$, while being highly sparse with a test accuracy comparable to the SGD. More detailed empirical analysis is in Section \ref{sec:conn}.
\begin{rem}[Major differences to the ``optimal brain surgeon''] It is worth noting that \eqref{eq:grdamin} is different from the optimization problem in \cite{BS93,HSW93}. While an analytic map between directional pruning and the optimal brain surgeon is interesting for future study, the two are generally nonequivalent. Particularly, directional pruning perturbs from $\mathbf{w}^{SGD}$ continuously in $\lambda$ like a restricted $\ell_1$ weight decay on $\mathcal{P}_0$ (Remark \ref{rem:def_dp}), while the optimal brain surgeon yields a discontinuous perturbation like a hard thresholding (see p.165 of \cite{BS93}). The main advantage of directional pruning is that it can be computed with the gRDA algorithm presented in Section \ref{sec:alg}, which does not require estimating the Hessian or its inverse.
\end{rem}
\subsection{Contributions}
Our major contribution is to propose the novel directional pruning method (Definition \ref{def:dp}), and further prove that the algorithm \eqref{eq:grda} \cite{CC19} achieves the effect of the directional pruning asymptotically. The \eqref{eq:grda} has been applied to sparse statistical inference problems with a convex loss and to principal component analysis \cite{CC19}. The connection between the directional pruning and \eqref{eq:grda} is theoretically proved by leveraging the continuous time approximation developed in \cite{CC19} under proper assumptions on the gradient flow and the Hessian matrix. It is worth noting that this algorithm does not require explicitly estimating $\mathcal{P}_0$, and it can be implemented like an optimizer in a typical deep learning framework, e.g. Tensorflow or PyTorch.
Empirically, we demonstrate that \eqref{eq:grda} successfully prunes ResNet50 on ImageNet, and achieves 73\% testing accuracy with only 8\% active parameters.
Upon benchmarking with other popular algorithms, \eqref{eq:grda} yields a high accuracy and sparsity tradeoff among many contemporary methods. We also successfully prune deep networks on CIFAR-10/100, and the results are in the appendix. Using VGG16 on CIFAR-10 and WRN28x10 on CIFAR-100, we show that \eqref{eq:grda} reaches the same valley of minima as the SGD, empirically verifying the directional pruning. Using VGG16 and WRN28x10 on CIFAR-10, we show that the proportion of the difference between \eqref{eq:grda} and the SGD lying in the leading eigenspace of the Hessian is low, as further evidence that \eqref{eq:grda} performs the directional pruning.
\section{The gRDA algorithm}\label{sec:alg}
Consider training data $\mathcal{Z} = \{Z_i\}_{i=1}^N$ with $Z_i = (X_i,Y_i)$, where $X_i$ is the input variable, e.g. images, and $Y_i$ is the response variable, e.g. a vector of real numbers, or labels $Y_i\in\{0,1\}^{n_l}$, where $n_l\in\mathbb{N}$. Suppose $h(x;\mathbf{w})\in\mathbb{R}^{n_l}$ is the output of an $L$-layer feedforward overparameterized DNN, with parameters $\mathbf{w}\in\mathbb{R}^d$. Let $\mathcal{L}(h;y):\mathbb{R}^{n_l}\times\mathbb{R}^{n_l}\to\mathbb{R}_+$ be a loss function, e.g. the $\ell_2$ loss $\mathcal{L}(h;y)=\|h-y\|_2^2$
or the cross-entropy loss.
Let $f(\mathbf{w};Z) := \mathcal{L}(h(X;\mathbf{w}),Y)$, and let $\nabla f(\mathbf{w};Z)$ be its gradient with respect to $\mathbf{w}$. The loss function $\ell(\mathbf{w})$ and its gradient are defined by
\begin{align}
\ell(\mathbf{w}):=\mathbb{E}_{\mathcal{Z}}[f(\mathbf{w};Z)], \quad G(\mathbf{w})=\nabla \ell(\mathbf{w}) = \mathbb{E}_{\mathcal{Z}}[\nabla f(\mathbf{w};Z)], \label{eq:G}
\end{align}
where $\mathbb{E}_{\mathcal{Z}}[f(\mathbf{w};Z)]=N^{-1}\sum_{i=1}^N f(\mathbf{w};Z_i)$.
We adopt the generalized regularized dual averaging (gRDA) algorithms originally proposed in \cite{CC19}. This algorithm has been successfully applied to the ad click-through rate prediction \cite{autofis}. Specifically, let $\{\hat i_k\}_{k=1}^\infty$ be i.i.d. uniform random variables on $\{1,\ldots,N\}$ independent from the training data,
\begin{align}\tag{\texttt{gRDA}}
w_{n+1,j} = \mathcal{S}_{g(n,\gamma)}\bigg(w_{0,j}-\gamma\sum_{k=0}^{n}\nabla f_j(\mathbf{w}_{k};Z_{\hat i_{k+1}})\bigg), \mbox{ for $j=1,\ldots,d$},
\label{eq:grda}
\end{align}
where $\mathcal{S}_{g}:v\mapsto \sgn(v)(|v|-g)_+$ is the soft-thresholding operator, $\mathbf{w}_0$ is an initializer chosen at random from a distribution; $\gamma$ is the learning rate; $g(n,\gamma)>0$ is the {tuning function}, detailed in \eqref{eq:tune}. We can extend \eqref{eq:grda} to minibatch gradients, by replacing $\nabla f_j(\mathbf{w}_k;Z_{\hat i_{k+1}})$ with an average $|S_{k+1}|^{-1}\sum_{i\in S_{k+1}} \nabla f(\mathbf{w}_k;Z_i)$, where $S_{k+1}\subset\{1,\ldots,N\}$ is sampled uniformly. We will focus on \eqref{eq:grda}, i.e. $|S_k|=1$ for all $k$, but our theory can be generalized to any fixed minibatch size.
The tuning function $g(n,\gamma)$ controls the growth rate of penalization. Motivated by \cite{CC19},
\begin{align}
g(n,\gamma)= c \gamma^{1/2}(n\gamma)^\mu, \label{eq:tune}
\end{align}
where $c,\mu>0$ are the two hyperparameters positively related to the strength of penalization. The $(n\gamma)^\mu$ is used to match the growing magnitude of SGD. The $\gamma^{1/2}$ is an important scaling factor; without it, \eqref{eq:grda} with $\mu=1$ reduces to the regularized dual averaging (RDA) algorithm \cite{X10} that minimizes $\ell(\mathbf{w})+\lambda\|\mathbf{w}\|_1$ rather than the directional pruning problem in \eqref{eq:grdamin}. Note that if $c=0$, then \eqref{eq:grda} recovers the stochastic gradient descent:
\begin{align}\tag{\texttt{SGD}}
\mathbf{w}_{n+1}^{SGD} = \mathbf{w}_n^{SGD} -\gamma \nabla f(\mathbf{w}_n^{SGD};Z_{\hat i_{n+1}}). \label{eq:sgd}
\end{align}
In this paper, we only consider the constant learning rate. In practice, a ``constant-and-drop'' learning rate is often adopted. See Section \ref{app:algorithm} and \ref{sec:cd} in the appendix for the algorithms in pseudocode.
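To make the update concrete, a minimal NumPy sketch of \eqref{eq:grda} with single-sample gradients is given below; the gradient oracle \texttt{grad\_f} and the data container are placeholders, and this sketch is not a substitute for the pseudocode in the appendix.
\begin{verbatim}
import numpy as np

def soft_threshold(v, g):
    return np.sign(v) * np.maximum(np.abs(v) - g, 0.0)

def grda(w0, grad_f, data, n_steps, gamma, c, mu, seed=0):
    """gRDA sketch: w_{n+1} = S_{g(n,gamma)}(w_0 - gamma * sum of stochastic
    gradients evaluated at the iterates), with g(n,gamma)=c*gamma^0.5*(n*gamma)^mu."""
    rng = np.random.default_rng(seed)
    w, grad_sum = w0.copy(), np.zeros_like(w0)
    for n in range(n_steps):
        z = data[rng.integers(len(data))]       # uniform single-sample draw
        grad_sum += grad_f(w, z)                # accumulate gradients
        g = c * gamma**0.5 * (n * gamma)**mu    # slowly growing penalty
        w = soft_threshold(w0 - gamma * grad_sum, g)
    return w
\end{verbatim}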
\begin{rem}[Selection of $\mu$ and $c$ in practice]
Our empirical results and theory in later sections suggest that $\mu\in\{0.501,0.51,0.55\}$ generally performs well regardless of the task and network used. For a given $\mu$, we recommend searching for the greatest $c$ (starting with e.g. $10^{-4}$) such that gRDA yields a test accuracy comparable to SGD using $1-5$ epochs.
\end{rem}
\section{Theoretical analysis}\label{sec:th}
To show \eqref{eq:grda} asymptotically achieves the directional pruning in Definition \ref{def:dp}, we leverage some tools from the continuous time analysis. Define the gradient flow $\mathbf{w}(t)$ to be the solution of the ordinary differential equation
\begin{align}\tag{GF}
\dot\mathbf{w} = -G(\mathbf{w}),\;\mathbf{w}(0)=\mathbf{w}_0, \label{eq:gf}
\end{align}
where $\mathbf{w}_0$ is a random initializer, and $G$ is defined in \eqref{eq:G}. The $\mathbf{w}(t)$ can provably find a good global minimizer under various conditions \citep{ACH18,ACGH19,DZPS19,Lee19,OS19,DLT18}. Throughout this paper, we assume the solution of \eqref{eq:gf} is unique.
Let $H(\cdot):=\mathbb{E}_{\mathcal{Z}}[\nabla^2 f(\cdot;Z)]$ be the Hessian matrix. Let $\Phi(t,s)\in\mathbb{R}^{d\times d}$ be the solution (termed the principal matrix solution, see Chapter 3.4 of \cite{T12}) of the matrix ODE system ($s$ is the initial time):
\begin{align}
\frac{d\Phi(t,s)}{dt} = -H(\mathbf{w}(t)) \Phi(t,s),\quad \Phi(s,s)=I_d.\label{eq:odex}
\end{align}
Let $\mathbf{w}_\gamma(t):=\mathbf{w}_{\lfloor t/\gamma\rfloor}$ and $\mathbf{w}^{SGD}(t)$ be the piecewise constant interpolated process of \eqref{eq:grda} and \eqref{eq:sgd}, respectively, with the same learning rate, where $\lfloor a\rfloor$ takes the greatest integer that is less than or equal to $a$. We will make the following assumptions:
\begin{itemize}
\labitemc{(A1)}{as:M} $G(\mathbf{w}):\mathbb{R}^d\to\mathbb{R}^d$ is continuous on $\mathbb{R}^d$.
\end{itemize}
Define
\begin{align}
\Sigma(\mathbf{w}):=\mathbb{E}_{\mathcal{Z}}\big[\big(\nabla f(\mathbf{w};Z)-G(\mathbf{w})\big)\big(\nabla f(\mathbf{w};Z)-G(\mathbf{w})\big)^\top\big].\label{eq:sig}
\end{align}
\begin{itemize}
\labitemc{(A2)}{as:L} $\Sigma:\mathbb{R}^d\to\mathbb{R}^{d\times d}$ is continuous. $\mathbb{E}_{\mathcal{Z}}\big[\sup_{\|\mathbf{w}\|\leq K}\big\|\nabla f\big(\mathbf{w},Z\big)\big\|_2^2\big]<\infty$ for any $K>0$ a.s.
\labitemc{(A3)}{as:H} $H: \mathbb{R}^d\to\mathbb{R}^{d\times d}$ is continuous, and there exists a non-negative definite matrix $\bar H$ such that
$\int_0^\infty\|H(\mathbf{w}(s))- \bar H\| ds <\infty$ where $\|\cdot\|$ is the spectral norm, and the eigenspace of $\bar H$ associated with the zero eigenvalues matches $\mathcal{P}_0$.
\labitemc{(A4)}{as:pms} $\int_0^t s^{\mu-1}\Phi(t,s)\sgn(\mathbf{w}(s))ds = o(t^\mu)$.
\labitemc{(A5)}{as:sign} There exists $\bar T>0$ such that for all $t>\bar T$: (i) $\sgn\{\mathbf{w}(t)\}=\sgn\{\mathbf{w}(\bar T)\}$; (ii) $\sgn\{w_j(t)\}=\sgn\{w_j^{SGD}(t)\}$ for all $j$.
\end{itemize}
The key theoretical result of this paper shows that \eqref{eq:grda} performs the directional pruning (Definition \ref{def:dp}) for a sufficiently large $t$.
\begin{theo}\label{th:dp}
Suppose that assumptions \ref{as:M}-\ref{as:sign} hold, and that $\mu\in(0.5,1)$ and $c>0$ in \eqref{eq:tune}. Then, as $\gamma\to 0$, \eqref{eq:grda} asymptotically performs the directional pruning based on $\mathbf{w}^{SGD}(t)$; particularly,
\begin{align}
\mathbf{w}_\gamma(t)\stackrel{d}{\approx}\arg\min_{\mathbf{w}\in\mathbb{R}^d}\bigg\{ \frac{1}{2}\|\mathbf{w}^{SGD}(t)-\mathbf{w}\|_2^2 + \lambda_{\gamma,t}\sum_{j=1}^d \bar s_j |w_j|\bigg\}, \quad \mbox{for any $t>\bar T$}, \label{eq:dp1}
\end{align}
where $\stackrel{d}{\approx}$ means ``asymptotic in distribution'' under the empirical probability measure of the gradients, $\lambda_{\gamma,t} = c \sqrt{\gamma} t^\mu$ and the $\bar s_j$ satisfies $\lim_{t\to\infty}|\bar s_j - s_j| = 0$ for all $j$.
\end{theo}
This theorem holds in the asymptotic regime ($\gamma\to 0$) with a finite time horizon, i.e. any fixed $t\geq \bar T$. It is important that $\lambda$ grows with $t$, because the magnitude of the SGD iterates asymptotically grows like a Gaussian process, i.e., as $t^{0.5}$. Hence, $\mu$ should be slightly greater than 0.5. The proof of Theorem \ref{th:dp} is in Section \ref{sec:pfdp} of the appendix.
\begin{rem}[Condition \ref{as:H}]
The eigenspace of $\bar H$ associated with the zero eigenvalues and $\mathcal{P}_0$ match when $\mathbf{w}(t)$ and SGD converge to the same flat valley of minima. For the $\ell_2$ loss and in the teacher-student framework, \cite{DLT18,Xiao19,YQ19} showed that $\mathbf{w}(t)\to\mathbf{w}^*$ exponentially fast for one hidden layer networks, so the limit $\bar H=H(\mathbf{w}^*)$ and the condition holds. For the cross-entropy loss, we suspect that $\bar H$ satisfying \ref{as:H} is not a zero matrix, but its exact form needs further investigation.
\end{rem}
\begin{figure}[H]
\begin{minipage}{0.6\textwidth}
\begin{rem}[Condition \ref{as:pms}]
This condition can be verified (by Problem 3.31 of \cite{T12})
if $\sgn(\mathbf{w}(t))$ is mainly restricted to the eigenspace of $H(\mathbf{w}(t))$ associated with positive eigenvalues as $t\to\infty$. Empirically, this appears to hold, as \cite{GRD18,GKX19} show that $\mathbf{w}(t)$ lies mainly in the subspace of $H(\mathbf{w}(t))$ associated with the positive eigenvalues, and Figure \ref{fig:sgdsign} suggests the angle between $\mathbf{w}(t)$ and $\sgn(\mathbf{w}(t))$ is very small.
\end{rem}
\begin{rem}[Condition \ref{as:sign}]
For (i), under the cross-entropy loss, several papers \cite{SENS18,GLSS18,JT19,LL20} show that $\mathbf{w}(t)/\|\mathbf{w}(t)\|_2$ converges to a unique direction while $\|\mathbf{w}(t)\|_2\to\infty$. This implies that $\sgn(\mathbf{w}(t))$ stabilizes after a finite time. For the $\ell_2$ loss, \cite{DLT18,Xiao19} show $\mathbf{w}(t)\to\mathbf{w}^*$ for one hidden layer networks under regularity conditions, and the condition follows. Part (ii) holds if the learning rate is sufficiently small, so that the deviation between the gradient flow and the SGD is small.
\end{rem}
\end{minipage}
\hfill
\begin{minipage}{0.37\textwidth}
\includegraphics[width=\textwidth]{l2_over_l1.pdf}
\caption{$\|\mathbf{w}\|_2/\|\mathbf{w}\|_1$ is close to its lower bound $d^{-1/2}$ when the coefficients in $\mathbf{w}$ are of similar magnitude, i.e. the direction of $\mathbf{w}$ is the same as $\sgn(\mathbf{w})$.}\label{fig:sgdsign}
\end{minipage}
\end{figure}
\section{Empirical experiments}\label{sec:empi}
This section presents the empirical performance of \eqref{eq:grda}, and the evidence that \eqref{eq:grda} performs the directional pruning (Definition \ref{def:dp}). Section \ref{sec:imagenet} considers ResNet50 with ImageNet, and compares with several existing pruning algorithms. To check if \eqref{eq:grda} performs the directional pruning, Section \ref{sec:conn} presents the local geometry of the loss around the minimal loss curve that connects the minima found by \eqref{eq:sgd} and \eqref{eq:grda}, and Section \ref{sec:proj} investigates the direction of deviation between the minima found by \eqref{eq:sgd} and \eqref{eq:grda}.
\subsection{ResNet50 on the ImageNet}\label{sec:imagenet}
We use \eqref{eq:grda} to simultaneously prune and train the ResNet50 \cite{HZRS15} on the ImageNet dataset without any post-processing like retraining. The learning rate schedule usually applied jointly with the SGD with momentum does not work well for \eqref{eq:grda}, so we either use a constant learning rate or drop the learning rate only once in the later training stage. Please find more implementation details in Section \ref{ap:imagenet_details} in the appendix. The results are shown in Figure \ref{fig:resnet50}, where $\mu$ is the increasing rate of the soft thresholding in the tuning function \eqref{eq:tune} of \eqref{eq:grda}.
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{imagenet_training-largefont.pdf}
\caption{Learning trajectories of \eqref{eq:sgd} and \eqref{eq:grda} for ResNet50 \cite{HZRS15} on the ImageNet image recognition task. {\bf Left:} top 1 training accuracy. {\bf Center:} top 1 testing accuracy. {\bf Right:} the ratio between the number of nonzero parameters and the total number of parameters. The number of nonzero weights slightly increases, contradicting Theorem \ref{th:dp}. This could be because Assumption \ref{as:sign} fails due to the large learning rate. $\gamma=0.1$ for both SGD and gRDA. Minibatch size is 256.
}\label{fig:resnet50}
\end{figure}
{\bf Accuracy}: gRDAs can perform as accurately as \eqref{eq:sgd} after sufficient training. Larger $\mu$ (in the tuning function \eqref{eq:tune}) can perform worse than \eqref{eq:sgd} in the early stage of training, but eventually beats \eqref{eq:sgd} in the late stage of training. The training accuracy of \eqref{eq:sgd} is higher than that of the gRDAs. This may result from too large a learning rate, so that the coefficients $w_j$'s with $s_j<0$ (in \eqref{eq:grdamin}) overshoot and their magnitudes become too large. \\
{\bf Sparsity}: Sparsity increases rapidly at the early stage of training. With $\mu=0.55$ in Figure \ref{fig:resnet50}, \eqref{eq:grda} reaches 92\% sparsity, while the testing accuracy is higher than \eqref{eq:sgd}.\\% and eventually reaches 80-90\% without losing accuracy.\\
{\bf Wall time and memory footprint}: \eqref{eq:grda} has a slightly higher wall time than \eqref{eq:sgd}, but the memory footprint is similar. See Section \ref{ap:resource_comparison} for a detailed comparison.
The left panel of Figure \ref{fig:compareresnet50} compares \eqref{eq:grda} with the magnitude pruning \cite{ZG17} and the variational dropout \cite{pmlr-v70-molchanov17a}, and \eqref{eq:grda} is particularly competitive in the high sparsity (90-92\%) regime. The right panel of Figure \ref{fig:compareresnet50} compares \eqref{eq:grda}, in terms of the layerwise sparsity, with other pruning algorithms that do not require expert knowledge for selecting the layerwise pruning level. We compare \eqref{eq:grda} with the Erd\H{o}s-R\'enyi-Kernel of \cite{evci19}, variational dropout \cite{pmlr-v70-molchanov17a} and a reinforcement-learning based AutoML method \cite{He18}. Our \eqref{eq:grda} achieves a high sparsity of 92\% with a competitive testing accuracy. In addition, the layerwise sparsity pattern generated by gRDA is similar to the variational dropout and the AutoML, as these methods generate higher sparsity in the 3$\times$3 convolutional layers, and lower sparsity in the 1$\times$1 layers and the initial layers, which are narrower than the later layers. Among these methods, \eqref{eq:grda} is unique in that it is intertwined with the local loss landscape.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.43\textwidth, trim={0 0 0 45}, clip]{compare_gale_gRDA_light_with_arrow_v2-largefont.pdf}
\includegraphics[width=0.54\textwidth, trim={0 0 0 45}, clip]{layerwise_sparsity_comparison-largefont.pdf}
\caption{{\bf Left:} A comparison of gRDA with the magnitude pruning \cite{ZG17} and variational dropout \cite{pmlr-v70-molchanov17a} with ResNet50 on ImageNet, done by \cite{GEH19} with around 100 epochs using SGD with momentum. Our solution is among the high performers in the very sparse regime (90-92\%).
The numbers next to the red crosses are the epochs.
{\bf Right:} Layerwise sparsity produced by different ``automatic'' pruning algorithms. All methods show the pattern that the 3x3 conv layers (on dashed lines) are greatly pruned (valleys), and the 1x1 conv layers are less pruned (peaks).}\label{fig:compareresnet50}
\end{figure}
\subsection{Connectivity between the minimizers of gRDA and SGD}\label{sec:conn}
In this section, we check whether \eqref{eq:sgd} and \eqref{eq:grda} reach the same valley, which implies \eqref{eq:grda} is performing the directional pruning. Similar analysis has been done for the minima found by \eqref{eq:sgd} with different initializers \cite{WS19,Garipov18,N19,Draxler18,HHY19,EPGE19,Gotmare18}.
We train VGG16 \cite{vgg16} on CIFAR-10 and WRN28x10 on CIFAR-100 until nearly zero training loss using both \eqref{eq:sgd} and \eqref{eq:grda}. The minima here found by \eqref{eq:grda} generally have sparsity around 90\% or higher for larger $\mu$. We use the method of \cite{Garipov18} to search for a quadratic B\'ezier curve of minimal training loss connecting the minima found by the gRDA and SGD, and then visualize the contour of the training losses and testing errors on the hyperplane containing the minimal loss curve.
See Sections \ref{append:cifar} and \ref{append:conn} for details on implementation.
The results are shown for different choices of $\mu$, which is the increasing rate of the soft thresholding in the tuning function \eqref{eq:tune} of \eqref{eq:grda}. As observed from the contours in Figure \ref{fig:conn}, the learned parameters of both SGD and gRDA lie in the same valley on the training loss landscape if $\mu$ is properly tuned, namely, $0.6$ for VGG16 and $0.501$ for WRN28x10. This verifies that \eqref{eq:grda} performs the directional pruning. For large $\mu$, a hill exists on the minimal loss/error path, which may be due to a too large learning rate that leads to a large magnitude for the coefficients $j$ with $s_j<0$. The details (training accuracy, testing accuracy, sparsity) of the endpoints trained on VGG16 and WRN28x10 are shown in Tables \ref{tab:cifar10vgg} and \ref{tab:cifar100wideres} of the Appendix. For the testing error in Figure \ref{fig:conn}, the gRDA somewhat outperforms SGD when $\mu$ is slightly greater than $0.5$. Interestingly, the neighborhood of the midpoint on the B\'ezier curve often has a higher testing accuracy than both endpoints, except for WRN28x10 on CIFAR-100 with $\mu=0.501$ and 0.55. This finding resonates with the results of \cite{Izmailov18}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{cifar10_mu06_conn_train.pdf}
\caption{VGG16/CIFAR-10/Train loss}\label{fig:5a}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{cifar10_mu06_conn_test.pdf}
\caption{VGG16/CIFAR-10/Test error}\label{fig:5b}
\end{subfigure}\\
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{cifar100_mu0501_conn_train.pdf}
\caption{WRN28x10/CIFAR-100/Train loss}\label{fig:6a}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{cifar100_mu0501_conn_test.pdf}
\caption{WRN28x10/CIFAR-100/Test error}\label{fig:6b}
\end{subfigure}
\caption{The upper figure in each panel shows the contour of training loss and testing error on the hyperplane containing the minimal loss B\'ezier curve (white) interpolating the minimizers found by the SGD and the gRDA. The lower plot of each panel shows the training loss/testing error on the minimal loss B\'ezier curve interpolating minimizers of SGD and gRDA under different $\mu$.
}\label{fig:conn}
\end{figure}
\subsection{Direction of $\mathbf{w}^{gRDA}-\mathbf{w}^{SGD}$}\label{sec:proj}
The directional pruning (Definition \ref{def:dp}) implies that the vector $\Delta_n:=\mathbf{w}_n^{gRDA}-\mathbf{w}_n^{SGD}$ should lie in $\mathcal{P}_0$ as $n\to\infty$ if tuned appropriately. Unfortunately, checking this empirically requires estimating $\mathcal{P}_0$, which is computationally formidable. Nonetheless, there exists a dominating low dimensional subspace in $\mathcal{P}_0^\perp$ (the subspace orthogonal to $\mathcal{P}_0$); particularly, a few studies \cite{Sagun16,Sagun18,GKX19,P18} have empirically shown that, for various networks on the CIFAR-10, the magnitudes of the ten leading eigenvalues of $H(\mathbf{w}^{SGD})$ dominate the others.
Let $\mathcal{P}_n^{top}:=\mbox{span}\{\bm{u}_{1,n},\bm{u}_{2,n},\ldots,\bm{u}_{10,n}\}$ be the top subspace spanned by the eigenvectors $\bm{u}_{j,n}$ associated with the top 10 eigenvalues of $H(\mathbf{w}_n^{SGD})$. Define
\begin{align}
P_n := \left[
\begin{array}{ccc}
\longleftarrow &\bm{u}_{1,n} &\longrightarrow \\
\longleftarrow &\bm{u}_{2,n} &\longrightarrow \\
&\vdots&\\
\longleftarrow &\bm{u}_{10,n} &\longrightarrow \\
\end{array}
\right].\label{eq:P}
\end{align}
We train the VGG16 and WRN28x10 on the CIFAR-10, until the training data are nearly interpolated and the training loss is almost zero. During the training process, we fix the initializer and the minibatches when we use different optimizers to ensure comparability. We compute $P_n$ on the training trajectories of VGG16 and WRN28x10. See Section \ref{app:proj} for details on the computation of these eigenvectors. We test the hypothesis that the proportion of $\Delta_n$ in $\mathcal{P}_n^{top}$ is low, i.e. $\|P_n \Delta_n\|/\|\Delta_n\|$ is low. The results from the VGG16 and WRN28x10 in Figure \ref{fig:proj} basically confirm this hypothesis, as the proportion of $\Delta_n$ in $\mathcal{P}_n^{top}$ is very small for both networks. Particularly, the proportion is always very small for WRN28x10. The results for different $\mu$ are similar, showing that $\Delta_n$ points in the same direction regardless of $\mu$.
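The quantity plotted in Figure \ref{fig:proj} is simple to form once the leading eigenvectors are in hand; a minimal NumPy sketch (assuming the top eigenvectors have already been computed, e.g. by a Lanczos-type routine) is:
\begin{verbatim}
import numpy as np

def top_subspace_fraction(delta, eigvecs):
    """||P_n delta|| / ||delta||, where the rows of `eigvecs` are orthonormal
    leading Hessian eigenvectors (here the top 10) and delta = w_gRDA - w_SGD."""
    proj = eigvecs @ delta     # coordinates of delta in the top eigenspace
    return np.linalg.norm(proj) / np.linalg.norm(delta)
\end{verbatim}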
\begin{figure}[!h]
\centering
\includegraphics[width=0.48\textwidth]{projection_diff_fraction_fullbatch_VGG16.pdf}
\includegraphics[width=0.48\textwidth]{projection_diff_fraction_fullbatch_remove-mu06.pdf}
\caption{The fraction of the difference between SGD and gRDA lying in the eigenspace associated with the leading 10 eigenvalues. \textbf{Left:} VGG16. \textbf{Right:} WRN28x10. The $\|\cdot\|$ is the $\ell_2$ norm.}\label{fig:proj}
\vspace{-0.3cm}
\end{figure}
\section{Discussion and future work}
We propose the novel directional pruning for deep neural networks, which aims to prune DNNs while preserving the training accuracy. For implementation, we show that \eqref{eq:grda} asymptotically achieves the directional pruning after sufficient epochs of training. Empirical evidence shows that our solution yields an accuracy and sparsity tradeoff within the range of many contemporary pruning techniques.
The testing accuracy of \eqref{eq:grda} is almost always higher than \eqref{eq:sgd} if $\mu$ is slightly greater than 0.5 when using the ResNets, and some interpolation between the minima found by \eqref{eq:grda} and \eqref{eq:sgd} often has a better testing accuracy than the two minima; see Figure \ref{fig:conn}. As suggested by Figure \ref{fig:proj}, \eqref{eq:grda} appears to deviate from \eqref{eq:sgd} in the flatter directions. This evidence supports \cite{HHY19}, who argue that the valley of minima is actually asymmetric, and that points on the flatter side tend to generalize better. We think a further study of the testing accuracy of \eqref{eq:grda} along the lines initiated in this work may be an interesting future research topic, as this would shed some light on the mystery of generalization.
\section*{Broader Impact}
Our paper belongs to the cluster of works focusing on efficient and resource-aware deep learning. There are numerous positive impacts of these works, including the reduction of memory footprint and computational time, so that deep neural networks can be deployed on devices equipped with less capable computing units, e.g. microcontroller units. In addition, we help facilitate on-device deep learning, which could replace traditional cloud computation and foster the protection of privacy.
Popularization of deep learning, which our research helps facilitate, may result in some negative societal consequences. For example, unemployment may increase due to the increased automation enabled by deep learning.
\section*{Acknowledgments}
We thank the anonymous reviewers for the helpful comments. Shih-Kang Chao would like to acknowledge the financial support from the Research Council of the University of Missouri. This work was completed while Guang Cheng was a member of the Institute for Advanced Study, Princeton in the fall of 2019. Guang Cheng would like to acknowledge the hospitality of the IAS and the computational resource it has provided.
\bibliographystyle{plain}
\section{Introduction}
Recently a number of seemingly disparate research topics have converged and have been seen to be closely related to each other. The first is the classic problem of a half-filled Landau level of spin-polarized electrons in two space dimensions\cite{dassarmabook,jainbook}. The second is the effects of interactions on three dimensional topological insulators, and in particular, the possibility of novel strongly correlated surface states of such insulators\cite{tsarcmp}. The third is the study of three dimensional quantum spin liquid phases with an emergent gapless photon, such as may possibly be realized in quantum spin ice materials\cite{gingrasrev}. As expected from such a convergence new insights on each of these problems have emerged. Amongst other results, an old issue in the theory of the half-filled Landau level now has a simple and elegant answer. In a different direction, correlated surface states of some three dimensional topological insulators are now seen to have a surprising physical realization in ordinary two dimensional systems.
The purpose of this article is to synthesize and elaborate on these developments. The core of what we describe is based on several recent papers\cite{sonphcfl,tsymmu1,dualdrcwts2015,dualdrcmaxav}. However the point of view and emphasis that we provide is different from what is contained in these papers and other existing literature. We present a simplified, and physically transparent perspective, that distills the essence of the ideas involved.
We begin by describing the three different research topics separately.
\subsection{The half-filled Landau level}
Electrons confined to two dimensions in a strong magnetic field display the phenomenon of the integer and fractional quantum Hall effects. We will be concerned with an ``unquantized'' quantum Hall effect (see, e.g., the contribution by Halperin in Ref. \onlinecite{dassarmabook}) that
occurs when the filling factor $\nu$ of the lowest Landau level is $\frac{1}{2}$. Empirically this is seen to be a metal, albeit a rather unusual one. The classic theory of this metal - due to Halperin, Lee, and Read (HLR)\cite{hlr} - describes it as a compressible state obtained by forming a Fermi surface of ``composite fermions''\cite{jaincf} rather than of the original electrons. In the original HLR theory, the composite fermions are formed by binding two flux quanta to the physical electrons. At $\nu = \frac{1}{2}$ this attached flux on average precisely cancels the external magnetic flux so that the composite fermions move in zero effective field. This facilitates the formation of a Fermi surface and leads to an effective field theory of the metal as a Fermi surface coupled to a fluctuating gauge field, which is then used to describe the physical properties of this metal.
The HLR theory - and some subsequent refinements - successfully predicted many experimental properties. For instance when the filling is tuned slightly away from $\frac{1}{2}$, the composite fermions see a weak effective magnetic field and their trajectories are expected to follow cyclotron orbits with radii much larger than those of the underlying electrons. These have been directly demonstrated in experiment\cite{willett93,kang93,goldman94,smet96} - for reviews see, e.g., the contribution by Tsui and Stormer in Ref. \onlinecite{dassarmabook}, and Ref. \onlinecite{willett97}. Further the composite Fermi liquid acts as a parent for the construction of the Jain sequence of states\cite{jainbook} away from $\nu = \frac{1}{2}$: these states are simply obtained by filling an integer number of Landau levels of the composite fermions. Finally the composite Fermi liquid yields the non-abelian Moore-Read quantum Hall state through pair ``condensation'' of the composite fermions\cite{readgrn}.
Despite its success there was one unresolved question with the theory of the composite Fermi liquid at $\nu = \frac{1}{2}$. To appreciate this consider the limit that the Landau level spacing $\hbar \omega_c \gg H_{int}$ (where $\omega_c$ is the cyclotron frequency and $H_{int}$ is the electron-electron interaction). Then it is legitimate to project to the lowest Landau level. With a two-body interaction (e.g., Coulomb) the resulting Hamiltonian has a ``particle-hole'' symmetry at $\nu = \frac{1}{2}$. This symmetry is not manifest in the HLR description of the composite Fermi liquid, and is possibly even violated by it\cite{klkgphhlr,maissamph}. A lowest Landau level description is routinely used in theoretical discussions and numerical calculations of quantum Hall states, including at $\nu = \frac{1}{2}$. It is also not an unrealistic limit to consider in experiments. It is thus important to understand how the particle-hole symmetry should be incorporated into the theory of the composite Fermi liquid.
\subsection{Interacting topological insulators in three dimensions}
In the last decade condensed matter physics has been invigorated by the study of topological insulating phases of matter\cite{HsnKn,HsnMr,QiScz}. While much of the initial theoretical discussion focused on models of non-interacting electrons, in recent years attention has turned to studies of the phenomenon of topological insulation in strongly interacting electronic systems. The effects of interactions raises many questions. Is the topological distinction between phases obtained in free fermion models robust to the inclusion of interactions? Are there new phases enabled by interactions that have no free fermion description? Even if a free fermion topological phase survives in an interacting system, are there new correlated surface states that can appear as an alternate to the ones obtained in the free fermion model?
Tremendous progress on these questions has been achieved theoretically. Our focus here is on three dimensional topological insulators (TI). In that case for spin-orbit coupled insulators the free fermion topological insulator is known to be stable to interactions\cite{qitheta}. Within band theory the surface of such an insulator famously consists of an odd number of electronic Dirac cones. This metallic surface cannot be gapped or rendered insulating with any amount of impurities so long as the defining symmetries (charge conservation and time reversal) are preserved. On the other hand with interactions several groups\cite{fSTO1,fSTO2,fSTO3,fSTO4} described how a symmetry preserving gapped surface can emerge for the bulk topological insulator. Inspired by similar constructions\cite{avts12,hmodl,statwitt,burnell,geraedts14} for bosonic analogs of the topological insulators these papers showed that such a symmetry preserving gapped surface requires the kind of topological order familiar from discussions of the fractional quantum Hall effect and some quantum spin liquids. However the symmetry is implemented in this topologically ordered state in a manner that is forbidden (``anomalous'') in a strictly two dimensional system. Symmetry-preserving surface topologically ordered phases, besides being conceptually interesting, proved to be a useful theoretical tool in describing the physics of a class of interacting generalizations of topological insulators known as Symmetry Protected Topological (SPT) phases\cite{chencoho2011}. These are phases with no non-trivial bulk excitations but which nevertheless have non-trivial surface states protected by a global symmetry.
Spin-orbit coupled electronic SPT insulators in $3d$ have a classification\cite{3dfSPT} by the group $Z_2^3$ as compared to the $Z_2$ classification without interactions. In interacting systems there are thus 6 spin-orbit coupled SPT insulators in 3d that are `beyond band theory'. Electronic SPT phases with many physically interesting symmetry groups in $3d$ have been classified\cite{fidkowski3d,3dfSPT,3dfSPT2} and their properties are understood. In several symmetry classes there exist SPT phases which are `beyond band theory' ({\em i.e} have no free fermion description)\cite{3dfSPT,3dfSPT2}. In addition for some symmetries some free fermion topological phases become indistinct from topologically trivial phases in an interacting system\cite{fidkowski3d,3dfSPT,3dfSPT2,maxvortex}. Thus the classification of $3d$ free fermion SPT phases is modified in the presence of interactions (see Ref.\onlinecite{tsarcmp} for a review).
An important open question in this area is the physical realization of these various phenomena. For instance what kinds of physical systems naturally realize the correlated surface states of the three dimensional topological insulator?
\subsection{Quantum spin liquids in three dimensions}
Quantum spin liquids are ground states of interacting quantum spin systems characterized by long range entanglement between local degrees of freedom. While the theoretical possibility of such ground states has been appreciated for a long time it is only in the last decade that credible experimental candidates have emerged\cite{palrev}. There are many kinds of quantum spin liquid phases which are sharply distinct from each other. Of particular interest to us are three dimensional quantum spin liquid phases that possess an emergent gapless photon in the excitation spectrum\cite{wenbook,bosfrc3d,wen03,3ddmr,hfb04,lesikts05,kdybk,shannon}. The low energy theory of such phases is a (deconfined) $U(1)$ gauge theory. These phases are hence called $U(1)$ quantum spin liquids. Their excitation spectrum consists of a gapless emergent `photon', and emergent particle-like excitations that couple to the photon as electric or magnetic charges. Such spin liquids may possibly be realized in quantum spin ice materials on pyrochlore lattices\cite{balentsqspice}. The spin hamiltonian describing these pyrochlore magnets is rather complicated and is characterized by very little symmetry\cite{balentsqspice}. The only internal symmetry is time reversal. This motivates a classification and description of time reversal invariant $U(1)$ quantum spin liquids in three dimensions\cite{tsymmu1}.
Of particular interest to us is the so-called ``Topological Mott Insulator'' discussed in Ref. \onlinecite{pesinlb} as a possible state in pyrochlore iridates such as $Y_2Ir_2O_7$. This is a three dimensional time reversal symmetric $U(1)$ quantum spin liquid state where the gapped emergent `electric' charge (denoted a spinon) is a gapped fermion that is a Kramers doublet under the time reversal symmetry. Furthermore this spinon has topological band structure leading to protected surface states.
Naively many other similar constructions of $U(1)$ quantum spin liquids are possible where the emergent electric or magnetic charges themselves form an SPT phase. How are these different constructions related to each other?
\subsection{Summary and plan}
We will see below that these three topics are closely connected to each other, and that these connections lead to a wealth of fresh insights. In a recent paper Son\cite{sonphcfl} proposed a particle-hole symmetric formulation of the composite Fermi liquid in the half-filled Landau level. This proposal was motivated by thinking about the half-filled Landau level in a microscopic system of Dirac fermions in a magnetic field such as may arise at the surface of a three dimensional topological insulator. The composite Fermi liquid was suggested to be also described by a single Dirac cone at finite density (with particle-hole symmetry playing the role of time reversal), and with a coupling to a $U(1)$ gauge field. In another recent paper\cite{tsymmu1} the present authors classified and described the physics of time reversal symmetric $U(1)$ quantum spin liquids in $3d$. The results were then applied in Ref. \onlinecite{dualdrcwts2015} to deriving a new gapless metallic surface state of the $3d$ topological insulator. This same result and some of the results of Ref. \onlinecite{tsymmu1} were also independently obtained in Ref. \onlinecite{dualdrcmaxav}. The improved understanding of the topological insulator surface paves the way for an understanding of Son's proposal.
In the rest of this paper we will synthesize these results in a manner that exposes the physics most simply. We begin by describing the action of particle-hole symmetry in the half-filled Landau level (Sec. \ref{phhll}) and then describe Son's proposed theory (Sec. \ref{sonprop}). Next, in Sec. \ref{phcflphys} we provide a physical description of the particle-hole symmetric composite fermion. This modifies and extends a previous physical picture of the composite fermion as a neutral dipolar particle. We argue that this modification is natural when particle-hole symmetry is present. We show that the most essential features of the particle-hole symmetric composite fermion follow simply and naturally from this modified picture. We support our arguments by solving in Appendix \ref{drcdip} a simple model of two-particle quantum mechanics which illustrates several of the key features. We then provide, in Secs. \ref{hlltisrf} and \ref{cfsti}, an alternate understanding of the half-filled Landau level by relating it to correlated surface states of three dimensional fermionic topological insulators. As described above such surface states have ``anomalous'' symmetry implementation not possible in a strictly $2d$ system. Remarkably
the well studied half-filled Landau level - despite being strictly $2d$ - provides a physical realization of such a state, and makes it relevant to experiments. This is possible because the particle-hole symmetry is not really a microscopic local symmetry in the physical Hilbert space of the two dimensional system but is an emergent low energy symmetry of a single Landau level.
We describe (in Sec. \ref{titsymmu1}) how these correlated surface states are fruitfully constrained by studying the properties of the three dimensional bulk when the fermions are coupled to a dynamical $U(1)$ gauge field. The resulting state is to be viewed as a $3d$ $U(1)$ quantum spin liquid - in particular a ``topological Mott insulator''. We review arguments of Refs. \onlinecite{tsymmu1,dualdrcmaxav} showing that the topological Mott insulator admits two equivalent but dual descriptions as either charge or monopole topological insulators in Sec. \ref{bdgti}. The consequences\cite{dualdrcwts2015,dualdrcmaxav} of this bulk duality for correlated surface states of the original topological insulator are then described in Secs. \ref{dualsrf} and \ref{vms}. We then revisit the composite Fermi liquid (in Sec. \ref{bcfl}) with this understanding of the
correlated surface states and show that it matches exactly with Son's proposed theory. We then consider (Sec. \ref{phpf}) a particle-hole symmetric version of a paired non-abelian quantum Hall state\cite{sonphcfl} obtained by pairing the composite fermions. This state is identical to a symmetry preserving surface topologically ordered state discussed previously\cite{fidkowski3d,3dfSPT2,maxvortex} for the corresponding $3d$ topological insulator. We show that this particle-hole symmetric Pfaffian state gives further support to the modified dipolar picture of the composite fermion.
With this understanding we revisit the phenomenology of composite Fermi liquids in Sec. \ref{cflphen} with or without particle-hole symmetry. We show that many of the essential features of the HLR theory (which have successfully confronted experiment) are preserved, for instance in the electromagnetic response. We turn next to heat transport of the composite fermi liquid metal (which does not seem to have been discussed before). We show, both within the conventional HLR theory and the new particle-hole symmetric version, that there is a dramatic violation of the conventional Wiedemann-Franz relationship between the heat and electrical conductivities. However the composite fermi liquid should satisfy a modified Wiedemann-Franz law. This can possibly be tested in future experiments. We also make some brief comments on the cyclotron radius away from half-filling, and on the effects of disorder. A key feature of the particle-hole symmetric theory is the presence of a $\pi$ Berry phase when the composite fermion circles around the Fermi surface. In Appendix \ref{sdhpi} we show that this Berry phase is implied by the standard Shubnikov-deHaas oscillations near $\nu = \frac{1}{2}$ after a simple but revealing reinterpretation.
\section{Particle-hole symmetry and the half-filled Landau level}
\label{phhll}
We begin with the half-filled Landau level in two dimensions and describe the action of particle-hole symmetry.
Consider the full set of single particle eigenstates $\phi_{I,m} (x,y)$ where $I$ labels the Landau level and $m$ the orbital within each Landau level, for instance, in the symmetric gauge. The microscopic electron destruction operator $\psi_e(x,y)$
may be expanded as
\begin{equation}
\psi_e(x,y) = \sum_{I,m} \phi_{I,m}(x,y) c_{I,m}
\end{equation}
The $c_{I,m}$ are electron destruction operators for the single particle state indexed by $(I,m)$ and satisfy the usual fermion anti commutation relations. To project to the lowest Landau level we truncate the expansion by keeping only the $I =0$ terms:
\begin{equation}
\psi_e(x,y) \approx \sum_m \phi_{0m} c_m
\end{equation}
(Here and henceforth we drop the Landau level index $0$ and denote $c_{0,m}$ simply by $c_m$.) The particle-hole transformation in the lowest Landau level is defined to be an {\em anti-unitary} operator $C$ such that
\begin{eqnarray}
Cc_m C^{-1} & = & h_m^\dagger \\
Cc_m^\dagger C^{-1} & = & h_m
\end{eqnarray}
The $h_m$ satisfy fermion anti commutation relations. A two-body Hamiltonian acting within the lowest Landau level can be written
\begin{equation}
H_{int} = \frac{1}{2} \sum_{m_1, m_2, m_1', m_2'} c^\dagger_{m_1'} c^\dagger_{m_2'} c_{m_2} c_{m_1} \langle m_1' m_2' | V m_1 m_2 \rangle
\end{equation}
The anti-unitary $C$ operation leaves this interaction invariant but generates a one-body term. At half-filling this is exactly compensated by a chemical potential so that the Hamiltonian is particle-hole symmetric.
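To see schematically how this comes about, conjugate the interaction by $C$ and normal order the hole operators; using the hermiticity and exchange symmetry of the two-body matrix element one finds (the detailed form of the one-body coefficients $v_{mm'}$, which are partial traces of the interaction matrix elements, is not needed here)
\begin{eqnarray}
C H_{int} C^{-1} & = & \frac{1}{2} \sum_{m_1, m_2, m_1', m_2'} h_{m_1'} h_{m_2'} h^\dagger_{m_2} h^\dagger_{m_1} \langle m_1' m_2' | V m_1 m_2 \rangle^* \nonumber \\
& = & \frac{1}{2} \sum_{m_1, m_2, m_1', m_2'} h^\dagger_{m_1'} h^\dagger_{m_2'} h_{m_2} h_{m_1} \langle m_1' m_2' | V m_1 m_2 \rangle + \sum_{m,m'} v_{m m'}\, h^\dagger_m h_{m'} + {\rm const}
\end{eqnarray}
At $\nu = \frac{1}{2}$ the one-body term is precisely the piece cancelled by the chemical potential.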
Note that the total electron number $N_e = \sum_m c_m^\dagger c_m$ transforms as
\begin{equation}
C(\sum_m c_m^\dagger c_m ) C^{-1} = N_\phi - \sum_m h_m^\dagger h_m
\end{equation}
($N_\phi$ is the number of flux quanta and hence the degeneracy of the Landau level). Thus as expected the electron filling factor $\nu = \frac{N_e}{N_\phi}$ transforms to $1 - \nu_h$ with $\nu_h$ the hole filling factor.
Note that under the $C$ transformation the empty state $|0 \rangle$ is transformed to the filled Landau level. If a state $|\Psi \rangle$ at $\nu = \frac{1}{2}$ is particle-hole invariant, {\em i.e.} $C|\Psi \rangle = |\Psi\rangle$,
then we can view it either as a state of electrons at half-filling or as the combination of a filled Landau level and the same state of holes at half-filling. This leads to the conclusion\cite{klkgphhlr} that the electrical Hall conductivity in such a state is exactly $\sigma_{xy} = \frac{e^2}{2h}$.
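In sketch form, the Hall conductivities of a state and of its particle-hole conjugate add up to that of the filled Landau level,
\begin{equation}
\sigma_{xy}[\Psi] + \sigma_{xy}[C\Psi] = \frac{e^2}{h},
\end{equation}
so that a $C$-invariant state at $\nu = \frac{1}{2}$ necessarily has $\sigma_{xy} = \frac{e^2}{2h}$.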
The full symmetry of the half-filled Landau level thus is $U(1) \times C$ (the $U(1)$ is the familiar charge-conservation symmetry). As $C$ is anti-unitary, the direct product structure means that the generator of $U(1)$ rotations (the deviation of the physical charge density from half-filling) is odd under $C$.
\section{Particle-hole symmetry and the composite fermi liquid}
\label{sonprop}
It has been appreciated for some time\cite{klkgphhlr,maissamph} that the effective field theory proposed by HLR for the half-filled Landau level is not manifestly particle-hole symmetric, and is perhaps even inconsistent with it. On the other hand numerical calculations performed in the lowest Landau level show that with the projected 2-body Coulomb interaction the Fermi-liquid like state at half-filling preserves particle-hole symmetry (see for instance \cite{rzyhald2000}). It is therefore important to construct a description of the composite Fermi liquid theory which explicitly preserves the particle-hole symmetry. A very interesting proposal for such a theory was made recently by Son\cite{sonphcfl}. The composite fermion was proposed to be a two-component Dirac fermion field $\psi_v$ at a finite non-zero density, and with the effective (Minkowski) Lagrangian:
\begin{equation}
\label{ccfl}
{\cal L} = i\bar{\psi}_v \left(\slashed{\partial} + i \slashed{a}\right) \psi_v - \mu_v \bar{\psi}_v \gamma_0 \psi_v+ \frac{1}{4\pi} \epsilon_{\mu\nu\lambda} A_\mu\partial_\nu a_\lambda \\
\end{equation}
Here $a_\mu$ is a fluctuating internal $U(1)$ gauge field and $A_\mu$ is an external probe gauge field. The $2 \times 2$ $\gamma$ matrices are $\gamma_0 = \sigma_y, \gamma_1 = i\sigma_z, \gamma_2 = -i\sigma_x$. $\mu_v$ is a composite fermion chemical potential that ensures that its density is non-zero. The physical electric current is
\begin{equation}
\label{dualj}
j_\mu = \frac{1}{4\pi} \epsilon_{\mu\nu\lambda} \partial_\nu a_\lambda
\end{equation}
Here the $0$-component is actually the deviation of the full charge density $\rho$ from that appropriate for half-filling the Landau level, {\em i.e.}
\begin{equation}
\label{jorho}
j_0 = \rho - \frac{B}{4\pi}
\end{equation}
Here and henceforth (unless otherwise specified) we will work in units where the electron charge $e = 1$ and $\hbar = 1$. In the presence of long range Coulomb interactions, the above Lagrangian must be supplemented with an additional interaction term $ \int_{\vec x, \vec x'} j_0(\vec x) V(\vec x - \vec x') j_0 (\vec x')$ where $V$ is the Coulomb potential.
The Lagrangian above describes the dynamics of the composite fermions, and their coupling to external probe electromagnetic fields. To obtain the full response
of the lowest Landau level to the electromagnetic field, this Lagrangian must be supplemented by a `background' Chern-Simons term which accounts for the $\sigma_{xy}
= \frac{e^2}{2h}$ demanded by particle-hole symmetry. This background term takes the form
\begin{equation}
\label{lbg}
{\cal L}_{bg} = \frac{1}{8\pi} \epsilon_{\mu\nu\lambda} A_\mu \partial_\nu A_\lambda
\end{equation}
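As a quick check that this background term supplies the required Hall response, vary it with respect to $A_\mu$; in units $e = \hbar = 1$, and up to sign conventions,
\begin{equation}
j^{bg}_\mu = \frac{\delta {\cal L}_{bg}}{\delta A_\mu} = \frac{1}{4\pi} \epsilon_{\mu\nu\lambda} \partial_\nu A_\lambda ,
\end{equation}
which is a Hall current with $\sigma_{xy} = \frac{1}{4\pi} = \frac{1}{2}\frac{e^2}{h}$.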
Note the similarity of Eqn. \ref{dualj} with the usual HLR theory. There are however some important differences between Son's proposal and the HLR theory. Under the original particle-hole symmetry operation $C$, the composite fermion field $\psi_v$ is hypothesized to transform as
\begin{equation}
\label{Cpsiv}
C\psi_v C^{-1} = i\sigma_y \psi_v
\end{equation}
Thus $\psi_v$ goes to itself rather than to its antiparticle under $C$. Further this transformation implies that the two components of $\psi_v$ form a Kramers doublet under $C$ (recall that $C$ is anti-unitary).
With this transformation, the Lagrangian is manifestly invariant under $C$ so long as we choose $a_0 \rightarrow a_0, a_i \rightarrow - a_i$ and let the time $t \rightarrow - t$. The deviation $j_0$ of the physical charge density from half-filling (see Eqn. \ref{dualj}) is then odd under $C$ as required.
These composite fermions are at a non-zero density $\frac{B}{4\pi}$ and fill states up to a Fermi momentum $K_f$. This should be compared with the HLR theory where the prescription for the composite fermion density is just the electron density $\rho$. At half-filling we have $\rho = B/4\pi$ and the two prescriptions agree. However these two
prescriptions are different on going away from half-filling. We will see later (in Appendix \ref{sdhpi}) that this slight difference actually plays a crucial role.
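The composite fermion density itself can be read off from Eqn. \ref{ccfl}: $a_0$ appears linearly, and its equation of motion enforces the constraint (again up to sign conventions)
\begin{equation}
\bar{\psi}_v \gamma_0 \psi_v = \frac{1}{4\pi} \epsilon_{0ij} \partial_i A_j = \frac{B}{4\pi} ,
\end{equation}
so that the composite fermion density is tied to the external magnetic field rather than to the electron density - precisely the difference from the HLR prescription emphasized above.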
Returning to the particle-hole symmetric theory, the ``Diracness'' of the composite fermion is manifested as follows: when a composite fermion at the Fermi surface completes a full circle in momentum space its wave function acquires a Berry phase of $\pi$. This is a ``low-energy'' manifestation of the Dirac structure that does not rely on the specifics of the dispersion far away from the Fermi surface.
Finally notice that unlike in the original HLR theory (but actually similar to subsequent work\cite{read98,avl1} on the related problem of bosons at $\nu = 1$) there is no Chern-Simons term for the internal gauge field $a_\mu$.
If we ignore the gauge field, Eqn. \ref{ccfl} actually describes a single Dirac cone that arises at the surface of $3d$ spin-orbit coupled topological insulators. Interestingly, in this effective theory, $C$ plays the role of time reversal as is clear from Eqn. \ref{Cpsiv}. Thus the proposed particle-hole symmetric composite fermi liquid theory is this single Dirac cone coupled to an emergent $U(1)$ gauge field.
In the sections that follow we will build an understanding of the correctness of Son's proposal through physical arguments and by relating the half-filled Landau level to topological insulator surface states. An alternate recent discussion\cite{maissamph} of particle-hole symmetry in the half-filled Landau level proposes an ``anti-HLR'' state as a particle-hole conjugate of the HLR state. We will however not describe it here.
\section{Physical picture of the particle-hole symmetric composite fermion}
\label{phcflphys}
We now provide a very simple physical picture of these particle-hole symmetric composite fermions by relating them to previous constructions of the composite Fermi liquids. Subsequent to the original HLR theory, through a process of intense reexamination\cite{read94,rsgm97,Pasquier1998,read98,dhleephcf98,sternetal99,gmrsrmp03}, a picture of the composite fermion as a neutral dipolar particle emerged. This is illustrated by considering the composite fermion at a filling $\nu = \frac{p}{2p+1}$ slightly different from $\frac{1}{2}$. Then a fractional quantum Hall state is possible and is described by filling $p$ Landau levels of microscopic composite fermions obtained by the usual attachment of $4\pi$ flux to the electron. At the mean field level the excitations about this state are single microscopic composite fermions but their charge/statistics will be modified by the background quantum Hall effect. The true low energy quasiparticle has fractional charge
$e^* = \frac{e}{2p+1}$. Thus when $\nu$ goes to $\frac{1}{2}$ (corresponding to $p$ going to $\infty$), the low energy quasiparticle might be expected to have $e^* = 0$. Its statistics also reverts back to fermionic when $p \rightarrow \infty$.
Physically, due to the Hall conductivity of $\frac{1}{2}$ (in units of $\frac{e^2}{h}$), the $4\pi$ flux attached to the electron acquires an electric charge of $-e$ which compensates for the electron's charge.
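Schematically, and assuming the standard relation between the charge bound to an adiabatically inserted flux $\Phi$ and the Hall response, the magnitude of the induced charge is $\delta q = \sigma_{xy}\Phi$, so that
\begin{equation}
\delta q = \left(\frac{1}{2}\,\frac{e^2}{h}\right)\left(\frac{2h}{e}\right) = e
\end{equation}
for the $4\pi$ ($= \frac{2h}{e}$) flux attached to the electron, and half of this, $\frac{e}{2}$, for a single $2\pi$ vortex.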
In a lowest Landau level description of the theory, it is appropriate to replace the concept of flux attachment with the related concept of binding vortices to the particles. In such a description Read proposed\cite{read94}, based on a wave function for the HLR state, that the vortex is displaced from the electron by an amount perpendicular to the momentum of the composite fermion. The key idea is that when projected to the lowest Landau level a phase factor like $e^{i\vec k \cdot \vec r}$ generates a translation of the correlation hole (the vortex) bound to the electron by an amount proportional to and perpendicular to the momentum. Let us briefly describe this logic.
The standard flux attachment procedure leads naturally to a wave function for the composite Fermi liquid:
\begin{equation}
\psi(z_1, \ldots, z_N) = P_{LLL}\, \det\!\left(e^{i\vec k_i \cdot \vec r_j}\right) \prod_{i < j} (z_i - z_j)^2
\end{equation}
Here $z_i$ are the complex coordinates of the $i$th electron and $\vec r_i$ is the same coordinate in vector form. We have suppressed the usual Gaussian factors.
This is known as the Rezayi-Read wave function\cite{rzyrd94}. The factor $(z_i - z_j)^2$ has the effect of attaching a $4\pi$ vortex to each electron to convert it into a composite fermion. The Slater determinant then builds a Fermi sea of the composite fermions. As the plane wave factors do not stay within the lowest Landau level it is necessary to project back to it through the operator $P_{LLL}$. Now write
\begin{equation}
e^{i\vec k \cdot \vec r} = e^{\frac{i}{2} \left( \bar{k} z + k \bar{z} \right)}
\end{equation}
Here $k = k_x + ik_y$ and $\bar{k} = k_x - ik_y$.
In the lowest Landau level we should replace ${\bar z} \rightarrow 2l_B^2 \frac{\partial}{\partial z}$. This leads to the expectation that in the wave function the terms involving $\bar{z}$ will shift the vortex away from the particle in the direction perpendicular to $\vec k$ and by an amount proportional to it. This line of thought leads to a dipolar picture of the composite fermion as shown in Fig. \ref{olddip}.
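To make the displacement explicit in a minimal sketch: with this replacement the plane-wave factor acts as a translation operator on the analytic part of the wave function,
\begin{equation}
e^{\frac{i}{2} k \bar{z}} \rightarrow e^{i k l_B^2 \frac{\partial}{\partial z}}\,, \qquad e^{i k l_B^2 \frac{\partial}{\partial z}} f(z) = f\!\left(z + i k l_B^2\right)
\end{equation}
so a zero (vortex) of the analytic factor originally at $w$ is moved to $w - i k l_B^2$, {\em i.e.} displaced by $|k|\, l_B^2$ at right angles to $\vec k$ (up to the operator ordering subtleties of the full projection).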
Though this wave-function-based thinking has been criticized (see e.g. Ref. \onlinecite{gmrsrmp03}), the final dipolar description gained wide acceptance in the late 90s through more sophisticated kinds of calculations\cite{rsgm97,Pasquier1998,read98,dhleephcf98,sternetal99}.
The resulting picture was that the low energy composite fermions (as opposed to the microscopic composite fermions) were electrically neutral dipoles with a dipole moment perpendicular to their momentum (see Fig. \ref{olddip}), and it is these low energy composite fermions that live near the Fermi surface. These neutral dipolar composite fermions continue to couple to a $U(1)$ gauge field but without a Chern-Simons term. The flux of this gauge field however is the physical electrical 3-current and hence couples directly to the external probe gauge field.
Particle-hole symmetry was not addressed in these prior works (except by Dung-Hai Lee's work\cite{dhleephcf98} whose exact relation with the present circle of ideas is not clear). Here we show how a modification of this picture captures the essential features of the particle-hole symmetric composite fermion.
\begin{figure}
\begin{center}
\includegraphics[width=1.9in]{olddip.pdf}
\end{center}
\caption{ The standard picture of the composite fermion at $\nu = \frac{1}{2}$ regards it as an electron (of charge $e$) bound to a $4\pi$ vortex. The vortex carries charge $-e$ and is displaced from the electron in the direction perpendicular to its momentum. The composite fermion is thus viewed as a neutral dipolar particle. }
\label{olddip}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.9in]{newdip.pdf}
\end{center}
\caption{ The new picture of the particle-hole symmetric composite fermion at $\nu = \frac{1}{2}$. One end of the dipole has a $2\pi$ vortex bound to charge $\frac{e}{2}$. The other end has a charge $- \frac{e}{2}$ also bound to a $2\pi$ vortex. The displacement between the two is in the direction perpendicular to their center-of-mass momentum. The positively charged end can be viewed as a $2\pi$ vortex located exactly on the electron. Thus compared to the picture in Fig. \ref{olddip} only one $2\pi$ vortex is displaced from the electron. }
\label{newdip}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.9in]{dippiphs.pdf}
\end{center}
\caption{When one end of the dipole of Fig. \ref{newdip} is rotated in a closed loop around the other end, there is a phase of $\pi$.}
\label{dippiphs}
\end{figure}
Let us begin with a discussion of wave functions for the half-filled Landau level which was the initial motivation for the dipolar picture.
We now show how this line of thinking leads actually to a different picture which naturally enables a particle-hole symmetric description, and provides a physical basis to Son's proposal.
It is well known that fermion wave functions in the lowest Landau level must have the structure
\begin{equation}
\psi(z_1, z_2, \ldots, z_N) = \prod_{i < j} (z_i - z_j)\, f(z_1, \ldots, z_N)
\end{equation}
where $f$ is a symmetric polynomial. The $z_i - z_j$ structure is a zero of the wave function that is demanded by Pauli exclusion. Thus whatever state we build in the lowest Landau level, Pauli exclusion guarantees that there is one $2\pi$ vortex that is sitting exactly on top of the electron.
At $\nu = \frac{1}{2}$, the symmetric function $f$ can be taken to be the wave function of bosons at $\nu = 1$ which can also form a composite Fermi liquid state. For bosons at $\nu = 1$ the composite Fermi liquid theory is, in fact, better established theoretically\cite{Pasquier1998,read98,avl1} than for fermions at $\nu = \frac{1}{2}$. This bosonic composite liquid is obtained by binding a $2\pi$ vortex to the particle. Wave function or other arguments then show that this vortex is indeed displaced from the particle in the manner described above.
We thus expect the following picture for the structure of the composite fermion at $\nu = \frac{1}{2}$. One $2\pi$ vortex sits exactly on the electron while the other is displaced from it (in the direction perpendicular to the composite fermion momentum). A single vortex at $\nu = \frac{1}{2}$ will have charge $-1/2$. Thus the electron bound with the single vortex will have charge $+1/2$. We thus obtain a dipole with two $2\pi$ vortices at either end, one with electric charge $+1/2$ and the other with electric charge $-1/2$ (see Fig. \ref{newdip}).
This dipole picture is very close to the ones developed before. It however makes clear how particle-hole symmetry operates and captures the essential features of Son's proposed description. To see this cleanly consider the limit in which the two ends of the dipole are separated by a distance much larger than the ``size'' of each vortex. Then the self and mutual statistics of the two ends of the dipole are well defined. One end carries a $2\pi$ vortex and an associated electric charge 1/2, and hence is a semion. The other end of the dipole is an anti-semion as it carries a $2\pi$ vortex but now with opposite electric charge $-1/2$. They clearly are also mutual semions (see Fig. \ref{dippiphs}), {\em i.e.}, when one of these goes around the other there is a phase of $\pi$. In the absence of this mutual statistics, the dipole - as a bound state of a semion and an anti-semion - will be a boson. However the mutual statistics converts this bound state into a fermion, exactly consistent with direct expectations since we are binding two vortices to the electron.
Let us now turn to the action of $C$. Note first that as the electric charge is odd under $C$, while the vorticity is even, the effect of $C$ is to reverse the direction of the relative coordinate ({\em i.e}, the dipole moment).
This should be contrasted with the standard picture where the dipole moment is reversed under $C$ but the $4\pi$ vortex is unaffected so that the particle-hole transformed object is not simply related to the original one.
We can now understand the Kramers doublet structure (under $C$) directly from this new picture of the particle-hole symmetric composite fermion. Let us fix one end of the dipole to be at the origin, and understand the dynamics of the relative coordinate. Due to the phase $\pi$ when the relative coordinate rotates by $2\pi$, the orbital angular momentum is quantized to be a half-integer. If we restrict to the low energy doublet with orbital angular momentum $\pm \frac{1}{2}$, the orientation of the relative coordinate becomes the $x$ and $y$ components of a spin operator $\vec S$ that acts on this doublet. The $z$-component is then the angular momentum $\pm \frac{1}{2}$ of the two states in the doublet. As both this angular momentum and the relative coordinate are odd under $C$, we have $C \vec S C^{-1} = - \vec S$. It follows immediately that this dipole is a Kramers doublet.
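To spell out the quantization step: writing the dependence of the relative wave function on the angular position of the dipole as $\chi(\theta) \sim e^{i L_z \theta}$, the mutual statistics requires
\begin{equation}
\chi(\theta + 2\pi) = e^{i\pi}\, \chi(\theta) \;\; \Rightarrow \;\; e^{2\pi i L_z} = -1 \;\; \Rightarrow \;\; L_z = n + \frac{1}{2}
\end{equation}
with $n$ an integer, which is the half-integer quantization used above.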
Finally it is easy to argue that these are Dirac fermions. Though at zero momentum the two states in the doublet are degenerate, at any non-zero momentum, there will be a dipole moment as explained above.
In the new theory the dipole moment is precisely the $x, y$ components of the ``spin'' of the Kramers doublet - so the locking of the dipole moment to the direction perpendicular to the momentum is precisely the spin-momentum locking of a Dirac fermion. In particular if the momentum is rotated by $2\pi$ the dipole moment rotates by $2\pi$ and the wave function has a phase of $\pi$.
These arguments are spelt out in detail in Appendix \ref{drcdip}. There we solve a simple problem of two quantum particles of opposite charge moving in a uniform magnetic field. The two particles are taken to be mutual semions, {\em i.e.}, when one goes around the other there is a phase of $\pi$. Further we impose an anti-unitary $C$ symmetry that interchanges the coordinates of the two particles.
The solution shows the emergence of both the Kramers structure as well as the spin-momentum locking of the dipolar bound state of these two particles.
If we form a Fermi surface of these composite fermions, the low energy state at any momentum point $\vec K$ will have a unique direction of ``spin'' polarization perpendicular to $\vec K$. Its Kramers partner is the state at $- \vec K$ which has exactly the opposite ``spin'' polarization. When the composite fermion goes around its Fermi surface the rotation of the momentum by $2\pi$ thus forces a Berry phase of $\pi$.
We can see that this `new' dipole is the natural fate of the `old' dipolar picture when $\nu = 1/2$ and particle-hole symmetry is taken into account.
Thus we now have a very simple physical picture of the structure of the particle-hole symmetric composite fermion.
This physical picture also establishes a continuity between the theory of the particle-hole symmetric composite fermi liquid with the earlier descriptions.
We turn next to a different understanding of the particle-hole symmetric half-filled Landau level which yields powerful insights.
\section{The half-filled Landau level as a topological insulator surface state}
\label{hlltisrf}
It is important to emphasize that the $C$ symmetry at $\nu = \frac{1}{2}$ is not an exact ultra-violet (UV) symmetry of the theory. Further it does not act locally in the microscopic Hilbert space. It is an emergent non-local symmetry of just the lowest Landau level at half-filling with the restriction to a two-body interaction (or more generally to $2n$-body terms). As a matter of principle an exact projection from higher Landau levels will also have three-body terms, etc., which will break the $C$ symmetry. A useful approximation, in the limit of weak Landau level mixing, is to ask about the ground state in the lowest Landau level with exact $C$ symmetry, and then understand the $C$-breaking effects as a perturbation.
Can we find a UV completion of the half-filled Landau level that retains $C$ as an exact microscopic local symmetry? We turn next to this question.
Consider fermions in $3d$ with a symmetry group $U(1) \times C$. For now we define $C$ acting on these fermions to be an anti-unitary operator which is such that the generator of the $U(1)$ symmetry is odd under $C$.
As an example consider a lattice tight binding Hamiltonian
\begin{eqnarray}
H_{3d} & = & \sum_{ij}\sum_s t_{ij} c^\dagger_{is} c_{js} + h.c. \nonumber \\
& & + \sum_{ij}\Delta_{ij} \left(c^\dagger_{i\uparrow} c^\dagger_{j\downarrow} + c^\dagger_{i\downarrow} c^\dagger_{j\uparrow} \right) + h.c. \nonumber
\end{eqnarray}
Here $i, j$ are sites of a $3d$ lattice, $s = \uparrow, \downarrow$ is the electron spin. The triplet Cooper pairing term breaks charge conservation and $SU(2)$ spin rotations but leaves a $U(1)$ subgroup of rotations generated by $S^z$ invariant. So long as the hopping and pairing parameters are real the Hamiltonian is also invariant under an anti-unitary time reversal operation which we denote $C$ that acts locally and takes $c_{is} \rightarrow i\left(\sigma_y \right)_{ss'} c_{is'}$.
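As a quick check that the $U(1)$ generator is indeed odd under this operation, note that the bilinears transform as $c^\dagger_{i\uparrow} c_{i\uparrow} \leftrightarrow c^\dagger_{i\downarrow} c_{i\downarrow}$, so that
\begin{equation}
C S^z C^{-1} = \frac{1}{2}\sum_i \left(c^\dagger_{i\downarrow} c_{i\downarrow} - c^\dagger_{i\uparrow} c_{i\uparrow}\right) = -S^z
\end{equation}
as required.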
Consider gapped free fermion Hamiltonians with this symmetry\footnote{This symmetry class is denoted AIII in the topological insulator literature.}. The progress on topological insulators/superconductors shows that in $3d$ such systems are classified\cite{schnyder08,kitaev08} by the group $Z$ corresponding to an integer topological invariant which we label $n$. Correspondingly at the two dimensional interface with the vacuum there is a gapless surface state with $n$ Dirac cones with the Lagrangian:
\begin{equation}
\label{surfdirac}
{\cal L} = \sum_{\alpha = 1}^n \bar{\psi}_\alpha \left(-i\slashed{\partial }\right) \psi_\alpha
\end{equation}
with the following symmetry action
\begin{eqnarray}
U(\lambda) \psi_\alpha U^{-1}(\lambda) & = & e^{i\lambda} \psi_\alpha \\
C\psi_\alpha C^{-1} & = & i\sigma_y \psi_\alpha^\dagger
\end{eqnarray}
The fermions $\psi_\alpha$ are each $2$-component and the corresponding $\gamma$ matrices are $\gamma_0 = \sigma_y, \gamma_1 = \sigma_z, \gamma_2 = \sigma_x$. The fermion density $\psi_\alpha^\dagger \psi_\alpha$ is odd under $C$. Thus the symmetry action on the surface is $U(1) \times C$ as required. Further the oddness under $C$ implies that we cannot add a chemical potential term so that the Dirac fermions are necessarily at neutrality.
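To see the oddness of the density explicitly, note that the transformation trades $\psi_\alpha$ for $\psi_\alpha^\dagger$; using $(i\sigma_y)^T (i\sigma_y) = 1$ together with the anticommutation relations one finds (with the spinor index summed)
\begin{equation}
C\, \psi_\alpha^\dagger \psi_\alpha\, C^{-1} = \psi_{\alpha} \psi_{\alpha}^\dagger = {\rm const.} - \psi_\alpha^\dagger \psi_\alpha
\end{equation}
so the density measured from neutrality changes sign under $C$, which is why a chemical potential term is forbidden.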
Recent work\cite{3dfSPT2,maxvortex} shows that with interactions this $Z$ classification is reduced to $Z_8$ (so that only $n = 0, 1, \ldots, 7$ are distinct phases)\footnote{There is an additional Symmetry Protected Topological phase which cannot be described within free fermion theory so that the full classification\cite{3dfSPT2} is $Z_8 \times Z_2$.}. We will henceforth focus on the $n = 1$ state which is stable to interactions.
We will take the liberty of calling the generator of the global $U(1)$ symmetry the `charge' irrespective of its microscopic origins in an electron model. This charge is odd under the anti-unitary $C$ operation. We will further take the liberty of occasionally referring to $C$ as ``time reversal''. When the results are applied to the half-filled Landau level discussed in the previous section the $C$ operation will be interpreted physically precisely as the anti-unitary particle-hole symmetry transformation (hence the same symbol as in the previous section). In that context $C$ should of course not be confused with physical time reversal which is not a symmetry of the half-filled Landau level.
Consider coupling the surface theory, at $n = 1$, to external static ``electromagnetic'' fields that couple to the $U(1)$ charge and current densities. As the charge is odd under $C$ the current is even. Then electric fields are $C$-odd while magnetic fields are $C$-even. We can thus perturb the surface theory by introducing an external magnetic field while preserving the $U(1) \times C$ symmetry. We will work in a limit in which we assume that the continuum approximation (Eqn. \ref{surfdirac}) is legitimate. The resulting Lagrangian takes the form
\begin{equation}
{\cal L} = \bar{\psi} \left(-i\slashed{\partial } + \slashed{A}\right) \psi + \ldots
\end{equation}
with $\vec \nabla \times \vec A = B \hat{z}$ (taking the surface to lie in the $xy$ plane).
The $\ldots$ represent four-fermion and other interaction terms consistent with symmetry. In the absence of these interactions the spectrum has the famous Dirac Landau levels with energy $E_m = \pm \sqrt{2mB}$. For non-zero $m$ each level comes with a partner of opposite energy. Most importantly there is a zero energy Landau level that has no partner. Now the $C$ symmetry implies that this zeroth Landau level must be half-filled.
At low energies it is appropriate to project to the zeroth Landau level. We thus end up with a half-filled Landau level. As usual in the non-interacting limit this is highly degenerate and we must include interactions to resolve this degeneracy. Thus the surface of this $3+1$-d topological insulator maps exactly to the classic problem of the half-filled Landau level. Note however that the $U(1) \times C$ symmetry of the full TI maps precisely to the expected $U(1) \times C$ symmetry of the half-filled Landau level.
Thus we have obtained a UV completion that retains $U(1) \times C$ as an exact microscopic local symmetry. The price we pay is that it is the boundary of a TI that lives in one higher dimension. {\em Further our ability to obtain it this way implies that there is no strictly $2d$ UV completion of the half-filled Landau level that has $U(1) \times C$ as an exact local symmetry.}
It follows that to understand the half-filled Landau level we must study strongly correlated surface states of the $n = 1$ $3+1$-dimensional topological insulator with $U(1) \times C$ symmetry.
\section{Correlated surface states of 3d topological insulators}
\label{cfsti}
Let us consider quite generally the surface of a three dimensional topological insulator. To keep continuity with the previous section we will phrase the discussion in terms of the $n = 1$ $3+1$-D topological insulator with $U(1) \times C$ symmetry. We also initially specialize to $B = 0$. Later we will turn on a non-zero $B$. The simplest surface state - and the only one realized within band theory - is the free Dirac cone described by the Lagrangian in Eqn. \ref{surfdirac} with $n = 1$. With interactions though other states are possible\cite{avts12,fSTO1,fSTO2,fSTO3,fSTO4,neuperttisrf2015,mrosscdl15}.
The surface may spontaneously break the defining symmetries. For instance if $C$ is broken, then a Dirac mass is allowed. This leads to a quantized Hall conductance which is shifted from an integer by $1/2$. Thus if we consider a domain wall between the two possible orientations of the $C$-breaking order parameter, it will support a chiral current carrying edge mode. Crucial to the discussion that follows will be a different surface state that preserves $C$ but spontaneously breaks the global $U(1)$ symmetry - a surface `superconductor'. Finally a gapped surface that preserves the full $U(1) \times C$ symmetry is also possible. The price to pay is that such a surface state has what is known as `intrinsic topological order' with gapped `anyon' excitations carrying fractional charge. For the $n = 1$ topological insulator of interest such a state was described in Refs. \onlinecite{fidkowski3d,3dfSPT2,maxvortex}, and shown to be non-abelian. We will return to this state later but first we discuss the surface superconductor in greater detail.
We will restrict attention to gapped superconducting ground states. As is usual in any superconductor the excitations are gapped fermionic Bogoliubov quasiparticles, and vortices which quantize external magnetic flux in units of $\frac{mh}{2e} \equiv m\pi$. In addition in the absence of long range Coulomb interactions, there is a gapless zero sound (Goldstone) mode which leads to a logarithmic interaction between the vortices. We will initially ignore this zero sound mode - later we will be able to reinstate it in a straightforward manner.
This superconducting state preserves the $C$ symmetry, and we can ask about the $C$ transformation properties of the various excitations. As the $U(1)$ charge is odd under $C$, the phase of the Cooper pair is even under $C$. It follows that the vorticity is even under $C$. The structure of the vortices has many similarities to those in the familiar Fu-Kane superconductor\cite{fuknsc} obtained at the surface of the usual topological insulator. In particular $m\pi$ vortices $v_m$ with $m$ odd trap Majorana zero modes. As we are imagining turning off the coupling to the zero sound mode (for instance by weakly coupling the $U(1)$ currents to a gauge field), the vortices will have finite energy and we can discuss their statistics. Due to the Majorana zero modes $v_m$ with $m$ odd will be non-abelian.
What about $v_m$ with $m$ even? Below we will argue that there are two vortices at $m = 2$ denoted $v_{2\pm}$, one of which is a semion and the other is an antisemion. These two differ by binding a neutralized Bogoliubov quasiparticle. They also go into each other under the $C$ operation. Most crucially from these we can build an $m = 4$ vortex that goes to itself under the $C$ operation by binding together $v_{2+}$ and $v_{2-}$. Remarkably this bound state which we dub $v_4$ is a fermion that is a `Kramers doublet' under the anti-unitary $C$ operation:
\begin{equation}
C^2 v_4 C^{-2} = -v_4
\end{equation}
We can also construct other $m = 4$ vortices by binding the neutralized Bogoliubov quasiparticle to $v_4$ (they can be thought of as $v_{2+}^2 \sim v_{2-}^2$). Finally the strength-$8$ vortex is a boson that transforms trivially under $C$.
We will justify these results in the following section. But for now we pause to describe our strategy for understanding correlated surface states (including when a non-zero $B$-field is turned on). We start from the surface superconductor and ask how the broken $U(1)$ symmetry may be restored. One option is that the superconducting order is destroyed by losing the pairing gap. At $B = 0$ this leads to the free Dirac cone, and at $B > 0$ to the half-filled Landau level whose fate will be decided by interactions. Alternately we may destroy the superconducting order through phase fluctuations, {\em i.e.} by proliferating vortices. To obtain a symmetry preserving state we must proliferate vortices that are either fermions or trivial bosons. The former leads to gapless surface states. In particular when $B > 0$ it leads to a quantum vortex liquid of $v_4$ vortices. The resulting state is remarkably similar to the composite Fermi liquid expected in the half-filled Landau level with the additional virtue that it is manifestly $C$ symmetric. We depict in Fig. \ref{tisrfpdccl} a schematic phase diagram of the surface of this TI illustrating some of the various possibilities.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{tisrfpdccl.pdf}
\end{center}
\caption{Schematic phase diagram for the surface of the correlated $n = 1$ topological insulator with $U(1) \times C$ symmetry. $g$ is a parameter that controls the relative strength of Cooper pairing versus the phase stiffness in the superconductor. At $B = 0$ with increasing $g$ pairing is lost leading to the Dirac metal obtained within band theory. At small $g$ superconductivity is destroyed through phase fluctuations leading to the `dual' Dirac metal. Many other phases are also of course possible. Particularly interesting is a symmetry preserving surface topological order. At non-zero $B$, the composite fermi liquid emerges as one of the possible phases.}
\label{tisrfpdccl}
\end{figure}
\section{Topological insulators and time reversal symmetric $U(1)$ quantum spin liquids}
\label{titsymmu1}
How should we understand the claims made about the structure of the even strength vortices in the surface superconductor? One approach is to work directly with the surface theory and examine the structure of the vortices in greater detail. Within the Bogoliubov-de Gennes mean field theory, strength-$m\pi$ vortices will have $m$ Majorana zero modes. Knowing the action of $C$-symmetry, we can then study the fate of these zero modes in the presence of interactions to deduce the properties of the vortices. Here however we will describe a different and more insightful approach which enables us to deduce the properties of the even strength vortices.
First let us ask how we might create such vortices in the first place. As usual a strength $\frac{mh}{2e} \equiv m\pi$ vortex with even $m$ may be created in a superconductor by threading in external magnetic flux of $\frac{mh}{2e}$ through a point. At the surface of the three dimensional bulk this process of flux insertion has a very nice and useful interpretation. We can think of it as throwing a magnetic monopole from the outside vacuum into the sample of the topological insulator as depicted in Fig. \ref{mnpltnn}. Recall that by Dirac quantization the magnetic monopole has strength $\frac{mh}{2e}$ with $m$ even. When such a monopole passes through the superconducting surface to enter the bulk it leaves behind precisely a $m\pi$ vortex with $m$ even.
Thus the properties of the surface vortices can be inferred from the properties of the bulk magnetic monopoles\cite{statwitt,fSTO1,3dfSPT2} or vice versa\cite{hmodl,3dfSPT2,maxvortex}. In the outside vacuum the monopole is a trivial boson. If inside the topological insulator sample the monopole has some non-trivial properties then the vortex left behind at the surface through a monopole tunneling event will also inherit the same non-trivial properties. We emphasize that at this stage the bulk monopole is a `probe' of the system and should not be viewed as a dynamical excitation.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{mnpltnn.pdf}
\end{center}
\caption{When a strength $\frac{mh}{2e}$ monopole from the outside vacuum tunnels into the bulk insulator through a surface superconductor, it leaves behind a strength $\frac{mh}{2e}$ vortex at the surface. Understanding the properties of the monopole in the bulk constrains the properties of the corresponding surface vortex.}
\label{mnpltnn}
\end{figure}
To discuss these monopoles somewhat precisely let us imagine that we couple the microscopic fermions that form the bulk insulator to a dynamical compact $U(1)$ gauge field in its deconfined phase. The microscopic fermions become the elementary electric charges. We are interested in the fate of the magnetic monopoles.
Consider a bit more carefully the interpretation of the theory obtained by gauging an insulator formed out of the fermions. Note that the fermions themselves are not local degrees of freedom in such a theory. To create a fermion we also need to create the electric field lines that emanate from it and go out to infinity. More formally, a single electric charge creation operator is by itself not gauge invariant. Gauge invariant local operators are bosonic combinations made out of bilinears (or other even numbers) of the fermions. It follows that after gauging, the theory should be regarded as living in the Hilbert space of a spin or boson system.
In the last three decades it has been appreciated\cite{wenbook} that systems of interacting quantum spins/bosons can settle into quantum spin liquid phases characterized by emergent gauge fields and associated matter fields with fractional quantum numbers. In three dimensional systems it has long been recognized\cite{wenbook,bosfrc3d,wen03,3ddmr,hfb04,lesikts05,kdybk} that quantum spin liquid phases exist where there is an emergent `photon' excitation which is gapless with a linear dispersion. In addition there will be particle-like excitations that couple to the photon as electric/magnetic charges. These states of matter are called $U(1)$ quantum spin liquids to emphasize that their low energy physics is described by an emergent deconfined $U(1)$ gauge theory.
The phase obtained by gauging an insulator of fermions should thus be viewed as a particular kind of $U(1)$ quantum spin liquid. Since the fermions are gapped the electric charges in this spin liquid are gapped. Further in the gauged theory the global $C$ symmetry is still present. Thus we have an example of a $U(1)$ quantum spin liquid `enriched' by the presence of a global anti-unitary $C$ symmetry. As noted above we could equally well simply call $C$ as ``time-reversal". Thus the discussion that follows can be usefully understood as being about time reversal symmetric $U(1)$ quantum spin liquids in three space dimensions.
Consider obtaining an effective low energy Lagrangian for the photon by integrating out the matter fields in such a spin liquid. Quite generally the $C$ symmetry implies that this takes the form
\begin{equation}
{\cal L}_{eff} = {\cal L}_{Max} + {\cal L}_\theta
\end{equation}
The first term is the usual Maxwell term and the second is the `theta' term:
\begin{equation}
{\cal L}_\theta = \frac{\theta}{4\pi^2} \vec{E}\cdot\vec{B}
\end{equation}
where $\vec{E}$ and $\vec{B}$ are the electric and magnetic fields respectively.
As is well known (see, in the TI context, the reviews in Refs. \onlinecite{HsnMr,QiScz}) the $C$ symmetry restricts the allowed values of $\theta$ to integer multiples of $\pi$. When the electrically charged fermions form the $n = 1$ topological band discussed above it is easy to argue that $\theta = \pi$. This follows for instance from the shift by $1/2$ of the surface `integer' quantum Hall effect obtained when time reversal is broken\cite{HsnMr,QiScz}. Let us now understand the implications of this for the monopole structure which is our primary interest in this section.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{cmlat2.pdf}
\end{center}
\caption{ Charge-monopole lattice at $\theta = \pi$. }
\label{cmlat2}
\end{figure}
In the presence of a $\theta$ term, monopoles with magnetic charge $q_m = 1$ (we define the magnetic flux to be $\frac{hq_m}{e} \equiv 2\pi q_m$) carry electric charge $\pm \frac{1}{2}$ through the famous Witten effect\cite{wittendyon}.
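Schematically, assuming the standard form of the Witten effect, the allowed electric charges of a monopole with magnetic charge $q_m$ are
\begin{equation}
q_e = n - \frac{\theta}{2\pi}\, q_m
\end{equation}
with $n$ an integer, so that at $\theta = \pi$ a $q_m = 1$ monopole carries half-odd-integer charge, with $\pm \frac{1}{2}$ the minimal values.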
It will also be necessary to consider higher strength monopoles. To that end consider generally the lattice of allowed electric and magnetic charges in this $U(1)$ gauge theory. We will call this the charge-monopole lattice. It takes the form shown in Fig. \ref{cmlat2}.
Let us denote by $(q_e, q_m)$ the electric and magnetic charges of the various particles, and by $d_{(q_e, q_m)}$ the corresponding destruction operator.
We have chosen units in which the elementary `pure' electric charge is $(1,0)$. This particle is a fermion. The elementary strength-$1$ monopoles are then $(\pm \frac{1}{2}, 1)$ particles with Bose statistics. The $(1,2)$ dyons are clearly also bosons as they are obtained by binding two $(\frac{1}{2}, 1)$ dyons. However the electrically neutral $(0,2)$ particle is a fermion. It can be obtained by removing an elementary fermionic electric charge (the $(1,0)$ particle) from the $(1,2)$ dyon.
It is actually extremely useful to construct the $(0,2)$ dyon differently as the bound state of the $(\frac{1}{2}, 1)$ and $(-\frac{1}{2}, 1)$ dyons. These two dyons see each other as mutual monopoles. Suppose one of these dyons, say the $(-\frac{1}{2}, 1)$, is sitting at some point we define to be the origin. When the other dyon, the $(\frac{1}{2}, 1)$, traces out a loop it picks up a phase equal to half the solid angle subtended by the loop at the origin. Binding two such bosonic particles produces a fermion\cite{asgdyon}.
Consider now the action of the $C$ symmetry. As the electric charge is $C$-odd the magnetic charge must be $C$-even. It follows that $C$ interchanges the $(\pm \frac{1}{2}, 1)$ dyons. Thus in their bound state the relative coordinate is odd under $C$. Now the angular momentum stored in the electromagnetic field of this bound state is readily calculated to be $\frac{1}{2}$, {\em i.e.} the bound state behaves as a ``spin-1/2'' particle as expected from the Fermi statistics. Further in this spin-$1/2$ Hilbert space the unit vector along the relative coordinate becomes precisely the spin operator. As this is $C$-odd the two degenerate states of this spin-$1/2$ form a Kramers doublet. More details are in Appendix \ref{blkbd}. Specifically
\begin{equation}
C^2 d_{(0,2)} C^{-2} = - d_{(0,2)}
\end{equation}
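The angular momentum quoted above also follows from the standard expression for the electromagnetic field angular momentum of a dyon pair: for dyons $(q_{e1}, q_{m1})$ and $(q_{e2}, q_{m2})$ the component along the line joining them is $\frac{1}{2}\left(q_{e1} q_{m2} - q_{e2} q_{m1}\right)$ in the present units, giving
\begin{equation}
L = \frac{1}{2}\left[\frac{1}{2}\cdot 1 - \left(-\frac{1}{2}\right)\cdot 1\right] = \frac{1}{2}
\end{equation}
for the $(\frac{1}{2}, 1)$ and $(-\frac{1}{2}, 1)$ pair, consistent with the Fermi statistics of the $(0,2)$ bound state.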
The structure of other dyons can be easily obtained from these few basic ones. We will not need them here.
In passing we note the strong similarity between this discussion and that in Section \ref{phcflphys} (and Appendix \ref{drcdip}) where we argued for the Kramers structure of the dipole in Fig. \ref{newdip}. This similarity is not coincidental: as we will see there is a deep connection between the bulk $(0,2)$ dyon and the surface composite fermion.
Armed with this understanding let us discuss the vortices in the surface superconductor. Consider first the $2\pi$ vortices. These may be understood as points of penetration at the surface of $2\pi$ magnetic flux lines that extend into the bulk. Now the $\theta = \pi$ term in the bulk implies that when two closed bulk $2\pi$ flux lines link there is a phase of $(-1)$.
This linking phase ensures that when a single $2\pi$ flux line is cut open to produce a strength-$1$ monopole it costs infinite energy unless it binds to $\pm \frac{1}{2}$ electric charge. The binding to the electric charge removes the linking phase ambiguity of an open flux tube and enables the resulting $(\pm \frac{1}{2}, 1)$ dyon to have finite energy, exactly consistent with the Witten effect.
We can now infer the statistics of the $2\pi$ vortices at the surface. When one such vortex is taken around another, the change in the flux line configuration can be deformed to an extra pair of linked flux lines in the bulk. Thus when a $2\pi$ vortex is taken around another there is a phase of $\pi$. Note that corresponding to the two bulk dyons $(\pm \frac{1}{2}, 1)$ we will have two surface $2\pi$ vortices $v_{2\pm}$. The $\pi$ phase is picked up when any of these $2\pi$ vortices goes around the other. This implies that these $v_{2+}$ and $v_{2-}$ are mutual semions, and that their self-statistics is either semion or anti-semion. Further since in the bulk the two dyons are interchanged by $C$, the same will be true for $v_{2 \pm}$ at the surface. It follows that one of them ($v_{2+}$) must be a semion and the other $v_{2-}$ an anti-semion.
Now let's discuss $4\pi$ vortices. When a strength $q_m = 2 $ monopole tunnels through the surface from the vacuum into the bulk it leaves behind a $4\pi$ vortex. We have already seen that in the bulk the $(0,2)$ monopole is a fermion that is Kramers doublet under $C$. It follows that at the surface there is a $4\pi$ vortex - which we dub $v_4$ - which is a Kramers doublet (under $C$) fermion.
Thus thinking about the bulk gives us a simple understanding of the claims made in the previous section about the surface vortices. The $v_4$ vortex will play a crucial role in the discussion that follows.
Further understanding of the surface superconductor is provided by the considerations of the next section.
\section{Bulk duality of the gauged topological insulator}
\label{bdgti}
We now argue that the $U(1)$ quantum spin liquid obtained by gauging the $n = 1$ $U(1) \times C$ topological insulator has a remarkable dual description\cite{tsymmu1,dualdrcmaxav}.
First of all we know that the charge-monopole lattice has the structure shown in Fig. \ref{cmlat2}.
The most fundamental particles in this lattice are the $( \frac{1}{2}, \pm 1)$ dyons. All other particles can be obtained as composites of these. Let us first discuss their statistics. As they are interchanged under $C$, they are required to have the same statistics, {\em i.e.} they are both bosons or both fermions. Further we already observed that the $(\frac{1}{2}, 1)$ and the $(\frac{1}{2}, -1)$ dyons are relative monopoles, {\em i.e.} each one sees the other the way an electric charge sees a monopole. If these dyons were both fermions we would have a realization of an ``all-fermion'' $U(1)$ gauge theory in a strictly $3+1$ dimensional system. However it has been argued in Ref. \onlinecite{3dfSPT} (see also Ref. \onlinecite{kmcgsw14}) that such a state cannot exist. Therefore we conclude that both these dyons must be bosons.
We have already also argued that the bound state - the $(0,2)$ particle - of these two dyons is a Kramers doublet fermion. Now consider the pure electric charge - the $(1,0)$ particle - obtained by binding
$(\frac{1}{2}, 1)$ and $(\frac{1}{2}, -1)$.
These are also relative monopoles and hence their bound state is a fermion. Now $C$ does not interchange these two dyons and hence the argument above for the Kramers structure of the $(0,2)$ particle does not apply.
Earlier we obtained this phase by starting with fermionic electric charges forming the $n = 1$ topological band and gauging it. The present discussion shows that Fermi statistics of the electric charge is {\em necessary} to realize this charge-monopole lattice.
We thus see that the structure of both the elementary electric charge and the elementary magnetic charge is uniquely determined for this charge-monopole lattice. In addition the statistics and symmetry properties of the elementary dyons are also fixed. Thus there is a unique possibility for this charge-monopole lattice.
Consider now this charge-monopole lattice from the point of view of the $(0,2)$ Kramers doublet fermion. This is the elementary pure magnetic charge in this spin liquid. Dirac quantization demands that the dual electric charge be quantized in units of half-integers. In this charge-monopole lattice the elementary electric charge with $q_e = \frac{1}{2}$ also necessarily has magnetic charge $q_m = 1$ which is exactly half that of the elementary pure magnetic charge. Thus as seen by the $(0,2)$ Kramers fermion there is also a dual Witten effect. This implies that this $(0,2)$ fermion itself is in a topological insulator phase. As the magnetic charge is $C$ even this topological insulator is the same as the conventional topological insulator in spin-orbit coupled electronic insulators in three dimensions\footnote{These arguments show the uniqueness of the bulk excitation structure. There can still potentially be different phases distinguished by their possible surface states\cite{tsymmu1}. These are obtained by combining the spin liquid with a SPT phase of spins protected by time reversal alone. For the particular spin liquid discussed here, this subtlety is resolved in Metlitski (unpublished) and there is no extra such SPT phase added in the duality.}.
Thus the same phase admits two equivalent but dual points of view. We can obtain it either by taking the $n = 1$ topological insulator of fermions with $U_e(1) \times C$ symmetry and gauging the global $U_e(1)$ or by taking the standard topological insulator of Kramers fermions with $U_m(1) \rtimes C$ symmetry\footnote{This simply means that the generator of the $U_m(1)$ is even under $C$. As $C$ is anti-unitary this implies that $U_m(1)$ rotation and $C$ do not commute.} and gauging this $U_m(1)$. For clarity in this section we use the subscripts $e$ or $m$ for $U(1)$ to distinguish between the `electric' and `magnetic' $U(1)$ rotations.
\section{Duality of surface states}
\label{dualsrf}
It is interesting to translate this bulk duality into a dual perspective of the surface states. The simplest case is the superconducting surface.
We recall that the surface avatar of the $(0,2)$ monopole is the $v_4$ vortex. We thus seek a dual description of the superconducting state in terms of the physics of the $v_4$ vortex.
Let us first quickly review pertinent aspects of the standard charge-vortex duality of two dimensional systems\cite{chandandual,mpafdhl89}. The simplest example is for bosonic superfluids. Then the superfluid phase may be fruitfully viewed as a Mott insulator of vortices in the phase of the boson. The zero sound mode of the superfluid can conveniently be represented as a gapless photon in $2+1$ dimensions, and the vortices couple to this photon as `electric charges'. This leads to a dual Landau-Ginzburg theory of the superfluid in terms of vortex fields coupled minimally to a fluctuating non-compact $U(1)$ gauge field. The magnetic flux of this gauge field corresponds physically to the physical boson number density.
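Schematically, and written here for the bosonic case, this dual Landau-Ginzburg theory takes the familiar form (with $\Phi_v$ a vortex field and $b_\mu$ the non-compact $U(1)$ gauge field)
\begin{equation}
{\cal L}_{dual} = \left|\left(\partial_\mu - i b_\mu\right)\Phi_v\right|^2 - V\!\left(|\Phi_v|\right) + \frac{1}{2\pi}\epsilon_{\mu\nu\lambda} A_\mu \partial_\nu b_\lambda + \cdots
\end{equation}
where the last term implements the identification of the physical boson 3-current with the flux of $b_\mu$.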
It has also been known for some time now\cite{z2long} how to extend this dual vortex formulation to an ordinary gapped $s$-wave superconductor of {\em fermions} in two dimensions. To describe the Bogoliubov quasiparticles it is convenient to formally strip them of their electric charge and define neutralized fermionic particles (``spinons'') which see the elementary $\frac{h}{2e}$ vortices as $\pi$-flux. The vortices are in addition coupled minimally, exactly as in a bosonic superfluid, to a fluctuating non-compact $U(1)$ gauge field. This dual description of an ordinary superconductor is conceptually powerful, and enables passage from the superconductor to various fractionalized Mott insulators in two dimensions.
Returning now to the superconductor obtained at the surface of the $n = 1$ TI with $U(1) \times C$ symmetry, from the point of view of the $v_4$ vortex the surface is gapped. Further the vortex number conservation is the surface manifestation of the magnetic $U_m(1)$ gauge structure present in the bulk spin liquid. Since vortex number is conserved, the surface preserves the dual $U_m(1) \rtimes C$ symmetry.
Thus from the point of view of $v_4$ what we have been calling the surface superconductor is really a symmetry preserving surface topological order of the bulk topological insulator formed by the $(0,2)$ fermions.
It is possible to check this explicitly. Following the logic described in the previous section we can fully determine the braiding/fusion rules, and the symmetry assignment for the quasiparticles of the surface superconductor. These turn out to be identical to those of a specific surface topological order (known as T-Pfaffian\cite{fSTO3}) obtained earlier through bulk Walker-Wang constructions for the spin-orbit coupled topological insulator with the $v_4$ identified with the dual `electron' (and thus a vorticity $4\pi$ identified with dual `electron' charge $1$).
We are now ready to describe the full dual Landau-Ginzburg theory of the surface superconductor by reinstating the zero sound mode. As usual this zero sound mode is described as a gapless photon in $2+1$ dimensions. The vortices will then couple minimally to this photon. Thus a dual Landau-Ginzburg description of the surface superconductor is simply obtained: Take the T-Pfaffian topological order and couple all the charged particles to a fluctuating non-compact $U(1)$ gauge field $a_\mu$. (Recall that the charges of the T-Pfaffian are precisely the vortices of the surface superconductor).
This dual formulation of the surface superconductor will be extremely useful as a framework in which to address non-superconducting states obtained through phase fluctuations. We turn to these next.
\section{Vortex metal surface states}
\label{vms}
The surface superconducting order may be destroyed to restore $U(1) \times C$ symmetry by proliferating vortices. If we condense bosonic vortices, for instance the $8\pi$ vortex $v_4^2$, we will get a symmetry preserving gapped surface topological order. Alternately we can kill the superconductivity by proliferating the fermionic $v_4$ vortex, {\em i.e.} by making it gapless. As the dual LGW theory of the surface superconductor is the gauged version of the T-Pfaffian topological order, we will get a gapless vortex liquid if we confine the non-trivial quasiparticles of the T-Pfaffian state through a phase transition to a gapless symmetry preserving state of the $v_4$ fermion. But this is precisely the famous single Dirac cone (tuned to neutrality) formed by $v_4$. We thus have a dual Dirac liquid surface state\cite{dualdrcwts2015,dualdrcmaxav} for the $n = 1$ $U(1) \times C$ topological insulator described by the Lagrangian
\begin{equation}
\label{ddl}
{\cal L} = \bar{\psi}_v \left(-i\slashed{\partial} - \slashed{a}\right) \psi_v + \frac{1}{4\pi} \epsilon_{\mu\nu\lambda} A_\mu\partial_\nu a_\lambda
\end{equation}
Here $\psi_v$ is a fermion field representing the $v_4$ vortex. We have chosen units so that this couples to the non-compact gauge field $a_\mu$ with gauge charge $1$. With this choice the conserved 3-current of the original global $U(1)$ symmetry is
\begin{equation}
\label{dualjagain}
j_\mu = \frac{1}{4\pi} \epsilon_{\mu\nu\lambda} \partial_\nu a_\lambda
\end{equation}
This is reflected in the last term of the Lagrangian which describes the coupling of this current to the external probe gauge field $A_\mu$. Finally the original electron $\psi$ is obtained as a $4\pi$ instanton in the gauge field $a_\mu$.
Importantly $\psi_v$ is {\em Kramers doublet} under the $C$ operation transforming as
\begin{equation}
C\psi_v C^{-1} = i\sigma_y \psi_v
\end{equation}
This dual Dirac liquid describes a possible surface state if the surface superconductivity is destroyed by phase fluctuations at zero magnetic field $B$. What if the superconductivity is destroyed by turning on a non-zero $B$? Now we will have a finite density of vortices. If we wish to preserve $C$ symmetry the simplest option is to induce a finite density of $v_4$ vortices and make them form a `metallic' state. This will lead to a non-zero chemical potential in the Lagrangian in Eqn. \ref{ddl} for the dual Dirac liquid so that the dual Dirac cone is no longer tuned to be at the neutrality point. The density of these vortices is precisely
\begin{equation}
\label{nvdens}
n_v = \frac{B}{4\pi}
\end{equation}
as these are $4\pi$ vortices.
Further as these are fermions they will form a Fermi surface. The Fermi momentum $K_F$ will be related to $n_v$ in the usual way
\begin{equation}
K_F = \sqrt{4\pi n_v}
\end{equation}
The fermions at this Fermi surface will of course continue to be coupled to the $U(1)$ gauge field $a_\mu$.
\section{Back to Composite Fermi Liquids}
\label{bcfl}
Let us now return to the fate of the half-filled Landau level in the presence of particle-hole symmetry. Earlier we argued that we can UV complete this theory with the $U(1) \times C$ symmetry retained as an exact locally realized microscopic symmetry by obtaining it as the surface of the $n = 1$ TI with $U(1) \times C$ symmetry. We now see that when $B \neq 0$ at this surface, as required to produce the half-filled Landau level, a possible gapless state that preserves the $U(1) \times C$ symmetry is the dual Dirac liquid at non-zero chemical potential.
This theory bears some remarkable similarities to the usual composite Fermi liquid description. We will therefore identify the field $\psi_v$ (or equivalently the $v_4$ vortex) with the composite fermion. First the density of $\psi_v$ as given by Eqn. \ref{nvdens} is precisely half the degeneracy of the lowest Landau level, {\em i.e.} it matches exactly the density of electrons in the half-filled Landau level. Just as in the usual composite fermi liquid, $\psi_v$ forms a Fermi surface which is then coupled to a non-compact $U(1)$ gauge field. $\psi_v$ itself is formally electrically neutral (it is a vortex) but the gauge flux couples to the external vector potential.
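Explicitly, in the units used here the lowest Landau level holds $\frac{B}{2\pi}$ states per unit area, so the electron density at half-filling is
\begin{equation}
n_e = \frac{1}{2}\cdot \frac{B}{2\pi} = \frac{B}{4\pi} = n_v
\end{equation}
matching Eqn. \ref{nvdens}.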
The main difference is that particle-hole symmetry is explicitly present in this version of the composite Fermi liquid. Further $\psi_v$ is a Kramers doublet under $C$, and its Fermi surface encloses a Dirac cone. This is manifested in a $\pi$ phase when a $\psi_v$ particle at the Fermi surface circles around it.
This is precisely the description of the particle-hole symmetric composite fermion liquid proposed by Son in Ref. \onlinecite{sonphcfl} which we described in Sec. \ref{sonprop}. We have thus provided an understanding of Son's proposal through the linkage with the surface of a three dimensional electronic topological insulator.
It is worth emphasizing a few points. The vortex metal/composite fermi liquid surface state has been shown to emerge as a legitimate surface state of the $n = 1$ topological insulator with $U(1) \times C$ symmetry in a non-zero $B$-field. This same surface also provides a realization of a half-filled Landau level with $U(1) \times C$ symmetry. Thus the vortex metal/composite fermi liquid state is a legitimate state for a half-filled Landau level with particle-hole symmetry. Whether this state is really the fate of the half-filled Landau level or not depends on microscopic details which we have not attempted to address.
A different question altogether is whether the dual Dirac liquid at zero field describes the same phase as the standard single Dirac cone. We have also not attempted to answer this question here.
Finally we discuss the physical picture of the composite fermion from the point of view of the three dimensional topological insulator when the half-filled Landau level is obtained as its boundary. Note that the composite fermion is the surface avatar of the strength-$2$ electrically neutral monopole in the bulk (see Fig. \ref{blkbdrcfl}). We earlier obtained the properties of this strength-$2$ monopole by obtaining it as the bound state of the $(\pm \frac{1}{2}, 1)$ dyons. These two dyons correspond precisely, at the surface, to the two oppositely charged $2\pi$ vortices at the two ends of the composite fermion.
\begin{figure}
\begin{center}
\includegraphics[width=3.3in]{blkbdrcfl.pdf}
\end{center}
\caption{Bulk-boundary correspondence for the composite fermi liquid. The composite fermion is the surface avatar of the electrically neutral strength-$2$ bulk monopole which itself is a bound state of the two $(\pm \frac{1}{2}, 1)$ dyons. This strength-$2$ monopole is a fermion, and is Kramers-doublet under $C$. At the surface the two dyons that make up this monopole correspond to the two ends of the dipole of Fig. \ref{newdip}.}
\label{blkbdrcfl}
\end{figure}
\section{Particle-hole symmetric Pfaffian state}
\label{phpf}
The composite fermi liquid state is well known to act as a `parent' normal state out of which the non-abelian Moore-Read (Pfaffian) state arises through pairing\cite{readgrn}. It is also well known\cite{ssletalapf,levinapf} that the Pfaffian state breaks particle-hole symmetry. A particle-hole conjugate state - known as the anti-Pfaffian - has been described as an alternate candidate for the observed plateau at $\nu = \frac{5}{2}$. From the particle-hole symmetric composite fermi liquid, it is natural then to consider angular momentum $l = 0$ pairing which preserves the particle-hole symmetry. This leads to a gapped topologically ordered state - which we may call the $C$-Pfaffian - which is yet another alternate possible non-abelian quantum Hall state at the same filling.
It is interesting to view this as a correlated surface state of the related three dimensional topological insulator with $U(1) \times C$ symmetry. As it preserves the $U(1) \times C$ symmetry, this is a symmetry preserving
surface topological order. Precisely such surface topologically ordered states were described in Refs. \onlinecite{fidkowski3d,3dfSPT2,maxvortex}. The $C$-Pfaffian state obtained by $l = 0$ pairing of the composite fermions of the particle-hole symmetric composite fermi liquid is essentially identical to the states described in these references.
We briefly describe the particle content of the $C$-Pfaffian state. In the absence of the gauge field $a_\mu$ this is simply the famous Fu-Kane superconductor obtained at the surface of spin-orbit coupled $3+1$-d topological insulators. In particular the fundamental $\pi$ vortex (and all odd multiples) traps a Majorana zero mode. The presence of the gauge field means that the vortices are screened and will have finite energy cost. The $\pi$ vortex (and its odd multiples) will clearly have non-abelian statistics. Through Eqn. \ref{dualj} we see that the $\pi$-vortex will have physical electric charge $\frac{e}{4}$. An argument identical to the one in
Sec. \ref{titsymmu1} shows that there are two $2\pi$ vortices, carrying charge $\pm \frac{e}{2}$, one of which is a semion and the other an antisemion. These are also mutual semions. Their bound state is a $4\pi$ vortex which is an electrically neutral fermion, and is a Kramers doublet under $C$. This is exactly the Bogoliubov quasiparticle obtained after the pair condensation of composite fermions. As usual this Bogoliubov quasiparticle has $\pi$ mutual statistics with the $\pi$ vortex but is local with respect to the $2\pi$ vortices.
A full description of the braiding and fusion rules and other topological data is readily obtained for the $C$-Pfaffian state. We however here focus on showing the connection with the physical picture described in the previous sections of the modified dipolar picture of the composite fermion (see Fig. \ref{newdip}). We already emphasized that the neutral fermion of the $C$-Pfaffian state was Kramers under $C$, and should be understood as the relic of the composite fermion. We also see that it can be understood as the bound state of the charge $\frac{e}{2}$ semion and the charge $ -\frac{e}{2}$ antisemion. But this is precisely the dipolar picture advocated in the previous section. In particular the two ends of the dipole have been liberated as deconfined quasiparticles by the passage to the paired $C$-Pfaffian state. This lends further support for this dipolar picture.
It is also enlightening to relate the structure of the $C$-Pfaffian state to the properties of the bulk $3+1$-D topological insulator with $U(1) \times C$ symmetry. Then the neutral fermion of the $C$-Pfaffian is precisely the surface avatar of the strength-$2$ electrically neutral magnetic monopole. The charge $\pm \frac{e}{2}$ anyons (either semion or antisemion) are the surface avatars of the $(\pm \frac{1}{2}, 1)$ dyons. This ties in beautifully with the pictures described in previous sections.
\section{Revisiting the phenomenology of composite fermi liquids}
\label{cflphen}
With the understanding of the half-filled Landau level described above it is interesting to revisit the phenomenology of composite fermi liquids (with or without particle-hole symmetry). By and large these are unchanged from the original HLR theory. We also describe some new experimental predictions (that do not actually rely crucially on particle-hole symmetry).
In practice, even if the projection to the lowest Landau level and the restriction to two-body interactions are good approximations, there will inevitably be disorder potentials that will break particle-hole symmetry. Further the edge potential also breaks particle-hole symmetry so that physical quantities sensitive to edge physics will not be particle-hole symmetric.
Nevertheless in an ideal sample, if Landau level mixing can be neglected, we expect the formulation described here will apply. In that case how can the $\pi$-Berry phase associated with the Fermi surface of the composite fermions be measured? We show in Appendix \ref{sdhpi} that this $\pi$ Berry phase is implied already by a slight re-interpretation of the standard phenomenology away from $\nu = \frac{1}{2}$.
\begin{enumerate}
\item
{\em Electromagnetic response}
The electromagnetic response functions of the $C$-symmetric composite Fermi liquid were discussed in Ref. \onlinecite{sonphcfl}. They resemble but are not identical to those proposed by the standard HLR theory. To discuss dc transport at low-$T$ it is necessary to include the effects of disorder. A random potential will, as in the standard HLR theory, lead to a random magnetic field seen by the composite fermions. For simplicity we assume that the probability distribution of the random potential is particle-hole symmetric. Then the mean effective field seen by the composite fermions is zero. Consider the electrical conductivity tensor. When we access the half-filled Landau level as a TI surface state we have to include a contribution to the Hall conductivity of $\frac{e^2}{2h}$ from the filled states below the chemical potential. To understand this precisely, note that when the lowest Landau level is obtained in the usual way in two dimensional systems, the empty Landau level has Hall conductivity $0$ and the filled one has Hall conductivity $\frac{e^2}{h}$. However when this Landau level is obtained as the surface state of a $3d$ TI, the empty and full levels are related by $C$-symmetry: they hence have opposite Hall conductivities $\pm \frac{e^2}{2h}$ respectively. Thus the surface Dirac composite fermion theory must be supplemented by the background term (Eqn. \ref{lbg}) described in Sec. \ref{sonprop} when describing the usual Landau level.
Thus the full physical conductivity tensor $\sigma_{ij}$ takes the form
\begin{equation}
\sigma_{ij} = \frac{e^2}{2h} \epsilon_{ij} + \sigma^*_{ij}
\end{equation}
Here $\epsilon_{ij}$ is antisymmetric and $\epsilon_{xy} = 1$. $\sigma^*_{ij}$ is the conductivity tensor calculated within the low energy effective field theory given by Eqn. \ref{ccfl}. A physical description of this conductivity is easily obtained. First there is no off-diagonal term in $\sigma^*$ as the $\psi_v$ move in zero effective magnetic field. Second the $\psi_v$ are $4\pi$ vortices in the electron phase. Thus, by the usual rules of charge-vortex duality, the physical electrical conductivity is proportional to the inverse of the vortex conductivity obtained within the standard RPA. More precisely we have
\begin{equation}
\label{nsigma}
\sigma^*_{ij} = \delta_{ij} \frac{e^4}{(4 \pi \hbar )^2 \sigma_v}
\end{equation}
Here $\sigma_v$ is the RPA expression for the conductivity of the $\psi_v$ composite fermions (we have reinstated factors of $e$ and $\hbar$). It follows that the measured physical longitudinal conductivity is just
\begin{equation}
\label{sigmaxx}
\sigma_{xx} = \frac{e^4}{(4\pi \hbar)^2 \sigma_v}
\end{equation}
As a function of wavenumber $q$, the composite fermion conductivity $\sigma_v$ takes the well-known form
\begin{eqnarray}
\sigma_v & = & \frac{e^2K_F l}{4\pi \hbar}, ~~ q \ll \frac{2}{l} \\
& = & \frac{e^2 K_F}{2\pi \hbar q}, ~~ q \gg \frac{2}{l}
\end{eqnarray}
where $l$ is the impurity induced mean free path for the composite fermions. Combining with Eqn. \ref{sigmaxx} the physical longitudinal conductivity takes exactly the form obtained by HLR in their original theory, and used to confront a number of experiments\cite{dassarmabook}.
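For concreteness (this is a direct substitution of the two limits above into Eqn. \ref{sigmaxx} rather than an independent result, and uses $2\pi\hbar = h$),
\begin{eqnarray}
\sigma_{xx} & \approx & \frac{e^2}{2 h K_F l}, ~~ q \ll \frac{2}{l} \\
& \approx & \frac{e^2 q}{4 h K_F}, ~~ q \gg \frac{2}{l}
\end{eqnarray}
so that the longitudinal conductivity grows linearly with $q$ once $q$ exceeds the inverse mean free path, which is the familiar HLR behavior referred to above.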
Note that in the usual HLR theory there is a composition rule for the resistivity (rather than the conductivity) tensor:
\begin{equation}
\label{orho}
\rho_{ij}^{HLR} = \frac{2h}{e^2} \epsilon_{ij} + \rho^*_{ij}
\end{equation}
where $\rho^*$ is the resistivity tensor of the composite fermions. In practice we are in the limit $\rho_{xx} \ll \rho_{xy}$, and further $\rho_{xy}$ is approximately $\frac{2h}{e^2}$ even in the standard theory.
Thus in HLR theory the longitudinal conductivity
\begin{equation}
\sigma_{xx}^{HLR} \approx \frac{e^4 \rho^*_{xx}}{4h^2 }
\end{equation}
which is essentially the same as Eqn. \ref{sigmaxx} (after identifying $\rho^*_{xx} \approx \frac{1}{ \sigma_v}$).
\item
{\em Thermal transport and Wiedemann-Franz violation}
A striking feature of conventional Fermi liquid metals is the Wiedemann-Franz relationship between the residual electrical and thermal conductivities. Within Boltzmann transport theory, in the limit $T \rightarrow 0$, the longitudinal thermal conductivity $\kappa_{xx}$ is related to the electrical conductivity through
\begin{equation}
\label{convwf}
\kappa_{xx} = L_0 T \sigma_{xx}
\end{equation}
where $L_0 = \frac{\pi^2k_B^2}{3e^2}$ is the free electron Lorenz number.
We now argue that the composite Fermi liquid will not satisfy the conventional Wiedemann-Franz law but will instead satisfy a modified one. Though the composite fermions contribute to electrical transport as vortices, they
are directly responsible for heat transport. Thus the measured residual $\kappa_{xx}$ will satisfy the Wiedemann-Franz law with $\sigma_v$, {\em i.e.,} the composite fermion conductivity. But this is inversely related to the measured electrical conductivity. Thus we have the relation
\begin{equation}
\label{modwfsigma}
\kappa_{xx} = \frac{L_0 T e^4}{4 h^2\sigma_{xx}}
\end{equation}
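To spell out the small step behind the factor of $4h^2$ (this is a restatement of the relations above, not an additional input): the composite fermions obey $\kappa_{xx} = L_0 T \sigma_v$, and inverting Eqn. \ref{nsigma} gives
\begin{equation}
\sigma_v = \frac{e^4}{(4\pi\hbar)^2 \sigma_{xx}} = \frac{e^4}{4 h^2 \sigma_{xx}},
\end{equation}
which reproduces Eqn. \ref{modwfsigma}.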
Conceptually similar violations have been discussed previously\cite{vmetal} in other vortex metals. Equivalently we observe that the longitudinal resistivity is, to a good approximation which ignores corrections of order $\left(\frac{\rho_{xx}}{\rho_{xy}}\right)^2$, given by
\begin{equation}
\rho_{xx} = \frac{4h^2 \sigma_{xx}}{e^4}
\end{equation}
so that the modified Wiedemann-Franz law may be written
\begin{equation}
\label{modwfrho}
\kappa_{xx} \rho_{xx} = L_0 T
\end{equation}
If instead we use the standard HLR theory we will obtain Eqn. \ref{modwfrho} as an essentially exact relation (so long as we can ignore off-diagonal terms in $\rho^*$) and Eqn. \ref{modwfsigma} will hold approximately, up to corrections of order $\left(\frac{\rho_{xx}}{\rho_{xy}}\right)^2$.
For a conventional metal in {\em zero} magnetic field, the modified Wiedemann-Franz law (Eqn. \ref{modwfrho}) is equivalent to the usual one as $\sigma_{xx} = \frac{1}{\rho_{xx}}$. However in a non-zero magnetic field, Eqns. \ref{convwf}
and \ref{modwfrho} are no longer equivalent.
For a conventional metal in non-zero magnetic field Eqn. \ref{convwf} is the appropriate result (more generally the thermal conductivity tensor is equal to $L_0 T$ times the electrical conductivity tensor)\cite{Abrikosovbook}. However for the composite Fermi liquid Eqn. \ref{modwfsigma} (or Eqn. \ref{modwfrho}) holds.
It is interesting to quantify the violation of the conventional Wiedemann-Franz law by defining a Lorenz number $L_{CF}$ for the composite Fermi liquid through
\begin{equation}
L_{CF} = \frac{\kappa_{xx}}{T\sigma_{xx}}
\end{equation}
We have
\begin{equation}
\frac{L_{CF}}{L_0} = \left(\frac{\rho_{xy}}{\rho_{xx}}\right)^2
\end{equation}
Since the measured $\rho_{xx} \ll \rho_{xy}$ we have a giant enhancement - possibly of order $10^3$ - of the Lorenz number compared to free electrons.
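As a rough numerical illustration (the value of $\rho_{xx}$ below is assumed purely for the estimate and is not taken from any particular sample): with $\rho_{xy} \approx \frac{2h}{e^2} \approx 52~\mathrm{k}\Omega$ near $\nu = \frac{1}{2}$ and a longitudinal resistivity of, say, $\rho_{xx} \approx 1.6~\mathrm{k}\Omega$, we get
\begin{equation}
\frac{L_{CF}}{L_0} = \left(\frac{\rho_{xy}}{\rho_{xx}}\right)^2 \approx \left(\frac{52}{1.6}\right)^2 \approx 10^3.
\end{equation}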
This modified Wiedemann-Franz law can possibly be tested in experiments. We emphasize that this result does not rely on particle-hole symmetry and is indeed obtained in the standard HLR theory as well. Similar violations are expected at $\nu = \frac{1}{4}$ and other composite fermi liquid metals. We are not aware of any thermal conductivity measurements in the $\nu = \frac{1}{2}$ state. Of course it will be necessary to subtract off the thermal conductivity of the substrate. This can perhaps be done by comparing with the thermal conductivity at a neighboring quantum Hall plateau\footnote{The off-diagonal thermal conductivity $\kappa_{xy}$
will however satisfy the conventional Wiedemann-Franz law with the electrical $\sigma_{xy}$ so that $\kappa_{xy} = L_0 T \sigma_{xy}$. This means that $\kappa_{xx} \gg \kappa_{xy}$ so that the longitudinal thermal {\em resistivity} $\approx \frac{1}{\kappa_{xx}}= \frac{\rho_{xx}}{L_0 T}$. This form of the Wiedemann-Franz law is also equivalent to Eqn. \ref{convwf} at zero field but becomes inequivalent in non-zero field. }.
\item
{\em Cyclotron orbits}
If the Landau level filling is changed from $1/2$, particle-hole symmetry will be broken. Just like in the original HLR theory, the composite fermions will see an effective magnetic field that is much reduced from the externally applied one. They will then have cyclotron orbits with a radius much bigger than for electrons in the same external magnetic field.
Consider moving away from half-filling by changing the magnetic field by $\delta B$ while keeping the electron density fixed. The filling changes by $\delta \nu = -\frac{\delta B}{2B}$. The deviation from half-filling changes $\langle j_0 \rangle$ through Eqn. \ref{jorho} to
\begin{equation}
\langle j_0 \rangle = -\frac{\delta B}{4\pi}
\end{equation}
Through Eqn. \ref{dualj} this is related to the average internal magnetic field. Thus the composite fermions see an effective magnetic field
\begin{equation}
B^* = \delta B ~~= - 2B\delta \nu
\end{equation}
To leading order in $\delta \nu$, the cyclotron radius of the composite fermions is
\begin{equation}
\label{rcstar1}
R^*_c = \frac{K_F}{|B^*|}
\end{equation}
Thus we have
\begin{equation}
R^*_c |B^*| = \frac{1}{l_B}
\end{equation}
where $l_B = \frac{1}{\sqrt{B}}$ is the magnetic length, which is the same result as in the standard HLR theory.
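For completeness, the step behind this identity (in the units $\hbar = e = 1$ implicit in writing $l_B = \frac{1}{\sqrt{B}}$, and for a single spin-polarized composite Fermi surface) is that at $\nu = \frac{1}{2}$ the composite fermion density equals $\frac{B}{4\pi}$, so that
\begin{equation}
n_v = \frac{B}{4\pi}, \qquad K_F = \sqrt{4\pi n_v} = \sqrt{B} = \frac{1}{l_B},
\end{equation}
and hence $R^*_c |B^*| = K_F = \frac{1}{l_B}$ as stated.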
Recently\cite{kamburov14}, through a geometric resonance experiment, $R_c^*|B^*|$ was inferred as a function of $\delta \nu$. The results were interpreted as indicating that
$K_Fl_B$ decreased on deviating from $\nu = \frac{1}{2}$ in either direction. This has been addressed theoretically in Refs. \onlinecite{maissamph,brmtkjain15}. In the particle-hole symmetric theory, when the external magnetic field is changed at fixed density,
the density of composite fermions changes by $\delta n_v = \frac{\delta B}{4\pi}$, and correspondingly the Fermi momentum changes by $\delta K_F = \frac{\delta B}{2\sqrt{B}}$. From Eqn. \ref{rcstar1} this gives one source of $\delta \nu$ dependence which however leads to a steady decrease of $R_c^*|B^*|$ with increasing $\delta \nu$. However we caution that when $B^* \neq 0$ the composite fermion momenta are smeared on the scale of $\frac{1}{R_c^*} \sim \sqrt{B} |\delta \nu|$ which is the same order as $\delta K_F$. Thus the theory of the $\delta \nu$ dependence in the experiment likely requires more complicated analysis which we leave for the future.
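As a one-line check (using the same relation $n_v = \frac{K_F^2}{4\pi}$ as above), the value of $\delta K_F$ quoted in this paragraph follows from
\begin{equation}
\delta K_F = \frac{2\pi\, \delta n_v}{K_F} = \frac{2\pi}{\sqrt{B}}\cdot\frac{\delta B}{4\pi} = \frac{\delta B}{2\sqrt{B}}.
\end{equation}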
\item
{\em $2K_F$ density oscillations}
It is interesting to ask about the singularities in the $2K_F$ response of physical quantities in the particle-hole symmetric theory. Note that the physical charge density is not simply the composite fermion density (unlike in HLR). Since the physical density is given by Eqn. \ref{dualj}, we see that the density correlator is determined by the correlator of the transverse gauge field. For simplicity let us specialize to zero frequency. Then,
\begin{equation}
\langle |j_0(\vec q, \omega = 0 )|^2 \rangle = q^2 \langle |a_t(\vec q, \omega = 0)|^2 \rangle
\end{equation}
where $a_t$ is the transverse component of the vector potential $\vec a$. For $q \approx 2K_F$ this means that the universal structure of the density correlator is the same as that of the transverse gauge field.
In the effective Lagrangian the gauge field couples to the fermions through the term
\begin{equation}
\bar{\psi}_v (\vec k + \vec q) a^i_{-q} \gamma^i \psi_v(\vec k)
\end{equation}
For $\vec q \approx 2K_F \hat{x}$, the important coupling is between composite fermions in a patch of the Fermi surface near $+ K_F \hat{x}$ and those in an antipodal patch near $- K_F \hat{x}$. As the ``spin'' of the composite fermion is polarized perpendicular to the Fermi momentum, the wavefunctions at the two antipodal Fermi points are orthogonal to each other. This means that $a_x$ (which couples to $\sigma_y$) will not scatter a fermion from the right patch to the left one. However $a_y$ couples to $\sigma_x$ and will be able to scatter composite fermions between these two patches. Thus the effective quadratic action for $a_y$ near wave vectors $\vec q \approx 2K_F \hat{x}$ will be determined by the correlations of $\psi_{vR}^\dagger \sigma_x \psi_{vL}$ (where $R,L$ refer to the right and left patches respectively). This will have the same structure of the $2K_F$ singularity as in a usual Fermi surface coupled to a gauge field \cite{mrossde,altshuleretal}. In the presence of the long range Coulomb interaction these are essentially unmodified from the Fermi liquid form (up to log corrections) corresponding to a square root cusp as a function of $|q - 2K_F|$ that modifies a smooth non-universal contribution. It is then easy to see that the density correlations will have this same universal structure of $2K_F$ singularities (as in the standard HLR theory).
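The orthogonality invoked here can be made concrete in a simple, convention-dependent way (the specific spinor basis below is chosen only for illustration): take the spinors at the two antipodal Fermi points to be the $\sigma_y$ eigenstates $|R\rangle = \frac{1}{\sqrt{2}}(1, i)^{T}$ and $|L\rangle = \frac{1}{\sqrt{2}}(1, -i)^{T}$, as appropriate when the spin is locked perpendicular to the momentum. Then
\begin{equation}
\langle L|\sigma_y|R\rangle = \langle L|R\rangle = 0, \qquad \langle L|\sigma_x|R\rangle = i \neq 0,
\end{equation}
so $a_x$ cannot connect the two patches while $a_y$ can, consistent with the statements above.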
\item
{\em Disorder with statistical particle-hole symmetry: Localization}
If the disorder is particle-hole symmetric we can ask about possible localization effects on the composite fermions. Ignoring the gauge field maps the problem to that of the surface Dirac cone of spin-orbit coupled $3d$ topological insulators in the presence of a random effective magnetic field $B_{eff}$ with statistical time reversal invariance. There will be some regions in space in which the magnetic field $B_{eff}$ is positive and some in which it is negative. Inside either of these regions, if the magnitude of $B_{eff}$ is large, there will be a gap and a $C$-broken gapped surface will be induced. However along the domain walls between these regions there will be gapless $1d$ edge modes. In the strong disorder limit we will form a random network of these domain walls. We expect this to be at the critical point of the integer quantum Hall plateau transition. (Similar arguments have been made in Ref. \onlinecite{fkwti} to discuss disorder effects on the surface of weak topological insulators, topological crystalline insulators, and related systems). This conclusion is presumably not affected by the gauge field.
Thus statistically particle-hole symmetric disorder will not localize the composite fermions but rather drives the composite fermi liquid to the critical point of the integer quantum Hall plateau transition.
\end{enumerate}
\section{Discussion}
\label{disc}
We have elaborated in this paper the connections between three seemingly disparate research topics in quantum many body physics. Here we briefly comment on some extensions and open questions.
For the half-filled Landau level, we presented various physical ways of understanding Son's proposed particle-hole symmetric theory. This understanding will hopefully guide future efforts to derive the particle-hole symmetric composite fermi liquid theory by working purely within the lowest Landau level. For composite fermi liquids of bosons at $\nu = 1$ such a derivation was provided by Ref. \onlinecite{read98} building on the formulation of Ref. \onlinecite{Pasquier1998}. For fermions at $\nu = \frac{1}{2}$ lowest Landau level approaches have been developed (see e.g., Ref. \onlinecite{gmrsrmp03}) but particle-hole symmetry has not been incorporated.
A different recent development\cite{kachru15} which we did not describe here is the application of mirror symmetry of supersymmetric quantum field theories in $2+1$-d to the half-filled Landau level. Ref. \onlinecite{kachru15} started with a supersymmetric massless theory which is free in the infra-red, and which is known to be dual to an interacting supersymmetric gauge theory. Turning on a magnetic field that couples to the conserved global $U(1)$ currents on the IR-free side of the duality breaks supersymmetry, and the low energy theory is simply that of a half-filled Landau level but for two species of fermions which couple with opposite electric charges to the external magnetic field. On the other side of the duality the effective gauge theory reduces essentially to Son's proposed theory but with two fermi surfaces corresponding to the two species of fermions.
In the introduction we raised the question of physical realization of correlated surface states of three dimensional topological insulators/superconductors. We now see that this has a surprising and interesting answer: a physical realization is the half-filled Landau level of a two dimensional electron gas. For topological insulators with $U(1) \times C$ symmetry with a $Z_8 \times Z_2$ classification, the $8$ distinct members of the $Z_8$ subgroup all have free fermion bulk realizations. The $n = 1$ member corresponds to the single half-filled Landau level. Higher values of $n$ are realized as multi-component quantum Hall systems where each component is at filling $\nu = \frac{1}{2}$. Such multicomponent systems have received a lot of attention over the years. We expect that the connection to topological insulator surface states will provide interesting insights just as it does for $n = 1$.
The bulk duality of the gauged topological insulator has crucial implications for the classification and understanding of time reversal symmetric $U(1)$ spin liquids in $3+1$-dimensions. It shows that there is a unique such spin liquid where the low energy effective action for the emergent $U(1)$ gauge field has a $\theta$ angle of $\pi$. On the other hand when $\theta = 0$ Ref. \onlinecite{tsymmu1} showed that there were precisely $6$ distinct phases distinguished by the structure of bulk excitations leading to a total of $7$ distinct phases. Additional phases are obtained by combining these with SPT phases of the underlying spin system protected by time reversal.
We thank Patrick Lee for strongly encouraging us to write this article, and for many discussions. We also thank M. Barkeshli, S. Das Sarma, J. Eisenstein, Liang Fu, Matthew Fisher, N. Read, I. Sodemann, M. Metlitski, M. Shayegan, D. Son, A. Vishwanath, Liujun Zou, and M. Zaletel for inspiring and informative discussions. This work was supported by NSF DMR-1305741. This work was also partially supported by a Simons Investigator award from the
Simons Foundation to Senthil Todadri. Since the submission of the initial version of this paper, two other papers (Refs. \onlinecite{geraedtsnum} and \onlinecite{msgmp15}) have appeared on the arxiv with further results on particle-hole symmetry in the half-filled Landau level.
The strong CP problem is that $SU(3)$ gauge field instantons naturally
induce a CP violating term in the QCD Lagrangian which is constrained
by experiment to be very small for no obvious reason. We show that
this problem disappears if one assumes
the existence of at least one black hole somewhere in the universe.
The argument is reminiscent of Dirac's argument for the quantization of charge,
in which the existence of one monopole anywhere in the universe suffices to require the
quantization of electric charge everywhere.
\end{abstract}
\pacs{04.70.Bw,12.38.-t,11.30.Er}
\maketitle
\clearpage
\section{Introduction}
In quantum chromodynamics (QCD)\cite{generalrefs} - the generally
accepted theory of quarks and gluons - there was a prediction that there
should be a light pseudoscalar particle associated with the conserved current
generated by global chiral rotations of the quarks.
No such meson was observed, and this was called the
``U(1) problem''. It was realized that quantum effects spoil
the conservation of the quark axial current, making its divergence
proportional to $TrF_{\mu\nu}F^{\ast\mu\nu}$
where $F_{\mu\nu}$ is the $SU(3)$ field strength, $F^{\ast\mu\nu}$ its Hodge dual,
and the trace is taken over $SU(3)$ indices.
This divergence corresponds to a CP-violating Lagrangian density of the form
\begin{equation}
L_{CP-violating} = \frac{\theta g^2}{32\pi^2}TrF_{\mu\nu}F^{\ast\mu\nu}
\label{eqn:CP}
\end{equation}
\noindent where $g$ is the $SU(3)$ gauge coupling constant, and
$\theta$ is a free parameter. Overall, this expression is proportional to
the Pontryagin density, which on integration over spacetime
is an integer topological charge representing the number $n$ of times that $S^3$ (considered
as physical space ${\mathbb{R}}^3$ plus a point at infinity) nontrivially
``winds around'' $SU(3)$. The physical gauge-invariant vacuum is
constructed as a superposition of states of winding
number $n$, each weighted by $e^{in\theta}$ with the sum running from
$n=-\infty$ to $n=\infty$ in order
to preserve gauge invariance under the ``large'' gauge transformations which
are not continuously connected to the identity. $\theta$ is not determined by
the theory, and can, in principle, take any value between $0$ and $2\pi$.
When the weak interactions and quark masses are included,
$\theta$ is shifted by an amount
$arg(det(m))$ where $m$ is the quark mass matrix, but the basic form
of the expression remains the same and unless the shifted $\theta$ is zero (or $\pi$, but this
subtlety will not concern us here)
this term leads to a (CP-violating) electric dipole moment $|d_n|$ for the neutron.
The present upper bound $|d_n|<2.9\times 10^{-26}e\cdot$cm, where
$e$ is the magnitude of the electron charge\cite{neutron-edm},
implies $\theta <10^{-9}$. The puzzle of why $\theta$ is so small is
the ``strong CP problem''.
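To see where a bound of this size comes from, one can use the frequently quoted order-of-magnitude estimate $|d_n| \sim \theta \times 10^{-16}\, e\cdot$cm (the precise coefficient is model dependent; the value used here is only indicative), which gives
\begin{equation}
\theta \lesssim \frac{2.9\times 10^{-26}\, e\cdot\textrm{cm}}{10^{-16}\, e\cdot\textrm{cm}} \sim 3\times 10^{-10},
\end{equation}
consistent with the bound quoted above.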
A wide variety of solutions have been
proposed, generally involving new physics.
Many postulate particles
called axions\cite{generalrefs} associated with an additional $U(1)$ symmetry
which can be used to rotate $\theta$ to zero. These have not been observed and
are in general quite constrained by astrophysical considerations.
Other ideas include adding dimensions to the usual
3+1 that we know \cite{stdiml}, or making them
some fractional value a little less than four \cite{stdimf}.
Two-dimensional fundamental
objects (2-branes) \cite{branes} have been considered, as have
microscopic wormholes \cite{worm}, hypothetical
new interactions \cite{newint}, new (non-axion) particles \cite{newpart},
supersymmetry \cite{SUSY}, and magnetic monopoles\cite{monopCP}.
It has also been claimed that certain choices of regularization techniques
could solve the problem\cite{reg}. It has even been suggested that
a staggering $10^{32}$ standard model copies could do the job
\cite{gf}. It has also been argued\cite{Fort} that the strong CP problem
might naturally not appear at all if one simply reformulated QCD in terms of
holonomies (gauge invariant traces of Wilson loops).
This list of ideas and references is not meant
to be complete, but rather to show that the problem has
driven theorists to a wide range of quite exotic scenarios
in the search for an explanation.
Despite all this creativity, the strong
CP problem is still generally considered unsolved.
The point of this paper is that the problem could be resolved without unobserved
exotica, and without spoiling
the solution of the $U(1)$ problem, if the spacetime integral of the Pontryagin density were
somehow to be zero -- something I now argue will happen if one allows for the existence
of even one black hole.
In elementary particle physics one usually ignores gravity, and works with
quantum field theory in flat and topologically trivial spacetime.
While quantum field theory in a general curved spacetime\cite{qftc} is
highly nontrivial, the question asked here only requires a little topology.
First let us recall where the instantons come from that lead to the strong CP
problem\cite{Jackiw-and-Rebbi,CDG}. We look for $SU(3)$ gauge
field configurations $A_\mu$ which go to the identity (up to a gauge transformation)
at spatial infinity, with all the directions at infinity identified. These turn out to
fall into topologically distinct classes labelled by elements of $\pi_3(SU(3))$.
For completeness, and to make clear the origin of
the $\pi_3(SU(3))$, let us repeat the argument in more detail.
Pure gauge configurations are of the form
$A_\mu=iU^\dag(x)\partial_\mu U(x)$ where $U(x)$ takes its values in
$SU(3)$ and $x=(t,\vec{x})$. Using the gauge freedom to set $A_0=0$, which essentially means we
consider time-independent fields, only partially fixes the gauge. If we require
$U(\vec{x})=1$ as the spatial $\vec{x}$ goes to infinity in all directions
this is the same as looking for maps from spatial ${\mathbb{R}}^3$ compactified
at infinity (that is, $S^3$) into $SU(3)$.
Instantons then
correspond to homotopy classes $[S^3,SU(3)]$ of these maps.
By definition, $[S^3,SU(3)]=\pi_3(SU(3))$ and since $\pi_3(SU(3))=Z$ we have
homotopically distinct maps labelled by the integers, which turn out to be
the very topological charges that come from the integration of the Pontryagin
density in equation \ref{eqn:CP}.
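For reference, the integer labelling a map $U$ from $S^3$ into $SU(3)$ can be written explicitly; this is the standard winding number formula, quoted here only for orientation (the overall sign depends on conventions):
\begin{equation}
n = \frac{1}{24\pi^2}\int_{S^3} d^3x\, \epsilon^{ijk}\, Tr\left[(U^\dag\partial_i U)(U^\dag\partial_j U)(U^\dag\partial_k U)\right],
\end{equation}
and for a gauge field interpolating between vacua whose winding numbers differ by $n$, this $n$ equals the spacetime integral of $\frac{g^2}{32\pi^2}TrF_{\mu\nu}F^{\ast\mu\nu}$.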
The key observation of this paper is that if we have black holes present and
repeat the argument, we
should replace $[S^3,SU(3)]$ by $[M,SU(3)]$ where $M$ is a manifold (now
with boundary) created from the $S^3$ described above with a 3-ball bounded
by a 2-sphere excised for each black hole present -- effectively we are
removing a set of distinct points (and balls around them) from space.
Physically, we require that the gauge fields go to the identity
(up to gauge invariance)
on the surfaces of black holes (as well as at infinity), in
a similar spirit to reference \cite{CraneSmolin} in which this condition is invoked to argue
for spacetime foam as a universal regulator.
Note that we don't need to assume spacetime
foam or wormholes (as have been used to argue for solutions to the strong CP
problem before) and the black holes in question
need not be virtual or microscopic -- any astrophysical black holes
(or, indeed just one) would do. In particular, $\pi_1(M)$ is assumed to be
trivial, as is usual in considerations of the strong CP problem (and for
which there is no experimental evidence to the contrary). In many ways,
this is meant to be a very conservative solution to the strong CP problem
invoking essentially no new physics beyond what is generally known.
Now let us consider the homotopy classes of maps $\left[M,SU(3)\right]$
from $M$ to $SU(3)$. $M$ is clearly simply connected ($\pi_1(M)=0$), as
every closed loop can be continuously shrunk to a point. If we consider
possibly topologically nontrivial maps from $M$ to $SU(3)$ then the
usual Postnikov construction \cite{topology} tells one that one has to
consider $\pi_2(SU(3))$, but this is zero, and one is left with nothing
to worry about except $\left[M,K(\pi_1(SU(3)),1)\right]$
with $K(\pi_1(SU(3)),1)$ being the relevant Eilenberg-MacLane space.
By definition that means that $\left[M,K(\pi_1(SU(3)),1)\right]=H^1(M,\pi_1(SU(3)))$.
Since $M$ is simply connected, one immediately sees that this is zero, and
thus all maps from $M$ to $SU(3)$ are homotopically trivial (continuously
deformable to the identity). We could argue directly
that it is also zero due to the
fact that $SU(3)$ is simply connected and $\pi_1(SU(3))=0$.
If one wants to argue that the
true gauge group should be $SU(3)$ with its $Z_3$ center divided
out\cite{Oraif}, making the first
homotopy group nonzero, then the first argument given
in the above paragraph still makes the case.
The integral in equation \ref{eqn:CP}
now vanishes as the corresponding topological charge is zero, and
the strong CP problem would seem to be solved. Clearly, analogous
arguments hold for any finite-dimensional Lie group $G$ in place of
$SU(3)$ since $\pi_2(G)$ is always zero in this case\cite{topology} and the same
reasoning applies. Some care is needed if multiply connected $M$ is
considered since one does not want to induce a $\theta$-like term
for the $U(1)$ of electromagnetism. Such an electromagnetic $\theta$ term
is absent in standard analyses
since $\pi_3(U(1))=0$ and thus there are no $U(1)$ instantons
to worry about.
In the case of topologically more complicated spacetimes additional
gravitationally-induced CP violating effects may be present\cite{Deser-Duff-Isham}.
In particular, terms proportional to $f_{\mu\nu}f^{\ast\mu\nu}$ where $f$ is the
electromagnetic field strength tensor and $R^{ab}_{\mu\nu} R^{\ast\mu\nu}_{ab}$
where $R$ is the spacetime curvature tensor can be present. These contributions
are not usually considered part of the ``strong CP problem'', although it is
very interesting that these terms are not obviously suppressed by powers of
the Planck scale. In the case of the $R^{ab}_{\mu\nu} R^{\ast\mu\nu}_{ab}$
the spacetimes involved clearly are not of the form considered here since
corresponding instantons do not refer to topologically nontrivial gauge
fields {\em over} spacetime but rather topologically nontrivial spacetimes.
Whether the arguments made here can be extended to this case is not
obvious, but I hope to be able to return to this interesting question in future work.
Of related interest is also \cite{Seiberg} in which the suggestion is made that
the usual instanton sums may need to be modified in some theories.
As this paper was being completed, I became aware
of a related paper\cite{Etesi} by Etesi. This author considers
both black and ``white'' holes (which it is not clear exist), finding results
for $\left[M,SU(3)\right]$ which agree with those here. The claim
in that paper
however is not that the Pontryagin term integrates to zero, but rather
that one should consider a sort of ``effective homotopy'' which takes
into account the causal structure of the relevant spacetime and for which
the corresponding homotopy classes are not trivial and the strong
CP problem remains. The idea is that one should only consider
homotopies whose initial and final stages can be compared by an
observer in finite time. This leads to a re-appearance
of the $\theta$ vacuum structure which we just got rid of, and the
solution of the strong CP problem is
based on an additional assumption which is certainly not required
in the usual formulation of the strong CP problem.
In fact $\theta$ arises in a quantum mechanical superposition of
states of all instanton numbers making even the meaning of a
suitable observer unclear at best. Indeed the term ``instanton'' refers to the fact
that one considers field configurations which can be thought of as
at least approximately localized in time. This leads to that paper missing the key point
I make here which is
that even a single black hole (no ``white holes'' needed) suffices to
make all the $SU(3)$ field configurations topologically trivial. In this
way $TrF_{\mu\nu}F^{\ast\mu\nu}$ can still be nonzero locally to
solve the $U(1)$ problem, while globally all the corresponding gauge
field configurations are topologically trivial.
In contrast to essentially all other attempts to solve the strong CP
problem, the approach presented here requires no modification
of the standard treatment of the problem other than to include the
presence of normal (indeed classical) black holes as part of the
structure of spacetime. No undiscovered exotica need be invoked.
It may seem surprising that the existence of even one singular object - in this
case a black hole - could have implications for elementary particle physics,
but there is actually a rather old analog. Long ago in 1948,
Dirac had used topological arguments
to show\cite{monopoles} that the presence of just {\em one} magnetic monopole
would require electric charge everywhere to be quantized. Here we see that,
similarly, the presence of just one black hole can resolve the strong CP problem.
\section{Acknowledgements}
I would like to thank Ka\'{c}a Bradonji\'{c} and Tom
Paul for reading early
drafts of this paper. I would also like to thank Egil Lillestol, Danielle
Metral, Nick Ellis, and the hospitality
of the CERN Latin American School of High Energy Physics
2009 in Colombia where this work was started, Luis Alvarez-Gaum\'{e}
since it was during his lecture to the students that I started to think
about this problem again, and of course all the students who made
the school the great success that it was! I would also like to thank G\'{a}bor
Etesi for email correspondence on the first draft of this paper, as well
as for pointing out that while his argument in reference \cite{Etesi} allowed for
white holes it does not require them, and for emphasizing the possible
differences in considering Euclidean and Minkowski spacetimes in
calculations. Thanks are also due to Michael Duff and Stanley Deser for
email correspondence and
bringing references \cite{Deser-Duff-Isham} and \cite{Seiberg} to my attention.
This work
was supported in part by the US National Science Foundation under grant
NSF0855388.
\section{Introduction}
Throughout this paper, we assume that $R=K[x_1,\ldots,x_n]$ is the polynomial ring over a field $K$ and suppose that $G$ is a finite simple graph on the vertex set $V=\{x_1,\ldots,x_n\}$ and the edge set $E$.
For a vertex $v$ of $G$ the set of all neighbors of $v$ is denoted by $N(v)$ and we denote by $N[v]$ the set $N(v)\cup\{v\}$ and also we denote by $\deg(v)$ the number $\mid N(v)\mid$. An independent set of $G$ is a subset $A$ of $V(G)$ such that none of its elements are adjacent. The {\it edge ideal} of the graph $G$ is the quadratic square-free monomial ideal $I(G)=\langle x_ix_j\mid \{x_i,x_j\}\in E\rangle$ and was first introduced by Villarreal \cite{Vi}. Two edges $\{x,y\}$ and $\{z,u\}$ of $G$ are called $3$-{\it disjoint} if the induced subgraph of $G$ on $\{x,y,z,u\}$ is disconnected or equivalently in the complement of $G$ the induced graph on $\{x,y,z,u\}$ is a four-cycle. A subset $A$ of edges of $G$ is called a pairwise $3$-disjoint set of edges in $G$ if each pair of edges of $A$ is $3$-disjoint, see \cite{Ku, MTS1, Z}. The maximum cardinality of all pairwise $3$-disjoint sets of edges in $G$ is denoted by $a(G)$, see \cite{Ku,MTS1,Z}. Note that $a(G)$ is called {\it induced matching number}. The Castelnuovo-Mumford regularity of a graded $R$-module $M$ is defined as $\reg(M)=\max\{j-i| ~\beta_{i,j}(M)\neq 0\}$. Katzmann \cite{K} proved that $\reg(R/I(G))\geq a(G)$ for every simple graph $G$.
Stanley \cite{S} defined a graded $R$-module $M$ to be {\it sequentially Cohen-Macaulay} if there exists a finite filtration of graded $R$-modules
$0=M_0\subset M_1\subset\ldots\subset M_r=M$ such that each $M_{i}/M_{i-1}$ is Cohen-Macaulay, and the Krull dimensions of the quotients are increasing:
$\dim(M_1/M_0)<\dim(M_2/M_1)<\ldots<\dim(M_r/M_{r-1})$. In particular, we call the graph $G$ sequentially Cohen-Macaulay (resp., unmixed) if $R/I(G)$ is sequentially Cohen-Macaulay (resp., unmixed).
Herzog and Hibi \cite{HH1} defined the homogeneous ideal $I$ to be {\it componentwise linear} if $(I_d)$ has a linear resolution for all $d$, where $(I_d)$ is the ideal generated by all degree $d$ elements of $I$. They proved that if $I$ is a square-free monomial ideal, then $R/I$ is sequentially Cohen-Macaulay if and only if the square-free Alexander dual $I^{\vee}$ is componentwise linear. It is known that if $I$ has a linear resolution, then $I$ is componentwise linear. Note that for a square-free monomial ideal $I=\langle\{x_{i1}\ldots x_{in_i}\mid i=1,\ldots,t\}\rangle$ of $R$ the {\it Alexander dual} of $I$, denoted by $I^{\vee}$, is defined as $I^{\vee}=\cap_{i=1}^t\langle x_{i1},\ldots, x_{in_i}\rangle$. For a monomial ideal $I$, we write $(I_i)$ to denote the ideal generated by the degree $i$ elements of $I$. The monomial ideal $I$ is componentwise linear if $(I_i)$ has a linear resolution for all $i$ (see \cite{HH1}). If $I$ is generated by square-free monomials, then we denote by $I_{[i]}$ the ideal generated by the square-free monomials of degree $i$ of $I$. Herzog and Hibi \cite[Proposition 1.5]{HH1} proved that the square-free monomial ideal $I$ is componentwise linear if and only if $I_{[i]}$ has a linear resolution for all $i$.
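As a small illustration of the Alexander dual (independent of the examples below), let $I=I(G)=\langle x_1x_2, x_2x_3\rangle$ be the edge ideal of the path with edges $\{x_1,x_2\}$ and $\{x_2,x_3\}$. Then
\[I^{\vee}=\langle x_1,x_2\rangle\cap\langle x_2,x_3\rangle=\langle x_2, x_1x_3\rangle,\]
and the generators $x_2$ and $x_1x_3$ correspond to the minimal vertex covers $\{x_2\}$ and $\{x_1,x_3\}$ of this path.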
Woodroofe \cite{W} defined the graph $G$ to be {\it vertex decomposable} if it is a totally
disconnected graph (with no edges) or if the following recursive conditions hold:\\
$(i)$ there is a vertex $v$ in $G$ such that $G\setminus v$ and $G\setminus N[v]$ are both vertex decomposable;\\
$(ii)$ no independent set in $G\setminus N[v]$ is a maximal independent set in $G\setminus v$.
The equality $\reg(R/I(G))=a(G)$ was proved in the following cases: $(i)$ $G$ is a tree graph; $(ii)$ $G$ is a chordal graph, where
the graph $G$ is called {\it chordal} if every cycle of length $>3$ has a chord;
$(iii)$ $G$ is a bipartite graph and unmixed;
$(iv)$ $G$ is a bipartite graph and sequentially Cohen-Macaulay;
$(v)$ $G$ is a very well-covered graph, where the graph $G$ is called
{\it very well-covered} if it is unmixed, has no isolated vertices, and $2\height(I(G))=\mid V\mid$;
$(vi)$ $G$ is a $C_5$-free vertex decomposable graph; $(vii)$ $G$ is an almost complete multipartite graph such that it is sequentially Cohen-Macaulay or unmixed. For details see \cite{Z, K, HV, V, MTS1, KM, H}.
Mahmoudi et al. in \cite[Question 4.11]{MTS} and in \cite[Question 4.13]{MTS1} raised the following question:
\begin{Question}
Let $G$ be a sequentially Cohen-Macaulay graph with $2n$ vertices which are not isolated and with $\hte (I(G)) = n$. Then do we have the following statements?
\begin{enumerate}
\item
$G$ has a vertex $v$ such that $\deg(v) = 1$.
\item
$G$ is vertex decomposable.
\item
$\reg (R/I(G)) = a(G)$.
\end{enumerate}
\end{Question}
In this paper we give a negative answer to this question by providing two counterexamples.
For every unexplained notion or terminology, we refer the reader to \cite{HH}.
\section{ Counterexamples}
We start this section by recalling the following definition:
\begin{Definition}
Let $I$ be a monomial ideal of $R$ all of whose generators have degree $d$. Then $I$ has a linear resolution if for all $i\geq 0$ and for all $j\neq i+d$, $\beta_{i,j}(I)=0$. In particular, $I$ has a linear resolution if and only if $\reg(I)=d$.
\end{Definition}
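For a quick example, meant only to illustrate this definition, consider the square-free monomial ideal $I=\langle x_1x_2, x_1x_3, x_2x_3\rangle$ of $S=K[x_1,x_2,x_3]$, generated in degree $d=2$. Its minimal graded free resolution is
\[0\longrightarrow S(-3)^{2}\longrightarrow S(-2)^{3}\longrightarrow I\longrightarrow 0,\]
so $\reg(I)=2=d$ and $I$ has a linear resolution.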
\begin{Lemma}(\cite[Lemma 2.3]{AMS})\label{L}
Let $I=\langle u_1,\ldots,u_m\rangle$ be a monomial ideal with $\deg(u_i)=d_i$ and $d_i\leq d_{i+1}$ for $1\leq i\leq m-1$. If $(I_i)$ has a linear resolution for all $i<d_m$ and $\reg(I)=d_m$, then $I$ is
componentwise linear.
\end{Lemma}
By the following example we show that Question 1.1$(1)$ and $(3)$ have negative answers:
\begin{Example}
Let $G$ be the following graph:
\[\begin{tikzpicture}
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x1) at (0,2) [label=above:$x_{1}$] {};
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x2) at (1,2) [label=above:$x_{2}$] {};
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x3) at (2,2) [label=above:$x_{3}$] {};
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x4) at (3,2) [label=above:$x_{4}$] {};
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x5) at (0,0) [label=below:$x_{5}$] {};
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x6) at (1,0) [label=below:$x_{6}$] {};
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x7) at (2,0) [label=below:$x_{7}$] {};
\pscircle[fillstyle=solid,fillcolor=black]{0.03}[fill] (x8) at (3,0) [label=below:$x_{8}$] {};
\path
(x1) edge (x5)
(x1) edge (x6)
(x1) edge (x7)
(x1) edge (x8)
(x2) edge (x5)
(x2) edge (x6)
(x2) edge (x7)
(x2) edge (x8)
(x3) edge (x6)
(x3) edge (x7)
(x4) edge (x6)
(x4) edge (x8)
(x7) edge (x8);
\end{tikzpicture}\]
Then we may consider the edge ideal
\[I=(x_{1}x_{5}, x_{1}x_{6}, x_{1}x_{7}, x_{1}x_{8}, x_{2}x_{5}, x_{2}x_{6}, x_{2}x_{7}, x_{2}x_{8}, x_{3}x_{6}, x_{3}x_{7}, x_{4}x_{6}, x_{4}x_{8}, x_{7}x_{8})\]
of $R=K[x_1, \ldots, x_{8}]$.
This ideal has the following primary decomposition
\begin{align*}
I=&(x_5, x_6, x_7, x_8) \cap( x_1, x_2, x_3, x_4, x_7) \cap (x_1, x_2, x_3, x_4, x_8) \cap (x_1, x_2, x_3, x_6, x_8) \\
&\cap (x_1, x_2, x_4, x_6, x_7) \cap (x_1, x_2, x_6, x_7, x_8).
\end{align*}
So $\hte(I)=4$ and \[I^{\vee}=(x_5x_6x_7x_8,x_1x_2x_3x_4x_7,x_1x_2x_3x_4x_8,x_1x_2x_3x_6x_8,x_1x_2x_4x_6x_7,x_1x_2x_6x_7x_8).\]
Hence by using Macaulay2 \cite{G}, we have $\reg(R/I)=2$ and
$\reg(I^{\vee})=5$. Therefore by Lemma \ref{L} it readily follows that
$G$ is sequentially Cohen-Macaulay.
One can easily check that for any two edges $\{x_i, x_j\}$ and $\{x_k, x_l\}$ of $G$ such that $i,j,l, k$ are different positive integers, the induced subgraph of $G$ on the vertices $\{x_i, x_j, x_k, x_l\}$ is connected. Therefore, $a(G)=1\ne\reg(R/I)$
giving a negative answer to Question 1.1$(3)$ and, in addition, $G$ does not have a vertex of degree $1$, contradicting Question 1.1$(1)$.
\end{Example}
Recall that a {\it circulant graph} is defined as follows: let $n\geq 1$ be an integer and let $S\subseteq\{1,\ldots,\lfloor\frac{n}{2}\rfloor\}$. The circulant graph $C_n(S)$ is the graph on $n$ vertices
$V=\{x_1,\ldots,x_n\}$ such that $\{x_i,x_j\}$ is an edge of $C_n(S)$ if and only if $\min\{\vert i-j\vert,n-\vert i-j\vert\}\in S$. For ease of notation, we write $C_n(a_1,\ldots,a_t)$ instead of $C_n(\{a_1,\ldots,a_t\})$, for more details see \cite{KMV}.
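For instance, $C_5(1)$ is the $5$-cycle and $C_5(1,2)$ is the complete graph $K_5$, while in the graph $C_{16}(1,4,8)$ considered below each vertex $x_i$ is adjacent to $x_{i\pm 1}$, $x_{i\pm 4}$ and $x_{i+8}$, with indices read modulo $16$.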
Let $\Delta$ be a simplicial complex on the vertex set $V = \{x_1,\ldots, x_n \}$. Members of $\Delta$ are called faces of $\Delta$ and a facet of $\Delta$ is a maximal face of $\Delta$ with respect to inclusion. The simplicial complex $\Delta$ is pure if every facet has the same cardinality. Also, the simplicial complex $\Delta$ with the facets $F_1,\dots, F_r$ is denoted by $\Delta=\langle{F_1,\dots,F_r}\rangle$. The simplicial complex $\Delta$ is called a {\it simplex} when it has a unique facet.
For the simplicial complex $\Delta$ and the face $F \in \Delta$, one can introduce two new simplicial complexes. The {\it deletion} of $F$ from $\Delta$ is $\del_{\Delta}(F) = \{ A \in \Delta \vert F\cap A=\emptyset\} $. The {\it link} of $F$ in $\Delta$ is $\lk_{\Delta}(F) =\{ A\in \Delta \vert F \cap A =\emptyset, A\cup F\in\Delta \}$. If $F =\{v \}$, we write $\del_{\Delta} v$ (resp. $\lk_{\Delta}v$) instead of $\del_{\Delta}(\{v\})$ (resp. $\lk_{\Delta}(\{v\})$); see \cite{HH} for details information.
The {\it Stanley-Reisner} ideal of $\Delta$ over $K$ is the ideal $I_{\Delta}$ of $R$ which is generated by those square-free monomials $x_F$ with $F \notin \Delta$, where $x_{ F} = \prod_{x_i \in F} x_{i}$. Let $I$ be an arbitrary square-free monomial ideal.
Then there is a unique simplicial complex $\Delta$ such that $I = I_{\Delta}$.
Following \cite{W} a simplicial complex $\Delta$ is recursively defined to be {\it vertex decomposable} if it is either a simplex or else has some vertex $v$ so that
$(i)$ both $\del_{\Delta} v$ and $\lk_{\Delta}v$ are vertex decomposable, and
$(ii)$ no face of $\lk_{\Delta}v$ is a facet of $\del_{\Delta} v$.
A simplicial complex $\Delta$ is {\it shellable} if the facets of $\Delta$ can be ordered, say $F_1,\ldots,F_s$, such that for all $1\leq i<j\leq s$, there exists some $x\in F_j\setminus F_i$ and some $k\in\{1,2,\ldots,j-1\}$ with $F_j\setminus F_k=\{x\}$. Hence if $\Delta$ is shellable with shelling order $F_1,\ldots,F_s$, then
for each $2\leq j\leq s$, the subcomplex $\langle F_1,\ldots,F_{j-1}\rangle\cap\langle F_j\rangle$ is pure of dimension $\dim F_j-1$, for details see \cite[Section 8.2]{HH}. The following implications hold:\\
vertex decomposable $\Longrightarrow$ shellable $\Longrightarrow$ sequentially Cohen-Macaulay.\\
Also, both implications are known to be strict.
The independence complex of the graph $G$ is defined by $\Ind(G)=\{F\subseteq V\mid F$ is an independent set in $G\}$. It is clear that $I(G)=I_{\Ind(G)}$. Let $v$ be a vertex of $G$. By \cite{H} we have the following relations:\\
$\del_{\Ind(G)} v=\Ind(G\setminus v)$ and $\lk_{\Ind(G)}v=\Ind(G\setminus N[v])$.
Therefore one can deduce that the graph $G$ is vertex decomposable if and only if the independence complex
$\Ind(G)$ is vertex decomposable.
\begin{Theorem}(\cite[Theorem 6.1 (iii)]{KMV})\label{T1}
The graph $C_{16}(1, 4, 8)$ is the smallest well-covered circulant that is shellable but not
vertex decomposable.
\end{Theorem}
By the following example we show that Question 1.1$(2)$ has a negative answer:
\begin{Example}
Let $I$ be an ideal of $R=K[x_1, \ldots, x_{26}]$ generated by the following monomials
\begin{center}
\centering
\scalebox{0.8}{%
\begin{tabular}{lllllllllllll}
$x_{16}x_{26}$ & $x_{15}x_{26}$ & $x_{13}x_{26}$ & $x_{12}x_{26}$ & $x_{10}x_{26}$ & $x_{8}x_{26}$ & $x_{7}x_{26}$ & $x_{6}x_{26}$ & $ x_{5}x_{26}$ & $x_{4}x_{26}$ & $ x_{3}x_{26}$ &$ x_{2}x_{26} $ & $ x_{1}x_{26}$\\
$x_{16}x_{25}$ & $x_{15}x_{25}$ & $x_{13}x_{25}$ & $x_{12}x_{25}$ & $x_{10}x_{25}$ & $x_{8}x_{25}$ & $x_{7}x_{25}$ & $x_{6}x_{25}$ & $ x_{5}x_{25}$ & $x_{4}x_{25}$ & $ x_{3}x_{25}$ &$ x_{2}x_{25} $ & $ x_{1}x_{25}$\\
$x_{16}x_{24}$ & $x_{15}x_{24}$ & $x_{13}x_{24}$ & $x_{12}x_{24}$ & $x_{10}x_{24}$ & $x_{8}x_{24}$ & $x_{7}x_{24}$ & $x_{6}x_{24}$ & $ x_{5}x_{24}$ & $x_{4}x_{24}$ & $ x_{3}x_{24}$ &$ x_{2}x_{24} $ & $ x_{1}x_{24}$\\
$x_{16}x_{23}$ & $x_{15}x_{23}$ & $x_{13}x_{23}$ & $x_{12}x_{23}$ & $x_{10}x_{23}$ & $x_{8}x_{23}$ & $x_{7}x_{23}$ & $x_{6}x_{23}$ & $ x_{5}x_{23}$ & $x_{4}x_{23}$ & $ x_{3}x_{23}$ &$ x_{2}x_{23} $ & $ x_{1}x_{23}$\\
$x_{16}x_{22}$ & $x_{15}x_{22}$ & $x_{13}x_{22}$ & $x_{12}x_{22}$ & $x_{10}x_{22}$ & $x_{8}x_{22}$ & $x_{7}x_{22}$ & $x_{6}x_{22}$ & $ x_{5}x_{22}$ & $x_{4}x_{22}$ & $ x_{3}x_{22}$ &$ x_{2}x_{22} $ & $ x_{1}x_{22}$\\
$x_{16}x_{21}$ & $x_{15}x_{21}$ & $x_{13}x_{21}$ & $x_{12}x_{21}$ & $x_{10}x_{21}$ & $x_{8}x_{21}$ & $x_{7}x_{21}$ & $x_{6}x_{21}$ & $ x_{5}x_{21}$ & $x_{4}x_{21}$ & $ x_{3}x_{21}$ &$ x_{2}x_{21} $ & $ x_{1}x_{21}$\\
$x_{16}x_{20}$ & $x_{15}x_{20}$ & $x_{13}x_{20}$ & $x_{12}x_{20}$ & $x_{10}x_{20}$ & $x_{8}x_{20}$ & $x_{7}x_{20}$ & $x_{6}x_{20}$ & $ x_{5}x_{20}$ & $x_{4}x_{20}$ & $ x_{3}x_{20}$ &$ x_{2}x_{20} $ & $ x_{1}x_{20}$\\
$x_{16}x_{19}$ & $x_{15}x_{19}$ & $x_{13}x_{19}$ & $x_{12}x_{19}$ & $x_{10}x_{19}$ & $x_{8}x_{19}$ & $x_{7}x_{19}$ & $x_{6}x_{19}$ & $ x_{5}x_{19}$ & $x_{4}x_{19}$ & $ x_{3}x_{19}$ &$ x_{2}x_{19} $ & $ x_{1}x_{19}$\\
$x_{16}x_{18}$ & $x_{15}x_{18}$ & $x_{13}x_{18}$ & $x_{12}x_{18}$ & $x_{10}x_{18}$ & $x_{8}x_{18}$ & $x_{7}x_{18}$ & $x_{6}x_{18}$ & $ x_{5}x_{18}$ & $x_{4}x_{18}$ & $ x_{3}x_{18}$ &$ x_{2}x_{18} $ & $ x_{1}x_{18}$\\
$x_{16}x_{17}$ & $x_{15}x_{17}$ & $x_{13}x_{17}$ & $x_{12}x_{17}$ & $x_{10}x_{17}$ & $x_{8}x_{17}$ & $x_{7}x_{17}$ & $x_{6}x_{17}$ & $ x_{5}x_{17}$ & $x_{4}x_{17}$ & $ x_{3}x_{17}$ &$ x_{2}x_{17} $ & $ x_{1}x_{17}$\\
$x_{15}x_{16}$ & $x_{12}x_{16}$ & $x_{8}x_{16}$ & $x_{4}x_{16}$ & $x_{1}x_{16}$ & $x_{14}x_{15}$ & $x_{11}x_{15}$ & $x_{7}x_{15}$ & $ x_{3}x_{15}$ & $x_{13}x_{14}$ & $ x_{10}x_{14}$ &$ x_{6}x_{14} $ & $ x_{2}x_{14}$\\
$x_{12}x_{13}$ & $x_{9}x_{13}$ & $x_{5}x_{13}$ & $x_{1}x_{13}$ & $x_{11}x_{12}$ & $x_{8}x_{12}$ & $x_{4}x_{12}$ & $x_{10}x_{11}$ & $ x_{7}x_{11}$ & $x_{3}x_{11}$ & $ x_{9}x_{10}$ &$ x_{6}x_{10} $ & $ x_{2}x_{10}$\\
$x_{8}x_{9}$ & $x_{5}x_{9}$ & $x_{1}x_{9}$ & $x_{7}x_{8}$ & $x_{4}x_{8}$ & $x_{6}x_{7}$ & $x_{3}x_{7}$ & $x_{5}x_{6}$ & $ x_{2}x_{6}$ & $x_{4}x_{5}$ & $ x_{1}x_{5}$ &$ x_{3}x_{4} $ & $ x_{2}x_{3}$\\
$x_1 x_2$
\end{tabular}}
\end{center}
The ideal $I$ is an edge ideal of a graph, say $G$.
This ideal has the form
\[I=(J,x_{17}, x_{18}, \cdots, x_{26}) \cap (x_1, \cdots, x_8, x_{10}, x_{12}, x_{13}, x_{15}, x_{16}),\]
where $J$ is the edge ideal of circulant graph $C_{16}(1, 4, 8)$.
This ideal has the following primary decomposition
\begin{align*}
I=\mathop \cap \limits_{i = 1}^{80} {(\frk{p}_i , x_{17}, x_{18}, \cdots, x_{26})} \cap (x_1, \cdots, x_8, x_{10}, x_{12}, x_{13}, x_{15}, x_{16});
\end{align*}
where $\frk{p_{i}}$ for $1 \leq i \leq 80$ is an associated prime of circulant graph $C_{16}(1, 4, 8)$.
Therefore $\hte (I)=13$ and
the simplicial complex $\Ind(G)$ has $81$ facets as follows:\\
\scalebox{0.69}{%
~ $F_{0}=\{ x_{9}, x_{11}, x_{14}, x_{17},x_{18},x_{19},x_{20},x_{21},x_{22},x_{23},x_{24},x_{25},x_{26}\},$ }\\
\scalebox{0.66}{%
\begin{tabular}{lllll}
$F_{1}=\{ x_9, x_{11}, x_{14}, x_{16} \},$ & $F_{2}=\{ x_5, x_{11}, x_{14}, x_{16} \},$ & $F_{3}=\{ x_7, x_{9}, x_{14}, x_{16} \},$ & $F_{4}=\{ x_3, x_{9}, x_{14}, x_{16} \},$ & $F_{5}=\{ x_5, x_{7}, x_{14}, x_{16} \},$ \\
$F_{6}=\{ x_3, x_{5}, x_{14}, x_{16} \},$ & $F_{7}=\{ x_6, x_{9}, x_{11}, x_{16} \},$ & $F_{8}=\{ x_5, x_{7}, x_{10}, x_{16} \},$ & $F_{9}=\{ x_2, x_{5}, x_{11}, x_{16} \},$ & $F_{10}=\{ x_2, x_{9}, x_{11}, x_{16} \},$ \\
$F_{11}=\{ x_2, x_{7}, x_{13}, x_{16} \},$ & $F_{12}=\{ x_7, x_{10}, x_{13}, x_{16} \},$ & $F_{13}=\{ x_2, x_{11}, x_{13}, x_{16} \},$ & $F_{14}=\{ x_6, x_{11}, x_{13}, x_{16} \},$ & $F_{15}=\{ x_3, x_{5}, x_{10}, x_{16} \},$ \\
$F_{16}=\{ x_3, x_{10}, x_{13}, x_{16} \},$ & $F_{17}=\{ x_3, x_{6}, x_{13}, x_{16} \},$ & $F_{18}=\{ x_2, x_{7}, x_{9}, x_{16} \},$ & $F_{19}=\{ x_3, x_{6}, x_{9}, x_{16} \},$ & $F_{20}=\{ x_2, x_{5}, x_{7}, x_{16} \},$ \\
$F_{21}=\{ x_7, x_{9}, x_{12}, x_{14} \},$ & $F_{22}=\{ x_1, x_{4}, x_{10}, x_{15} \},$ & $F_{23}=\{ x_1, x_{8}, x_{10}, x_{15} \},$ & $F_{24}=\{ x_5, x_{8}, x_{10}, x_{15} \},$ & $F_{25}=\{ x_1, x_{10}, x_{12}, x_{15} \},$ \\
$F_{26}=\{ x_4, x_{10}, x_{13}, x_{15} \},$ & $F_{27}=\{ x_8, x_{10}, x_{13}, x_{15} \},$ & $F_{28}=\{ x_5, x_{10}, x_{12}, x_{15} \},$ & $F_{29}=\{ x_3, x_{9}, x_{12}, x_{14} \},$ & $F_{30}=\{ x_3, x_{8}, x_{10}, x_{13} \},$ \\
$F_{31}=\{ x_3, x_{5}, x_{8}, x_{14} \},$ & $F_{32}=\{ x_5, x_{8}, x_{11}, x_{14} \},$ & $F_{33}=\{ x_6, x_{8}, x_{11}, x_{13} \},$ & $F_{34}=\{ x_6, x_{8}, x_{13}, x_{15} \},$ & $F_{35}=\{ x_4, x_{6}, x_{13}, x_{15} \},$ \\
$F_{36}=\{x_2, x_{8}, x_{13}, x_{15} \},$ & $F_{37}=\{x_2, x_{8}, x_{11}, x_{13} \},$ & $F_{38}=\{x_1, x_{4}, x_{6}, x_{15} \},$ & $F_{39}=\{ x_4, x_{6}, x_{9}, x_{15} \},$ & $F_{40}=\{ x_6, x_{9}, x_{12}, x_{15} \},$ \\
$F_{41}=\{ x_1, x_{6}, x_{12}, x_{15} \},$ & $F_{42}=\{ x_1, x_{6}, x_{8}, x_{15} \},$ & $F_{43}=\{ x_2, x_{4}, x_{13}, x_{15} \},$ & $F_{44}=\{ x_2, x_{9}, x_{12}, x_{15} \},$ & $F_{45}=\{ x_2, x_{4}, x_{9}, x_{15} \},$ \\
$F_{46}=\{ x_4, x_{6}, x_{11}, x_{13} \},$ & $F_{47}=\{ x_4, x_{9}, x_{11}, x_{14} \},$ & $F_{48}=\{ x_4, x_{7}, x_{9}, x_{14} \},$ & $F_{49}=\{ x_2, x_{4}, x_{11}, x_{13} \},$ & $F_{50}=\{x_5, x_{7}, x_{10}, x_{12} \},$ \\
$F_{51}=\{x_1, x_{3}, x_{8}, x_{14} \},$ & $F_{52}=\{x_1, x_{8}, x_{11}, x_{14} \},$ & $F_{53}=\{ x_1, x_{3}, x_{12}, x_{14} \},$ & $F_{54}=\{ x_1, x_{7}, x_{12}, x_{14} \},$ & $F_{55}=\{ x_1, x_{7}, x_{10}, x_{12}\},$ \\
$F_{56}=\{ x_3, x_{6}, x_{8}, x_{13} \},$ & $F_{57}=\{ x_5, x_{7}, x_{12}, x_{14} \},$ & $F_{58}=\{ x_3, x_{5}, x_{12}, x_{14} \},$ & $F_{59}=\{ x_3, x_{5}, x_{10}, x_{12} \},$ & $F_{60}=\{ x_1, x_{3}, x_{10}, x_{12} \},$ \\
$F_{61}=\{x_2, x_7, x_9, x_{12} \},$ & $F_{62}=\{ x_3, x_{6}, x_{9}, x_{12} \},$ & $F_{63}=\{x_2, x_{5}, x_{7}, x_{12} \},$ & $F_{64}=\{ x_2, x_{5}, x_{8}, x_{11}\},$ & $F_{65}=\{x_1, x_{6}, x_{8}, x_{11} \},$ \\
$F_{66}=\{x_2, x_{4}, x_{9}, x_{11} \},$ & $F_{67}=\{ x_4, x_{6}, x_{9}, x_{11} \},$ & $F_{68}=\{ x_1, x_{3}, x_{6}, x_{12}\},$ & $F_{69}=\{ x_2, x_{5}, x_{8}, x_{15} \},$ & $F_{70}=\{x_2, x_{5}, x_{12}, x_{15} \},$ \\
$F_{71}=\{ x_1, x_{4}, x_{6}, x_{11} \},$ & $F_{72}=\{ x_1, x_{4}, x_{11}, x_{14} \},$ & $F_{73}=\{ x_1, x_{4}, x_{7}, x_{14} \},$ & $F_{74}=\{ x_3, x_{5}, x_{8}, x_{10} \},$ & $F_{75}=\{ x_1, x_{3}, x_{8}, x_{10} \},$ \\
$F_{76}=\{ x_1, x_{4}, x_{7}, x_{10} \},$ & $F_{77}=\{ x_4, x_{7}, x_{10}, x_{13} \},$ & $F_{78}=\{ x_1, x_{3}, x_{6}, x_{8} \},$ & $F_{79}=\{ x_2, x_{4}, x_{7}, x_{9} \},$ & $F_{80}=\{ x_2, x_{4}, x_{7}, x_{13} \}$ \\
\end{tabular}}
\\
By the proof of Theorem \ref{T1}, we have that $F_{1}, \ldots, F_{80}$ is a shelling order of $\Ind(C_{16}(1, 4, 8))$ and the graph $C_{16}(1, 4, 8)$ is the smallest well-covered circulant that is shellable but not
vertex decomposable. We claim that $F_{0}, F_{1}, \ldots, F_{80}$ is a shelling order of $\Ind(G)$.
Since $F_{1}, \ldots, F_{80} $ is a shelling order, it is enough to show that for each $i$, there exists some $v \in F_{i} \setminus F_{0}$ and some $k < i$ such that $F_{i} \setminus F_{k} =\{ v\}$. If $i=1$, then it is clear
$F_{1} \setminus F_{0} =\{ x_{16}\}$. Now we assume that $1\ne i\leq 80$. Since $F_{i} \setminus F_{1}\subseteq
F_{i} \setminus F_{0}$, we may choose $v \in F_{i} \setminus F_{1}$ and so there exists some $1\leq k < i$ such that $F_{i} \setminus F_{k} =\{ v\}$. Therefore $\Ind(G)$ is shellable and so $G$ is sequentially Cohen-Macaulay.
Now, we claim that for each element $x_t $ with $1\leq t\leq 26$, $\del_{\Ind(G)} (x_t)$ is not vertex decomposable.
If $x_{t} \in \{x_9, x_{11}, x_{14}, x_{17},\ldots,x_{26} \}$, then by using the definition on the above facets it is obvious that $\del_{\Ind(G)} (x_t)$ has a facet, say $F^{\prime}$, such that $F^{\prime} \ne F_{i}$ for $0 \leq i \leq 80$, and in this case $\del_{\Ind(G)} (x_t)$ is not vertex decomposable. For the remaining claim, we assume that $x_{t} \in \{ x_{1}, \ldots, x_{8}, x_{10}, x_{12}, x_{13}, x_{15}, x_{16}\}$ and we will show that $\del_{\Ind(G)} (x_t)$ is not shellable and so it is not vertex decomposable.
Suppose, to the contrary, that $\del_{\Ind(G)} (x_t)$ is shellable, and consider a shelling order $F_0=F_{s_0}, F_{s_1}, \ldots, F_{s_r}$. By this shelling order we have $F_{0}=(F_{s_1}\setminus \{x_{m} \})\cup \{ x_{17},\ldots,x_{26}\}$ for some $x_m \in F_{s_1}$, and for all $i$ and $j <i$ there exists $x_{l} \in F_{s_i} \setminus F_{s_j}$ and $k < i$ such that $F_{s_i} \setminus F_{s_k} =\{ x_{l} \}$. By this assumption we claim that
$F_{s_1}, \ldots, F_{s_r}$ is a shelling order as well; for this it is enough to treat the case in which the witness index $k$ satisfies
$F_{s_k}=F_{0}$. In this case $F_{s_i}=(F_{0}\setminus \{ x_{17},\ldots,x_{26}\})\cup \{x_{l} \}=\{ x_{9}, x_{11}, x_{14}, x_{l} \}$. We may assume $F_{s_i}\ne F_{s_1}$. Since $F_{s_i}=\{ x_{9}, x_{11}, x_{14}, x_{l} \}$
and $F_{s_1}=\{x_9,x_{11},x_{14},x_m\}$, we have $F_{s_i}\setminus F_{s_1}=\{x_l\}$.
It therefore follows that $ F_{s_1}, \ldots, F_{s_r}$ is a shelling order. Hence $\del_{\Ind(C_{16}(1, 4, 8))} (x_t)=\langle F_{s_1}, \ldots, F_{s_r} \rangle$ and this means that $\del_{\Ind(C_{16}(1, 4, 8))} (x_t)$ is pure shellable and Cohen-Macaulay.
This contradicts the proof of Theorem \ref{T1}. Thus $\del_{\Ind(G)} (x_t)$ is not shellable, and so $G$ is not vertex decomposable. Hence we have constructed a sequentially Cohen-Macaulay graph with $26$ vertices such that $\hte(I)=13$ which is not vertex decomposable.
\end{Example}
{\bf Acknowledgments:} The authors are indebted to Adam Van Tuyl for suggestions and many valuable comments.
We also thank the referee for a careful reading of the paper and for the improvements suggested.
\section{Annotation Interface}
In this section, we show the interface and instructions for engagement
annotation on Amazon Mechanical Turk. We include the full instructions and
annotation interface details in order to help reviewers evaluate the care with
which we collected the ground truth annotations.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth,clip,trim={0 4.2cm 0 0}]{interface.pdf}
\caption{Screen shot of annotation interface.}
\end{figure*}
\subsection{Task Description}\mbox{}
George is wearing a camera on his head. The camera captures video constantly as George goes about his daily life. Because the camera is on his head, when George moves his head to look around, the camera moves too. Basically, it captures the world just as George sees it.
Your job is to watch a video excerpt from George's camera that lasts 1-2 minutes, and determine when \textbf{something in the environment has captured George's attention}. You will first watch the entire video. Then you will go back and use a slider to navigate through the video frames and mark the intervals (start and end points) where he is paying close attention to something. \textbf{Note, the video may have more than one interval where George is paying close attention to something}.
\paragraph{Definition of Attention}\mbox{}
The following instructions will describe what we mean by ``capturing George's attention'' in more detail:
Human cognition involves different levels of attention to the surrounding environment. For example, people pay very little attention to their surroundings when they are walking on a route they are familiar with, but the attention level will rise significantly if there are unusual events (such as a car accident), if something attracts their curiosity (such as a new advertisement on the wall), or if they want to inspect something more closely (such as a product on the shelf when shopping). You are asked to identify these ``high attention intervals'' in the video.
\textbf{In particular, we ask you to identify intervals where George's attention is focussed on an object or a specific location in the scene.}
During these intervals, George is attracted by an object and tries to have a better view/understanding about it intentionally. In general, George may:
\begin{itemize}
\item Have a closer look at the object
\item Inspect the object from different views
\item Stare at the object
\end{itemize}
In some situations, George may even interact physically with the object capturing his attention to gather more information. For example, he may grab the object to have a closer view of it, or he may turn the object to inspect it from different views. To identify these situations, we also ask you to annotate \textbf{whether George touched the object} capturing his attention during the interval.
The following video shows examples of attention interval:
\emph{please refer to the video on our project webpage}.
\paragraph{Important Notes}
\begin{itemize}
\item You should watch the entire video (3 minutes) first before doing any annotation. This will give you the context of the activity to know when George is paying close attention.
\item A video may contain \textbf{multiple or no} intervals where George's attention is captured. You should label each one separately. The intervals are mutually exclusive and should not overlap.
\item Each interval where George's attention is captured may vary in length. Some could be a couple seconds long, others could be closer to a minute long. The minimum length of each interval is 15 frames (1 second).
\item You may need to scroll back and forth in the video using our slider interface to determine exactly when the attention starts and stops. Mark the interval as tightly as possible.
\item After labeling where an attention interval starts and ends, you will mark whether George has physical contact (grab, touch, etc.) with the object during the interval or is just looking at it.
\item You will also mark your confidence in terms of how strongly George's attention was captured in that interval (Obvious, Fairly clear, Subtle).
\end{itemize}
\subsection{Interface Introduction}\mbox{}
The following introduction will give you tips on how to best use the tool. Please watch the below video (and/or read the below section) for instructions:
\emph{please refer to the video on our project webpage}.
\paragraph{Getting Started}
\begin{itemize}
\item Press the \textbf{Play} button to play the video.
\item After the video finishes, press the \textbf{Rewind} button and start annotating.
\begin{figure}[H]
\centering
\includegraphics[width=.5\linewidth]{rewind.png}
\end{figure}
\item Play the video and \textbf{Pause} it when you reach the frame at the beginning of a high attention interval.
\item Click the \textbf{Start} button to mark the ``Start'' of the interval.
\begin{figure}[H]
\centering
\includegraphics[width=.5\linewidth]{intervalstart.png}
\end{figure}
\item On the right, directly below the Start button, you will find a colorful box showing the frame number corresponding to the `Start' of the interval.
\item Similarly, click the \textbf{End} button to mark the ``End'' of the interval.
\item After you mark the end of the interval, you will be asked whether George contacts (grabs, touches, etc.) the object that captures his attention.
\item Next, you will be asked how obvious the attention interval is. Specify whether the interval is \textbf{Obvious, Fairly clear, Subtle}.
\begin{figure}[H]
\centering
\includegraphics[width=.5\linewidth]{attentionlevel.png}
\end{figure}
\item Finally, you will be asked to describe what attracts George's attention. Type in what attracts George's attention (object, scene, event, etc.) and \textbf{Submit} the interval.
\begin{figure}[H]
\centering
\includegraphics[width=.5\linewidth]{attentiondescription.png}
\end{figure}
\item When you are ready to submit your work, rewind the video and watch it through one more time. Do the ``Start'' and ``End'' you specified cover the complete high attention interval? After you have checked your work, press the \textbf{Submit HIT} button. We will pay you as soon as possible.
\item Do \textbf{not} reload or close the page before being redirected to the next HIT. This may cause submission failure.
\end{itemize}
\paragraph{How We Accept Your Work}\mbox{}
We will hand review your work and we will only accept high quality work. Your annotations are not compared against other workers.
\paragraph{Keyboard Shortcuts}\mbox{}
These keyboard shortcuts are available for your convenience:
\begin{itemize}
\item \textbf{t} toggles play/pause on the video
\item \textbf{r} rewinds the video to the start
\item \textbf{d} jump the video forward a bit
\item \textbf{f} jump the video backward a bit
\item \textbf{v} step the video forward a tiny bit
\item \textbf{c} step the video backward a tiny bit
\end{itemize}
\subsection{Initial frame-wise estimates}
\label{sec:framewise_estimation}
To first compute frame-wise predictions, we construct one motion descriptor per
frame. We divide the frame into a grid of $16 \times 12$ uniform cells and
compute the optical flow vector in each cell. Then we temporally smooth the
grid motion with a Gaussian kernel. Since at this stage we want to capture
attention within a granularity of a second, we set the width of the kernel to
two seconds. As shown in~\cite{poleg2014cvpr}, smoothing the flow is valuable
to integrate out the regular unstable head bobbles by the recorder; it helps
the descriptor focus on prominent scene and camera motion. The frame
descriptor consists of the smoothed flow vectors concatenated across cells,
together with the mean and standard deviation of all cells in the frame. It
captures dominant egomotion and dynamic scene motion---both of which are
relevant to first-person engagement.
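For concreteness, a minimal Python sketch of this frame descriptor is given
below. The $16 \times 12$ grid and the two-second smoothing window follow the
description above, and the 15~fps frame rate matches the annotation protocol;
the use of \texttt{scipy}'s Gaussian filter and the exact kernel
parameterization are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def frame_descriptors(flow, grid=(16, 12), fps=15, smooth_sec=2.0):
    # flow: (T, H, W, 2) dense optical flow for T frames.
    T, H, W, _ = flow.shape
    gx, gy = grid
    cells = np.zeros((T, gy, gx, 2))
    ys = np.linspace(0, H, gy + 1, dtype=int)
    xs = np.linspace(0, W, gx + 1, dtype=int)
    for i in range(gy):              # average the flow in each grid cell
        for j in range(gx):
            patch = flow[:, ys[i]:ys[i+1], xs[j]:xs[j+1], :]
            cells[:, i, j, :] = patch.mean(axis=(1, 2))
    # Temporal Gaussian smoothing, about two seconds wide.
    cells = gaussian_filter1d(cells, sigma=smooth_sec * fps / 2.0, axis=0)
    flat = cells.reshape(T, -1)
    mean = cells.reshape(T, -1, 2).mean(axis=1)   # per-frame cell mean
    std = cells.reshape(T, -1, 2).std(axis=1)     # per-frame cell std
    return np.concatenate([flat, mean, std], axis=1)
\end{verbatim}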
We use these descriptors, together with the frame-level ground truth
(cf.~Sec.~\ref{sec:datacollection}), to train an i.i.d. classifier. We use
random forest classifiers due to their test-time efficiency and relative
insensitivity to hyper-parameters, though of course other classifiers are
possible. Given a test video, the confidence (posterior) output by the random
forest is used as the initial frame-wise engagement estimate.
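A sketch of this step with \texttt{scikit-learn} is shown below; the number of
trees is an illustrative choice, and \texttt{X\_train}, \texttt{y\_train} and
\texttt{X\_test} stand for the frame descriptors and frame-level labels
described above.
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier

def train_frame_classifier(X, y, n_trees=100):
    # X: one motion descriptor per frame; y: binary engagement labels.
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    return clf.fit(X, y)

# Initial frame-wise estimate for a test video:
#   clf = train_frame_classifier(X_train, y_train)
#   frame_conf = clf.predict_proba(X_test)[:, 1]
\end{verbatim}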
\subsection{Generating interval proposals}
\label{sec:candidategeneration}
After obtaining the preliminary estimate for each frame, we generate multiple
hypotheses for engagement \emph{intervals} using a level set method as follows.
For a given threshold on the frame-based confidence, we obtain a set of
positive intervals, where each positive interval consists of contiguous frames
whose confidence exceeds the threshold. By sweeping through all possible
thresholds (we use the deciles of the confidence values), we generate multiple such sets of candidates.
Candidates from all thresholds are pooled together to form a final set of
\emph{interval proposals}.
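This candidate generation step can be sketched as follows; the 15-frame
minimum length matches the annotation protocol, and returning half-open
\texttt{(start, end)} index pairs is an implementation choice.
\begin{verbatim}
import numpy as np

def interval_proposals(frame_conf, min_len=15):
    # Sweep the decile thresholds of the frame-wise confidences and
    # collect every contiguous run of frames above each threshold.
    proposals = set()
    for thr in np.percentile(frame_conf, np.arange(10, 100, 10)):
        above = np.concatenate(([0], (frame_conf > thr).astype(int), [0]))
        edges = np.flatnonzero(np.diff(above))
        for s, e in zip(edges[0::2], edges[1::2]):
            if e - s >= min_len:
                proposals.add((int(s), int(e)))  # end index is exclusive
    return sorted(proposals)
\end{verbatim}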
We apply this candidate generation process on both training data and test data.
During training, it yields both positive and negative example intervals that we
use to train an interval-level classifier (described next). During testing, it
yields the hypotheses to which the classifier should be applied. This
detection paradigm not only lets us avoid sliding temporal window search, but
it also allows us to detect engagement intervals of variable length.
\subsection{Describing and classifying intervals}\label{sec:interval}
For each interval proposal, we generate a motion descriptor that captures both
the motion distribution and evolution over time. Motion evolution is important
because a recorder usually performs multiple actions within an interval of
engagement. For example, the recorder may stop, turn his head to stare at an
object, reach out to touch it, then turn back to resume walking. Each action
leads to a different motion pattern. Thus, unlike the temporally local
frame-based descriptor above, here we aim to capture the statistics of the
entire interval. We'd also like the representation to be robust to time-scale
variations (i.e., yielding similar descriptors for long and short instances of
the same activity).
To this end, we use a temporal pyramid representation. For each level of the
pyramid, we divide the interval from the previous level into two equal-length
sub-intervals. For each sub-interval, we aggregate the frame motion computed
in Sec.~\ref{sec:framewise_estimation} by taking the dimension-wise mean and
variance. So, the top level aggregates the motion of the entire interval, and
its descendants aggregate increasingly finer time-scale intervals. The
aggregated motion descriptors from all sub-intervals are concatenated to form a
temporal pyramid descriptor. We use 3-level pyramids. To provide further
context, we augment this descriptor with those of its temporal neighbor
intervals (i.e., before and after). This captures the motion \emph{change}
from low engagement to high engagement and back.
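The interval descriptor can be sketched as below. The three pyramid levels and
the mean/variance aggregation follow the text; using temporal neighbors of the
same length as the proposal (clipped to the video boundaries) is an
assumption, since only their presence is specified above.
\begin{verbatim}
import numpy as np

def temporal_pyramid(frame_desc, start, end, levels=3):
    # Split [start, end) into 1, 2 and 4 sub-intervals and summarize
    # each with the dimension-wise mean and variance of its frames.
    parts = []
    for level in range(levels):
        for k in range(2 ** level):
            a = start + (end - start) * k // (2 ** level)
            b = start + (end - start) * (k + 1) // (2 ** level)
            seg = frame_desc[a:max(b, a + 1)]
            parts += [seg.mean(axis=0), seg.var(axis=0)]
    return np.concatenate(parts)

def interval_descriptor(frame_desc, start, end):
    # Pyramid of the proposal plus pyramids of its temporal neighbors.
    T, length = len(frame_desc), max(end - start, 1)
    before = temporal_pyramid(frame_desc, max(0, start - length),
                              max(start, 1))
    middle = temporal_pyramid(frame_desc, start, end)
    after = temporal_pyramid(frame_desc, min(end, T - 1),
                             min(end + length, T))
    return np.concatenate([before, middle, after])
\end{verbatim}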
We train a random forest classifier using this descriptor and the interval
proposals from the training data, this time referring to the interval-level
ground truth from Sec.~\ref{sec:datacollection}. At test time, we apply this
classifier to a test video's interval proposals to score each one. If a frame
is covered by multiple interval proposals, we take the highest confidence score
as the final prediction per frame.
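These last two steps can be sketched as follows, reusing
\texttt{interval\_descriptor} from the previous sketch; the exact rule
assigning positive and negative labels to training proposals is left abstract
here.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_interval_classifier(X_int, y_int, n_trees=100):
    # X_int: one interval descriptor per training proposal;
    # y_int: 1 if the proposal matches a ground-truth interval, else 0.
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    return clf.fit(X_int, y_int)

def framewise_prediction(clf, frame_desc, proposals):
    # Score each proposal; every frame keeps the highest confidence
    # among the proposals that cover it.
    scores = np.zeros(len(frame_desc))
    for s, e in proposals:
        x = interval_descriptor(frame_desc, s, e)[None, :]
        scores[s:e] = np.maximum(scores[s:e], clf.predict_proba(x)[0, 1])
    return scores
\end{verbatim}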
\subsection{Discussion}
Our method design is distinct from previous work in video \emph{attention},
which typically operates per frame and uses temporally local measurements of
motion~\cite{nakamura2000pr,cheatle2004icpr,rudoy2013cvpr,nguyen2013mm,ejaz-2013,han2014neurocomputing}.
In contrast, we estimate engagement from interval hypotheses bootstrapped from
initial frame estimates, and our representation captures motion changes over
time at multiple scales. People often perform multiple actions during an
engagement interval, which is well-captured by considering an interval
together. For example, it is hard to tell whether the recorder is attracted by
an object when we only know he glances at it, but it becomes clear if we know
his following action is to turn to the object or to turn away quickly.
Simply flagging periods of low
motion~\cite{rallapalli-mobicom2014,poleg2014cvpr,cheatle2004icpr} is
insufficient to detect all cases of heightened attention, since behaviors
during the interval of engagement are often non-static and also exhibit
learnable patterns. For example, shoppers move and handle objects they might
buy; people sway while inspecting a painting; they look up and sweep their gaze
downward when inspecting a skyscraper.
External sensors beyond the video stream could potentially provide cues useful
to our task, such as inertial sensors to detect recorder motion and head
orientation. However, such sensors are not always available, and they are
quite noisy in practice. In fact, recent attempts to detect gazing behavior
with inertial sensors alone yield false positive rates of
33\%~\cite{rallapalli-mobicom2014}. This argues for the need for visual
features for the challenging engagement detection task.
\section{Introduction}
\input{introduction_kg}
\section{Related Work}
\input{related_kg.tex}
\section{First-Person Engagement: Definition and Data}
Next we define first-person engagement. Then we describe our data collection procedure, and quantitatively
analyze the consistency of the resulting annotations. We introduce our
approach for predicting engagement intervals in Sec.~\ref{sec:approach}.
\subsection{Definition of first-person engagement}
\label{sec:definition}
\input{temporalattention}
\subsection{Data collection}
\label{sec:datacollection}
\input{datacollection}
\subsection{Evaluating data consistency}
\input{dataanalysis}
\section{Approach}
\label{sec:approach}
\input{approach}
\section{Experiments}
\subsection{Experiment Setting}
We validate on two datasets and compare to many existing methods.
\input{baseline}
\input{implementation}
\subsection{UT Egocentric Engagement (UT EE) dataset}
\input{temporalattention_result}
\subsection{UT Egocentric dataset}
\input{utego}
\subsection{Start point correctness}
\label{sec:startpoint_result}
\input{startpoint}
\section{Conclusion}
\input{conclusion}
\section*{Acknowledgements}
We thank the friends and labmates who assisted us with data collection. This
research is supported in part by ONR YIP N00014-12-1-0754 and a gift from
Intel.
\IEEEpeerreviewmaketitle
\appendices
\input{appendix}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\subsection{Third-person image and video saliency}
Researchers often equate human gaze fixations as the gold standard with which a
\emph{saliency} metric ought to correlate~\cite{itti-motion2003,harel2006nips}.
There is increasing interest in estimating saliency from video. Initial
efforts examine simple motion cues, such as frame-based motion and
flicker~\cite{itti-motion2003,liu2008mm,seo-2009}. One common approach to
extend spatial (image) saliency to the video domain is to sum image saliency
scores within a temporal segment, e.g.,~\cite{ma2002mm}. Most methods are
unsupervised and entail no
learning~\cite{itti-motion2003,itti-2009,liu2008mm,mahadevan-pami2010,abdollahian2010tmm,rahtu-eccv2010,seo-2009}.
However, some recent work develops learned measures, using ground truth gaze
data as the target
output~\cite{kienzle-dagm2007,lee-tip2011,rudoy2013cvpr,nguyen2013mm,han2014neurocomputing}.
Our problem setting is quite different than saliency. Saliency aims to
\emph{predict viewer attention} in terms of where in the frame a third party is
likely to fixate his gaze; it is an image property analyzed independent of the
behavior of the person recording the image. In contrast, we aim to
\emph{detect recorder engagement} in terms of when (which time intervals) the
recorder has paused to examine something in his
environment.\footnote{Throughout, we will use the term ``recorder'' to refer to
the
photographer or the first-person camera-wearer; we use the term ``viewer'' to
refer to a third party who is observing the data captured by some other
recorder.} Accounting for
this distinction is crucial, as we will see in results. Furthermore, prior
work in video saliency is evaluated on short video clips (e.g., on the order of
10 seconds~\cite{dorr2010jvis}), which is sufficient to study gaze movements.
In contrast, we evaluate on long sequences---30 minutes on average per clip,
and a total of 14 hours---in order to capture the broad context of ego-behavior
that affects engagement in browsing scenarios.
\subsection{Third-person video summarization}
In video summarization, the goal is to form a concise representation for a long
input video. Motion cues can help detect ``important'' moments in third-person
video~\cite{kender2000accv,ma2002mm,li2012bmvc,ejaz-2013,gygli-eccv2014},
including temporal differences~\cite{ejaz-2013} and cues from active camera
control~\cite{kender2000accv,li2012bmvc,gygli-eccv2014}. Whereas prior methods
try to extract what will be interesting to a third-party viewer, we aim to
capture \emph{recorder} engagement.
\subsection{First-person video saliency and gaze}
Researchers have long expected that ego-attention detection requires methods
distinct from bottom-up saliency~\cite{hp}. In fact, traditional motion
saliency can actually \emph{degrade} gaze prediction for first-person
video~\cite{yamada2010accv}. Instead, it is valuable to separate out camera
motion~\cite{yamada2011psivt} or use head motion and hand locations to predict
gaze~\cite{li2013iccv}. Whereas these methods aim to predict spatial
coordinates of a recorder's gaze at every frame, we aim to predict time
intervals where his engagement is heightened. Furthermore, whereas they study
short sequences in a lab~\cite{yamada2011psivt} or kitchen~\cite{li2013iccv},
we analyze long data in natural environments with substantial scene changes per
sequence.
We agree that first-person attention, construed in the most general sense, will
inevitably require first-person ``user-in-the-loop'' feedback to
detect~\cite{hp}; accordingly, our work does not aim to detect arbitrary
subjective attention events, but instead to detect moments of engagement to
examine an object more closely.
Outside of gaze, there is limited work on attention in terms of head fixation
detection~\cite{poleg2014cvpr} and ``physical
analytics"~\cite{rallapalli-mobicom2014}. In~\cite{poleg2014cvpr}, a novel
``cumulative displacement curve'' motion cue is used to categorize the
recorder's activity (walking, sitting, on bus, etc.) and is also shown to
reveal periods with fixed head position. They use a limited definition of
attention: a period of more than 5 seconds where the head is still but the
recorder is walking. In~\cite{rallapalli-mobicom2014}, inertial sensors are
used in concert with optical flow magnitude to decide when the recorder is
examining a product in a store. Compared to
both~\cite{rallapalli-mobicom2014,poleg2014cvpr}, engagement has a broader
definition, and we discover its scope from data from the crowd
(vs.~hand-crafting a definition on visual features). Crucially, the true
positives reflect that a person can have heightened engagement yet still be in
motion.
\subsection{First-person activity and summarization}
Early methods for egocentric video summarization extract the camera motion and
define rules for important moments (e.g., intervals when camera rotation is
below a threshold)~\cite{nakamura2000pr,cheatle2004icpr}, and test
qualitatively on short videos. Rather than inject hand-crafted rules, we
propose to \emph{learn} what constitutes an engagement interval. Recent
methods explore ways to predict the ``importance'' of spatial regions (objects,
people) using cues like hand detection and frame
centrality~\cite{lee-cvpr2012,lu-cvpr2013}, detect
novelty~\cite{carlsson-cvpr2011}, and infer ``social saliency'' when multiple
cameras capture the same
event~\cite{hoshen2014cvpr,park-nips2012,fathi-cvpr2012}. We tackle
engagement, not summarization, though likely our predictions could be another
useful input to a summarization system.
In a sense, detecting engagement could be seen as detecting a particular
ego-activity. An array of methods for classifying activity in egocentric video
exist,
e.g.,~\cite{fathi,peleg,pirsiavash,damen,farhadi,kitani-activity,spriggs,yinli}.
However, they do not address our scenario: 1) they learn models specific to
the objects~\cite{fathi,yinli,pirsiavash,damen,spriggs,farhadi} or
scenes~\cite{kitani-activity} with which the activity takes place (e.g., making
tea, snowboarding), whereas engagement is by definition object- and
scene-independent, since arbitrary things may capture one's interest; and 2)
they typically focus on recognition of trimmed video clips, versus temporal
detection in ongoing video.
\section{Distributed Gradient Computations for the Localizability Potentials}
\label{sec:gradients}
In order to implement the gradient descent scheme \eqref{eq:descent_per_agent},
in Section \ref{section: gradient FIM} we provide analytical forms for
the gradients of the localizability potentials \eqref{eq:pot:A}, \eqref{eq:pot:D} and \eqref{eq:pot:E}.
Then, in Sections \ref{section: decentralized D and A} and \ref{section: decentralized E},
we describe decentralized deployment algorithms
by showing how each agent can compute its components of the gradient of the chosen localizability
potential, using its own local information as well as data obtained from its neighbors in the ranging graph.
\subsection{Partial Derivatives of the FIM}
\label{section: gradient FIM}
Irrespective of the potential considered, we need to evaluate the derivative
of the FIM $\mbf F_\mathcal{U}$ in \eqref{eq: FIM partitioning}
with respect to any coordinate $\xi_i \in \{x_i,y_i,z_i\}$ of a mobile agent $i$
(anchor or tag) located at $\mathbf p_i = [x_i,y_i,z_i]^\top$.
We provide formulas for the case $n=3$, the case $n=2$ being similar.
Define the notation $\xi_{ij}=\xi_i - \xi_j$ and
$\gamma_{ij} = \frac{\kappa}{\sigma^2d_{ij}^{2(\kappa+1)}} \mathsf{1}_{\mathcal N_i}(j)$.
For $\mathbf F_{ij}$, $i\neq j$, the $3 \times 3$ blocks introduced in \eqref{eq:ourfim},
we find
\begin{equation}
\begin{split}
\der{\mathbf F_{ij}}{x_i}
&=
\gamma_{ij}
\begin{bmatrix}
x_{ij}^{3} - \frac{d^{2}_{ij} x_{ij}}{\kappa}
& x_{ij}^{2} y_{ij} - \frac{d^{2}_{ij} y_{ij}}{2\kappa}
& x_{ij}^{2} z_{ij} - \frac{d^{2}_{ij} z_{ij}}{2\kappa} \\
\star &
x_{ij} y_{ij}^{2} &
x_{ij} y_{ij} z_{ij}\\
\star&
\star &
x_{ij} z_{ij}^{2}
\end{bmatrix}
\\
\der{\mathbf F_{ij}}{y_i}
&= \gamma_{ij}
\begin{bmatrix}
x_{ij}^{2} y_{ij} &
x_{ij} y_{ij}^{2} - \frac{d^{2}_{ij} x_{ij}}{2\kappa} &
x_{ij} y_{ij} z_{ij}\\
\star &
y_{ij}^{3} - \frac{d^{2}_{ij} y_{ij}}{\kappa} &
y_{ij}^{2} z_{ij} - \frac{d^{2}_{ij} z_{ij}}{2\kappa}\\
\star &
\star &
y_{ij} z_{ij}^{2}
\end{bmatrix}
, \\
\der{\mathbf F_{ij}}{z_i}
&= \gamma_{ij}
\begin{bmatrix}
x_{ij}^{2} z_{ij} &
x_{ij} y_{ij} z_{ij} &
x_{ij} z_{ij}^{2} - \frac{d^{2}_{ij} x_{ij}}{2\kappa}\\
\star &
y_{ij}^{2} z_{ij} &
y_{ij} z_{ij}^{2} - \frac{d^{2}_{ij} y_{ij}}{2\kappa} \\
\star &
\star &
z_{ij}^{3} - \frac{d^{2}_{ij} z_{ij}}{\kappa}
\end{bmatrix},
\end{split}
\label{eq:derfim}
\end{equation}
where the symbol $\star$ replaces symmetric terms. These expressions are sufficient to compute
the whole matrix $\partial \mathbf F_{\mathcal U} / \partial \xi_i$,
because $\mathbf F_{ji} = \mathbf F_{ij}$, $\mathbf{F}_{kk} = -\sum_{l \in \mathcal N_k} \mathbf F_{kl}$,
and $\partial \mathbf F_{kl}/\partial \xi_i = \mathbf 0$ if $k \neq l$ and $i \notin \{k,l\}$.
Using standard differentiation rules \cite{petersen_matrix_2012}, the partial derivatives
of the A-Opt potential \eqref{eq:pot:A} are
\begin{equation}
\der{J_A(\mathbf p)}{\xi_i} =\der{\trace{\mbf F_\mathcal{U}^{-1}}}{\xi_i} = -\trace{\mbf F_\mathcal{U}^{-2}\der{\mathbf F_{\mathcal U}}{\xi_i}}.
\label{eq:der:Aopt}
\end{equation}
Similarly, we can compute the derivatives of the D-Opt potential \eqref{eq:pot:D} as
\begin{equation}
\der{J_D(\mathbf p)}{\xi_i} = - \der{\ln \det \mbf F_\mathcal{U}}{\xi_i} = -\trace{\mbf F_\mathcal{U}^{-1}\der{\mathbf F_{\mathcal U}}{\xi_i}}.
\label{eq:der:Dopt}
\end{equation}
Finally, if
$\lambda_{\min}(\mbf F_\mathcal{U})$ is a non-repeated eigenvalue with associated unit norm eigenvector $\mathbf v$,
we can compute the derivative of the E-Opt potential \eqref{eq:pot:E} as
\cite[p. 565]{harville_matrix_1997}
\begin{align} \label{eq:der:Eopt - 1}
\der{J_E(\mathbf p)}{\xi_i} = -\der{\lambda_{\min}(\mathbf p)}{\xi_i} = -\mathbf v^\top \der{\mbf F_\mathcal{U}}{\xi_i} \mathbf v.
\end{align}
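For reference, a centralized numerical evaluation of the gradient formulas
\eqref{eq:der:Aopt}, \eqref{eq:der:Dopt} and \eqref{eq:der:Eopt - 1} can be
sketched as follows, given dense arrays for $\mbf F_\mathcal{U}$ and for its
partial derivative with respect to one coordinate; this is only an
illustration, the decentralized computations being the subject of the next
subsections.
\begin{verbatim}
import numpy as np

def potential_gradients(F, dF):
    # F: positive definite FIM; dF: its derivative with respect to one
    # coordinate of one agent. Returns the corresponding components of
    # the A-, D- and E-Opt gradients.
    Finv = np.linalg.inv(F)
    dJ_A = -np.trace(Finv @ Finv @ dF)
    dJ_D = -np.trace(Finv @ dF)
    w, V = np.linalg.eigh(F)       # eigenvalues in ascending order
    v = V[:, 0]                    # unit eigenvector of lambda_min
    dJ_E = -v @ dF @ v             # valid when lambda_min is simple
    return dJ_A, dJ_D, dJ_E
\end{verbatim}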
Hence, we can in principle compute the gradient of the chosen localizability
potential, using the expressions for the FIM and its derivatives.
However, in practice we would also like to be able to implement these computations in a
distributed manner, in order to obtain deployment strategies that can scale to large multi-robot
systems.
\subsection{Decentralized Gradient Computations for the D- and A-Opt Potentials}
\label{section: decentralized D and A}
We propose now a new method to estimate in a distributed way the gradient of the D-
and A-Opt potentials at a given configuration $\mathbf p$, which have similar expressions,
see \eqref{eq:der:Aopt} and \eqref{eq:der:Dopt}.
As mentioned in Remark \ref{rmk: estimate in the loop}, we assume that the nodes
have already executed a localization algorithm such as the one in
\cite{Moore:Sensys04:networkLoc} to estimate $\mathbf p$, and we ignore the effect
of the location estimation error in the gradient computation. Hence, we omit $\mathbf p$
from the notation in the following, writing $\mbf F_\mathcal{U}$ instead of $\mbf F_\mathcal{U}(\mathbf p)$.
The method essentially relies on
inverting $\mbf F_\mathcal{U}$ in a decentralized manner, which we discuss first.
\subsubsection{Auxiliary Problem}
Suppose that each tag $i \in \mathcal U$ knows initially a matrix $\mathbf E_i \in \r^{n \times m}$,
for some integer $m$, and the tags need to compute $\mbf F_\mathcal{U}^{-1} \mathbf E$ in a distributed manner
over the network $\mathcal G$, where $\mathbf E = \text{col}(\mathbf E_1,\ldots,\mathbf E_U) \in \r^{nU \times m}$.
This is equivalent to solving in a decentralized manner the linear
system $\mbf F_\mathcal{U} \mathbf x = \mathbf E$, with the variable $\mathbf x \in \r^{nU \times m}$.
A special case of this problem is to compute $\mbf F_\mathcal{U}^{-1}$, when $\mathbf E = \mathbf I_{nU}$.
Consider the following system of differential equations
\begin{equation}
\dot{\mathbf x}(t) = -\mathbf F_\mathcal{U} \mathbf x(t) + \mathbf E, \; \mathbf x(0) = \mathbf x_0.
\label{eq:diffsyst}
\end{equation}
If $\mathbf F_\mathcal{U} \succ \mathbf 0$, as guaranteed for instance by
Theorem \ref{thm: Fu invertible}, then $-\mathbf F_\mathcal{U}$ has strictly negative
eigenvalues, i.e., is stable, so the solution $\mathbf x(t)$ to the system \eqref{eq:diffsyst}
converges to the solution $\mathbf F^{-1}_\mathcal{U} \mathbf E$ of the linear system as $t \to \infty$,
no matter the choice of initial condition $\mathbf x_0$.
A discrete-time version of the flow \eqref{eq:diffsyst} can be implemented for $k \geq 0$ as
\[
\mathbf x_{k+1} = \mathbf x_{k} - \eta_k \, (\mathbf F_\mathcal{U} \mathbf x_k - \mathbf E),
\]
for some stepsizes $\eta_k$, which reads more explicitly for each tag $1 \leq i \leq U$
\begin{align}
\mathbf x_{i,k+1} = & \, \eta_k \sum_{j \in \mathcal N_i \cap \mathcal U} \mathbf F_{ij}
(\mathbf x_{i,k} - \mathbf x_{j,k}) \nonumber \\
&+ \left( \mathbf I_n + \eta_k \sum_{j \in \mathcal N_i \cap \mathcal K} \mathbf F_{ij}\right)
\mathbf x_{i,k} + \eta_k \, \mathbf E_i.
\label{eq:diffsyst_i}
\end{align}
Again, the iterates $\mathbf x_k$ converge to the desired solution $\mbf F_\mathcal{U}^{-1} \mathbf E$
if we choose for example $\eta_k = \eta$ constant and sufficiently small
(namely, as long as $\eta < 2/\lambda_{\max}(\mbf F_\mathcal{U})$).
The iterations \eqref{eq:diffsyst_i} can be implemented in a decentralized manner
by the tags, i.e., at each step $k$ tag $i$ only needs to exchange its
matrix $\mathbf x_i$ with its neighboring tags. This also requires that tag $i$ knows
$\mathbf F_{ij}$ for $j \in \mathcal N_i$, which is the case if prior to the iterations,
the nodes (tags and anchors) broadcast their position (estimates) to their neighbors.
When the iterations have converged, the $n \times m$ matrix $\mathbf x_i$ at tag $i$
represents the $i^{\text{th}}$ block of rows of $\mbf F_\mathcal{U}^{-1} \mathbf E$, i.e.,
$\mbf F_\mathcal{U}^{-1} \mathbf E = \text{col}(\mathbf x_1,\ldots,\mathbf x_U)$.
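A centralized simulation of these iterations is sketched below; in the
decentralized implementation each tag updates only its own block row using
data from its neighbors, but the iterates are identical. The fixed iteration
count and the default step size are illustrative choices.
\begin{verbatim}
import numpy as np

def richardson_solve(F, E, eta=None, iters=2000):
    # Iterations x_{k+1} = x_k - eta (F x_k - E), which converge to
    # F^{-1} E when F > 0 and eta < 2 / lambda_max(F).
    if eta is None:
        eta = 1.0 / np.linalg.eigvalsh(F).max()
    x = np.zeros_like(E, dtype=float)
    for _ in range(iters):
        x = x - eta * (F @ x - E)
    return x

# Decentralized inversion of F_U corresponds to E = identity:
#   M = richardson_solve(F_U, np.eye(F_U.shape[0]))
\end{verbatim}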
\begin{remark}
The iterations \eqref{eq:diffsyst_i} correspond to Richardson iterations
to solve the linear system $\mbf F_\mathcal{U} \mathbf x = \mathbf E$ in a decentralized way \cite{Bertsekas:2015:parallel}.
Other distributed iterative methods could be used, such as the Jacobi over-relaxation iterations
\begin{align*}
\mathbf x_{i,k+1} = \, &(1-\eta) \, \mathbf x_{i,k} +
\eta \, \mathbf F_{ii}^{-1} \, \left( \mathbf E_i - \sum_{j \in \mathcal N_i \cap \mathcal U} \mathbf F_{ij} \mathbf x_{j,k} \right),
\end{align*}
with potentially better convergence properties,
but a detailed discussion of such alternatives, which can be found in \cite[Chapter 2]{Bertsekas:2015:parallel},
is outside of the scope of this paper.
\end{remark}
\subsubsection{Application to compute $\partial J_D / \partial \xi_i$}
To implement the gradient descent scheme \eqref{eq:descent_per_agent} for D-optimization,
each mobile node $i$ (tag or anchor) needs to compute
$\partial J_D / \partial \xi_i$ for $\xi_i \in \{x_i,y_i,z_i\}$, which is given by \eqref{eq:der:Dopt}.
Denote $\mathbf M = \mbf F_\mathcal{U}^{-1} \in \r^{nU \times nU}$ and its $n \times n$ blocks $\mathbf M_{ij}$,
for $1 \leq i,j \leq U$.
First, the tags run the iterations \eqref{eq:diffsyst_i}, with the matrix $\mathbf E = \mathbf I_{nU}$.
That is, tag $j$ uses the matrix $\mathbf E_j = \mathbf e^\top_j \otimes \mathbf I_n$, where $\mathbf e_j$ is
the $j^{\text{th}}$ unit vector in $\r^U$. After convergence, tag $j$ stores an approximation
of the matrix $\mathbf M_j = [\mathbf M_{j1},\ldots,\mathbf M_{jU}] \in \mathbb R^{n \times nU}$.
Next, note from \eqref{eq:derfim} that the only $n \times n$ non-zero blocks
$\partial \mathbf F_{kl}/\partial \xi_i$, with $1 \leq k,l \leq U$, are those for
which: a) $k=l$ and $k \in \mathcal N_i$; b) $k = l = i$;
c) $k = i$ and $l \in \mathcal N_i$; or d) $l = i$ and $k \in \mathcal N_i$.
Moreover, if $i$ is a mobile anchor (so $i \geq U+1$), only case $a)$ can occur.
From this remark, we can derive the following expressions. If $i \in \mathcal U$
\begin{equation} \label{eq:der:Dopt tag}
\frac{\partial J_D(\mathbf p)}{\partial \xi_i} = \sum_{j \in \mathcal N_i \cap \mathcal U}
\trace{ \left( \mathbf M_{jj} + \mathbf M_{ii} - 2 \mathbf M_{ij} \right)
\frac{\partial \mathbf F_{ij}}{\partial \xi_i}},
\end{equation}
and if $i \in \mathcal K$
\begin{equation} \label{eq:der:Dopt anchor}
\frac{\partial J_D(\mathbf p)}{\partial \xi_i} = \sum_{j \in \mathcal N_i \cap \mathcal U}
\trace{ \mathbf M_{jj} \frac{\partial \mathbf F_{ij}}{\partial \xi_i}}.
\end{equation}
Since we assume that each node knows an estimate of its coordinates and of its neighbors'
coordinates, node $i$ can obtain from its neighbor tags $j$ the terms
$\trace{\mathbf M_{jj} \partial \mathbf F_{ij}/\partial \xi_i}$, and also compute the terms
$\trace{(\mathbf M_{ii} - 2 \mathbf M_{ij}) \partial \mathbf F_{ij}/\partial \xi_i}$ if $i \in \mathcal U$.
Hence, overall this provides a method allowing each mobile node $i$ to compute
$\partial J_D / \partial \xi_i$ by communicating only with its neighbors.
Algorithm \ref{algo:dopt} summarizes the distributed gradient computation procedure
for D-optimization.
\begin{algorithm}[h]
\KwData{Each node $i$ knows an estimate of its $\mathbf p_i$ from a localization
algorithm, or exactly if $i \in \mathcal K$}
\KwResult{Each mobile node $i$ knows $\partial J_D(\mathbf p)/\partial \mathbf p_i$}
%
Each node $i \in \mathcal U \cup \mathcal K$ broadcasts $\mathbf p_i$ to its neighbors\;
The tags run the iterations \eqref{eq:diffsyst_i} until acceptable convergence,
with $\mathbf E_j = \mathbf e^\top_j \otimes \mathbf I_n$ for tag $j$,
and each tag $j$ stores the resulting matrix $\mathbf M_j$\;
Each mobile tag $i$ computes $\sum_{j \in \mathcal N_i \cap \mathcal U}
\trace{ \left( \mathbf M_{ii} - 2 \mathbf M_{ij} \right) \frac{\partial \mathbf F_{ij}}{\partial \xi_i}}$\;
Each tag $j$ computes and sends $\trace{ \mathbf M_{jj} \frac{\partial \mathbf F_{ij}}{\partial \xi_i}}$
to each of its mobile neighbors $i \in \mathcal N_j$ ($i$ tag or anchor)\;
Each mobile node $i$ computes its gradient using \eqref{eq:der:Dopt tag} or \eqref{eq:der:Dopt anchor}\;
%
\caption{D-Opt distributed gradient computation}
\label{algo:dopt}
\end{algorithm}
The same steps can be used to compute the gradient \eqref{eq:der:Aopt}
at each mobile node for A-optimization.
The only difference is that the matrices $\mathbf M_i$ above should represent rows
of $\mbf F_\mathcal{U}^{-2}$ instead of $\mbf F_\mathcal{U}^{-1}$. For this, the tags first compute the rows $\mathbf{\tilde M}_i$
of $\mbf F_\mathcal{U}^{-1}$ using the iterations \eqref{eq:diffsyst_i}. Then, we restart these iterations
but now replacing the matrices $\mathbf E_i = \mathbf e^\top_i \otimes \mathbf I_{n}$ by $\mathbf{\tilde M}_i$.
This computes an approximation of $\mbf F_\mathcal{U}^{-1} \mbf F_\mathcal{U}^{-1} = \mbf F_\mathcal{U}^{-2}$, as desired.
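In the centralized notation of the previous sketch, these two rounds read:
\begin{verbatim}
import numpy as np

def inverse_square_rows(F, eta=None, iters=2000):
    # First round: rows of F^{-1}; second round: rows of F^{-2},
    # obtained by feeding the first result back in as E.
    M1 = richardson_solve(F, np.eye(F.shape[0]), eta, iters)
    return richardson_solve(F, M1, eta, iters)
\end{verbatim}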
\subsection{Decentralized Computation of E-Opt Gradient}
\label{section: decentralized E}
The decentralized computation of the gradient of the E-Opt potential can be done
using the methodology developed in \cite{decentralized_2010} for the standard Laplacian,
also used in \cite{zelazo_decentralized_2015} for the symmetric rigidity matrix.
Hence, our presentation is brief and focuses on adapting this methodology to $\mbf F_\mathcal{U}(\mathbf p)$.
Using the sparsity of $\mbf F_\mathcal{U}$, if $i \in \mathcal{U}$, we can rewrite \eqref{eq:der:Eopt - 1} as
\begin{align}
\der{J_E(\mathbf p)}{\xi_i}
= &\sum_{j \in \mathcal{N}_i \cap \mathcal{U}}
(\mathbf v_i - \mathbf v_j)^T \der{\mathbf F_{ij}}{\xi_i} (\mathbf v_i - \mathbf v_j) \nonumber \\
& + \mathbf v_i^T \left( \sum_{j \in \mathcal{N}_i \cap \mathcal{K}} \der{\mathbf F_{ij}}{\xi_i} \right)
\mathbf v_i,
\label{eq:gradientJe:sparsity}
\end{align}
and if $i \in \mathcal{K}$
\begin{equation}
\der{J_E(\mathbf p)}{\xi_i} = \sum_{j \in \mathcal{N}_i \cap \mathcal{U}}
\mathbf v_j^\top \der{\mathbf F_{ij}}{\xi_i} \mathbf v_j,
\label{eq:gradientJe:sparsity2}
\end{equation}
where $\mathbf v = \text{col}(\mathbf v_1,\dots, \mathbf v_U) \in \r^{n U}$.
Computing
these expressions requires a decentralized
algorithm to estimate the components of $\mathbf v$, a unit norm eigenvector
associated with
$\lambda_1 := \lambda_{\min}(\mathbf F_\mathcal{U})$.
\subsubsection{Power-iteration eigenvector estimator}
To compute $\mathbf v$ in a decentralized manner, consider the solution $t \mapsto \mathbf w(t) \in \mathbb R^{nU}$
to the following differential equation, adapted from \cite{decentralized_2010},
\begin{equation}
\dot{\mathbf w} = -[\beta \mbf F_\mathcal{U} + \mu((n U)^{-1}\|\mathbf w\|^2 - 1)\mathbf I_{n U}]\mathbf w,
\label{eq:eopt:scheme}
\end{equation}
with an initial condition $\mathbf w_0 := \mathbf w (0)$
and $\beta, \mu> 0$.
\begin{proposition}
If $\mu > \lambda_1 \beta$ and $\mathbf w_0 ^\top \mathbf v \neq 0$, then the solution $\mathbf w(t)$ to \eqref{eq:eopt:scheme}
converges to an eigenvector $\mathbf w_\infty$ of $\mbf F_\mathcal{U}$, associated with $\lambda_1$ and proportional to $\mathbf v$.
\label{prop:eopt_conv}
\end{proposition}
\begin{proof}
Follows from the argument in the appendix of \cite{decentralized_2010}.
\end{proof}
In practice, we can choose $\mathbf w_0$ randomly to fulfill the condition
$\mathbf w_0 ^\top \mathbf v \neq 0$ with probability one. To set the gains $\beta, \mu$,
note that $\trace{\mbf F_\mathcal{U}} > \lambda_1$ since $\mbf F_\mathcal{U} \succ 0$.
Then, for the additive measurement noise model \eqref{eq:model_meas}, we have
$\trace{\mbf F_\mathcal{U}} \leq \frac{2 P}{\sigma^2}$. So, if we choose $\beta = \sigma^2/(2 P)$
and $\mu > 1$, the condition of Proposition \ref{prop:eopt_conv} is satisfied.
For the log-normal model \eqref{eq:model_meas_lognormal}, we have
$\trace{\mbf F_\mathcal{U}} \leq \frac{2}{\sigma^2}\sum_{\{i,j\} \in \mathcal{E}, i \in \mathcal{U}} d_{ij}^{-2}$.
Hence, if we set again $\beta = \sigma^2/(2 P)$ and now $\mu > 1/d_{\min}^2$,
where $d_{\min}$ is a lower bound such that $d_{ij} \geq d_{\min}$ for all $i, j$, then the condition of Proposition
\ref{prop:eopt_conv} is satisfied. The minimum distance $d_{\min}$ between robots
could be enforced as part of a collision avoidance scheme.
An estimation algorithm for $\mathbf v$ is obtained by discretizing
\eqref{eq:eopt:scheme}, leading to the following iterations
for each agent $i \in \mathcal{U}$
\begin{align}
\mathbf w_{i,k+1} = &\mathbf w_{i,k}
- \eta_k \Big(
\mu ( s_{k} - 1 ) \mathbf w_{i,k} \nonumber \\ &+
\beta \sum_{l\in (\mathcal N_i \cup \{i\})\cap \mathcal{U}} \mathbf F_{il} \mathbf w_{l,k} \Big),
\label{eq:eopt:discr}
\end{align}
where $\eta_k > 0$ is a sufficiently small step-size and $s_{k} := \| \mathbf w_k \|^2 / nU$.
All the terms in \eqref{eq:eopt:discr} can be obtained locally by node $i$ using
one-hop communication with its neighbors, except for the global average $s_k$,
which can be computed by a consensus algorithm as described next.
The last step is to normalize $\mathbf w_\infty$, obtained after convergence
in \eqref{eq:eopt:discr}. This can again be done by each individual agent,
since $\mathbf v := \mathbf w_\infty/\sqrt{nU s_\infty}$
is a unit-norm vector.
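A centralized simulation of this estimator is sketched below, with the step
size and iteration count as illustrative choices; in the decentralized version
each tag updates only its own block $\mathbf w_i$ and obtains $s_k$ by the
consensus scheme described next.
\begin{verbatim}
import numpy as np

def min_eigvec_estimate(F, beta, mu, eta=1e-3, iters=50000, seed=0):
    # Discretized flow w <- w - eta (beta F w + mu (s - 1) w), with
    # s = ||w||^2 / dim(F); requires mu > beta * lambda_min(F) and a
    # sufficiently small step size eta.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(F.shape[0])
    for _ in range(iters):
        s = (w @ w) / F.shape[0]
        w = w - eta * (beta * (F @ w) + mu * (s - 1.0) * w)
    return w / np.linalg.norm(w)   # unit-norm eigenvector estimate
\end{verbatim}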
\subsubsection{Estimation of $s_{k}$ via a consensus algorithm}
Since $s_{k} = \|\mathbf w_k\|^2/(n U) = \frac{1}{U} \sum_{i=1}^{U} (\|\mathbf w_{i,k}\|^2/n)$,
this term can be computed by the tags
using a decentralized averaging consensus algorithm.
We assume for simplicity that the graph of the tags $\mathcal{G}_\mathcal{U}$ is connected.
To solve the averaging problem, each tag $i$ initializes a variable
$\hat{s}_{i,0}:=\|\mathbf w_{i,k}\|^2/n$. Then, they execute in a distributed
manner the iterations
\begin{equation}
\hat{\mathbf s}_{l+1} = \mathbf L \, \hat{ \mathbf s}_{l}, \forall l \geq 0,
\label{eq:eoptdistri:consensusFilter}
\end{equation}
where $\hat{\mathbf s}_{l} = \text{col}(\hat{s}_{1,l},\ldots,\hat{s}_{U,l})$,
and $\mathbf L$ is a doubly stochastic matrix of weights $L_{ij}$ associated
with the edges of $\mathcal{G}_\mathcal{U}$
(i.e., $\sum_{k=1}^U L_{ik} = \sum_{k=1}^U L_{ki}=1$, for $1 \leq i \leq U$,
and $L_{ij} = 0$ if $j \notin \mathcal N_i$),
for instance
the \emph{Metropolis-Hastings} weights
\[
\begin{cases}
L_{ij} = \mathsf{1}_{\mathcal N_i \cap \mathcal U}(j)(1+\max(|\mathcal N_i|,|\mathcal N_j|))^{-1},
\forall i \neq j, \\
L_{ii} = 1 - \sum_{k=1}^U L_{ik}.
\end{cases}
\]
We then have $\hat{\mathbf s}_l \to s_k \mathbf 1_U$ \cite[p. 58]{bullo_distributed_2009}, so
that each tag knows after convergence the scalar value $s_k$ needed for \eqref{eq:eopt:discr}.
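Both ingredients can be sketched as follows, taking the boolean adjacency
matrix of $\mathcal{G}_\mathcal{U}$ as input; the number of consensus
iterations is an illustrative choice.
\begin{verbatim}
import numpy as np

def metropolis_weights(adjacency):
    # Doubly stochastic Metropolis-Hastings weights for the tag graph,
    # built from its boolean adjacency matrix (no self-loops).
    U = adjacency.shape[0]
    deg = adjacency.sum(axis=1)
    L = np.zeros((U, U))
    for i in range(U):
        for j in range(U):
            if i != j and adjacency[i, j]:
                L[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        L[i, i] = 1.0 - L[i].sum()
    return L

def average_consensus(L, values, iters=200):
    # Repeated local averaging; every entry converges to the mean.
    s = np.array(values, dtype=float)
    for _ in range(iters):
        s = L @ s
    return s
\end{verbatim}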
Algorithm \ref{algo:distriEoptVect} summarizes the decentralized computation of the
estimate $\hat{\mathbf v}_i$ of the $i$-th component of $\mathbf v$ by a given tag $i \in \mathcal{U}$.
After decentralized estimation of $\mathbf v$ by the tags, each mobile agent $i$
can compute its components of the gradient of $J_E$ from \eqref{eq:gradientJe:sparsity}
or \eqref{eq:gradientJe:sparsity2} by communicating with its neighbors.
\begin{algorithm}
\KwData{
$\mathbf w_{i,0}$ random,
$\mathbf L$, $\mu,\beta,n_\text{iter},\tilde{n}_\text{iter}$}
\For{$0 \leq k \leq n_\text{iter}$}
{
$\hat{s}_{i,0} = \|\mathbf w_{i,k}\|^2/n$; \\
\For{$0 \leq l \leq \tilde{n}_\text{iter}$}
{
$\hat{s}_{i,l+1} = L_{ii} \hat{s}_{i,l}
+ \sum_{j \in \mathcal N_i \cap \mathcal U} L_{ij} \hat{s}_{j,l}$; \\
}
\textbf{compute} $\mathbf w_{i,k+1}$, setting $s_k := \hat{s}_{i,\tilde{n}_\text{iter}}$
in \eqref{eq:eopt:discr}.
}
\textbf{transmit}
$\hat{\mathbf v}_i := \frac{\mathbf w_{i,n_\text{iter}}}{ \sqrt{ n U
\hat s_{i,\tilde{n}_\text{iter}}}}$
to the neighborhood;
\caption{Estimation of $\mathbf v_i$ by tag $i \in \mathcal U$.}
\label{algo:distriEoptVect}
\end{algorithm}
\begin{remark}
If $\mathcal{G}_\mathcal{U}$ is not connected,
there exists a $U\times U$ permutation matrix $\mathbf P$
such that $\check{\mathbf F}_\mathcal{U}=(\mathbf P \otimes \mathbf I_n)^{-1} \mbf F_\mathcal{U} (\mathbf P \otimes \mathbf I_n) = \mathrm{diag}
(\mathbf F_{\mathcal{S}_1} \dots \mathbf F_{\mathcal{S}_l} \dots)$ is block diagonal,
where each $\mathcal{S}_l$ represents a subset of connected tags.
Hence, the minimal eigenvalue $\lambda$ of $\mbf F_\mathcal{U}$ is among the minimal eigenvalues $\lambda_{\mathcal{S}_l}$ of the blocks $\mathbf F_{\mathcal{S}_l}$.
Therefore, each subset $\mathcal{S}_l$ can use Algorithm \ref{algo:distriEoptVect}
to compute its eigenvector $\mathbf{v}_{\mathcal{S}_l}$ associated to $\lambda_{\mathcal{S}_l} :=\mathbf{v}_{\mathcal{S}_l}^\top \mathbf F_{\mathcal{S}_l}\mathbf{v}_{\mathcal{S}_l}$.
On the other hand, the graph $\mathcal{G}$ with all nodes is assumed rigid
and hence connected. This allows comparing the $\lambda_{\mathcal{S}_l}$ through
the network $\mathcal{K}$ formed by the anchors in order to find
$\lambda := \min_{\mathcal{S}_l} \lambda_{\mathcal{S}_{l}}$ corresponding
to the subset $\mathcal{S}^*$.
Since $\check{\mathbf F}_\mathcal{U}$ is block diagonal,
its eigenvector associated with $\lambda$ is $\text{col}(0, \dots, \mathbf v_{\mathcal{S}^*}, \dots 0)$,
which then yields
$\mathbf v = (\mathbf P \otimes \mathbf I_n) \, \text{col}(0, \dots, \mathbf v_{\mathcal{S}^*}, \dots 0)$
for $\mbf F_\mathcal{U}$.
\end{remark}
\section{Localizability Optimization for Rigid Bodies}
\label{sec:extensions_rigidity}
\subsection{Constrained Localizability Optimization}
\label{section: constrainted loc}
In this section, we consider scenarios where mobile robots can carry several tags, see Fig. \ref{fig:setup_rigid_body}.
Hence, the relative motion and position of some tags are constrained by the fact
that they are attached to the same rigid body. More generally, let $\mathbf f_c: \mathbb R^{n U} \to \mathbb R^C$
be a known function defining $C$ constraints $\mathbf f_c(\mathbf p_{\mathcal U})=\mathbf 0$ that the tag
positions must satisfy, and define the feasible set
\begin{equation}
\label{eq:feasible set}
\mathcal{C} \coloneqq \left\{ \mathbf p = \text{col}(\mathbf p_{\mathcal U},\mathbf p_{\mathcal K}) \in \r^{n N} \big|
\mathbf f_c(\mathbf p_\mathcal{U}) = \mathbf 0 \right\}.
\end{equation}
To use the CRLB as localizability potential, the bound should now reflect the fact
that localization algorithms can leverage the information provided by the constraints
to improve their performance.
We use the following result generalizing Proposition \ref{prop: constrained CRLB for network}.
\begin{proposition}
\label{prop: constrained CRLB for network refined}
Assume that the tag positions are subject to the constraints \eqref{eq:feasible set}.
Let $\mathbf A_{\mathcal U}(\mathbf p_{\mathcal U})$ be a matrix whose columns span
$\ker \partial \mathbf f_c / \partial \mathbf p_{\mathcal U}$
(which depends on $\mathbf p_{\mathcal U}$ in general).
Let $\hat{\mathbf p}_{\mathcal U}$ be an unbiased estimate of the tag positions
$\mathbf p_{\mathcal U}$, based on the measurements $\tilde{\mathbf d}$, the knowledge
of the anchor positions $\mathbf p_{\mathcal K}$, and the knowledge of the constraints
\eqref{eq:feasible set}. Then
\begin{align} \label{eq: cov tags CRLB rigid}
\mathsf{cov}[\mathbf{\hat{p}}_\mathcal{U}] \succeq \mathbf B_{\mathcal U}(\mathbf p),
\end{align}
where
\begin{equation}
\label{eq: constrained CRLB network def}
\mathbf B_{\mathcal U}(\mathbf p) := \mathbf A_{\mathcal U}
[\mathbf A_{\mathcal U}^T \mathbf F_{\mathcal U} \mathbf A_{\mathcal U}]^{\dagger}
\mathbf A_{\mathcal U}^T.
\end{equation}
\end{proposition}
\begin{proof}
We have both the trivial constraint
$\mathbf f_t(\mathbf p_\mathcal{K}) = \mathbf p_{\mathcal K} - \mathbf p^*_{\mathcal K} = \mathbf 0$ with
$\mathbf p^*_{\mathcal K}$ the known positions of the anchors,
and the equality constraint $\mathbf f_c(\mathbf p_{\mathcal U}) = \mathbf 0$.
Define $\mathbf h(\mathbf p) = \text{col} (\mathbf f_c(\mathbf p_\mathcal{U}),\mathbf f_t(\mathbf p_\mathcal{K}))$.
We then have:
\[
\frac{\partial \mathbf h}{\partial \mathbf p} = \begin{bmatrix}
\frac{\partial \mathbf f_c}{\partial \mathbf p_{\mathcal U}} & \mathbf 0 \\
\mathbf 0 & \mathbf I_{n K}
\end{bmatrix}.
\]
We apply the result of Theorem \ref{thm:gorman}, with the matrix $\mathbf A$ in \eqref{eq:crlb}
\[
\mathbf A = \begin{bmatrix} \mathbf A_\mathcal{U} \\ \mathbf 0 \end{bmatrix}
\text{ so }
\mathbf F_c = \mathbf A_\mathcal{U}^\top \mathbf F_{\mathcal U} \mathbf A_\mathcal{U},
\;
\mathbf B_c = \begin{bmatrix} \mathbf A_\mathcal{U}\mathbf F_{c}^\dagger \mathbf A_\mathcal{U}^\top & \mathbf 0 \\ \mathbf 0 & \mathbf 0 \end{bmatrix}.
\]
In \eqref{eq:crlb}, the $nU \times nU$ top-left corner of the matrix inequality gives
\eqref{eq: cov tags CRLB rigid} for the covariance of $\hat{\mathbf p}_{\mathcal U}$.
The other parts of the bound \eqref{eq:crlb} are trivial ($\mathbf 0 \succeq \mathbf 0$)
and correspond to the fact that a reasonable estimate
$\hat{\mathbf p} = \text{col}(\hat{\mathbf p}_{\mathcal U},\hat{\mathbf p}_{\mathcal K})$ should set
$\hat{\mathbf p}_{\mathcal K} = \mathbf p_{\mathcal K}$, so that $\hat{\mathbf p}_{\mathcal K}$
will have zero covariance.
\end{proof}
\begin{figure}
\centering
\input{parts/setup_rigid_body}
\caption{Setup for two robots, seen as rigid bodies, carrying multiple tags.}
\label{fig:setup_rigid_body}
\end{figure}
Note that to simplify the notation, we have omitted in \eqref{eq: constrained CRLB network def}
to state the dependencies $\mathbf A_{\mathcal U}(\mathbf p_{\mathcal U})$ and $\mathbf F_{\mathcal U}(\mathbf p)$.
From the matrix-valued bound \eqref{eq: constrained CRLB network def}, we can define
constrained localizability potentials as in Section \ref{section: localizability potentials unconstrained}.
Here, for conciseness, we only consider the A-Opt potential
\begin{equation}
\label{eq:constrained CRLB potential}
J_c(\mathbf p):=\trace{\mathbf B_{\mathcal U}(\mathbf p)}.
\end{equation}
Moreover, the desired tag positions should also respect the constraints specified by \eqref{eq:feasible set}.
In other words, we aim to adjust the positions of the mobile nodes (anchors or tags) in
order to minimize, at least locally, the potential $J_c$ in \eqref{eq:constrained CRLB potential}
subject to the constraints \eqref{eq:feasible set}.
For this, we replace the gradient-descent method \eqref{eq:descent_per_agent} by the following first-order
primal-dual method \cite[p. 528]{dimitri_p_bertsekas_nonlinear_2016}:
\begin{equation} \label{eq:lagrangian_descent}
\begin{cases}
\mathbf p_{k+1} =
\mathbf p_{k} - \eta_k \left(
\frac{\partial J_c(\mathbf p_{k})}{\partial \mathbf p}
+
\pmb \lambda_k^T \frac{\partial \mathbf f_c(\mathbf p_{\mathcal{U},k})}{\partial \mathbf p} \right)^T,
\\
\pmb \lambda_{k+1} = \pmb \lambda_k + \delta \, \mathbf f_c(\mathbf p_{\mathcal{U},k}),
\end{cases}
\end{equation}
where $\eta_k \in \r$ is a sequence of decreasing stepsizes (following for
instance Armijo's rule \cite{dimitri_p_bertsekas_nonlinear_2016}), $\delta$
a fixed parameter and $\pmb \lambda_k$ are dual variable iterates.
The scheme \eqref{eq:lagrangian_descent} provides a sequence of waypoints
converging toward a local constrained optimum $\mathbf p^*$.
Feasibility of the constraints \eqref{eq:feasible set} is not maintained during the iterations
\eqref{eq:lagrangian_descent}, but the algorithm contributes to keeping $\mathbf p_{k+1}$
close to $\mathcal C$.
In addition, for each iterate $\mathbf p_k$ that we actually want to use as waypoint
for motion planning (some iterates could be skipped), since \eqref{eq:feasible set}
represents rigidity constraints, we can enforce feasibility by computing for each
robot the pose minimizing the distance between the desired and achievable tag locations,
in a least-squares sense (this corresponds to a standard pose estimation problem \cite[Section 8.1]{Barfoot:book17:stateEst}).
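For illustration, the iterations \eqref{eq:lagrangian_descent} can be
simulated as in the following sketch, where the localizability gradient, the
constraint function and its Jacobian are passed in as callables; the
diminishing step-size rule is only one possible choice.
\begin{verbatim}
import numpy as np

def primal_dual_descent(p0, grad_J, f_c, jac_f_c,
                        eta0=0.1, delta=0.5, iters=200):
    # Gradient step on the Lagrangian in the positions, followed by a
    # dual ascent step on the multipliers; returns the waypoint list.
    p = np.array(p0, dtype=float)
    lam = np.zeros(len(f_c(p)))
    waypoints = [p.copy()]
    for k in range(iters):
        eta = eta0 / (k + 1)
        p = p - eta * (grad_J(p) + jac_f_c(p).T @ lam)
        lam = lam + delta * f_c(p)
        waypoints.append(p.copy())
    return waypoints
\end{verbatim}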
In the rest of this section, we specialize the discussion above to the deployment problem
where some robots carry multiple tags, which requires evaluating
the cost function \eqref{eq:constrained CRLB potential} and its gradient.
First, we only take into account in the CRLB the constraints on the distances
between the intra-robot tags, as this leads to somewhat simpler expressions
and computations.
In Section \ref{ss:CRLB_RP}, we include in the CRLB the full information about the relative
positions of these tags.
\subsection{CRLB with Distance Constraints}
\label{ss:CRLB_rigid}
\label{ss:ext:dist}
Considering Fig. \ref{fig:setup_rigid_body}, as robots carrying multiple tags move,
their tags' relative positions must satisfy rigid displacement constraints.
We partition the set of tags $\mathcal U$ into $R$ groups $\mathcal U_1, \ldots \mathcal U_R$,
with $\sum_{r=1}^R |\mathcal U_r| = U$, such that the tags in group $\mathcal U_r$
are rigidly connected (mounted on the same robot).
To simplify the discussion in the following, we assume that each group has
$|\mathcal U_r| \geq n$ tags in dimension $n$ and that these tags are in general position
(no $3$ tags aligned, and no $4$ tags coplanar in dimension $3$). As a result, each group of tags forms
an infinitesimally rigid framework for the complete graph (note that all pairwise distances within
a group $\mathcal U_r$ are known).
For example, we can simply have $2$ tags on each robot if $n = 2$, or $3$ non-aligned
tags if $n = 3$. We also ignore the possibility of having
known rigid constraints between anchors and tags.
The analysis can be extended to mixed networks of robots carrying a single or multiple tags,
or both anchors and tags, in a straightforward manner.
In principle, we know the relative positions of the tags in $\mathcal U_r$ in the robot frame
of reference, so this information should be included in the CRLB. First, however, we only include
the information about relative \emph{distances} between tags in each group, as this leads to simpler
algorithms.
In this case, in the framework of Section \ref{section: constrainted loc}, $\mathbf f_c$ has
one component for each pair of tags $\{i,j\}$ in the same group $\mathcal U_r$, of the form
\[
\mathbf f_c^{\{i,j\}}(\mathbf p_{\mathcal U}) = ||\mathbf p_{ij}||^2 - d_{ij}^2,
\]
where $d_{ij}$ is perfectly known.
If we order these components by listing all pairs of tags in the same set $\mathcal U_1$,
$\mathcal U_2$, \ldots, then $\mathcal U_R$,
we obtain for the Jacobian matrix
\begin{align}
\label{eq: jacobian distance constraints}
\frac{\partial \mathbf f_c(\mathbf p_{\mathcal U})}{\partial \mathbf p_{\mathcal U}} = \text{diag}(\mathbf R_1,\ldots,\mathbf R_R),
\end{align}
where $\mathbf R_r$ is the rigidity matrix defined in Section \ref{ss:rigidity_theory}, for the
framework formed by a complete graph among the tags in group $\mathcal U_r$.
Because the framework within each group is infinitesimally rigid, the kernel of each matrix $\mathbf R_r$
is spanned by three explicitly known vectors if $n = 2$, or six if $n = 3$, as described in
Proposition \ref{prop:kerthree}. By completing these vectors with zeros, we can form the
matrix $\mathbf A_{\mathcal U} = \begin{bmatrix} \mathbf A_1 & \ldots & \mathbf A_R \end{bmatrix}$ with $n U$
rows and $3R$ (if $n = 2$) or $6R$ (if $n = 3$) columns spanning the kernel of \eqref{eq: jacobian distance constraints}.
For example, based on the discussion above Proposition \ref{prop:kerthree},
if $n = 2$ we can take
$\mathbf A_r = \begin{bmatrix} \mathbf v_{T_x}^r & \mathbf v_{T_y}^r & \mathbf v_{R_z}^r \end{bmatrix}$,
with $[\mathbf v_{T_x}^{r}]_{2i-1}=1$, $[\mathbf v_{T_y}^r]_{2i}=1$, $[\mathbf v_{R_z}^r]_{2i-1} = -y_i$
and $[\mathbf v_{R_z}^r]_{2i} = x_i$ for all $i \in \mathcal U_r$ and zeros everywhere else.
From these explicit expressions of $\mathbf A_{\mathcal U}$, we can also immediately compute the derivatives
$\partial \mathbf A_{\mathcal U} / \partial \xi_i$, for $\xi_i \in \{x_i,y_i,z_i\}$.
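For $n = 2$, the construction of $\mathbf A_{\mathcal U}$ from the (estimated)
tag positions and the grouping of the tags into rigid bodies can be sketched
as follows.
\begin{verbatim}
import numpy as np

def kernel_basis_2d(positions, groups):
    # positions: (U, 2) tag positions; groups: list of lists of tag
    # indices, one list per rigid body. For each group, stack the two
    # infinitesimal translations and the rotation about z, written in
    # the global 2U-dimensional tag coordinates.
    U = positions.shape[0]
    cols = []
    for group in groups:
        t_x, t_y, r_z = (np.zeros(2 * U) for _ in range(3))
        for i in group:
            x, y = positions[i]
            t_x[2 * i] = 1.0
            t_y[2 * i + 1] = 1.0
            r_z[2 * i] = -y
            r_z[2 * i + 1] = x
        cols += [t_x, t_y, r_z]
    return np.stack(cols, axis=1)    # shape (2U, 3R)
\end{verbatim}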
Since determining $\mathbf A_{\mathcal U}(\mathbf p_\mathcal U)$ allows us to compute
$J_c(\mathbf p_{\mathcal U})$ using \eqref{eq: constrained CRLB network def}, the only
missing element to execute the iterations \eqref{eq:lagrangian_descent} is the gradient
of $J_c$.
For this, assuming that $\mathbf F_c \coloneqq \mathbf A_{\mathcal U}^T \mathbf F_{\mathcal U} \mathbf A_{\mathcal U}$
is invertible, we obtain
\begin{align}
&\der{J_c}{\xi_i} = \der{}{\xi_i} \trace{
{\mathbf A}_{\mathcal U} {\mathbf F_c}^{-1} {\mathbf A}_{\mathcal U}^T} \label{eq: gradient constrained CRLB}
\\
&= 2\trace{ {\mathbf F}_c^{-1} \mathbf A_{\mathcal U}^T \der{{\mathbf A_{\mathcal U}}}{\xi_i}}
- \trace{{\mathbf A_\mathcal{U}} {\mathbf F}_c^{-1} \der{ {\mathbf F}_c}{\xi_i} {\mathbf F_c}^{-1} {\mathbf A_\mathcal{U}}^\top}
\nonumber \\
&= 2\trace{ {\mathbf F}_c^{-1} \mathbf A_{\mathcal U}^T (\mathbf I - \mathbf B_{\mathcal U} \mathbf F_{\mathcal U}) \der{{\mathbf A_{\mathcal U}}}{\xi_i} }
- \trace{ \mathbf B_{\mathcal U}^2 \frac{\partial \mathbf F_{\mathcal U}}{\partial \xi_i}}. \nonumber
\end{align}
\subsection{CRLB with Constrained Relative Positions}
\label{ss:CRLB_RP}
To improve its accuracy, a position estimator can in fact use the full knowledge of the relative
positions (RP) $\mathbf p_{ij}^r$ in the frame of robot $r$ for each pair of tags $(i,j)$ carried by
the same robot $r$.
Correspondingly, a CRLB should be derived for this case.
To simplify the presentation, we assume in this section that each robot carries at least two tags.
To obtain the CRLB, let us first introduce $R$ new parameters $\boldsymbol{\theta} \coloneqq \text{col}(\boldsymbol{\theta}_1,\ldots,\boldsymbol{\theta}_R)$, one for each
robot, where $\boldsymbol{\theta}_i \in \mathbb R^{q}$, with $q = 1$ if $n = 2$ and $q = 3$ if $n = 3$.
Then, for the extended set of parameters
$\tilde{\mathbf{p}}_{\mathcal U} = (\mathbf p_{\mathcal U},\boldsymbol \theta)$
and the measurements \eqref{eq:model_meas} or \eqref{eq:model_meas_lognormal}, we denote the
extended FIM
\begin{equation} \label{eq: Fu extended}
\tilde{\mathbf{F}}_{\mathcal U} =
-\esp{\der{^2 \ln f(\tilde{\mathbf d};\tilde{\mathbf p}_{\mathcal U})}{\tilde{\mathbf p}_{\mathcal U} \partial
\tilde{\mathbf p}_{\mathcal U}^\top}}
= \begin{bmatrix}
\mbf F_\mathcal{U} & \mathbf 0_{n U, qR} \\
\mathbf 0_{qR, n U} & \mathbf 0_{qR, qR}
\end{bmatrix}.
\end{equation}
In the following, we add constraints between the tag positions and the parameters $\boldsymbol \theta$,
in such a way that the latter then represent the robot orientations in exponential coordinates,
and the FIM $\tilde{\mathbf{F}}_{\mathcal U}$ is then appropriately changed using Theorem \ref{thm:gorman}.
It is convenient to number and order the tags as follows. Consider robot $r \in \{1,\ldots,R\}$
and associated tags $\mathcal U_r$, using the notation of Section \ref{ss:ext:dist}.
Pick one tag in $\mathcal U_r$, denoted in the following $1^r$. The other tags of $\mathcal U_r$
are denoted $2^r,\ldots,U_r^r$, with $U_r = |\mathcal U_r|$. We group these latter tags by robot and
list them in the order
\begin{align} \label{eq: other tags}
\mathbf p_{o} \coloneqq \text{col} ( \mathbf p_{2^1},\dots \mathbf p_{U^1_{1}}, \dots,
\mathbf p_{2^R},\dots \mathbf p_{U^R_{R}}) \in \mathbb R^{n (U-R)},
\end{align}
from robot $1$ to robot $R$. The positions of the $R$ tags $1^r$ are also grouped in the vector
\[
\mathbf p_c \coloneqq \text{col}(\mathbf p_{1^1},\ldots,\mathbf p_{1^R}) \in \mathbb R^{n R}.
\]
Then, we have $\tilde{\mathbf p}_{\mathcal U} = \text{col}(\mathbf p_o,\mathbf p_c, \boldsymbol{\theta})$.
Next, for each tag $j^r \in \mathcal U_r$ other than $1^r$, we add the constraint
$\mathbf f^{(r,j^r)}(\mathbf p_{1^r},\mathbf p_{j^r},\boldsymbol{\theta}_r) = \mathbf 0 \in \mathbb R^n$, where
\begin{equation} \label{eq: RP constraint exp}
\mathbf f^{(r,j^r)}(\mathbf p_{1^r},\mathbf p_{j^r},\boldsymbol{\theta}_r) =
\mathbf p_{j^r} -\mathbf p_{1^r} - \exp(\cpop{\boldsymbol{\theta}_r}) \mathbf p^r_{j^r 1^r},
\end{equation}
with the notation (depending if $n = 2$ or $n = 3$)
\begin{align*}
\cpop{\theta} &= \begin{bmatrix} 0 & -\theta \\ \theta & 0 \end{bmatrix}, \text{ if } \theta \in \mathbb R, \\
\cpop{\boldsymbol \theta} &= \begin{bmatrix}
0 & -\theta_z & \theta_y \\ \theta_z & 0 & -\theta_x \\ -\theta_y & \theta_x & 0
\end{bmatrix}, \text{ if } \boldsymbol \theta = [\theta_x, \theta_y, \theta_z]^T \in \mathbb R^3.
\end{align*}
There are $U_r-1$ constraints of the form \eqref{eq: RP constraint exp} for robot $r$, each
of dimension $n$. With these constraints, the matrix $\exp(\cpop{\boldsymbol{\theta}_r})$
represents the rotation matrix from the world frame $\mathfrak{F}$ to the frame of robot $r$,
using the exponential coordinate representation \cite{Lynch2017ModernRM}. These constraints
then simply represent the change of coordinates for the known vector $\mathbf p^r_{j^r 1^r}$ in the
robot frame to the (unknown) coordinates $\mathbf p_{j^r 1^r}$ in frame $\mathfrak{F}$.
Define in the following the notation
\[
\boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)} \coloneqq \exp(\cpop{\boldsymbol{\theta}_r}) \mathbf p^r_{j^r 1^r}, \;\;
\text{ for } j^r \in \mathcal U_r, 1 \leq r \leq R.
\]
\begin{remark}
Recall that when $n = 2$, we have simply
\[
\exp(\cpop{\theta}) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix},
\]
and when $n = 3$, $\exp(\cpop{\boldsymbol \theta})$ can be computed efficiently using
Rodrigues' formula \cite[Proposition 3.1]{Lynch2017ModernRM}.
\end{remark}
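A small numerical helper implementing $\exp(\cpop{\boldsymbol \theta})$ in both
cases is sketched below; the fallback to the identity for a near-zero rotation
is only a numerical safeguard.
\begin{verbatim}
import numpy as np

def rotation(theta):
    # Planar rotation for a scalar theta (n = 2); Rodrigues' formula
    # for a 3-vector of exponential coordinates (n = 3).
    if np.isscalar(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])
    theta = np.asarray(theta, dtype=float)
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.eye(3)
    a = theta / angle
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
\end{verbatim}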
Considering \eqref{eq: RP constraint exp} for all $R$ robots, we obtain $U-R$ constraints on
the parameters $\tilde{\mathbf p}_{\mathcal U}$, each of dimension $n$. We list these constraints
in the same order as for $\mathbf p_o$ in \eqref{eq: other tags} and denote them
$\mathbf f_{\text{RP}}(\mathbf{p}_o,\mathbf p_c,\boldsymbol{\theta}) = \mathbf 0$. For the constrained
CRLB, we are interested in the kernel of the Jacobian matrix of $\mathbf f_{\text{RP}}$.
Remark that with the chosen ordering of tags and constraints, we have
$\frac{\partial \mathbf f_{\text{RP}}}{\partial \mathbf{ p}_o} = \mathbf I_{n (U-R)}$.
If we define
\begin{equation} \label{eq: partial Jacobian RP}
\mathbf N \coloneqq \begin{bmatrix} \frac{\partial \mathbf f_{\text{RP}}}{\partial \mathbf{ p}_c} &
\frac{\partial \mathbf f_{\text{RP}}}{\partial \boldsymbol{\theta}} \end{bmatrix},
\end{equation}
then immediately
\begin{align}
\mathbf A_\text{RP} &\coloneqq \spanv { \ker \frac{\partial \mathbf f_{\text{RP}}}{\partial \mathbf{\tilde p}_\mathcal U} }
= \spanv { \ker \begin{bmatrix} \mathbf I_{n (U-R)} & \mathbf N \end{bmatrix} } \nonumber \\
\mathbf A_\text{RP} & = \text{col} \left( -\mathbf N, \mathbf I_{(n+q)R} \right). \label{eq: A_RP}
\end{align}
Indeed, $\frac{\partial \mathbf f_{\text{RP}}}{\partial \mathbf{\tilde p}_\mathcal U}$ is of rank $n(U-R)$,
so $\mathbf A_{RP}$ should have $n U + q R - n(U-R) = (n+q)R$ independent columns, and clearly
\[
\frac{\partial \mathbf f_{\text{RP}}}{\partial \mathbf{\tilde p}_\mathcal U} \mathbf A_{\text{RP}} =
-\mathbf N + \mathbf N = \mathbf 0.
\]
Hence, it is sufficient to compute $\mathbf N$ to obtain $\mathbf A_{\text{RP}}$.
\begin{proposition}
The matrix $\mathbf N$ in \eqref{eq: partial Jacobian RP} is defined by
\[
\mathbf N = \text{col} \left( \{\mathbf N^{(r,j^r)} \}_{1 \leq r \leq R,2^r \leq j^r \leq U_r^r} \right)
\; \in \mathbb R^{n (U-R) \times (n+q) R}.
\]
where the blocks $\mathbf N^{(r,j^r)} \in \mathbb R^{n \times (n+q)R}$ are stacked in the same
order as $\mathbf p_o$ in \eqref{eq: other tags} and are of the form
\[
\mathbf N^{(r,j^r)} = - \begin{bmatrix}
\mathbf 0_{n, n(r-1)} & \mathbf I_{n} & \mathbf 0_{n, s} & \mathbf N_{\boldsymbol{\theta}_r}^{(r,j^r)}
& \mathbf 0_{n, (R-r)q}
\end{bmatrix}
\]
with $s = (R-r) n +(r-1) q$, where
\[
\mathbf N_{\boldsymbol \theta_r}^{(r,j^r)} =
\mathbf W \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)}
\in \mathbb R^{n}
\]
if $n = 2$, with $\mathbf W = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$, and
\[
\mathbf N_{\boldsymbol \theta_r}^{(r,j^r)} =
\begin{bmatrix} \mathbf W_x \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)} &
\mathbf W_y \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)} &
\mathbf W_z \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)} \end{bmatrix}
\in \mathbb R^{n \times 3}
\]
if $n = 3$, with $\mathbf W_\xi = \cpop{\mathbf e_\xi}$, $\xi \in \{x,y,z\}$, and
$\mathbf e_x, \mathbf e_y, \mathbf e_z$ the standard unit vectors in $\mathbb R^3$.
\end{proposition}
\begin{proof}
Decompose $\mathbf N^{(r,j^r)}$ by blocks
\[
\mathbf N^{(r,j^r)} = \begin{bmatrix}
\mathbf G_1 & \ldots & \mathbf G_R & \mathbf H_1 & \ldots & \mathbf H_R \end{bmatrix}
\]
with $\mathbf G_i \in \mathbb R^{n \times n}$ and $\mathbf H_i \in \mathbb R^{n \times q}$.
The matrix $\mathbf N^{(r,j^r)}$ is obtained by taking
the partial derivatives of $\mathbf f^{(r,j^r)}$ in \eqref{eq: RP constraint exp}
with respect to the coordinates of $\mathbf p_{1^r}$, which gives the block
$\mathbf G_r = -\mathbf I_{n}$, and with respect to the coordinates
of $\boldsymbol{\theta}_r$, which gives the block
$\mathbf H_r = -\mathbf N_{\boldsymbol{\theta}_r}^{(r,j^r)} \in \mathbb R^{n \times q}$.
All other blocks are zero.
The computation of $\mathbf H_r$ comes from the fact that
$\cpop{\boldsymbol \theta_r} = \theta_r \mathbf W$ when $n = 2$, $q=1$, and
$\cpop{\boldsymbol \theta_r} = \theta_{r,x} \mathbf W_x + \theta_{r,y} \mathbf W_y +
\theta_{r,z} \mathbf W_z$ when $n = 3$, $q = 3$.
\end{proof}
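To make the block structure of $\mathbf N$ and $\mathbf A_\text{RP}$ concrete, the following
Python sketch assembles them for a hypothetical planar example ($n=2$, $q=1$) with two
robots; the offsets, headings and variable names are ours and purely illustrative.
\begin{verbatim}
import numpy as np

n, q, R = 2, 1, 2
W = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def rot2(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical body-frame offsets p^r_{j 1} of the non-reference tags,
# listed per robot, and hypothetical robot headings theta_r.
offsets = {1: [np.array([0.3, 0.0])],
           2: [np.array([0.2, 0.1]), np.array([-0.2, 0.1])]}
thetas = {1: 0.4, 2: -1.1}

blocks = []
for r in (1, 2):
    for p_off in offsets[r]:
        Phi = rot2(thetas[r]) @ p_off                 # Phi_{theta_r}^{(r,j)}
        N_theta = (W @ Phi).reshape(n, q)             # n x q block
        blk = np.zeros((n, (n + q) * R))
        blk[:, n * (r - 1): n * r] = np.eye(n)        # w.r.t. p_{1^r}
        blk[:, n * R + q * (r - 1): n * R + q * r] = N_theta  # w.r.t. theta_r
        blocks.append(-blk)                           # overall minus sign

N = np.vstack(blocks)                                 # n(U-R) x (n+q)R
A_RP = np.vstack([-N, np.eye((n + q) * R)])           # spans ker dF_RP/dp~_U
\end{verbatim}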
With the matrices $\tilde{\mathbf F}_{\mathcal U}$ and $\mathbf A_\text{RP}$ defined in \eqref{eq: Fu extended}
and \eqref{eq: A_RP}, we can follow the discussion of Section \ref{section: constrainted loc} and define
$\mathbf B_\text{RP} := \mathbf A_\text{RP} [\mathbf A_\text{RP}^\top \tilde{\mathbf{F}}_{\mathcal U}
\mathbf A_\text{RP}]^{\dagger} \mathbf A_\text{RP}^\top$ to obtain a CRLB taking the RP constraints
into account.
We can choose the cost function
\begin{equation} \label{eq:constrained CRLB potential - bis}
J_c (\mathbf p)
= \trace{\mathbf C \mathbf B_\text{RP} \mathbf C^\top},
\end{equation}
similarly to \eqref{eq:constrained CRLB potential}, where
$\mathbf C = [\mathbf I_{nU} \; \mathbf 0_{nU, qR}]$ is introduced to select only the uncertainty
in the position estimates.
To compute the gradient with respect to $\mathbf p$ for \eqref{eq:lagrangian_descent}, similarly to
\eqref{eq: gradient constrained CRLB}, we have, for $\xi \in \{x,y,z\}$ :
\[
\der{J_c}{\xi_i}=2\trace{\mathbf C \der{\mathbf A_{\text{RP}}}{\xi_i}\mathbf D^\top}
- \trace{\mathbf D \der{\mathbf F_c}{\xi_i} \mathbf D^\top},
\]
with $\mathbf D := \mathbf C \mathbf A_{\text{RP}} \mathbf F_c^{-1}$, assuming
$\mathbf F_c = \mathbf A_\text{RP}^\top \tilde{\mathbf{F}}_{\mathcal U} \mathbf A_\text{RP}$
to be invertible. To compute the derivative $\partial \mathbf A_{\text{RP}} / \partial \xi_i$,
it is sufficient to know how to compute the terms
$\partial \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)} / \partial \xi_i$.
Since
\[
\boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)} = \mathbf p_{j^r} -\mathbf p_{1^r}
\]
from the constraint \eqref{eq: RP constraint exp}, then
\[
\frac{\partial \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)}}{\partial \xi_{j^r}} = \mathbf e_{\xi}, \;\;
\frac{\partial \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)}}{\partial \xi_{1^r}} = -\mathbf e_{\xi},
\]
for $\xi \in \{x,y,z\}$ and
$\partial \boldsymbol \Phi_{\boldsymbol \theta_r}^{(r,j^r)} / \partial \xi_{j} = \mathbf 0$ if
$j \notin \{1^r,j^r\}$.
\section{Problem Statement}
\label{sec:statement}
\begin{figure}
\centering
\includegraphics{figs/hybrid_deployment.pdf}
\caption{Illustration of the setup in 2D with 3 mobile tags and 3 anchors, 2 of which are fixed.
The links for the ranging pairs are shown. The ranging graph includes $3$ additional implicit
links between the anchors, not shown.}
\label{fig:setup_deployment}
\end{figure}
Consider a set of $N$ nodes in the $n$-dimensional Euclidean space, where $n = 2$ or
$n = 3$. We fix a global reference frame denoted $\mathfrak{F}=(O, \vec{x}, \vec{y}, \vec{z})$
if $n = 3$ or $\mathfrak{F}=(O, \vec{x}, \vec{y})$ if $n = 2$.
For $1 \leq i \leq N$, we write the coordinates of node $i$ in that frame
$\mathbf p_i := [x_i,y_i,z_i]^\top$ if $n = 3$ or $\mathbf p_i := [x_i,y_i]^\top$ if $n = 2$,
and we let $\mathbf p \coloneqq \text{col}(\mathbf p_1, \ldots, \mathbf p_N) \in \mathbb R^{nN}$ denote
the global spatial configuration of the nodes, which can vary with time.
As illustrated on Fig. \ref{fig:setup_deployment}, some of these nodes are carried
by mobile robots, while others could remain at fixed locations.
We suppose that the coordinates of a subset $\mathcal{K}$ of the nodes are
perfectly known in $\mathfrak{F}$, for $1 < |\mathcal{K}|:=K < N$,
and refer to these nodes as \emph{anchors}.
The anchors could be placed at fixed locations or they could be mobile, as long
as we can precisely localize them via external means, e.g., using accurate GNSS receivers.
The other nodes, also mobile or fixed and whose positions are unknown and need
to be estimated, are called \emph{tags} in the following. They form a set denoted
$\mathcal{U}$, with $|\mathcal{U}|:=U=N-K$.
Next, we assume that $P$ pairs of nodes, called ranging pairs, can measure their relative distance
(with each such pair containing at least one tag).
For such a pair of nodes $(i,j)$, we denote $d_{ij}$ the true distance between the nodes
and \linebreak $\tilde d_{ij}$ a corresponding measurement, to which both nodes $i$
and $j$ have access. In the following, we consider measurement models assuming either
additive Gaussian noise
\begin{equation}
\tilde{d}_{ij} = d_{ij}+\nu_{ij}, \; \nu_{ij} \sim \mathcal{N}(0,\sigma^2),
\label{eq:model_meas}
\end{equation}
or multiplicative log-normal noise
\begin{equation}
\tilde{d}_{ij} = d_{ij} \, e^{\mu_{ij}}, \; \mu_{ij} \sim \mathcal{N}(0,\bar{\sigma}^2),
\label{eq:model_meas_lognormal}
\end{equation}
where the noise realizations $\nu_{ij}$ or $\mu_{ij}$ are independent for all $i, j$.
We collect all the measured distances $\tilde{d}_{ij}$ in the vector
$\tilde{\mathbf d}=[\dots,\tilde{d}_{ij}, \dots]^\top \in \r^{P}$.
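As an illustration, a short Python sketch of the two noise models (with hypothetical
noise levels) is given below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma, sigma_bar = 0.10, 0.05    # hypothetical noise parameters

def range_additive(p_i, p_j):
    # Additive Gaussian model.
    d = np.linalg.norm(p_i - p_j)
    return d + rng.normal(0.0, sigma)

def range_lognormal(p_i, p_j):
    # Multiplicative log-normal model.
    d = np.linalg.norm(p_i - p_j)
    return d * np.exp(rng.normal(0.0, sigma_bar))
\end{verbatim}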
We also define an undirected graph $\mathcal{G}=(\mathcal{E},\mathcal{V})$,
called the \emph{ranging graph}, whose vertices $\mathcal V$ are the $N$ nodes
and with an edge in $\mathcal E$ for each ranging pair and for each pair of anchors.
In particular, the subgraph of $\mathcal G$ formed by the anchors
is a complete graph, which is consistent with the fact that the distances
between anchors are implicitly known from their coordinates.
Two nodes linked by an edge in $\mathcal G$ are called neighbors and we denote by
$\mathcal N_i$ the set of neighbors of $i$ or \emph{neighborhood} of $i$, for $1 \leq i \leq N$.
Let $E = P + \frac{K(K-1)}{2}$ be the total number of edges in $\mathcal G$.
A concrete implementation of the previous set-up is as follows.
The nodes could correspond to RF transceivers capable of measuring
their distance with respect to other nodes within their communication radius.
Radiolocation protocols such as Two-Way Ranging (TWR), Time of Arrival (ToA)
or Time Difference of Arrival (TDoA) \cite{sahinoglu_ultra-wideband_2008,Bensky:book16:wirelesPos}
use the timestamps of messages exchanged by the transceivers to estimate the ToF of these messages
and deduce distance measurements, which can be assumed to be of the form \eqref{eq:model_meas},
at least under line-of-sight signal propagation conditions.
Another ranging method consists in measuring the strength of a received signal (RSS)
to deduce the distance to the transmitter using a path loss propagation model \cite{Bensky:book16:wirelesPos}.
This method typically leads to a distance measurement model of the
form \eqref{eq:model_meas_lognormal}, assuming again a simple radio
propagation environment \cite{coulson_statistical_1998,patwari_locating_2005}.
We assume that the nodes implement a cooperative localization scheme, in order to jointly
produce an estimate $\hat{\mathbf p}$ of all their coordinates $\mathbf p$ in $\mathfrak{F}$, based
on the noisy measurements $\tilde{\mathbf d}$ and the knowledge of the anchor coordinates.
As we explain in Section \ref{sec:loca_potentials}, the value of $\mathbf p$ itself
strongly influences the achievable accuracy of its estimate.
Hence, we introduce in that section some real-valued functions
$J_\text{loc}:\r^{n N} \to \r$ that can serve as \emph{localizability potentials},
i.e., such that a lower value for $J_\text{loc}(\mathbf p)$ means that the performance
of an estimator at configuration $\mathbf p$ is expected to be better.
A localizability potential can then serve as an artificial potential for motion
planning \cite{choset_principles_2005}, to guide or constrain the motion of an MRS
to configurations that are favorable for accurate cooperative localization.
For example, one can generate a sequence of configurations $\mathbf p(0),\mathbf p(1), \dots,$
for the MRS of increasingly better localizability, by following the gradient descent scheme
\begin{equation} \label{eq:descent_per_agent}
\mathbf p_{i,k+1} = \mathbf p_{i,k} - \gamma_k \left( \der{J_\text{loc}(\mathbf p_k)}{\mathbf p_{i}} \right )^T,
\end{equation}
for each mobile node $i$, with $\{\gamma_k\}_{k \geq 0}$ a sequence of appropriate stepsizes.
The potential $J_{\text{loc}}$ can also be added to other potentials that enforce
collision avoidance constraints, connectivity maintenance \cite{decentralized_2010},
desired final positions \cite{Khatib:art86:potentials} or
coverage control tasks \cite{Bullo:book09:distributedRobotics}, etc.
A key issue when relying on artificial potentials to provide goal configurations to an MRS is
to ensure that each mobile node $i$ can compute the gradient
$\left(\partial J_{\text{loc}}(\mathbf p(k))/\partial \mathbf p_i\right)^T$ with respect to its coordinates
in \eqref{eq:descent_per_agent} by exchanging information only with its immediate neighbors
in the communication network, \emph{which we assume here to coincide with the ranging graph}
(although in general the anchors will not need to communicate with each other).
This ensures scalability to large networks and improves the robustness of
the network against the loss of nodes. The design of distributed gradient descent schemes
for the localizability potentials is discussed in Section \ref{sec:gradients}.
In summary, the problem considered in this paper is to first define appropriate
functions that can serve as localizability potentials and then design distributed
gradient descent algorithms for these potentials in order to deploy an MRS with
ranging sensors while ensuring that its cooperative localization scheme performs well.
In addition, we show in Section \ref{sec:extensions_rigidity} how to adapt the definition
of the localizability potentials and the gradient descent scheme to a more complex
situation where multiple tags can be carried by the same robot, which introduces
additional constraints on the positions $\mathbf p$ that should be taken into account
by localization and motion planning algorithms.
This set-up can be used in practice to provide more accurate full pose estimates for
the robots, including their orientations.
\begin{remark} \label{rmk: estimate in the loop}
In practice, the tags have access to their position $\mathbf p$ only through their
estimates $\hat{\mathbf p}$. As a result, the gradient descent scheme \eqref{eq:descent_per_agent}
cannot be directly implemented, and a standard approach is to compute and follow the gradient
at the current estimate, i.e., use $\partial J_{loc}(\hat{\mathbf p}(k))/\partial \mathbf p_i$
in \eqref{eq:descent_per_agent}.
Alternatively, \eqref{eq:descent_per_agent} can also be used to compute a sequence of steps,
i.e., plan a future trajectory for the MRS, in which case we assume at the planning stage
that the agents will be able to track that trajectory perfectly.
In this paper, as in much of the related literature, we do not consider the tracking errors
due to the fact that only imperfect position estimates are obtained during the deployment
of an MRS.
Nevertheless, the fact that the planned configurations promote
accurate localization tends to mitigate the impact of these errors.
\end{remark}
\section{Properties of the Fisher Information Matrix}
\label{sec:closed_form_crlb}
In this section, we study certain algebraic properties of the FIM
that are useful for the design of algorithms in the next sections.
For this, we establish connections between the FIM and rigidity theory.
\subsection{Infinitesimal Rigidity}
\label{ss:rigidity_theory}
For the ranging graph $\mathcal{G}=(\mathcal{E},\mathcal{V})$, the incidence matrix
$\mathbf{H} \in \mathbb{Z}^{E \times N}$
is defined by first assigning an arbitrary direction $i \to j$ to each edge $\{i,j\}$ of $\mathcal{E}$,
and then setting each element as follows:
\[
\text{for } \{i,j\} \in \mathcal E, k \in \mathcal V,
H_{i \to j,k} =
\begin{cases}
1 & \text{ if } k = i, \\
-1 & \text{ if } k = j, \\
0 & \text{ otherwise}.
\end{cases}
\]
For concreteness, we use throughout the paper the lexicographic ordering to order the
edges $i \to j$
and hence the rows of $\mathbf H$. As a result, the rows of $\mathbf H$ corresponding
to pairs of tags (in $\mathcal U \times \mathcal U$) appear first, followed by pairs in
$\mathcal U \times \mathcal K$ and finally by pairs of anchors, in $\mathcal K \times \mathcal K$.
Given a ranging graph $\mathcal{G}$, a \emph{framework} is a pair $(\mathcal{G},\mathbf p)$,
where the vector $\mathbf p \in \r^{n N}$ contains the positions of all agents, as before.
The \textit{rigidity function} $\mathbf r : \r^{n N} \to \r^{E}$ of a framework
$(\mathcal{G},\mathbf p)$ is defined componentwise by
\begin{equation}
[\mathbf r(\mathcal G,\mathbf p)]_{i \to j}
= \frac{1}{2} \|\mathbf p_{ij}\|^2, \;\; \forall \{ i,j \} \in \mathcal E,
\label{eq:rigidity_function}
\end{equation}
and its \emph{rigidity matrix} $\mathbf R(\mathcal G, \mathbf p) \in \r^{E \times n N}$ is
the Jacobian $\partial \mathbf r/\partial \mathbf p$ of the rigidity function
\cite{tay_generating_1985,zelazo_decentralized_2015}, which can be written explicitly as
\begin{equation}
\mathbf{R}(\mathcal G, \mathbf p) = \text{diag}(\dots ,\mathbf p_{ij}^\top,\dots) \, [\mathbf H \otimes \mathbf I_n].
\label{eq:rigidity_matrix}
\end{equation}
In other words, the row
$i \to j$
of $\mathbf R(\mathcal G, \mathbf p)$ is
\[
\begin{bmatrix} \mathbf 0 & \ldots & \mathbf 0 & \mathbf p_{ij}^T & \mathbf 0 & \ldots & \mathbf 0
& -\mathbf p_{ij}^T & \mathbf 0 & \ldots & \mathbf 0 \end{bmatrix}
\]
with $\mathbf p_{ij}^T$ occupying the $i^{th}$ block of $n$ coordinates and $-\mathbf p_{ij}^T$ the $j^{th}$ block.
Next, when the node positions vary with time, consider motions that do not change the distances
between nodes in ranging pairs, in other words, motions that keep the rigidity
function constant. These motions must then satisfy
\[
\frac{d \mathbf r(\mathcal G,\mathbf p)}{d t} = \mathbf R(\mathcal G, \mathbf p) \frac{d \mathbf p}{d t} = \mathbf 0,
\]
i.e., the corresponding velocity vectors $d \mathbf p/dt$ must lie in the kernel of
$\mathbf R(\mathcal G, \mathbf p)$. This constraint is rewritten more explicitly in the following
definition.
\begin{definition}[Infinitesimal motion of a framework]
\label{def: inf. motions}
An infinitesimal motion of a framework $(\mathcal G, \mathbf p)$ is
any vector $\mathbf v = \text{col}(\mathbf v_1,\ldots,\mathbf v_N)$ in $\mathbb R^{n N}$,
such that $\mathbf v \in \ker \mathbf R(\mathcal G, \mathbf p)$.
Equivalently, for each edge $\{i,j\} \in \mathcal{E}$, we have
$
\mathbf p_{ij}^T (\mathbf v_i - \mathbf v_j) = 0.
$
\end{definition}
Any framework admits a basic set of infinitesimal motions, namely, the \emph{Euclidean}
infinitesimal motions of the framework \cite{tay_generating_1985,bonin_matroids_1996},
which can be defined for $n=3$ as
\[
\text{Eucl}^3_{\mathbf p} = \left \{ \text{col}(\mathbf v + \boldsymbol{\omega}\times \mathbf p_1,\ldots,
\mathbf v + \boldsymbol{\omega}\times \mathbf p_N) \, | \, \mathbf v, \boldsymbol{\omega} \in \r^3 \right\},
\]
and for $n=2$, with the notation $\mathbf p_i = [x_i,y_i]^T$,
\begin{align*}
\text{Eucl}^2_{\mathbf p} = \Big\{ \text{col}\left( \mathbf v + \omega \begin{bmatrix} y_1 \\ -x_1 \end{bmatrix},\ldots,
\mathbf v + \omega \begin{bmatrix} y_N \\ -x_N \end{bmatrix} \right) \Big | \\
\mathbf v \in \r^2, \omega \in \mathbb R \Big \}.
\end{align*}
These infinitesimal motions correspond to the global rigid translations and rotations
of the whole framework, and it is immediate to verify that the subspace $\text{Eucl}^n_{\mathbf p}$
is always contained in $\ker \mathbf R(\mathcal G,\mathbf p)$.
Infinitesimally rigid frameworks do not admit other infinitesimal motions,
which would correspond to internal deformations.
\begin{definition}[Infinitesimal rigidity]
\label{def: infinitesimal rigidity}
A framework $(\mathcal G,\mathbf p)$ in $\mathbb R^{n N}$
is called infinitesimally rigid if
all its infinitesimal motions are Euclidean, i.e., if
$
\ker \mathbf R(\mathcal G,\mathbf p) = \text{Eucl}^n_{\mathbf p}.
$
\end{definition}
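For instance, the following Python sketch builds the rigidity matrix of a small planar
framework and tests infinitesimal rigidity by checking that
$\dim \ker \mathbf R(\mathcal G, \mathbf p) = 3$, the dimension of $\text{Eucl}^2_{\mathbf p}$;
the framework itself is a hypothetical example of ours.
\begin{verbatim}
import numpy as np

n = 2
# Hypothetical framework: a quadrilateral triangulated by one diagonal.
p = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N, E = p.shape[0], len(edges)

R = np.zeros((E, n * N))
for e, (i, j) in enumerate(edges):
    pij = p[i] - p[j]
    R[e, n * i:n * (i + 1)] = pij        # row "i -> j" of the rigidity matrix
    R[e, n * j:n * (j + 1)] = -pij

dim_ker = n * N - np.linalg.matrix_rank(R)
print("dim ker R =", dim_ker)            # 3 => infinitesimally rigid in the plane
\end{verbatim}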
The following result provides a basis of $\text{Eucl}^{n}_{\mathbf p}$ and is used
in Section \ref{sec:extensions_rigidity}. When $n=3$, with $\mathbf e_x$, $\mathbf e_y$, $\mathbf e_z$
the standard unit vectors in $\mathbb R^3$, define
$\mathbf v_{T_\xi} = \mathbf 1_N \otimes \mathbf e_\xi$ as well as
$\mathbf v_{R_\xi} = \text{col}(\mathbf e_\xi \times \mathbf p_1,\ldots,\mathbf e_\xi \times \mathbf p_N)$,
for $\xi \in \{x,y,z\}$.
Similarly, if $n=2$ and $\mathbf e_x$, $\mathbf e_y$ are the standard unit vectors in $\mathbb R^2$, define $\mathbf v_{T_x} = \mathbf 1_N \otimes \mathbf e_x$,
$\mathbf v_{T_y} = \mathbf 1_N \otimes \mathbf e_y$ and
\[
\mathbf v_{R_z} = \text{col} \left( \begin{bmatrix} -y_1 \\ x_1 \end{bmatrix}, \ldots,
\begin{bmatrix} -y_N \\ x_N \end{bmatrix} \right).
\]
\begin{proposition}
\label{prop:kerthree}
Suppose that $N \geq n$. If $n=2$ and at least $2$ nodes are at distinct locations,
the dimension of $\text{Eucl}^{2}_{\mathbf p}$ is $3$ and a basis of this subspace
is given by $(\mathbf v_{T_x},\mathbf v_{T_y},\mathbf v_{R_z})$.
If $n=3$ and we have at least $3$ nodes that are not aligned, the dimension of
$\text{Eucl}^{3}_{\mathbf p}$ is $6$ and a basis of this subspace is given by
$(\mathbf v_{T_x},\mathbf v_{T_y},\mathbf v_{T_z},\mathbf v_{R_x},\mathbf v_{R_y},\mathbf v_{R_z})$.
\end{proposition}
\begin{proof}
We provide a proof for $n=3$, the case $n=2$ is similar.
The fact that the vectors in the proposition span $\text{Eucl}^{3}_{\mathbf p}$
is clear by definition, so it is sufficient to prove their independence.
Consider a linear combination equal to zero
\begin{align*}
&\alpha_1 \mathbf v_{T_x} + \alpha_2 \mathbf v_{T_y} + \alpha_3 \mathbf v_{T_z}
+ \alpha_4 \mathbf v_{R_x} + \alpha_5 \mathbf v_{R_y} + \alpha_6 \mathbf v_{R_z} \\
&= \text{col}(\mathbf v + \boldsymbol{\omega}\times \mathbf p_1,\ldots,
\mathbf v + \boldsymbol{\omega}\times \mathbf p_N) = \mathbf 0,
\end{align*}
where $\mathbf v = [\alpha_1,\alpha_2,\alpha_3]^T$ and
$\boldsymbol \omega = [\alpha_4,\alpha_5,\alpha_6]^T$. Suppose that the nodes indexed
by $i$, $j$ and $k$ are not aligned. We have from the equation above
$\mathbf v = - \boldsymbol \omega \times \mathbf p_i$, and so
\[
\boldsymbol \omega \times (\mathbf p_j - \mathbf p_i) = \boldsymbol \omega \times (\mathbf p_k - \mathbf p_i) = \mathbf 0.
\]
Since $(\mathbf p_j - \mathbf p_i)$ and $(\mathbf p_k - \mathbf p_i)$ are by assumption independent,
this gives $\boldsymbol \omega = 0$ and hence $\mathbf v = 0$. This proves the independence
of the vectors in the proposition, which therefore form a basis of $\text{Eucl}^{3}_{\mathbf p}$.
\end{proof}
\subsection{Relations between the Rigidity Matrix and the FIM}
Throughout this section, we consider the set of nodes (tags and anchors) to be at positions $\mathbf p$,
with corresponding ranging graph $\mathcal G$. This defines a framework $(\mathcal G,\mathbf p)$,
as discussed in the previous section. The FIM $\mathbf F$ is given by \eqref{eq:ourfim}, whereas the rigidity
matrix $\mathbf R \coloneqq \mathbf R(\mathcal G, \mathbf p)$ is given by \eqref{eq:rigidity_matrix}.
\begin{proposition}
\label{prop: FIM - R relation}
We have $\mathbf F = \mathbf R^\top \mathbf Q \mathbf R$, for the positive definite matrix
$
\mathbf Q = \text{diag} \left( \ldots, 1 / (d_{ij}^{2\kappa} \sigma^2), \ldots \right) \in \r^{E \times E}.
$
\end{proposition}
To explain this result, remark that $\mathbf F$ in \eqref{eq:ourfim} has a structure similar to the
Laplacian matrix $\mathbf L$ of the graph $\mathcal G$ \cite[Chapter 12]{Godsil:book01:algebraicGraphTheory}.
The expression of Proposition \ref{prop: FIM - R relation} then corresponds to the standard relationship
$\mathbf L = \mathbf H^\top \mathbf H$ between the incidence matrix and the usual Laplacian of an undirected graph.
Hence, the FIM $\mathbf F$ can be considered as a type of weighted Laplacian matrix.
In \cite{zelazo_decentralized_2015}, matrices of the form $\mathbf R^\top \mathbf Q \mathbf R$,
for any diagonal matrix $\mathbf Q$, are called (weighted) ``symmetric rigidity matrices''.
Hence, with this terminology, Proposition \ref{prop: FIM - R relation} says that the FIM is
a symmetric rigidity matrix, for a specific set of weights in $\mathbf Q$ determined by the
properties of the measurement noise model. In particular, these weights depend inversely on
the (true) distances between ranging nodes.
\begin{proof}
Starting from \eqref{eq:rigidity_matrix}, we have
\[
\mathbf R^\top \mathbf Q \mathbf R = (\mathbf H^\top \otimes \mathbf I_n) \,
\text{diag} \left(\ldots,\frac{\mathbf p_{ij} \mathbf p_{ij}^\top}{d_{ij}^{2\kappa} \sigma^2},\ldots \right)
(\mathbf H \otimes \mathbf I_n).
\]
Denote by $\mathbf M_e \coloneqq \mathbf p_{kl} \mathbf p_{kl}^\top / (d_{kl}^{2\kappa} \sigma^2)$
the diagonal block of the middle matrix above associated with the edge $e = \{k,l\}$.
Hence, for $i \neq j$, the block $i,j$ of $\mathbf R^\top \mathbf Q \mathbf R$ is
\[
[\mathbf R^\top \mathbf Q \mathbf R]_{ij} = \sum_{e \in \mathcal E} H_{ei} H_{ej} \mathbf M_{e}
= -\frac{\mathbf p_{ij} \mathbf p_{ij}^\top}{d_{ij}^{2\kappa} \sigma^2} \mathsf{1}_{\mathcal N_i}(j) = \mathbf F_{ij},
\]
using the fact that $H_{ei} H_{ej} = -1$ if
$e$ connects $i$ and $j$, and $0$ otherwise. Similarly, for all $i$,
\[
[\mathbf R^\top \mathbf Q \mathbf R]_{ii} = \sum_{e \in \mathcal E} H_{ei}^2 \mathbf M_{e}
= \sum_{j \in \mathcal N_i}
\frac{\mathbf p_{ij} \mathbf p_{ij}^\top}{d_{ij}^{2\kappa} \sigma^2} = \mathbf F_{ii}.
\]
\end{proof}
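The factorization can also be checked numerically; the sketch below (hypothetical positions,
additive noise with $\kappa = 1$) assembles $\mathbf F$ block-wise from \eqref{eq:ourfim}
and verifies that it equals $\mathbf R^\top \mathbf Q \mathbf R$.
\begin{verbatim}
import numpy as np

n, kappa, sigma = 2, 1, 0.1
p = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])   # hypothetical positions
edges = [(0, 1), (1, 2), (0, 2)]
N, E = p.shape[0], len(edges)

R = np.zeros((E, n * N))
Q = np.zeros((E, E))
F = np.zeros((n * N, n * N))
for e, (i, j) in enumerate(edges):
    pij = p[i] - p[j]
    d = np.linalg.norm(pij)
    # Rigidity matrix row and weight.
    R[e, n*i:n*(i+1)], R[e, n*j:n*(j+1)] = pij, -pij
    Q[e, e] = 1.0 / (d**(2 * kappa) * sigma**2)
    # FIM blocks.
    B = np.outer(pij, pij) * Q[e, e]
    F[n*i:n*(i+1), n*j:n*(j+1)] -= B
    F[n*j:n*(j+1), n*i:n*(i+1)] -= B
    F[n*i:n*(i+1), n*i:n*(i+1)] += B
    F[n*j:n*(j+1), n*j:n*(j+1)] += B

assert np.allclose(F, R.T @ Q @ R)
\end{verbatim}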
The following result then follows immediately from the fact that $\mathbf Q \succ 0$
in Proposition \ref{prop: FIM - R relation}.
\begin{corollary} \label{prop:samekernel}
We have $\ker \mathbf F = \ker \mathbf R$.
\end{corollary}
We can now establish a link between infinitesimal rigidity and the invertibility of the partial
FIM $F_\mathcal U$ appearing in \eqref{eq: FIM partitioning}.
\begin{theorem} \label{thm: Fu invertible}
If the framework $(\mathcal G,\mathbf p)$ is infinitesimally rigid and contains at least $n$ anchors
at distinct locations, and if moreover when $n=3$ at least $3$ of these anchors are not
aligned, then $\mathbf F_{\mathcal U}$ is positive definite.
\end{theorem}
\begin{proof}
We give the proof in the more involved case $n=3$.
With the assumed ordering of nodes and edges, the rigidity matrix has the following
block structure
\[
\mathbf R = \begin{bmatrix}
\mathbf R_1 & \mathbf R_2 \\
\mathbf 0 & \mathbf R_{3}
\end{bmatrix}, \text{ with } \mathbf R_1 \in \mathbb R^{P \times nU},
\mathbf R_3 \in \mathbb R^{\frac{K(K-1)}{2} \times nK}.
\]
In other words, the rows of the matrix $\mathbf R_1$ correspond to the edges
internal to $\mathcal U$ and between $\mathcal U$ and $\mathcal K$, whereas
$\mathbf R_3$ is the rigidity matrix of the complete subgraph formed by the
anchors and the links between them. Now, we have
$\mathbf F_\mathcal U = \mathbf R_1^\top \mathbf Q_1 \mathbf R_1$, with $\mathbf Q_1$ diagonal and
invertible, as in Proposition \ref{prop: FIM - R relation}, so
$\ker \mathbf F_{\mathcal U} = \ker \mathbf R_1$.
Consider some vector $\mathbf x_1 \in \mathbb{R}^{nU}$ with $\mathbf x_1 \in \ker \mathbf R_1$.
Then,
\begin{equation}
\mathbf R \begin{bmatrix} \mathbf x_1 \\ \mathbf 0 \end{bmatrix} = \begin{bmatrix}
\mathbf R_1 & \mathbf R_2 \\
\mathbf 0 & \mathbf R_3
\end{bmatrix}
\begin{bmatrix} \mathbf x_1 \\ \mathbf 0 \end{bmatrix}
= \mathbf 0,
\end{equation}
hence $\text{col}(\mathbf x_1,\mathbf 0)$ is in $\ker \mathbf R$. Since $\mathcal G$ is infinitesimally
rigid, there must exist $\mathbf v$, $\boldsymbol \omega$ in $\mathbb R^3$ such that
\[
\begin{bmatrix} \mathbf x_1 \\ \mathbf 0 \end{bmatrix} =
\text{col}(\mathbf v+\boldsymbol \omega\times \mathbf p_1,\ldots,\mathbf v + \boldsymbol \omega \times \mathbf p_N).
\]
In particular, for the 3 anchors that are not aligned, indexed by $i$, $j$ and $k$, we must have
\[
\mathbf v+ \boldsymbol \omega \times \mathbf p_i = \mathbf v + \boldsymbol \omega \times \mathbf p_j
= \mathbf v + \boldsymbol \omega \times \mathbf p_k = \mathbf 0.
\]
From this, we conclude as in the proof of Proposition \ref{prop:kerthree} that
$\mathbf v = \boldsymbol \omega = \mathbf 0$, which in turns implies $\mathbf x_1 = \mathbf 0$.
Hence $\ker \mathbf F_{\mathcal U} = \{\mathbf 0\}$, i.e., $\mathbf F_{\mathcal U} \succ 0$.
\end{proof}
\begin{remark}
If we have only one tag, then one can show that $\mathbf F_{\mathcal U}$ is invertible
if and only if we have at least $n$ anchors and the nodes' locations span an affine space
of full dimension $n$ (i.e., we have $3$ non aligned nodes if $n=2$, and $4$ non coplanar
nodes if $n=3$). Note that if we have just $n$ anchors, we cannot uniquely localize the tag
in general, even with perfect measurements, because the intersection of $n$ spheres
in $\mathbb R^n$ generically gives two possible locations. Hence, even when $\mathbf F_{\mathcal U}$ is invertible,
the localization problem might not be uniquely solvable. Uniqueness of the localization solution
can be characterized by the stronger notion of global rigidity \cite{Aspnes:TMC06:theoryLoc},
which however is more complex to check if $n=2$ and for which no exact test is currently known
if $n=3$.
\end{remark}
Theorem \ref{thm: Fu invertible} can be used to produce an initial node placement
and choose ranging links to guarantee that $\mathbf F_{\mathcal U}$ is already
invertible at the start of the deployment. For this, we should ensure that $(\mathcal G, \mathbf p)$
is infinitesimally rigid. One convenient way to satisfy this condition (in fact, the
stronger condition of global rigidity) is to construct a \emph{triangulation graph}
\cite{Aspnes:TMC06:theoryLoc,Moore:Sensys04:networkLoc}: starting from a set of at least
$n+1$ anchors, we add tags one by one, with each new tag connected to at least $n+1$ previous
nodes that are in general position ($3$ non-aligned nodes if $n=2$, $4$ non-coplanar nodes if $n=3$).
Although this construction requires more anchors and links than the strict minimum necessary for
the invertibility of $\mathbf F_{\mathcal U}$, the resulting network supports efficient distributed
localization algorithms that are robust to measurement noise \cite{Moore:Sensys04:networkLoc}.
\subsection{Deployment of a UGV Carrying Several Tags}
Here we illustrate
the results of Section \ref{sec:extensions_rigidity} and the performance
difference between
leveraging information only on relative distances or on the full relative positions.
Consider the robot shown on Fig. \ref{fig:robotsetup}, following the kinematic
model \eqref{eq:kine_monocycle} and carrying two tags $\mathcal{U}=\{1,2\}$ placed
at positions $\mathbf p_1^r = [1,0]^\top$ and $\mathbf p_2^r = [-1,0]^\top$ in the robot frame,
centered at $\mathbf p_M = \frac{1}{2}(\mathbf p_1+\mathbf p_2)$.
Three fixed anchors $\mathcal{K}=\{3,4,5\}$ are placed at the coordinates
$\mathbf p_3 = [-5, 5]^\top$, $\mathbf p_4=[5, -5]^\top$ and $\mathbf p_5 = [5, 5]^\top$
in the absolute frame. All nodes communicate and obtain range measurements with
each other, following the Gaussian additive model \eqref{eq:model_meas}
with $\sigma=0.1~\mathrm{m}$. The heading of the robot is $\theta$ and $\exp \cpop{\theta}$
is the rotation matrix between $\mathfrak{F}$ and the robot frame.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\linewidth]{figs/RD_RP_robot_setup.pdf}
\caption{Robot equipped with two tags.}
\label{fig:robotsetup}
\end{figure}
In scenario (D), we include the constraint $d_{12}=2~\mathrm{m}$ as in Section \ref{ss:ext:dist},
and define the cost function as \eqref{eq:constrained CRLB potential}. In scenario (RP), we include
the constraint $\mathbf p_{12}^r=[2,0]^\top$ as in Section \ref{ss:CRLB_RP} and define the cost
function as \eqref{eq:constrained CRLB potential - bis}, so that it can be compared to the previous one.
We compute the potentials and their derivatives with the results of
Section \ref{sec:extensions_rigidity} and implement the scheme
\eqref{eq:lagrangian_descent} to compute a sequence of desired poses,
which are then reached using the pose controller presented in
\cite{astolfi_exponential_1999}.
At $k=0$, the initial configuration of the robot in both cases is given
by $\mathbf p_M(0) = [-15,-4]^\top$ and $\theta(0) = -\pi/8$.
The cost and robot trajectories are shown on Fig. \ref{fig:robotMultitagConv}.
The steady state configuration prescribed by \eqref{eq:lagrangian_descent}
is feasible for the robot, thanks to the dual penalization of the rigidity
constraint.
\begin{figure}[h]
\centering
\includegraphics[width=0.65\linewidth]{figs/convergence_montage_edit_december21.pdf}
\caption{Deployment results.}
\label{fig:robotMultitagConv}
\end{figure}
The following constrained least-squares estimators
$\hat{\mathbf p}_\mathcal{U}^\text{D}$ and $\hat{\mathbf p}_\mathcal{U}^\text{RP}$
of $\mathbf p_\mathcal{U}$ are implemented in scenarios (D) and (RP)
\[
\begin{cases}
\hat{\mathbf p}_\mathcal{U}^\text{D}
=
\underset{\hat{\mathbf p}_\mathcal{U}}{\text{argmin}} ~Q(\hat{\mathbf p}_\mathcal{U}), \\
\text{s.t. } \hat{d}_{12}-d_{12}=0
\end{cases}
\text{and}
\begin{cases}
\hat{\mathbf p}_\mathcal{U}^\text{RP}
=
\underset{\hat{\mathbf p}_\mathcal{U}}{\text{argmin}} ~Q(\hat{\mathbf p}_\mathcal{U}), \\
\text{s.t. } \hat{\mathbf p}_{12}- \exp \cpop{\hat \theta} \mathbf p_{12}^r=\mathbf 0
\end{cases}
\]
where $\hat \theta := \text{atan2}(\hat y_{12}, \hat x_{12})$ and
$Q(\mathbf p_\mathcal{U})$ is defined in \eqref{eq:quadratic_cost}.
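For completeness, a possible implementation of the constrained estimator of scenario (D)
with a generic SQP solver is sketched below; the simulated data and the solver choice are
ours, and scenario (RP) is analogous, with the scalar distance constraint replaced by the
vector constraint above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
sigma, d12 = 0.1, 2.0
anchors = np.array([[-5.0, 5.0], [5.0, -5.0], [5.0, 5.0]])
p_true = np.array([[-1.0, 0.0], [1.0, 0.0]])        # hypothetical true tag positions

def ranges(p_tags):
    # Tag-anchor ranges followed by the tag-tag range.
    out = [np.linalg.norm(t - a) for t in p_tags for a in anchors]
    out.append(np.linalg.norm(p_tags[0] - p_tags[1]))
    return np.array(out)

d_tilde = ranges(p_true) + rng.normal(0.0, sigma, 7)  # noisy measurements

def cost(x):                                          # Q(p_U)
    return np.sum((ranges(x.reshape(2, 2)) - d_tilde) ** 2)

cons = {"type": "eq",
        "fun": lambda x: np.linalg.norm(x[0:2] - x[2:4]) - d12}

x0 = np.array([-2.0, 1.0, 2.0, 1.0])                  # hypothetical initial guess
sol = minimize(cost, x0, constraints=[cons], method="SLSQP")
p_hat = sol.x.reshape(2, 2)
\end{verbatim}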
We evaluate the localization performance by computing the empirical MSE $\widetilde{MSE}_k:=\frac{1}{2}[\widetilde{MSE}_{1,k}+\widetilde{MSE}_{2,k}]$
for the two tag positions, using
the same process as in Section \ref{ss:sim:perf}, with $M = 500$ simulations.
The results shown in Table \ref{tab:monte} indicate that the motion
significantly improves the estimation accuracy. Moreover, the relative
position constraint provides a clear improvement compared to
using only the relative distance information.
\begin{table}[h]
\caption{Monte Carlo Simulation Results.}
\label{tab:monte}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $\widetilde{MSE}_0$ & Confidence & $\widetilde{MSE}_F$ & Confidence & ET \\
\hline
(D) & $4.28~\mathrm{m^2}$ & $\pm 0.03~\mathrm{m^2}$ & $0.93~\mathrm{m^2}$ & $\pm 0.02~\mathrm{m^2}$ & $1.70~\mathrm{s}$ \\
\hline
(RP) & $2.97~\mathrm{m^2}$ & $\pm 0.04~\mathrm{m^2}$ & $0.63~\mathrm{m^2}$ & $\pm 0.002~\mathrm{m^2}$ & $1.89~\mathrm{s}$\\
\hline
\end{tabular}
\end{table}
Table \ref{tab:monte} also provides the Execution Times (ET) of the deployment algorithms
for all the steps shown on Fig. \ref{fig:robotMultitagConv}.
The simulation is coded in \texttt{Matlab R2018b} and runs on
a computer equipped with an
\texttt{Intel i7} processor.
The ET for the (RP) scenario is about 10\% larger than for (D), due to the
increased complexity of evaluating $\mathbf A_\text{RP}$ and its derivative.
In summary, compared to (D), deployment using (RP) leads to a significant improvement
of the precision and a moderate increase of the computation time.
\section{Simulations}
\label{sec:sim_standard}
In this section, we present two deployment scenarios. The first is a structure
inspection problem by a multi-robot network maintaining localizability while
the task is performed. The second concerns the deployment of an Unmanned Ground
Vehicle (UGV) carrying several tags, where we include the distance and relative
position constraints in the CRLB-based potential.
\subsection{Cooperative Structure Inspection}
Consider $N=5$ mobile robots, three of which are anchors, i.e., $\mathcal{K} = \{3,4,5\}$,
assumed to be independently perfectly localized (e.g., via RTK GNSS), and the other two
are tags, i.e., $\mathcal{U}=\{1,2\}$.
Each robot carries an UWB transceiver to communicate and take ranging measurements
with any other robot, via TWR \cite{mai_local_2018,prorok_models_2013}.
The ranging measurements follow the model \eqref{eq:model_meas}.
The tags are required to go underneath an $L\times H := 6~\mathrm{m} \times 10 ~\mathrm{m}$
rectangular structure, represented on Fig. \ref{fig:d-opt-traj}, in order to inspect it.
Once under the structure, the tags lose access to the independent localization
system (e.g., because GNSS signals are blocked).
The anchors' task is then to provide accurate localization for the tags,
using the algorithms described in this paper.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/D-opt-trajectory.pdf}
\caption{Trajectories for the cooperative inspection scenario.}
\label{fig:d-opt-traj}
\end{figure}
\subsubsection{Motion Planner Design}
First, we assume that the tags $\mathcal U = \{1,2\}$ perform the inspection by following
specified straight paths under the structure, described by the sequences of waypoints
$\mathbf p_{1,l}^d = [L/3,a l]^\top$ and $\mathbf p_{2,l}^d = [2L/3,a l]^\top$ with
$a = 0.1~\mathrm{m}$, $1 \leq l \leq 100$.
The tags follow this path without taking their localization performance
into account.
On the other hand, the purpose of the mobile anchors is to provide accurate localization
for the tags, measured here by $J_\text{loc}(\mathbf p) = J_D(\mathbf p)$ for its
computational efficiency. Moreover, the anchors must not wander away from the
tags nor go under the structure, to maintain good localization.
Hence, we introduce a potential
$J_\text{con}(\mathbf p_\mathcal{K}) = \sum_{i\in \mathcal{K}} \sum_{e\in \mathcal{B}_i} g(d(i,e),d_s)$
penalizing the anchors if they approach the boundaries of specified bounding boxes $\mathcal{B}_i$,
see Figure \ref{fig:d-opt-traj}.
We use standard repulsive potentials $g(d(i,e),d_s)=0.5(1/d(i,e)-1/d_s)^{2}\mathsf{1}_{d(i,e)<d_s}$,
where $d(i,e)$ is the distance between agent $i$ and edge $e$ of box $\mathcal B_i$,
and $d_s = 1.5~\mathrm{m}$ defines the influence region of the edge \cite{Lynch2017ModernRM}.
Therefore, we define the overall potential for the anchors as
$J_\text{anchors}(\mathbf p)= K_l J_D(\mathbf p) + K_c J_\text{con}(\mathbf p_\mathcal{K})$
where $K_l, K_c>0$ are constant parameters.
The anchors $i \in \mathcal{K}$
implement the descent gradient scheme
\begin{equation}
\mathbf p_{i,k+1} =
\mathbf p_{i,k} -
K_l \left( \der{J_D}{\mathbf p_{i,k}} \right)^\top
- K_c \left( \der{J_\text{con}}{\mathbf p_{i,k}} \right)^\top.
\label{eq:simulations:potdesc}
\end{equation}
For $\xi_i \in \{x_i,y_i\}$, we compute
$\partial J_D/\partial \xi_i$
analytically by \eqref{eq:der:Dopt}.
The expressions of the derivatives $\partial J_\text{con} / \partial {\xi_i}$ of the repulsive potential
are standard \cite{Lynch2017ModernRM}.
The gradient descent scheme is used to obtain desired waypoints
for the anchors,
which we can track using controllers on the robots.
For concreteness, assume that all robots are identical with unicycle kinematics \cite[Chap. 4]{corke_robotics_2011}
\begin{equation}
\dot{x}_M = v\cos(\theta), \quad
\dot{y}_M = v\sin(\theta), \quad
\dot{\theta} = \omega
\label{eq:kine_monocycle}
\end{equation}
where $\omega$ and $v$ are the rotational and translational velocities
(see Fig. \ref{fig:controleptapt}),
$(M,\vec{x}^r,\vec{y}^r)$ is the robot frame and $\theta$ is the robot heading with
respect to $\mathfrak{F}$.
The transceiver's coordinates in the robot frame are
$\mathbf p^r = [\alpha,\beta]^\top$, and we assume that $\alpha \neq 0$.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figs/controle_pt_a_pt}
\caption{Robot and anchor trajectory tracking configuration.}
\label{fig:controleptapt}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figs/d-opt_with_entropy.pdf}
\caption{Localizability potential $J_D(\mathbf p(k))$ and empirical plot of $\ln \det \mathsf{cov}(\hat{\mathbf p}(k))$.}
\label{fig:d-opt-pot}
\end{figure}
With $\tilde{\mathbf u} \in \mathbb R^2$ the coordinates of
the anchor velocity in $\mathfrak{F}$,
the following Proportional-Integral (PI) controller
\begin{equation}
\tilde{\mathbf u} = K_p(\mathbf p_d(t) - \mathbf p(t)) + K_i \int_{\tau = 0}^t (\mathbf p_d(\tau) - \mathbf p(\tau))d\tau,
\label{eq:velcontroller}
\end{equation}
for $K_p, K_i >0$,
allows the anchors to track the desired trajectory $\mathbf p_d$. This provides
a velocity command $\mathbf u := [v,\omega]^\top$ for the robot,
since $\mathbf u = \mathbf T (\theta) \tilde{\mathbf u}$ \cite[p. 529]{Lynch2017ModernRM} with
\[
\mathbf T(\theta) =
\frac{1}{\alpha}
\begin{bmatrix}
\alpha \cos \theta - \beta \sin \theta & \alpha \sin \theta + \beta \cos \theta \\
- \sin \theta &
\cos \theta
\end{bmatrix}.
\]
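A possible discrete-time implementation of this tracking loop is sketched below; the
sampling period, gains and function names are ours and purely illustrative.
\begin{verbatim}
import numpy as np

Kp, Ki, dt = 3.0, 0.5, 0.05       # hypothetical gains and sampling period
alpha, beta = 0.5, 0.5            # transceiver offset in the robot frame

def T(theta):
    c, s = np.cos(theta), np.sin(theta)
    return (1.0 / alpha) * np.array([[alpha * c - beta * s, alpha * s + beta * c],
                                     [-s, c]])

def track(p_d, p, theta, integral):
    # PI velocity command in the global frame, then mapped to (v, omega).
    err = p_d - p
    integral = integral + err * dt
    u_tilde = Kp * err + Ki * integral
    v, omega = T(theta) @ u_tilde
    return v, omega, integral
\end{verbatim}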
\subsubsection{Simulation and Performance Analysis}
\label{ss:sim:perf}
We set the initial positions for the tags to
$\mathbf p_{1,0} = [1,-1/2]^\top,\mathbf p_{2,0} = [5,-1/2]^\top$
and for the anchors to $\mathbf p_{3,0} = [-2,0]^\top,\mathbf p_{4,0} = [-1.5,0]^\top,\mathbf p_{5,0} = [8,0]^\top$.
We choose the parameter values in \eqref{eq:simulations:potdesc} as
$K_l = 2$ and $K_c = 1/100$. To take into account the physical constraints on the robot velocities,
we bound the stepsizes $||\mathbf p_{i,k+1}-\mathbf p_{i,k}||\leq \Delta_\text{vel}$, by
implementing the iterations
\[
\mathbf p_{i,k+1}=\mathbf p_{i,k}-\left(\frac{\partial J_\text{anchors}}{\partial \mathbf p_{i,k}}\right)^\top \times
\min \left\{1,\frac{\Delta_\text{vel}}{||\partial J_\text{anchors}/\partial \mathbf p_{i,k}||}\right \}.
\]
In our simulation, we set $\Delta_\text{vel} = 0.2~\mathrm{m}$.
We set $\alpha = 0.5~\mathrm{m}$ and $\beta = 0.5~\mathrm{m}$
and the PI controller gains are $K_p = 3$, $K_i = 0.5$.
The controller \eqref{eq:velcontroller} follows the trajectory computed
by \eqref{eq:simulations:potdesc} with a maximum tracking error of about $10$ cm.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figs/Dopt_mse_montage.pdf}
\caption{Empirical mean-squared error.}
\label{fig:d-opt-mse}
\end{figure}
As shown by the trajectories in Fig. \ref{fig:d-opt-traj}, the tags
follow their assigned path and
the anchors maintain localizability. Initially, all robots are nearly aligned,
a geometry with poor localizability. As shown on Fig. \ref{fig:d-opt-pot},
the anchor deployment quickly and significantly decreases the localizability potential.
In Fig. \ref{fig:d-opt-mse}, at each iteration $k$ and for each tag $i$, we plot
the empirical total MSE, i.e.,
$\widetilde{MSE}_{i,k} = \frac{1}{M} \sum_{\rho=1}^M \epsilon_{i,k}^\rho$,
evaluated via $M$ Monte Carlo simulations
computing $\epsilon_{i,k}^\rho = ||\hat{\mathbf p}_{i,k}^\rho -\mathbf p_{i,k}||^2$,
with $\hat{\mathbf p}_{i,k}^\rho$ the estimate of $\mathbf p_{i,k}$ at simulation $\rho$,
obtained by solving the least squares problem
\begin{equation}
\small
\hat{\mathbf p}_{\mathcal{U},k}^\rho =
\underset{\mathbf p_{\mathcal{U}}\in \r^{2 U}}{\text{argmin}} Q(\mathbf p_\mathcal{U}) \coloneqq
\sum_{i\in \mathcal{U}} \sum_{j\in \mathcal N_i} [\hat{d}_{ij,k}-\tilde{d}_{ij,k}^\rho]^2,
\label{eq:quadratic_cost}
\normalsize
\end{equation}
where $\hat{d}_{ij,k}:=||\hat{\mathbf p}_{i,k} - \hat{\mathbf p}_{j,k}||$
(with $\hat{\mathbf p}_{j,k}$ the anchor position if $j \in \mathcal{K}$)
and $\tilde{d}_{ij,k}^\rho$ the measurements at simulation $\rho$, obtained
with the Gaussian additive model \eqref{eq:model_meas} for $\sigma = 10~\mathrm{cm}$.
We simulate $M=500$ realizations and give $3\sigma$ confidence bounds $b_+,b_-$
on Fig. \ref{fig:d-opt-mse}.
These bounds are defined by $b_{\pm,k} = \widetilde{MSE}_{i,k} \pm 3 \widetilde{\sigma}_{i,k}/\sqrt{M}$,
where $\widetilde{\sigma}_{i,k}^2 = \frac{1}{M-1} \sum_{\rho=1}^M [\epsilon_{i,k}^\rho - \widetilde{MSE}_{i,k}]^2$ is the empirical variance of the samples. We remark that the motion planning method improves
the precision of the estimates significantly during the first $25$ iterations of motion planning,
dividing the MSE of tag $1$ by a factor of ten. We plot $\ln \det \tilde{\Sigma}_k$
on Fig. \ref{fig:d-opt-pot}, with $\tilde{\Sigma}_k$ the empirical covariance
of $\hat{\mathbf p}_{\mathcal{U},k}$ obtained using \eqref{eq:quadratic_cost},
which shows that localizability as defined by the D-Opt criterion is
properly maintained.
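For reference, the empirical MSE and its $3\sigma$ confidence bounds are computed as in the
following sketch, where hypothetical perturbed values stand in for the per-run estimates
that would be obtained by solving \eqref{eq:quadratic_cost}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M = 500
p_true = np.array([1.0, 2.0])                         # hypothetical true position
p_hat = p_true + rng.normal(0.0, 0.1, size=(M, 2))    # placeholder estimates

eps = np.sum((p_hat - p_true) ** 2, axis=1)           # squared error per run
mse = eps.mean()                                      # empirical MSE
std = eps.std(ddof=1)                                 # empirical std of the samples
b_plus = mse + 3.0 * std / np.sqrt(M)                 # 3-sigma confidence bounds
b_minus = mse - 3.0 * std / np.sqrt(M)
\end{verbatim}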
\section{Conclusion and Perspectives}
This paper presents deployment methods applicable to Multi-Robot Systems (MRS)
with relative distance measurements, which enforce good localizability.
Constrained Cram\'er-Rao Lower Bounds (CRLB) are used to predict the localization error
of a given configuration, assuming Gaussian ranging measurement models.
A connection between Fisher information matrices and rigidity matrices is highlighted,
which yields useful properties, e.g., for initial MRS placement.
The CRLB is used to design artificial potentials,
so that gradient descent schemes can be developed to plan robot
motions that enhance the overall localizability of the network.
Moreover, we show how to distribute the execution of the resulting
algorithms among the robots, so that they only need to communicate
with their neighbors in the ranging graph.
Finally, we extend the methodology to MRS with robots carrying multiple tags,
again leveraging the theory of equality-constrained CRLBs.
Future work
could focus on using the proposed measures of localizability to constrain
other types of motion planning algorithms.
\section*{Acknowledgements}
The authors wish to thank Drs. \'Eric Chaumette, Ga\"el Pag\`es and Ali Naouri
from ISAE-Supa\'ero (France) for helpful discussions.
\section{Localizability Potentials}
\label{sec:loca_potentials}
This section is concerned with defining artificial potentials that can be used
as localizability potentials. The proposed definitions require that we first
recall some elements from estimation theory related to the CRLB.
\subsection{Constrained Cram\'er-Rao Lower Bound}
\label{ss:pstat_crlb}
We assume that the position estimator implemented by the MRS is unbiased, i.e., satisfies
$\mathbb{E}[\hat{\mathbf p}] = \mathbf p$. We then focus on finding configurations $\mathbf p$
for which the error covariance matrix
$\mathbb{E} \left[ (\hat{\mathbf p} - \mathbf p)(\hat{\mathbf p} - \mathbf p)^\top \right]$ for $\hat{\mathbf p}$,
which is then also the covariance matrix $\mathsf{cov}[\hat{\mathbf p}]$,
is ``small'' in some sense.
More precisely, since the error covariance depends on the specific estimator used
and can be difficult to predict analytically,
we use the CRLB, a lower bound on the covariance of any unbiased estimator, to quantify
the quality of a configuration $\mathbf p$.
Although this implicitly assumes that an estimator can be constructed to achieve or approach
this lower bound, this methodology is commonly used in optimal experiment design
and sensor placement \cite{pukelsheim_optimal_2006,Ucinski:book04:optimalSensing}.
In general, the CRLB corresponds to the inverse of the Fisher Information Matrix (FIM),
which we define below.
\begin{definition}[FIM] \label{def: FIM}
Let $\mathbf x \in \r^p$ be a deterministic parameter vector and $\mathbf y \in \r^q$
a random observation vector, for some positive integers $p,q$.
Define $f : \r^q \times \r^p \to \r^+$ the Probability Density Function (PDF) of $\mathbf y$,
which depends on the parameter $\mathbf x$, so that we write $f(\mathbf y;\mathbf x)$.
Under some regularity assumptions on $f$ (see \cite[Chap. 14]{haug_bayesian_2012}),
the ${p\times p}$ Fisher Information Matrix (FIM) of this PDF is defined as
\begin{equation} \label{eq: FIM def}
\mathbf F(\mathbf x) = - \mathbb{E}_{\mathbf y} \left[ \der{^2 \ln f(\mathbf y;\mathbf x)}{\mathbf x \partial \mathbf x^\top} \right].
\end{equation}
The matrix $\mathbf F(\mathbf x)$ is symmetric and positive semi-definite.
\end{definition}
In the position estimation problem, the parameters of interest are the node coordinates
in the vector $\mathbf p \in \r^{n N}$,
whereas the random observations are contained in the vector $\tilde{\mathbf d}$.
As computed in \cite{patwari_locating_2005},
the FIM of the PDF $f(\tilde{\mathbf d};\mathbf p)$ is an $n N\times n N$ matrix
that depends on $\mathbf p$ and can be decomposed into $n \times n$ blocks
$\mathbf F_{ij}$ such that
\begin{align}
\begin{split}
\mathbf F_{ij}(\mathbf p) &= \mathbf F_{ij}(\mathbf p_{ij}) = - \frac{1}{d_{ij}^{2 \kappa}\sigma^2} \mathbf p_{ij} \mathbf p_{ij}^\top \,
\mathsf{1}_{\mathcal N_i}(j), \text{ if } i\neq j, \\
\mathbf F_{ii}(\mathbf p) &= -\sum_{j \neq i} \mathbf F_{ij},
\end{split}
\label{eq:ourfim}
\end{align}
where $\mathbf p_{ij} \coloneqq \mathbf p_i - \mathbf p_j$,
and $\kappa=1$ for the additive noise model \eqref{eq:model_meas}
or $\kappa=2$ for the multiplicative noise model \eqref{eq:model_meas_lognormal}
(with $\sigma$ replaced by $\bar{\sigma}$ in the latter case).
Note however that estimating the anchor positions is not needed, since the locations
of these nodes are known. The fact that $\hat{\mathbf p}_i := \mathbf p_i$ for all $i \in \mathcal{K}$,
with these coordinates $\mathbf p_i$ known, should be taken into account by an estimator of the
tag positions, and hence should also be taken into account when bounding the covariance of these
estimators. We can rely on the theory of CRLBs with equality constraints on the estimated
parameters in order to include these trivial constraints on the anchor positions and later
in Section \ref{sec:extensions_rigidity} also additional rigid constraints on the tag positions.
\begin{theorem}[Equality constrained CRLB \cite{hero_lower_1990}] \label{thm:gorman}
Let $\mathbf x \in \r^p$ be a deterministic parameter vector and $\mathbf y \in \r^q$
a random observation vector, for some positive integers $p,q$.
Let $\mathbf h: \r^p \to \r^c$, for $c\leq p$, be a differentiable function such that
$\mathbf h(\mathbf x)= \mathbf 0$.
Let $\hat{\mathbf x}$ be an unbiased estimate of $\mathbf x$ also satisfying $\mathbf h(\hat{\mathbf x}) = \mbf0$
and with finite covariance matrix.
Define $\mathbf F_c \coloneqq \mathbf A^\top \mathbf F \mathbf A$, the \emph{constrained Fisher Information Matrix},
where $\mathbf A$ is any matrix whose columns span $\ker \der{\mathbf h}{\mathbf x}$,
and $\mathbf F$ is the FIM defined in \eqref{eq: FIM def}.
Then, the following inequality holds
\begin{equation}
\mathsf{cov}[\hat{\mathbf x}] \succeq
\mathbf A \left(\mathbf F_c\right)^\dagger \mathbf A^\top =: \mathbf B_c
\label{eq:crlb}
\end{equation}
where $\dagger$ denotes the Moore-Penrose pseudo-inverse \cite[p. 21]{petersen_matrix_2012}.
\end{theorem}
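In practice, the bound \eqref{eq:crlb} can be evaluated numerically as in the short sketch
below, which builds a basis of $\ker \partial\mathbf h/\partial\mathbf x$ from an SVD of the
constraint Jacobian; the function name is ours, and the pseudo-inverse handles a possibly
singular $\mathbf F_c$.
\begin{verbatim}
import numpy as np

def constrained_crlb(F, dh_dx):
    # B_c = A (A^T F A)^+ A^T, with the columns of A spanning ker(dh/dx).
    _, S, Vt = np.linalg.svd(dh_dx)
    rank = int(np.sum(S > 1e-10))
    A = Vt[rank:].T                   # orthonormal basis of ker(dh/dx)
    Fc = A.T @ F @ A
    return A @ np.linalg.pinv(Fc) @ A.T
\end{verbatim}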
Consider now the problem of estimating the vector of tag coordinates $\mathbf p_{\mathcal U} \in \r^{nU}$
based on the distance measurements $\tilde{\mathbf d}$ and knowledge of the anchor coordinates
$\mathbf p_{\mathcal K} \in \r^{nK}$.
Order the nodes so that $\mathbf p = \text{col}(\mathbf p_{\mathcal U}, \mathbf p_{\mathcal K})$,
and partition the FIM defined in \eqref{eq:ourfim} accordingly as
\begin{equation} \label{eq: FIM partitioning}
\mathbf F = \begin{bmatrix}
\mathbf F_\mathcal{U} & \mathbf F_\mathcal{UK} \\
\mathbf F_\mathcal{UK}^\top & \mathbf F_\mathcal{K}
\end{bmatrix},
\end{equation}
with in particular $\mathbf F_\mathcal{U}$ a symmetric positive semi-definite matrix
of size $nU \times nU$. We then have the following result.
\begin{proposition} \label{prop: constrained CRLB for network}
Let $\hat{\mathbf p}_{\mathcal U}$ be an unbiased estimate of the tag positions $\mathbf p_{\mathcal U}$,
based on the measurements $\tilde{\mathbf d}$ and the knowledge of the anchor positions
$\mathbf p_{\mathcal K}$. Then
\begin{align} \label{eq: cov tags CRLB simple}
\mathsf{cov}[\mathbf{\hat{p}}_\mathcal{U}] \succeq \mathbf F_\mathcal{U}^\dagger(\mathbf p).
\end{align}
\end{proposition}
\begin{proof}
This result is a corollary of Proposition \ref{prop: constrained CRLB for network refined}
stated below, with $\mathbf f_c \equiv \mathbf 0$ in \eqref{eq:feasible set} and so
$\mathbf A_\mathcal{U} = \mathbf I_{n U}$.
\end{proof}
\subsection{Localizability Potentials and Optimal Design}
\label{section: localizability potentials unconstrained}
Given \eqref{eq: cov tags CRLB simple}, the following functions are possible candidates
to define potential functions that penalize configurations of the ranging network leading
to poor localizability
\begin{align}
J_A(\mathbf p) &= \trace{\mathbf F_\mathcal{U}^{-1}(\mathbf p)} \;\; \text{(A-Optimal Design)},
\label{eq:pot:A} \\
J_D(\mathbf p) &= - \ln \det \{\mathbf F_\mathcal{U}(\mathbf p)\} \;\; \text{(D-Optimal Design)}, \label{eq:pot:D} \\
J_E(\mathbf p) &= - \lambda_{\min} \{\mbf F_\mathcal{U} (\mathbf p)\} \;\; \text{(E-Optimal Design)}, \label{eq:pot:E}
\end{align}
assuming in the first two cases that $\mathbf F_{\mathcal U}(\mathbf p)$ is invertible.
In the following, we refer to the functions $J_A$, $J_D$ and $J_E$ as the
A-Opt, D-Opt and E-Opt potentials respectively, using standard terminology
from optimal experiment design \cite{pukelsheim_optimal_2006,sagnol_optimal_2010}.
In each case, configurations $\mathbf p$ for which $J(\mathbf p)$ takes large values correspond
to geometries for which the error covariance matrix of an unbiased position estimator will
necessarily be ``large'' in a sense defined by the choice of potential. Hence,
for \eqref{eq:pot:A}, we have from \eqref{eq: cov tags CRLB simple} that
$J_A(\mathbf p)$ is a lower bound on $\trace{\mathsf{cov}[\hat{\mathbf p}_{\mathcal U}]}$, which
represents the total mean-squared error (MSE) of the unbiased estimator $\hat{\mathbf p}_{\mathcal U}$.
Similarly, \eqref{eq:pot:D} corresponds to a lower bound
on $\ln \det (\mathsf{cov}[\hat{\mathbf p}_{\mathcal U}])$, which would be equal (up to a constant)
to the statistical entropy of $\hat{\mathbf p}_{\mathcal U}$,
if this estimate were to follow a normal distribution. Finally, still assuming
$\mathbf F_\mathcal{U} \succ 0$, minimizing $J_E$ in \eqref{eq:pot:E} aims to minimize
the maximum eigenvalue of $\mathbf F_\mathcal{U}^{-1}$ (equal to $1/\lambda_{\min}(\mathbf F_\mathcal{U})$),
which is a lower bound on the maximum eigenvalue or
induced $2$-norm of $\mathsf{cov}[\hat{\mathbf p}_{\mathcal U}]$.
Potentials like $J_E$ are often used to maintain the connectivity
\cite{Kim:TAC06:maximizingEigenvalue, decentralized_2010,siciliano_maintaining_2009}
or rigidity \cite{zelazo_decentralized_2015,sun_distributed_2015} of an MRS, which
are closely related problems.
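Given the tag block $\mathbf F_{\mathcal U}$ of the FIM as a symmetric positive definite
array, these three costs can be evaluated as in the following sketch (the function name is
ours and only meant as an illustration).
\begin{verbatim}
import numpy as np

def localizability_potentials(F_U):
    # A-, D- and E-optimal design costs from the tag block of the FIM.
    J_A = np.trace(np.linalg.inv(F_U))        # A-Opt
    _, logdet = np.linalg.slogdet(F_U)
    J_D = -logdet                             # D-Opt, assumes F_U positive definite
    J_E = -np.linalg.eigvalsh(F_U).min()      # E-Opt
    return J_A, J_D, J_E
\end{verbatim}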
Once a potential has been chosen, it can be used to move the
nodes to configurations of low potential values, where the localization accuracy is
expected to be high. This can be done for example by descending the gradient of
the potential, as discussed in Sections \ref{sec:gradients} and \ref{sec:extensions_rigidity}.
\begin{remark}
Another a priori possible potential is
\[
J_T(\mathbf p) = - \trace{\mbf F_\mathcal{U}(\mathbf p)}.
\]
Configurations $\mathbf p$ that minimize this potential are called T-optimal designs \cite{pukelsheim_optimal_2006}.
However, in our case we can compute
\[
J_T(\mathbf p) = - \alpha \sum_{\{i,j\}\in \mathcal{E}} d_{ij}^{2-2\kappa},
\]
with $\alpha$ a positive constant. Hence, in the case of additive Gaussian noise \eqref{eq:model_meas},
$\kappa = 1$ and $J_T$ is constant, so that it cannot be used to optimize $\mathbf p$.
In the case of multiplicative noise \eqref{eq:model_meas_lognormal}, we have $\kappa=2$ so
$J_T(\mathbf p) = -\alpha \sum_{\{i,j\} \in \mathcal{E}} d^{-2}_{ij}$ becomes a simple attractive
potential. In this case, $J_T$ cannot be used alone as a potential, since its global minimum
is trivially achieved when all agents occupy the same position.
In view of these remarks, $J_T$ is not considered further in the following.
\end{remark}
\section{Introduction}
\IEEEPARstart{M}{obile} robots require accurate, computationally efficient and low power
localization systems to navigate their environment and perform their assigned tasks.
Positioning can rely on various technologies, e.g., wheel odometry, computer vision
or long- and short-range radio frequency (RF) systems, each with
distinct advantages and drawbacks, depending on the environment and requirements.
For example, the most common methods of terrestrial localization rely on RF signals from
Global Navigation Satellite Systems (GNSS) to achieve meter- to centimeter-level accuracy,
but these systems do not operate indoors or when the line of sight to the satellites is
obstructed, and are sensitive to interference.
Multiple robots can collaborate to improve the accuracy and coverage of their
individual localization solution \cite{sheu_distributed_2010,Prorok:ICRA12:relativeLoc}.
In particular, they can leverage proximity \cite{sheu_distributed_2010}, relative position \cite{Prorok:ICRA12:relativeLoc}, bearing \cite{xu_aoa_2008}, or distance measurements \cite{wei_noisy_2015,carlino_robust_2018} between them to estimate their individual
positions in a common reference frame.
Relative bearing measurements can be provided by monocular cameras for example,
range measurements by short-range RF systems, and relative position measurements
by LiDARs or stereo cameras. In this paper, we focus on collaborative localization
in Multi-Robot Systems (MRS) using only range measurements. This is motivated by the
fact that accurate distance measurements can be deduced from Time-of-Flight (ToF)
measurements obtained from inexpensive short-range RF communication systems,
e.g., Ultra-Wide Band (UWB) transceivers \cite{sahinoglu_ultra-wideband_2008,mueller_fusing_2015,cano_kalman_2019}.
In particular, such systems associate distance measurements unambiguously with
pairs of robots, simply by having the robots broadcast their IDs.
Once the robots have measured their relative distances, many algorithms exist to
compute from these measurements an estimate of the robot positions, see, e.g., \cite{Buehrer:IEEE18:collaborativeLocSurvey} for a recent survey.
These algorithms can be centralized or decentralized, applicable to static or
mobile networks, appropriate for real-time localization or require longer
processing times, etc. Two major factors determine the ability of these
algorithms to solve the position estimation problem and their accuracy.
First, enough relative distance measurements should be available, which links
the feasibility of the location estimation problem to the concept of \emph{rigidity} \cite{tay_generating_1985,connelly_generic_2005,Aspnes:TMC06:theoryLoc}
of the \emph{ranging graph} corresponding to these measurements.
Second, satisfying the graph-theoretic condition of rigidity is still insufficient
to guarantee accurate localization of the individual agents, when measurement noise
is inevitably present. For example, a group of robots that are almost aligned can
form a rigid formation if enough range measurements are available, but can only
achieve poor localization accuracy in practice. Indeed, the spatial \emph{geometry}
of the network strongly influences the accuracy of position estimates in the presence
of measurement noise \cite{patwari_locating_2005}, a phenomenon known
as Dilution of Precision (DOP) in the navigation literature \cite[Chap. 7]{groves_principles_2013}.
We call here \textit{localizability} the ability to accurately estimate the positions
of the individual robots of an MRS in a given geometric configuration,
using relative measurements.
In contrast to static sensor networks or GNSS, an MRS can actively adjust its geometry,
\textit{e.g.}, some of the robot positions and orientations, in order to improve
its overall localizability. This results in a coupling between the motion planning
and localization problem for the group.
Maintaining the rigidity of the ranging graph during the motion
of an MRS is a stronger condition than maintaining its connectivity,
but similar techniques can be used to address both problems.
In particular, we can capture the degree of connectivity or rigidity of the graph
using a function of the first non-zero eigenvalue of a type of Laplacian matrix,
and guide the MRS along paths or configure its nodes in ways that increase this function.
This is the approach adopted for example
in \cite{Kim:TAC06:maximizingEigenvalue,siciliano_maintaining_2009,decentralized_2010}
for improving connectivity and in \cite{Shames:Automatica09:LocMinimization,zelazo_rigidity_2012,zelazo_decentralized_2015,sun_distributed_2015}
for improving rigidity.
This article builds on this principle to optimize localizability.
Following an approach that we initially proposed in
\cite{le_ny_jerome_localizability-constrained_2018, Cano:ICRA21:constrainedCRLB},
we leverage Cram\'er Rao Lower Bounds (CRLBs) \cite[Chap. 7]{haug_bayesian_2012} to construct
localizability potentials, which can then be used as artificial potentials
\cite{choset_principles_2005} to drive the motion of an MRS toward geometric configurations
promoting good localization.
The CRLB provides a lower bound on the covariance of any unbiased position estimate
constructed from the relative range measurements available in the robot network.
Tighter covariance lower bounds exist, such as Barankin bounds \cite{mcaulay_barankin_1971},
but an advantage of the CRLB is that it is relatively easy to compute and admits
a closed-form expression for the problem considered here,
assuming Gaussian noise \cite{patwari_locating_2005}.
Moreover, as we show in Section \ref{sec:closed_form_crlb}, the CRLB for Gaussian
noise is in fact closely related to the so-called \emph{rigidity matrix} of the ranging graph.
This can be expected since the Gaussian CRLB is known to correspond to DOP expressions
for least-squares estimators, which are implicitly derived in \cite{Shames:Automatica09:LocMinimization} for example and also linked to the rigidity matrix.
The CRLB only provides a lower bound on estimation performance and there is generally
no guarantee that a position estimator actually achieves it.
Nonetheless, using this bound as a proxy to optimize sensor placement is a well accepted
approach \cite{Ucinski:book04:optimalSensing}.
An important advantage of this approach is that the motion planning strategy becomes
independent of the choice of position estimator implemented in the network.
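To make the Gaussian CRLB concrete, the following script is a minimal sketch, written for this discussion rather than taken from the works cited above, of how the FIM of Section \ref{sec:closed_form_crlb} can be assembled for a toy planar network. The data layout, the names \texttt{positions}, \texttt{edges} and \texttt{sigma}, and the assumption of independent additive Gaussian range noise with a common standard deviation are illustrative choices only.
\begin{verbatim}
import numpy as np

def range_fim(positions, edges, sigma=0.1):
    # Fisher Information Matrix for range-only localization, assuming each
    # pairwise distance is corrupted by independent zero-mean Gaussian noise
    # of standard deviation sigma.
    # positions: (n, d) array of robot positions; edges: list of (i, j) pairs.
    n, d = positions.shape
    J = np.zeros((n * d, n * d))
    for i, j in edges:
        u = positions[i] - positions[j]
        u = u / np.linalg.norm(u)           # unit vector along the measured edge
        a = np.zeros(n * d)
        a[d*i:d*i+d] = u                    # row of the (normalized) rigidity matrix
        a[d*j:d*j+d] = -u
        J += np.outer(a, a) / sigma**2
    return J

# Four robots in an elongated, nearly collinear configuration.
p = np.array([[0.0, 0.0], [1.0, 0.05], [2.0, -0.05], [3.0, 0.0]])
E = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3), (0, 3)]
print(np.sort(np.linalg.eigvalsh(range_fim(p, E)))[:5])
\end{verbatim}
The three smallest eigenvalues vanish for any planar configuration, reflecting the global translations and rotation that range measurements cannot observe; the size of the remaining eigenvalues quantifies the localizability of the shape, and it is scalar summaries of this kind that the localizability potentials introduced below are designed to improve.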
Our main contributions and the structure of this paper are as follows.
First, we formulate in Section \ref{sec:statement} a novel motion planning problem allowing
an MRS to optimize its localizability. This is done by minimizing appropriate cost functions
based on the Fisher Information Matrix (FIM) appearing in the CRLB, as detailed in
Section \ref{sec:loca_potentials}. We derive in Section \ref{sec:closed_form_crlb} a closed-form
expression for the FIM and establish an explicit connection with the weighted rigidity matrices
introduced in \cite{zelazo_decentralized_2015,sun_distributed_2015}. One of the benefits
of establishing this connection is to see that various artificial potentials can be
constructed from the FIM to capture localizability, as discussed in the literature on optimal
experimental design \cite{pukelsheim_optimal_2006} or optimal sensing with mobile robots, see,
e.g., \cite{Ucinski:book04:optimalSensing,LeNy:CDC09:activeSensingGP,carrillo_comparison_2012}.
Some of these functions may be more conveniently optimized than the smallest nonzero
eigenvalue, which is the standard potential used for connectivity and rigidity maintenance.
We study the structure of the FIM matrix and compute the gradients of the
localizability potentials, which allows us to propose in Section \ref{sec:gradients} new
distributed algorithms enabling the deployment of large groups of robots carrying ranging
sensors in a scalable and robust manner.
We then extend the results in Section \ref{sec:extensions_rigidity} to robots carrying
multiple ranging sensors, by leveraging the theory of \emph{constrained} CRLBs \cite{hero_lower_1990}
to account for the presence of additional rigidity constraints.
This can be viewed as an alternative and simpler approach to deriving intrinsic
CRLBs on the manifold of rigid motions \cite{bonnabel_intrinsic_2015,chirikjian_wirtinger_2018}.
Finally, Section \ref{sec:sim_standard} illustrates the performance of the proposed algorithms
in simulation for two simple robot deployment scenarios.
This article builds on the conference paper \cite{le_ny_jerome_localizability-constrained_2018},
which introduced the concept of localizability potentials for the deployment of MRS
in two dimensions. Here we extend the methodology to three dimensions, introduce new
distributed optimization schemes, discuss useful properties of the FIM and make a clearer
connection with rigidity theory.
We also generalize the conference paper \cite{Cano:ICRA21:constrainedCRLB},
which considered robots carrying multiple sensors, by developing the results
in three dimensions and integrating the full relative position information
in the CRLB rather than just relative distances, which is significantly more challenging.
We demonstrate in simulation the improvement achievable with this extension.
\textbf{Notation:} We write vectors and matrices with a bold font.
The all-one vector of size $p$ is denoted $\mathbf 1_p$.
The notation $\mathbf x = \text{col}(\mathbf x_1,\ldots,\mathbf x_n)$ means that the
vectors or matrices $\mathbf x_i$ are stacked on top of each other,
and $\text{diag}(\mathbf A_1,\ldots,\mathbf A_k)$ denotes a block diagonal matrix
with the matrices $\mathbf A_i$ on the diagonal.
The nullspace of a matrix $\mathbf A$ is denoted $\ker \mathbf A$.
For $\mathbf A$ and $\mathbf B$ symmetric matrices of the same dimensions,
$\mathbf A \succeq \mathbf B$ means that $\mathbf A - \mathbf B$ is positive semidefinite
and $\mathbf A \succ \mathbf B$ that it is positive definite.
If $\mathbf A$ is a symmetric matrix, $\lambda_{\min}(\mathbf A)$ and $\lambda_{\max}(\mathbf A)$ denote
its minimum and maximum eigenvalues.
The time derivative of a vector-valued function $t \mapsto \mathbf x(t)$ is denoted $\dot{\mathbf x}$.
The expectation of a random vector $\mathbf x$ is denoted $\mathbb{E}[\mathbf x]$ and its covariance matrix
$\mathsf{cov}[\mathbf x] = \mathbb{E} \left[ \left(\mathbf x - \mathbb{E} \left[\mathbf x \right] \right)
\left(\mathbf x - \mathbb{E} \left[\mathbf x\right] \right)^T\right]$.
For a differentiable function $f: \mathbb R^p \to \mathbb R^q$, $\frac{\partial f(\mathbf p)}{\partial \mathbf p}$
represents the $q \times p$ Jacobian matrix of $f$, with components $\partial f_i(\mathbf p)/\partial p_j$
for $1 \leq i \leq q$, $1 \leq j \leq p$.
When $q = 1$, $\partial^2 f(\mathbf p)/\partial \mathbf p \partial \mathbf p^T$ denotes the Hessian,
i.e., the square matrix with components $\partial^2 f(\mathbf p)/\partial p_i \partial p_j$.
Finally, for a set $\mathcal S$, $\mathsf 1_{\mathcal S}(i) = 1$ if $i \in \mathcal S$ and $0$ otherwise.
Understanding and controlling mechanical stability is important
to fields ranging from structural engineering to granular
materials and glasses. We do not want buildings or bridges to
fail, and we want to be able to manage the elastic response of
materials of everyday life. This review focusses on the
elastic and dynamical properties of periodic ball-and-spring
networks that are at or near mechanical collapse. This may
seem like a fairly narrow subject, but it is one of
considerable richness and impact.
In a remarkable 1864 paper \cite{Maxwell1864}, James Clerk
Maxwell undertook the first systematic study of the mechanical
stability of frames consisting of points, which we will refer
to as \emph{sites}, with connections, which we will usually
refer to as \emph{bonds}, between them as a model for such
real-world structures as the Warren Truss (patented in 1848)
shown in \fref{fig:WarrenTruss}. He defined a ``stiff" frame
as one in which ``the distance between two points cannot be
altered without changing the length of one or more
connections". He showed that a stiff frame containing $N$
sites in $d$ dimensions requires
\begin{equation}
N_B = dN-f(d)
\label{eq:Maxwell-rule1}
\end{equation}
connections, where $f(d)=d(d+1)/2$ is the number of rigid
translations and rotations under free boundary conditions.
Under periodic boundary conditions, which Maxwell did not
consider, $f(d) = d$. This relation, often referred to as
\emph{Maxwell's rule}, can be reexpressed as a critical
coordination number ($z \equiv 2 N_B/N$),
\begin{equation}
z^N_c = 2 d - 2\frac{f(d)}{N}.
\label{eq:z_c}
\end{equation}
If $z<z^N_c$, the system is not stiff, and if $z>z^N_c$, it is.
As we shall discuss in more detail in the next section,
Maxwell's rule for the stability of frames requires
modification \cite{Calladine1978}. Nevertheless, it provides a
useful and universally used benchmark for the analysis of the
stability of frames. We will refer to free frames, i.e., ones
under free boundary conditions, satisfying Maxwell's rule as
\emph{Maxwell frames}. It is fairly common practice to use the
term \emph{isostatic} for frames satisfying Maxwell's rule.
Though isostatic frames do satisfy Maxwell's rule, they have
more restrictive properties, which we will discuss in
\sref{sec:Max-Index}.
Our principal interest here is in frames whose sites can be
collected into identical contiguous unit cells whose origins
lie on a Bravais lattice and which fill some region of space.
We will refer to these frames as \emph{lattices} and if they
are subjected to periodic (free) boundary conditions as
periodic (free) lattices. Any frame, even those that are not
lattices, e.g., whose sites are randomly distributed, can also
be subjected to periodic (free) boundary conditions in which
case we will refer to them as periodic (free) frames. For
reasons that we will justify more fully later, we will use the
term \emph{periodic Maxwell frame (lattice)} for periodic
frames (lattices) with average coordination number $z_c=2d =
z_c^\infty$ rather than $z_c^N$. Free frames can be liberated
from periodic ones by cutting of order $N^{(d-1)/d}$ bonds.
Maxwell's analysis applies to frames with an arbitrary number
of sites and bonds. In addition to being a workhorse of the
structural engineering community
\cite{Heyman1999,Kassimali2005}, it has seen extensive use
(though not always with attribution to Maxwell) in physics,
materials science, and mathematics. It is a critical component
of the theory of structural glasses
\cite{Phillips1981,Thorpe1983,Phillips1985,ThorpePhi2000a,BoolchandTho2005},
rigidity percolation
\cite{Feng1984,JacobsTho1995,JacobsTho1996,GuyonCra1990,ChubynskyTho2007},
framework silicates like $\beta$-cristobalite
\cite{Hammonds1996}, jamming of packed spheres
\cite{Liu1998,Wyart2005a,WyartWit2005b,vanHecke2010,LiuNag2010a},
biopolymer networks
\cite{BroederszMac2014,Head2003,Wilhelm2003,Heussinger2006,HeussingerFre2007a,HuismanLub2011,Broedersz2011,MaoLub2013a},
and of some theories of protein folding \cite{WellsTho2005}.
Rigidity percolation and jamming generally involve central
forces only, and the Maxwell relation and its generalization
can be applied to these problems directly. Structural glasses
and biopolymer networks have bending forces that require a
modification of the Maxwell rules, and we will not make much
contact with them in what follows.
\begin{figure}
\centering
\includegraphics{figure-1.pdf}
\caption{(a) The Warren Truss. This is an isostatic structure composed of equilateral triangles with $N=9$ sites
and $N_B = 15$ connections, so that $2 N - N_B = 3$. The lower left site (indicated by the small triangle)
is fixed with respect to the earth, and the lower
right site is constrained to
move only horizontally along a track with wheels. These constraints reduce the number of
free degrees of freedom of the sites by $3$ to $N_{\text{free}}=2 N-3 =15 = N_B$. (b) A reduced version of
the isostatic Warren Truss indicating conventions for labeling sites and bonds.
The arrows indicate bond directions following the convention described in section \ref{ssec:equil-comp},
Eqs.~(\ref{eq:bond-vector}) to (\ref{eq:equil2}).}
\label{fig:WarrenTruss}
\end{figure}
Both randomly packed spheres (\fref{fig:spheres-at-jamming}(a))
and diluted elastic networks (\fref{fig:spheres-at-jamming}(b))
pass from a state that does not support stress to one that does
as the number of bonds or contacts increases. At the critical
point separating these two states, the coordination number is
at or near the Maxwell critical value of $z_c$. The rigidity
percolation transition is generally continuous
\cite{Feng1984,JacobsTho1995,JacobsTho1996,ChubynskyTho2007} in
two-dimensions, and it is well-described by the language of
critical phenomena, applied so successfully to percolation in
random resistor networks
\cite{StaufferAha1994,Kirkpatrick1973}. Both shear and bulk
elastic moduli increase from zero as a power law in $(z-z_c)$,
and there is a divergent length scale associated with the
probability that two sites are in the same rigid cluster as a
function of their separation. In three dimensions, the rigidity
transition is apparently first order \cite{ChubynskyTho2007}.
Jammed systems exhibit different behavior. They are constructed
by increasing the density of spheres in a fixed volume until
they first resist a further increase. At this point, they are
jammed, and their bulk modulus $B$ (which resists compression)
is nonzero \cite{OhernNag2002,OHern2003}, but their shear
modulus $G$ is of order $1/N$
\cite{GoodrichNag2012,GoodrichNag2014}. At large $N$, $G$
increases linearly in $\Delta z = (z-z_c)$
\cite{Durian1995,OhernNag2002,OHern2003}. The density of states
of systems with $\Delta z >0$ exhibits a crossover from
Debye-like behavior ($\sim \omega^{d-1}$ in $d$ dimensions as a
function of frequency $\omega$) at low frequency to a
flat-plateau beyond a characteristic frequency $\omega^* \sim
(\Delta z)$ \cite{OHern2003,SilbertNag2005}. There are two
diverging length scales, $l^*\sim(\Delta z)^{-1}$ and $l_T\sim
(\Delta z)^{-1/2}$, which can be extracted
\cite{SilbertNag2005} by, respectively, comparing $\omega^*$ to
the longitudinal and transverse sound frequencies $c_L l^{-1}$
and $c_T l^{-1} \sim (\Delta z)^{1/2} l^{-1}$ at wavenumber $q
\sim l^{-1}$, where $c_L \sim B^{1/2}$ and $c_T \sim G^{1/2}$
are the longitudinal and transverse sound velocities. Other
interpretations of these lengths invoke finite-size effects in
the modes of a finite sample cut from a larger one
\cite{WyartWit2005b,Wyart2005a,Wyart2005} for $l^*$ and
stability of the system to boundary perturbations
\cite{GoodrichLiu2013a,Schoenholz2103}, crossover between
strong and weak scattering at $\omega^*$
\cite{XuNa2009,XuNag2010}, or correlations in systems with
$z<z_c$ \cite{LernerWya2012,DuringWya2013} for $l_T$.
\begin{figure}
\centering
\includegraphics{figure-2.pdf}
\caption{(a) packed discs of two sizes just above the
jamming transition. The dark red lines are chains of force that
are a response to the pressure required to pack the particles
at $z>z_c$ (courtesy of Carl Goodrich). (b) A representative
bond-diluted lattice near the rigidity percolation threshold of
$z_c \approx 3.96$.}
\label{fig:spheres-at-jamming}
\end{figure}
Though the transition to elastic stability in both rigidity
percolation and jamming occurs at or near the Maxwell critical
point $z=z_c$, properties at and above the two transitions are
quite different. Presumably, the difference is a reflection of
the different geometrical arrangements of the Maxwell lattices
at or near the two transitions. An interesting question then
is what precisely are the differences. Can they be quantified,
and if so how? These questions then raise a broader question of
whether there are wider classes of Maxwell frames that lead to
elastic and vibrational structure different from those of the
percolation and jamming lattices. As a first step toward
answering that question, it is useful to study periodic Maxwell
lattices, like the square and kagome lattices shown in
\fref{fig:square-kagome} with NN bonds and $z=2d=4$, and the
reduced coordination lattices they spawn when bonds are cut to
create free surfaces. They can easily be moved away from the
Maxwell limit toward greater rigidity by uniformly
\cite{Souslov2009} or randomly \cite{MaoLub2010} adding further
neighbor bonds, and they have an advantage that they lend
themselves to exact calculations of elastic response both in
the uniform and random systems (via effective medium theory).
In addition, in spite of their simplicity, they present a
surprisingly rich phenomenology that inform us generally about
Maxwell frames.
\begin{figure}
\centering
\includegraphics{figure-3.pdf}
\caption{(a) Square, (b) distorted square, (c) kagome lattice, and (d) twisted kagome lattice}
\label{fig:square-kagome}
\end{figure}
Much of the language
\cite{Calladine1978,Pellegrino1993,PellegrinoCal1986,GuestHut2003}
for probing the mechanical properties of frames used in this
review was developed by members of the Structural Mechanics
group at the University of Cambridge, which provided an elegant
generalization of Maxwell's relation based on general
principles of linear algebra and used it to deepen our
understanding of the more subtle properties of frames. Though
well-known in the engineering community, it is less so in the
physics and materials science communities. This review will,
therefore, begin in section \ref{sec:Max-Index} with a fairly
comprehensive review of this work. It is followed by five more
sections and four appendices. Section \ref{sec:elas-lim} deals with
the long-wavelength elastic limit and how it can be calculated
using the developments in references
\cite{Pellegrino1993,PellegrinoCal1986}. Section
\ref{sec:periodic-lattices} tailors the formalism of the
preceding sections specifically to periodic lattices. Section
\ref{sec:examples} looks in detail at the properties of the
three simple lattices in \fref{fig:square-kagome}, and section
\ref{sec:topological} explores lattices with topologically
protected states at interfaces, which are closely related to
topologically protected electronic states in the quantum Hall
effect \cite{halperin82,haldane88}, polyacetylene \cite{ssh}
and topological insulators
\cite{km05b,bhz06,mb07,fkm07,HasanKane2010,QiZhang2011}.
Section \ref{sec:review} presents some final thoughts and
speculates about future directions. The four appendices provide
mathematical detail and display derivations of various
important relations.
\Sref{sec:examples} contains several subsections.
\Sref{ssec:square-lattice} with its simple analytical treatment
of the square lattice sets the stage for the study in
\sref{ssec:kagome} of the simple kagome lattice. Both lattices
have lines of zero modes in their Brillouin zones arising from
their having straight sample-traversing filaments of colinear
bonds. \Sref{ssec:twisted-kagome} then explores how the simple
geometrical operation of ``twisting" neighboring triangles in
the standard kagome lattice [\fref{fig:square-kagome}(c)] to
convert it to the twisted lattice [\fref{fig:square-kagome}(d)]
gaps the phonon spectrum of the simple kagome lattice at all
wavenumbers except at the origin and leads to zero-frequency
Rayleigh surface modes \cite{Landau1986} not present in either
the square or standard kagome lattices.
This review is intended to be as much a pedagogical
introduction to periodic Maxwell lattices as an overview of the
subject. Except in section \ref{sec:topological}, which
requires the use of some fairly subtle concepts detailed in
reference \cite{KaneLub2014}, it provides sufficient
calculational detail that even someone totally new to the
subject should be able to follow it. Though, as the
introductory paragraphs have indicated, this review was
inspired in part by jamming and rigidity percolation, it is not
about these subjects, and they will be considered only when
they have direct overlap with the ideas being presented.
\section{Generalized Maxwell relation as an index theorem
\label{sec:Max-Index}}
\subsection{The Maxwell rule and states of self stress \label{ssec-Max-SSS}}
Each site in $d$-dimensions has $d$ independent translational
degrees of freedom, and in the absence of constraints on point
motion, a collection of $N$ points without connections has
$N_{\text{free}}=dN$ zero-frequency displacement modes, which
we will refer to as \emph{zero modes}. In the presence of
constraints, $N_{\text{free}}$ will be less than $dN$. Each
connection reduces the number of zero modes by one. Thus if
there are $N_B$ connections and no constraints, there are
\begin{equation}
N_0 = dN - N_B
\label{eq:simplecount}
\end{equation}
zero modes. Of these, $f(d)$ are the trivial ones associated
with rigid translations and rotations. Any other zero modes
involve internal displacements of the sites and are generally
called \emph{mechanisms} \cite{Calladine1978} in the
engineering literature and \emph{floppy modes} in the physics
literature \cite{Thorpe1983}. Equation (\ref{eq:simplecount})
reexpressed in terms of the number of mechanisms $M$ is
\begin{equation}
M= dN - N_B - f(d) .
\label{eq:Maxwell-mech}
\end{equation}
We will refer to Eqs.~(\ref{eq:simplecount}) and
(\ref{eq:Maxwell-mech}) as the \emph{Maxwell count}. A frame is
stiff if it has no mechanisms. Setting $M=0$ yields the Maxwell
rule [Eqs.~(\ref{eq:Maxwell-rule1})].
\Fref{fig:selftstress1}(a) depicts a simple frame that obeys
Maxwell's count. It consists of $N=6$ sites and $N_B=7$ bonds,
and it has $N_0 = 2 \times 6 - 7 =5$ zero modes and $M = N_0 - 3 =
5-3=2$ mechanisms.
\begin{figure}
\centering
\includegraphics{figure-4.pdf}
\caption{(a) to (c) Frames satisfying the Maxwell rule. (a) has $6$ sites, $7$ bonds,
$5$ zero modes, and two mechanisms indicated by the dotted bonds. (b) has $6$ sites, $8$ bonds, $4$ zero modes,
and one mechanism. (c) and
(d) are constructed from (b) by adding an additional diagonal bond. (c) satisfies the
Maxwell rule with only the three trivial zero modes. (d) has $4$ zero modes
and one state of self stress indicated by the arrows on the bonds in the left square.}
\label{fig:selftstress1}
\end{figure}
The simple Maxwell rule does not apply to all frames
\cite{Calladine1978}. Consider the two-square frame with $N=6$
sites and $N_B=8$ bonds shown in \fref{fig:selftstress1}(b). It
has one mechanism as expected from the Maxwell count. If an
extra bond is added, Maxwell's rule would say that the frame is
stiff with no mechanisms. The extra bond, however, can be
placed as a diagonal in the right square
[\fref{fig:selftstress1}(c)] or as an extra diagonal in the
left square [\fref{fig:selftstress1}(d)]. In the first case,
there are no mechanisms, and Maxwell's rule applies. In the
second case, however, the mechanism present before the extra
bond was added remains, and the Maxwell count is violated. But
the left square with crossed diagonal bonds has an extra
\emph{redundant} bond not needed for its rigidity. It also has
a new and interesting property: the outer bonds of the square
can be placed under tension (compression) and the inner
diagonal bonds under compression (tension) such that the net
force on all sites is zero. This is a \emph{state of self
stress}, which, because of its repeated use in this review, we
will usually abbreviate as SSS. This theme can clearly be
repeated with each added bond either decreasing the number of
zero modes or increasing the number of states of self stress to
yield the modified Maxwell count \cite{Calladine1978}:
\begin{equation}
N_0 = dN -N_B + N_S \qquad \text{ or } \qquad N_0 - N_S = dN -N_B ,
\label{eq:index1}
\end{equation}
where $N_S$ is the number of SSSs. This is an \emph{index
theorem} \cite{AtoyahSin1963,Nakahara2003}, which we will
derive in section \ref{ssec:equil-comp}, relating mode and
self-stress count to geometric properties of the lattice. We
will refer to it simply as the Index theorem.
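As a quick numerical illustration of the Index theorem, the counts quoted above for the frames of \fref{fig:selftstress1} follow directly from \eref{eq:index1} once the number of states of self stress is specified. The short sketch below is our own bookkeeping, not part of any standard package.
\begin{verbatim}
def index_count(d, n_sites, n_bonds, n_sss=0):
    # Index theorem: N_0 = d N - N_B + N_S.
    # Mechanisms of a free frame: M = N_0 - f(d), with f(d) = d(d+1)/2.
    n0 = d * n_sites - n_bonds + n_sss
    return n0, n0 - d * (d + 1) // 2

print(index_count(2, 6, 7))           # frame (a): (5, 2), two mechanisms
print(index_count(2, 6, 8))           # frame (b): (4, 1), one mechanism
print(index_count(2, 6, 9))           # frame (c): (3, 0), stiff
print(index_count(2, 6, 9, n_sss=1))  # frame (d): (4, 1), the mechanism survives
\end{verbatim}
The redundant diagonal in \fref{fig:selftstress1}(d) raises $N_S$ by one and therefore leaves the mechanism intact, exactly as described above.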
Two types of mechanisms can be distinguished: ``finite" ones in
which finite-amplitude displacements of sites stretch no bonds
and ``infinitesimal" ones in which bond lengths do not change
to first order in the magnitude of displacements but do so to
second (or higher) order. The Index theorem \cite{Calladine1978}, as we
shall show below, follows from the assumption of a linear
relation between site displacements and bond lengths, and it
applies only to infinitesimal displacements, i.e., it counts
both finite and infinitesimal mechanisms but does not identify
which is which. Figures \ref{fig:selftstress2}(a)-(b) show how
a finite mechanism can be converted into two infinitesimal
mechanisms and one \SSSa. A configuration of self-stress that
is particularly important for the current study is any straight
line of bonds under periodic boundary conditions, which we will
refer to as straight \emph{filaments}, as shown in figures
\ref{fig:selftstress2}(c)-(d). Changing the straight filament
to a zigzagged one removes this state of self stress. On the
other hand the ``zigzaging" periodic ladder configuration shown
in \fref{fig:selftstress2}(e) has one \SSSa, rather than the
two that a straight ladder would have. Tensions alternate in
sign from bond to bond in this \SSSa, a property, which will be
important in what follows, that prevents it from having any
zero wave-number component.
\begin{figure}
\centering
\includegraphics{figure-5.pdf}
\caption{States of self stress: (a) A frame with four sites ($\mathbf{1}$-$\mathbf{4}$) and four bonds
(italic $\it{1}$, $\it{2}$, $\it{3}$, and $\it{4}$) with
$4$ zero modes and one finite mechanism (dashed lines); (b) A frame in which the length of
bond $\it{4}$ is equal to the sum of the lengths of bonds $1$ to $3$. Now there is
a \SSS in which bond $\it{4}$ is under compression (tension) and the other three
are under tension (compression). Both sites $\mathbf{2}$ and $\mathbf{3}$ can undergo
infinitesimal displacements without changing the length of any bonds, and there are
two infinitesimal mechanisms.
(c) A line of parallel bonds forming a sample-traversing filament under periodic boundary conditions
as depicted in (d). The forces on all sites are zero if all of the bonds
are under equal tension or compression, and there is one \SSS.
(e) A zigzag ladder under periodic boundary conditions with one \SSS.}
\label{fig:selftstress2}
\end{figure}
A system in which there are neither any mechanisms ($M=0$) nor
any states of self-stress ($N_S=0$) is \emph{isostatic}. A
finite isostatic system necessarily satisfies the Maxwell
relation $z=z^N_c$, but a system with $z=z^N_c$ can have any
number of mechanisms provided it is equal to the number of
\SSSsa. The distinction between satisfying the Maxwell rule and
being isostatic is often lost in the literature, and it is
common practice to refer to any system that satisfies Maxwell's
rule as isostatic. In this review, we will keep the
distinction, referring to any free frame satisfying Maxwell's
rule as a Maxwell frame, reserving the term isostatic for those
free Maxwell frames satisfying $M=N_S = 0$. As we shall see in
\sref{ssec:Iso-Per}, the extension of this definition to
periodic frames presents some problems \cite{GuestHut2003}.
Since the term isostatic has become so prevalent, we propose in
that section a definition of this term that is in the spirit of
the definition for free frames and consistent with common usage
for periodic frames.
\subsection{Equilibrium and compatibility matrices\label{ssec:equil-comp}}
In the absence of external forces, the equilibrium force at
each site in a frame is determined by the tensions in the bonds
it shares with other sites. This is true whether or not the
site is in mechanical equilibrium; if the force at a site is
nonzero, the mass at that site will accelerate according to
Newton's laws. If forces at sites arising from bond tensions
are nonzero, they can be balanced by external loads to create
an equilibrium situation in which the total force on each site
is zero. Clearly, in mechanical equilibrium, the external loads
are the negative of the forces at each site arising from bond
tensions.
For central forces, the tension in a bond is parallel to the
bond. Thus its direction is specified by bond orientation, but
its magnitude and sign can vary. Let $\ma{F}$ be a vector in
the $dN$-dimensional space, $V_{\ma{F}}$, of the $d$
components of force at each site on the lattice exerted by
tensions in the bonds that terminate on it, and let $\ma{T}$ be
a vector in the $N_B$-dimensional space, $V_{\ma{T}}$, of the
bond tensions, which are scalars of either sign.
External loads at sites are represented by the $dN$-dimensional
vector $\ma{L}$. In equilibrium when sites do not accelerate
(or move if there is external friction), $\ma{L}=-\ma{F}$.
Since the relation between forces and tensions is linear, there
is a $d N \times N_B$ dimensional matrix $\ma{Q}$ (with $dN$
rows and $N_B$ columns), called the \emph{equilibrium} matrix,
that maps $V_\ma{T}$ to $V_\ma{F}$:
\begin{equation}
\ma{Q} \, \ma{T} = -\ma{F} = \ma{L} ,
\label{eq:QTF}
\end{equation}
where the final relation only applies in static situations.
The null space or kernel of $\ma{Q}$, $\text{ker}(\ma{Q})$ of
dimension $\text{nullity}(\ma{Q})$, is the set of all vectors mapped
to the zero vector. Any vector in the null space of $\ma{Q}$
represents a state of self stress because it corresponds to
tensions on a set of bonds for which the forces on all sites
are zero. Thus $\text{nullity}(\ma{Q})$ is equal to the number of
\SSSs $N_S$. Vectors $\ma{T}$ not mapped into the null space of
$\ma{Q}$ are in the \emph{orthogonal complement},
$\text{OC}(\ma{Q})$, of $\text{ker}(\ma{Q})$. The dimension of
$\text{OC}(\ma{Q})$ is equal to the rank of $\ma{Q}$. The
rank-nullity theorem of linear algebra \cite{Birkhoff-Mac1998}
relates the rank and nullity of a matrix to its column
dimension:
\begin{equation}
\text{rank}(\ma{Q}) + \text{nullity}(\ma{Q}) = \text{rank}(\ma{Q}) + N_S = N_B.
\label{eq:rank-nullityQ}
\end{equation}
Elongations of bonds are determined by the displacements of the
sites to which they are attached. The elongations of individual
bonds are necessarily parallel to bond vectors for central
forces. Let $\ma{U}$ be a vector in the $dN$-dimensional space,
$V_{\ma{U}}$, of site displacements and $\ma{\EE}$ be a vector
in the $N_B$-dimensional space, $V_\ma{\EE}$, of bond
elongations. The $N_B \times dN$-dimensional compatibility
matrix $\ma{C}$ maps $V_\ma{U}$ to $V_\ma{\EE}$:
\begin{equation}
\ma{C} \, \ma{U} = \ma{\EE} .
\label{eq:CUE}
\end{equation}
The null space of $\ma{C}$ is the set of displacements $\ma{U}$
that do not change the length of bonds, i.e., the set of zero
modes of the system; thus, $\text{nullity}(\ma{C})=N_0$. The
rank-nullity theorem applied to $\ma{C}$ yields
\begin{equation}
\text{rank}( \ma{C}) + \text{nullity}(\ma{C}) = \text{rank}(\ma{C})+ N_0 = d N .
\label{eq:rank-nullityC}
\end{equation}
The equilibrium and compatibility matrices are not independent:
they are matrix transposes of each other. To see this, we can
calculate the work done under infinitesimal distortions of the
system in the presence of site forces (and thus necessarily
tensions in the bonds) in two ways: First the work $W_L$ done
by external loads in displacing sites and second the work $W_T$
done by bond tensions in stretching bonds. The two
calculations must yield the same result:
\begin{equation}
W_L = \ma{L}^T \ma{U} = \ma{T}^T \ma{Q}^T \ma{U} =
W_T = \ma{T}^T \ma{E} = \ma{T}^T \ma{C} \ma{U} ,
\label{eq:workQC}
\end{equation}
where the superscript $T$ refers to the transpose of a matrix.
Since this relation is valid for all $\ma{U}$ (even $\ma{U}$ in
the null space of $\ma{C}$) it must be that $\ma{C}= \ma{Q}^T$.
The rank of a matrix is equal to the rank of its transpose,
$\text{rank}(\ma{Q}) = \text{rank}(\ma{C})$, and subtracting
\eref{eq:rank-nullityQ} from \eref{eq:rank-nullityC} yields the
Index theorem of \eref{eq:index1}. The equilibrium and compatibility
matrices have proven useful in many contexts, and we mention
only one in which frames with spring lengths that do not
correspond to the length of the bonds they occupy are
constructed from frames in which they do \cite{YanWya2013}.
To construct $\ma{Q}$ and $\ma{C}$ for a particular frame, it
is necessary to have explicit representations for the vectors
$\ma{U}$, $\ma{E}$, $\ma{T}$, and $\ma{F}$. A frame consists of
$N$ sites labeled $s=1, ..., N$ at equilibrium positions
${\bm R}(s)$ connected by $N_B$ bonds labeled $\beta = 1, ...,
N_B$. Under frame distortions, site positions change to
\begin{equation}
{\bm X}(s) = {\bm R}(s) + {\bm u}(s) ,
\end{equation}
where ${\bm u}(s)$ with components $u_i(s)$, $ i = 1, ..., d$, is
the displacement vector at site $s$. The force at site $s$ is
${\bm f}(s)= (f_1(s) , ... f_d(s))$, and the $dN$-dimensional
displacement and force vectors are, respectively,
$\ma{U}=({\bm u}_1, ...,{\bm u}_N )$ and $\ma{F} = ({\bm f}_1, ..., {\bm f}_N
)$.
Each bond $\beta = [s_\beta,s'_\beta]$ connects a pair of sites
$s_\beta$ and $s'_\beta$, whose separation in the equilibrium
frame is the vector
\begin{equation}
{\bm b}(\beta)\equiv {\bm b}([s_\beta,s'_\beta])= {\bm R}(s'_\beta ) - {\bm R}(s_\beta) \equiv
b_\beta\hat{\bv}_\beta ,
\label{eq:bond-vector}
\end{equation}
from $s_\beta$ to $s'_\beta$, where $b_\beta$ is the length of
bond $\beta$ and $\hat{\bv}_\beta$ is the unit vector along bond
$\beta$. The arrows in \fref{fig:WarrenTruss}(b) show an
arbitrarily chosen choice of directions of vectors
$\hat{\bv}_{\beta}$ for bonds in a simple frame. Other choices will
lead to different equations relating forces to tensions, but not
to different physical tensions in the frame. Let the tension in bond
$\beta$ be $t_\beta$, and associate with that bond a vector
tension,
\begin{equation}
{\bm t}_\beta = t_\beta \hat{\bv}_\beta .
\end{equation}
With this convention, the forces exerted on sites $s_\beta$ and
$s'_\beta$ by bond $[s_\beta,s'_\beta]$ are, respectively,
${\bm t}_\beta$ and $-{\bm t}_\beta$, or, equivalently, the force on a
site $s$ from a bond $\beta$ that it shares with another site
is ${\bm t}_\beta$ if $\hat{\bv}_\beta$ points away from the site and
$-{\bm t}_\beta$ if it points toward the site. With these
definitions, we can construct the compatibility matrix from the
bond elongation relations,
\begin{equation}
e_\beta = \hat{\bv}_\beta\cdot ({\bm u}(s'_\beta) -{\bm u}(s_\beta)) ,
\label{eq:compatibility2}
\end{equation}
and the equilibrium matrix from the site force equations,
\begin{equation}
{\bm f}(s) = \sum_{\beta=1}^{z_s} \text{sign} (\beta) {\bm t}_\beta ,
\label{eq:equil2}
\end{equation}
where $z_s$ is the number of bonds site $s$ shares with its
neighbors and $\text{sign} (\beta)= +1 (-1)$ if the arrow of
bond $\beta$ points away from (toward) site $s$.
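Equations (\ref{eq:compatibility2}) and (\ref{eq:equil2}) translate directly into a few lines of code. The sketch below, our own illustration in which the sites are stored as an $N\times d$ array and each bond as an ordered pair of site indices, assembles $\ma{C}$ row by row; the equilibrium matrix is then simply $\ma{Q}=\ma{C}^T$, and the numbers of zero modes and states of self stress follow from a singular value decomposition.
\begin{verbatim}
import numpy as np

def compatibility_matrix(positions, bonds):
    # Row beta implements e_beta = b_hat . (u(s') - u(s)) for the bond
    # beta = [s, s'], with the bond direction pointing from s to s'.
    n, d = positions.shape
    C = np.zeros((len(bonds), n * d))
    for beta, (s, sp) in enumerate(bonds):
        bhat = positions[sp] - positions[s]
        bhat = bhat / np.linalg.norm(bhat)
        C[beta, d*sp:d*sp+d] = bhat
        C[beta, d*s:d*s+d] = -bhat
    return C                      # the equilibrium matrix is Q = C.T

def mode_counts(C, tol=1e-10):
    # N_0 = dim ker(C) and N_S = dim ker(C^T), both fixed by the rank of C.
    rank = int(np.sum(np.linalg.svd(C, compute_uv=False) > tol))
    n_bonds, dof = C.shape
    return dof - rank, n_bonds - rank
\end{verbatim}
Applied to a free frame, the first of the two returned numbers includes the $f(d)$ rigid-body motions, which must be subtracted to obtain the number of mechanisms.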
It is instructive to carry out explicit calculations of
$\ma{Q}$ and $\ma{C}$ for a simple frame. We consider the
frame shown in \fref{fig:WarrenTruss}(b) with $N=4$, $N_B = 5$
and $2 N - N_B =3$, the number of modes of rigid translation
and rotation. This frame has no floppy modes and no state of
self stress, and it is isostatic. The figure labels sites
($s=0,\ldots,3$), bonds ($\beta=1,\ldots,5$), and bond
directions. The five displacement-elongation equations are
\begin{align}
e_1 & = \hat{\bv}_1\cdot ({\bm u}_1 - {\bm u}_0) \qquad & e_2 = \hat{\bv}_2\cdot ({\bm u}_2 - {\bm u}_1)
\qquad & e_3 = \hat{\bv}_3 \cdot ({\bm u}_2 - {\bm u}_3) \nonumber\\
e_4 & = \hat{\bv}_4 \cdot ({\bm u}_3 - {\bm u}_0 ) \qquad & e_5 = \hat{\bv}_5 \cdot ({\bm u}_3 - {\bm u}_1 ) ,
\qquad &
\label{eq:elongations-1}
\end{align}
and the four vector force equations are
\begin{align}
{\bm f}_0 & = {\bm t}_1 +{\bm t}_4 \qquad & {\bm f}_1 = -{\bm t}_1 + {\bm t}_2 + {\bm t}_5\nonumber \\
{\bm f}_2 & = - {\bm t}_2 - {\bm t}_3 \qquad & {\bm f}_3 = {\bm t}_3 - {\bm t}_4 - {\bm t}_5 .
\label{eq:forces-2}
\end{align}
The $8 \times 5$ compatibility and $5\times 8$ equilibrium
matrices are easily constructed from equations
(\ref{eq:elongations-1}) and (\ref{eq:forces-2}). If, however,
we are interested only in internal deformations of the frame
and not its uniform translations and rotations, we can apply
three constraints to the motion, most easily by pinning site
$0$ so that ${\bm u}_0 = 0$ and placing site $1$ on a horizontal
rail as shown in \fref{fig:WarrenTruss}(a) to fix $u_{1,y} = 0$
so that $N_{\text{free}}=5$. We also allow ${\bm f}_0$ and
$f_{1,x}$ to take on whatever values needed to satisfy the
constraints, so they do not enter into our equations. This
leaves us with a $5 \times 5$ compatibility matrix,
\begin{equation}
\ma{C} =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
-\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{\sqrt{3}}{2} & 0 & 0 \\
0 & 1 & 0 & -1 & 0 \\
0 & 0 & 0 & \tfrac{1}{2} & \tfrac{\sqrt{3}}{2} \\
\tfrac{1}{2} & 0 & 0& -\tfrac{1}{2} & \tfrac{\sqrt{3}}{2}
\end{pmatrix} ,
\end{equation}
mapping $\ma{U} = (u_{1,x}, u_{2,x},u_{2,y},u_{3,x},u_{3,y})$ to
$\ma{E} = (e_1,e_2,e_3,e_4,e_5 )$. The equilibrium matrix,
mapping $\ma{T} = (t_1,t_2,t_3, t_4,t_5)$ to $\ma{L}=-\ma{F} =
-(f_{1,y},f_{2,x},f_{2,y},f_{3,x},f_{3,y})$ constructed from
Eq.~(\ref{eq:forces-2}), is trivially equal to $\ma{C}^T$. Both
$\ma{Q}$ and $\ma{C}$ are square invertible matrices: their
nullspaces are empty, both $N_0=0$ and $M=0$, and the system is
isostatic as required. Thus, the tensions on the bonds are
uniquely determined by the forces on the sites and vice versa:
the frame is \emph{statically determinate}. And the elongations
of the bonds are uniquely determined by the site displacements
and vice versa: the frame is \emph{kinematically determinate}.
Thus, an alternative, and perhaps preferable, definition of an
isostatic frame is that it be both statically and kinematically
determinate. Another way of dealing with the trivial zero modes
is to introduce ``reaction forces" to yield $8 \times 8$
matrices $\ma{Q}$ and $\ma{C}$.
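It is a useful check, sketched here with NumPy, that the $5\times 5$ matrix written above has no null vectors, so that the pinned frame is indeed both kinematically and statically determinate.
\begin{verbatim}
import numpy as np

s3 = np.sqrt(3) / 2
C = np.array([[ 1.0,  0.0, 0.0,  0.0, 0.0],
              [-0.5,  0.5,  s3,  0.0, 0.0],
              [ 0.0,  1.0, 0.0, -1.0, 0.0],
              [ 0.0,  0.0, 0.0,  0.5,  s3],
              [ 0.5,  0.0, 0.0, -0.5,  s3]])
print(np.linalg.matrix_rank(C))   # 5: no zero modes and no states of self stress
# Static determinacy: for given loads L the tensions follow uniquely from
# Q T = L with Q = C.T, i.e. T = np.linalg.solve(C.T, L).
\end{verbatim}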
\subsection{The dynamical matrix}
So far, we have only discussed tensions and stretches of bonds
without specifying any relation among them. In our
``ball-and-spring" frames, each bond $\beta$ is occupied by a
Hooke's law spring whose energy is half its spring constant
$k_b$ times the square of its elongation. Let $\ma{k}$ be the
$N_B \times N_B$ diagonal matrix of spring constants, then the
elastic energy of the lattice is
\begin{equation}
V_{\text{el}} = \frac{1}{2} \ma{E}^T \ma{k} \ma{E} =
\frac{1}{2}\ma{U}^T \ma{K} \ma{U} ,
\label{eq:Vel}
\end{equation}
where
\begin{equation}
\ma{K} = \ma{Q} \ma{k} \ma{Q}^T =\ma{C}^T \ma{k} \ma{C}
\end{equation}
is the $dN \times dN$ \emph{stiffness} matrix. Normal-mode
frequencies depend on mass as well as the stiffness matrix. The
kinetic energy requires the introduction of a mass matrix
$\ma{M}$. We will restrict our attention to frames in which
the mass of all mass points is equal to $m$, in which case
$\ma{M} = m \,\ma{I}$, where $\ma{I}$ is the unit matrix, and
the kinetic energy is
\begin{equation}
E_{\text{kin}}=
\tfrac{1}{2}m \dot{\ma{U}}^T \dot{\ma{U}},
\end{equation}
where $\dot{\ma{U}}$ is the velocity vector. Normal modes are
then eigenvectors of the \emph{dynamical} matrix:
\begin{equation}
\ma{D} =
\tfrac{1}{m} \ma{K} .
\end{equation}
The Lagrangian in the presence of external loads is thus
\begin{equation}
L = \tfrac{1}{2}m\dot{\ma{U}}^T \dot{\ma{U}}-V_{\text{el}}-\ma{U}^T \ma{L} ,
\end{equation}
and the equation of motion is
\begin{equation}
m \ddot{\ma{U}} =-\frac{\partial{V}_\text{el}}{\partial \ma{U}^T} -\ma{L}= - \ma{K} \ma{U} -\ma{L}
=\ma{F} - \ma{L} ,
\end{equation}
which vanishes when the external load $\ma{L}$ is equal to the
force $\ma{F} = -\ma{K}\ma{U} = - \ma{Q} \ma{T}$ exerted by
bond stretching. Note that the equilibrium matrix can be used
to calculate $\ma{F}$ whether or not the system is in static
equilibrium. On the other hand, $\ma{L}= \ma{Q}\ma{T}$
only in equilibrium when there is no acceleration (assuming no
friction forces).
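Given a compatibility matrix, the stiffness and dynamical matrices, and with them the normal-mode frequencies, follow in a few more lines. The sketch below assumes equal masses and a diagonal matrix of spring constants, as in the text.
\begin{verbatim}
import numpy as np

def normal_mode_frequencies(C, spring_constants, mass=1.0):
    # K = C^T k C is the stiffness matrix and D = K / m the dynamical matrix.
    K = C.T @ np.diag(spring_constants) @ C
    evals = np.linalg.eigvalsh(K / mass)
    # Zero eigenvalues are the zero modes; clip tiny negative round-off.
    return np.sqrt(np.clip(evals, 0.0, None))
\end{verbatim}
Counting the vanishing frequencies returned by this routine reproduces the nullity of $\ma{C}$ obtained earlier, since $\ker \ma{K} = \ker \ma{C}$ for positive spring constants.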
\section{The elastic limit \label{sec:elas-lim}}
\subsection{Strain and the elastic energy}
Strain is a measure of macroscopic distortions of an elastic
medium. The macroscopic deformation tensor
$\bm{\lambda}=\bm{I}+ \bm{\eta}$, where $\bm{I}$ is the unit
tensor, determines displacements at boundary sites $s_B$ of
either a finite sample or the periodic box under periodic
boundary conditions:
\begin{equation}
{\bm X}(s_B) = \bm{\lambda}{\bm R} (s_B) , \text{ or }
{\bm u}(s_B) = \bm{\eta}\, {\bm R}(s_B),
\label{eq:deformation1}
\end{equation}
and the macroscopic strain tensor is
\begin{equation}
\pmb{\st}=\tfrac{1}{2}(\bm{\lambda}^T \bm{\lambda} - \bm{I}) =
\tfrac{1}{2}(\bm{\eta}+\bm{\eta}^T +\bm{\eta}^T \bm{\eta} )
\approx \tfrac{1}{2}( \bm{\eta}+\bm{\eta}^T ),
\end{equation}
where the final form is the linearized limit, which is all that
concerns us here.
The elastic energy density associated with the macroscopic
strain is
$$
f_{\text{el}} = \tfrac{1}{2}K_{ijkl} \varepsilon_{ij} \varepsilon_{kl} ,
$$
where $K_{ijkl}$ is the elastic constant tensor and the
Einstein convention on repeated indices is understood. The
elastic strain $\varepsilon_{ij}$ is symmetric and has $a_d=d(d+1)/2$
independent components in $d$ dimensions. It can be expressed
\cite{Kittel1971,AshcroftMer1976} as an $a_d$-dimensional
vector (Voigt notation), which in two dimensions takes the
form,
\begin{equation}
\pmb{\st}_V=(\varepsilon_{xx},\varepsilon_{yy}, \varepsilon_{xy} ).
\end{equation}
The elastic tensor is then an $a_d \times a_d$ matrix, which in
two dimension is
\begin{equation}
\mbb{K} =
\begin{pmatrix}
K_{xxxx} & K_{xxyy} &2 K_{xxxy} \\
K_{xxyy} & K_{yyyy} &2 K_{yyxy} \\
2K_{xxxy} & 2 K_{yyxy} & 4 K_{xyxy}
\end{pmatrix}
\xrightarrow{\text{isotropic}}
\begin{pmatrix}
B + \msh & B-\msh & 0 \\
B-\msh & B+ \msh & 0 \\
0 & 0 & \msh
\end{pmatrix} ,
\label{eq:elas-matrix}
\end{equation}
where the final form is the isotropic limit, where $B=\lambda +
\mu$ is the bulk modulus and $\msh=\mu$ is the shear modulus
with $\lambda$ and $\mu$ the standard Lam\'{e} coefficients
\cite{Landau1986}. The linearized Cauchy stress tensor is
\begin{equation}
\sigma_{ij} = K_{ijkl} \varepsilon_{kl} .
\end{equation}
Mechanical stability requires that all $a_d$ eigenvalues of the
Voigt elastic matrix $\mbb{K}$ be positive. Thus the elastic
energy of an elastically stable system can be expressed as a
sum of squares of $a_d$ independent linear combinations of
strains (the eigenvectors of the elastic matrix) with positive
coefficients (the eigenvalues of the elastic matrix). We shall
see shortly that some or all of the eigenvalues of the elastic
matrix in lattices at or near the Maxwell limit may be zero.
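As a simple numerical illustration of this criterion, sketched with arbitrary parameter values, the isotropic two-dimensional matrix in \eref{eq:elas-matrix} has eigenvalues $2B$, $2\msh$, and $\msh$, so that two of them vanish when the shear modulus does.
\begin{verbatim}
import numpy as np

def voigt_isotropic_2d(B, G):
    # Isotropic limit of the Voigt elastic matrix; B = bulk, G = shear modulus.
    return np.array([[B + G, B - G, 0.0],
                     [B - G, B + G, 0.0],
                     [0.0,   0.0,   G  ]])

print(np.linalg.eigvalsh(voigt_isotropic_2d(B=1.0, G=0.5)))  # G, 2G, 2B: stable
print(np.linalg.eigvalsh(voigt_isotropic_2d(B=1.0, G=0.0)))  # two zero eigenvalues
\end{verbatim}
The second line previews the situation encountered below for Maxwell lattices, where one or more eigenvalues of the elastic matrix vanish and the corresponding macroscopic strains cost no energy.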
\subsection{Elastic limit and States of Self Stress}
Calculations of the elastic tensor require some method of
maintaining strain. The usual picture is that boundary sites
of a finite frame are clamped to conform with
\eref{eq:deformation1}. Since these sites are fixed, their
displacements and the forces associated with them do not enter
the calculation of $\ma{Q}$ and $\ma{C}$. An alternative
approach, which we implement, is to apply periodic boundary
conditions (PBCs) to the frame, which may or may not be
composed of repeated unit cells. In this approach, it is the
boundaries of the periodic cell that satisfy
\eref{eq:deformation1}.
If the positions of all sites, rather than just boundary sites,
displace according to Eq.~(\ref{eq:deformation1}), the elastic
distortion is \emph{affine}. Under such distortions, the
relative displacement of the sites associated with bond $\beta$
is
\begin{equation}
{\bm u}(s'_\beta) - {\bm u}(s_\beta)= \bm{\eta}\,({\bm R}(s'_\beta) -
{\bm R}(s_\beta) ) = \bm{\eta}\, {\bm b}_\beta ,
\end{equation}
and the affine stretch of bond $\beta$ is
\begin{equation}
e_\beta^{\text{aff}} = \hat{\bv}_\beta^T \,\bm{\eta} \, {\bm b}_\beta =
\hat{b}_{\beta,i} \varepsilon_{ij} b_{\beta,j} ,
\label{eq:affine-stretch}
\end{equation}
where we used the fact that $\hat{b}_{\beta,i} b_{\beta,j}$ is
symmetric in $i$ and $j$ to convert $\bm{\eta}$ to $\pmb{\st}$ in
the linearized limit. The affine elastic energy density is then
\begin{equation}
f_{\text{el}}^{\text{aff}} = \frac{1}{2V} \,
\ma{E}_{\text{aff}}^T \,\ma{k} \,\ma{E}_{\text{aff}} ,
\end{equation}
where $V$ is the volume and $\ma{E}_{\text{aff}}$ is the vector
of affine elongations $e_\beta^{\text{aff}}$.
Affine response throughout a sample is the exception rather
than the rule. It is guaranteed to occur in absolutely
homogeneous systems and in lattices with one site per unit
cell, but it can occur in certain other systems with special
relations among elastic constants or special arrangements of
sites in periodic systems with multi-site unit cells (for
example, as we shall see, in the kagome lattice). Generally,
however, the forces on at least some sites under an affine
strain imposed by macroscopic strain at the boundary are
nonzero, and these sites will relax to positions of zero force
(or equivalently until the energy reaches its minimum value),
in which cases their displacements are nonaffine. In
\ref{app:elas-SSS}, we show that the result of this relaxation
is that the elastic energy density becomes
\cite{PellegrinoCal1986,Pellegrino1993,Goodrich2014}
\begin{equation}
f_{\text{el}} = \frac{1}{2V} \ma{E}_{\text{aff},s}^T [(\ma{k}^{-1})_{ss}]^{-1}
\ma{E}_{\text{aff},s}
\xrightarrow{\ma{k} \rightarrow k\,\ma{I}}
\frac{k}{2V} \sum_{\alpha} (\ma{E}_{\text{aff}}\cdot \hat{\ma{t}}_\alpha)^2
\label{eq:elastic-self-stress}
\end{equation}
where $\ma{E}_{\text{aff},s}$ and $(\ma{k}^{-1})_{ss}$ are the
projections of $\ma{E}_{\text{aff}}$ and $\ma{k}^{-1}$ onto
$\text{ker}(\ma{Q})$, and $\hat{\ma{t}}_\alpha$ is the $\alpha$th
orthonormal basis vector of $\text{ker}(\ma{Q})$. Thus, only the
projections of the affine displacement vectors onto states of
self-stress contribute to the elastic energy.
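In computational terms, \eref{eq:elastic-self-stress} amounts to projecting the affine elongations onto an orthonormal basis of $\text{ker}(\ma{Q})$, which is conveniently extracted from the singular value decomposition of $\ma{C}$. The sketch below, our own illustration for the case of a single spring constant $k$, implements the right-hand form of \eref{eq:elastic-self-stress}.
\begin{verbatim}
import numpy as np

def relaxed_energy_density(C, e_aff, k=1.0, volume=1.0, tol=1e-10):
    # Orthonormal basis of ker(Q) = ker(C^T): left singular vectors of C
    # belonging to vanishing singular values.
    U, svals, Vt = np.linalg.svd(C)
    rank = int(np.sum(svals > tol))
    sss_basis = U[:, rank:]            # columns span the states of self stress
    overlaps = sss_basis.T @ e_aff     # projections of the affine elongations
    return 0.5 * k * np.sum(overlaps**2) / volume
\end{verbatim}
If no state of self stress overlaps the affine elongations, the relaxed energy vanishes even though the affine energy of the same strain does not; the difference is carried away by the nonaffine relaxation described above.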
Equation (\ref{eq:elastic-self-stress}) encodes a great deal of
information.
\begin{itemize}
\item First, it shows that lattices cannot be elastically
stable unless they have \SSSs in the presence of
conditions that constrain the macroscopic strain - a
simple reflection of the fact that forces on each site
must be zero once equilibrium is reached in the
presence of imposed strains, which necessarily induce
bond tension.
\item Second, only those states of self-stress with a
nonzero overlap with the affine bond elongations
contribute to the elastic energy. These states
necessarily traverse the sample, and they are
\emph{load-bearing}. The straight filament of
\fref{fig:selftstress2}(c) (wound to a circle -
\fref{fig:selftstress2}(d)), whose bonds all have the
same sign of tension provides an example of a
load-bearing state, whereas the zigzag state of
\fref{fig:selftstress2}(e) and the localized crossed
square of \fref{fig:selftstress1}(d) does not.
\item Third, because it is a sum of squares of linear
combinations of strain, it shows that there must be at
least $a_d =d(d+1)/2$ load-bearing \SSSs to produce an
elastically stable system with an elastic matrix with
$a_d$ positive eigenvalues.
\end{itemize}
\subsection{Isostaticity and periodic boundary conditions\label{ssec:Iso-Per}}
In \sref{ssec-Max-SSS}, isostatic lattices were defined as ones
that are both kinematically ($N_0 = f(d)$) and statically
determinate ($N_S=0$). This definition is unambiguous for
finite free frames. It would seem natural to define an
isostatic frame under PBCs in the same way, but there is a
problem with this definition \cite{GuestHut2003}. Under PBCs,
the shape and size of the frame boundary is fixed; the
compatibility matrix (and by extension, the equilibrium matrix)
applies to displacements and does not apply to changes in the
shape or size of the periodic boundary \cite{Dagois-Heck2012,
GoodrichNag2012,GoodrichNag2014}, which are described by
macroscopic strain. In order for a lattice to be elastically
stable, it must have at least $d(d+1)/2$ \SSSsa. Reference
\cite{GuestHut2003} defines a lattice under PBCs to be
statically determinate if $N_S=d(d+1)/2$ and kinematically
determinate if $N_0 = d$, but it does not propose applying the
term \emph{isostatic} to such systems. We propose calling a
frame under PBCs isostatic if $N_0 =d$ and $N_S$ lies between
$1$ and $d(d+1)/2$. This corresponds more or less to common
usage in the jamming literature. If greater precision is
required, the value of $N_S$ can be specified via the term
$N_S$-isostatic.
The above discussion applies to any frame, whether it is a
lattice with unit cells on a Bravais lattice or not, and in
particular to finite-frame approximations to infinite random
systems such as those encountered in studies of jamming. These
frames can be subjected to PBCs by wrapping them on a torus to
create a single cell in which sites separated by repeat
distances of the torus are identified. We will refer to these
as toroidal PBCs. Since there is only one cell with randomly
placed sites, there is no wavenumber ${\bm q}$ to index the $dN$
vibrational states. Alternatively, the single (large) cell can
be periodically repeated in a Bravais lattice, in which case
there are $dN$ vibrational bands with wave-number-dependent
frequencies. The ${\bm q}=0$ limits of these bands are the
vibrational states under the toroidal PBCs and are generally
the ones of physical interest, though interesting information
about the stability of jammed structures can be obtained from
an examination of the spectrum at ${\bm q} \neq 0$
\cite{Schoenholz2103}.
Lattices under periodic boundary conditions are special. They
consist of $N_c$ periodically repeated unit cells with $n$
sites and $n_b$ bonds, and the Index theorem reads $N_0-N_S = (dn -
n_b)N_c$ . In a periodic Maxwell lattice, $n_b=dn$, and
$N_0=N_S$. Thus if $N_0$ has its minimum value of $d$, $N_S=d$,
and under the definition proposed above, such a lattice would
be isostatic. It is impossible to have $N_0=d$ and for $N_S$ to
have any other value than $d$ in the interval between $1$ and
$d(d+1)/2$ because a change in $N_S$ requires a change in $n$
or $n_b$ and thus of $dn-n_b$, which would lead to a change in
$N_0-N_S$ of order $N_c \gg 1$ rather than of order one. Thus
we can uniquely define an \emph{isostatic periodic lattice}
under PBCs to be one with $N_0=N_S=d$.
Finite frames can be constructed from ones subjected to PBCs by
cutting bonds along the periodic boundaries and ``liberating"
the sites attached to them. Since opposite sides of the
boundary are equivalent, it is only necessary to cut bonds
along half of the boundary to liberate a full free lattice.
Thus, an $N_x \times N_y$ free square lattice is obtained by
cutting $N_x+N_y$ bonds in the lattice under PBCs. The cutting
process thus reduces the number of bonds by of order
$N^{(d-1)/d}$ in a lattice of $N$ sites in $d$ dimensions and
increases $N_0-N_S$ by the same amount. If the periodic
lattice is isostatic, then there are necessarily of order
$N^{(d-1)/d}$ extra zero modes in the finite lattice. If on the
other hand, the lattice under PBCs has of order $N^{(d-1)/d}$
\SSSs that are removed on cutting, there may be no increase in
zero modes at all upon cutting.
\section{Periodic Lattices\label{sec:periodic-lattices}}
Our primary interest is in periodic lattices, and we review
here notation that we will use to describe them.
\subsection{Notation}
A general lattice has $N_c$ unit cells consisting of $n$ sites
and $n_b$ bonds so that the total number of sites and bonds
are, respectively, $N= N_c n$ and $N_B = N_c n_b$. The Bravais
lattice vectors $\bm{R}_\ell$, where $\ell=(l_1, \cdots l_d )$
with each $l_i$ an integer, are linear combinations of the
primitive translation vectors $\bm{a}_j$, $j=1, \cdots d$:
\begin{equation}
\bm{R}_{\ell} = \sum_{j}^d l_j\bm{a}_j .
\end{equation}
The positions of sites and bonds in cell $\ell$ in the
undistorted lattice with a basis are
\begin{eqnarray}
\bm{R}_{\ell, \mu} & = & \bm{R}_\ell + \bm{r}_{\mu}, \qquad \mu = 1, \cdots , n \nonumber\\
\bm{R}_{\ell,\beta} & = & \bm{R}_\ell + \bm{r}_\beta, \qquad \beta = 1, \cdots , n_b ,
\end{eqnarray}
where $\bm{r}_\mu$ and $\bm{r}_\beta$ are, respectively the
positions of the sites and bonds relative to the origin of the
unit cell. The positions of lattice sites in a distorted
lattice are
\begin{equation}
\bm{X}_{\ell,\mu} = \bm{R}_{\ell,\mu} + \bm{u}_{\mu}(\ell) ,
\end{equation}
where $\bm{u}_\mu (\ell)$ with Cartesian components $u_{\mu,i}
(\ell) \equiv u_\sigma (\ell)$ is the displacement vector of site
$(\ell,\mu)$. The components of $\ma{U}$ are thus the $dN$
displacements $u_\sigma(\ell)$. The Cartesian components
$f_\sigma (\ell)$ of the force vector $\bm{f}_\mu (\ell)$ are
the $dN$ components of $\ma{F}$. The $N_B$ bond elongations
$\ee_\beta (\ell)$ and bond tensions $t_\beta (\ell)$ are the
components of $\ma{E}$ and $\ma{T}$, respectively.
Fourier transforms in periodic lattices are defined in the
usual way in terms of wavenumbers ${\bm q}$ in the first Brillouin
zone:
\begin{align}
\bm{u}_\mu (\ell) & = \frac{1}{N_c} \sum_{\bm{q}} e^{i \bm{q}\cdot (\bm{R}_\ell + \bm{r}_\mu)}
\bm{u}_\mu (\bm{q}), \qquad
& \bm{u}_\mu (\bm{q}) & = \sum_{\ell} e^{- i \bm{q}\cdot (\bm{R}_\ell + \bm{r}_\mu)}
\bm{u}_\mu (\ell ) \\
t_\beta (\ell) &= \frac{1}{N_c} \sum_{\bm{q}} e^{i \bm{q}\cdot (\bm{R}_\ell + \bm{r}_\beta)}
t_\beta (\bm{q}) \qquad
& t_\beta (\bm{q}) &=\sum_{\ell} e^{-i \bm{q}\cdot (\bm{R}_\ell + \bm{r}_\beta)}
t_\beta (\ell) ,
\end{align}
and similarly for $\bm{f}_\mu$, $\ee_\beta$ and other site and
bond variables. These quantities can also be defined without
the site and bond basis vectors, $\bm{r}_\mu$ and
$\bm{r}_\beta$ in the exponentials, and we will usually leave
them out. They will, however, be of use in our discussion of
topological states in \sref{sec:topological}. The components of
the equilibrium matrix $\ma{Q}$ in periodic lattices are
${\mathbf Q}_{\sigma,\beta}(\ell, \ell')$. Its Fourier transform is
\begin{equation}
{\mathbf Q}_{\sigma \beta} (\bm{q}) = \sum_{\ell} e^{-i \bm{q}\cdot (\bm{R}_{\ell,\sigma}-\bm{R}_{0,\beta})}
{\mathbf Q}_{\sigma\beta} (\ell,0) .
\end{equation}
Again, the basis vectors in the exponents are not required. The
compatibility matrix is ${\mathbf C} ({\bm q}) = {\mathbf Q}^\dag ( {\bm q})$.
With these definitions, we can reexpress equations
(\ref{eq:QTF}) and (\ref{eq:CUE}) as
\begin{equation}
{\mathbf Q}(\bm{q}) \,\mathbf{t}\, (\bm{q}) = -\mathbf{f}(\bm{q}) \qquad
{\mathbf C}\, (\bm{q})\, \mathbf{u} (\bm{q}) = \mathbf{e}({\bm q}) ,
\end{equation}
where the ``math boldface" vectors are defined as
$\mathbf{t}({\bm q})=(t_1 ({\bm q}), ...,t_{n_b}({\bm q}))$ and
$\mathbf{u}({\bm q}) = ({\bm u}_1 ({\bm q}), ..., {\bm u}_n ({\bm q}))$ and
similarly for $\mathbf{e}({\bm q})$ and $\mathbf{f} ({\bm q})$, and
$\mathbf{Q}({\bm q})$ is the $dn \times n_b$ matrix with components
$\bm{Q}_{\sigma \beta}({\bm q})$. There is one equation for each of
the $N_c$ values of $\bm{q}$, giving us as many independent
equations as \eref{eq:QTF} and \eref{eq:CUE}. The Index theorem applies
to these equations for each ${\bm q}$:
\begin{equation}
n_0({\bm q}) - n_s({\bm q}) = dn - n_b ,
\end{equation}
where $n_0({\bm q}) = \dim \ker {\mathbf C}({\bm q})$ is the number of zero modes
and $n_s({\bm q}) = \dim \ker {\mathbf Q}({\bm q})$ is the number of states of self
stress at wavevector ${\bm q}$. Of course, $N_0 = \sum_{\bm q}
n_0({\bm q})$ and $N_S = \sum_{{\bm q}} n_s({\bm q})$.
In periodic Maxwell lattices, $dn = n_b$, and there are exactly
as many zero modes as states of self stress at each ${\bm q}$.
Under PBCs, there are always $d$ zero modes that arise from
translational invariance, and these are necessarily at ${\bm q}=0$,
implying that there are at least $d$ \SSSs at ${\bm q}=0$. There
may be more, but each additional \SSS will require an
additional zero mode, which is a ${\bm q}=0$ mechanism. At nonzero
${\bm q}$, there is no general reason for zero modes to exist, but
if they do, they are necessarily accompanied by \SSSsa. We will
see that this is a common theme in our study of specific
lattices in \sref{sec:examples}: removing states of self stress
eliminates zero modes and ``gaps" the phonon spectrum.
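A concrete illustration of this pairing is provided by the square lattice analyzed in detail in \sref{ssec:square-lattice}, which has one site and two nearest-neighbor bonds per unit cell, so that $dn=n_b=2$. The explicit form of ${\mathbf C}({\bm q})$ used in the sketch below is our own evaluation of the conventions of this section, up to overall phase factors that do not affect the counting.
\begin{verbatim}
import numpy as np

def C_q_square(qx, qy, a=1.0):
    # e_x(q) = (exp(i qx a) - 1) u_x(q),  e_y(q) = (exp(i qy a) - 1) u_y(q).
    return np.diag([np.exp(1j * qx * a) - 1.0,
                    np.exp(1j * qy * a) - 1.0])

def n_zero_modes(qx, qy, tol=1e-12):
    svals = np.linalg.svd(C_q_square(qx, qy), compute_uv=False)
    return int(np.sum(svals < tol))    # equals n_s(q) as well, since dn = n_b

print(n_zero_modes(0.0, 0.0))   # 2: the trivial translations (and two SSSs)
print(n_zero_modes(0.0, 1.3))   # 1: a zero mode anywhere on the line qx = 0
print(n_zero_modes(0.7, 1.3))   # 0: generic wavevectors have none
\end{verbatim}
The lines of zero modes along $q_x=0$ and $q_y=0$ are accompanied, at the same wavevectors, by the states of self stress carried by the straight sample-traversing filaments of the lattice, which is precisely the pairing demanded by the index count at each ${\bm q}$.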
Following \eref{eq:Vel}, the potential energy of a
periodic harmonic lattice is
\begin{equation}
V_{\text{el}}= \frac{1}{2N_c} \sum_{\bm{q}} \mathbf{e}^\dag (\bm{q})\, \mathbf{k}
\, \mathbf{e}(\bm{q}) =\frac{1}{2N_c}\sum_{\bm{q}}\mathbf{u}^\dag (
\bm{q})
\mathbf{K}({\bm q}) \mathbf{u}(\bm{q}) ,
\end{equation}
where $\mathbf{k}$ is the $n_b \times n_b$ diagonal matrix of
spring constants, and
\begin{equation}
\mathbf{K} ({\bm q}) = \mathbf{Q}(\bm{q})\, \mathbf{k}\, \mathbf{Q}^\dag(\bm{q}) \equiv
m \mathbf{D} ({\bm q})
\label{eq:stiffness2}
\end{equation}
is the dynamical matrix. In periodic systems, nearest-neighbor
($NN$), next-nearest neighbor ($NNN$), and further-neighbor
bonds are well defined, and bond vectors can be expressed as
the direct sum of $NN$ and $NNN$ components, e.g. $\mathbf{\ee}
= \mathbf{\ee}_{NN}\oplus\mathbf{\ee}_{NNN}$, and elastic
constant matrix and dynamical matrices can be decomposed into
$NN$ and $NNN$:
\begin{equation}
\mathbf{D} ({\bm q}) = \mathbf{D}_{NN} ({\bm q})+ \mathbf{D}_{NNN} ({\bm q}) .
\end{equation}
\subsection{The elastic limit}
\subsubsection{The elastic energy\label{ssec:elastic2}}
Under affine strain, the elongations of equivalent bonds in
different unit cells are identical, and we can describe affine
strain in terms of the $n_b$-dimensional vector
$\mathbf{e}_{\text{aff}}$, whose projection onto the ${\bm q}=0$ \SSSs
we denote by $\mathbf{e}_{\text{aff},s}$. As a result, the elastic energy
depends only on this projection of the affine strain onto the
${\bm q}=0$ \SSSsa. Following \eref{eq:elastic-self-stress} and
\ref{app:elas-SSS}, the elastic energy density is thus
\begin{equation}
f_{\text{el}} = \frac{1}{2 v_c} \mathbf{e}_{\text{aff},s}^T (\mathbf{k}_{ss}^{-1})^{-1}
\mathbf{e}_{\text{aff},s} \rightarrow \frac{1}{2v_c}k \sum_\alpha
(\mathbf{e}_{\text{aff}}\cdot \hat{\mathbf{t}}_\alpha )^2 ,
\label{eq:elastic-self-stressP}
\end{equation}
where $v_c = V/N_c$ is the volume of a unit cell and
$\mathbf{k}_{ss}^{-1}$ is the projection of $\mathbf{k}^{-1}$ onto
the null space of $\mathbf{Q}({\bm q}=0)$. The final form, in which
$\hat{\mathbf{t}}_{\alpha}$ ($\alpha = 1, ..., n_s(0)$) are basis
vectors for the null space of $\mathbf{Q}({\bm q}=0)$, applies when
there is a single spring constant $k$.
Equation (\ref{eq:elastic-self-stressP}) constrains the elastic
energy of \emph{Maxwell} lattices, which depends on the number,
$m_0$, of ${\bm q}=0$ mechanisms and on the overlap of ${\bm q}=0$ SSSs
with the ${\bm q}=0$ affine bond elongation
$\mathbf{e}_{\text{aff}}$. Consider periodic Maxwell lattices,
for which $N_0 = N_S$.
\begin{enumerate}
\item $m_0=0$: In this case, there are exactly $d$ zero
modes and exactly $d$ SSSs. There are now two
possibilities:
\begin{enumerate}
\item All $d$ \SSSs have a nonzero overlap with the
affine bond elongations, and the elastic matrix
\eref{eq:elas-matrix} then has $d$ positive and
$d(d+1)/2- d = d(d-1)/2$ zero eigenvalues,
which correspond to zero-energy elastic
deformations, now referred to as Guest modes
\cite{GuestHut2003}. This case corresponds to
what we call an isostatic periodic lattice. As
we shall see, this is the situation for the
square, twisted kagome, and topological kagome
lattices [\sref{sec:top-lattice}] in which
there are two positive eigenvalues and one
zero-energy elastic mode.
\item Fewer than $d$ \SSSs have a nonzero overlap
with affine bond elongations, and as a result
there are fewer than $d$ positive eigenvalues
to the elastic matrix and more than $d(d-1)/2$
zero-energy elastic distortions. The zigzagged
square lattice to be discussed in
\sref{sec:other-lattices} provides an example
of this behavior. Finite periodic approximants
to infinite-unit cell systems, such as packed
spheres at the jamming transition
\cite{GoodrichNag2012,GoodrichNag2014} and
randomized quasi-crystalline Penrose tilings
[\sref{sec:other-lattices}]
\cite{StenullLub2014} exhibit smaller and
smaller overlap as the order of the approximant
increases, leading to shear moduli that vanish
as $1/n$ as $n\rightarrow \infty$.
\end{enumerate}
\item $m_0 >0$, and there are $d+m_0$ \SSSsa, which may or
may not have an overlap with affine bond stretches. If
more than $d$ overlap, the additional \SSSs stabilize
the elastic response relative to that of the lattice with
$m_0= 0$, and we are presented with the curious
situation in which additional zero modes increase
elastic rigidity. The usual cause of this effect is the
appearance of sample-traversing straight filaments that
support macroscopic stress, but they also, as we have
seen, introduce additional infinitesimal zero modes.
The untwisted kagome lattice, with three ${\bm q}=0$ states
of self stress produced by parallel filaments along
three independent directions, is elastically stable
with all eigenvalues of the elastic matrix positive. In
spite of extra mechanisms it is possible for only some
or even none of the \SSSs to overlap with affine bond
elongations, in which case the elastic energy can even
fall to zero. The latter situation occurs in
unrandomized periodic approximates to Penrose tilings
\cite{StenullLub2014} in which there are of order
$\sqrt{n}$ zero modes and states of self-stress but in
which all elastic moduli vanish
[\sref{sec:other-lattices}].
\end{enumerate}
\subsubsection{Stiffness matrix of $2d$ periodic lattices with Guest modes}
Two-dimensional periodic lattices with one or two ${\bm q}=0$ \SSSs
and two ${\bm q}=0$ zero modes have two and one Guest modes,
respectively. Fully gapped isostatic lattices have two such
\SSSs, but so do lattices that have additional zero modes at
non-zero ${\bm q}$. When there is only one Guest mode, the elastic
matrix of \eref{eq:elas-matrix}, which acts on the strain
vector $\pmb{\st}_V$, has one zero eigenvalue with normalized
eigenvector ${\bm v}_0=(v_{01},v_{02},v_{03})$ (i.e., the
strain of the Guest mode with amplitude $U_G$ is $U_G {\bm v}_0$)
and two positive eigenvalues, $K_1$ and $K_2$, with respective
associated eigenvectors ${\bm v}_1$, and ${\bm v}_2$ so that
\begin{equation}
\mbb{K}_{ij}= K_1 v_{1,i}v_{1,j} + K_2 v_{2,i}v_{2,j}
\label{eq:Guest-K}
\end{equation}
The long-wavelength stiffness matrix $\ma{K}$ is determined by
$\mbb{K}$, and as \ref{App:Guest} shows, its determinant
depends only on $K_1$, $K_2$, and the Guest-mode eigenvector
${\bm v}_0$:
\begin{equation}
\det {\mathbf K}({\bm q})=\frac{1}{4} K_1 K_2 \,(v_{02} q_x^2 - 2 v_{03} q_x q_y
+v_{01} q_y^2 )^2.
\label{eq:GuestK}
\end{equation}
Since ${\mathbf K} = k {\mathbf Q} \cdot {\mathbf Q}^\dag$, this implies that
\begin{equation}
\det {\mathbf C} = (\det {\mathbf Q})^* = (c/2) \sqrt{K_1 K_2/k}\,\, (v_{02} q_x^2 - 2 v_{03} q_x q_y
+v_{01} q_y^2 ) ,
\label{Eq:detCv}
\end{equation}
where $c$ is some unit amplitude complex number. This simple
form implies that in the $q\rightarrow 0$ limit the zeroes of
$\det\bm C({\bm q})$, which occur at \cite{square}
\begin{equation}
q_y = \frac{q_x}{v_{01}}\left(v_{03} \pm \sqrt{v_{03}^2
-v_{01}v_{02}} \right) ,
\end{equation}
depend only on ${\bm v}_0$ and not on the elastic moduli $K_1$ and
$K_2$. The negative of the quantity under the square root is
proportional to the determinant of the strain of the Guest
mode:
\begin{equation}
\det \pmb{\st}_G = \varepsilon_{xx}\varepsilon_{yy}-\varepsilon_{xy}^2 = U_G^2 (v_{01}v_{02}-v_{03}^2) .
\end{equation}
Thus, to order $q^2$, there is a zero mode with real values of
$q_x$ and $q_y$ if $\det \pmb{\st}_G <0$ and complex values when
$\det \pmb{\st}_G >0$. The complex values of ${\bm q}$ correspond to
decaying or growing surface modes. The zeros at real values of
${\bm q}$ can either become complex when higher-order terms in the
$q$-expansion are included, in which case they again correspond
to surface modes albeit with an inverse decay length quadratic
rather than linear in the surface wavenumber, or they can
remain real. In the latter case, they can occur at specific
values of ${\bm q}$, in which case they are isolated. These
points, which have a topologically protected winding number, are
called Weyl points, and they have received considerable
attention recently in relation to Dirac semimetals and
topological insulators
\cite{km05b,Aji2012,WeiChao2012,WeiWang2012,AjiHe2013,ZhuAji2013,PhillipsAji2014,SosenkoWei2014,WeiChao2014}
and in photonic materials
\cite{LuJoan2012,LuJoan2013,LuJoan2014}. The distorted square
lattice of \fref{fig:square-kagome}(b) has isolated Weyl
points \cite{square}. It is also possible to have Weyl lines
rather than isolated points. These occur in distorted versions
of the $3d$ pyrochlore lattice \cite{StenullLub2014b} and
gyroid photonic crystals \cite{LuJoan2013}.
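As a simple illustration of \eref{Eq:detCv} (a sketch, anticipating the $NN$ square lattice of \sref{ssec:square-lattice}), consider a lattice whose single Guest mode is a pure shear, ${\bm v}_0 = (0,0,1)$. Then $\det \pmb{\st}_G = -U_G^2 < 0$, and the quadratic form in \eref{Eq:detCv} reduces to
\begin{equation}
v_{02} q_x^2 - 2 v_{03} q_x q_y + v_{01} q_y^2 = -2 q_x q_y ,
\end{equation}
which vanishes along the lines $q_x = 0$ and $q_y = 0$. These are precisely the lines of bulk zero modes of the $NN$ square lattice running from $\Gamma$ to $M$ and $M'$ [\fref{fig:NNsq-spectrum}].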
\section{Examples of periodic lattices\label{sec:examples}}
In this section, we will study three simple lattices: the
square, kagome, and twisted kagome lattices depicted in
\fref{fig:square-kagome}. They provide specific examples of
the phenomena discussed in the preceding section. Sections
\ref{ssec:square-lattice} and \ref{ssec:kagome} rely heavily on
reference \cite{Souslov2009} and section
\ref{ssec:twisted-kagome} on reference \cite{SunLub2012}.
\subsection{Square Lattice\label{ssec:square-lattice}}
\subsubsection{The nearest-neighbor lattice}
The periodic square lattice (\fref{fig:square1}) with only
nearest neighbor bonds, along ${\bm b}_1 = a(1,0)$ and
${\bm b}_2=a(0,1)$, and one site and two bonds per unit cell is the
simplest example of a periodic Maxwell lattice. Many of its
properties follow without calculation from the observations of
the previous sections. It consists of straight filaments in
orthogonal directions, each of which develops an \SSS when
placed under periodic boundary conditions. Thus an $N_x\times
N_y$ lattice has $N_S = N_x+N_y$ states of self stress and
\begin{equation}
N_0 = N_x + N_y
\label{eq:square-N0}
\end{equation}
zero modes, which, apart from the trivial translation modes,
are infinitesimal floppy modes in which any row or column is
rigidly displaced as shown in \fref{fig:square1}(c). These
rigid displacements form a basis for the null space of
$\ma{C}$, and their Fourier transforms do as well. If
filaments parallel to the $x$-axis are rigidly displaced, then
$q_x=0$, and the Fourier transform of a set of displaced
filaments parallel to the $x$-axis are indexed by a wavevector
${\bm q}_1=(0,q_y)$. Similarly rigidly displaced filaments parallel
to the $y$-axis are indexed by ${\bm q}_2=(q_x,0)$. There are $N_x$
independent values of $q_x$ and $N_y$ of $q_y$. Each wavevector
represents a zero mode, and since there are $N_x$ independent
values of $q_x$ and $N_y$ independent values of $q_y$, these
account for all of the $N_x+N_y$ zero modes, linear
combinations of which account for rigid translations and
rotations. Thus, the bulk phonon spectrum for the periodic
square Maxwell lattice has two lines of zero modes running from
the center $\Gamma$ of the Brillouin Zone to the midpoint $M$
and $M'$ at the zone boundaries as shown in
\fref{fig:NNsq-spectrum}.
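This count is consistent with the Index theorem under PBCs. As a quick check, with one site and two bonds per unit cell,
\begin{equation}
N_0 - N_S = d N - N_B = 2 N_x N_y - 2 N_x N_y = 0 ,
\end{equation}
so the $N_x + N_y$ filament states of self stress are necessarily balanced by an equal number of zero modes.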
\begin{figure}
\centering
\includegraphics{figure-6.pdf}
\caption{(a) An $N_x=5$ by $N_y=5$ square lattice with $NN$ (full lines) and $NNN$
(dashed lines) bonds showing bond directions for the calculation of self stress. The gray bonds
at the right and upper boundaries are the $N_x+N_y = 10$ $NN$ and $2(N_x+N_y-1)$
$NNN$ bonds that must be cut from an $N_x \times N_y$ lattice under periodic boundary
conditions to produce the $N_x \times N_y$ lattice under free boundary conditions.
(b) A unit cell showing its two Bravais lattice vectors ${\bm a}_1$ and ${\bm a}_2$ and
its two $NN$ ($1$ and $2$) and two $NNN$ ($3$ and $4$) bonds. (c) A finite mechanism
of the finite square lattice with $NN$ bonds only. This is also an infinitesimal mechanism in the
periodic lattice.}
\label{fig:square1}
\end{figure}
\begin{figure}
\centering
\includegraphics{figure-7.pdf}
\caption{(a) Density plot of the lowest-frequency mode of the $NN$ square
lattice showing lines of zero modes running from the Brillouin zone center
at $\Gamma$ to the midpoints $M$ and $M'$ of the zone edge. (b) $3d$ plot of the single
mode $\omega_x({\bm q})$, which is independent of $q_y$ and equal to zero for $q_x=0$.
\emph{After reference \cite{Souslov2009}}}
\label{fig:NNsq-spectrum}
\end{figure}
An $N_x\times N_y$ lattice with free boundary conditions can be
created by cutting and removing $N_x+N_y$ bonds from an
$N_x\times N_y$ lattice under PBCs. Thus, the cut lattice has
only $2N_x N_y - (N_x+N_y)$ bonds [\fref{fig:square1}]. In the
cutting process, however, all $N_x+ N_y$ \SSSs are lost, and
the zero mode count of the free lattice is the same as that of
the periodic lattice. To infinitesimal order in the
displacement, the modes themselves are identical in the two
cases. In the free lattice, however, the modes extend to finite (nonlinear) mechanisms as
shown in \fref{fig:square1}(c). The bulk zero modes, which are
seen under periodic boundary conditions, exhaust the Index theorem
count. There are no additional zero modes at the surface of the
free lattice. It is clear that any distortion of one of the
surfaces parallel to the $x$- or $y$-axis of the free lattice
will be transmitted across the entire sample by the rigid
filaments, which support the \SSS under periodic boundary
conditions. So, zero modes are not localized near the surface;
surface distortions have infinite penetration depth and thus do
not constitute surface modes.
There are exactly two \SSSs at ${\bm q}=0$, and they clearly
overlap affine bond elongations, which are equal to $
a\varepsilon_{xx}$ for bonds parallel to the $x$-axis and to
$a\varepsilon_{yy}$ for bonds parallel to the $y$ axis. Thus the
elastic energy density is simply
\begin{equation}
f_{\text{el}}=\tfrac{1}{2}k\, (\varepsilon_{xx}^2 + \varepsilon_{yy}^2) .
\end{equation}
There are two independent lattice distortions, $\varepsilon_{xx}$ and
$\varepsilon_{yy}$, that cost energy and one, $\varepsilon_{xy}$, that does not,
in agreement with the analysis of \sref{ssec:elastic2}.
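The same result follows from the general formula \eref{eq:elastic-self-stressP} (a quick check, taking $v_c = a^2$ for the square unit cell): the two ${\bm q}=0$ states of self stress carry unit tension on the horizontal and on the vertical bonds, respectively, and the corresponding affine elongations are $a\varepsilon_{xx}$ and $a\varepsilon_{yy}$, so that
\begin{equation}
f_{\text{el}} = \frac{k}{2 a^2}\left[(a\varepsilon_{xx})^2 + (a\varepsilon_{yy})^2\right]
= \tfrac{1}{2} k \,(\varepsilon_{xx}^2 + \varepsilon_{yy}^2) .
\end{equation}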
\subsubsection{The next-nearest-neighbor lattice}
Introducing $NNN$ bonds, ${\bm b}_3 = a(1,-1)$ and ${\bm b}_4
=a(1,1)$, increases the bond number without changing the number
of sites. If $NNN$ bonds are added one at a time, initially no
additional \SSSs are created, and each additional bond
decreases the zero-mode count by one. As additional bonds are
added, eventually additional states of self stress are created,
for example in isolated cells with two $NNN$ bonds in
configurations like that of \fref{fig:selftstress1}(d).
Consider for simplicity the case with $N_x=N_y$. If a filament
of equivalent contiguous $NNN$ bonds (pointing either up or
down) traverses the sample, a new \SSS along that line is
created. Thus, if there are no other $NNN$ bonds, the change in
bond number and number of \SSSs relative to those in the state
with no $NNN$ bonds are, respectively, $\Delta N_B = N_x$ and
$\Delta N_S = 1$, leading to a decrease in the number of zero
modes of $\Delta N_0= -N_x +1$ for a total of $N_0 = N_x +1$.
If each unit cell has an upward pointing $NNN$ bond and not a
downward pointing one, $\Delta N_B = N_x^2$. In addition, there
is one \SSS for every ${\bm q}$ except ${\bm q}=0$ for which there are
three for a total of $N_S = N_x^2+2$ leading to $N_0=2$, i.e.,
leaving only the required two zero modes of uniform
translation. Each added down pointing bond increases $N_B$ and
$N_S$ by one leaving $N_0$ fixed at two. Thus, if all $NNN$
bonds are present, there are no zero modes beyond the two at
${\bm q}=0$, and the spectrum is gapped everywhere except at
${\bm q}=0$ as shown in \fref{fig:k'>0sqdis}(a). The gap at points
$M$ provides a characteristic frequency,
\begin{equation}
\omega^* = 2 \sqrt{\frac{k'}{m}} ,
\label{eq:omega*}
\end{equation}
and a comparison of the $k'=0$ linear dispersion with the
$k'\neq 0$ dispersion along the zone edge near $M$
[\fref{fig:k'>0sqdis}(b)] yields a characteristic length
\begin{equation}
\ell^* = \frac{a}{2} \sqrt{\frac{k}{k'}} ,
\label{eq:l*}
\end{equation}
which diverges as $k' \rightarrow 0$. The approach of
$\omega^*$ and $(\ell^*)^{-1}$ to zero as the Maxwell limit,
$k'\rightarrow 0$, is similar to the behavior of these
quantities near jamming where $\omega^* \sim (\Delta z)$ and
$l^* \sim (\Delta z)^{-1}$ \cite{SilbertNag2005}. This result
is not altogether surprising, given that weakening the spring
constants of $NNN$ bonds to zero yields the Maxwell limit in
which there is no characteristic frequency or inverse length.
An effective medium analysis \cite{MaoLub2010} of the case in
which $NNN$ bonds are randomly added with probability $p =
(z-z_c)/4$ leads to exactly the same results as jamming.
\begin{figure}
\centering
\includegraphics{figure-8.pdf}
\caption{(a) Comparison of the square-lattice phonon frequencies along symmetry directions in
the BZ with $k'=0$ (dashed lines) and $k'>0$ (full lines). The blue curves depict
$\omega_1(k',q)$, the lower, and the red curves $\omega_2(k',q)$, the higher
of the two modes. The single blue dashed line
from $R$ to $\Gamma$ represents the curves $\omega_1(0,q)$, $\omega_1(k',q)$,
and $\omega_2(0,q)$, all of which are the same. (b) Frequencies for $k'=0$ and $k'>0$
along the BZ edge from $R$ to $M$ and back to $R$. When $k'=0$, the frequency grows linearly with $q$ away from
$M$. When $k'>0$, there is a gap defining the characteristic
frequency $\omega^* \sim \sqrt{k'}$ and a length scale
$l^* \sim 1/\omega^*$. }
\label{fig:k'>0sqdis}
\end{figure}
\subsubsection{Equilibrium and dynamical matrices}
We now investigate how these results follow from explicit
calculations of the equilibrium and compatibility matrices. We
set ${\bm a}_1 = a(1,0)$ and ${\bm a}_2=a(0,1)$ and designate the $NN$
bonds $1$ and $2$ to be to the right and above each lattice
site, respectively, and $NNN$ bonds $3$ and $4$ to be along
${\bm a}_1-{\bm a}_2$ and ${\bm a}_1+{\bm a}_2$, respectively, as shown in
\fref{fig:square1}(b). Following the rules outlined in section
\ref{ssec:equil-comp}, the force at site $\ell$ is ${\bm f}({\bm R}_\ell)
= {\bm t}_1({\bm R}_\ell) -{\bm t}_1({\bm R}_\ell -{\bm a}_1) + {\bm t}_2({\bm R}_\ell)-
{\bm t}_2({\bm R}_\ell - {\bm a}_2)$, and
\begin{equation}
{\mathbf Q}_{NN} ({\bm q})= -
\begin{pmatrix}
1-e^{-iq_x a} & 0 \\
0 & 1-e^{-iq_y a}
\end{pmatrix} .
\end{equation}
Thus if $q_x = 0$ and $q_y \neq 0$, the one-dimensional null
space of $\mathbf{Q}({\bm q})$ is spanned by the unit vector
$\hat{{\bm t}}_x(q_y)= (1,0)$; and if $q_y=0$ and $q_x \neq 0$, it
is spanned by the vector $\hat{{\bm t}}_y(q_x) = (0,1)$. In other
words, for each value of $q_y \neq 0$, there is a \SSS with
tensions only in bonds parallel to the
$x$-axis, tensions that have the same value for every bond in a given
filament, and similarly for each $q_x\neq 0$. When both $q_x$ and
$q_y$ are zero, there are two \SSSs. Thus, there is one \SSS
for each value of $q_x$ and one for each value of $q_y$ for a
total of $N_S=N_x+ N_y$ \SSSs. The null space of
$\bm C({\bm q})={\mathbf Q}_{NN}^\dag ({\bm q})$, which consists of the set of
zero modes, is similar, with one zero mode per $q_x$ and $q_y$
for a total of $N_0 = N_S$ zero modes, as required by the Index
theorem. It consists of rigid displacements of individual rods
as already discussed.
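The per-wavevector counting can be read off from $\det {\mathbf Q}_{NN}({\bm q}) = (1 - e^{-i q_x a})(1 - e^{-i q_y a})$ (a simple bookkeeping exercise):
\begin{equation}
n_0({\bm q}) = n_s({\bm q}) =
\begin{cases}
2 , & {\bm q} = 0 , \\
1 , & q_x = 0 \text{ or } q_y = 0 \text{ (but not both)} , \\
0 , & \text{otherwise} .
\end{cases}
\end{equation}
Summing over the $(N_x - 1) + (N_y - 1)$ nonzero wavevectors on the two zero-mode lines and adding the two modes at ${\bm q}=0$ reproduces $N_0 = N_S = N_x + N_y$.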
The force at site $\ell$ arising from $NNN$ bonds is
${\bm f}^{NNN}({\bm R}_\ell)= {\bm t}_3({\bm R}_\ell)-{\bm t}_3({\bm R}_\ell - {\bm b}_3)
+{\bm t}_4({\bm R}_\ell) - {\bm t}_4({\bm R}_\ell - {\bm b}_4)$, and the $NNN$
equilibrium matrix is
\begin{equation}
{\mathbf Q}_{NNN}({\bm q}) = -\frac{1}{\sqrt{2}}
\begin{pmatrix}
1-e^{-i(q_x - q_y)a} & 1-e^{-i(q_x + q_y)a} \\
-1+e^{-i(q_x - q_y)a} & 1-e^{-i(q_x + q_y)a}
\end{pmatrix} .
\end{equation}
When both $NN$ and $NNN$ bonds are present, there are four
bonds per unit cell, and the full equilibrium matrix is the $
2 \times 4$ matrix
\begin{equation}
{\mathbf Q} ({\bm q})=
\begin{pmatrix}
{\mathbf Q}_{NN}({\bm q}) & {\mathbf Q}_{NNN}({\bm q})
\end{pmatrix} .
\end{equation}
At ${\bm q}=0$, all entries in ${\mathbf Q}$ are zero, and every ${\bm q}=0$ vector
of bond tensions lies in its null space. Thus, the elastic energy is
simply the expected affine result,
\begin{subequations}
\begin{align}
f & = \tfrac{1}{2 v_c}\sum_{\alpha=1}^4 k_\alpha (\hat{\bv}_{\alpha, i} \varepsilon_{ij} {\bm b}_{\alpha,j})^2 \\
& = \tfrac{1}{2} k (\varepsilon_{xx}^2+ \varepsilon_{yy}^2)
+ \tfrac{1}{4} k' [(\varepsilon_{xx}+\varepsilon_{yy} + 2 \varepsilon_{xy})^2+
(\varepsilon_{xx}+\varepsilon_{yy} - 2 \varepsilon_{xy})^2] \\
& = \frac{1}{2}K_{11} (\varepsilon_{xx}^2 + \varepsilon_{yy}^2) + K_{12} \varepsilon_{xx}\varepsilon_{yy}
+ 2 K_{44} \varepsilon_{xy}^2 ,
\end{align}
\end{subequations}
where $K_{11} = k+k'$, and $K_{12} =K_{44} = k'$. As expected,
shear moduli vanish when $k'=0$.
The $NN$ and $NNN$ dynamical matrices are
\begin{subequations}
\begin{eqnarray}
{\mathbf D}^{NN} ({\bm q}) & = & 4 \frac{k}{m}
\begin{pmatrix}
\sin^2 (q_x a/2) & 0 \\
0 & \sin^2 (q_y a/2)
\end{pmatrix} ,\\
{\mathbf D}^{NNN}({\bm q})& =& 2 \frac{k'}{m}
\begin{pmatrix}
1- \cos q_x a \cos q_y a & \sin q_x a \sin q_y a \\
\sin q_x a \sin q_y a & 1- \cos q_x a \cos q_y a
\end{pmatrix} .
\end{eqnarray}
\end{subequations}
The spectrum arising from the sum of these dynamical matrices
is shown in \fref{fig:k'>0sqdis}. When $k'=0$, only ${\mathbf D}^{NN}$
contributes, and there are two independent one-dimensional
modes with
\begin{equation}
\omega_{x,y} = 2\omega_0|\sin (q_{x,y}a/2)| ,
\end{equation}
where $\omega_0 = \sqrt{k/m}$. For every $q_y$, $\omega_x({\bm q})$
goes to zero with $q_x$ as $c q_x$, where $c=\omega_0 a$ is the
longitudinal (compressive) sound velocity, and reaches a
maximum of $2 \omega_0$ at the point $M$ on the zone edge [${\bm q}
= (\pi/a,0)$] and similarly for $\omega_y({\bm q})$ as shown in
\fref{fig:k'>0sqdis}. These one-dimensional spectra produce
\cite{Kittel1971} (pages 205-209) a density of states
\begin{equation}
\rho(\omega) = \frac{1}{2 \pi \omega_0 a} \left(1-\frac{\omega^2}{4 \omega_0^2}\right)^{-1/2} ,
\end{equation}
which approaches a constant as $\omega \rightarrow 0$ and
diverges as $\omega \rightarrow 2 \omega_0$ at the zone edge.
\begin{figure}
\centering
\includegraphics{figure-9.pdf}
\caption{Densities of states $\tilde{\rho}(\omega)=\rho(\omega)/\rho(0)$ for
the (a) square and (b) kagome lattices. The dashed black line (labeled $\bf 1$) is the flat, one-dimensional
$\tilde{\rho}(\omega)$ at $\omega\ll\omega_0$ when $k'=0$, and the full red line (labeled $\bf 2$) is
$\tilde{\rho}(\omega)$ for $k'>0$ showing linear-in-$\omega$ Debye behavior at
$\omega \ll \omega^*$, van Hove singularities near $\omega = \omega^*$, and constant
behavior at $\omega >\omega^*$. Lines $\bf 3$ and $\bf 4$ in (a) are effective medium
results for different probabilities of occupation of NNN bonds showing the washing-out
of the van Hove singularity and the smooth transition from Debye to one-dimensional behavior.
\emph{Adapted from references \cite{Souslov2009} and \cite{MaoLub2010}}}
\label{fig:DOS-sq-kag}
\end{figure}
When $k'>0$, modes [with ${\bm q} = q(\cos\theta, \sin \theta)$]
exhibit a $\cos (4 \theta)$ angular modulation at low frequency
and one-dimensional $k'=0$ behavior at larger ${\bm q}$. When $0<k'
\ll k$, $D_{ij}({\bm q})$ is well approximated as a diagonal matrix
with $m D_{xx}({\bm q}) \approx k a^2 q_x^2 + 4 k' \sin^2(q_y a/2)$ with
associated eigenfrequency $\omega_x({\bm q}) \sim
\sqrt{D_{xx}({\bm q})}$. These expressions immediately define the
characteristic frequency of \eref{eq:omega*} at the point
$M=(0,\pi/a)$ on the BZ edge. The first term in $D_{xx}({\bm q})$
represents the long-wavelength one-dimensional $NN$-modes that
are present when $k'=0$, whereas the second represents the
effects of $NNN$ coupling. When $q_x=0$, the only length scale
in the problem is the unit lattice spacing, and no divergent
length scale can be extracted from $D_{xx}( 0 , q_y) $. When
the first term is large compared to the second, $D_{xx}({\bm q})$
reduces to its form for the Maxwell $k'=0$ limit, and we can
extract a length by comparing these two terms. The shortest
length we can extract is that of \eref{eq:l*}, which comes from
comparing $k a^2 q_x^2$ to $m D_{xx}({\bm q})$ at the point $M$ on the zone
edge, as depicted in \fref{fig:k'>0sqdis}. If $q_y<\pi/a$, the
Maxwell limit is reached when $q_x
> q^*$. A similar analysis applies to $D_{yy}({\bm q})$ when $q_y >
q^*$. If a square of length $l$ is cut from the bulk, the
wavenumbers of its excitations will be greater than $\pi/l$,
and for $ql^*>1$, all modes within the box will be effectively
those of the lattice without $NNN$ bonds. This construction is
equivalent to the cutting argument of Wyart et al.
\cite{WyartWit2005b,Wyart2005}.
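As a quick check of the scales in \eref{eq:omega*} and \eref{eq:l*} (using the small-$q_x$ expansion of the $NN$ term quoted above): at the zone-edge point $M=(0,\pi/a)$,
\begin{equation}
m D_{xx}(0,\pi/a) = 4 k' \quad\Rightarrow\quad \omega^* = 2\sqrt{k'/m} ,
\qquad
k a^2 (q^*)^2 = 4 k' \quad\Rightarrow\quad q^* = \frac{1}{\ell^*} = \frac{2}{a}\sqrt{\frac{k'}{k}} .
\end{equation}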
The characteristic length of \eref{eq:l*} is identical to the
length at which the frequency of the compressional mode
$\omega_x ( q_x\sim 1/l) \sim \sqrt{k}/l \sim
\sqrt{K_{11}}/l$ becomes equal to $\omega^*$. A meaningful
length from the transverse mode $\omega_x(0, q_y)$ cannot be
extracted in a similar fashion. The full phonon spectrum
[\fref{fig:k'>0sqdis}(a)] exhibits acoustic phonons identical
to those of a standard square lattice at $q\ll 1$ and a saddle
point at the point $M$. Thus, the low-frequency density of
states shown in \fref{fig:DOS-sq-kag}(a) is Debye-like:
$\rho(\omega) \sim \omega/\sqrt{k k'}$ with a denominator that,
because of the anisotropy of the square lattice, is
proportional to the geometric mean of longitudinal and
transverse sound velocities rather than to a single velocity.
In addition $\rho(\omega)$ exhibits a logarithmic van Hove
singularity at $\omega^*$ and approaches the one-dimensional
limit $(1/\pi)/\sqrt{k}$ at $\omega^* \ll \omega \ll
2\sqrt{k}$. The frequency $\omega^*$ [\eref{eq:omega*}] is
recovered by equating the low-frequency Debye form at
$\omega^*$ to the high-frequency isostatic form of the density
of states.
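Schematically, keeping only the dependence on $k$ and $k'$ and dropping factors of $m$ and $a$, this matching reads
\begin{equation}
\rho_{\text{Debye}}(\omega^*) \sim \frac{\omega^*}{\sqrt{k k'}} \sim \frac{1}{\sqrt{k}}
\quad\Rightarrow\quad \omega^* \sim \sqrt{k'} ,
\end{equation}
consistent with \eref{eq:omega*}.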
\subsection{Kagome lattice\label{ssec:kagome}}
The $NN$ kagome lattice consists of three grids of straight
parallel filaments intersecting at lattice sites as shown in
\fref{fig:kagome1}. This figure also shows two different unit
cells reflecting the $3$-fold symmetry of the lattice. For the
moment, we focus on lattices with $N_x=N_y$ cells on a side as
shown in the figure. \ref{app:compat} derives the compatibility
matrix for a generalized kagome lattice, of which the simple
kagome lattice considered here is a special case. The
equilibrium and dynamical matrices are straightforwardly
calculated from it, as are the phonon spectrum and zero modes.
As in the square lattice, each of the kagome-lattice filaments
supports a \SSS under periodic boundary conditions (care must
be taken to join equivalent sites at the boundaries to create a
single filament for bonds slanting to the left from the
bottom), and the expectation is that a periodic lattice will
have $3N_x$ states of self stress, and this is indeed the case.
There is one state of self stress for each wavevector ${\bm q} = q
\bm G_j/G_0$ along the symmetry equivalent lines from $\Gamma$ to
$M$ in the BZ [\fref{fig:kagome-disperion1}(a)] parallel to the
three reciprocal lattice vectors $\bm G_j$, where $G_0 =|\bm G_j|=
4 \pi/(\sqrt{3} a)$. Since there are $N_x$ values of ${\bm q}$
along each of these directions, there are a total of $3N_x$
\SSSsa, from which \SSSs for individual filaments can be
constructed.
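This count is again consistent with the Index theorem (a quick check): an $N_x \times N_x$ periodic kagome lattice has $N = 3 N_x^2$ sites and $N_B = 6 N_x^2$ $NN$ bonds, so
\begin{equation}
N_0 - N_S = 2 N - N_B = 6 N_x^2 - 6 N_x^2 = 0 ,
\end{equation}
and the $3 N_x$ filament states of self stress must be matched by an equal number of zero modes, which lie on the $\Gamma M$ lines.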
\begin{figure}
\centering
\includegraphics{figure-10.pdf}
\caption{(a) An $N_x=5$ by $N_y = 5$ kagome lattice showing $NN$ (full lines) and
$NNN$ (dotted lines) bonds. The gray bonds along the right and upper edges
are the $2(N_x+N_y-1)$ $NN$ and $4(N_x+N_y-1)$ $NNN$ bonds that must be cut from a
$N_x \times N_y$ lattice under periodic boundary conditions to produce the
free $N_x \times N_y$ lattice. (b) Representation of a zero mode.
(c) and (d) two different symmetric versions of the kagome unit cells showing labeling of sites and $NN$ bonds.
The vectors ${\bm a}_1$ and ${\bm a}_2$, ${\bm a}_1$ and $-{\bm a}_3$, or any other similar pair can serve
as basis vectors for the triangular Bravais lattice.}
\label{fig:kagome1}
\end{figure}
\begin{figure}
\centering
\includegraphics{figure-11.pdf}
\caption{(a) Shortest reciprocal lattice vectors, related by $3$-fold rotations, of the kagome lattice that satisfy
$\bm G_j\cdot {\bm a}_j=0$.
(b) Dispersion of lowest-frequency mode, showing ``knife-edges" along the
$\Gamma M$ lines in the BZ. (c) Density plot of lowest-frequency mode with $k'/k = 0.02$.
Note the isotropic behavior near ${\bm q}=0$. \emph{Adapted from reference \cite{Souslov2009}}}
\label{fig:kagome-disperion1}
\end{figure}
The $3N_x$ \SSSs require an equal number of zero modes, which,
as in the square lattice, occur along lines in reciprocal space
that have no component parallel to one of the three grids,
i.e., along the lines $\Gamma M$ in reciprocal space as shown
in \fref{fig:kagome-disperion1}(b). The zero modes for a
filament parallel to the $x$-axis consist of displacements of
all $1$-sites and $3$-sites by $s (\cos \pi/6, \sin \pi/6)$
and $s(\cos \pi/6, -\sin\pi/6)$, respectively, for
infinitesimal $s$. This corresponds to rigid rotations of
triangles about site $2$ as shown in \fref{fig:kagome1}(b). An
alternative description of the mode is that the entire filament
is displaced a distance $s \cos \pi/6$ to the right, and sites
$1$ and $3$ are, respectively, displaced upward and downward a
distance $s\sin\pi/6$, producing the zero-mode structure of
\fref{fig:selftstress2}(b). The spectrum of the
lowest-frequency modes has a linear dispersion with $\omega=cq$
($c=\sqrt{3} k a/8$) in the direction perpendicular to the
$\Gamma-M$ zero modes (\fref{fig:kagome-disperion2}(c)).
As in the square lattice, adding $NNN$ bonds gaps the spectrum
leading to a characteristic frequency $\omega^* \sim \sqrt{k'}$
and associated length scale $\ell^* \sim 1/\sqrt{k'}$
calculated from the dispersion along the line $M$ to $K$ at the
zone edge (\fref{fig:kagome-disperion2}). Other characteristic
frequencies can be calculated from the lowest frequency optical
modes or from the frequency at which the low-frequency acoustic
phonon modes cross over to a nearly flat dispersion.
\begin{figure}
\centering
\includegraphics{figure-12.pdf}
\caption{(a) Phonon spectrum for the three lowest modes of the undistorted
kagome lattice. Dashed lines depict frequencies at $k'=0$
and full lines at $k'>0$. The inset shows the Brillouin zone
with symmetry points $\Gamma$, $M$, and $K$. (b) Phonon spectrum
of the twisted kagome lattice with $\alpha
>0$ and $k'=0$. (c) Phonon dispersion
along the zone edges from $K$ to $M$ in schematic form for both the
undistorted lattice at $k'>0$ and for the twisted lattice with $\alpha \neq 0$,
showing the characteristic
frequency $\omega^*=2 \sqrt{k'/m}$ ($\omega_{\alpha}
\sim \sqrt{k/m} |\sin \alpha |$) and length $l^*=(q^*)^{-1}=(a/2)(k/k')^{1/2}$
($l_{\alpha} \sim 1/|\sin \alpha|$) for the
untwisted (twisted) kagome lattice. \emph{Adapted from references \cite{Souslov2009} and
\cite{SunLub2012}.}}
\label{fig:kagome-disperion2}
\end{figure}
A total of $4 N_x -1$ bonds must be cut to liberate a $N_x
\times N_x$-unit-cell free lattice from its periodic parent.
There are no states of self-stress in the free lattice, so
there must be as many zero modes as bonds that are cut. This
is more zero modes than the $3N_x$ in the periodic system. The
difference between the two numbers arises from the joining of
lines slanting to the left under periodic conditions. The
number of horizontal and right slanting filaments is the same
in both cases. However, under PBCs, there are $N_x$ distinct
left-slanting filaments; under free BCs, there are $N_x$ such
lines terminating at the bottom and $N_x-1$ terminating at the
right side of the lattice.
Because of the three sets of filaments aligned along ${\bm a}_n$,
at ${\bm q}=0$, there are now three rather than two \SSSsa,
characterized by bond vectors
$\hat{\mathbf{t}}_1=(1,0,0,1,0,0)/\sqrt{2}$,
$\hat{\mathbf{t}}_2=(0,1,0,0,1,0)/\sqrt{2}$, and
$\hat{\mathbf{t}}_3=(0,0,1,0,0,1)/\sqrt{2}$, each of which has
a nonzero overlap with the vector of affine distortions. This
gives enough \SSSs to fully stabilize the elastic energy of the
$NN$ kagome lattice with nonzero Lam\'{e} coefficients
\cite{HutchinsonFle2013}. Addition of $NNN$ bonds increases
these coefficients:
\begin{equation}
\lambda = \mu = \frac{\sqrt{3}}{8}( k +3 k') .
\label{eq:kagome-elas}
\end{equation}
The response is affine even though the three sites per unit cell
introduce the possibility of their undergoing nonaffine
displacement to lower the energy. However, the geometry of this
lattice is special, and the response is necessarily affine.
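A compact way to see both the affine response and the isotropy is to evaluate \eref{eq:elastic-self-stressP} directly (a sketch, assuming $NN$ bonds of length $a/2$, a single spring constant $k$, and cell area $v_c = (\sqrt{3}/2)a^2$). The overlap of the affine elongations with the \SSS on the filaments along $\hat{{\bm n}}_\alpha$ involves only $\hat{n}_{\alpha,i}\varepsilon_{ij}\hat{n}_{\alpha,j}$, and summing over the three directions at $0$, $60$, and $120$ degrees gives
\begin{equation}
f_{\text{el}} = \frac{k}{2\sqrt{3}} \sum_{\alpha=1}^{3}
\left(\hat{n}_{\alpha,i}\,\varepsilon_{ij}\,\hat{n}_{\alpha,j}\right)^2
= \frac{1}{2}\frac{\sqrt{3}k}{8}\,(\varepsilon_{kk})^2 + \frac{\sqrt{3}k}{8}\,\varepsilon_{ij}\varepsilon_{ij} ,
\end{equation}
which is the isotropic form with $\lambda = \mu = (\sqrt{3}/8)k$, in agreement with \eref{eq:kagome-elas} at $k'=0$.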
\subsection{Twisted kagome lattice\label{ssec:twisted-kagome}}
The twisted kagome lattice is constructed from the untwisted
lattice by following one of its finite zero modes: neighboring
triangles along all of the filaments are rotated in opposite
directions through an angle $\alpha$, as
shown in \fref{fig:twisted-kagome1}. There is a continuum of
lattices indexed by the angle $\alpha$ that bond $1$ makes with
the $x$-axis. As we shall see, this lattice has properties that
at first blush seem surprising but that in fact are simple
consequences of the Index theorem.
\subsubsection{Bulk and elastic properties}
The straight filaments of the untwisted kagome lattice become
``zigzagged" and lose their ability to sustain \SSSs. The
result is that there are only the two ${\bm q}=0$ states of self
stress required by the Index theorem and the existence of two zero modes
of translation, and there are no zero modes other than those at
${\bm q}=0$. Thus the simple rotation of triangles to create the
twisted lattice from the untwisted one gaps all but the
${\bm q}=0$ bulk phonons, just as does adding $NNN$ bonds to the
untwisted lattice (\fref{fig:kagome-disperion2}). The untwisted
spectrum is approached continuously as $\alpha \rightarrow 0$
leading to a characteristic frequency (measured by the gap at
the symmetry point $M$) and associated length scale,
\begin{equation}
\omega_\alpha \sim |\sin \alpha|, \qquad l_\alpha \sim \frac{1}{|\sin \alpha|} ,
\end{equation}
that, respectively, vanish and diverge as $\alpha \rightarrow
0$.
\begin{figure}
\centering
\includegraphics{figure-13.pdf}
\caption{(a) Twisted kagome lattice showing the displacements of sites in the
two unit cells shown in \fref{fig:kagome1} and rotation of triangles
through $\pm \alpha$. Bond $\bf 1$ (connecting sites $3$ and $1$) in this figure is rotated through an angle $-\alpha$.
The lattice spacing for bonds of length $a/2$ is
$a_L = a \cos \alpha$. (b) Lattices with different values of $\alpha$ superposed showing
how changing $\alpha$ reduces the lattice area. \emph{From reference \cite{SunLub2012}}}
\label{fig:twisted-kagome1}
\end{figure}
As \fref{fig:twisted-kagome1} shows, twisting the lattice
uniformly compresses it. If bond lengths are fixed at $a/2$,
the Bravais lattice vectors are reduced in length from $a$ to
$a_L=a \cos \alpha$, and the volume of each unit cell from
$(\sqrt{3}/2) a^2$ to $(\sqrt{3}/2)a^2\cos^2 \alpha$. Thus
angle changes modify the area of the lattice without changing
any bond length of a $NN$ lattice, implying that the bulk
modulus $B$ of these lattices vanishes for all $\alpha \neq 0$.
Observe that the twisted lattice has the peculiar property that
it expands or contracts isotropically at no energy cost. If it
expands in one direction, it will also do so in the perpendicular
direction. Elastic materials with this property have a negative
Poisson ratio \cite{Landau1986}; they are \emph{auxetic}. The
twisted kagome lattice has the most negative allowed Poisson
ratio of $-1$, which it retains for all strains, and it is
sometimes called \emph{maximally auxetic} \cite{SunLub2012}. In
addition, the unit cell contracts isotropically. Such lattices
are termed \emph{equiauxetic} \cite{MitschkeGue2013}. There are
not many naturally occurring materials that have this property,
but artificial ones can be created \cite{Lakes1987}.
As indicated above, there are two ${\bm q}=0$ states of self-stress
that have the potential to create non-vanishing elastic moduli.
The long-wavelength elasticity is necessarily isotropic, so if
the bulk modulus is zero, the only options are for there to be an
isotropic shear modulus or for none of the elastic moduli to be
nonzero. The two states of self-stress do overlap with affine
strain, the shear modulus $\mu = \sqrt{3} k/8$ is nonzero (and
curiously independent of $\alpha$ and identical to that of the
untwisted kagome lattice [\eref{eq:kagome-elas}]), and the
elastic energy density is
\begin{equation}
f_{\text{el}} = \frac{1}{2}\mu \,\tilde{\varepsilon}_{ij} \tilde{\varepsilon}_{ij} ,
\end{equation}
where $\tilde{\varepsilon}_{ij} = \varepsilon_{ij} - \frac{1}{2}\delta_{ij}\varepsilon_{kk}$ is
the symmetric, traceless shear strain tensor.
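The quoted Poisson ratio of $-1$ follows directly from this form (a one-line check using two-dimensional isotropic elasticity): a vanishing bulk modulus means $\lambda = -\mu$, so
\begin{equation}
\nu = \frac{\lambda}{\lambda + 2\mu} = \frac{-\mu}{\mu} = -1 .
\end{equation}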
The strain is related to the metric tensor $g_{ij}$ via
$\varepsilon_{ij} = (g_{ij} - \delta_{ij})/2$. The traceless part of
the strain, $\tilde{\varepsilon}_{ij}=(1/2)(g_{ij} -
\frac{1}{2}\delta_{ij} g_{kk})$, which is zero for $g_{ij} =
\delta_{ij}$, is invariant, and thus remains equal to zero,
under conformal transformations that take the metric tensor
from its reference form $\delta_{ij}$ to $h({\mathbf x} )\delta_{ij}$
for any continuous function $h({\mathbf x})$ of position ${\mathbf x}$. The
zero modes of the theory thus correspond simply to conformal
transformations, which in two dimensions are best represented
by the complex position and displacement variables $z=x+iy$ and
$w(z) = u_x(z) + i u_y ( z)$. All conformal transformations
are described by an analytic displacement field $w(z)$. Since
by Cauchy's theorem, analytic functions in the interior of a
domain are determined entirely by their values on the domain's
boundary (the ``holographic" property \cite{Susskind1995}), the
zero modes of a given sample are simply those analytic
functions that satisfy its boundary conditions. For example, a
disc with fixed edges (${\bm u}=0$) has no zero modes because the
only analytic function satisfying this fixed boundary condition is the trivial one
$w(z)=0$; but a disc with free edges (stress and thus strain
equal to zero) has one zero mode for each of the analytic
functions $w(z) = a_n z^n$ for integers $n \geq 0$. The
boundary conditions $\lim_{y\to \infty} {\bm u} (x,y) = 0$ and
${\bm u}(x,y) = {\bm u}(x+L, y)$ on a semi-infinite cylinder with axis
along $y$ and circumference $L$ are satisfied by the function $w(z) = e^{{\rm i} q_x
z}= e^{{\rm i} q_x x} e^{-q_x y}$ when $q_x = 2n\pi/L$, where
$n$ is an integer. This solution is identical to that for
classical Rayleigh waves \cite{Landau1986} on the same
cylinder. Like the Rayleigh theory, the conformal theory puts
no restriction on the value of $n$ (or equivalently $q_x$).
Both theories break down, however, at $q_x =q_c \approx \min
(l_\alpha^{-1}, a^{-1})$ beyond which the full lattice theory,
which yields a complex value of $q_y=q_y'+i
q_y^{\prime\prime}$, is needed.
\subsubsection{Surface modes\label{ssec:surface-m}}
As we have seen free two-dimensional lattices of $N$ sites cut
from a periodic Maxwell lattice necessarily have of order
$\sqrt{N}$ zero modes because of order $\sqrt{N}$ bonds must be
cut, and any sample-spanning states of self stress are lost
under the cut. In the untwisted kagome lattice, these modes are
identical to the bulk zero modes calculated under PBCs. In the
twisted lattice, whose cut lattice must have the same number of
zero modes as the untwisted lattice, there are no bulk zero
modes (except at ${\bm q}=0$), and as a result the zero modes must
be localized at surfaces. In the long-wavelength limit, these
modes must reduce to the zero-frequency Rayleigh waves of an
isotropic elastic continuum with vanishing bulk modulus with
decay length $l_s \equiv \kappa^{-1}$ equal to the inverse
surface wavenumber $q$. At shorter wavelength, $l_s$ is
determined by the length $l_\alpha$ associated with the twisted
phonon gap.
\begin{figure}
\centering
\includegraphics{figure-14.pdf}
\caption{(color online) (a) A free twisted kagome lattice with
free horizontal and vertical boundaries. Sites and bonds of unit cells that match
the surface are shown in black and red, respectively, and the geometric form of these unit
cells are indicated by dashed quadrilaterals. Cutting the two (four) dashed bonds per cell on the
horizontal (vertical) boundary produces a lattice with bottom (left) and top (right) boundary
cells $1$ ($4$) and $2$ ($5$). Alternatively removing these bonds along with one circled site and
the two bonds crossed by grey lines changes the top (right) boundary cell from $2$ ($5$) to $3$ ($6$).
The number of zero modes per surface unit cell is $1$ ($2$) on both the bottom (right)
and top (left) boundaries. Cell $7$ has one additional bond cut from it. (b) The reduced inverse
decay length $\kappa_r = \kappa/(G_r/2)$ of the horizontal boundaries
as a function of $q_r = q/(G_r/2)$, where $G_r =2 \pi/a$ for
$\alpha = \pi/20, \pi/10, 3 \pi/20, \pi/5, \pi/4$ in order from bottom to top. All curves
follow $\kappa_r = q_r$ near $q_r = 0$. The curve at $\alpha = \pi/4$ diverges at $q_r = 1.0$.
(c) $\kappa_r$ as a function of $q_r$ with $G_r = 2 \pi/(\sqrt{3}a)$ for $\alpha = \pi/8$. The positive
curves are for the left boundary and the negative ones are minus $\kappa_r$ for the right boundary. The
two positive curves merge at $q_r \approx 0.4$. There are still two distinct decays beyond this point
with the same real part and different imaginary parts. The straight grey lines are the
elastic limit $\kappa_r = q_r$.
}
\label{twisted-kagome-surface}
\end{figure}
\Fref{twisted-kagome-surface} depicts a finite rectangular
sample with free horizontal surfaces parallel to ${\bm a}_1$ and
vertical surfaces parallel to ${\bm a}_2 - {\bm a}_3$ along with unit
cells constructed so that all sites and bonds on a surface lie
in periodically repeated continuous cells. It also shows which
bonds (or bonds and sites) must be removed to liberate the
finite lattice from the one under PBCs. The removal of two
bonds or four bonds and one site per unit cell liberates the
horizontal surfaces. In either case, the number of zero modes
per cell is $n_0 = 2 \Delta n - \Delta n_b = 2$, where $\Delta
n=0$, $\Delta n_b = -2$ in the first case and $\Delta n=-1$,
$\Delta n_b = -4$ in the second. Similar arguments yield $n_0
= 4$ for vertical surfaces. As might be expected, the modes
are distributed equally between opposite surfaces, i.e., there
is one zero mode per unit cell on horizontal surfaces and two
modes per unit cell on vertical surfaces. Equivalently, there
is one (two) mode per surface wavenumber $-G_s/2 <q< G_s/2$,
where $G_s$ is the magnitude of the surface reciprocal lattice
vector, $2\pi/a$ for horizontal and $2\pi/(\sqrt{3}a)$ for
vertical surfaces.
The amplitude of the surface modes decays as
$\exp[-\tilde{\kappa} s]$ with distance $s$ away from the
surface all of the way to the opposite free surface, where
$\tilde{\kappa}$ is in general complex indicating oscillations
along with decay. $\kappa \equiv \text{Re}\, \tilde{\kappa} =
l_s^{-1}$ is the inverse decay length. In the case of
horizontal surfaces, the decay length is the same on opposite
surfaces. \Fref{twisted-kagome-surface}(b) plots the single
$\kappa(q,\alpha)$ for different values of $\alpha$. In the
case of vertical surfaces, the two decay lengths for the left
surface differ from those of the right one.
\Fref{twisted-kagome-surface}(c) plots these for $\alpha =
\pi/8$. In all cases, one $\kappa(q,\alpha)$ reduces to
$\kappa(q,\alpha)=q$ in the long-wavelength limit as required
by the continuum theory.
Surface zero modes are by definition in the null space of the
compatibility matrix $\ma{C}$. Systems with parallel free
surfaces with PBCs along their direction of alignment can be
viewed as a series of layers $n=1,...,N$, and $\ma{C}$ can be
decomposed as
\begin{equation}
\ma{C} =
\begin{pmatrix}
{\mathbf C}_{11} & {\mathbf C}_{12} & 0 & \dots & 0 & 0 \\
0 & {\mathbf C}_{11} & {\mathbf C}_{12} & \dots & 0 & 0 \\
\hdotsfor{6} \\
0 & 0 & 0 & \dots & {\mathbf C}_{11}& {\mathbf C}_{N-1,N} \\
0 & 0 & 0 & \dots & 0 & {\mathbf C}_{NN}
\end{pmatrix} ,
\label{eq:matrix-C}
\end{equation}
where all ${\mathbf C}_{ij}$'s depend on the surface wavenumber $q$ and
$\alpha$. ${\mathbf C}_{11}$ is the $6\times6$ matrix connecting bonds
and sites within a single unit cell and ${\mathbf C}_{12}$ is the
$6\times 6$ matrix connecting bonds in one unit cell to sites
in unit cells one layer deeper in the sample. The same unit
cells are used throughout the sample. The opposite surface may
terminate with only a partial version of these cells, and the
exact forms of $ {\mathbf C}_{N-1,N}$ and ${\mathbf C}_{NN}$ depend on how the
surface is terminated. Consider, for example, horizontal
surfaces, the bottom of which is characterized by the unit cell
labeled $1$ in \fref{twisted-kagome-surface}(a) and the top of
which is characterized by unit cell $2$. For modes localized
at the bottom surface, unit cell $1$ is used throughout the
sample. The termination cell $2$ at the opposite surface has
all three sites but only four bonds of unit cell $1$. The
missing bonds ($5$ and $6$) are not affected by cell $N-1$.
Therefore, ${\mathbf C}_{N-1,N} = {\mathbf C}_{12}$ is a $6\times 6$ matrix,
and ${\mathbf C}_{NN}$ is a $4\times 6$ matrix. If the top surface is
terminated by unit cell $3$, which has only $2$ sites and $2$
bonds of cell $1$, ${\mathbf C}_{N-1,N}$ is a $6 \times 4$ matrix and
${\mathbf C}_{N,N}$ is a $2 \times 4$ matrix. Thus displacements
$\ma{U} = ({\bm u}_1, ... ,{\bm u}_N)$ in the null space of $\ma{C}$
satisfy
\begin{equation}
{\mathbf C}_{11} {\bm u}_n + {\mathbf C}_{12} {\bm u}_{n+1} = 0 ,
\end{equation}
for $n=1, ... N-2$. These equations are solved by ${\bm u}_{n+1} =
\lambda{\bm u}_n$ and
\begin{equation}
\det ({\mathbf C}_{11} + \lambda {\mathbf C}_{12}) = 0
\label{eq:detC-s}
\end{equation}
subject to the boundary conditions that ${\mathbf C}_{11}{\bm u}_{N-1} +
{\mathbf C}_{N-1,N} {\bm u}_N = 0$ and ${\mathbf C}_{NN} {\bm u}_N = 0$.
The inverse penetration depth is determined by $\lambda$ via
$\exp[-\kappa a_{\perp}] = \lambda$, where $a_{\perp}= a
\sqrt{3}/2$ is the distance between unit cells in the direction
perpendicular to the surface. In case of termination with
either unit cell $2$ or unit cell $3$, ${\bm u}_{N}$ must equal
$\lambda {\bm u}_{N-1}$ to solve the first boundary condition.
Though required by the known existence of the zero mode, it is
a remarkable fact that the projection of ${\bm u}_1$, which is in
the null space of ${\mathbf C}_{11} + \lambda {\mathbf C}_{12}$, onto the space
of displacements of the last layer is also in the null space of
${\mathbf C}_{NN}$, and as advertised earlier, the surface mode decays
exponentially from one free surface to the next. A similar
analysis applies to the vertical surface for which $a_{\perp}=
a/2$. Of course, any linear combination of the exponentially
decaying modes on the two sides of a strip is a zero mode if
each is individually a zero mode. In the usual situation in which the
Rayleigh waves have a nonzero frequency, the eigenstates of a
finite strip are symmetric and anti-symmetric combinations of
states on the two surfaces that interact across the strip
yielding a peristaltic mode with $\omega \sim q$ and a bending
mode with $\omega\sim q^2$. It should be noted that $\kappa$
for modes that are not localized within the surface unit cell
can also be calculated from $\det {\mathbf C}({\bm q})$, which is a function
of $\exp[{\rm i}\, {\bm q}\cdot {\bm a}_i]$, $i = 1,2,3$, by setting ${\bm q}=
q_{\perp} \hat{{\bm n}}_\text{in} + {\bm q}_{||}$, where ${\bm q}_{||}$ is
the component of ${\bm q}$ parallel to the surface and
$\hat{{\bm n}}_\text{in}$ is the unit inward normal to the surface,
then setting $q_{\perp} = i \kappa$ and solving for $\kappa$ in
$\det {\mathbf C}(i \kappa, {\bm q}_{||})=0$. This approach does not
directly provide eigenvectors satisfying boundary conditions.
A surface consisting of unit cells $7$ in
\fref{twisted-kagome-surface}(a) differs from surfaces composed
of other unit cells shown in that figure in that it is missing
a bond on the surface: it is obtained from cell $1$ by cutting
the dashed downward pointing bond, and as a result, this
surface has two, not one zero mode per cell. The calculation
just outlined indeed produces two zero modes per $q$, one of
which is localized completely on the first row because
${\mathbf C}_{11}$ has a non-empty null space. Similarly, $\kappa$
diverges as shown in \fref{twisted-kagome-surface}(b) at $q =
G_s/2$ for $\alpha=\pi/4$ because ${\mathbf C}_{11}(G_s/2,\pi/4)$
has a non-empty null space.
\subsection{Other lattices\label{sec:other-lattices}}
So far, we have focussed on the three simplest examples of
two-dimensional periodic Maxwell lattices and the free lattices
cut from them. There are many others that can be constructed
from these without changing the local coordination numbers or
the lengths of any bonds and whose properties can easily be
understood in the context of the Index theorem. One of the simplest of
such lattices is the ``zigzagged" square lattice with two sites
per unit cell \cite{SouslovLub2014}, shown in
\fref{fig:zigzag-square}, in which every other row is displaced
to the right while allowing the requisite compression along the
vertical direction. This lattice retains the straight
horizontal filaments of the original square lattices but loses
those in the vertical direction. It does not, however, lose
any vertical \SSSs because the \SSSs from individual straight
filaments are converted to ones like those of
\fref{fig:selftstress2}(e) on pairs of zigzagged filaments of
which there are a total of $N_x$ under PBCs. These \SSSs must be
accompanied by zero modes in the spectrum, which show up as
horizontal and vertical lines of zeros in the lowest-frequency
mode and a vertical line of zeros in the second-lowest
frequency mode as shown in figures \ref{fig:zigzag-square}(b)
and (c). The vertical \SSSs have no overlap with affine strain,
and the lattice does not resist vertical compression. The
result is that the elastic energy density is simply
$f_{\text{el}} = k \varepsilon_{xx}^2/2$.
\begin{figure}
\centering
\includegraphics{figure-15.pdf}
\caption{(a) The zigzagged square lattice with two sites per unit cell and
rectangular unit cells and Brillouin zone. (b) and (c) density plots of the two
lowest-frequency bands, showing lines of zero modes along $q_y=0$ and $q_x=0$
in (b) and along $q_x=0$ in (c). The total number of zero modes in the two bands
is the same as in the undistorted square lattice.}
\label{fig:zigzag-square}
\end{figure}
The kagome lattice offers more interesting variations
\cite{SouslovLub2014,SunLub2012}.
\Fref{fig:kagome-variation}(a) shows one such variation with
intriguing properties. It consists of alternating rows of
oppositely tilted distorted hexagons. It has rectangular
symmetry with $6$ sites per unit cell. As
\fref{fig:kagome-variation}(b) shows, it has an unusual
spectrum: its modes are fully gapped (except at ${\bm q}=0$) near
the origin but exhibit curved lines of zero modes at large
${\bm q}$. It has zero-frequency surface modes for surface
wavenumbers in the gapped region but not in the ungapped
region. Curiously, the elastic energy density is identical to
that of the twisted kagome lattice even though it has a lower
symmetry.
\begin{figure}
\centering
\includegraphics{figure-16.pdf}
\caption{(a) An example of one of the many lattices that can be constructed
from the kagome lattice without changing any bond lengths, overlaid on an
undistorted kagome lattice. This one is
a stacked lattice with alternating rows of oppositely tilted hexagons.
(b) Density map of the lowest-frequency band showing the gapped spectrum
near the origin and two curved lines of zero modes passing between two corners.
\emph{Adapted from reference \cite{SunLub2012}}}
\label{fig:kagome-variation}
\end{figure}
The square and kagome lattices are the two-dimensional Maxwell
lattices with the smallest unit cells. As discussed above, the
kagome lattice can be distorted in various ways to produce
larger-unit-cell Maxwell lattices, but there are many other
lattices including random ones \cite{Henley1991} that can be
created. An intriguing set of periodic Maxwell lattices
\cite{StenullLub2014} is the set arising from rational
approximants to Penrose rhombic tilings \cite{Penrose74}
that approach a quasi-crystalline lattice with $5$-fold
symmetry
\cite{LevineSte1984,SteinhardtOst1987,DiVincenzoSte1991}. A
unit cell for the second periodic approximant with $80$ sites
and $160$ bonds per cell is shown in \fref{fig:Penrose}. These
periodic lattices, which can be constructed via projection from
a five-dimensional cubic lattice \cite{deBruijn1981}, all have
an average coordination of exactly four even though the
coordination of local sites varies from three to as high as
ten. The size of the unit cell increases rapidly from $N=30$
in the first approximant to $N=25840$ in the eighth
approximant. Each of the approximants is a legitimate periodic
lattice with a full set of phonon branches with dispersions
depending on lattice wavenumber ${\bm q}$. They can also be
interpreted as a single-cell system under toroidal PBCs that
approach the infinite-cell quasicrystalline limit. In this
interpretation, which we pursue here, only the ${\bm q}=0$ part of
the spectrum is of physical interest as is the case for
periodic approximants to randomly packed spheres at jamming.
These lattices have a number of interesting properties: (1)
their undistorted versions have of order $\sqrt{N}$ \SSSs and
zero modes, but none of the \SSSs overlap with affine strain,
and all elastic moduli are zero, much as in rigidity
percolation. (2) Randomizing site positions removes all but
the two required \SSSs and zero modes. The two \SSSs overlap
with affine strain to produce two independent positive
eigenvalues of the elasticity matrix, one corresponding in the
large $N$ limit to the bulk modulus with associated eigenvector
of pure compression and one to a shear modulus with associated
shear eigenvector. The bulk modulus increases with $N$
reaching a saturation value at $N\approx 10^4$. The shear
modulus on the other hand approaches zero as $1/N$. The latter
behavior is required by the $5$-fold-symmetric quasicrystalline
limit whose elasticity must be isotropic with both shear moduli
equal. Because there are only two \SSSsa, one shear modulus
must be zero if the bulk modulus is nonzero, and the other must
approach zero with increasing $N$. This is essentially the same behavior
observed in randomly packed spheres at the jamming threshold
with $z=2d$ \cite{GoodrichNag2012,GoodrichNag2014}.
\begin{figure}
\centering{\includegraphics{figure-17.pdf}}
\caption{(color online) Unit cell of the second periodic approximant of the Penrose tiling
showing \SSSs (circled $1$, $2$, and $3$). In all states, stress is localized
on vertical ladders with different signs of stress on opposite sides as in \fref{fig:selftstress2}(c).
States $2$ and $3$ share
the bond marked $\text{b}_s$, and are not orthogonal. They can be orthogonalized to produce states mostly,
but not completely localized on the two ladders. \emph{From reference \cite{StenullLub2014}}}
\label{fig:Penrose}%
\end{figure}
\section{Topological phonons\label{sec:topological}}
Twisting the kagome lattice gaps all zero modes of the
untwisted lattice except those at ${\bm q}=0$. This gapping is
reminiscent of that of the electron spectrum in systems like
polyacetylene \cite{ssh,NiemiSem1986}, quantum Hall materials
\cite{halperin82,haldane88} and topological insulators
\cite{km05b,bhz06,mb07,fkm07,HasanKane2010,QiZhang2011}, which
is associated with the appearance of topologically protected
zero modes at free boundaries and at boundaries separating two
topological classes. An interesting and natural question is
then whether or not the phonon spectra of Maxwell lattices can
be gapped in different ways to produce distinct topological
classes with protected boundary modes. The answer is yes they
can be, and this section, which is mostly based on reference
\cite{KaneLub2014}, will explore both how they can be
constructed and the nature of their interface states.
\subsection{A one-dimensional Model \label{sec:one-d-model}}
We begin with a one-dimensional model whose phonon spectrum
(including both positive and negative frequencies) is identical
to that of the one-dimensional Su-Schrieffer-Heeger model for
polyacetylene \cite{ssh,NiemiSem1986} schematically depicted in
figures \ref{fig:1d-model}(a) and (b). Our model, depicted in
\fref{fig:1d-model}(c) and (d), consists of rigid bars
of length $r$ that can rotate freely about fixed positions on a
one-dimensional periodic lattice. The ends of neighboring bars
are connected by harmonic springs whose lengths are adjusted so
that the equilibrium configuration is one in which alternate
rods make an angle $\bar{\theta}$ with the upward or downward
normals. Bars tilt towards the right if $\bar{\theta}>0$ and to the
left if $\bar{\theta}<0$. Each rod $s$ has one degree of freedom
$\theta_s =\bar{\theta}-\delta \theta_s$, and each spring
provides one constraint. Under periodic boundary conditions,
the number $N$ of rods equals the number $N_B$ of springs. In
the linearized limit, the compatibility matrix with components
$C_{\beta s}$ connects the stretch in spring $\beta$ with
rotations of rod $s$: $\delta l_\beta = C_{\beta s} \delta
\theta_s$, and the equilibrium matrix $Q_{s\beta}$ relates
torques on rod $s$ to tensions $t_\beta$ in spring $\beta$:
$\tau_s = -Q_{s\beta} t_\beta$. With appropriate sign
convention for the torque, $C_{\beta s} = Q^T_{\beta s}$. In a
state of self stress, springs are under tension, but there are
no torques on the rods. The Index theorem thus applies directly to this
system. Under periodic boundary conditions $N_B= N$ and $N_0 =
N_S$. Cutting one bond creates a free lattice with $N_0 = N_S
+1$. Thus, if there are no \SSSsa, the free system must have
one zero mode, which can either be a mode in the bulk spectrum
or a surface mode on one of the boundaries. But what determines
whether it is localized on the right or left boundary? As we
shall now see, it is the topological class.
\begin{figure}
\centering
\includegraphics{figure-18.pdf}
\caption{(a) and (b) depict
the SSH model of polyacetylene, with A and B sublattices indicated by blue circles
and red squares, respectively.
(a) describes the gapless state with all bonds identical, while (b) describes the
gapped AB dimerized state, with double (single) bonds on the AB (BA) bonds. The BA
dimerized state with single and double bonds interchanged is not shown.
(c) and (d) show the 1D isostatic lattice model in which masses, represented by the
larger blue dots, are connected by springs in red and are constrained
to rotate about fixed pivot points, represented by small black dots.
(c) is the gapless
high-symmetry state with $\bar\theta=0$, and (d) is the gapped lower-symmetry phase
with $\bar\theta >0$. (c) and (d) are equivalent to (a) and (b) if we identify
the masses (springs) with the A (B) sublattice sites.
(e) shows a domain wall in polyacetylene connecting the AB and BA dimerized states.
There is a topologically protected zero-energy state associated with the A sublattice at the defect. (f) shows
the equivalent state for the mechanical model with a topologically protected
zero-frequency mode at the domain wall connecting a $\bar{\theta}=+\theta_c$ lattice with
a $\bar{\theta}=-\theta_c$ lattice. (g) shows a domain wall connecting the BA and AB dimerized states,
which has a zero energy state associated with the B sublattice.
(h) shows the equivalent isostatic state with a state of self-stress (SS) at the domain wall and zero modes at
the end so that Index count $N_0-N_S=1$ is satisfied. \emph{Adapted from reference \cite{KaneLub2014}}}
\label{fig:1d-model}
\end{figure}
We proceed now to a more detailed analysis of our model.
The components of the compatibility matrix at rest angle
$\bar{\theta}$ are
\begin{equation}
C_{s\beta}(\bar{\theta}) = c_1(\bar{\theta}) \delta_{s,\beta} - c_2(\bar{\theta})\delta_{\beta,s+1} ,
\end{equation}
where
\begin{equation}
c_{1(2)} = \frac{(a \pm 2 r \sin \bar{\theta} )r \cos \bar{\theta}}{\sqrt{a^2 + 4 r^2 \cos^2 \bar{\theta}}} .
\end{equation}
Thus $|c_1|>|c_2|$ for all $0<\bar{\theta}<\pi$, and $|c_1|<|c_2|$
for all $-\pi<\bar{\theta}<0$. The energy of the system (contained
entirely in the stretching of the springs) is then
\begin{equation}
E =\tfrac{1}{2} k \sum_{\beta} (\delta l_\beta)^2 =
\tfrac{1}{2} k \sum_s (c_1 \delta \theta_s - c_2 \delta \theta_{s+1})^2.
\end{equation}
The Fourier transform of $C_{s\beta}$ is
\begin{equation}
C(q) = c_1 - \rm e^{\rm i q a} c_2 ,
\label{eq:C(q)}
\end{equation}
and bulk phonon modes have frequency
\begin{equation}
\omega(q) = \pm|C(q)| = \pm \sqrt{(c_1 -c_2)^2 + 4 c_1 c_2 \sin^2 (qa/2)}
\end{equation}
(for unit mass). When $\bar{\theta} = 0$ (vertical rods), $c_1 =
c_2$, the energy becomes invariant with respect to $\delta
\theta_s \rightarrow \delta \theta_s +\delta$ for every $s$,
and there is necessarily a bulk zero mode at $q=0$ - this in
spite of the fact that the bases of the rods are anchored,
breaking translational invariance. For other values of
$\bar{\theta}$, the phonon spectrum is fully gapped.
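As a minimal numerical sketch of these formulas (the values of $a$ and $r$ below are illustrative choices, not taken from any reference), one can evaluate $c_1$, $c_2$ and $\omega(q)=|C(q)|$ and confirm that the gap $\omega(0)=|c_1-c_2|$ closes only at $\bar\theta=0$:
\begin{verbatim}
import numpy as np

def c12(theta, a=1.0, r=0.4):
    # c_1 and c_2 of the text for rest angle theta (illustrative a, r)
    root = np.sqrt(a**2 + 4*r**2*np.cos(theta)**2)
    c1 = (a + 2*r*np.sin(theta))*r*np.cos(theta)/root
    c2 = (a - 2*r*np.sin(theta))*r*np.cos(theta)/root
    return c1, c2

def omega(q, theta, a=1.0, r=0.4):
    c1, c2 = c12(theta, a, r)
    return np.abs(c1 - c2*np.exp(1j*q*a))   # = |C(q)|

print(omega(0.0, 0.0))   # 0: gapless when theta = 0 (c1 = c2)
print(omega(0.0, 0.3))   # > 0: gapped when theta != 0
\end{verbatim}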
In a finite system, the compatibility matrix can be expressed
in the form of \eref{eq:matrix-C} with ${\mathbf C}_{11} = c_1$ and
${\mathbf C}_{12} = -c_2$. The decay length of the surface state is
determined by $c_1 - \lambda c_2 = 0$ or $\lambda = c_1/c_2 =
e^{-\kappa a}\,\,\,(e^{\kappa a})$ for states localized on the
left (right) boundary. Thus, the one zero mode is on the left
if $|c_1|<|c_2|$ and on the right for $|c_1|>|c_2|$. The left
zero mode is particularly easy to see at the critical angle
$\theta_c = -\sin^{-1} [a/(2 r)]$ at which $c_1 = 0$ as shown
in \fref{fig:1d-model}. The equilibrium matrix ${\mathbf Q}={\mathbf C}^T$
for a finite system has no zero modes, and there are no \SSSs
as required by the Index count. This follows because ${\mathbf C}_{11}
t_1 = 0$ for any ${\bm t}$ in the null space of ${\mathbf Q}$. Then the
equation $-{\mathbf C}_{12} t_n + {\mathbf C}_{11} t_{n+1}=0$ sets all $t_n$
for $n>1$ equal to zero. Under periodic boundary conditions,
there must be one localized state of self stress for each
localized zero mode.
But what does this have to do with topology? The compatibility
matrix $C(q)\equiv |C(q)|\,\rm e^{\rm i \phi}$ (or more generally its
determinant) maps points in the Brillouin zone ($-\pi/a < q\leq
\pi/a$) to a path in the complex plane. Since it depends on
$\rm e^{\rm i q a}$, the path will be closed and return to its
starting point as $q$ advances between equivalent points in the
zone [$q\rightarrow q + (2 \pi/a)$]. These curves are
characterized by an integer winding number:
\begin{equation}
n = \frac{1}{2 \pi} \int_0^{2 \pi} d\phi =
\frac{1}{2 \pi}\int_0^{2\pi/a} dq \frac{\rm d}{\rm d q}\mbox{Im} \ln \det C(q) ,
\label{eq:top-index1}
\end{equation}
which for the $C(q)$ of \eref{eq:C(q)} is either $+1$ or $0$.
Clearly $n=0$ if the path in the complex plane does not enclose
the origin, and $n=1$ if it does. The first case occurs
whenever $|c_1|>|c_2|$ and the second whenever $|c_1|<|c_2|$ as
shown in \fref{fig:SSH-contour}. When $|c_1|=|c_2|$, the
curve passes through the origin. The winding
number is thus a topological invariant equal to $1$ for all
$-\pi<\bar{\theta}<0$ and to $0$ for all $0<\bar{\theta}<\pi$. The only
way it can change values is by passing through the critical
angles $\bar{\theta}=0$ or $\bar{\theta} = \pi$ (bars lie along the
horizontal axis). When $n=0$, the zero mode is on the right and
if $n=1$, the zero mode is on the left. The connection between
the topological invariant and the existence of zero modes is
easy to see in this case. A zero surface mode exists if there
is a solution to $c(\lambda)=c_1 - c_2 \lambda=0$ with
$|\lambda|<1$. The compatibility matrix is simply
$C(q)=c(\lambda =\rm e^{\rm i q a})$, and along the closed path
it describes in the complex plane, $|\lambda|=1$. In the
interior of the path $|\lambda|<1$. Thus, if the path encloses
the origin, which is the point at which $c(\lambda)=0 $, the
solution for $\lambda$ will have a magnitude less than one. A
complementary perspective, based on the Cauchy argument
principle, is discussed in \ref{App:gauge}.
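The winding number is also easy to evaluate numerically. The following minimal sketch (with $a=1$; the values of $c_1$ and $c_2$ are illustrative) tracks the phase of $C(q)=c_1-c_2\,\rm e^{\rm i q}$ as $q$ traverses the zone:
\begin{verbatim}
import numpy as np

def winding_number(c1, c2, nq=4001):
    # phase winding of C(q) = c1 - c2*exp(i q) over the BZ (a = 1)
    q = np.linspace(0.0, 2.0*np.pi, nq)
    C = c1 - c2*np.exp(1j*q)
    phase = np.unwrap(np.angle(C))
    return int(round((phase[-1] - phase[0])/(2.0*np.pi)))

print(winding_number(0.5, 1.0))  # |c1| < |c2|: n = 1
print(winding_number(1.5, 1.0))  # |c1| > |c2|: n = 0
\end{verbatim}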
\begin{figure}
\centering
\includegraphics{figure-19.pdf}
\caption{Contour plots in the complex plane of $C(q)$ for complete circuit
of $q$ from $q=0$ to $q= 2\pi/a$. (a) $c_2=1.0$ and $c_1=0.5<c_2$; the circuit contains the origin
and the topological charge is $n=1$. (b) $c_1=c_2 = 1.0$; this is the transition case in
which the contour just touches the origin, and $n=0$. (c) $c_1=1.5>c_2 = 1.0$; the
circuit does not contain the origin and $n=0$. Note the circuits begin with $q=0$ on the far left and
circulate counterclockwise.}
\label{fig:SSH-contour}
\end{figure}
Here we considered the winding number associated with $C(q)$.
We could equally well have considered that associated with the
equilibrium matrix $Q(q) = C^*(q)$. Because $\rm e^{-\rm i qa}$
winds clockwise rather than counterclockwise with increasing
$q$, its winding number is either $-1$ or $0$, with $-1$
corresponding to $|c_1|<|c_2|$. Thus the surface state is at
the left when the winding number of $Q$ is $-1$. We will use a
generalization of the $Q$-winding number in our
characterization of topological classes of kagome lattices in
what follows.
As is well known for the SSH model \cite{ssh,jackiw76}, an
interface between the two dimerizations binds a zero mode, as
indicated. Similarly, the interface separating the two signs
of $\bar{\theta}$ does as well. This is most easily seen for
$\bar\theta = \pm \theta_c$ where the springs are collinear with
the bars, so that $c_1$ or $c_2 = 0$. \Fref{fig:1d-model}(f)
shows a domain wall between $+\theta_c$ and $-\theta_c$, in
which the center two sites share a localized zero mode.
\Fref{fig:1d-model}(h) shows an interface between $-\theta_c$
and $+\theta_c$ with a state of self-stress localized to the
middle three bonds, in addition to floppy modes localized at
either end. As long as there is a bulk gap, the zero modes
cannot disappear when $\bar{\theta}$ deviates from $\pm \theta_c$.
The zero modes remain exponentially localized, with a
localization length, $l = a/\ln|c_1/c_2|$, that diverges when
$\bar{\theta}\rightarrow 0$.
\subsection{Topological lattices \label{sec:top-lattice}}
We have just seen the intimate connection between topological
properties of the compatibility or equilibrium matrices and
zero-modes at boundaries in a one-dimensional system. Here we
discuss how these ideas can be extended to higher-dimensional
Maxwell lattices. We will for the most part only quote results,
and not provide details of how they were obtained. The latter
can be found in reference \cite{KaneLub2014}. There are two
significant differences between these lattices and those in one
dimension. First, because ${\mathbf Q}({\bm q})$ (or ${\mathbf C}({\bm q})$) depends on
the vector wavenumber ${\bm q}$ with two rather than one
independent component, topological characterization will
require a vector rather than a scalar winding number. Second,
there are boundary surface modes imposed by the Index theorem that are
required whether or not lattices have any topological
properties. We will thus need to divide the surface mode count
into those parts imposed by the Index theorem and those parts implied by
topological considerations. The result is that topology cannot
change the global count of the Index theorem, but it can move zero
modes from one boundary to another and give rise to
topologically protected zero modes at interfaces between two
different topological classes.
\subsubsection{Topological and total mode
count\label{ssec:top-tot-count}}
To unify the treatment of zero modes arising from mismatch of
sites and bonds and those that arise in locations where there
is no local mismatch, reference \cite{KaneLub2014} generalized
the Index theorem so that it determines a zero mode count
$\nu^S=N_0^S-N_S^S$ in a subsystem $S$ of a larger system. This
is well defined provided the boundary of $S$ is deep in a
gapped phase where zero modes are absent. This count has the
two separate contributions just discussed:
\begin{equation}
\nu^S = \nu_L^S + \nu_T^S ,
\label{eq:surfacenu}
\end{equation}
where $\nu_L^S$ is a local count of sites and bonds in $S$ and
$\nu_T^S$ is a topological count, which depends on the
topological structure of the gapped phases in the boundary of
$S$. The topological contribution has a form similar to that
of the one-dimensional system. For the periodic lattices we are
considering, it is best expressed as a count per unit cell on
an edge indexed by a reciprocal lattice vector $\bm G$ (i.e.,
$\bm G$ is normal to the surface with a magnitude of $2
\pi/a_\perp$, where $a_{\perp}$ is the distance between lines
of cells identical to the line of surface cells),
\begin{equation}
\tilde{\nu}_T^S =\nu_T^S/N_{\text{cell}}=\bm G\cdot {\bm R}_T /(2 \pi) ,
\label{eq:tnuT}
\end{equation}
where $N_{\text{cell}}$ is the number of surface unit cells and
${\bm R}_T$, a generalization of the one-dimensional winding
number, is a lattice vector
\begin{equation}
{\bm R}_T = \sum_i n_i {\bm a}_i ,
\label{eq:RvT}
\end{equation}
where ${\bm a}_i$ are the primitive translation vectors and
\begin{equation}
n_i = \frac{1}{2\pi \rm i}\oint_{C_i} d{\bm q}\cdot {\rm Tr}[{\mathbf Q}({\bm q})^{-1}
\nabla_{{\bm q}} {\mathbf Q}({\bm q}) ]
= \frac{1}{2\pi }\oint_{C_i} d {\bm q}\cdot \nabla_{\bm q} \phi ({\bm q}) ,
\label{eq:ni}
\end{equation}
where $\phi({\bm q})$ is the phase of $\det {\mathbf Q}({\bm q})$ ($\det{\mathbf Q}({\bm q}) =
|\det{\mathbf Q}({\bm q})|\,\rm e^{\rm i \phi({\bm q})}$). Here $C_i$ is a cycle of the BZ
connecting ${\bm q}$ and ${\bm q} + {\bm B}_i$, where ${\bm B}_i$ is a
primitive reciprocal vector satisfying ${\bm a}_i \cdot {\bm B}_j =
2\pi \delta_{ij}$ (${\bm B}_1=-\bm G_2$ and ${\bm B}_2=\bm G_1$ in
\fref{fig:kagome-disperion1}a). The $n_i$ are winding numbers
of the phase of $ \det {\mathbf Q}({\bm q})$ around the cycles of the BZ,
where ${\mathbf Q}({\bm q})$ is the equilibrium matrix in a Bloch basis.
This winding number is independent of path, and thus
independent of ${\bm q}$ so long as the spectrum is gapped
everywhere except the origin. The zero-mode at the origin of
${\bm q}$ is not topologically protected (i.e., it can be gapped by
a weak potential breaking translational symmetry), so it does
not cause any problems. It is possible, however, for there to
be topologically protected gapless points. These would be
point zeros around which the phase of $\det {\mathbf Q}({\bm q})$
advances by $2\pi$. These lead to topologically protected bulk
modes that form the analog of a ``Dirac semimetal" in
electronic systems like graphene
\cite{km05b,Aji2012,WeiChao2012,WeiWang2012,AjiHe2013,ZhuAji2013,PhillipsAji2014,SosenkoWei2014,WeiChao2014}.
These singularities could be of interest, but they do not occur
in lattices derived from the kagome lattice we study below.
They do, however, occur in modified square lattices
\cite{square} of the type shown in \fref{fig:square-kagome}
considered in reference \cite{GuestHut2003} and in the
pyrochlore lattice \cite{StenullLub2014b}.
In general, the winding number is not gauge invariant and
depends on how the sites and bonds are assigned to unit cells.
It is, however, possible to adopt a ``standard unit cell" with
basis vectors ${\bm r}_{\mu(\beta)}$ for the $n_s$ sites ($n_b =d
n_s$ bond centers) per cell for which the ``dipole" moment of
the site and bond charges, ${\bm R}_{\text{stan}} = d \sum_\mu
{\bm r}_\mu - \sum_\beta {\bm r}_\beta$, is equal to zero. The two
unit cells of \fref{fig:kagome1} satisfy this zero dipole
condition even after being distorted to those of the twisted
lattice. ${\mathbf Q}({\bm q})$ is defined using Bloch basis states
$|{\bm q},a=\mu,\beta\rangle \propto \sum_{l}\exp \rm i
{\bm q}\cdot({\bm R}_l + {\bm r}_a)|{\bm R}_l + {\bm r}_a \rangle$, where ${\bm R}_l$
is a Bravais lattice vector. In this gauge, ${\bm R}_T$ is
uniquely defined, and the zero-mode count is given by equations
\eref{eq:surfacenu} to \eref{eq:RvT}.
Because ${\bm R}_{\text{stan}}$ is zero, we could equally well use
a basis in which all ${\bm r}_\mu$ and ${\bm r}_\beta$ are simply zero.
This is in fact the basis we use for all of the calculations
presented here. It should be noted that it is not always
possible to find a symmetric ``standard" unit cell with a
vanishing dipole moment defined in terms of charges at sites
and bond centers. Indeed, there is no such cell for the SSH
model or for 3D pyrochlore lattices. Not having such a
standard cell is not a problem, however. The number and the
nature of surface modes do not depend on the choice of a
``standard" or reference unit cell and are unambiguous. The
easiest choice is usually to set the positions of all of the
sites and bonds in the unit cell to be zero. As discussed in
\ref{App:gauge}, $\det{\mathbf C}$ for different unit cell choices,
such as those used to calculate the zero surface modes in
\sref{ssec:surface-m}, will have different phase factors, which
account for the differences in the topological integral of
\eref{eq:ni} for different choices of unit cells.
The local count, $\nu^S_L$, depends on the details of the
termination at the surface and can be determined by evaluating
the macroscopic ``surface charge" that arises when charges $+d$
($-1$) are placed on the sites (bonds) in a manner analogous to
the ``pebble game" \cite{JacobsTho1995}. This can be found by
defining a bulk unit cell with basis vectors $\tilde{{\bm r}}_a$
that accommodates the surface with no leftover sites or bonds
(see figures \ref{twisted-kagome-surface} and
\ref{fig:parameter}) as discussed in \sref{ssec:surface-m}.
This unit cell depends on the surface termination and, in
general, will be different from the ``standard" unit cell
(\fref{fig:kagome1}) used for the calculation of $\nu^S_T$. The
local count is then the surface polarization charge given by
the dipole moment ${\bm R}_L$ per unit cell:
\begin{equation}
\tilde{\nu}_L \equiv \nu^S_L/N_{\text{cell}} = \bm G\cdot {\bm R}_L /2\pi,
\label{eq:tnuL}
\end{equation}
where
\begin{equation}
{\bm R}_L = d \sum_{\text{sites}\, \mu} \tilde{{\bm r}}_\mu - \sum_{\text{bonds}\,\beta} \tilde{{\bm r}}_\beta.
\label{eq:RL}
\end{equation}
The total zero-mode count on the surface then follows from
equations (\ref{eq:surfacenu}), (\ref{eq:tnuT}), and
(\ref{eq:tnuL}). The polarization of the standard cell is zero
so that ${\bm R}_L = {\bm R}_L - {\bm R}_{\text{stan}}$ can be calculated
from the displacements $\tilde{{\bm r}}_a - {\bm r}_a$ of the surface
cell sites and bonds relative to those of the standard cell.
\subsubsection{Constructing topological
lattices\label{ssec:cons-topo}}
The kagome lattice has three sites per unit cell, and
displacing these sites while maintaining the size and shape of
the unit cell creates different lattices, each of which can be
smoothly connected with the other across domain walls. The
twisted kagome lattice, with bond length increased by $1/\cos
\alpha$ to connect smoothly with the untwisted lattice, is an
example of this operation. The most general such lattice is
described by four parameters (one site in the unit cell can
always be fixed). The most useful parametrization is one in
which the ``straightness'' of each of the three sets of filaments is
controlled individually. Such a set is depicted in
\fref{fig:parameter}. Displacing site $1$ of the unit cell by
$-\sqrt{3} x_1 {\bm p}_1$, where ${\bm p}_1$ is the vector of length
$a$ perpendicular to lattice vector ${\bm a}_1$, ``zigzags" the
filaments parallel to ${\bm a}_1$. Though this operation leaves
filaments parallel of ${\bm a}_3$ straight, it zigzags the
filaments parallel to ${\bm a}_2$. The latter can be straightened
by the simple operation of displacing site $2$ along ${\bm a}_3$ by
$x_1 {\bm a}_3$, as shown in \fref{fig:parameter}(a). This process
is repeated to zigzag filaments parallel of ${\bm a}_2$ and ${\bm a}_3$
as shown in figures \ref{fig:parameter} (b) and (c). Finally
the $1-2-3$ triangle can be isotropically expanded by
displacing the three sites the same amount along directions
${\bm p}_1$, ${\bm p}_2$ and ${\bm p}_3$. The basis vectors for the unit
cell are then
\begin{equation}
{\bm r}_\mu = {\bm r}_\mu^0 - \sqrt{3} x_\mu {\bm p}_\mu + x_{\mu-1} {\bm a}_{\mu+1}
+ (z/\sqrt{3}) {\bm p}_{\mu-1} ,
\label{eq:top-parameters1}
\end{equation}
where ${\bm r}_\mu^0$ (e.g., ${\bm r}_1={\bm a}_1/2$, ${\bm r}_2 = - {\bm a}_3/2$,
${\bm r}_3 = 0$) are the basis vectors of the untwisted unit cell
and it is understood that all subscripts are evaluated mod $3$.
The bond vectors are then
\begin{equation}
{\bm b}_\beta = {\bm d}_{\beta+1} - {\bm d}_\beta =\tfrac{1}{2} {\bm a}_\beta
-(-1)^{{\rm Int}((\beta-1)/3)}[(x_{\beta+1}-x_{\beta-1} - z) {\bm a}_\beta
+\sqrt{3} x_\beta {\bm p}_\beta ] ,
\label{eq:top-parameters2}
\end{equation}
where ${\rm Int}(x)$ is the integer part of $x$ and it is
understood that ${\bm a}_{\beta+3} = {\bm a}_\beta$ and ${\bm p}_{\beta+3}
= {\bm p}_\beta$. The expressions for ${\bm b}_\beta$ for $\beta =
4,5,6$ are obtained from those for $\beta=1,2,3$ via the
relation ${\bm b}_{\beta+3} = {\bm a}_\beta - {\bm b}_{\beta}$. The
untwisted kagome lattice corresponds to
$X=(x_1,x_2,x_3;z)=(0,0,0;0)$ and the twisted kagome with twist
angle $\alpha = \tan^{-1}(2 \sqrt{3} x)$ to $X=(x,x,x;0)$. For
a lattice with straight filaments along ${\bm a}_1$ only,
$X=(0,x_2,x_3;z)$ and similarly for straight filaments along
${\bm a}_2$ and ${\bm a}_3$. These lattices have \SSSs and associated
zero modes along a single direction in the Brillouin Zone.
Moving off $x_1=0$ gaps the spectrum as shown in
\fref{fig:top-lat-dis}.
\begin{figure}
\centering
\includegraphics{figure-20.pdf}
\caption{(a) and (b) depict the geometry used to derive equations (\ref{eq:top-parameters1}) and
(\ref{eq:top-parameters2}). In (a), site $1$ of the symmetric unit cell of \fref{fig:kagome1}(c) is displaced
downward by $\sqrt{3}\, x_3\, {\bm p}_1$, perpendicular to the lattice vector ${\bm a}_1$, and
site $2$ is displaced along ${\bm a}_3$ by $x_1 {\bm a}_3$. The result is that the filaments along
${\bm a}_2$ and ${\bm a}_3$ remain straight whereas those along ${\bm a}_1$ are zigzagged. (b) depicts similar
displacements that zigzag only filaments along ${\bm a}_2$ or ${\bm a}_3$. (c) shows the ternary phase diagram
for fixed $x_1+x_2+x_3 >0$. The space enclosed by the central triangle corresponds to
the state with ${\bm R}_T=0$. The directions of ${\bm R}_T$ in the six sectors surrounding the
central triangle are indicated by the blue arrows. The red dots from right to left correspond to the
three lattices shown in the opposite order in \fref{fig:top-lat-dis}}
\label{fig:parameter}
\end{figure}
The topological polarization in terms of $X$ is
\begin{equation}
{\bm R}_T = \frac{1}{2} \sum_{p=1}^3 {\bm a}_p \,\text{sign}( x_p ).
\end{equation}
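As a quick check of this formula (assuming the kagome convention ${\bm a}_1+{\bm a}_2+{\bm a}_3=0$, which is consistent with the entries of table~\ref{table1}), the lattice $X=(-0.1,0.1,0.1;0)$ of \fref{fig:top-lat-dis}(c) has
$$
{\bm R}_T=\tfrac{1}{2}\left(-{\bm a}_1+{\bm a}_2+{\bm a}_3\right)=\tfrac{1}{2}\left(-{\bm a}_1-{\bm a}_1\right)=-{\bm a}_1 ,
$$
in agreement with the caption of \fref{fig:top-lat-dis}, while the twisted kagome point $X=(0.1,0.1,0.1;0)$ gives ${\bm R}_T=\tfrac{1}{2}({\bm a}_1+{\bm a}_2+{\bm a}_3)=0$.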
This leads to the ternary phase diagram shown in
\fref{fig:parameter}(c) as a function of $x_1$, $x_2$, and
$x_3$ for fixed $x_1+x_2+x_3$ and $z=0$. It has eight octants
corresponding to the eight possible sign combinations of
$(x_1,x_2,x_3)$. The $(+,+,+)$ and $(-,-,-)$ octants correspond
to the class of the twisted kagome lattice. The remaining 6
octants are states that are topologically equivalent, but
related to each other by $C_6$ rotations.
\Fref{fig:top-lat-dis} shows representative structures for the
${\bm R}_T=0$ phase (\fref{fig:top-lat-dis}(a)), the ${\bf R}_T \ne
0$ phase (\fref{fig:top-lat-dis}(c)), and the transition
between them (\fref{fig:top-lat-dis}(b)). The insets show
density plots of the lowest frequency mode, which highlight the
gapless point due to the acoustic mode in
\fref{fig:top-lat-dis}(a) and the gapless line due to states of
self stress in \fref{fig:top-lat-dis}(b). In
\fref{fig:top-lat-dis}(c), the gap vanishes only at the origin,
but there is a low-frequency cross that arises because acoustic
modes vary quadratically rather than linearly with ${\bm q}$ along
its axes. This behavior will be discussed in the next section.
\begin{figure}
\centering
\includegraphics{figure-21.pdf}
\caption{Representations of lattices and the density maps of their associated
lowest frequency modes for (a) a twisted kagome lattice with
$X=(0.1,0.1,0.1;0)$ and ${\bm R}_T=0$ (right-most red dot in \fref{fig:parameter}(c)), (b) a critical lattice with
$X=(0,0.1,0.1;0)$ (central red dot), and (c) a topological lattice with
$X=(-0.1,0.1,0.1;0)$ and ${\bm R}_T = -{\bm a}_1$ (left-most red dot). Note the isotropically gapped
spectrum (except for the origin) of (a), the line of zero modes in (b), and the soft-mode cross in
(c). \emph{Adapted from reference \cite{KaneLub2014}}}
\label{fig:top-lat-dis}
\end{figure}
\begin{figure}
\centering
\includegraphics{figure-22.pdf}
\caption{Real part of the reduced inverse penetration depth $\kappa_r =2 \kappa/G_S$
for various orientations of surfaces and ${\bm R}_T$
as a function of reduced surface wavenumber $q_r = 2 q/
G_S$, where $G_S$ is the magnitude of the surface reciprocal lattice vector. In
(a) the surface lattice vector $\bm G=\bm G_2-\bm G_3$, ${\bm R}_T = -{\bm a}_1$,
$\bm G\cdot {\bm R}_T/(2\pi) = 2$, and
there are four zero surface modes. The opposite surface with $-\bm G$ has no
zero modes. The full magenta curve is doubly degenerate with opposite imaginary
parts, and the two dashed curves are not degenerate. (b) $\bm G=-\bm G_1$, ${\bm R}_T = {\bm a}_3$ and $\bm G\cdot {\bm R}_T/(2 \pi) = 1$.
There are two surface zero modes. (c)
$\bm G= \bm G_2 - \bm G_3$, ${\bm R}_T = -{\bm a}_2$, $\bm G\cdot {\bm R}_T/(2 \pi) = 1$, and there are three surface zero modes.
The full magenta curves in (a) to (c) correspond to acoustic surface states, and in all cases $\kappa_r$ approaches
$0$ as $q_r^2$ as $q_r \rightarrow 0$.
(d) shows how to construct the dipole moments ${\bm R}_L$ for the surface unit cells in (a) to (c). On the left, black bonds
$\bf{4}$ and $\bf{6}$ of the symmetric unit cell are, respectively, displaced by ${\bm a}_2$ and $-{\bm a}_3$
while the circled site $1$ is displaced by ${\bm a}_2$ to produce the surface unit cell
of (a) and (c) with ${\bm R}_L= 2 {\bm a}_2 -({\bm a}_2-{\bm a}_3) = -{\bm a}_1$ and $\tilde{\nu}_L^S=2$.
On the right, the black bond $\bf{6}$ is displaced
through $-{\bm a}_3$ to produce the surface unit cell of (b) with ${\bm R}_L={\bm a}_3$ and $\tilde{\nu}_L=1$. }
\label{fig:topological-surface}
\end{figure}
\begin{table}
\caption{\label{table1} Reciprocal lattice vectors $\bm G$
indexing surfaces, dipole moment ${\bm R}_L$, and local count
$\tilde{\nu}_L$ for the seven surface cells of
\fref{twisted-kagome-surface}(a). The reciprocal lattice
vectors $\bm G_1$, $\bm G_2$, and $\bm G_3$ are depicted in
\fref{fig:kagome-disperion1} and the Bravais lattice vectors
${\bm a}_1$, ${\bm a}_2$, and ${\bm a}_3$ in \fref{fig:kagome1}.}
\begin{ruledtabular}
\begin{tabular}{@{}c|ccccccc}
cell & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
$\bm G$ & $-\bm G_1$ & $\bm G_1$ & $\bm G_1$ & $\bm G_2 - \bm G_3$ & $\bm G_3 - \bm G_2$ & $\bm G_3 - \bm G_2$ & $- \bm G_1$ \\
${\bm R}_L$ & ${\bm a}_3$ & ${\bm a}_2$ & $-{\bm a}_3$ & $- {\bm a}_1$ & ${\bm a}_1$ & ${\bm a}_1$ & ${\bm a}_3- {\bm a}_2$ \\
$\tilde{\nu}_L$ & $1$ & $1$ & $1$ & $2$ & $2$ & $2$ & $2$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
We are now in a position to analyze zero interface modes for
different $X$. Surfaces in the twisted-kagome octants are in
the same class as those discussed in \sref{ssec:surface-m} and
in \fref{twisted-kagome-surface}. The bonds and sites to be
displaced to convert the standard unit cell to a surface one
are depicted for two cells in
\fref{fig:topological-surface}(d). Table \ref{table1} lists the
surface polarization vector, the surface reciprocal lattice
vector $\bm G$, and the reduced surface index $\tilde{\nu}_L$ (which
because there are no \SSSs equals the number of zero modes per
surface cell) for the seven surface cells shown in
\fref{twisted-kagome-surface}. As required, the count
corresponds to the number obtained by direct evaluation.
\begin{figure}
\centering
\includegraphics{figure-23.pdf}
\caption{Zero modes at domain walls. (a) shows a
lattice with periodic boundary conditions and two domain walls,
the left one between $(.1,.1,.1;0)$ and $(.1,.1,-.1;0)$ with zero modes
and the right one between
$(.1,.1,-.1;0)$ and $(.1,.1,+.1;0)$ with states of self stress.
The zero mode eigenvectors at $q_x=\pi$ are indicated
for the floppy mode (arrows) and the state of self-stress
(red (+) and green (-) thickened bonds). (b) shows the vibrational
mode dispersion as a function of $q_x$. \emph{From reference \cite{KaneLub2014}}}
\label{fig:topological-domain}
\end{figure}
Converting ${\bm R}_T=0$ to ${\bm R}_T \neq 0$ modifies the surface
count by the term ${\bm R}_T \cdot \bm G/(2 \pi )$ in $\tilde{\nu}$. Since
$\bm G$ has opposite signs on opposite parallel surfaces, the
effect is to move zero surface modes between the two surfaces
while keeping the total count fixed as required.
\Fref{fig:topological-surface} plots the real part of the
surface penetration wavenumber, $\kappa$, of three
representative surfaces and orientations of ${\bm R}_T$ relative to
the surface normals. In \fref{fig:topological-surface}(a),
${\bm R}_T$ is parallel to $\bm G$ of a vertical surface, and there
are four rather than the two zero modes of the non-topological
surface of \fref{twisted-kagome-surface} with surface cell $4$.
Thus on the opposite surface (with surface cells $5$ and $6$ in
\fref{twisted-kagome-surface}), there are no zero modes.
Similarly in \fref{fig:topological-surface}(b) with ${\bm R}_T$ at
$30^\circ$ to $\bm G$ of a horizontal surface, there are two zero
surface modes whereas the same surface (cell $1$ in the
${\bm R}_T=0$ lattice of \fref{twisted-kagome-surface}) has only
one. Finally, in \fref{fig:topological-surface}(c), with ${\bm R}_T$ at
$60^\circ$ to $\bm G$ of a vertical surface, there are three
rather than two surface modes per unit cell. A striking feature
of the curves of $\kappa$ versus the surface wavenumber $q$ for
acoustic modes when ${\bm R}_T \neq 0$ is that their
approach to $0$ as $q\rightarrow 0$ is quadratic rather than
linear in $q$. This is a consequence of the modes with $q^2$
dispersion shown in \fref{fig:top-lat-dis}.
\subsubsection{Continuum limit}
As we have seen, there are two related long-wavelength
properties of the topological lattices that differ from their
non-topological counterparts: Their spectrum has peculiar
low-frequency lobes [\fref{fig:top-lat-dis}(c)], and
penetration wavenumbers of their acoustic modes approach zero
with $q$ as $q^2$ rather than as $q$. These properties must be
reflected in the form of the long-wavelength elastic
energy, and indeed they are. For simplicity we focus on states
with $X=(x_1,x_2,x_2;0)$, where $x_2>0$ is fixed and $x_1$ is
allowed to vary. The elastic energy density $f$ can be written
\begin{equation}
f=\tfrac{1}{2}K[(u_{xx}-r_1 u_{yy})^2 + 4 r_4 u_{xy}^2] ,
\label{eq:elastic}
\end{equation}
where $r_{1}\propto x_1$ for small $x_1$, while $r_4>0$ and $K$
are positive and smoothly varying near $x_1=0$. Thus, the
${\bm R}_T=0$ and ${\bm R}_T \neq 0$ sectors are distinguished by the
sign of $r_1$. The Guest mode \cite{GuestHut2003}, for which
$f=0$, corresponds to shape distortions with $u_{xx} = r_1
u_{yy}$ and $u_{xy}=0$. When $r_1>0$, $u_{xx}$ and $u_{yy}$
have the same sign, and the distortion has a negative Poisson
ratio \cite{Lakes1987}, expanding or contracting in orthogonal
directions (a feature shared by the twisted kagome lattice
\cite{SunLub2012}); when $r_1<0$, $u_{xx}$ and $u_{yy}$ have
opposite signs, and the distortion has the more usual positive
Poisson ratio. Finally, when $r_1=0$, uniaxial compression
along $y$ costs no energy. Note that this energy consists of
two independent positive definite quadratic forms as it must in
a periodic isostatic lattice with two \SSSsa.
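To make the sign conventions explicit: along the Guest mode, $u_{xy}=0$ and $u_{xx}=r_1 u_{yy}$, so the linear Poisson ratio is
$$
-\frac{u_{yy}}{u_{xx}}=-\frac{1}{r_1} ,
$$
which is negative (auxetic) for $r_1>0$ and positive for $r_1<0$, as stated above.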
\Fref{Fig:PositivePoisson} shows the effects of evolution of
the nonlinear Guest mode in response to compression along the
$x$-axis of various topological lattices.
\begin{figure}
\centering
\includegraphics{figure-24.pdf}
\caption{Evolution
of the nonlinear Guest mode in topological
lattices in response to uniform compression along the $x$-axis.
(a) $X=\{-0.05,0.05,0.05\}$: Small compression along $x$ produces area change
along with pure shear with a positive Poisson ratio in agreement with \eref{eq:elastic}.
At large compression, there is contraction along both $x$ and $y$
reflecting a nonlinear negative Poisson ratio. (b) $X=\{0.05,-0.05,0.05\}$: Compression
along $x$ now produces a simple shear component with length contraction
in all directions (negative Poisson ratio). (c) $X=\{-0.1,0.025,0.075\}$:
Compression along $x$ now produces some simple shear, but the Poisson ratio
($-u_{yy}/u_{xx}$) remains positive for all compressions.}
\label{Fig:PositivePoisson}
\end{figure}
Expanding $\det {\mathbf C}$ for small ${\bm q}$ provides useful
information about the bulk- and surface-mode structure. To
order $q^3$,
\begin{equation}
\det {\mathbf C} = A[q_x^2+ r_1 q_y^2 + i \gamma (q_x^3 - 3q_x q_y^2)]
+O(q^4),
\label{eq:detQT}
\end{equation}
where $A>0$ and $\gamma> 0$ for small $x_1$. The quadratic
part of the equation follows from \eref{Eq:detCv} and the
elastic matrix, \eref{eq:elas-matrix}, with $K_{xxxx}= K$,
$K_{yyyy} = r_1^2 K$, and $K_{xyxy}= r_4 K$, associated with
the free energy of \eref{eq:elastic}. Long-wavelength zero
modes are solutions of $\det {\mathbf C} = 0$. The quadratic term,
which corresponds to the elastic theory, reveals an important
difference between the bulk acoustic modes of ${\bm R}_T= 0$ and
${\bm R}_T \neq 0$. In the former case, $r_1>0$, $\det {\mathbf C}=0$ only
at ${\bm q}=0$. For $r_1<0$, though, to order $q^2$, $\det {\mathbf C} = 0$
for $q_x = \pm \sqrt{|r_1|} q_y$, so the elastic theory
predicts lines of gapless bulk modes. The degeneracy is lifted
by the $q^3$ term, leading to a $q^2$ dispersion along those
lines, which can be seen by the cross in the density map of
\fref{fig:top-lat-dis}(c).
As we have seen, $\det {\mathbf C} ({\bm q}\rightarrow 0)$ vanishes for
complex wavenumbers associated with zero-frequency Rayleigh
surface waves. Writing ${\bm q} = q_{\perp} \hat{{\bm n}} + q_{||} \hat
z \times \hat {\bm n}$ for a surface whose outward normal $\hat
{\bm n}$ makes an angle $\theta$ with $\hat x$, there is an
$\omega=0$ Rayleigh wave with penetration depth $\kappa^{-1} =|
{\rm Im}\,q_{\perp}|^{-1}$ if ${\rm Im}\, q_{\perp} <0$. To
order $q_{||}^2$ there are two solutions,
\begin{equation}
q_{\perp}^\pm = \rm i \kappa
\frac{\sin \theta \pm \rm i \sqrt{r_1}\cos \theta}{\cos \theta \mp \rm i \sqrt{r_1} \sin \theta} q_{||}
+ \frac{\sqrt{r_1} ( 3 + r_1) \gamma}{2(\cos \theta \pm \rm i \sqrt{r_1} \sin \theta )^3}
q_{||}^2 .
\end{equation}
When $r_1>0$, the linear term is always finite and nonzero, and
${\rm Im}\ q_{\perp}^\pm$ have opposite signs, indicating that
there can be acoustic surface zero modes on all surfaces. These
are the classical Rayleigh waves predicted by the elastic
theory \cite{Landau1986}, with penetration depth
$O(q_{||}^{-1})$. When $r_1<0$, the linear term in $q_{||}$ is
real and $\kappa_r = \mbox{Im}\, q_{\perp}^{\pm} \propto q_{||}^2$.
The number of long wavelength surface zero modes depends on the
angle of the surface. When $|\theta| <\theta_c = \cot^{-1}
\sqrt{|r_1|}$, $\mbox{Im}\,q_{\perp}^{\pm}$ are both positive, and
there are no acoustic surface zero modes. The opposite
surface, $|\theta-\pi|<\theta_c$, has two acoustic surface
modes. For $\theta_c < \theta < \pi - \theta_c$,
$\mbox{Im}\,q_{\perp}^{\pm}$ have opposite sign, so there is one
mode. This is consistent with the results shown in figures
\ref{twisted-kagome-surface} and \ref{fig:topological-surface}.
In the former figure, $r_1>0$, and there are acoustic zero
modes on every surface. In the latter, ${\bm R}_T$ is chosen so
that the coordinate system can always be rotated so that its
direction corresponds to the $-x$ direction. Thus, in
\fref{fig:topological-surface}(a), the surface normal $\bm G$ and
${\bm R}_T$ are parallel, implying by the above considerations that
$\theta = \pi$ and that there should be two acoustic zero
surface modes. In (b), $\theta = 5\pi/6$, and there are also
two zero acoustic modes whereas in (c), $\theta = 2\pi/3$, and
there is only one acoustic mode. This is consistent with the
above long-wavelength analysis if $\pi/6 < \theta_c < \pi/3$.
Finally, for surfaces such as those with cells $1$, $2$, $3$,
and $7$ in \fref{twisted-kagome-surface}, $\theta = \pm\pi/2$
for systems with ${\bm R}_T$ in the $\pm x$ direction, and there
should be one acoustic zero mode on each surface, as there are.
\section{Review and future directions\label{sec:review}}
In this review, we have presented a pedagogical introduction to
Maxwell frames, in free versions of which $N_B = dN - d(d+1)/2$
and in the periodic versions of which $N_B = dN$, and to
related frames on the verge of mechanical collapse. We made
extensive use of the Index theorem \cite{Calladine1978}, which relates
the number of zero modes and \SSSs of a frame to its site
number $N$ and bond number $N_B$, and of a relation between
elastic energy and \SSSs \cite{Pellegrino1993,Goodrich2014} to
frame our discussion of the elasticity and vibrational spectrum
of these frames. We concentrated on Maxwell lattices, whose
sites can be collected into unit cells whose origins lie on
sites in a Bravais lattice, and we paid particular attention to
the surface zero-modes that necessarily arise when periodic
Maxwell lattices are cut to create free surfaces.
All of the physical examples we studied were two-dimensional,
both because important concepts are most easily explored in two
rather than higher dimensions and because there is very little
work of the type we discuss on higher dimensional systems.
There is, nonetheless, interesting work to be done on
three-dimensional systems. The most obvious frames to
investigate are variants of the pyrochlore lattice, a
generalization of the two-dimensional kagome lattice composed
of corner sharing tetrahedra arranged so that lines of bonds
form straight filaments. Under periodic boundary conditions,
this lattice is a Maxwell lattice with $N_B = 6N$. Preliminary
work \cite{StenullLub2014b} indicates that this lattice can be
distorted in much the same way as the kagome lattices to gap
the spectrum and produce topologically distinct states with
protected interfacial zero modes. Three-dimensional Maxwell
lattices other than the simple cubic lattice include the many
zeolite lattices, which like the pyrochlore lattice are based
on corner sharing tetrahedra and which are in a sense $3D$
generalizations of the distorted kagome lattices like the
twisted lattice of \fref{fig:square-kagome}(d) or the more
complex layered lattice of \fref{fig:kagome-variation}. These
lattices all have a ``flexibility window'' that, like the
twisted kagome lattice, allows for easy change in volume
\cite{Sartbaeva2006}. It is likely that judicious
rearrangements of lattice sites in these lattice will yield
different topological states.
We considered here only linearized elasticity and vibrational
structure. Maxwell frames obviously have interesting nonlinear
properties, which in the end are responsible, for example, for
the ability of the kagome lattice to undergo large area change
upon twisting. References \cite{ChenVit2014,VitelliChe2014}
explored the nonlinear properties of the one-dimensional model
\cite{KaneLub2014} discussed in \sref{sec:one-d-model} and
found that under appropriate boundary conditions, the surface
zero-modes become elevated to zero-energy nonlinear topological
modes that propagate freely throughout the bulk. The system is
a nonlinear mechanical conductor whose carriers are nonlinear
solitary waves not captured by the linearized theory. An
obvious question is whether similar behavior will be found in
appropriately designed two- or three-dimensional frames.
As we have seen, Maxwell lattices exhibit a surprisingly rich
variety of vibrational responses. Ideal lattice structures are
generally the exception rather than the rule, and one can ask
what effect defects like dislocations have on the linear and
nonlinear vibrational spectrum of Maxwell lattices. Reference
\cite{PauloseVit2014} studied just this question in dislocated
topological kagome lattices and found that zero modes can be
localized at dislocations as a result of the interplay between
the topological dipole ${\bm R}_T$ of the lattice and the
topological Burgers vector of the dislocations. Thus zero modes
can be localized at a point by dislocations and along a grain
boundary separating two topologically distinct phases.
Localized modes can also be created by enclosing a region of
topological type B in a lattice of topological type A. There is
certainly the potential for interesting and perhaps useful
generalizations of these ideas.
{\bf Acknowledgments} We are grateful for illuminating
discussions with Simon Guest, who brought reference
\cite{Calladine1978} to our attention, members of the
soft-matter theory group at the University of Pennsylvania
including Randall Kamien, Andrea Liu, Carl Goodrich, and Daniel
Sussman, and Bryan Chen and Vincenzo Vitelli of the University
of Leiden. This work was supported in part by NSF under grants
DMR-1104707 and DMR-1120901 (TCL), DMR-1207026 (AS),
DMR-0906175 (CLK), by a Simons Investigator award to CLK, the
Georgia Institute of Technology (AS).
The purpose of this paper is twofold: First we shall give an elementary
construction of bilinear invariant differential operators
on smoothly induced
principal series representations of the Lorentz group
$SO_0(n, 1)$, and second we shall prove the boundedness
of the operators on complementary series, which are
unitarization of principal series.
The study of bilinear invariant differential
operators is of natural interest
in representation theory of Lie algebras
and in quantization. It can be put, roughly speaking, in the
{\it smooth or algebraic}
regime
of representations, namely representations on spaces of smooth
functions
on manifolds or algebraic sums of finite dimensional representations.
A related question in
representation theory of Lie group is to
find decomposition of tensor products
of {\it unitary} representations of Lie groups, i.e., decomposition
of tensor products of pairs of Hilbert spaces under the tensor product actions.
In certain circumstances
there are bilinear differential operators defined on dense
subspaces of the tensor products of Hilbert spaces, and it
is thus an immediate question to find their boundedness.
In this sense the two problems are closely related.
The questions above have been studied for quite some time.
The most
well-known case might be the Rankin-Cohen brackets
on tensor products
of holomorphic discrete series of $\operatorname{SL}(2, \mathbb R)$, which yield
also a decomposition of the tensor products
in the unitary sense.
(There
exist further formal sums of the brackets
producing associative products, or quantizations; see e.g.
\cite{Connes-Mosc-2}.)
In this case the two regimes of representations, {\it smooth} and {\it
unitary},
in principle match each other, and they can be further put
under the holomorphic setup.
The representations we consider here
are principal series and complementary series representations of rank one groups.
While
the existence of bounded intertwining
bilinear differential operators on principal series is not expected, one may still
try to find the operators on
complementary series, which exhibit better
smoothness properties than principal
series. Indeed we shall prove
that this is the case for the real orthogonal group $SO_0(n, 1)$.
We prove also that the natural diagonal restriction $f(x, x)$
of functions $f(x, y)$ defines a bounded linear operator
on tensor products of complementary series of general
rank one classical Lie groups $SO_0(n, 1; \mathbb F)$,
$\mathbb F=\mathbb R, \mathbb C, \mathbb H$, for appropriate parameters,
and prove thus
the existence of discrete component
in the tensor product.
We proceed to explain our main results and related works.
Earlier Ovsienko and Redou
\cite{Ovsienko-Redou} have
found a family of differential operators
on tensor product of spherical principal series
representations $\pi_{\alpha}, \alpha\in \mathbb C$,
of the conformal group $O(n, 1)$;
the representations considered there are defined on spaces
of smooth functions on $\mathbb R^n$
and are viewed as conformal densities.
They found the operators by using an Ansats
expressing the operators as
polynomials of the Laplacian operators $\mathcal L_x$, $\mathcal L_y$,
and the inner product $\nabla_x\cdot \nabla_y$ evaluated
on the diagonal $x=y$.
The same operators are obtained in \cite{Clerc-Beckmann}
as residues of a family of integral
bilinear intertwining operators, which
is based on their earlier works
on trilinear form \cite{Clerc-2011,
Clerc-Orsted, CKOP}. For general rank-one groups the
operators are studied in \cite{BKZ}.
In the present paper we shall give
a direct construction of the operators by using
the Knapp-Stein intertwining operators. While the technical
computations are slightly different from those
in the existing literature, the underlying idea
is still the same, the explicit application of Knapp-Stein
operators here making the computations conceptually clear.
We prove further
that there are finitely many discrete
components of complementary series of the form
$\pi_{\alpha + \beta+2j}$, $j\ge 0$,
in the tensor product $\pi_{\alpha}\otimes
\pi_{\beta}, \alpha, \beta>0$,
of complementary
series when the parameters $\alpha$ and $\beta$
are relatively small (in our parametrization);
see Sections \S\S3-4.
In \S5 we treat the other rank one group $SU(n, 1)$
and $Sp(n, 1)$. The complementary
series representations can be realized as spaces
of distributions on Heisenberg-type groups,
and certain formal invariant bilinear
differential operators have been constructed in \cite{BKZ, Kob-Pev}.
However it seems the above method of estimating
the norm of these operators becomes far more complicated.
We shall still prove the existence
of the first component
$\pi_{\alpha +\beta} $ in the tensor product for smaller
parameter $\alpha, \beta$ using the method of holomorphic
extension.
In the case of $n=2$ with $\mathfrak{so}(2, 1)=\mathfrak{sl}(2,\mathbb
R)$ our method yields
a straightforward
proof for
the appearance of $\pi_{\alpha +\beta}
$ in
$\pi_{\alpha}\otimes \pi_{\beta }$ for small $\alpha, \beta$.
Tensor product
decompositions for representations of $SL(2, \mathbb R)$
had been started
in the work of Pukanszky
\cite{Pukanszky62} and completed by Repka
\cite{Repka78}; see also \cite{Asmuth-Repka}.
These
results combined with the
general theory of Burger-Li-Sarnak \cite{Burger-Li-Sarnak}
have also found applications in automorphic forms \cite{Clozel-ias-park-lec}.
Tensor products of representations of $SL(2, \mathbb C)$ (i.e.,
locally isomorphic to the Lorentz group $SO_0(3, 1)$) have
been studied by Naimark \cite{Naimark-1961}; see also
\cite{Neretin-1995} where some complementary series
representations were constructed using restriction
of holomorphic representations, the same
idea being used in the present paper in the construction
of discrete components for the groups
$SU(n, 1)$ and $Sp(n, 1)$.
I would like to thank Jean-Louis Clerc
for some stimulating discussions. I thank also the anonymous
referee for some expert comments.
\section{Spherical representations of rank one group $G$}
We fix notation and
recall some known results on
induced representations of $G$
and the Knapp-Stein intertwining operator.
We shall use the non-compact realization
of the representations.
We shall be quite brief, and most of the technical
formulas can be found e.g. in
\cite{Johnson-Wallach,CDKR-jga}
where the general case of rank one groups
is studied.
\subsection{Classical rank one Lie groups}
We recall some standard
facts on rank one Lie groups
and fix notation.
Let $G=O(n, 1; \mathbb F)$
for $\mathbb F=\mathbb R, \mathbb C, \mathbb H$
be the classical rank one Lie group
in its standard realization \cite{Johnson-Wallach,
Speh-Zhang}.
Let $\mathfrak g=\mathfrak k +\mathfrak p$ be the Cartan decomposition of the Lie algebra $\mathfrak g$.
We fix an element $H\in \mathfrak p$
and the subspace $\mathfrak a:=\mathbb R H\subset \mathfrak p$
such that $\operatorname{ad}(H)$ has eigenvalues $\pm 2,
\pm 1, 0$.
The root space decomposition of $\mathfrak g$ under $H$
is
$$
\mathfrak g=\mathfrak n_{-2} + \mathfrak n_{-1} +
(\mathfrak a+ \mathfrak m) + \mathfrak n_{1} +\mathfrak n_{2}
$$
with eigenvalues $\pm 2, \pm 1, 0$,
if $\mathbb F=\mathbb C, \mathbb H$,
and with the convention that $\mathfrak n_{2} =0$
if $\mathbb F=\mathbb R$.
Here $\mathfrak m\subset \mathfrak k$, and $\mathfrak a+\mathfrak m$ is the zero root space.
We denote by $\mathfrak n=\mathfrak n_1 \oplus \mathfrak n_2$ the sum of the positive root spaces.
Then
$\mathfrak m+\mathfrak a +\mathfrak n$
is a maximal parabolic subalgebra
of $\mathfrak g$.
Let $\rho$
be the half sum of positive roots. Then
$$
\rho(H)=
\begin{cases}
\frac{n-1}2, &\quad \mathbb F=\mathbb R\\
n, &\quad \mathbb F=\mathbb C\\
2n+1, &\quad \mathbb F=\mathbb H
\end{cases}
$$
and we shall identify $\rho=\rho(H)$.
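For orientation, these values follow from the standard root multiplicities, which we record here as a check: $\dim\mathfrak n_1=(n-1)\dim_{\mathbb R}\mathbb F$ and $\dim\mathfrak n_2=\dim_{\mathbb R}\mathbb F-1$, so that
$$
\rho(H)=\tfrac12\left(\dim\mathfrak n_1+2\dim\mathfrak n_2\right)
=\begin{cases}
\tfrac12(n-1), &\quad \mathbb F=\mathbb R,\\
\tfrac12\bigl(2(n-1)+2\bigr)=n, &\quad \mathbb F=\mathbb C,\\
\tfrac12\bigl(4(n-1)+6\bigr)=2n+1, &\quad \mathbb F=\mathbb H.
\end{cases}
$$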
\subsection{Spherical representations and complementary series
for $G=SO_0(n, 1; \mathbb F)$}
Denote $M, A, N$
the corresponding
subgroups of $G$ with Lie algebras
$\mathfrak m, \mathfrak a, \mathfrak n$, and $P=MAN$ the parabolic subgroup.
For $\mu\in \mathbb C$
let $\pi_{\mu}^\infty$
be the
induced {\it smooth}
representation
of $G$ from the character $
e^{-\mu}:
me^{tH}
n \in P=MAN\mapsto e^{-\mu t}$ consisting of $C^\infty$-functions $f$ on $G$
such that
\begin{equation}
\label{eq:ind-norm}
f(g\, me^{tH} n)=
e^{-\mu t}
f(g), \qquad me^{tH} n\in MAN.
\end{equation}
In particular $f$ are determined by their
restriction on $K$ and are identified further
as smooth functions on $K/M$. We have $\pi_{\mu}^\infty
=C^\infty(K/M)=C^\infty(S)$ as vector spaces. Restricting
smooth functions in
$\pi_{\mu}^\infty$
to $N^-$ results in an injective
map to a subspace of $C^{\infty}(N^-)
=C^{\infty}(\mathbb R^{n-1})$. We shall fix this realization
of $\pi_{\mu}^\infty$. (Indeed it is only a subspace
of $C^{\infty}(N^-)$ as some matching conditions at infinity are needed.)
The explicit formulas for $\pi_\nu(g)$ can
be found in \cite{Johnson-Wallach} in the compact
picture and in
\cite{Speh-Venk-2} for the non-compact picture. We shall only need
the formula for the real group
$G=SO_0(n, 1)$.
$G$ is generated by
the parabolic group
$ MAN^-$ and
the Weyl group element
$w$, which acts on $\mathbb R^{n-1}$
by the inversion $w(x)=-\frac{1}{x}:=
-\frac{x}{|x|^2}$.
Their actions on $\pi_{\nu}^\infty$
are given by
$$
\pi_{\nu}(g)f(x)= e^{-t\nu} f(e^{t}m^{-1}(x-x_0)),
\quad (m, e^{tH}, x_0)
=me^{tH} x_0
\in MAN^-, \quad N^{-}=\mathbb R^{n-1},
$$
and
$$
\pi_{\nu}(w)f(x)= \Vert x\Vert^{-2\nu }
f(-\frac{x}{\Vert x\Vert^2}).
$$
Note also that the Jacobians of
$g=(m, e^{tH}, x_0)$ and of the Weyl group element $w$
on $N^-=\mathbb R^{n-1}$
are given by
\begin{equation}
\label{eq:jac}
J_g(x)=e^{t(n-1)},
\quad
J_w(x)=\frac{1}{|x|^{2(n-1)}}.
\end{equation}
We now return to $G=SO_0(n, 1; \mathbb F)$.
The representation $\pi_{\mu}(g), g\in G$,
$\mu\in (\rho +i\mathbb R)$ is already unitary
for the natural unitary norm in $L^2(K/M)$. However
for $\mu\in (0, 2\rho)$
a different $\mathfrak g$-invariant inner product on the space of $K$-finite
vectors can possibly be defined and completed to a unitary representation of
$G$, the {\it complementary series}; see
\cite{Johnson-Wallach}. The precise parameter is given by
\begin{equation} \label{mu-fixed}
\mu\in \begin{cases}
(0, 2\rho), \mathbb F=\mathbb R, \mathbb C
\\
(2, 2\rho-2), \mathbb F=\mathbb H.
\end{cases}
\end{equation}
We shall
denote the corresponding representation
still by $\pi_{\mu}$, and shall
use its non-compact realization for the real case $G=SO_0(n,
1)$, allowing us to find (generically) more than one
discrete components in the tensor product decomposition in \S4.
\subsection{Realization
of complementary series
for $G=SO_0(n, 1)$ on $\mathbb R^{n-1}$}
The unitarization of the complementary series
is obtained via the Knapp-Stein intertwining operator,
defined preliminarily on $K$-finite vectors (which
can be obtained from $K$-finite vectors on $K/M$ via Cayley transform),
\begin{equation}
\label{eq:j-n-c}
J_{\mu} f(x)
=\int_{\mathbb R^{n-1}} K_{\mu}(x, y)
f(y) dy,
\end{equation}
where
\begin{equation}
\label{eq:j-n-c-k}
K_{\mu}(x, y) = C_{\mu}
\frac1 {|x-y|^{2\mu}}, \quad
C_\mu
=\frac
{\Gamma(\rho-\frac{\mu}2)
\Gamma(\rho-\frac{\mu}2 +\frac 12)}
{\Gamma(\frac{n}2)
\Gamma(\rho-\mu)}
=\frac
{2^{\mu-2\rho +1} \sqrt \pi
\Gamma(2\rho -\mu)
}
{\Gamma(\frac{n}2)
\Gamma(\rho-\mu)}.
\end{equation}
(The normalization is chosen here so that in the compact picture
$J_\mu 1_S=1_S,$ where $1_S$ is the constant function on $S$ viewed
as a function on $G$ restricted to $\mathbb R^{n-1}.$)
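The second expression for $C_\mu$ is the Legendre duplication formula applied to the first; a quick numerical sanity check (the values of $n$ and $\mu$ below are illustrative, with $\mu$ inside the allowed range) is the following sketch:
\begin{verbatim}
from math import gamma, sqrt, pi

n, mu = 5, 0.7                 # illustrative values, 0 < mu < 2*rho
rho = (n - 1)/2
lhs = gamma(rho - mu/2)*gamma(rho - mu/2 + 0.5)
rhs = 2**(mu - 2*rho + 1)*sqrt(pi)*gamma(2*rho - mu)
print(lhs, rhs)                # the two values agree to machine precision
\end{verbatim}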
Then $J_\mu$ is a $G$-intertwining operator
$$
\boxed{J_\mu: \pi_{\widetilde \mu}^\infty
\to \pi_{\mu}^\infty, \,
\quad \widetilde \mu :={2\rho-\mu}}
$$
for $\mu <<0$.
It has holomorphic continuation
to the whole complex plane, and in particular holomorphic and non-zero in the two symmetric
strips around $\Re \mu=\rho $,
\begin{equation}
\label{eq:hol-str}
\{\mu; 0<\Re\mu < \rho\},
\quad
\{\mu; \rho <\Re\mu < 2\rho
\}.
\end{equation}
The formal intertwining property
can be proved by using the following transformation
rule of $K_\mu$,
$$
K_\mu(gz, gw)= J_g(z)^{-\frac{\mu}{n-1}}
K_\mu(z, w)\,
J_g(w)^{-\frac{\mu}{n-1}}
$$
where $J_g$ is the Jacobian of the action of $g\in G$ on
$N^-=\mathbb R^{n-1}$. The holomorphic
continuation can also be done using the
identity (\ref{eq:L-on-K}) below.
The smooth case is also consequence of the general
theory
of intertwining operators \cite{Vogan-Wallach}.
The inner product
\begin{equation}
\label{eq:j-n-c-norm}
(f_1, f_2)_\mu=(J_{\widetilde \mu}f_1, f_2)_{L^2(\mathbb R^{n-1})}
\end{equation}
for $f_1, f_2\in C^\infty_0(\mathbb R^{n-1})$
is a pre-Hilbert norm, and is invariant under $g\in G$ sufficiently
close
to the identity (depending on
$f_1, f_2$). The completion defines the
complementary series, $\mu\in (0, 2\rho)$.
We shall use its description
in terms of the Fourier transform $f\mapsto \mathcal Ff$.
The space $\pi_\mu$ is the completion
of $C_0^\infty(\mathbb R^{n-1})$
under the (equivalent) norm
\begin{equation}
\label{eq:Four-norm}
\Vert
f\Vert_{\mu}^2
=\int_{\mathbb R^{n-1}}
|\mathcal F f(\xi)|^2 |\xi|^{n-1-2\mu} d\xi
=\Vert
\mathcal F f(\cdot)
|\cdot |^{\rho-\mu}
\Vert^2_{
L^2(
\mathbb R^{n-1}
)
},
\end{equation}
for $0<\mu <2\rho$.
See e.g. \cite{Speh-Venk-2}.
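For the reader's convenience we sketch where (\ref{eq:Four-norm}) comes from. As a tempered distribution on $\mathbb R^{n-1}$, the Riesz kernel $|x|^{-2\widetilde\mu}$ has Fourier transform a constant multiple of $|\xi|^{2\widetilde\mu-(n-1)}$, so by the Plancherel theorem
$$
(f, f)_\mu=(J_{\widetilde\mu}f, f)_{L^2}
= c_\mu\int_{\mathbb R^{n-1}} |\mathcal F f(\xi)|^2\, |\xi|^{2\widetilde\mu-(n-1)}\, d\xi
= c_\mu\int_{\mathbb R^{n-1}} |\mathcal F f(\xi)|^2\, |\xi|^{\,n-1-2\mu}\, d\xi ,
$$
since $2\widetilde\mu-(n-1)=n-1-2\mu$; one checks that the constant $c_\mu$ is positive precisely in the complementary range $0<\mu<2\rho$, whence the equivalence of the two norms.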
\subsection{Complementary series
for $G=SU(n, 1), Sp(n, 1)
$
and their holomorphic extensions}
We shall use a different method for the cases
$G=SU(n, 1), Sp(n, 1)$. The method is based, roughly
speaking, on holomorphic extension; no explicit
realization will be needed.
We consider a Hermitian Lie group
$G_1$ (called overgroup in some Russian literature) containing $G$
as a symmetric subgroup such that
$G/K$ is a real form of the
Hermitian symmetric space $G_1/K_1$; see \cite{Dijk-Hille-jfa, Speh-Zhang}.
More precisely let
$$
G_1=\begin{cases}
SU(n, 1)\times SU(n, 1), \mathbb F=\mathbb C\\
SU(2n, 2),
\mathbb F=\mathbb H.
\end{cases}
$$
with $G=SU(n, 1), Sp(n, 1)$ being
realized, respectively, as the
diagonal subgroup of $G_1$
and as complex
transformations via the standard identification
$\mathbb H=\mathbb C^2$.
The holomorphic discrete series
of $G_1$ can be realized on
the space of holomorphic functions
on $D_1$. To fix notation we let
$$
V_1=\begin{cases}
\mathbb C^n \oplus \overline{\mathbb C^n} & \mathbb F=\mathbb C\\
\\
M_{2n, 2}(\mathbb C) & \mathbb F=\mathbb H
\end{cases},
$$
and the space
$D_1=G_1/K_1$
is realized as a bounded symmetric domain in $V_1$,
$$
D_1=\begin{cases}
B^n \times \overline{B^n} & \mathbb F=\mathbb C
\\
\{Z\in M_{2n, 2}(\mathbb C); Z^*Z<I\} &
\mathbb F=\mathbb H
\end{cases}.
$$
Let
$\mathcal H_\nu(D_1)$
be the space
of holomorphic functions on $D_1$ with reproducing kernel
$h(z, w)^{-\nu}$ for $\nu$ sufficiently large,
where $$
h(z, w)=
\begin{cases}
(1-\langle z_1, w_1\rangle )
(1-\langle w_2, z_2\rangle )
& \mathbb F=\mathbb C
\\
\operatorname{det}(1- w^\ast z) & \mathbb F=\mathbb H
\end{cases}
$$
It is now well-known \cite{FK} that if $\nu$ is in the set
$$
\begin{cases} (0, \infty) & \mathbb F
=\mathbb R,
\mathbb C \\
(1, \infty) & \mathbb F= \mathbb H
\end{cases}
$$
then the kernel $h(z,w)^{-\nu}$
is positive definite and it defines
a unitary (projective) representation
of $G_1$.
We denote this representation by
$(\mathcal H_\nu(D_1), \tau_\nu, G_1)$.
We recall the following theorem
\cite{Dijk-Hille-jfa}; see further \cite{Speh-Zhang}
for the present reformulation.
\begin{theo+}
The complementary series
$(\pi_{\mu}, G)$ appears as
a discrete summand
in
$(\mathcal H_\nu(D_1), \tau_\nu, G_1)$ restricted to $G$ if $\nu$ and $\mu$ are
related by
$$2\nu
=
\begin{cases}
\mu, \,\, \mu \in (0, n), \quad & \mathbb F=\mathbb C \\
\mu, \,\, \mu\in (2, {2n-1}), \quad & \mathbb F=\mathbb H.
\end{cases}
$$
\end{theo+}
Note that the range of $\mu$ is, disregarding the Weyl group symmetry,
precisely the whole range of the complementary series representations.
In other words, any complementary series of $G$ is a discrete component
in the holomorphic representation of $G_1$.
\section{Invariant bilinear differential operators for general
spherical series representations, $G=SO_0(n, 1)$}
We denote by $\pi_{\alpha}^\infty
\otimes
\pi_{\beta}^\infty$ the induced smooth representation
of $G\times G$ from the parabolic subgroup
$P\times P$ and the character $e^{-\alpha}\times
e^{-\beta}$.
The group $G$ is viewed as the diagonal subgroup
of $G\times G$.
\begin{theo+} For any nonnegative integer $j\ge 0$
there exists
a $G$-intertwining differential operator
$\mathcal D_{\alpha, \beta, j}
$ of degree $2j$
meromorphic in $(\alpha, \beta)\in \mathbb C^2$,
$$
\mathcal D_{\alpha, \beta, j} :
\pi_{\alpha}^\infty
\otimes
\pi_{\beta}^\infty
\to \pi_{\alpha+\beta +2j}^\infty.
$$
The only possible
poles of $
\mathcal D_{\alpha, \beta, j}
$ appear
when $\alpha$ or $\beta\in \Lambda_j$, where
$$
\Lambda_j=\{0, -1, \dots, -j+1\} \cup
\left( \rho -1 +\{0, -1, \dots, -j+2\}
\right).
$$
\end{theo+}
The proof will be divided into a few elementary Lemmas.
Let $S_{\alpha, \beta, j}
(x, y; z, w) $ be the kernel
\begin{equation}
\label{eq:S-ker}
S_{\alpha, \beta, j}
(x, y; z, w)
=
\left(
\frac{|(x-z)-(y-w)|^2}
{|x-z|^{2}
|y-w|^{2}
}
\right)^j
\frac{1}
{|x-z|^{2\alpha}
|y-w|^{2\beta}
},
\end{equation}
and write for simplicity
$$
S_{\alpha, \beta, j}(x; z, w)=S_{\alpha, \beta, j}(x, x; z, w),
\quad S_{\alpha, \beta, j}(x, y)=S_{\alpha, \beta, j}(x, y; 0, 0)
$$
for the restriction to the diagonal $x=y$ and for the evaluation
at $z=w=0$, respectively.
\begin{lemm+} The integral operator
$$
T_{j}f(x)=T_{\alpha, \beta, j}f(x):
=
C_\alpha
C_\beta
\int_{\mathbb R^{2(n-1)}}
S_{\alpha, \beta, j}(x; z, w)
f(z, w) dz dw
$$
defines an intertwining operator
$$
\pi_{\widetilde \alpha}^\infty
\otimes \pi_{\widetilde \beta}^\infty
\to \pi_{\alpha+\beta+2j}^\infty
$$
\end{lemm+}
\begin{proof}
Recall the group $G$ is generated by $P$
and $w$
as a consequence of the
Bruhat decomposition
\cite[Theorem 1.4, Ch.~IX]{He2}.
The formal intertwining property
follows directly from
a change of variables $(x, y)\mapsto (gx, gy)$
for $g\in P$ and $g=w$
along with the formula
(\ref{eq:jac})
for the Jacobians.
To prove the meromorphic
continuation in $\alpha$ and $\beta$
we observe that changing
$(x, y)$ to $(x-z, y-z)$ we need only
to prove that the integral
$$
\int_{\mathbb R^{2(n-1)}}
\frac{|x-y|^{2j}
}
{|x|^{2j}
|y|^{2j}
}
\frac{1}
{|x|^{2\alpha}
|y|^{2\beta}
}
f(x, y)
dx dy
$$
is
meromorphic in $(\alpha, \beta)$. But this
is just up to normalization constants
the integral $(J_{\alpha +j} \otimes
J_{\beta +j})(F)$, $F(x, y)=
|x-y|^{2j} f(x, y) $ and thus has the
continuation.
\end{proof}
In the compact-realization this operator is
$$
T_j f(x)=
\int_{S\times S}
\left(
\frac{
1-\langle z, w\rangle
}
{
(1-\langle x, z\rangle)
(1-\langle x, w\rangle)
}
\right)^j
\frac{C_\alpha
C_\beta }
{
(1-\langle x, z\rangle)^\alpha
(1-\langle x, w\rangle)^\beta
}
f(z, w)dz dw.$$
That the integral is well defined for $\alpha, \beta \ll 0$
can also be deduced from this formula.
Next we need some known Bernstein--Sato type identities
for the Laplacian operator $\mathcal L=\partial_1^2
+\cdots +\partial_{n-1}^2$ on $\mathbb R^{n-1}$ acting on $|x|^{-2\alpha}$.
Recall the Pochhammer symbol defined by $(\alpha)_j=\alpha
(\alpha+1)\cdots (\alpha +j-1)$.
\begin{lemm+}
The following differentiation formula holds
\begin{equation}
\label{eq:L-on-K}
\mathcal L^j |x|^{-2\alpha}
=2^{2j}(\alpha)_j
(\alpha+1-\rho)_j
|x|^{-2(\alpha+j)}, \quad x\ne 0.
\end{equation}
\end{lemm+}
We define a family of differential
operators of constant coefficients on $C^\infty(\mathbb R^{2(n-1)})$
by $$
\mathcal M_{\alpha, \beta, 0}=I,\quad \mathcal M_{\alpha, \beta, 1}=\nabla_x \cdot
\nabla_y$$
and
$$
\mathcal M_{\alpha, \beta, j+1}
=(\nabla_x \cdot \nabla_y)
\mathcal M_{\alpha, \beta, j}
-\frac {j( n-1-3j-2 \alpha -2\beta )
}
{
(\alpha +1-\rho)
(\beta +1-\rho)
}
\mathcal M_{ \alpha+1, \beta+1, j-1
}
\mathcal L_x
\mathcal L_y
$$
It follows from the construction that the only possible poles of
$\mathcal M_{\alpha, \beta, j}$, $j\ge 2$, appear
when $\alpha $ or $\beta$ is in
$$
\{\rho -i;\ i=1, \cdots, j-1\}.
$$
\begin{lemm+}
The following formula holds for all $(\alpha, \beta)\in \mathbb C^2$
and $m\in \mathbb N$,
\begin{equation}
\label{eq:M-on-kernel}
\mathcal M_{\alpha, \beta, m}
S_{\alpha, \beta}(x, y; z, w)
= 2^{2m}(\alpha)_m (\beta)_m
\left(
\frac{
\langle x-z, y-w\rangle
}
{
|x-z|^{2}
|y-w|^{2}
}
\right)^m
S_{\alpha, \beta}(x, y; z, w)
\end{equation}
\end{lemm+}
\begin{proof} By invariance
we can assume $z=w=0$.
We prove the identity using induction.
It is trivially true for $m=0$.
Assuming the identity holds for $0\le m\le j$
for all $\alpha, \beta$
we perform the differentiation
$\nabla_x \cdot \nabla_y$ on the identity with $m=j$. We have
\begin{equation}
\label{eq:induct}
\nabla_x \cdot \nabla_y \mathcal M_{\alpha, \beta, j}
S_{\alpha, \beta}(x, y)
=2^{2j}(\alpha)_j (\beta)_j
(I +II)
\end{equation}
a sum of two terms, with the first term
\begin{equation*}
\begin{split}
I&=2^{2j}(\alpha)_j (\beta)_j
2^2(\alpha +j)(\beta +j)
\left(
\frac{
\langle x, y\rangle
}
{
|x|^{2}
|y|^{2}
}
\right)^{j+1}
S_{\alpha, \beta}(x, y)\\
&=
2^{2(j+1)}(\alpha)_{j+1} (\beta)_{j+1}
\left(
\frac{
\langle x, y\rangle
}
{
|x|^{2}
|y|^{2}
}
\right)^{j+1}
S_{\alpha, \beta}(x, y)
\end{split}
\end{equation*}
being the RHS of (\ref{eq:M-on-kernel}) for $m=j+1$,
and
$$
II=
j (n-1-3j-2\alpha -2\beta)
\left(
\frac{
\langle x, y\rangle
}
{
|x|^{2}
|y|^{2}
}
\right)^{j-1}
S_{\alpha+1, \beta+1}(x, y).
$$
We treat the second term using the induction hypothesis for $m=j-1$
with $(\alpha,\beta)
$ being replaced
by $(\alpha+1,\beta+1)$,
$$
2^{2(j-1)}
(\alpha+1) _{j-1}
(\beta+1) _{j-1}
\left(
\frac{
\langle x, y\rangle
}
{
|x|^{2}
|y|^{2}
}
\right)^{j-1}
S_{\alpha+1, \beta+1}(x, y)
=\mathcal M_{\alpha+1,\beta+1,
j-1
}
S_{\alpha+1, \beta+1}(x, y),
$$
which is furthermore
$$
\frac 1{2^2\alpha (\alpha +\rho-1)
\beta (\beta +\rho-1)
}
\mathcal M_{
\alpha+1,\beta+1,
j-1}
\mathcal L_x
\mathcal L_y
S_{\alpha, \beta}(x, y).
$$
Rewriting
(\ref{eq:induct}) we find
$$
(\nabla_x \cdot \nabla_y \mathcal M_{
\alpha, \beta, j}
-\frac {j(n-1-3j-2\alpha -2\beta)
}
{ (\alpha +\rho-1)
(\beta +\rho-1)
}
\mathcal M_{
\alpha+1,\beta+1,
j-1}
\mathcal L_x
\mathcal L_y)
S_{\alpha, \beta}(x, y),
$$
which is
$\mathcal M_{
\alpha, \beta, j+1
}
S_{\alpha, \beta}(x, y) $ by the definition. This finishes the proof.
\end{proof}
Combining the two Lemmas we have
\begin{equation*}
\mathcal M_{\alpha+j, \beta+i, k}
\mathcal L_x^j
\mathcal L_y^i
\frac{1}
{|x|^{2\alpha}
|y|^{2\beta}
}
=c_{i, j, k}(\alpha, \beta)
\left(
\frac{\langle x, y\rangle}
{
|x|^{2}
|y|^{2}
}
\right)^k
\frac{1}
{|x|^{2\alpha+2j}
|y|^{2\beta+2i}
},
\end{equation*}
where
\begin{equation}
c_{i, j, k}(\alpha, \beta)
= 2^{2k+2j+2i}
(\alpha)_{j+k}
(\alpha+1-\rho)_j
(\beta)_{i+k}
(\beta+1-\rho)_i.
\end{equation}
Here we have used the fact that
$$
(\gamma)_{j}
(\gamma+j)_{k}
=(\gamma)_{j+k}.
$$
By translation invariance
we have
\begin{equation}
\label{eq:M-L-on-kernel}
\begin{split}
&\quad\,\mathcal M_{
\alpha+j, \beta+i,
k}
\mathcal L_x^j
\mathcal L_y^i
\frac{1}
{|x-z|^{2\alpha}
|y-w|^{2\beta}
}
\\
&=c_{i, j, k}(\alpha, \beta)
\left(
\frac{\langle x-z, y-w\rangle}
{
|x-z|^{2}
|y-w|^{2}
}
\right)^k
\frac{1}
{|x-z|^{2\alpha+2j}
|y-w|^{2\beta+2i}
},
\end{split}
\end{equation}
We now prove Theorem 3.1.
\begin{proof}
The operator
$$
T_{\alpha, \beta, j}
(J_{\alpha}\otimes
J_{\beta}):
\pi_{\widetilde \alpha}^\infty
\otimes \pi_{\widetilde \beta}^\infty
\to \pi_{\alpha+\beta+2j}^\infty
$$
is an intertwining operator by Lemma 3.2. We prove
it is a differential operator.
The idea is
to differentiate
the identity $f=
(J_\alpha\otimes
J_\beta)
(J_{\widetilde \alpha}\otimes
J_{\widetilde \beta}) f$.
We shall perform formal computations
on the integral first and justify them
in the end.
Let $f\in \pi_{\alpha}^\infty\otimes
\pi_{\beta}^\infty$ and $
g=
J_{\widetilde \alpha}\otimes
J_{\widetilde \beta} f.
$
We denote
\begin{equation*}
\begin{split}
\label{eq:2}
&\quad\, \mathcal E_{\alpha, \beta, m}
f(x, y)\\
&=
\sum_{i+j +k=m}
\varepsilon_{i, j, k}(\alpha, \beta)
\mathcal
M_{\alpha +j,
\beta +i, k}
\mathcal L_x^{j}
\mathcal L_y^{i} f(x, y)
\end{split}
\end{equation*}
and
\begin{equation}
\label{eq:D-N}
\mathcal D_{\alpha, \beta, m}f(x)=
\mathcal E_{\alpha, \beta, m}f{|}_{x=y},
\end{equation}
for $f\in C^\infty(\mathbb R^{2(n-1)})$,
where
$$
\varepsilon_{i, j, k}(\alpha, \beta)
:=\binom{m}{i, j, k}
\frac{(-2)^{k}}
{c_{i, j, k}(\alpha, \beta)
}.
$$
We claim
that
\begin{equation}
\label{eq:D-N-TJ}
\mathcal D_{\alpha, \beta, j}
f=
T_{\alpha, \beta, j}
(J_{\alpha}\otimes
J_{\beta})f, \quad f\in \pi_{\widetilde \alpha}^\infty
\otimes \pi_{\widetilde \beta}^\infty
\end{equation}
proving the intertwining property of the differential
operator $ \mathcal D_{\alpha, \beta, m} $.
The binomial expansion of
$S(x, y; z, w)$
reads as follows
\begin{equation*}
\begin{split}
S(x, y; z, w)
&=\left(\frac{
|x-z|^2
+|y-w|^2
-2
\langle x-z, y-w\rangle}
{|x-z|^2 |y-w|^2
}
\right)^m
\frac{1}
{|x-z|^{2\alpha}
|y-w|^{2\beta}
}
\\
&=\sum_{i+j +k=m}
\binom{m}{i, j, k}
{(-2)^{k}}
\left(\frac
{\langle x-z, y-w\rangle}
{
|x-z|^{2}
|y-w|^{2}
}
\right)^{k}
\frac{1}
{
|x-z|^{2j+2\alpha}
|y-w|^{2i+2\beta}
}
\end{split}
\end{equation*}
Summing the formula
(\ref{eq:M-L-on-kernel})
over $(i, j, k)$ we have then
$$
\mathcal E_{\alpha, \beta, m}
\frac{1}
{|x-z|^{2\alpha}
|y-w|^{2\beta}
}
=S(x, y; z, w)
$$
which further implies that
\begin{equation}
\label{eq:d-n-a-b}
\mathcal D_{\alpha, \beta, m}
\frac{1}
{|x-z|^{2\alpha}
|y-w|^{2\beta}
}
=S(x, x; z, w) =S(x; z, w)
\end{equation}
The identity $f=(J_{ \alpha}\otimes
J_{ \beta})
(J_{\widetilde \alpha}\otimes
J_{\widetilde \beta})f
=(J_{ \alpha}\otimes
J_{ \beta})
g$ reads
$$
f(x, y)=(J_{ \alpha}\otimes
J_{ \beta})g
=C_{\alpha}
C_{\beta}
\int_{\mathbb R^{2(n-1)}}
\frac 1{|x-z|^{2\alpha}
|y-w|^{2\beta}} g(z, w) dz dw.
$$
We perform the differentiation
$\mathcal D_{\alpha, \beta, j}$
on this identity
and find
$$
\mathcal D_{\alpha, \beta, j}f(x)=
C_{\alpha}
C_{\beta}
\int_{\mathbb R^{2(n-1)}}
S(x; z, w) g(z, w) dz dw
=T_j g(x) =T_j
J_{\widetilde \alpha}\otimes
J_{\widetilde \beta} f(x),
$$
proving (\ref{eq:D-N-TJ}).
Finally the differentiation
under integral sign can be justified by taking first
$\alpha, \beta \ll 0$
and $\alpha\notin \mathbb Z_-, \beta\notin \mathbb Z_-$,
with $\widetilde \alpha \gg 0,
\widetilde \beta \gg 0$, in which case Lemma 2.1 implies that all integrals
involved are absolutely convergent. The rest is obtained
by analytic continuation.
\end{proof}
\section{Finitely many discrete components in the tensor product
$\pi_{\alpha}\otimes
\pi_{\beta}$, $G=SO_0(n, 1)$}
We apply the intertwining operators $\mathcal D_j=\mathcal D_{\alpha,
\beta, j}$
to
the study of appearance of discrete components
in the tensor product $\pi_{\alpha} \otimes
\pi_{\beta} $
of complementary series.
For $\alpha, \beta\in (0, \rho)$ the tensor product
$\pi_{\alpha} \otimes
\pi_{\beta} $ in the non-compact picture
is the completion of $C^\infty_0(\mathbb R^{2(n-1)})$
with norm
$$
\Vert
f\Vert_{\alpha\otimes \beta}^2
:=\int_{\mathbb R^{2(n-1)}}
|\mathcal F f(\xi, \eta)
|^2 |\xi|^{n-2\alpha}
|\eta|^{n-2\beta} d\xi d\eta;
$$
cf. (\ref{eq:Four-norm}).
\begin{theo+} Suppose $\alpha>0, \beta> 0$ and $j\in \mathbb N$ satisfy
$0<\alpha<\rho$,
$0<\beta<\rho$, $\alpha +\beta +2j <\rho$. Then the
intertwining operator $\mathcal D_{\alpha, \beta, j}$ is
a non-zero bounded intertwining operator
$\pi_{\alpha} \otimes
\pi_{\beta}
\to \pi_{\alpha+\beta +2j}$, and
thus
$\pi_{\alpha+\beta +2j}$
appears in the tensor product
$\pi_{\alpha} \otimes
\pi_{\beta}
$ as an irreducible component.
\end{theo+}
\begin{proof} Notice that for $\alpha, \beta$ and $j$ as above
the operator $\mathcal D_j$ is well-defined,
and
$\pi_{\alpha}$, $\pi_{\beta }$ and $\pi_{\alpha+\beta +2j}$ are
unitary representations.
Recall also the notation $\tilde \alpha
=2\rho -\alpha=n-1-\alpha$
in \S2.3 and the unitary
norm (2.8). Let $f\in
C^\infty_0(\mathbb R^{2(n-1)})
\subset
\pi_\alpha\otimes
\pi_\beta
$.
We claim that
$$ \Vert
\mathcal D_j
f
\Vert_{\alpha+ \beta+2j}^2
\le C \Vert
f\Vert_{\alpha\otimes \beta}^2.
$$
Thus $\mathcal D_j $
defines a non-zero intertwining operator from
$\pi_{\alpha} \otimes
\pi_{\beta} $ into $
\pi_{\alpha+\beta +2j}$, proving our theorem.
Using Fourier inversion we have
$$
f(x, y)=C
\int_{\mathbb R^{2(n-1)}}
e^{i
\langle
x, \xi
\rangle
+i
\langle
y, \eta
\rangle
}
\mathcal F f(\xi, \eta)
d\xi d\eta
$$
where $C$ is a normalization constant. We write
the differential operator $\mathcal E_{\alpha, \beta, j}$
in the proof of Theorem 3.1
as $Q(\mathcal L_x, \mathcal L_y,
\nabla_x\cdot
\nabla_y
)$
where $Q$ is a homogeneous polynomial of three variables
of degree $j$.
Thus
$\mathcal D_jf(x)
=
Q(\mathcal L_x, \mathcal L_y,
\nabla_x\cdot
\nabla_y
)
f(x, y)|_{x=y}.
$
Its action on
the inversion formula results in
\begin{equation*}
\begin{split}
\mathcal D_j
f(x)&=
C
\int_{\mathbb R^{2(n-1)}}
e^{i
\langle
x, \xi+ \eta
\rangle }
Q(-|\xi|^2, -|\eta|^2, -\langle
\xi, \eta \rangle)
\mathcal F f(\xi, \eta)
d\xi d\eta\\
&=C\int_{\mathbb R^{n-1}}
e^{i(x, \zeta)}
\int_{\mathbb R^{n-1}}
Q(-|\zeta-\eta|^2, -|\eta|^2, -\langle
\zeta-\eta, \eta \rangle)
\mathcal F f(\zeta-\eta, \eta)d\eta
d\zeta.
\end{split}
\end{equation*}
That is
\begin{equation*}
\mathcal F(\mathcal D_j
f)(\zeta)=C
\int_{\mathbb R^{n-1}}
Q(-|\zeta-\eta|^2, -|\eta|^2, -\langle
\zeta-\eta, \eta \rangle)
\mathcal F f(\zeta-\eta, \eta)d\eta,
\end{equation*}
and furthermore
\begin{equation*}
| \mathcal F(\mathcal D_j
f)(\zeta)|^2\le
A(\zeta)
\int_{\mathbb R^{n-1}}
|\mathcal F f(\zeta-\eta, \eta)|^2
|\zeta-\eta|^{2\widetilde\alpha}
|\eta|^{2\widetilde\beta}
d\eta
\end{equation*}
with
$$
A(\zeta):
=C\int_{\mathbb R^{n-1}}
|Q(-|\zeta-\eta|^2, -|\eta|^2, -\langle
\zeta-\eta, \eta \rangle)|^2
|\zeta-\eta|^{-2\widetilde\alpha}
|\eta|^{-2\widetilde\beta}
d\eta.
$$
To estimate the integral $A(\zeta)$ we write $\zeta=|\zeta|u$, $|u|=1$,
and perform the change of variables $\eta=|\zeta|v$. This gives
$$
A(\zeta)
=C|\zeta|^{4j-
2\widetilde\alpha
-2\widetilde\beta +(n-1)
}
\int_{\mathbb R^{n-1}}
|Q(-|u-v|^2, -|v|^2, -\langle
u-v, v \rangle)|^2
|u-v|^{-2\widetilde\alpha}
|v|^{-2\widetilde\beta}
dv
$$
and the integral is convergent; indeed it is
locally integrable near $v=0$ and $v=u$
for $2\widetilde\alpha, 2\widetilde\beta <n-1$, and
it is integrable at infinity
since the integrand is dominated by
$$
(1+|v|^2)^{ -(\widetilde\alpha
+\widetilde\beta-2j)}
$$
with $\widetilde\alpha
+\widetilde\beta-2j=n-1+ (n-1-\alpha-\beta-2j) >n-1$.
Thus
$$
| \mathcal F(\mathcal D_j
f)(\zeta)|^2
|\zeta|^{-4j+
2\widetilde\alpha
+2\widetilde\beta -n
}\le C
\int_{\mathbb R^{n-1}}
|\mathcal F f(\zeta-\eta, \eta) |^2
|\zeta-\eta|^{2\widetilde\alpha}
|\eta|^{2\widetilde\beta}
d\eta,
$$
and its integration over $\zeta$ gives
$$
\int_{\mathbb R^{n-1}}
| \mathcal F(\mathcal D_j
f)(\zeta)|^2
|\zeta|^{-4j+
2\widetilde\alpha
+2\widetilde\beta -n
}d\zeta \le C \int_{\mathbb R^{n-1}}
\int_{\mathbb R^{n-1}}
|\mathcal F f(\zeta-\eta, \eta) |^2
|\zeta-\eta|^{2\widetilde\alpha}
|\eta|^{2\widetilde\beta}
d\eta\, d\zeta= C\Vert
f\Vert_{\alpha\otimes \beta}^2
$$
whereas the LHS is precisely
$ \Vert
\mathcal D_jf\Vert_{\alpha+ \beta+2j}^2$.
This finishes the proof.
\end{proof}
When $n=2$ we have necessarily $j=0$, and the theorem
states
that
$\pi_{\alpha+\beta}$ appears in the tensor product
$\pi_{\alpha} \otimes
\pi_{\beta}$ if $\alpha+\beta <1$. This has been
proved earlier in \cite{Repka78}.
\section{The appearance of
one component $\pi_{\alpha+\beta}$ in $\pi_{\alpha}\otimes
\pi_{\beta}$ for other rank one groups $G=SU(n, 1),
Sp(n, 1)$}
We treat now the other rank one groups.
\begin{theo+} Let $G=SU(n, 1)$ or $Sp(n, 1)$, and let
$\pi_{\alpha}$ and $\pi_{\beta}$
be the complementary series for $\alpha, \beta$ as in
(\ref{mu-fixed}),
$0<\alpha, \beta <\rho=n$ and respectively
$2<\alpha, \beta <\rho=2n-1$.
Then the complementary series
$(\pi_{\alpha+\beta}, G)$ of
$G$
appears discretely in the tensor product
$\pi_{\alpha}\otimes \pi_{\beta}$
if
$$
\alpha +\beta <
\begin{cases}
n & \mathbb F=\mathbb C \\
2n-1\quad & \mathbb F=\mathbb H.
\end{cases}
$$
\end{theo+}
\begin{proof} We prove the case $G=SU(n, 1)$; the same method also applies
to $G=Sp(n, 1)$. We consider
the diagonal imbedding of $G$
in $G_1$. It follows from Theorem 2.1
that for $\alpha, \beta\in (0, \rho)$
the complementary series $\pi_\alpha$
and $\pi_\beta$ appear in $\tau_{\frac \alpha
2 }$
and $\tau_{\frac \beta 2
}$, respectively.
Now $\tau_{\nu}$ of $G_1=SU(n, 1)
\times SU(n, 1)$
is the tensor product
$\lambda_\nu \otimes \overline
{\lambda_{\nu}}$ on
$\mathcal H_\nu
\otimes \overline
{\mathcal H_\nu}
$
where $\mathcal H_\nu$ is the space
of holomorphic functions on the unit ball
$B^n$ with the reproducing kernel $(1-\langle z, w\rangle)^{-\nu}$.
If $\alpha +\beta<n$
then
$\pi_\alpha$
appears in $\tau_{\frac \alpha 2}$, so does
$\pi_\beta$ in
$\tau_{\frac \beta 2}$.
The tensor product
$\tau_{\frac \alpha 2
}\otimes
\tau_{\frac \beta 2}$ is now
$$
H:=(\mathcal H_{\frac \alpha 2}
\otimes \overline
{\mathcal H_{\frac \alpha 2}} )
\otimes
(\mathcal H_{\frac \beta 2}
\otimes \overline
{\mathcal H_{\frac \beta 2}} ).
$$
Its restriction to $G$
is
$$H=(\mathcal H_{\frac \alpha 2}
\otimes
\mathcal H_{\frac \beta 2})\otimes
\overline
{(\mathcal H_{\frac \alpha 2}
\otimes\mathcal H_{\frac \beta 2})
}.
$$
However the tensor product
$\mathcal H_{\frac \alpha 2}
\otimes
\mathcal H_{\frac \beta 2}$
of two holomorphic representations decomposes
discretely under $G$ and contains
a component
$\mathcal H_{\frac{\alpha+\beta }2}
$. Thus $H$ contains a discrete component
$
\mathcal H_{\frac{\alpha+\beta }2}
\otimes
\overline
{\mathcal
H_{\frac{\alpha+\beta }2}
}.
$
We use again Theorem 2.1 and deduce that
this space has a discrete component $(\pi_{\alpha +\beta}, G)$.
\end{proof}
\section{Introduction}\label{intro}
The deterministic SIR models are usually investigated through ordinary differential equations for prediction \cite{howard}. They can also be viewed in a stochastic framework, which is more realistic but also more complicated to analyze.\\
In population dynamics, deterministic models have been developed with success in many situations. These models involve some parameters whose estimated values play, in concrete applications, a crucial role in the prediction of the studied system and even in decision-making policies. Usually one considers them to be deterministic but, due to errors in measurements, variability in the populations, and other factors that introduce uncertainties, one can also think of the parameters as random variables.
To take these aspects into account, some skills in probability theory, statistics, and differential equations are required, and the theory of stochastic differential equations naturally comes into play; fortunately there are many excellent books on these topics. For more details see for instance \cite{LCE,Ok,IM}.\\
With the tools developed in this field, interdisciplinary areas such as mathematical biology, biostatistics, and bio-engineering have grown rapidly. Interesting works have been done and the reader can see for example \cite{A1,A2,LS} and references therein.\\
Our aim in this work is, on the one hand, to propose a stochastic model to analyze the COVID-19 pandemic and, on the other hand, to deepen the numerical analysis of such phenomena in situations where the settings may be random.\\
Many theoretical studies of the evolution of infectious diseases such as COVID-19 have recently been proposed in \cite{ndiaye,seydi, hiroshi,Kamalich,steven}.
\noindent In the simple SIR model, the total population for each country \cite{pyramid} is assumed to be constant and divided into three classes (susceptible, infected, and recovered).
In the numerical simulations, we start with the deterministic case, followed by the newly proposed stochastic model of section \ref{model}, then three other SIR models and machine learning for forecasting, where algorithms can include artificial neural networks, deep learning, association rules, decision trees, reinforcement learning and Bayesian networks \cite{arkes,litsa}.\\
\noindent First, we carefully collect the pandemic data from \cite{datahub},
e.g. \url{https://www.tableau.com/covid-19-coronavirus-data-resources}, from January 21, 2020 to April 19, 2020. After an exploratory data analysis, we propose six (6) techniques, namely a simple SIR model, a stochastic SIR model with Brownian motion, SIR with Deaths, SIR with Fatal, SIR with Exposed and Waiting cases and Fatal, and machine learning tools, to analyze the coronavirus pandemic worldwide. A special study is done for Senegal.
\noindent The paper is organized as follows. In section \ref{model}, we present a stochastic SIR model with the existence, uniqueness and some qualitative results. In section \ref{numsim}, we present approximation methods to estimate different parameters involving in the SIR models. It is followed by various numerical tests for comparative prediction. Finally, in section \ref{ccl}, we present conclusions and perspectives.
\section{Modeling, Existence, Uniqueness and Properties}\label{model}
\subsection{Stochastic Model}
The stochastic aspect would be very interesting due to the lack of data, especially in the case of Senegal with the expansion of the COVID-19 pandemic. Some infected people do not develop the disease but spread it (the case of some children, as is suspected), not to mention the unmonitored asymptomatic cases that are the cause of community transmission. There is also the probability of touching infected objects: there are too many random factors in the transmission of the disease. Let us propose, among many possibilities, the following stochastic SIR model:
\begin{equation}\label{stoeq1}
\left\{
\begin{array}{ll}
\frac{dS}{dt}=-\beta IS + \sigma_1 \xi_1\\[3mm]
\frac{dI}{dt}= \beta IS-\gamma I + \sigma_2 \xi_2\\[3mm]
\frac{dR}{dt}=\gamma I
\end{array}\right.
\end{equation}
\begin{itemize}
\item $S$ is the number of individuals susceptible to be infected at time $t$.
\item $I$ is the number of both asymptomatic and symptomatic infected individuals at time $t$.
\item $R$ is the number of recovered persons at time $t$.
\item The parameters $\beta$ and $\gamma$ are respectively the transmission rate through exposure of the disease and the rate of recovering.
\item $\sigma_1$ and $\sigma_2$ are diffusion coefficients that are interpreted as volatility rates. They may depend on the time $t$, the suspects and the infected. $\xi_1$ and $\xi_2$ are white noises.
\end{itemize}
One interesting case that we endeavor to look at in this work, is when the above system is written as follows:
\begin{equation}\label{stoeq2}
\left\{
\begin{array}{ll}
dS=-\beta IS dt - \sigma_1 I S dW_1\\[3mm]
dI= (\beta IS -\gamma I)\, dt + \sigma_2 I S\, dW_2\\[3mm]
\frac{dR}{dt}=\gamma I
\end{array}\right.
\end{equation}
$W_1$ and $W_2$ are classical Brownian motions; $\sigma_1$ and $\sigma_2$ are positive constants.
\begin{rem}
Let us note that instead the term $\sigma_i IS dW_i, i= 1, 2$ one could propose $\sigma_i \sqrt{IS} dW_i, i= 1, 2.$
\end{rem}
\noindent As in the deterministic case, let us consider a population sample whose size, denoted by $N$, satisfies the relation
\begin{eqnarray}
N= S(t)+ I(t)+ R(t)
\end{eqnarray}
The balance property $dS+dI+dR=0$ (so that the population size $N$ remains constant) implies the following constraint
\begin{eqnarray}\label{bal}
-\sigma_1 dW_1+ \sigma_2 dW_2= 0.
\end{eqnarray}
With this constraint, in the numerical simulation of the stochastic case we shall only need an estimation of one of $\sigma_1, \sigma_2.$\\
Before proceeding further, we are going to present some classical results for the stochastic differential equation model, such as existence and uniqueness results, the notion of stopping time, and its properties.
\subsection{Existence and Uniqueness}
We start this section with a few useful reminders from probability theory. For more details, see for instance \cite{LCE,Ok}.
\begin{defi}
Let $\Omega$ be a set.\\
A $\sigma-$algebra is a collection $\mathcal U$ of subsets of $\Omega$ with the following properties
\begin{itemize}
\item $\emptyset, \Omega \in \mathcal U;$
\item if $A\in \mathcal U,$ then $A^c:= \Omega \backslash A \in \mathcal U;$
\item if $A_1, A_2,..., \in \mathcal U$ then $\displaystyle \bigcup_{i=1}^{\infty}A_i, \bigcap_{i=1}^{\infty}A_i \in \mathcal U.$
\end{itemize}
\end{defi}
\noindent Let $ W (.) $ be a $1-$dimensional Brownian motion defined on some probability space $(\Omega, \mathcal U , \mathbb{P} ).$
\begin{defi}
The $\sigma-$algebra $\mathcal W (t):= \mathcal U (W (s)/ 0\leq s\leq t)$ is called the history of Brownian motion up to and including $t.$\\
The $\sigma-$algebra $\mathcal W^{+}(t):= \mathcal U (W (s)/ s\geq t)$ is called the future of Brownian motion beyond time $t.$
\end{defi}
\begin{defi}
A family $\mathcal F (.)$ of $\sigma-$algebras included in $\mathcal U$ is called non-anticipating with respect to $W(.)$ if
\begin{enumerate}
\item $\mathcal F (t) \supseteq \mathcal F (s)$ for all $t\geq s \geq 0;$
\item $\mathcal F (t) \supseteq \mathcal W (t)$ for all $t \geq 0;$
\item $\mathcal F (t) $ is independent of $\mathcal W^{+}(t)$ for all $t \geq 0;$
\end{enumerate}
It is referred to $ \mathcal F (t)$ as a filtration.
\end{defi}
\noindent Let us state a general version of the Cauchy--Lipschitz theorem covering equations such as (\ref{stoeq2}); for the details, see for example \cite{LCE}.
\begin{thm}\label{GCL}
Suppose that the two functions $\textbf{b}:\mathbb R^n\times[0,T]\longrightarrow \mathbb R^n$ and $\textbf{B}:\mathbb R^n\times[0,T]\longrightarrow \mathbb M^{m\times n}$ are continuous and satisfy the following conditions:
\begin{itemize}
\item[(a)] $\mid \textbf{b}(x,t)-\textbf{b}(\hat x,t)\mid \leq L\mid x-\hat x\mid $ \;and\; $\mid \textbf{B}(x,t)-\textbf{B}(\hat x,t)\mid \leq L\mid x-\hat x\mid $ $\forall \;0\leq t\leq T \; and\; x,\hat x \in \mathbb R^n$.
\item[(b)] $\mid \textbf{b}(x,t)\mid \leq L(1+\mid x\mid) $ and $\mid \textbf{B}(x,t)\mid \leq L(1+\mid x\mid) $ $\forall \;0\leq t\leq T \; , x \in \mathbb R^n$
for some constant L.\\
Let $\textbf{X}_0$ be any $\mathbb R^n$-valued random variable such that
\item[(c)] $E(\mid \textbf{X}_0\mid^2)<\infty$, and
\item[(d)] $\textbf{X}_0$ is independent of $\mathcal W^+(0)$,
where $\textbf{W}(.)$ is a given m-dimensional Brownian motion.
\end{itemize}
Then there exists a unique solution $\textbf{X}\in \mathbb L^2_n(0,T)$ of the stochastic differential equation:
\begin{equation}\label{stoeq}
\left\{
\begin{array}{ll}
d\textbf{X}=\textbf{b}(\textbf{X},t){dt} + \textbf{B}(\textbf{X},t) d\textbf{W} \ \ \ \ (0\leq t\leq T) \\
\textbf{X}(0)=\textbf{X}_0
\end{array}\right.
\end{equation}
\end{thm}
\noindent The above theorem is now adapted to our case of study.
\begin{thm}
Let $\textbf{X}_0 = (S_0^*, I_0^*, R_0^*)$ be any $\mathbb R^3$-valued random variable such that:
\begin{itemize}
\item[(i)] $E(\mid \textbf{X}_0\mid^2)<\infty$, and
\item[(ii)] $\textbf{X}_0$ is independent of $\mathcal W^+(0)$,
\end{itemize}
There exists a unique solution of the following stochastic differential equation:
\begin{equation}\label{stoeq}
\left\{
\begin{array}{ll}
dS=-\beta IS dt - \sigma_1 IS dW_1\\[3mm]
dI= (\beta IS-\gamma I )dt+ \sigma_2 ISdW_2\\[3mm]
\frac{d{ R}}{dt}=\gamma I\\ [3mm]
S(0)=S_0^*, \ I(0)=I_0^*, \ R(0)=R_0^*
\end{array}\right.
\end{equation}
\end{thm}
\begin{proof}
The proof is simple: it suffices to verify that the hypotheses of Theorem \ref{GCL} are satisfied.
$$ \left(\begin{array}{ccc}
dS \\
dI \\
dR\\
\end{array}
\right) =
\left(\begin{array}{ccc}
-\beta I & 0& 0\\
\beta I & -\gamma&0\\
0 & \gamma & 0\\
\end{array}
\right)
\left(\begin{array}{ccc}
S \\
I \\
R\\
\end{array}
\right) dt
+ \left(\begin{array}{ccc}
-\sigma_1IS \\
\sigma_2IS \\
0\\
\end{array}\right) d\textbf{W}
$$
Our system fits well to the one considered in the above general theorem.
with:
$\textbf{X}= \left(\begin{array}{ccc}
S \\
I \\
R\\
\end{array}
\right) $, \ $\textbf{W} =\left(\begin{array}{ccc}
W_1 \\
W_2 \\
0
\end{array}
\right)$, \ $ \textbf{b}(\textbf{X})=
\left(\begin{array}{ccc}
-\beta I & 0& 0\\
\beta I & -\gamma&0\\
0 & \gamma & 0\\
\end{array}
\right) $ \; and $\textbf{B}(\textbf{X})= \left(\begin{array}{ccc}
- \sigma_1 IS \\
\sigma_2 IS \\
0\\
\end{array}
\right) $.
\noindent By chosen $\hat X= \left(\begin{array}{ccc}
\hat S \\
\hat I \\
\hat R\\
\end{array}
\right) $, we have:
$ \textbf{b}(X)-\textbf{b}(\hat X)=
\left(\begin{array}{ccc}
-\beta (I-\hat I) & 0& 0\\
\beta (I-\hat I) & 0&0\\
0 & 0 & 0\\
\end{array}
\right) \Longrightarrow \mid\mid b(X)-b(\hat X)\mid\mid_\infty=\beta|I-\hat I| $
$X-\hat X= \left(\begin{array}{ccc}
S- \hat S \\
I- \hat I \\
R- \hat R\\
\end{array}
\right) \Longrightarrow \mid\mid X-\hat X\mid\mid_\infty=\max(|S-\hat S|, |I-\hat I| ,|R-\hat R|) $
$$|I-\hat I|\leq \max(|S-\hat S|,|I-\hat I|, |R-\hat R|)$$
$ \Longrightarrow \mid\mid b(X)-b(\hat X)\mid\mid_\infty=\beta|I-\hat I| \leq \beta \mid\mid X-\hat X\mid\mid_\infty$.
For the growth condition, note that
$$\mid\mid b(X)\mid\mid_\infty=\max(\beta I, \beta I+\gamma,\gamma)=\beta I+\gamma \quad \mbox{and} \quad \mid\mid X\mid\mid_\infty=\max(S,I,R).$$
Since $\beta I\leq\beta \max(S,I,R)=\beta\mid\mid X\mid\mid $ and $\gamma< \beta$ (which holds for our estimated parameters), we get
$$ \mid\mid b(X)\mid\mid_\infty=\beta I+\gamma\leq \beta+\beta\mid\mid X\mid\mid=\beta(1+\mid\mid X\mid\mid).$$
$$\displaystyle \mid\mid B\mid\mid=\sup_{\|X\|\leq 1}\frac{|B(X)|}{\|X\|}\leq C_0 +C_0 \max(S,I,R)= C_0 (1+\mid\mid X\mid\mid)$$
where $C_0= \max\{\sigma_1, \sigma_2\} $.
We see that it suffices to take $L= \max\{\beta, C_0 \}$
\end{proof}
\subsection{Stopping time}\label{stopping}
Let $(\Omega, \mathcal U, \mathbb{P})$ be a probability space and $\mathcal F(.)$ a filtration of $\sigma-$algebras.
\begin{defi}
A random variable $\tau: \Omega \longrightarrow [0, +\infty]$ is called a stopping time with respect to $\mathcal F (.)$ if
\begin{eqnarray*}
\{\tau \leq t\}\in \mathcal F(t) \, \, \mbox{for all}\, \, t \geq 0.
\end{eqnarray*}
\end{defi}
\noindent This means that the set of all $\omega \in \Omega$ such that $\tau (\omega)\leq t $ is an $\mathcal F(t)-$ measurable set.
For our model, it could be interesting to study stopping times, for instance in the situation where efforts are made to contain or eradicate the pandemic. An interesting question is: can we find a finite stopping time at which the susceptible, infected and recovered (or removed) variables $S(t),$ $I(t)$ and $R(t)$ are all below given thresholds?\\
Let us state a general theorem in which the stopping time does exist, but may take the value $+\infty.$
\begin{thm}
Let $V$ be either a nonempty closed subset or a nonempty open subset of $\mathbb{R}^3$. Then
\begin{eqnarray}
\tau:= \inf\{t\geq 0 / (S(t), I(t), R (t)) \in V\}
\end{eqnarray}
is a stopping time
\end{thm}
\noindent Let us point out, as a remark, some interesting properties of stopping times.
Let $\tau_1$ and $\tau_2$ be stopping times with respect to $\mathcal F (.)$; then:
\begin{itemize}
\item $\{\tau < t\} \in \mathcal F(t),$ and so $ \{\tau = t\} \in \mathcal F(t)$ for all times $t\geq 0.$
\item $\tau_1\wedge \tau_2:= \min (\tau_1, \tau_2), \tau_1 \vee \tau_2: =\max (\tau_1, \tau_2) $ are stopping times
\end{itemize}
In the numerical simulations, we shall wonder if it is possible to find a finite stopping time by considering
$V$ a closed subset of $\mathbb{R}^3$ in the form of $[0, S^*] \times [0, I^*]\times[0, R^*].$
\section{Parameter estimation and numerical simulations}\label{numsim}
In this section, we present the simulations of the simple SIR, stochastic SIR, SIR with Deaths, SIR with Fatal, SIR Exposed and Waiting cases with Fatal and the machine learning technic for forecasting of the pandemic.
The numerical tests were performed using Python with the Pandas library
\cite{python}. The numerical experiments were executed on a computer with the following characteristics: intel(R) Core-i7 CPU 2.60GHz, 24.0Gb of RAM, under the UNIX system.
\subsection{Exploratory data analysis}\label{eda}
As stated in the introduction, the simulations are carried out from the data in \cite{datahub}, from January 21, 2020 to April 19, 2020.
We first analyze the data and perform some preprocessing before the simulations. It is good practice to check the data types, along with finding out whether columns contain null values or not.\\
We then compute various summary statistics, giving the count (number of observations), mean, standard deviation, minimum and maximum values, and the quantiles of the data (see Tables \ref{futurconfcases} and \ref{totdat}); a minimal code sketch of this step is given after Figure \ref{ww_cidr}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c| }
\hline
{\bf Values } & {\bf Confirmed} & {\bf Deaths } & {\bf Recovered} \\
\hline
count & 16729 & 16729 & 16729 \\
\hline
mean & 2532.4691 & 143.0855 & 624.3309\\
\hline
std & 13183.2366 & 1140.9791 & 4818.1358 \\
\hline
min & 0 & 0 & 0\\
\hline
25\% & 8 & 0 & 0 \\
\hline
50\% & 85 & 1 & 1 \\
\hline
75\% & 576 & 6 & 52 \\
\hline
max & 247815 & 23660 & 88000 \\
\hline
\end{tabular}
\end{center}
\caption{Worldwide summary statistics (per day) until April 19th, 2020.}\label{futurconfcases}
\end{table}
\par\vspace{-.75cm}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c| c| }
\hline
{\bf Date} & {\bf Confirmed} &{\bf Infected} & {\bf Deaths} &{\bf Recovered} \\
\hline
2020-04-17 & 2240191 & 1518026 & 153822 & 568343 \\
\hline
2020-04-18 & 2317759 &1565930 & 159510 & 592319 \\
\hline
2020-04-19 & 2401379 & 1612432 &165044 & 623903 \\
\hline
\end{tabular}
\end{center}
\caption{Worldwide total data until April 19th, 2020.}\label{totdat}
\end{table}
\noindent The worldwide cumulative of confirmed, deaths and recovered cases are illustrated in Figure \ref{ww_cidr}.
\begin{figure}[h!]
\centering
\includegraphics[width=1.\linewidth]{figures/worldwide_CIDR.png}
\par\vspace{-1.cm}
\caption{Worldwide - confirmed, deaths and recovered} \label{ww_cidr}
\end{figure}
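\noindent As an illustration, the preprocessing and summary step described above can be sketched as follows with Pandas; the file name and the column names are assumptions and may differ from the exact layout of the data in \cite{datahub}.
\begin{verbatim}
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file name; the raw data come from the COVID-19 data hub cited above.
df = pd.read_csv("covid19_data.csv", parse_dates=["Date"])

# Data types and missing values per column.
print(df.dtypes)
print(df.isnull().sum())

# Per-day summary statistics (count, mean, std, min, quartiles, max).
print(df[["Confirmed", "Deaths", "Recovered"]].describe())

# Worldwide cumulative curves: sum over countries for each date.
worldwide = df.groupby("Date")[["Confirmed", "Deaths", "Recovered"]].sum()
worldwide.plot(title="Worldwide cumulative cases")
plt.show()
\end{verbatim}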
\subsection{Parameter estimation}
The identification of a real dynamical system (called the object) consists in characterizing another system (called the model), starting from the experimental knowledge of the inputs and outputs, so as to obtain an identity of behavior. In practice, the purpose of identification is generally to determine a suitable model, which can be used to simulate, control, or regulate a process. This model can be physical (in the sense of an analog or digital simulator, or a reduced model), or abstract (a mathematical model, i.e. a system of algebraic or differential equations (ODE or PDE)). \\
We start this subsection with the estimation of the parameters $\beta, \gamma$ of the deterministic SIR model by the standard least squares method.
\subsubsection{Deterministic case}
\begin{equation}\label{det}
\left\{
\begin{array}{ll}
\frac{dS}{dt}=-\beta IS\\[3mm]
\frac{dI}{dt}= \beta IS-\gamma I\\[3mm]
\frac{dR}{dt}=\gamma I
\end{array}\right.
\end{equation}
\noindent Let us consider a time interval $[0, T]$ and subdivide it as follows:
$\forall i\in\{0,..., n-1\}$, $t_{i+1}- t_i =1$, $t_0= 0$, $t_{n-1}= T$.
Let: $\beta (t_i)= \beta_i, \gamma (t_i)= \gamma_i, S (t_i)= S_i$, $I(t_i)= I_i$, and $R(t_i)= R_i$.\\
Discretizing $(\ref{det}),$ we get the following system with $2n$ unknowns ($\forall i= 0,..., n-1$):
\begin{equation}\label{det2}
\left\{
\begin{array}{ll}
S_{i+1}- S_i =-\beta_i I_i S_i\\[3mm]
I_{i+1}- I_i= \beta I_i S_i-\gamma_i I_i\\[3mm]
R_{i+1}- R_i=\gamma_i I_i
\end{array}\right.
\end{equation}
\noindent Let us set $$Y= (S_1,..., S_n, I_1,..., I_n) \in \mathbb{R}^{2n}\quad \mbox{and} \quad F (\beta_0,..., \beta_{n-1}, \gamma_0,..., \gamma_{n-1})\in \mathbb{R}^{2n}, $$ the latter being the vector coming from the right hand side of the first two equations of $(\ref{det2}).$
$$P:= (\beta_0, \beta_1,..., \beta_{n-1}, \gamma_0, \gamma_1,..., \gamma_{n-1})\in [0, 1]^{2n}. $$
We try to minimize:
$$\mathcal E (P):= \|Y-F(P)\|_2^2$$ on $ [0, 1]^{2n} $, where $\|.\|_2$ is the Euclidean norm in $ \mathbb{R}^{2n}.$\\
The continuity of $\mathcal E$ on the compact set $[0, 1]^{2n}$ ensures the existence of a minimizer that we denote by $P_{opt} = (\beta_0^*, \beta_1^*,..., \beta_{n-1}^*, \gamma_0^*, \gamma_1^*,..., \gamma_{n-1}^*).$\\
And the approximated parameters proposed are:
\begin{eqnarray*}
\beta_{app}= \frac{1}{n}\sum_{i=0}^{n-1} \beta_i^* \qquad \mbox{and}\qquad
\gamma_{app}= \frac{1}{n}\sum_{i=0}^{n-1} \gamma_i^*
\end{eqnarray*}
We have:
\begin{itemize}
\item $S$ : Susceptible (= $N$ - Confirmed)
\item $I$ : Infected (= Confirmed - Recovered - Deaths)
\item $R$ : Recovered (= Recovered + Deaths)
\item $S+I+R=N$, where $N$ is the total population that can be obtained in \cite{pyramid}.
\end{itemize}
\noindent The basic reproduction number (also called basic reproduction ratio) is defined as $R_0 = \beta \gamma^{-1}$.
This ratio is derived as the expected number of new infections (these new infections are sometimes called secondary infections) from a single infection in a population where all subjects are susceptible.
For the model, we also have:
$$ S+I \overset{\beta}{\longrightarrow} 2I \qquad \mbox{and}\qquad I \overset{\gamma}{\longrightarrow} R $$
with: $\beta$= effective contact rate [1/day], \
$\gamma$= recovery(+mortality) rate [1/day].
\begin{rem}
An easy way to compute $\gamma_i$ is to use the equation $R_{i+1}- R_i=\gamma_i I_i$: set $y_i=R_{i+1}- R_i$ and $x_i=I_i$ \quad ($\forall i=1,...,n-1$), so that $y_i=f(x_i)$ with $f$ linear. \\
Finally, since the $\gamma_i$ are bounded $\forall i=1,...,n$, we can call the {\tt curve\_fit} procedure. The {\tt scipy.optimize.curve\_fit} routine uses non-linear least squares to fit a function, $f$, to data, assuming {\tt ydata = f(xdata, *params) + $\epsilon$}.\\
The return value {\tt popt} contains the best-fit values of the parameters. The return value {\tt pcov} contains the covariance (error) matrix for the fit parameters. From them, we can determine the standard deviations of the parameters. We can also determine the correlation between the fit parameters.
\end{rem}
\noindent To estimate $\beta$, we use the same procedure with the first equation $y_i = S_i-S_{i+1} =\beta_i I_i S_i=\beta_i x_i, \quad (\forall i=1,...,n-1)$. Recall that $y_i\geq 0$, and $x_i=I_i S_i$.
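\noindent As an illustration, the two fits described above can be sketched as follows. This is a minimal sketch: {\tt S}, {\tt I}, {\tt R} are assumed to be one-dimensional NumPy arrays of daily values, and only the linear models $y_i=\gamma x_i$ and $y_i=\beta x_i$ of the previous paragraphs are used.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def linear(x, a):
    # one-parameter linear model y = a * x, used for both fits
    return a * x

# gamma from R_{i+1} - R_i = gamma * I_i
popt, pcov = curve_fit(linear, I[:-1], R[1:] - R[:-1])
gamma = popt[0]

# beta from S_i - S_{i+1} = beta * I_i * S_i
popt, pcov = curve_fit(linear, I[:-1] * S[:-1], S[:-1] - S[1:])
beta = popt[0]

print(beta, gamma, beta / gamma)   # beta, gamma and the ratio R_0
\end{verbatim}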
%
\subsubsection{Volatility rates}
To approximate the volatility rates $\sigma_1, \sigma_2$, we propose to use the standard deviation. We thus need to compute the variances of the distributions obtained in the deterministic estimations of the previous subsubsection.
\begin{eqnarray*}
V_1= \frac{1}{n}\sum_{i=0}^{n-1}(\beta_i^*- \beta_{app})^2 \qquad \mbox{and}\qquad
V_2= \frac{1}{n}\sum_{i=0}^{n-1}(\gamma_i^*- \gamma_{app})^2
\end{eqnarray*}
A first idea for approximating the volatilities is:
$$\sigma_1= \sqrt{V_1}, \qquad \sigma_2= \sqrt{V_2}.$$
But in the numerical simulation section we have to take into account the equilibrium condition $(\ref{bal})$ that derives from the modeling.\\
A second idea that could be better, is to take:
$$\sigma_i= \frac{\sqrt{V_1}+ \sqrt{V_2}}{2}, \quad i= 1, 2. $$
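\noindent A minimal sketch of this computation, assuming that the arrays {\tt beta\_daily} and {\tt gamma\_daily} hold the daily fitted rates $\beta_i^*$ and $\gamma_i^*$, reads:
\begin{verbatim}
import numpy as np

# beta_daily, gamma_daily: assumed arrays of the daily fitted rates beta_i*, gamma_i*
V1 = np.var(beta_daily)     # empirical variance of the beta_i*
V2 = np.var(gamma_daily)    # empirical variance of the gamma_i*

sigma1, sigma2 = np.sqrt(V1), np.sqrt(V2)          # first idea
sigma_common = (np.sqrt(V1) + np.sqrt(V2)) / 2     # second idea: a common volatility
\end{verbatim}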
\begin{rem}
\textbf{Before proceeding further, we would like to underline that the numerical tests carried out below are to be understood under hypotheses. If nothing is done in time, the predictions below could materialize. This is an invitation to see how minimal actions can be organized to strongly mitigate the possible damage caused by the COVID-19 pandemic in a country like Senegal.}
\end{rem}
\subsection{Simple SIR model}\label{simpleSIR}
First, we use the SIR model simulations for the Senegal case study. For the initial values, we use the data already stated in section \ref{eda}. The cases for the first three days are given in Table \ref{senI0}.
We have $N=16,296,361$ up to the year 2019 (see \cite{pyramid}), $I_0^*$ = Confirmed[0]=1, $S_0^* = N-I_0^*-R_0^*$. \\
To estimate the parameters $\beta$ and $\gamma$, we call the {\tt Optuna} package \cite{optuna} with Python, an open source hyper-parameter optimization framework to automate hyper-parameter search.
For the contact rate ($\beta$) and the mean recovery rate ($\gamma$), we obtain $\beta=0.115953$, $\gamma=0.030912$ (1/[days]) and $R_0=3.75$. The prediction is given in Figure \ref{fig_sir}.
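\noindent A minimal sketch of the corresponding simulation, with the estimated values above, is given below; it assumes that the compartments are expressed as fractions of the total population, which is consistent with the magnitudes of $\beta$ and $\gamma$ reported above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

N = 16_296_361                       # total population of Senegal (2019)
beta, gamma = 0.115953, 0.030912     # estimated rates (1/day)

def sir(t, y):
    s, i, r = y                      # compartments as fractions of N
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

i0 = 1.0 / N                         # one confirmed case on 2020-03-02
sol = solve_ivp(sir, (0, 300), [1 - i0, i0, 0.0],
                t_eval=np.linspace(0, 300, 301))

plt.plot(sol.t, sol.y[1], label="Infected (fraction)")
plt.plot(sol.t, sol.y[2], label="Recovered (fraction)")
plt.xlabel("days"); plt.legend(); plt.show()
\end{verbatim}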
\par\vspace{-.5cm}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c| }
\hline
Date & Confirmed & Infected & Deaths \\
\hline
2020-03-02 & 1 & 1 & 0 \\
2020-03-04 & 2 & 2 & 0 \\
2020-03-05 & 4 & 4 & 0 \\
\hline
\end{tabular}
\end{center}
\caption{Senegal: first three days}\label{senI0}
\end{table}
\par\vspace{-1.cm}
\begin{center}
\begin{figure}[h!]
\centering
\includegraphics[width=.9\linewidth]{figures/Senegal_Prediction_deterministe.png}
\caption{Senegal: prediction with SIR}\label{fig_sir}
\end{figure}
\end{center}
\par\vspace{-.5cm}
\subsection{Stochastic SIR model }
In this stochastic part, the following relation is to be considered
\begin{eqnarray}
-\sigma_1 dW_1+ \sigma_2 dW_2= 0
\end{eqnarray}
with $dW = Z\sqrt{\Delta t}$, and $Z \sim \mathcal{N} (0 ,1 )$.\\
For the numerical simulation of the stopping time, we consider, $V$ a closed subset of $\mathbb{R}^3$ in the form of $[0, S^*] \times [0, I^*]\times[0, R^*].$\\
We follow the same procedure as in section \ref{simpleSIR}, with the same population and the same initialization, $I_0^*$ = Confirmed[0]=1, $S_0^* = N-I_0^*-R_0^*$. \\
To estimate the parameters $\beta$ and $\gamma$, we call the procedure {\tt scipy.optimize.curve\_fit} with Python.
For the contact rate ($\beta$) and the mean recovery rate ($\gamma$), we have: $\beta=0.135005$, $\gamma=0.026979$ (1/[days]), $\sigma_2=0.036254$ and $R_0=5$. As $\sigma_1 dW_1 = \sigma_2 dW_2$, we only use $\sigma_2 dW_2$ in the simulations. \\
Predictions for different simulations (because of the Brownian motion) are given in Figure \ref{sn_ssir}.
{\bf Due to the Brownian motion, the curve changes for each simulation.} For 4 tests, this gives the results of Figures \ref{ssir_1}, \ref{ssir_2}, \ref{ssir_3} and \ref{ssir_4}.
\begin{rem}
The stopping time (see section \ref{stopping}) is illustrated in dotted line and appears around the middle April.
\end{rem}
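\noindent A minimal Euler--Maruyama sketch of system (\ref{stoeq2}) with the values reported above is given below. The compartments are again assumed to be normalized by $N$, a single noise term drives $S$ and $I$ because of the balance constraint $\sigma_1 dW_1=\sigma_2 dW_2$, and the thresholds $S^*, I^*, R^*$ defining the set $V$ of the stopping time are purely illustrative.
\begin{verbatim}
import numpy as np

beta, gamma, sigma2 = 0.135005, 0.026979, 0.036254   # estimated values above
N, dt, n_steps = 16_296_361, 1.0, 300

s, i, r = 1 - 1.0 / N, 1.0 / N, 0.0       # normalized compartments
S_star, I_star, R_star = 0.9, 0.05, 0.3   # illustrative thresholds defining V
tau = None
rng = np.random.default_rng()

for k in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    noise = sigma2 * i * s * dW           # sigma_1 dW_1 = sigma_2 dW_2
    ds = -beta * i * s * dt - noise
    di = (beta * i * s - gamma * i) * dt + noise
    dr = gamma * i * dt
    s, i, r = max(s + ds, 0.0), max(i + di, 0.0), r + dr
    if tau is None and s <= S_star and i <= I_star and r <= R_star:
        tau = (k + 1) * dt                # first hitting time of V
\end{verbatim}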
\begin{figure}[h!]
\subfloat[a first stochastic SIR]{
\begin{minipage}[1\width]{0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ssir1.png}\label{ssir_1}
\end{minipage}}
\subfloat[a second stochastic SIR]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ssir2.png}\label{ssir_2}
\end{minipage}}
\newline
\subfloat[a third stochastic SIR]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ssir3.png}\label{ssir_3}
\end{minipage}}
\subfloat[a fourth stochastic SIR]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ssir4.png}\label{ssir_4}
\end{minipage}}
\caption{Senegal: prediction with stochastic SIR}\label{sn_ssir}
\end{figure}
\subsection{Interpretation of figures}
The values of the parameters having been estimated, the deterministic SIR model (Figure \ref{fig_sir}) shows that the peak of infection could be reached by mid-May with about 37.5\% of the population infected. The same peak period (mid-May) for the infected is observed in Figures \ref{fig_sird}, \ref{fig_sirf} and \ref{fig_sewir}. \\
\noindent The stochastic SIR model illustrates that it is possible to have nearly the same peak as the deterministic one (but with a larger infected population) if the random factors are not too important (Figures \ref{ssir_1} and \ref{ssir_2}).\\
\noindent On the other hand, if the effect of randomness is important, the stochastic SIR model is less optimistic and shows that the peak of infection could be reached in early June (Figures \ref{ssir_3} and \ref{ssir_4}) with about 56\% of the population infected.\\
\noindent A finite stopping time exists (in dotted line) and is established in the second half of April.
\subsection{Others modifications of the SIR model}
There is a large number of modifications of the SIR model, including those that treat births and deaths separately, those where some cases are reported as fatal before the clinical diagnosis of COVID-19, those where the number of exposed cases in the latent period and the number of cases waiting for confirmation are un-measurable variables, etc.
All models allow for understanding how different situations may affect the outcome of the pandemic.
\subsubsection{SIR with Deaths \cite{diekmann,hethcote,Keeling} (SIR-D)}
It is possible to measure the number of fatal cases and recovered cases separately. We can then use two variables, Recovered and Deaths, instead of Recovered + Deaths, in the mathematical model.\\
The model is given by:
\begin{equation}\label{model_sird}
\left\{
\begin{array}{ll}
\frac{dS}{dt}=-\beta IS\\[3mm]
\frac{dI}{dt}= \beta IS-(\gamma+\alpha) I\\[3mm]
\frac{dR}{dt}=\gamma I \\[3mm]
\frac{dD}{dt}=\alpha I
\end{array}\right.
\end{equation}
We have:\\
$S$ : Susceptible, $I$ : Infected, $R$ : Recovered, $D$ : Fatal. \\
In addition, $S+I+R+D=N$, where $N$ is the total population, always obtained from \cite{pyramid}.
\noindent The basic reproduction number (also called basic reproduction ratio) is defined as $R_0 = \beta (\gamma+\alpha)^{-1}$.
For the model, we also have:
$$ S+I \overset{\beta}{\longrightarrow} 2I, \ I \overset{\gamma}{\longrightarrow} R \ \ \mbox{and}\ \ I \overset{\alpha}{\longrightarrow} D $$
with: $\beta$= effective contact rate [1/day], \
$\gamma$= recovery rate [1/day], $\alpha$= mortality rate [1/day].
We also use the optuna package to estimate the parameters. \\
We obtain: $\beta=0.002343$, $\gamma=0.042431$, $\alpha=0.008784$ (1/[days]) and $R_0=3.81$. The prediction is given in Figure \ref{fig_sird}.
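\noindent To illustrate how the Optuna search mentioned above can be set up for the SIR-D model, a minimal sketch follows; the observed arrays {\tt I\_obs}, {\tt R\_obs}, {\tt D\_obs} (daily fractions of the population), the search ranges and the simple Euler integration are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np
import optuna

def simulate_sird(beta, gamma, alpha, y0, n_days):
    s, i, r, d = y0
    traj = []
    for _ in range(n_days):                      # explicit Euler step, dt = 1 day
        s, i, r, d = (s - beta * i * s,
                      i + beta * i * s - (gamma + alpha) * i,
                      r + gamma * i,
                      d + alpha * i)
        traj.append((i, r, d))
    return np.array(traj)

y0 = (1 - I_obs[0], I_obs[0], 0.0, 0.0)

def objective(trial):
    beta = trial.suggest_float("beta", 1e-4, 1.0)
    gamma = trial.suggest_float("gamma", 1e-4, 1.0)
    alpha = trial.suggest_float("alpha", 1e-5, 0.5)
    traj = simulate_sird(beta, gamma, alpha, y0, len(I_obs))
    target = np.stack([I_obs, R_obs, D_obs], axis=1)
    return float(np.sum((traj - target) ** 2))   # squared error to minimize

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=200)
print(study.best_params)
\end{verbatim}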
\begin{center}
\begin{figure}[h!]
\centering
\includegraphics[width=.9\linewidth]{figures/Senegal_Prediction_SIR-D.png}
\caption{Senegal: prediction with SIR with Deaths}\label{fig_sird}
\end{figure}
\end{center}
\par\vspace{-.5cm}
\subsubsection{SIR with Fatal \cite{diekmann,hethcote,Keeling} (SIR-F)}
We can have a situation where some cases are reported as fatal cases before clinical diagnosis of COVID-19. To consider this issue, S + I $\longrightarrow$
Fatal + I will be added to the model.\\
The model is given by:
\begin{equation}\label{model_sirdf}
\left\{
\begin{array}{ll}
\frac{dS}{dt}=-\beta IS\\[3mm]
\frac{dI}{dt}= (1-\alpha_1) \beta IS-(\gamma+\alpha_2) I\\[3mm]
\frac{dR}{dt}=\gamma I \\[3mm]
\frac{dD}{dt}=\alpha_1 \beta IS + \alpha_2 I
\end{array}\right.
\end{equation}
We have:\\
$S$ : Susceptible, $S^*$ : Confirmed and un-categorized, $I$ : Confirmed and categorized as $I$, $R$ : Recovered, $F$: Fatal with confirmation.\\
In addition, $S+I+R+F=N$, where $N$ is the total population, always obtained from \cite{pyramid}.
\noindent The basic reproduction number (also called basic reproduction ratio) is defined as $R_0 = \beta (1-\alpha_1)(\gamma+\alpha_2)^{-1}$.
For the model, we also have:
$$ S\overset{\beta I}{\longrightarrow} S^* \overset{\alpha_1}{\longrightarrow} F,\ S\overset{\beta I}{\longrightarrow} S^* \overset{1-\alpha_1}{\longrightarrow} I, \ I \overset{\gamma}{\longrightarrow} R \ \ \mbox{and}\ \ I \overset{\alpha_2}{\longrightarrow} F $$
with: $\beta$= effective contact rate [1/day], \
$\gamma$= recovery rate [1/day], $\alpha_1$= mortality rate of $S^*$ cases [1/day], $\alpha_2$= mortality rate of $I$ cases [1/day].
We also use the Optuna package to estimate the parameters. \\
We obtain: $\beta=0.103144$, $\gamma=0.02308$, $\alpha_1=0.057568$, $\alpha_2=0.000192$ (1/[days]) and $R_0=4.18$. The prediction is given in Figure \ref{fig_sirf}.
\begin{center}
\begin{figure}[h!]
\centering
\includegraphics[width=.9\linewidth]{figures/Senegal_Prediction_SIR-F.png}
\caption{Senegal: prediction with SIR with Fatal}\label{fig_sirf}
\end{figure}
\end{center}
\par\vspace{-.5cm}
\subsubsection{SIR Exposed and Waiting cases with Fatal \cite{diekmann,hethcote,Keeling} (SEWIR-F)}
We can consider the number of exposed cases in the latent period (E) and the number of cases waiting for confirmation (W), which are un-measurable variables.
If E and W are large, an outbreak will occur in the near future. W and some rules were added to explain the COVID-19 dataset, but this is a SEIR-like model. \\
The model is given by:
\begin{equation}\label{model_sirfew}
\left\{
\begin{array}{ll}
\frac{dS}{dt}=-\beta_1 (W+I)S\\[3mm]
\frac{dE}{dt}= \beta_1 (W+I)S-\beta_2 E\\[3mm]
\frac{dW}{dt}= \beta_2 E - \beta_3 W \\[3mm]
\frac{dI}{dt}= (1-\alpha_1) \beta_3 W-(\gamma+\alpha_2) I\\[3mm]
\frac{dR}{dt}=\gamma I \\[3mm]
\frac{dF}{dt}=\alpha_1 \beta_3 W + \alpha_2 I
\end{array}\right.
\end{equation}
We have:\\
$S$ : Susceptible, $E$ : Exposed and in latent period (without infectivity), $W$ : Waiting cases for confirmation (with infectivity),
$I$ : Confirmed and categorized as $I$, $R$ : Recovered and $F$: Fatal with confirmation.\\
In addition, Total population - Confirmed = $S+E+W+S^*$, Confirmed = $I+R+F$, Recovered = $R$, Deaths = $F$ and $S+E+W+I+R+F=N$, where $N$ is the total population, always obtained from \cite{pyramid}.
\noindent The basic reproduction number (also called basic reproduction ratio) is defined as $R_0 = \beta_1 (1-\alpha_1)(\gamma+\alpha_2)^{-1}$.
For the model, we also have:
$$ S\overset{\beta_1 (W+I)}{\longrightarrow} E \overset{\beta_2}{\longrightarrow} W \overset{\beta_3}{\longrightarrow} S^* \overset{\alpha_1}{\longrightarrow} F,\ S \overset{\beta_1 (W+I)}{\longrightarrow} E \overset{\beta_2}{\longrightarrow} W \overset{\beta_3}{\longrightarrow} S^* \overset{1-\alpha_1}{\longrightarrow} I
$$
$$
\ I \overset{\gamma}{\longrightarrow} R \ \ \mbox{and}\ \ I \overset{\alpha_2}{\longrightarrow} F $$
with: $\beta_1$= exposure rate (the number of encounter with the virus in a minute) [1/day], $\beta_2$= inverse of latent period [1/day], \ $\beta_3$= inverse of waiting time for confirmation [1/day], \ $\gamma$= recovery rate [1/day], $\alpha_1$= mortality rate of $S^*$ cases [1/day] ($S^*$ = Confirmed and un-categorized), $\alpha_2$= mortality rate of $I$ cases [1/day]. \\
We also use the Optuna package to estimate the parameters. \\
To estimate $\beta_2$ and $\beta_3$, we first calculate the median values of the latent period $L_E$ and of the waiting time for confirmation $L_W$. We assume that the latent period is equal to the incubation period (patients start to have infectivity from their onset dates). \\
We obtain: $\beta_1=0.108008$, $\beta_2=0.407707$, $\beta_3=0.728274$, $\gamma=0.018478$, $\alpha_1=0.068890$, $\alpha_2=2.066063\times 10^{-5}$ (1/[days]) and $R_0=5.44$. The prediction is given in Figure \ref{fig_sewir}.
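\noindent For completeness, a minimal sketch of how the right hand side of (\ref{model_sirfew}) can be integrated numerically, with the values reported above and normalized compartments (an assumption of this illustration), is:
\begin{verbatim}
from scipy.integrate import solve_ivp

b1, b2, b3 = 0.108008, 0.407707, 0.728274          # beta_1, beta_2, beta_3 (1/day)
gamma, a1, a2 = 0.018478, 0.068890, 2.066063e-05   # gamma, alpha_1, alpha_2 (1/day)

def sewirf(t, y):
    s, e, w, i, r, f = y
    return [-b1 * (w + i) * s,
            b1 * (w + i) * s - b2 * e,
            b2 * e - b3 * w,
            (1 - a1) * b3 * w - (gamma + a2) * i,
            gamma * i,
            a1 * b3 * w + a2 * i]

i0 = 1.0 / 16_296_361
sol = solve_ivp(sewirf, (0, 300), [1 - i0, 0, 0, i0, 0, 0], dense_output=True)
\end{verbatim}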
\begin{center}
\par\vspace{-.5cm}
\begin{figure}[h!]
\centering
\includegraphics[width=.9\linewidth]{figures/Senegal_Prediction_SEWIR-F.png}
\caption{Senegal: prediction with SEWIR with Fatal}\label{fig_sewir}
\end{figure}
\end{center}
\par\vspace{-1.cm}
\begin{rem}
Note that it is quite possible to propose stochastic variants for all the above various deterministic models.
\end{rem}
\subsection{Forecasting using Prophet}
In this section, we develop the machine learning techniques for forecasting, to compare with the previous SIR models.
We use Prophet \cite{prophet, sean}, a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. \\
For the average method, the forecasts of all future values are equal to the average (or “mean”) of the historical data. If we let the historical data be denoted by $y_1,...,y_T$, then we can write the forecasts as
$$
\hat{y}_{T+h|T}=\bar{y}=(y_1+y_2+...+y_T)/T
$$
The notation $\hat{y}_{T+h|T}$ is a short-hand for the estimate of $y_{T+h}$ based on the data $y_1,...,y_T$.
A prediction interval gives an interval within which we expect $y_t$ to lie with a specified probability. For example, assuming that the forecast errors are normally distributed, a 95\% prediction interval for the $h$-step forecast is
$$
\hat{y}_{T+h|T}\pm1.96\hat{\sigma_h}
$$
where ${\sigma_h}$ is an estimate of the standard deviation of the $h$-step forecast distribution. \\
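\noindent As a sketch, the average-method forecast and its 95\% prediction interval can be computed as follows; the array {\tt y} of historical values and the choice of $\hat{\sigma}_h$ as the sample standard deviation are assumptions of this illustration.
\begin{verbatim}
import numpy as np

# y holds the historical values y_1, ..., y_T
y_hat = np.mean(y)                   # point forecast, identical for every horizon h
sigma_h = np.std(y, ddof=1)          # a simple estimate of the forecast std deviation
lower, upper = y_hat - 1.96 * sigma_h, y_hat + 1.96 * sigma_h   # 95% interval
\end{verbatim}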
For the data preparation, when we are forecasting at the country level, it is possible for forecasts of small values to become negative. To counter this, we round negative values to zero. Also, no tweaking of seasonality-related parameters and no additional regressors are used.\\
\noindent We can carry out simulations for a longer time and forecast the potential trends of the COVID-19 pandemic. For Senegal, the predicted cumulative numbers of confirmed cases are first plotted for a shorter period of the next 7 days, and then for a 3 weeks ahead forecast with Prophet, with 95\% prediction intervals. \\
The confirmed predictions for Senegal are given in Figures \ref{sn_1w} and \ref{sn_2w} (see Tables \ref{sn_1w_confcases} and \ref{sn_2w_confcases} for the value of the confidence interval).
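\noindent A minimal sketch of the Prophet forecast used for these figures is given below. The data frame layout (columns {\tt ds} and {\tt y}) follows the Prophet convention, while the country filtering and the column names of {\tt df} are assumptions; the package is imported as {\tt fbprophet} in the versions available at the time of writing, and as {\tt prophet} in more recent releases.
\begin{verbatim}
import pandas as pd
from fbprophet import Prophet

# sn: daily cumulative confirmed cases for Senegal, in the two columns Prophet expects
sn = df[df["Country"] == "Senegal"][["Date", "Confirmed"]]
sn = sn.rename(columns={"Date": "ds", "Confirmed": "y"})

m = Prophet(interval_width=0.95)               # 95% prediction intervals, as in the tables
m.fit(sn)

future = m.make_future_dataframe(periods=21)   # 3 weeks ahead
forecast = m.predict(future)

# negative forecasts for small counts are rounded to zero, as described above
forecast["yhat"] = forecast["yhat"].clip(lower=0)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
\end{verbatim}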
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c| }
\hline
ds & $ \hat{y}$ & $\hat{y}_{lower}$ & $\hat{y}_{upper}$ \\
\hline
2020-04-22 & 390.082137 & 381.573661 &398.870511 \\
\hline
2020-04-23 & 399.143136 & 389.013079 & 408.336247 \\
\hline
2020-04-24 & 411.338180 & 400.868963 & 422.324577 \\
\hline
2020-04-25 & 420.807645 & 408.116029 & 432.783086\\
\hline
2020-04-26 & 432.420448 & 418.126379 & 446.991749 \\
\hline
\end{tabular}
\end{center}
\caption{Senegal: predicted cumulative confirmed cases
$\sim$April 26, 2020.}\label{sn_1w_confcases}
\end{table}
\par\vspace{-1.cm}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
ds & $ \hat{y}$ & $\hat{y}_{lower}$ & $\hat{y}_{upper}$ \\
\hline
2020-05-06 & 534.240980 & 483.962306 & 586.061842 \\
\hline
2020-05-07 & 543.301979 & 487.616917 & 600.747825 \\
\hline
2020-05-08 & 555.497023 & 498.895290 & 616.731785 \\
\hline
2020-05-09 & 564.966488 & 501.869902 & 631.122912 \\
\hline
2020-05-10 & 576.579291 & 509.887788 & 647.035185 \\
\hline
\end{tabular}
\end{center}
\caption{Senegal: predicted cumulative confirmed cases $\sim$May 10, 2020.}\label{sn_2w_confcases}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figures/sn_1week_confirmed.png}
\caption{Senegal (Confirmed) : Forcasting for the next week $\sim$April 29, 2020 } \label{sn_1w}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figures/sn_3weeks_confirmed.png}
\caption{Senegal (Confirmed) : Forcasting for the next 3 weeks $\sim$May 10, 2020} \label{sn_2w}
\end{figure}
\par\vspace{-.5cm}
\subsection{Main comments}
\noindent With Prophet, we also perform forecasts for the worldwide data and for three selected countries: China, Italy, and Iran. Firstly, the worldwide predicted cumulative numbers of confirmed and death cases are plotted for a shorter period of the next 7 days. Secondly, we perform a 3-week forecast for both confirmed and death cases.
For China, Italy, and Iran, we plot only the confirmed cases for the next 3 weeks.\\
We can summarize our basic predictions as follows, for the worldwide and by country:
\begin{itemize}
\item For Senegal (see Figure \ref{sn_comp}), the peak of the pandemic will be no later than the end of May. The predictions given by the SIR models and machine learning give roughly the same estimates. The authorities must take strict measures to stop the pandemic of COVID-19, because the peak can be reached around mid-May.
\item Worldwide (see Figure \ref{ww_conf_deaths}), overall, each country must take strict measures to stop the pandemic. By $\sim$May 10, 2020 we may obtain more than 3\,740\,000 confirmed cases (see Table \ref{ww_2w_confcases}).
\item For Italy (see Figure \ref{it}) and Iran (see Figure \ref{ir}), the success of the anti-pandemic fight will come no later than the middle of May. The situation in Italy and Iran is still very severe.
\item For China (see Figure \ref{cn}), based on an optimistic estimation, the COVID-19 pandemic would end within a few weeks in China.
\end{itemize}
\begin{figure}[h!]
\subfloat[SIR]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/Senegal_Prediction_deterministe.png}
\end{minipage}}
%
\subfloat[Stochastic SIR]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ssir2.png}
\end{minipage}}
\newline
%
\subfloat[SIR-D]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/Senegal_Prediction_SIR-D.png}
\end{minipage}}
%
\subfloat[SIR-F]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/Senegal_Prediction_SIR-F.png}
\end{minipage}}
\newline
%
\subfloat[SEWIR-F]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/Senegal_Prediction_SEWIR-F.png}
\end{minipage}}
%
\subfloat[forecasting 3 weeks]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/sn_3weeks_confirmed.png}
\end{minipage}}
\caption{Senegal: Comparative prediction between SIR models and machine learning technics}\label{sn_comp}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
ds & $ \hat{y}$ & $\hat{y}_{lower}$ & $\hat{y}_{upper}$ \\
\hline
2020-05-06 & 3.777760e+06 & 3.512616e+06 & 4.006792e+06\\
2020-05-07 & 3.861820e+06 & 3.573719e+06 & 4.106810e+06\\
2020-05-08 & 3.945427e+06 & 3.633930e+06 & 4.213883e+06\\
2020-05-09 & 4.026816e+06 & 3.691283e+06 & 4.311569e+06\\
2020-05-10 & 4.108123e+06 & 3.740562e+06 & 4.415467e+06\\
\hline
\end{tabular}
\end{center}
\caption{Worldwide: predicted cumulative confirmed cases $\sim$May 10, 2020.}\label{ww_2w_confcases}
\end{table}
\begin{figure}[h!]
\subfloat[1 week confirmed]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ww_1week_confirmed.png}
\end{minipage}}
%
\subfloat[3 weeks confirmed]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ww_3weeks_confirmed.png}
\end{minipage}}
\newline
%
\subfloat[1 week deaths]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ww_1week_deaths.png}
\end{minipage}}
%
\subfloat[3 weeks deaths]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ww_3weeks_deaths.png}
\end{minipage}}
\caption{Worldwide: forecasting for confirmed \& deaths (next week and next 3 weeks)}\label{ww_conf_deaths}
\end{figure}
\begin{figure}[h!]
\subfloat[Forecasting - confirmed in Italy]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/it_3weeks_confirmed.png}\label{it}
\end{minipage}}
%
\subfloat[Forecasting - confirmed in Iran]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.1\textwidth]{figures/ir_3weeks_confirmed.png}\label{ir}
\end{minipage}}
%
\newline
\subfloat[Forecasting - confirmed in China]{
\begin{minipage}[1\width]{ 0.5\textwidth}
\centering
\includegraphics[width=1.3\textwidth]{figures/cn_3weeks_confirmed.png}\label{cn}
\end{minipage}}
\caption{Italy, Iran, and China : forecasting for confirmed (next 3 weeks)}\label{ww_conf}
\end{figure}
\noindent Finally, due to the inclusion of clinically diagnosed suspected cases into the confirmed (quarantined) cases, the situation in some cities appears severe and requires much closer attention.
Individuals, communities and governments have to fight against the spread of the coronavirus, and thoughtful actions must be taken.
\section{Conclusion and Perspectives}\label{ccl}
Under an optimistic estimation, the pandemic in some countries (like China) will end within a few weeks, while for most countries in the world the turning point of the pandemic is not expected before mid-May. In Senegal, we expect the situation to be resolved around the beginning of May.\\
To produce good forecasts, it is fundamental to return to the estimation of the parameters in general. In the stochastic models there is also another main issue, namely the estimation of the volatility parameters, which requires additional work. Most of the time the standard deviation is used to approximate them. However, because of the difficulty of identifying asymptomatic cases, could other ways of estimating them be introduced, as is done in finance? One possible line of investigation is the following:
\begin{itemize}
\item $V= \displaystyle \frac{1}{n}\sum_{i=0}^{n-1} (x_i- \bar{x})^2$, where the $x_i$, $i= 0,\ldots, n-1$, are measures built from the epidemic numbers of two consecutive periods, i.e. functions of $(S_i, I_i, R_i, S_{i-1}, I_{i-1}, R_{i-1})$ (one could take $x_i= \ln\big( \frac {I_i}{I_{i-1}}\big)$ provided that $I_i, I_{i-1}>0$, with possible additional conditions); $\bar{x}$ is the sample mean of the $x_i$. Then $\sigma_1= \sqrt{V}$; a short numerical sketch of both estimators is given after this list.\\
\item If one believes that the volatility cannot be equal to $0$, i.e. we exclude the situation where $x_i= \bar{x}$ for every $i$, a possibility that always accounts for the presence of volatility is to drop the sample mean from the variance formula and to consider the following estimator of the variance: $V= \displaystyle \frac{1}{n}\sum_{i=0}^{n-1} x_i^2.$
\end{itemize}
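\noindent The following Python sketch, which is purely illustrative (the log-ratio construction of the $x_i$ and the synthetic infected counts are our own choices), computes both volatility estimators described above.
\begin{verbatim}
# Two volatility estimators from a series of infected counts I_0, ..., I_n.
# Assumption: x_i = ln(I_{i+1} / I_i) with all I_i > 0.
import numpy as np

def volatility_estimates(infected):
    infected = np.asarray(infected, dtype=float)
    x = np.log(infected[1:] / infected[:-1])      # measures between periods
    n = len(x)
    v_centred = np.sum((x - x.mean()) ** 2) / n   # V with the sample mean
    v_raw = np.sum(x ** 2) / n                    # V with the mean removed
    return np.sqrt(v_centred), np.sqrt(v_raw)

# Illustrative (synthetic) infected counts over eight periods.
sigma_1, sigma_2 = volatility_estimates([10, 14, 22, 30, 45, 60, 75, 82])
print(sigma_1, sigma_2)
\end{verbatim}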
\noindent At the end of this work, we believe that other questions merit further study, such as
\begin{itemize}
\item deepening the stochastic models by considering fractional Brownian motion,
\item introducing non-local terms in some models,
\item minimal actions, such as stochastic optimization, to control the spread of the disease,
\item and finally, mean-field games, which could be an interesting way to investigate pandemic and lockdown problems.
\end{itemize}
\subsection*{Acknowledgement}
The authors thank the Non Linear Analysis, Geometry and Applications (NLAGA) Project for supporting this work. They also thank the anonymous referees for their helpful comments.
\section{Introduction and Related Work}
\label{sec:introduction}
The Multi-Agent Path-Finding (MAPF) problem is a special and important type of the more general Multi-Agent Planning (MAP) problem \cite{torreno2017}.
In MAPF \cite{stern2019}, the task is to find paths for each agent in a group, from a start to a goal location, where interactions between agents are restricted to collision avoidance, as agents move in a shared environment.
While relevant to many real-world applications, such as warehouse automation \cite{wurman2008}, autonomous vehicles \cite{dresner2008,vsvancara2019} and robotics \cite{honig2016}, recent research in the field has focused on expanding the classical MAPF framework to fit more real-world applications \cite{ma2016a,felner2017,salzman2020}.
A main research direction towards the real-world applicability of MAPF problems is the problem of lifelong MAPF, also known as the Multi-Agent Pickup and Delivery (MAPD) problem.
In this problem, a group of autonomous agents operate in a shared environment to complete a stream of incoming tasks, each with start and goal locations, while avoiding collisions with each others \cite{ma2017,liu2019}.
A similar problem, studied by Ma et al.~\cite{ma2016b}, is the package-exchange robot-routing problem (PERR), where payload exchanges and transfers are allowed, thus enabling the modelling of more general transportation problems.
In this work, we introduce the \emph{Cooperative-MAPF ({\cmapf})} framework, a MAPF extension, in which a group of agents collaborate towards completing a \emph{cooperative task}.
The classical MAPF problem is inherently cooperative, since each agent has to arrive at its goal, without colliding with other agents.
However, in many real-world applications, agents that operate in a shared environment are often \emph{heterogeneous} \cite{atzmon2020a} and may have a different set of abilities and restrictions.
Therefore, in the {\cmapf} framework, achieving goals and completing tasks may not depend only on avoiding collisions between agents, but also on actively coordinating their actions.
Simply put, we may want agents not just to ``not interrupt'' each other, but also help each other achieve their goals.
We term this a \emph{truly cooperative} setting.
Our motivating problem is taken from the warehouse-automation domain \cite{wurman2008}.
In this problem, storage locations host inventory pods that hold goods of different kinds.
Robots operate autonomously in the warehouse, picking up and carrying inventory pods to designated drop-off locations, where goods are manually taken off the pods for packaging.
In this scenario, the robot's main task is to transport the pods around the warehouse, and we refer to robots executing such tasks as \emph{transfer units}.
Research in a different, yet closely-related area, has studied the problem of autonomous robotic arms capable of picking-up a specific item from an inventory pod \cite{correll2016}.
We refer to a moving robot with such arm as a \emph{grasp unit}.
This motivates the investigation of an improved warehouse scenario, where robots of two types, grasp and transfer units, can work together in coordination (for example, by scheduling a meeting between them) to improve some optimization objective.
For instance, the number of completed tasks for a given time period.
This motivating example is depicted in Fig. \ref{fig:warehouse-illustration}.
We incorporate a truly cooperative behavior to classical MAPF by assigning cooperative tasks (rather than goals) to agents, similar to (non-cooperative) tasks defined in the MAPD literature~\cite{ma2017,liu2019}.
Agents cooperate in the context of these cooperative tasks, and are only able to complete
tasks by coordinating their actions and goals with each other.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\columnwidth]{figures/warehouse-illustration.png}
\caption{Two pairs of robots operate in a warehouse--two grasp units and two transfer units.
Grasp unit $\#1$ arrived at the task start location, i.e., next to the shelf. It will pick up the box and then drive to the meeting location (marked with a yellow square) to transfer the box to transfer unit $\#1$.
The transfer unit has a path (marked with blue arrows) to the meeting point, and from there to the task goal (the P square), where the box will be picked by a human employee.
The second pair of robots ($\#2$) are at their meeting location.}
\label{fig:warehouse-illustration}
\vspace{-0.5em}
\end{figure}
We suggest a formulation to the {\cmapf} problem which is derived from the classical MAPF formulation~\cite{stern2019}. In addition, we discuss differences and further extensions to the {\cmapf} framework which can be used towards achieving more cooperative capabilities in a MAPF problem.
In the suggested formulation, presented in Section~\ref{sec:background}, there is more than one set of agents, possibly representing heterogeneous real-world agents, and we specifically focus on the case of two sets of agents.
The cooperation between agents is restricted to the form of \emph{meetings}, where agents have to schedule a meeting location and time to complete a task.
We also discuss other forms of agent interactions, and generalizations to the suggested formulation.
Besides the aforementioned warehouse problem, more real-world problems can be modeled using the {\cmapf} framework, such as the involvement of aerial robots in fulfilment centers~\cite{shome2020}, the truck-and-drone ``last-mile'' delivery problem~\cite{murray2020} and multi-drone delivery using transit networks~\cite{choudhury2020}.
Based on the suggested formulation, we introduce (in Section~\ref{sec:algorithm}) \emph{Cooperative Conflict-Based Search (\ouralg)}, an optimal three-level algorithm that is heavily based on two previously-suggested optimal algorithms:
the well-known Conflict-Based Search (\cbs)~\cite{sharon2015} for solving a classical MAPF problem and the Conflict-Based Search with Optimal Task Assignment (\cbsta)~\cite{honig2018} for solving the anonymous MAPF problem, where we also need to assign goals (or tasks) to each agent.
We define, similarly to MAPD problems~\cite{cap2015,ma2017}, a notion of \emph{well-formed} problem instances, representing realistic and practical environments in MAPF domains, for which a solution to the {\cmapf} problem is guaranteed to exist.
\newtext{Finally, we introduce two improvements to the basic version of \ouralg.}
For clarity of exposition, the description of our \ouralg algorithm is based on the original \cbs algorithm which has numerous extensions and improvements. Many of these improvements can be immediately applied to \ouralg, as we discuss in Section~\ref{sec:discussion}.
A theoretical analysis of \ouralg is presented in Section~\ref{sec:theory} where we prove that \ouralg finds an \emph{optimal} solution for any well-formed {\cmapf} problem instance (formally defined in Section~\ref{sec:well-formed}).
Since the MAPF problem is NP-hard, so is {\cmapf}.
We therefore discuss \ouralg runtime, provide a qualitative analysis, and show empirically that it can solve nontrivial problem instances.
More specifically, we present results of running \ouralg on several MAPF benchmarks (detailed in Section~\ref{sec:experiments}).
\newtext{We show that our two suggested \ouralg improvements significantly improve the algorithm's performance.}
Finally, in Section~\ref{sec:discussion} we discuss some extensions and research directions, specifically for \ouralg, but more importantly, general for the {\cmapf} framework.
\section{Background and Setting}
\label{sec:background}
We first describe and formulate the classical MAPF problem followed by a formulation of our proposed Cooperative-MAPF ({\cmapf}) framework. Then, we define the objective function used in {\cmapf}.
\subsection{Classical MAPF}
In the classical MAPF problem \cite{stern2019}, we are given an undirected graph $G=\argument{V,E}$ whose vertices $V$ correspond to locations and whose edges $E$ correspond to connections between the locations that the agents can move along.~${A=\set{a_1,\dots,a_\numagents}}$ is a set of $\numagents$ agents, each is provided with a start and goal location, $\argument{s_i, g_i}$ s.t.~${s_i,g_i\in V}$.
Time is discretized and at each time step, each agent can either \textit{move} on the graph or \textit{wait} at its current vertex. A feasible MAPF solution is a set of paths $\calP=\set{p_1,\dots,p_k}$ such that $p_i$ is a path for agent $a_i$ from vertex $s_i$ to vertex $g_i$ and there are no conflicts between any two paths in~$\calP$. We consider two types of conflicts---a \emph{vertex conflict}, in which two agents occupy the same vertex at the same time step, and an \emph{edge conflict} (or \emph{swapping conflict}), in which two agents traverse the same edge from opposite sides (``switch sides'') at the same time step. An optimal solution is a feasible set of paths~$\calP$ which optimizes some objective function (specifically defined in Section~\ref{sec:objective}).
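To make the two conflict types concrete, the following Python sketch returns the first vertex or swapping conflict found in a set of paths; representing a path as a list of vertices indexed by time step, and letting an agent stay at its last vertex once it has finished, are conventions of this sketch rather than part of the problem definition.
\begin{verbatim}
# Detect vertex and edge (swapping) conflicts in a set of MAPF paths.
from itertools import combinations

def position(path, t):
    return path[t] if t < len(path) else path[-1]   # stay-at-target convention

def first_conflict(paths):
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        for i, j in combinations(range(len(paths)), 2):
            if position(paths[i], t) == position(paths[j], t):
                return ("vertex", i, j, position(paths[i], t), t)
            if t + 1 < horizon and \
               position(paths[i], t) == position(paths[j], t + 1) and \
               position(paths[i], t + 1) == position(paths[j], t):
                return ("edge", i, j, t + 1)
    return None   # the solution is conflict-free

print(first_conflict([["a", "b", "c"], ["b", "a", "a"]]))   # ('edge', 0, 1, 1)
\end{verbatim}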
\subsection{Cooperative-MAPF ({\cmapf})}
\label{sec:cooperative-mapf}
We wish to incorporate cooperative behavior into the classical MAPF problem.
This is done by replacing agent goals with a set of \emph{cooperative tasks}, i.e., tasks that require the cooperation and coordination of a group of agents in order to be completed.
Specifically, here we limit ourselves to cooperative tasks (simply referred to as tasks in the rest of this paper) that require pre-defined pairs of agents to work together.
We discuss possible extensions in Section~\ref{sec:discussion}.
In the {\cmapf} problem we are given an undirected graph $G=\argument{V, E}$.
The set of agents $A$ consists of two distinguishable sets, i.e.,~${A=\groupA \cup \groupB}$.
Each set includes~$\numagents$ agents of a specific type, namely~${\groupA=\set{\agentAi{1},\dots,\agentAi{\numagents}}}$ and~${\groupB=\set{\agentBi{1},\dots,\agentBi{\numagents}}}$.
The two types of agents may differ in their traversal capabilities or possible actions in a location (for instance, picking up an object).
We are also given a set of tasks~${\taskset=\set{\task{1},\dots,\task{\numagents}}}$ s.t. each task~$\task{i}$ is assigned to a pair of agents~${\argument{\agentAi{i}, \agentBi{i}}}$.
We refer to~${\agentAi{i}}$ and~${\agentBi{i}}$ as the \emph{\agentAname} and \emph{\agentBname} agents, respectively.
Each task~${\task{i} \in \taskset}$ is defined by a start location $s_i$ and a goal location $g_i$.
Each agent has a unique start location given by a function~${\agentstart:A \rightarrow V}$ s.t.~${\agentstartfunc{a}}$ is the location of agent~$a$ at time step 0.
An agent goal is not directly given but rather derived from its assigned task.
In our setting, a task~${\task{i} = (s_i, g_i)}$ for agents~${(\agentAi{i}, \agentBi{i})}$ is composed of the following steps:
(i)~moving the {\agentAname} agent $\agentAi{i}$ to the task's start location~$s_i$,
(ii)~moving both agents to a so-called \emph{meeting}~${m_i=\meeting{i}}$ where~${\meetinglocation{i} \in V}$ is the meeting location and~${\meetingtime{i}}$ is the meeting time step, both of which are computed by the algorithm (and not specified by the task\footnote{Note that a meeting $m_i$ is defined by its location and time. Thus, when referring to a meeting, we mean both.}),
(iii)~moving the {\agentBname} agent to the task's goal location $g_i$.
For a visualization, see Fig. \ref{fig:warehouse-illustration}.
Formally, a solution to a {\cmapf} instance is a set of path pairs~${\calP=\set{(p_1^\agentA, p_1^\agentB),\dots,(p_k^\agentA, p_k^\agentB)}}$ s.t. for each pair~${1 \leq i \leq k}, \:$ $p_i^\agentA, p_i^\agentB$ start in~${\agentstartfunc{\agentAi{i}}}$ and $\agentstartfunc{\agentBi{i}}$, respectively. Path~${p_i^\agentA}$ goes through~${s_i}$ at some time step $t_i$, and both paths contain a meeting at vertex~${\meetinglocation{i}}$ at the same time~${\meetingtime{i}}$ s.t.~${t_i \leq \meetingtime{i}}$.
Finally,~${p_i^\agentA}$ ends in vertex~${\meetinglocation{i}}$ at time~${\meetingtime{i}}$~and ${p_i^\agentB}$ ends in vertex~${g_i}$.
Similarly to classical MAPF, in order for a solution to be feasible, there should be no conflicts between the paths in $\mathcal{P}$, with the exception that the paths of agents sharing a task intersect at their meeting point.
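The following Python sketch checks this definition for a single pair of paths; it uses the same list-of-vertices convention as the previous sketch, and conflicts between different pairs are assumed to be checked separately.
\begin{verbatim}
# Check that one pair of paths realises a cooperative task (s, g)
# with a meeting (v, t); paths are vertex lists indexed by time step.
def valid_task_paths(path_a, path_b, task, meeting):
    s, g = task
    v, t = meeting
    reaches_start = s in path_a[:t + 1]    # first agent visits s no later than t
    ends_at_meeting = len(path_a) == t + 1 and path_a[t] == v
    at_meeting = len(path_b) > t and path_b[t] == v
    ends_at_goal = path_b[-1] == g
    return reaches_start and ends_at_meeting and at_meeting and ends_at_goal

# Corridor 0-1-2-3-4: task (1, 4) with a meeting at vertex 2 at time step 3.
print(valid_task_paths([0, 1, 2, 2], [4, 4, 3, 2, 3, 4], (1, 4), (2, 3)))   # True
\end{verbatim}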
\subsection{Objective functions for Cooperative MAPF}
\label{sec:objective}
Arguably, the most common objective functions used in classical MAPF to evaluate solutions are \emph{makespan (MKSP)} and \emph{sum-of-costs (SOC)} \cite{stern2019}, both to be minimized.
\newtext{
MKSP is defined as the number of time steps required for all agents to reach their target, while SOC is the sum of time steps required by each agent to complete all tasks.
In this paper we focus on the SOC objective, which is,
arguably, more natural for our setting---it implicitly minimizes both the time it takes to complete a task, and the time the {\agentAname} finishes its part in the task.
We note that all results presented can be applied to the MKSP objective as well.}
The sum of costs of~$\calP$ is defined as $\sum_{1\leq i \leq k}\left(\setsize{p_i^\agentA}+\setsize{p_i^\agentB}\right)$. Wait actions are counted until an agent finishes its plan (i.e., after the meeting for $\agentAi{i}$ and after arriving at $g_i$ for $\agentBi{i}$).
\subsection{Well-Formed {\cmapf} Instances}
\label{sec:well-formed}
It is possible to efficiently check if a MAPF instance is solvable \cite{yu2014}.
However, checking if a {\cmapf} instance is solvable is not trivial due to the additional requirement that meetings need to be computed.
Therefore, we restrict our discussion to \emph{well-formed} instances \cite{cap2015, ma2017}.
The intuition behind the well-formed definition is that agents can rest (that is, stay forever) in locations, called \emph{endpoints}, where they cannot block the execution of other tasks.
The set~$V_{ep}$ of endpoints contains the start locations of all agents together with the start and goal locations of all tasks.
The complement set~${V \setminus V_{ep}}$ contains all non-endpoints vertices.
We define a pair of vertices as \emph{connected} if there exists a path between them which only includes non-endpoint vertices.
\begin{definition}
\label{def:well-formed}
A {\cmapf} instance is well-formed iff
\renewcommand{\labelenumi}{\textbf{(C\arabic{enumi})}}
\begin{enumerate}
\item
\label{C1}
\changed{
For every task there exists a vertex~${v
\in V \setminus V_{ep}}$ that is: (i)~connected with the task start vertex, (ii)~connected with the task goal vertex, and (iii)~connected with the {\agentBname} agent's start vertex.}
\item
\label{C2}
\changed{
For every task, the task start vertex is connected with the {\agentAname} agent's start vertex.}
\end{enumerate}
\end{definition}
\input{figures/well-formed}
Fig. \ref{fig:not-well-formed} shows an example of a {\cmapf} instance which is not well-formed.
In this problem,~\textbf{(C\ref{C1})} in Definition~\ref{def:well-formed} is violated: there does not exist a vertex~$u$ which is connected to both $s_1$ and $\agentstartfunc{\agentBi{1}}$.
Fig. \ref{fig:well-formed} shows a well-formed instance: all white squares are valid meeting points.
\newtext{
We restrict the discussion on Co-MAPF only to well-formed instances.
It allows us to efficiently test if an instance is solvable and ensure completeness of our suggested algorithm.
We summarize this guarantee in the following claim and lemma. Proofs omitted due to space considerations.
}
\begin{claim} \label{claim:well-formed}
Checking if a {\cmapf} instance is well-formed can be done in polynomial time.
\end{claim}
\begin{lemma} \label{lemma:well-formed}
Every well-formed {\cmapf} instance is solvable.
\end{lemma}
\section{Cooperative Conflict-Based Search}
\label{sec:algorithm}
We now present the Cooperative Conflict-Based Search (\ouralg) algorithm, a three-level optimal planning algorithm for solving well-formed {\cmapf} problem instances.
As our suggested algorithm is based on \cbs \cite{sharon2015}, we start with a brief description of it.
\cbs is a two-level search algorithm.
The high-level performs a best-first search over a so-called conflicts-tree (CT). Each CT node consists of a solution, its cost and a set of constraints.
\cbs finds conflicts in the solution and resolves them by imposing constraints on agents.
A constraint is either a vertex constraint~$\argument{a,v,t}$, or an edge constraint~$\argument{a, u, v, t}$.
The low-level constructs paths for each individual agent while satisfying the imposed constraints.
\cbs resolves conflicts by splitting a CT node and introducing an additional constraint for each agent participating in the conflict at the lower level.
We now continue with an overview of \ouralg (depicted in Fig. \ref{fig:co-cbs-example} and outlined in Algorithm~\ref{alg:co-cbs}). We then continue with lower-level details.
\subsubsection{Algorithm overview}
\ouralg is a search algorithm based on \cbs that considers the cooperative aspect of the problem.
More specifically, \ouralg consists of three levels of search in three different spaces (similar to~\cite{honig2018} and~\cite{surynek2020}):
(i)~the \emph{meetings space},
(ii)~the \emph{conflicts space}
and
(iii)~the \emph{paths space}.
The meetings space contains all possible combinations of meetings, one for each task.
We refer to the three levels of search as the meetings level, conflicts level and paths level, respectively.
\input{algorithms/co-cbs}
\ouralg simultaneously searches over all possible meetings and for each meeting, over all possible paths.
To perform this search in a systematic and efficient manner, we need to consider an \emph{ordering} of the meetings.
Indeed, in Equation~\ref{eq:meeting_cost_soc} we define a meeting's cost which is dependent both on the meeting's location and time.
To efficiently traverse the set of possible meetings, we introduce the notion of a \emph{Meetings Table} which stores for each meeting location the currently-best meeting time.
As we will see, this table will allow us to iterate over all meetings in a best-first manner.
In contrast to \cbs that constructs a single conflicts-tree (CT), \ouralg creates a forest of CTs, similar to~\cite{honig2018}.
Each CT starts in a \emph{root} node and corresponds to a specific set of meetings (a specific meeting for each task).
In \ouralg, each CT node has two additional fields (when compared to \cbs): \emph{root} specifies if the node is a root or a \emph{regular} node and \emph{meetings} specifies the current set of meetings (one for each task) which is used during the path-level search.
\ouralg starts with a single root node, with the optimal set of meetings
(see Equation \ref{eq:optimal_meetings_set}), while ignoring possible conflicts between agents.
In each iteration, \ouralg selects a lowest-cost node from the \textsc{Open} list (either a root or regular node), in a best-first approach similar to \cbs.
Whenever a root node is selected, in addition to splitting the tree due to a conflict,
\ouralg also expands it in the meetings space by generating the next best sets of meetings.
Namely, new root nodes are created only on demand.
For each expanded node, given its set of meetings and constraints, the paths level computes a solution by planning the different steps a task solution is composed of (see Section~\ref{sec:cooperative-mapf}).
\input{algorithms/meetings_table}
\input{algorithms/expand_root}
\subsubsection{Computing the Meetings Table}
\label{sec:meeting_tables}
We denote the cost of a meeting $m_i=\meeting{i}$ as $C_i(\meetinglocation{i},\meetingtime{i})$. $C_i$ is given for the SOC objective, by
\begin{equation}
\label{eq:meeting_cost_soc}
C_i(v,t) = \left\{\begin{array}{cc}
2\cdot t + d\argument{v, g_i}, & t \geq \earliestvertextime{i}\\
\infty, & \text{otherwise}
\end{array}\right.,
\end{equation}
where $\earliestvertextime{i}$ is the earliest possible meeting time at $v$ for task $\task{i}$, i.e., the earliest time both assigned agents can arrive at $v$. Specifically, $\earliestvertextime{i}$ is defined as
\begin{equation}
\label{eq:earliest_meeting_time}
\begin{split}
\earliestvertextime{i} = \max\left\{
d\argument{\agentstartfunc{\agentAi{i}}, s_i} + d\argument{s_i, v}, \right.\\
\left.d\argument{\agentstartfunc{\agentBi{i}}, v}\right\}
\end{split},
\end{equation}
where~$d(u,v)$ is the length of the single-agent shortest path from $u$ to $v$.
If~${d(u,v)=\infty}$, no such path exists.
The first step of \ouralg is to compute~$\meetingtable{i}{}$, the meetings table for each task~$\task{i}$ (lines 3-4).
The meetings table is a function~${\meetingtable{i}{}: V \rightarrow \mathbb{R}\cup\set{\infty}}$ that returns for each vertex~${v\in V}$ the cost for completing task~$\task{i}$ with a meeting in~$v$ at the earliest possible time.
${\meetingtable{i}{}\argument{v}}$ is initialized for each~${v \in V}$ with~${\meetingtable{i}{}\argument{v} = C_i(v,\earliestvertextime{i})}$.
Each meetings table is stored as a heap which allows for $insert$, $update$ and $getMin$ operations in $\calO(\log\setsize{V})$.
These operations are used during the root node expansion which will be described shortly.
We compute $\meetingtable{i}{}\argument{v}$ for all $v\in V$ in polynomial time using \astar and \dij's algorithm as described in Algorithm~\ref{alg:meetings_table}.
Computing the meetings table for each task $\task{i}$ requires finding paths from every node $v\in V$ to the agents' start locations, as well as tasks' start and goal locations.
Given the meeting tables, we can check if the given problem instance is well-formed (see Definition \ref{def:well-formed}).
More specifically, it is well-formed iff for every meetings table $\meetingtable{i}{}$ there exists a vertex $v \in V \setminus V_{ep}$ such that $\meetingtable{i}{}\argument{v}<\infty$.
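As an illustration, the following Python sketch computes the earliest meeting times of Equation~(\ref{eq:earliest_meeting_time}) and the resulting costs $C_i(v,\earliestvertextime{i})$ for a single task; plain breadth-first search on an unweighted graph replaces the \astar{} and \dij{} searches of Algorithm~\ref{alg:meetings_table}, and the adjacency-list representation and function names are our own simplifications.
\begin{verbatim}
# Meetings table for one task on an unweighted, undirected graph
# given as an adjacency list {vertex: [neighbours]}.
from collections import deque

INF = float("inf")

def bfs_distances(graph, source):
    dist = {v: INF for v in graph}
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if dist[w] == INF:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def meetings_table(graph, start_a, start_b, task_start, task_goal):
    d_task = bfs_distances(graph, task_start)    # d(s_i, v)
    d_b = bfs_distances(graph, start_b)          # d(start of second agent, v)
    d_goal = bfs_distances(graph, task_goal)     # d(v, g_i), undirected graph
    d_a_to_s = bfs_distances(graph, start_a)[task_start]
    table = {}
    for v in graph:
        earliest = max(d_a_to_s + d_task[v], d_b[v])  # earliest meeting time at v
        table[v] = 2 * earliest + d_goal[v]           # SOC cost of meeting at v
    return table
\end{verbatim}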
\input{figures/Co-CBS-example}
\subsubsection{Root initialization}
We define the cost of a set of meetings~${\meetingsetset}$ as follows:
${C\argument{\meetingset} = \sum_{i=1}^{\numagents}{C_i\meeting{i}}}$.
$\meetingset^*$ is an optimal set of meetings that minimizes the problem objective while ignoring possible conflicts between agents.
Namely,
\begin{equation}
\label{eq:optimal_meetings_set}
\meetingset^*\in \argmin_{\meetingset}{C\argument{\meetingset}}.
\end{equation}
\ouralg's search starts with creating the initial CT root node with an empty set of constraints, and an optimal set of meetings~${\meetingset^*}$, by choosing a lowest-cost meeting for each task from the meeting tables (lines 5-7).
Given $\meetingset^*$, the paths level is called to compute individual paths for each agent (line~8).
This is similar to \cbs, except that in the path-level search we plan for each task~$\task{i}$ in parts: (i)~for~${\agentAi{i}}$ from~${\agentstartfunc{\agentAi{i}}}$ to~$s_i$, and then from~$s_i$ to~${\meetinglocation{i}}$ at time $\meetingtime{i}$, and (ii)~for~${\agentBi{i}}$ from~${\agentstartfunc{\agentBi{i}}}$ to~${\meetinglocation{i}}$ at time~${\meetingtime{i}}$ and then to~$g_i$.\footnote{For simplicity, we assume a disappear-at-target behavior \cite{stern2019}, such that the {\agentAname} agent disappears after the meeting, and the {\agentBname} agent disappears after completing the task (at the task goal location).}
Note that when planning for a meeting, we should consider both the meeting location and time.
The initial CT root node cost is computed and it is inserted to the \textsc{Open} list (lines~{9-10}).
\subsubsection{Selecting a node for expansion}
As long as there are nodes in the \textsc{Open} list (line~11), we follow \cbs's best-first search approach and select a node with a lowest cost (line~12).
If the \textsc{Open} list contains both root and regular nodes with the same lowest cost, \ouralg chooses to expand a regular node (to perform this in practice, \ouralg keeps root and regular nodes in two separate \textsc{Open} lists).
\subsubsection{Expanding a root node}
\label{sec:expand_root}
After selecting a lowest-cost node~$N$ from the \textsc{Open} list, \ouralg checks for conflicts in its solution (line 13).
If none are found, $N.solution$ is returned as the optimal solution (lines 14-15).
Otherwise, if~$N$ is a root node, it is expanded to get its successors in the meetings space (lines 16-17).
The process of expanding a root node is described in Algorithm~\ref{alg:expand_root}.
Given the current set of meetings (in the expanded root node)~${\meetingsetset}$, \ouralg generates up to $\numagents$ new sets of meetings, one for each task.
This is done in a non-decreasing manner, by replacing one meeting $m_i\in\meetingset$ at a time, an idea similar to the Increasing Cost Tree Search (\icts) \cite{sharon2013} algorithm, thus creating $\numagents$ new root nodes.
To get the next-best meeting for task $\task{i}$, we have to search both for different locations and time steps in the meetings space.
The meetings table $\meetingtable{i}{}$ of $\task{i}$ initially consists of meetings at each possible location, at the earliest time possible.
Each time \ouralg invokes the get-next-meeting procedure for $\task{i}$ (line 6 in Algorithm \ref{alg:expand_root}), it returns the lowest-cost meeting~${m_i=\meeting{i}}$ from $\meetingtable{i}{}$.
The table is then updated so that it holds the next lowest-cost meeting.
This is done by updating~${\meetingtable{i}{}\argument{\meetinglocation{i}} =C_i\argument{\meetinglocation{i},\meetingtime{i}+1}}$.
Namely, updating the cost of meeting at~${\meetinglocation{i}}$, but at time~${\meetingtime{i}+1}$ rather than~${\meetingtime{i}}$.
The next time the get-next-meeting procedure is invoked, the next best meeting will be returned by the table.
Subsequently, a new path is planned for the pair of agents whose meeting changed, the new CT node cost is computed and it is inserted into the \textsc{Open} list.
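A minimal Python sketch of this bookkeeping is given below: the meetings table of one task is kept as a heap of (cost, location, time) triples, and re-inserting a popped meeting at the same location one time step later increases its cost by exactly~$2$ under the SOC objective of Equation~(\ref{eq:meeting_cost_soc}); the data layout is an illustrative simplification.
\begin{verbatim}
# Per-task meetings table kept as a heap of (cost, location, time) triples.
import heapq

class MeetingsHeap:
    """earliest[v] = earliest meeting time at v; dist_to_goal[v] = d(v, g_i).
    Both maps are assumed precomputed; vertices must be orderable for ties."""

    def __init__(self, earliest, dist_to_goal):
        self.heap = [(2 * t + dist_to_goal[v], v, t)
                     for v, t in earliest.items()
                     if 2 * t + dist_to_goal[v] != float("inf")]
        heapq.heapify(self.heap)

    def get_next_meeting(self):
        cost, v, t = heapq.heappop(self.heap)
        # Same location, one step later: the SOC cost grows by exactly 2.
        heapq.heappush(self.heap, (cost + 2, v, t + 1))
        return (v, t), cost
\end{verbatim}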
\subsubsection{Resolving a conflict}
The last part of the algorithm is almost identical to \cbs: when expanding a node $N$ (either root or regular), \ouralg splits its CT and creates a regular node for each agent involved in the first conflict found (lines 18-19).
These nodes have the same set of meetings as $N$ (line 22).
\section{Theoretical Analysis}
\label{sec:theory}
\subsection{\ouralg Completeness}
We restrict our discussion to well-formed {\cmapf} instances.
We guarantee completeness of \ouralg on well-formed instances using Claim~\ref{claim:well-formed}, which states that if the instance is not well-formed we can identify it by running a polynomial-time test procedure before executing \ouralg.
\begin{theorem} \label{theorem:completness}
\ouralg will return a solution for any well-formed {\cmapf} instance.
\end{theorem}
\begin{proofs}
By Lemma~\ref{lemma:well-formed} we know that there exists a solution.
Denote the set of meetings which forms the solution by~${\meetingset=\set{\meeting{1}, \meeting{2},\dots,\meeting{k}}}$.
\ouralg's meetings level performs a systematic best-first search across the meetings space and will thus eventually create a root node, denoted by~$R_\meetingset$, whose set of meetings is~$\meetingset$.
There exists a feasible solution such that each pair of agents~${\argument{\agentAi{i},\agentBi{i}}}$ meet at~${\meeting{i}}$.
By the completeness of \cbs it is guaranteed that the search from the CT root node~${R_\meetingset}$ will eventually find the solution.
\end{proofs}
\subsection{\ouralg Optimality}
We again restrict the discussion to well-formed instances.
By Theorem~\ref{theorem:completness} we are guaranteed that \ouralg solves every well-formed {\cmapf} instance.
We show that it returns an optimal solution for the SOC objective function.
\begin{lemma} \label{lemma:alg-non-decreasing}
Let $\meetingset$ be a set of meetings with~${C\argument{\meetingset}=c}$ and let~$N$ be a CT node with cost larger than~$c$.
\ouralg will generate a root node corresponding to~$\meetingset$ before expanding~$N$.
\end{lemma}
\begin{proofs}
Assume that there exists a set of meetings~$\meetingset$ s.t.~${C(\meetingset) = c}$, that hasn't been generated yet.
Assume by contradiction that \ouralg expands a node~$N$ that has a solution with cost~${c' > c}$.
By definition, the first set of meetings $\meetingset_0$ that is generated (line 6 in Algorithm~\ref{alg:co-cbs}) induces a solution which minimizes the SOC objective function.
In particular, this implies that the cost of completing all tasks in the (possibly infeasible) solution induced by~${\meetingset_0}$ is less than or equal to~$c$.
Similarly, the cost of completing all tasks in the solution induced by~${\meetingset}$ is less than or equal to~$c$.
Therefore, there exists a sequence of meeting placements that may be generated during the meetings-level search, from~${\meetingset_0}$ to~${\meetingset}$, such that the cost of the meetings sets in the sequence would remain smaller or equal to $c$ at all time, i.e., each set of meetings $\meetingset'$ in this sequence holds~${C(\meetingset') \leq c}$.
The way the meetings-level search works ensures that there must be at least one root node in the \textsc{Open} list consisting of one of these meeting sets.
Therefore, there exists a root node that hasn't been expanded yet in the \textsc{Open} list with a cost smaller than~$c'$, in contradiction to the best-first search approach which chose node~$N$ with a larger cost for expansion.
\end{proofs}
\begin{theorem}
\ouralg returns an optimal solution for any well-formed {\cmapf} instance.
\end{theorem}
\begin{proofs}
Assume that there exists an optimal solution with some cost $c^*$. \ouralg performs a \cbs-like search on each generated CT, namely, it searches through a forest of conflict trees.
By Lemma~\ref{lemma:alg-non-decreasing} we get that the cost of each expanded root node of each CT constitutes a lower-bound on~$c^*$.
From the optimality guarantees of \cbs, we get that any node expanded in each of those CTs (i.e., regular nodes) is also a lower bound on~$c^*$.
Due to \ouralg's best-first approach, it won't expand a node with a cost larger than~$c^*$ before completing a search through all possible CT nodes with cost~$c^*$ (by expanding neither a root node nor a regular one).
Since there exists a solution with such cost, and the number of possible solutions with a specific cost is finite, \ouralg will eventually expand a node with an optimal and feasible solution and return it.
\end{proofs}
\subsection{\ouralg Runtime Analysis}
\ouralg is an extension of \cbs which adds a level that searches in the meeting space.
The size of the meeting space is~${\calO((|V| \cdot c^*)^k)}$.
In the worst case, \ouralg would generate all possible meetings and perform a \cbs search for each set of meetings (up-to cost $c^*$).
This may result in a number of expanded nodes on the order of the number of possible meeting sets times the number of nodes expanded by \cbs~{\cite{sharon2015,gordon2021}}.
In practice, in our empirical evaluation (Section \ref{sec:experiments}) we observe that the number of generated meeting sets is typically small, especially for large or sparse environments (see Fig.~\ref{fig:generated_meetings}).
This means that the number of full \cbs searches (one for each meetings set) is usually small.
However, in scenarios with many conflicts, a large number of root nodes (and, meeting sets) are created.
This causes an increase in run time which is exponential in the number of tasks.
\section{\ouralg Improvements}
\newtext{
\label{sec:improvements}
In the previous section, we introduced the basic version of \ouralg for solving the {\cmapf} problem.
\ouralg creates a forest of conflict trees and runs \cbs on each tree.
Thus, we can apply previously-suggested \cbs improvements to \ouralg.
One such improvement that has been shown to significantly decrease \cbs's run-time is \emph{prioritizing conflicts (PC)} \cite{boyarski2015}.
In this section we present in detail the application of PC to \ouralg.
More \cbs improvements are discussed in Section~\ref{sec:discussion}.
In addition, we introduce a unique improvement for \ouralg called \emph{Lazy Expansion (LE)}, which exploits special characteristics of root nodes.
Both improvements keep \ouralg optimal, while introducing a significant improvement in run time, as shown empirically in Section~\ref{sec:experiments}.
}
\subsection{Prioritizing Conflicts (PC) for \ouralg}
\newtext{
\label{sec:PC}
The Improved \cbs (\icbs) algorithm \cite{boyarski2015} introduced an enhancement to \cbs by defining rules dictating how to split the CT.
In particular, conflicts are divided into three types: \emph{cardinal}, \emph{semi-cardinal} and \emph{non-cardinal}.
Cardinal conflicts always cause an increase in the solution cost, therefore \icbs chooses to split cardinal conflicts first.
Cardinal conflicts are identified by examining the width of a \emph{multi-value decision diagram (MDD)} \cite{sharon2013}, which is constructed for each low-level path found.
The MDD is a directed acyclic graph which compactly stores all possible paths of a given cost~$c$ for a given agent, from its start vertex to its goal vertex.
An MDD of cost~$c$ consists of~$c$ layers, corresponding to~$c$ time steps.
}
\newtext{
Applying PC to \ouralg is not straightforward, since an MDD stores paths from a start vertex to a goal vertex, while in {\cmapf} paths are constrained to ensure cooperation between agents.
More specifically, in our {\cmapf} setting, each agent has an \emph{intermediate goal}, i.e., the task start location, or the meeting location (at a specific time).
We therefore need to modify the way an MDD is constructed, and indeed we suggest a method for efficiently doing so for both agents.
}
\newtext{
For the {\agentAname} agent, we must ensure it passes through the task's start location.
In other words, we need to prune MDD nodes that are not part of any of the agent's paths which pass the task's start location.
We refer to such nodes as \emph{invalid nodes}.
Constructing an MDD efficiently is done using two breadth-first searches--one forward and one backward (start to goal and vice versa)~\cite{sharon2013}.
In order to efficiently prune invalid nodes, we proceed as follows: during the forward search, we mark MDD nodes corresponding to the task start location and all their descendants as \emph{valid\_forward}.
Similarly, during the backward search, we mark these nodes and all their ancestors as \emph{valid\_backward}.
Finally, all MDD nodes that are not marked with either flags are pruned.
}
\newtext{
For the {\agentBname} agent, constructing the MDD requires only slight changes.
We need to constrain the agent to be at the meeting's location at the meeting's time.
We do so by eliminating all other nodes from the MDD layer that corresponds to the meeting time during the forward pass of the MDD construction.
}
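The following Python sketch illustrates the pruning rule for the {\agentAname} agent; representing the MDD as a plain list of directed edges between (vertex, time) nodes, and computing the two markings as reachability closures instead of during the breadth-first passes themselves, are simplifications made for illustration.
\begin{verbatim}
# Prune MDD nodes that lie on no path visiting the task start location.
# The MDD is a list of directed edges ((v, t), (u, t + 1)); "waypoint" is
# the task start location the first agent must visit.
def prune_mdd(edges, waypoint):
    children, parents, nodes = {}, {}, set()
    for u, w in edges:
        children.setdefault(u, []).append(w)
        parents.setdefault(w, []).append(u)
        nodes.update((u, w))

    def closure(seeds, succ):
        reached, stack = set(seeds), list(seeds)
        while stack:
            for nxt in succ.get(stack.pop(), []):
                if nxt not in reached:
                    reached.add(nxt)
                    stack.append(nxt)
        return reached

    waypoint_nodes = {n for n in nodes if n[0] == waypoint}
    valid_forward = closure(waypoint_nodes, children)   # waypoints + descendants
    valid_backward = closure(waypoint_nodes, parents)   # waypoints + ancestors
    keep = valid_forward | valid_backward
    return [(u, w) for u, w in edges if u in keep and w in keep]
\end{verbatim}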
\subsection{Lazy Expansion (LE) of root nodes}
\newtext{
\label{sec:LE}
\ouralg searches the meetings space by creating root nodes, each corresponding to a unique set of meetings.
Note that since no constraints are imposed on paths of root nodes, their cost is given as an aggregation of their meeting costs.
Furthermore, meeting costs are computed a-priori during the construction of meeting tables (see Section~\ref{sec:algorithm}).
This means that when a root node is expanded, and new root nodes are created, they can immediately be inserted into the \textsc{Open} list \emph{without} computing their low-level paths.
The low-level paths will be computed only when these root nodes are extracted from the \textsc{Open} list. We term this \emph{Lazy Expansion (LE)}.}
\newtext{
Each time a root node is expanded, it creates $\numagents$ new root nodes by replacing the meeting of each of the tasks.~We emphasize that while generating those nodes is mandatory in order to guarantee optimality, most of them won't be expanded. Thus, the run-time saved by LE can be significant.
}
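The following Python fragment sketches this mechanism; the node layout, and the callables \texttt{meeting\_cost} and \texttt{low\_level} (the constrained path planner), are assumptions of the sketch rather than the actual interfaces of the implementation.
\begin{verbatim}
# Lazy expansion: a root node is pushed with its a-priori meeting cost only;
# its low-level paths are planned when (and if) it is popped from OPEN.
import heapq
from itertools import count

def push_root(open_list, meetings, meeting_cost, tie=count()):
    cost = sum(meeting_cost(m) for m in meetings)   # roots carry no constraints
    node = {"meetings": meetings, "solution": None, "cost": cost, "root": True}
    heapq.heappush(open_list, (cost, next(tie), node))

def pop_node(open_list, low_level):
    cost, _, node = heapq.heappop(open_list)
    if node["root"] and node["solution"] is None:   # deferred path computation
        node["solution"] = low_level(node["meetings"])
    return node
\end{verbatim}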
\vspace{-0.3em}
\section{Experimental Evaluation}
\label{sec:experiments}
\newtext{
\ouralg solves the newly introduced {\cmapf} problem.
To the best of our knowledge, there does not exist an off-the-shelf optimal solver for MAPF problems involving cooperative behavior.
Suggesting a centralized \astar-based implementation for solving the {\cmapf} problem is challenging due to constraints imposed on low-level paths to achieve cooperation.
Such an approach would require performing a search in the meetings space, resulting in an exponentially large state space.
Moreover, an attempt to solve {\cmapf} using such an implementation would yield results similar to solving the classical MAPF problem using \astar \cite{sharon2015}, due to their similar search approach and conflict-resolution mechanism.
Thus, we restrict our empirical evaluation to the algorithms presented in this paper.
}
\changed{To measure the quality of \ouralg, we present the results of an empirical evaluation performed on standard MAPF benchmarks \cite{stern2019,sturtevant2012} showing the performance of the basic version of \ouralg, as well as the two suggested improvements (see Section~\ref{sec:improvements}).
\ouralg is implemented in C++\footnote{Upon acceptance, we will make the code publicly available.} and is based on the implementation of Li et al.~\cite{li2021}.
All simulations were performed on an Intel Xeon Platinum~{8000}~@~{3.1}Ghz machine with~{32.0 GB RAM}.
}
\subsection{Benchmarks and setup}
\changed{
We evaluated \ouralg on several 2D grid-based benchmarks. Specifically, we tested \ouralg on different types of maps---a dense game map (\textit{DAO, den312d}), random map (\textit{random-32-32-20}), a large warehouse {(\textit{warehouse-10-20-10-2-1})} and a custom small warehouse ($57\times 27$).
We ran~25 random queries for each benchmark for the SOC objective with the number of tasks ranging from $6$ tasks ($12$ agents) to~$22$ tasks (44 agents) and with a timeout of two minutes.
On each benchmark, we compare the performance of three different variants of \ouralg: (i)~basic \ouralg, (ii)~\ouralg with prioritizing conflicts (PC), and (iii)~\ouralg with PC and lazy expansion (LE) of root nodes.
}
As opposed to classical MAPF, where each agent is provided with start and goal locations, in {\cmapf} a task's start and goal need to be provided (instead of explicitly providing an agent's goal).
Thus, we defined the tasks in each scenario as follows, based on the original benchmark scenario:
for each pair of agents, one set of start and goal locations is used for the task, and the other set is used for the agents' start locations.
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\textwidth]{figures/results/den312d_Success_Rate,_timeout=120.png}
\caption{den312d}
\label{fig:den312d_Success_Rate}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\textwidth]{figures/results/random-32-32-20_Success_Rate,_timeout=120.png}
\caption{random-32-32-20}
\label{fig:random-32-32-20_Success_Rate}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\textwidth]{figures/results/warehouse-10-20-10-2-1_Success_Rate,_timeout=120.png}
\caption{warehouse-10-20-10-2-1}
\label{fig:warehouse-10-20-10-2-1_Success_Rate}
\end{subfigure}
\begin{subfigure}{0.245\textwidth}
\includegraphics[width=\textwidth]{figures/results/warehouse-57-27_Success_Rate,_timeout=120.png}
\caption{warehouse-57-27}
\label{fig:warehouse-57-27_Success_Rate}
\end{subfigure}
\caption{Success rates.}
\label{fig:success_rate_dense}
\vspace{-0.4em}
\end{figure*}
\begin{figure}[ht]
\centering
\begin{subfigure}{0.49\columnwidth}
\includegraphics[width=\textwidth]{figures/results/generated_meeting_sets.png}
\caption{}
\label{fig:generated_meetings}
\end{subfigure}
\begin{subfigure}{0.49\columnwidth}
\includegraphics[width=\textwidth]{figures/results/first_meetings_ratio.png}
\caption{}
\label{fig:first_meetings}
\end{subfigure}
\setlength{\abovecaptionskip}{2pt}
\caption{(\subref{fig:generated_meetings}) Number of generated sets of meetings. (\subref{fig:first_meetings}) Ratio~$\eta$ between the number of instances solved using the first set of meetings, and the total number of instances.}
\label{fig:first_generated_meetings}
\end{figure}
\subsection{Results}
\changed{
We first examine the algorithm's success rate (i.e., the ratio of solved instances within the time limit) for all benchmarks.
Figure~\ref{fig:success_rate_dense} shows the success rates of \ouralg on all maps.
\ouralg successfully solves more than~$80\%$ of the instances (excluding the den312d benchmark) with ten tasks.
The success rate sharply drops below~$20\%$ for twelve tasks or more on the dense den312d map.
}
\newtext{
Using PC improves the basic \ouralg in all cases, achieving up to $30\%$ increase in the success rate.
Furthermore, adding LE on top of PC further improves the performance in most cases, and never degrades the performance.
This is especially notable with a large number of tasks, where many root nodes are created.
}
\changed{
Fig.~\ref{fig:generated_meetings} shows the average number of generated meeting sets.
Fig. \ref{fig:first_meetings} shows the ratio~$\eta$ between the number of instances where the first set of meetings is used to obtain the solution and the total number of instances.
Both warehouse environments are typically sparser, causing fewer conflicts between agents.
Thus, a feasible solution is usually quickly found using the first set of meetings.
This is especially true in the large warehouse,
namely when $\eta$ is close to one.
The search in this case is equivalent to running \cbs with the first set of meetings.
For the same reason, PC does not improve the performance in this environment.
Applying LE as well, however, does manage to improve the success rate for the majority of tasks.
In other environments, on the other hand, maps are smaller and denser, and most solutions are not obtained using the first generated set of meetings.
A more exhaustive meeting-space search is therefore required to find an optimal solution, as shown in Fig.~\ref{fig:generated_meetings}.
}
\section{Discussion and Future Work}
\label{sec:discussion}
In this paper, we introduced the Cooperative Multi-Agent Path Finding ({\cmapf}) problem, an extension to classical MAPF that incorporates cooperative behavior.
We introduced \ouralg, a three-level search algorithm that optimally solves {\cmapf} instances, \newtext{as well as two improvements, Prioritizing Conflict (PC) and Lazy Expansion (LE).}
In this section, we provide a comprehensive discussion regarding the suggested model and algorithm.
Specifically, we discuss further possible improvements that can be applied to \ouralg and suggest possible extensions to the {\cmapf} model.
We argue that \ouralg forms a basic framework that may serve as a starting point for future extensions.
\subsection{\ouralg's Extensions and Improvements}
\subsubsection{Information reusing between conflict trees}
\ouralg expands root nodes by only changing one meeting in the newly-created node.
Moreover, the next selected meeting is usually very close to the current meeting, both in location and time.
This implies that \ouralg searches over multiple trees that potentially have very similar solutions.
We may exploit this for more efficient computation.
\subsubsection{Meetings-level search}
\ouralg uses a simple-yet-effective method for finding an optimal meeting for each task.
For large problem instances, this method may become expensive in memory and run time, due to the maintenance of large meeting tables.
We may consider incorporating an algorithm such as the recently-proposed \mmstar algorithm \cite{atzmon2020b}, for the Multi-Agent Meeting problem.
Furthermore, we may couple meetings and paths planning, and handle conflicts during the search for a meeting.
This may be advantageous as meetings and conflicts may be tightly coupled.
\subsubsection{Existing \cbs improvements}
\changed{
In addition to the PC improvement presented in Section~\ref{sec:improvements}, more \cbs improvements exist.
Some of these include adding heuristics~\cite{felner2018}, disjoint splitting~\cite{li2019}, bypassing a conflict~\cite{boyarski2015b}, symmetry breaking~\cite{li2020} and exploiting similarities between nodes in a single conflict tree~\cite{boyarski2020}.
We can also apply variants of \cbs~\cite{barer2014} that compute (bounded) sub-optimal solutions to \ouralg.
}
\subsection{Extensions to the {\cmapf} Framework}
\label{sec:cmapf_extensions}
\subsubsection{Number and types of collaborating agents}
A rather straightforward generalization of {\cmapf} is to require more than two agents to collaborate on a task.
The problem introduced in Section \ref{sec:introduction} motivates this extension: several grasp units may pick up several items for a single transfer unit.
\ouralg can solve this problem with a few minor changes.
However, if the number of agents per task isn't fixed, additional work is required.
Moreover, we may consider agents with different traversal capabilities (e.g., different velocities~\cite{honig2016}), by possibly changing the single-agent planner.
\subsubsection{Other forms of cooperative interaction}
We introduced a
definition for the {\cmapf} problem, where interaction between agents is expressed via meetings between two types of agents.
While this interaction is very intuitive, more forms of cooperative interaction can be modeled (for example, temporal constraints).
We may generalize the formulation to include a finite set of possible agent types, and define more complex tasks where each agent type has its dedicated role.
The framework provided by \ouralg might allow addressing such general definitions by only adjusting the \emph{cooperation-level} search
(the meetings level in our case).
Any cooperative planning scheme which induces goals for an agent
can easily be plugged into \ouralg.
\subsubsection{Task assignment and lifelong planning}
In this problem we assume cooperative tasks are pre-assigned to collaborating agents.
However, optimizing the task assignment as well may significantly affect solution quality (as in classical MAPF).
This is extremely relevant for lifelong-planning problems, where agents have to attend to a stream of incoming tasks.
Generalizing the {\cmapf} framework in this direction will advance it even further towards more real-world problems, but introduce significant challenges as well.
\bibliographystyle{IEEEtran}
Mixture models are widely recognized as a useful tool for inference in a variety of settings.
Having been first used over 100 years ago \citep[for example, in][]{pearson1894}, more recently mixture models are enjoying a revival, thanks to advances in computational methods for inference. In particular, the EM algorithm \citep{dempster77} and MCMC \citep[see, for example,][]{diebolt94} have driven considerable advances in the field.
See \cite{mclachlan00} for a general overview of mixture models; \cite{schnatter06} provides an overview of Bayesian mixture models, which are the focus of this paper.
We recall the definition of a mixture model and introduce notation.
Suppose $n$ observations, $y_1,\ldots,y_n$, are taken from a $K$-component mixture distribution where all the components have the same distributional form, with mixture-specific parameters $\bm{\theta}=(\bm{\theta}_1,\ldots,\bm{\theta}_K)$, global parameters $\bm{\eta}$ and mixing weights $\bm{\pi}=(\pi_1,\ldots,\pi_K)$, summarised by $\bm{\gamma} = (\bm{\pi}, \bm{\theta}, \bm{\eta})$.
The mixture distribution for a single observation $Y_i$ is then given by
\begin{equation}\label{eq:firstform}
g\left(y_i|\mathbf{\bm{\gamma}}\right) = \sum_{k=1}^K \pi_k f_k\left(y_i|\bm{\theta}_k,\bm{\eta}\right),
\end{equation}
with $K \geq 1$, $\pi_k > 0 \ (k=1,2,\ldots,K)$, $\sum_{k=1}^K\pi_k = 1$
and $f_k(\cdot|\bm{\theta}_k,\bm{\eta})$ is a density function parametrised by $\bm{\theta}_k$ and $\bm{\eta}$.
A Bayesian approach to estimating the parameters of the mixture distribution of Equation (\ref{eq:firstform}) involves the specification of priors for the parameters $\bm{\gamma}$. The issue of prior specification in this context has a number of difficulties.
First, fully improper priors cannot be used for component-specific parameters in mixture models, since doing so causes the posterior to be improper also \citep[see, for example,][]{mclachlan00}. However, proper priors, even with large variance, can have considerable influence on the posterior distribution, and the extent of this influence can be difficult to assess \citep{marin05}.
Re-parametrisation in a hierarchical manner and allowing only the global parameters to be improper is one solution: this is considered by \cite{mengersen96}, and \cite{roeder97}. Another possibility is to use data-dependent priors, as considered by \cite{richardson97}, and \cite{wasserman00}.
Second, where no component specific information is available, identical priors may be proposed for the components of each parameter. This leads to a non-identifiable posterior, which is known as the label switching problem. This has been well studied \citep[see, for example][and references therein]{stephens00,jasra05,sperrin09labels}.
Third, constructing independent priors for component parameters may not be sensible, as the components only have meaning relative to one another \citep{lee08}.
This third issue is the focus of this paper. We consider in detail the idea that priors should be specified relative to each other. We introduce a strategy for doing so that we call `proximity penalty priors' (PPPs). The basic idea is that priors are specified in two parts: first, each prior is specified independently, corresponding to standard existing approaches; second, a proximity penalty is applied, which penalises the joint prior distribution of certain configurations of parameters. We show that the construction makes theoretical sense.
Section \ref{sec:ppp} introduces the idea of PPPs. Section \ref{sec:results} illustrates the consequences of the PPP approach on real and simulated data; the paper concludes with a discussion in Section \ref{sec:discuss}.
\section{Proximity Penalty Priors} \label{sec:ppp}
We begin with a simple result that establishes the validity of the PPP approach.
\begin{propn} \label{ppthm}
Suppose the prior for $\bm{\gamma}$, given by $p(\bm{\gamma})$,
can be separated as
\begin{equation*}
p(\bm{\gamma}) = p_1(\bm{\gamma}) p_2(\bm{\gamma}).
\end{equation*}
Denote the likelihood by $L(\bm{\gamma})$ and the posterior by $q(\bm{\gamma})$, so that $q(\bm{\gamma}) \propto L(\bm{\gamma})p(\bm{\gamma})$.
Suppose that a new parameter vector $\bm{\gamma}^*$ can be simulated from a proposal distribution $r(\bm{\gamma^*}) = L(\bm{\gamma^*}) p_1(\bm{\gamma^*})$, and the existing value of $\bm{\gamma}$ is
$\bm{\gamma}^{m}$.
Then if we set
\begin{equation}
\bm{\gamma}^{m+1} = \left\{ \begin{array}{ll}
\bm{\gamma}^* &\textrm{with probability} \ \min\left(1,\frac{p_2(\bm{\gamma}^*)}{p_2(\bm{\gamma}^{m})}\right)\\
\bm{\gamma}^{m} &\textrm{otherwise},
\end{array}\right.
\end{equation}
\\
the result is equivalent to a Metropolis-Hastings update.
\end{propn}
\proof The acceptance probability for the Metropolis-Hastings procedure with proposal density $r(\cdot)$ and posterior $q(\cdot)$ is
\begin{equation*}
\min\left(1,\frac{q(\bm{\gamma}^*) r(\bm{\gamma}^{m})}{q(\bm{\gamma}^{m}) r(\bm{\gamma}^*)} \right).
\end{equation*}
Substituting in these densities gives the result.
\qed
In the context of this work the portion of the prior $p_1(\cdot)$ corresponds to the independent specification of the parameters, for which standard distributions could be used; the portion $p_2(\cdot)$ corresponds to the novel part of the prior that jointly assesses the values of the parameters and penalises undesirable combinations.
Suppose that the priors $p_1(\cdot)$ are conjugate. Then an MCMC approach would proceed, on each iteration, by generating proposed new parameters according to a Gibbs sampling scheme with the full conditionals based on the prior component $p_1(\cdot)$, then accepting the proposed parameters according to a Metropolis--Hastings ratio on the prior component $p_2(\cdot)$.
We illustrate the idea with an example.
Consider a mixture of two normal distributions
\begin{equation} \label{eq:normmix}
p\left(y_i|\mathbf{\bm{\gamma}}\right) = \pi_1 N(y_i; \mu_1,\sigma_1^2) + \pi_2 N (y_i; \mu_2, \sigma_2^2),
\end{equation}
with $\pi_1 + \pi_2 = 1$, and all the parameters $\bm{\gamma} = (\pi_1,\pi_2,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2)$ unknown. Standard conjugate prior choices would then be a Dirichlet distribution for the pair $(\pi_1,\pi_2)$, normal distributions for $\mu_1$ and $\mu_2$, and inverse-gamma distributions for $\sigma_1^2$ and $\sigma_2^2$. Throughout this paper we will use the empirical Bayes prior distributions suggested by \cite{richardson97} unless otherwise stated.
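For concreteness, a draw from such a $p_1(\cdot)$ could be coded as follows (a minimal Python sketch; the hyperparameter values are arbitrary placeholders for illustration and are \emph{not} the empirical Bayes choices of \cite{richardson97}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# placeholder hyperparameters, for illustration only
xi, kappa = 0.0, 1.0     # each mu_k ~ N(xi, 1/kappa)
a, b = 2.0, 1.0          # each sigma_k^2 ~ inverse-gamma(a, b)
alpha = np.ones(2)       # (pi_1, pi_2) ~ Dirichlet(1, 1)

def draw_from_p1():
    mu = rng.normal(xi, np.sqrt(1.0 / kappa), size=2)
    sigma2 = 1.0 / rng.gamma(a, 1.0 / b, size=2)  # 1/Gamma(a, scale=1/b)
    pi = rng.dirichlet(alpha)
    return {"mu": mu, "sigma2": sigma2, "pi": pi}
\end{verbatim}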
We may believe a-priori that the key difference between the two components is the location. If the components are not well separated or the amount of data is small it is important that such prior information is captured. By Proposition 1, we can reflect these beliefs in a separate part of the prior $p_2(\cdot)$. A sensible such choice is
\begin{equation} \label{eq:simple-mu}
p_2(\bm{\gamma}) = |\mu_1 - \mu_2|.
\end{equation}
Such a function assigns more prior weight to larger differences between $\mu_1$ and $\mu_2$.
In isolation, the above $p_2(\cdot)$ is improper, but provided $p_1(\cdot)$ is proper with finite first moments for $\mu_1$ and $\mu_2$, the overall prior is proper.
Such a prior enjoys scale invariance in the sense that $p_2(a\bm{x}_1)/p_2(a\bm{x}_2) = p_2(\bm{x}_1)/p_2(\bm{x}_2)$ for all non-zero $a$. This may or may not be desirable. An alternative would be to specify a distance $\delta$ as a minimum distance between $\mu_1$ and $\mu_2$, i.e.
$$
p_2(\bm{\gamma}) = \bm{1}_{(|\mu_1 - \mu_2| > \delta)}.
$$
This raises the question of how $\delta$ should be specified, but it may be appropriate in some situations.
More generally, for a mixture distribution with $K$ components, suppose there exists a component-specific parameter $\phi_k$ for each component $k=1,\ldots,K$, and the difference between the components is a-priori believed (or, from the point of view of model interpretation, desired) to be in terms of this parameter. Then we propose setting
\begin{equation} \label{eq:ppp-diff}
p_2(\bm{\gamma}) = \min_{k \neq l} |\phi_k - \phi_l|.
\end{equation}
On the other hand, for a mixture distribution with $K$ components, if there exists a component-specific parameter $\psi_k$ for each component $k=1,\ldots,K$, and each component is a-priori expected or desired to have \emph{similar} values of this parameter, we could set
\begin{equation} \label{eq:ppp-same}
{p}_2(\bm{\gamma}) = \left(\max_{k \neq l} |\psi_k - \psi_l|\right)^{-1}.
\end{equation}
Here, the scale free nature of $p_2(\cdot)$ is an advantage in that we do not have to quantify `similar'.
More generally, $p_2(\bm{\gamma})$ could be constructed as any multiplicative combination of Equations (\ref{eq:ppp-diff}) and (\ref{eq:ppp-same}).
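These penalties are straightforward to code. In the following minimal Python sketch (the function names are ours), \texttt{p2\_separate} corresponds to Equation (\ref{eq:ppp-diff}), \texttt{p2\_similar} to Equation (\ref{eq:ppp-same}), read as penalising the largest pairwise difference, \texttt{p2\_threshold} to the indicator version above, and \texttt{p2\_combined} to a multiplicative combination.
\begin{verbatim}
from itertools import combinations

def p2_separate(phi):
    # smallest pairwise difference of phi_1, ..., phi_K
    return min(abs(a - b) for a, b in combinations(phi, 2))

def p2_similar(psi):
    # reciprocal of the largest pairwise difference of psi_1, ..., psi_K
    return 1.0 / max(abs(a - b) for a, b in combinations(psi, 2))

def p2_combined(phi, psi):
    # a multiplicative combination of the two penalties
    return p2_separate(phi) * p2_similar(psi)

def p2_threshold(mu1, mu2, delta):
    # the indicator alternative: prior weight only beyond a minimum gap
    return 1.0 if abs(mu1 - mu2) > delta else 0.0
\end{verbatim}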
The procedure can also be applied when the number of components $K$ is allowed to vary, in which case it makes sense only within fixed values of $K$ in the same way that the label switching problem only has meaning within fixed values of $K$ \citep{nobile07}.
\section{Examples} \label{sec:results}
\subsection{Mixture of Two Normals}
Our first illustration takes the simple mixture of two normals example. We generate 100 observations from the density given in Equation (\ref{eq:normmix}), with $\mu_1 =0$,
$\mu_2 = 2$, $\sigma_1^2 = \sigma_2^2 = 1$ and $\pi_1 = \pi_2 = 0.5$. We consider two prior specifications:
\begin{enumerate}
\item[(a)] the standard specification given in \cite{richardson97}, denoted \emph{without PPP};
\item[(b)] a two part prior $p(\bm{\gamma}) = p_1(\bm{\gamma}) p_2(\bm{\gamma})$, with $p_1(\bm{\gamma})$ as given in \cite{richardson97} and $p_2(\bm{\gamma})$ as given in Equation (\ref{eq:simple-mu}), denoted \emph{with PPP}.
\end{enumerate}
In both cases we fix the number of components $K=2$. In (b), we are therefore adding an explicit prior opinion that the difference between the two components is in the locations $\mu_1$ and $\mu_2$.
Figure \ref{fig:maxsig} compares a bivariate projection of the posterior onto the absolute difference $|\mu_1 - \mu_2|$ and max$(\sigma_1^2,\sigma_2^2)$ without and with the PPP. Without the PPP, posterior mass is assigned to the situation where $|\mu_1 - \mu_2|$ is small and max$(\sigma_1^2,\sigma_2^2)$ is large. This corresponds to a case where a mixture distribution with similar means but different variances is fitted. In Figure \ref{fig:twodensities} we see that such a mixture is well supported by the data (dashed line in the figure). Once the PPP is applied, far less posterior mass is assigned to this scenario, since our prior distribution specifically tells us to exclude such cases.
\begin{figure}[htp]
\centering
\subfigure[without PPP]{
\includegraphics[scale=0.5] {diffmu-v-maxsig-contour.png}
}
\subfigure[with PPP]{
\includegraphics[scale=0.5] {diffmu-v-maxsig-contour-1.png}
}
\caption{Posterior contour plots of $|\mu_1 - \mu_2|$ versus max$(\sigma_1^2,\sigma_2^2)$}
\label{fig:maxsig}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[scale=0.5] {twodensities.png}
\caption{Histogram of 100 realisations from $0.5N(0,1) + 0.5N(2,1)$ with true density overlaid (solid line) and alternative density, $0.5N(1,1) + 0.5N(1,4)$ also overlaid (dashed line)}
\label{fig:twodensities}
\end{figure}
Figure \ref{fig:mu1mu2} gives the marginal bivariate posterior of $(\mu_1,\mu_2)$, with and without the PPP. Without the PPP, the posterior appears to have a single mode at approximately $\mu_1 = \mu_2 = 1$; with the PPP, the posterior is bimodal with modes at approximately ($\mu_1 = 0, \mu_2=2$) and ($\mu_1=2, \mu_2 = 0$). The bimodality in the PPP case is a consequence of label switching; if component-specific inference is required, post-hoc relabelling should be carried out \citep[see, for example,][]{sperrin09labels}. The unimodality in the non-PPP case is caused by the two means being very close together and the variances differing, corresponding to a different interpretation of the mixture components.
\begin{figure}[htp]
\centering
\subfigure[without PPP]{
\includegraphics[scale=0.5] {bi-mu-post.png}
}
\subfigure[with PPP]{
\includegraphics[scale=0.5] {bi-mu-post-1.png}
}
\caption{Posterior contour plots of $\mu_1$ versus $\mu_2$}
\label{fig:mu1mu2}
\end{figure}
We also ran the same comparison without assuming a fixed number of components $K$ \citep[using the birth-death method of][]{stephens00}, putting a Poisson$(1)$ prior distribution on the number of components $K$ \citep[see][for a justification of the use of this prior]{nobile07}. Similar results to the above were observed when we looked at the output conditional on $K=2$.
\subsection{Galaxy Data}
The galaxy dataset is commonly used to illustrate mixture modelling techniques \citep[see][for a recent investigation of this dataset in the mixture modelling context]{jasra05}. Briefly, it consists of the velocities of 82 galaxies, but the velocities appear to cluster, suggesting different groups of galaxies that we may wish to identify (see Figure \ref{fig:gal}). If we model these data using a mixture, it is likely that we wish our mixture components to represent the clusters with different mean velocities, hence the PPP of Equation (\ref{eq:ppp-diff}) could be considered in this scenario. We run a variable dimension sampler with the details as above, with normally distributed components assumed and a Poisson$(1)$ prior distribution on the number of components $K$. We compare the results of standard priors \citep[i.e.\ those given in][]{richardson97} with the standard priors plus the PPP. Both with and without the PPP, the values of $K$ with the majority of posterior support are $K=3$ and $K=4$ \citep[but see][for discussion on the posterior of the number of components in a mixture model]{aitkin01}. For the $K=3$ case the posterior means are already well separated, and the PPP has little or no effect on the posterior means. We look in more detail at the $K=4$ case.
\begin{figure}[htp]
\centering
\includegraphics[scale=0.5] {gal.pdf}
\caption{Histogram of the velocities of 82 galaxies}
\label{fig:gal}
\end{figure}
In order to avoid the label switching issue, we first consider the posterior of a generic $\mu_k$ without relabelling, estimating this by combining into a single vector all posterior samples of $\mu_k$, for $k=1,2,3,4$, conditional on $K=4$. We can do this since invariance of the posterior under permutation of the component labels means we can ignore the labels. The resulting density plot is given in Figure \ref{fig:gal-muall}. The interesting difference to note here is that with the PPP four distinct peaks can be observed in the density, whereas without the PPP the middle two peaks cannot be distinguished.
This does, however, depend on the smoothing parameter used in the non-parametric density estimate.
\begin{figure}[htp]
\centering
\includegraphics[scale=0.5] {gal-muall.png}
\caption{Smoothed density of a generic $\mu_k$ for the galaxy data. Without PPP: dashed line; with PPP: solid line.}
\label{fig:gal-muall}
\end{figure}
To consider this further we mitigate the label switching issue by applying the identifiability constraint $\mu_1 < \mu_2 < \mu_3 < \mu_4$, then look at the posterior density of $(\mu_3 - \mu_2)$. This is given in Figure \ref{fig:gal-mudiff}. We see that applying the PPP causes more separation between the two component means (less mass at small differences).
\begin{figure}[htp]
\centering
\includegraphics[scale=0.5] {gal-mudiff.png}
\caption{Smoothed density of $(\mu_3 - \mu_2)$ for the galaxy data with $K=4$ after the identifiability constraint is applied. Without PPP: dashed line; with PPP: solid line.}
\label{fig:gal-mudiff}
\end{figure}
\section{Discussion} \label{sec:discuss}
In this paper we have introduced the idea of incorporating weak joint information about parameters in a mixture model into the prior specification. In particular we have introduced proximity penalty priors (PPPs) as a method of explicitly declaring an a-priori opinion (or interest) in components that differ on a certain parameter.
The formulation is designed to allow this opinion to be as vague as possible: we avoid making any statement about the magnitude of the difference that should be observed between the components, i.e.\ the method is scale-free.
With the focus of this paper being the introduction of the idea, the examples were kept fairly simple. The idea, however, is very general and could be applied in more complex models. For example, in an application such as genetics we may wish to construct a mixture of regressions with many covariates. Suppose there are $p$ covariates and $K$ mixture components, with the coefficient of the \kth{j} covariate in the \kth{k} component given by $\beta_{jk}$. Then we could consider the PPP
$$
p_2(\bm{\gamma}) = \max_j \min_{k \neq l} |\beta_{jk} - \beta_{jl}|,
$$
to reflect a belief that there should be at least one covariate whose coefficient differs between every pair of components.
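A sketch of this penalty (Python with NumPy, illustrative only; the coefficient matrix \texttt{beta} is assumed to have rows indexed by covariates and columns by components) is:
\begin{verbatim}
import numpy as np
from itertools import combinations

def p2_regression(beta):
    # beta[j, k]: coefficient of covariate j in component k
    p, K = beta.shape
    return max(
        min(abs(beta[j, k] - beta[j, l])
            for k, l in combinations(range(K), 2))
        for j in range(p)
    )
\end{verbatim}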
Another potential extension is to replace the $L_1$-norm assumed in the PPP with an $L_s$-norm, i.e. considering a generalisation of, for example, Equation (\ref{eq:simple-mu}), to
\begin{equation*}
p_2(\bm{\gamma}) = |\mu_1 - \mu_2|^s.
\end{equation*}
In this generalised setting, we note that $s=0$ clearly corresponds to an unpenalised prior and $s=1$ reduces to the original Equation (\ref{eq:simple-mu}). Also, setting $s=-1$ encodes a PPP like Equation (\ref{eq:ppp-same}). This generalisation then raises the question of how $s$ should be chosen. We suggest $s=1$ is a very natural choice, since this means the penalty is being applied on the original scale of the data. We have, however, looked at the sensitivity to the choice of $s$. For the example considered in Section \ref{sec:results}, once $s$ becomes large the posteriors for $\bm{\mu}$ become very flat.
\bibliographystyle{apalike}
\section{The triangulation algorithm}
\begin{defn}[{Computable ring \cite[Subsection~1.2]{BR}}]\label{compdef}
A left and right noetherian ring $D$ is called \textbf{computable} if there exists an algorithm which solves one-sided inhomogeneous linear systems $XA=B$ and $AX=B$, where $A$ and $B$ are matrices with entries in $D$. The word ``solves'' means: The algorithm can decide if a solution exists, and, if solvable, is able to compute a particular solution as well as a finite generating set of solutions of the corresponding homogeneous system.
\end{defn}
From now on the ring $D$ is assumed computable. Let $M$ be a finitely generated left $D$-module. Then $M$ is finitely presented, i.e. there exists a matrix ${{\mathtt{M}}} \in D^{p\times q}$, viewed as a morphism ${{\mathtt{M}}}:D^{1\times p} \to D^{1\times q}$, such that $\coker {{\mathtt{M}}} \cong M$. ${{\mathtt{M}}}$ is called a \textbf{matrix of relations} or a \textbf{presentation matrix} for $M$. It forms the beginning of a free resolution
\[
0 \xleftarrow{} M \xleftarrow{} D^{1\times q} \xleftarrow{d_1 = {{\mathtt{M}}}} D^{1\times p} \xleftarrow{d_2} D^{1\times p_2} \xleftarrow{d_3} \cdots.
\]
$d_i$ is called the $i$-th syzygies matrix of $M$ and $K_i:=\coker d_{i+1}$ the $i$-th syzygies module. $K_i$ is uniquely determined by $M$ up to \textbf{projective equivalence}.
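As a small orienting example (included here purely for illustration), take $D=\Q[x,y]$ and $M=D/\langle x,y\rangle$: with the single generator $1$ and the two relations $x$ and $y$ one may choose ${{\mathtt{M}}}=\left(\begin{smallmatrix} x \\ y \end{smallmatrix}\right)\in D^{2\times 1}$, and the Koszul complex
\[
0 \xleftarrow{} M \xleftarrow{} D^{1\times 1} \xleftarrow{d_1={{\mathtt{M}}}} D^{1\times 2} \xleftarrow{d_2=\left(\begin{smallmatrix} -y & x \end{smallmatrix}\right)} D^{1\times 1} \xleftarrow{} 0
\]
is already a free resolution; here $d_2$ is the $2$-nd syzygies matrix and the $1$-st syzygies module $K_1$ is isomorphic to the ideal $\langle x,y \rangle$.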
Now suppose that $M$ is endowed with an $m$-filtration $F=(F_p M)$. We will sketch an algorithm that, starting from a presentation matrix ${{\mathtt{M}}}\in D^{p \times q}$ for $M$ and presentation matrices ${{\mathtt{M}}}_p$ for the graded parts $M_p:=\gr_p M$, computes another \textbf{upper triangular} presentation matrix ${{\mathtt{M}}}_F$ of the form\footnote{Note that choosing a generating system of $M$ adapted to the filtration $F$ is not enough to produce a triangular presentation matrix, as changing the set of generators only corresponds to column operations on ${{\mathtt{M}}}$.}
\[
{{\mathtt{M}}}_F = \left(
\begin{matrix}
{{\mathtt{M}}}_{p_{m-1}} & * & \cdots & \cdots & * \\
& {{\mathtt{M}}}_{p_{m-2}} & * & \cdots & * \\
& & \ddots & \ddots & \vdots \\
& & & {{\mathtt{M}}}_{p_1} & * \\
& & & & {{\mathtt{M}}}_{p_0}
\end{matrix}
\right) \in D^{p'\times q'}
\]
and an isomorphism $\coker {{\mathtt{M}}}_F \xrightarrow{\cong} \coker {{\mathtt{M}}}$ given by a matrix $\mathtt{T}\in D^{q' \times q}$:
Let $(\psi_p)$ be an ascending $m$-filtration system computing $F$ (cf.~Def.~\ref{system}). To start the induction take $p$ to be the highest degree $p_{m-1}$ in the filtration and set ${{\mathtt{F}}}_p {{\mathtt{M}}} := {{\mathtt{M}}}$. Since
\[
\mu_p:=\psi_p:M_p=\coker{{\mathtt{M}}}_p \to \coker {{\mathtt{F}}}_p{{\mathtt{M}}}
\]
is a generalized isomorphism, its unique generalized inverse exists and is an epimorphism (cf.~Cor.~\ref{geninv}), which we denote by $\pi_p:F_p M \to M_p$. (Note: $\coker {{\mathtt{F}}}_p{{\mathtt{M}}}=F_p M= M$ for $p=p_{m-1}$.) Since $D$ is computable we are able to determine (a matrix of) an injective morphism $\iota_p$ mapping onto the kernel of $\pi_p$. The source of $\iota_p$ is a module isomorphic to $F_{p-1} M$, which we also denote by $F_{p-1} M$. No confusion can occur since we will only refer to the latter. All maps in the exact-rows diagram
\[
\xymatrix{
0 & M_p \ar@{=}[d] \ar[l] & P_0 \ar[d]^{\eta_0} \ar[l]_{\nu} & K_1 \ar[d]^{\eta} \ar[l]_{{{\mathtt{M}}}_p} & 0 \ar[l] \\
0 & M_p \ar[l]& F_p M \ar[l]_{\pi_p} & F_{p-1}M \ar[l]_{\iota_p} & 0 \ar[l]
}
\]
are computable, where $P_0$ is a free $D$-module and $K_1$ is the $1$-st syzygies module of ${{\mathtt{M}}}_p$: $\eta_0$ is computable since $P_0$ is free and $\eta$ is computable since $\iota_p$ is injective (see~\cite[Subsection~3.1]{BR}). This yields the short exact sequence
\[
0 \xrightarrow{} K_1 \xrightarrow{\kappa:=\left(\begin{matrix}{{\mathtt{M}}}_p & \eta \end{matrix}\right)} P_0 \oplus F_{p-1} M \xrightarrow{\rho:=\left(\begin{matrix}-\eta_0 \\ \iota_p\end{matrix}\right)} F_p M \xrightarrow{} 0.
\]
Hence, the cokernel of $\kappa:=\left(\begin{matrix}{{\mathtt{M}}}_p & \eta \end{matrix}\right)$ is isomorphic to $F_p M$ which therefore admits a presentation matrix of the form
\[
{{\mathtt{M}}}^p_F = \left(\begin{matrix} {{\mathtt{M}}}_p & \eta \\ 0 & {{\mathtt{F}}}_{p-1} {{\mathtt{M}}} \end{matrix}\right),
\]
where ${{\mathtt{F}}}_{p-1}{{\mathtt{M}}}$ is a presentation matrix of $F_{p-1} M$ (for more details see~\cite[Subsection~7.1]{BB}). If $\chi: P_0 \oplus F_{p-1} M \xrightarrow{} \coker\kappa=\coker {{\mathtt{M}}}^p_F$ denotes the natural epimorphism and $\rho:=\left(\begin{matrix}-\eta_0 \\ \iota_p\end{matrix}\right)$, then the matrix $\mathtt{T}^p$ of the morphism $T^p := \rho \circ\chi^{-1}$ is an isomorphism between $\coker{{\mathtt{M}}}^p_F$ and $\coker {{\mathtt{F}}}_p{{\mathtt{M}}}$. By the induction hypothesis we have
\[
\widetilde{{{\mathtt{M}}}}^{p+1}_F
:=
\left(\begin{array}{c|c} \mathrm{stable}_p & \eta_p \\ \hline 0 & {{\mathtt{F}}}_p {{\mathtt{M}}} \end{array}\right)
=
\left( \begin{array}{c|c|c}
\mathrm{stable}_{p+1} & * & * \\ \hline
0 & M_{p+1} & * \\ \hline
0 & 0 & {{\mathtt{F}}}_p {{\mathtt{M}}}
\end{array} \right)
=
\left(\begin{array}{c|c} \mathrm{stable}_{p+1} & *\ * \\ \hline 0 & {{\mathtt{M}}}^{p+1}_F \end{array}\right)
\]
with $\coker \widetilde{{{\mathtt{M}}}}^{p+1}_F \cong \coker {{\mathtt{M}}}$. (Since $p$ was decreased by one the old ${{\mathtt{F}}}_{p-1} {{\mathtt{M}}}$ is now addressed as ${{\mathtt{F}}}_p {{\mathtt{M}}}$, etc.). Before proceeding inductively on the submatrix ${{\mathtt{F}}}_p {{\mathtt{M}}}$ of $\widetilde{{{\mathtt{M}}}}^{p+1}_F$ take the quotient
\[
\mu_p:=(\iota_{p_{m-1}}\circ\cdots\circ \iota_{p+1})^{-1}\circ\psi_p:M_p=\coker{{\mathtt{M}}}_p \to \coker {{\mathtt{F}}}_p {{\mathtt{M}}},
\]
which is, like $\mu_{p+1}$, again a generalized isomorphism. Note that the matrix $\mathtt{T}^p$ of the morphism $T^p:=\rho \circ\chi^{-1}$, providing the isomorphism between $\coker{{\mathtt{M}}}^p_F$ and $\coker {{\mathtt{F}}}_p{{\mathtt{M}}}$, now has to be multiplied from the right onto the submatrix $\eta_p$ of $\widetilde{{{\mathtt{M}}}}^{p+1}_F$ lying above ${{\mathtt{F}}}_p {{\mathtt{M}}}$. This completes the induction. The algorithm terminates with ${{\mathtt{M}}}_F := \widetilde{{{\mathtt{M}}}}^{p_0}_F$, and $\mathtt{T}$ is the composition of all the successive column operations on ${{\mathtt{M}}}$. \qed
\bigskip
The above algorithm is implemented in the {\tt homalg} package \cite{homalg-package} under the name {\tt Isomor\-phism\-OfFiltration}. It takes an $m$-filtration system $(\psi_p)$ of $M=\coker{{\mathtt{M}}}$ as its input and returns an isomorphism $\tau:\coker {{\mathtt{M}}}_F \to \coker {{\mathtt{M}}}$ with a triangular presentation matrix ${{\mathtt{M}}}_F$, as described above. {\tt IsomorphismOfFiltration} will be used extensively in the examples in Appendix~\ref{Examples}.
\section{Examples with {\sf GAP}'s {\tt homalg}}\label{Examples}
The packages {\tt homalg}, {\tt IO\_ForHomalg}, and {\tt RingsForHomalg} are assumed loaded:
\medskip
\noindent
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ LoadPackage( "RingsForHomalg" );+}
\begin{verbatim}
true
\end{verbatim}
For details see the {\tt homalg} project \cite{homalg-project}.
\begin{exmp}[{\tt LeftPresentation}]\label{LeftPresentation}
Define a left module $W$ over the polynomial ring $D:=\Q[x,y,z]$. Also define its right mirror $Y$.
\medskip
\noindent
{\small
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Qxyz := HomalgFieldOfRationalsInDefaultCAS( ) * "x,y,z";;+} \\
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ wmat := HomalgMatrix( "[ \+
\begin{verbatim}
x*y, y*z, z, 0, 0, \
x^3*z,x^2*z^2,0, x*z^2, -z^2, \
x^4, x^3*z, 0, x^2*z, -x*z, \
0, 0, x*y, -y^2, x^2-1,\
0, 0, x^2*z, -x*y*z, y*z, \
0, 0, x^2*y-x^2,-x*y^2+x*y,y^2-y \
]", 6, 5, Qxyz );
\end{verbatim}
}
\begin{verbatim}
<A homalg external 6 by 5 matrix>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ W := LeftPresentation( wmat );+}
\begin{verbatim}
<A left module presented by 6 relations for 5 generators>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Y := Hom( Qxyz, W );+}
\begin{verbatim}
<A right module on 5 generators satisfying 6 relations>
\end{verbatim}
}
\end{exmp}
\begin{exmp}[Homological {\tt GrothendieckSpectralSequence}]\label{ExtExt}
Example~\ref{LeftPresentation} continued. Compute the double-$\Ext$ spectral sequence for $F:=\Hom(-,Y)$, $G:=\Hom(-,D)$, and the $D$-module $W$. This is an example for Subsection~\ref{HGB}.
\medskip
\noindent
{\small
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ F := InsertObjectInMultiFunctor( Functor_Hom, 2, Y, "TensorY" );+}
\begin{verbatim}
<The functor TensorY>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ G := LeftDualizingFunctor( Qxyz );;+}\\
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ II_E := GrothendieckSpectralSequence( F, G, W );+}
\begin{verbatim}
<A stable homological spectral sequence with sheets at levels [ 0 .. 4 ]
each consisting of left modules at bidegrees [ -3 .. 0 ]x[ 0 .. 3 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( II_E );+}
\begin{verbatim}
The associated transposed spectral sequence:
a homological spectral sequence at bidegrees
[ [ 0 .. 3 ], [ -3 .. 0 ] ]
---------
Level 0:
* * * *
* * * *
. * * *
. . * *
---------
Level 1:
* * * *
. . . .
. . . .
. . . .
---------
Level 2:
s s s s
. . . .
. . . .
. . . .
Now the spectral sequence of the bicomplex:
a homological spectral sequence at bidegrees
[ [ -3 .. 0 ], [ 0 .. 3 ] ]
---------
Level 0:
* * * *
* * * *
. * * *
. . * *
---------
Level 1:
* * * *
* * * *
. * * *
. . . *
---------
Level 2:
* * s s
* * * *
. * * *
. . . *
---------
Level 3:
* s s s
* s s s
. . s *
. . . *
---------
Level 4:
s s s s
. s s s
. . s s
. . . s
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ filt := FiltrationBySpectralSequence( II_E, 0 );+}
\begin{verbatim}
<An ascending filtration with degrees [ -3 .. 0 ] and graded parts:
0: <A non-zero left module presented by 33 relations for 23 generators>
-1: <A non-zero left module presented by 37 relations for 22 generators>
-2: <A non-zero left module presented by 20 relations for 8 generators>
-3: <A non-zero left module presented by 29 relations for 4 generators>
of
<A non-zero left module presented by 111 relations for 37 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ ByASmallerPresentation( filt );+}
\begin{verbatim}
<An ascending filtration with degrees [ -3 .. 0 ] and graded parts:
0: <A non-zero left module presented by 25 relations for 16 generators>
-1: <A non-zero left module presented by 30 relations for 14 generators>
-2: <A non-zero left module presented by 18 relations for 7 generators>
-3: <A non-zero left module presented by 12 relations for 4 generators>
of
<A non-zero left module presented by 48 relations for 20 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ m := IsomorphismOfFiltration( filt );+}
\begin{verbatim}
<An isomorphism of left modules>
\end{verbatim}
}
\end{exmp}
\begin{exmp}[{\tt PurityFiltration}]\label{PurityFiltration}
Example~\ref{LeftPresentation} continued. This is an example for Subsections~\ref{bidual}~and~\ref{codegree}.
\medskip
\noindent
{\small
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ filt := PurityFiltration( W );+}
\begin{verbatim}
<The ascending purity filtration with degrees [ -3 .. 0 ] and graded parts:
0: <A codegree-[ 1, 1 ]-pure rank 2 left module presented by
3 relations for 4 generators>
-1: <A codegree-1-pure codim 1 left module presented by
4 relations for 3 generators>
-2: <A cyclic reflexively pure codim 2 left module presented by
2 relations for a cyclic generator>
-3: <A cyclic reflexively pure codim 3 left module presented by
3 relations for a cyclic generator>
of
<A non-pure rank 2 left module presented by 6 relations for 5 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ W;+}
\begin{verbatim}
<A non-pure rank 2 left module presented by 6 relations for 5 generators>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ m := IsomorphismOfFiltration( filt );+}
\begin{verbatim}
<An isomorphism of left modules>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ IsIdenticalObj( Range( m ), W );+}
\begin{verbatim}
true
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Source( m );+}
\begin{verbatim}
<A left module presented by 12 relations for 9 generators (locked)>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( last );+}
\begin{verbatim}
0, 0, x, -y,0,1, 0, 0, 0,
x*y,-y*z,-z,0, 0,0, 0, 0, 0,
x^2,-x*z,0, -z,1,0, 0, 0, 0,
0, 0, 0, 0, y,-z,0, 0, 0,
0, 0, 0, 0, x,0, -z, 0, 1,
0, 0, 0, 0, 0,x, -y, -1, 0,
0, 0, 0, 0, 0,-y,x^2-1,0, 0,
0, 0, 0, 0, 0,0, 0, z, 0,
0, 0, 0, 0, 0,0, 0, y-1,0,
0, 0, 0, 0, 0,0, 0, 0, z,
0, 0, 0, 0, 0,0, 0, 0, y,
0, 0, 0, 0, 0,0, 0, 0, x
Cokernel of the map
Q[x,y,z]^(1x12) --> Q[x,y,z]^(1x9),
currently represented by the above matrix
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( filt );+}
\begin{verbatim}
Degree 0:
0, 0, x, -y,
x*y,-y*z,-z,0,
x^2,-x*z,0, -z
Cokernel of the map
Q[x,y,z]^(1x3) --> Q[x,y,z]^(1x4),
currently represented by the above matrix
----------
Degree -1:
y,-z,0,
x,0, -z,
0,x, -y,
0,-y,x^2-1
Cokernel of the map
Q[x,y,z]^(1x4) --> Q[x,y,z]^(1x3),
currently represented by the above matrix
----------
Degree -2:
Q[x,y,z]/< z, y-1 >
----------
Degree -3:
Q[x,y,z]/< z, y, x >
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( m );+}
\begin{verbatim}
1, 0, 0, 0, 0,
0, -1, 0, 0, 0,
0, 0, -1, 0, 0,
0, 0, 0, -1, 0,
-x^2,-x*z, 0, -z, 0,
0, 0, x, -y, 0,
0, 0, 0, 0, -1,
0, 0, x^2,-x*y,y,
x^3, x^2*z,0, x*z, -z
the map is currently represented by the above 9 x 5 matrix
\end{verbatim}
}
\end{exmp}
\begin{exmp}[{\tt PurityFiltration}, \emph{non}commutative]\label{PurityFiltration:A3}
This is a \emph{non}commutative example for Subsections~\ref{bidual}~and~\ref{codegree}. Let $A_3 := \Q[x,y,z]\langle D_x, D_y, D_z \rangle$ be the $3$-dimensional {\sc Weyl} algebra.
\medskip
\noindent
{\small
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ A3 := RingOfDerivations( Qxyz, "Dx,Dy,Dz" );;+}\\
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ nmat := HomalgMatrix( "[ \+
\begin{verbatim}
3*Dy*Dz-Dz^2+Dx+3*Dy-Dz, 3*Dy*Dz-Dz^2, \
Dx*Dz+Dz^2+Dz, Dx*Dz+Dz^2, \
Dx*Dy, 0, \
Dz^2-Dx+Dz, 3*Dx*Dy+Dz^2, \
Dx^2, 0, \
-Dz^2+Dx-Dz, 3*Dx^2-Dz^2, \
Dz^3-Dx*Dz+Dz^2, Dz^3, \
2*x*Dz^2-2*x*Dx+2*x*Dz+3*Dx+3*Dz+3,2*x*Dz^2+3*Dx+3*Dz\
]", 8, 2, A3 );
\end{verbatim}
}
\begin{verbatim}
<A homalg external 8 by 2 matrix>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ N := LeftPresentation( nmat );+}
\begin{verbatim}
<A left module presented by 8 relations for 2 generators>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ filt := PurityFiltration( N );+}
\begin{verbatim}
<The ascending purity filtration with degrees [ -3 .. 0 ] and graded parts:
0: <A zero left module>
-1: <A cyclic reflexively pure codim 1 left module presented by
1 relation for a cyclic generator>
-2: <A cyclic reflexively pure codim 2 left module presented by
2 relations for a cyclic generator>
-3: <A cyclic reflexively pure codim 3 left module presented by
3 relations for a cyclic generator>
of
<A non-pure codim 1 left module presented by 8 relations for 2 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ II_E := SpectralSequence( filt );+}
\begin{verbatim}
<A stable homological spectral sequence with sheets at levels [ 0 .. 2 ]
each consisting of left modules at bidegrees [ -3 .. 0 ]x[ 0 .. 3 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( II_E );+}
\begin{verbatim}
The associated transposed spectral sequence:
a homological spectral sequence at bidegrees
[ [ 0 .. 3 ], [ -3 .. 0 ] ]
---------
Level 0:
* * * *
. * * *
. . * *
. . . *
---------
Level 1:
* * * *
. . . .
. . . .
. . . .
---------
Level 2:
s . . .
. . . .
. . . .
. . . .
Now the spectral sequence of the bicomplex:
a homological spectral sequence at bidegrees
[ [ -3 .. 0 ], [ 0 .. 3 ] ]
---------
Level 0:
* * * *
. * * *
. . * *
. . . *
---------
Level 1:
* * * *
. * * *
. . * *
. . . .
---------
Level 2:
s . . .
. s . .
. . s .
. . . .
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ m := IsomorphismOfFiltration( filt );+}
\begin{verbatim}
<An isomorphism of left modules>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ IsIdenticalObj( Range( m ), N );+}
\begin{verbatim}
true
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Source( m );+}
\begin{verbatim}
<A left module presented by 6 relations for 3 generators (locked)>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( last );+}
\begin{verbatim}
Dx,-1/3,-2/9*x,
0, Dy, -1/3,
0, Dx, 1,
0, 0, Dz,
0, 0, Dy,
0, 0, Dx
Cokernel of the map
R^(1x6) --> R^(1x3), ( for R := Q[x,y,z]<Dx,Dy,Dz> )
currently represented by the above matrix
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( filt );+}
\begin{verbatim}
Degree 0:
0
----------
Degree -1:
Q[x,y,z]<Dx,Dy,Dz>/< Dx >
----------
Degree -2:
Q[x,y,z]<Dx,Dy,Dz>/< Dy, Dx >
----------
Degree -3:
Q[x,y,z]<Dx,Dy,Dz>/< Dz, Dy, Dx >
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( m );+}
\begin{verbatim}
1, 1,
-3*Dz-3, -3*Dz,
-3*Dz^2+3*Dx-3*Dz,-3*Dz^2
the map is currently represented by the above 3 x 2 matrix
\end{verbatim}
}
\end{exmp}
\begin{exmp}[Cohomological {\tt GrothendieckSpectralSequence}]\label{TorExt:Grothendieck}
Example~\ref{LeftPresentation} continued. Compute the $\Tor$-$\Ext$ spectral sequence for the triple $F:=-\otimes W$, $G:=\Hom(-,D)$, and the $D$-module $W$. This is an example for Subsection~\ref{CGB}.
\medskip
\noindent
{\small
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ F := InsertObjectInMultiFunctor( Functor_TensorProduct, 2, W, "TensorW" );+}
\begin{verbatim}
<The functor TensorW>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ G := LeftDualizingFunctor( Qxyz );;+}\\
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ II_E := GrothendieckSpectralSequence( F, G, W );+}
\begin{verbatim}
<A stable cohomological spectral sequence with sheets at levels [ 0 .. 4 ]
each consisting of left modules at bidegrees [ -3 .. 0 ]x[ 0 .. 3 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ homalgRingStatistics(Qxyz);+}
\begin{verbatim}
rec( BasisOfRowModule := 110, BasisOfColumnModule := 16,
BasisOfRowsCoeff := 50, BasisOfColumnsCoeff := 60, DecideZeroRows := 241,
DecideZeroColumns := 31, DecideZeroRowsEffectively := 51,
DecideZeroColumnsEffectively := 63, SyzygiesGeneratorsOfRows := 184,
SyzygiesGeneratorsOfColumns := 63 )
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( II_E );+}
\begin{verbatim}
The associated transposed spectral sequence:
a cohomological spectral sequence at bidegrees
[ [ 0 .. 3 ], [ -3 .. 0 ] ]
---------
Level 0:
* * * *
* * * *
. * * *
. . * *
---------
Level 1:
* * * *
. . . .
. . . .
. . . .
---------
Level 2:
s s s s
. . . .
. . . .
. . . .
Now the spectral sequence of the bicomplex:
a cohomological spectral sequence at bidegrees
[ [ -3 .. 0 ], [ 0 .. 3 ] ]
---------
Level 0:
* * * *
* * * *
. * * *
. . * *
---------
Level 1:
* * * *
* * * *
. * * *
. . . *
---------
Level 2:
* * s s
* * * *
. * * *
. . . *
---------
Level 3:
* s s s
. s s s
. . s *
. . . s
---------
Level 4:
s s s s
. s s s
. . s s
. . . s
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ filt := FiltrationBySpectralSequence( II_E, 0 );+}
\begin{verbatim}
<A descending filtration with degrees [ -3 .. 0 ] and graded parts:
-3: <A non-zero cyclic left module presented by
3 relations for a cyclic generator>
-2: <A non-zero left module presented by 17 relations for 6 generators>
-1: <A non-zero left module presented by 19 relations for 9 generators>
0: <A non-zero left module presented by 13 relations for 10 generators>
of
<A left module presented by 66 relations for 41 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ ByASmallerPresentation( filt );+}
\begin{verbatim}
<A descending filtration with degrees [ -3 .. 0 ] and graded parts:
-3: <A non-zero cyclic left module presented by
3 relations for a cyclic generator>
-2: <A non-zero left module presented by 12 relations for 4 generators>
-1: <A non-zero left module presented by 18 relations for 8 generators>
0: <A non-zero left module presented by 11 relations for 10 generators>
of
<A left module presented by 21 relations for 12 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ m := IsomorphismOfFiltration( filt );+}
\begin{verbatim}
<An isomorphism of left modules>
\end{verbatim}
}
\end{exmp}
\begin{exmp}[$\Tor$-$\Ext$ spectral sequence]\label{TorExt:Bifunctor}
Here we compute the $\Tor$-$\Ext$ spectral sequence of the bicomplex $B :=\Hom(P^W,D)\otimes P^W$. This is an example for Subsection~\ref{HPP}.
\medskip
\noindent
{\small
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ P := Resolution( W );+}
\begin{verbatim}
<A right acyclic complex containing 3 morphisms of left modules at degrees
[ 0 .. 3 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ GP := Hom( P );+}
\begin{verbatim}
<A cocomplex containing 3 morphisms of right modules at degrees [ 0 .. 3 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ FGP := GP * P;+}
\begin{verbatim}
<A cocomplex containing 3 morphisms of left complexes at degrees [ 0 .. 3 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ BC := HomalgBicomplex( FGP );+}
\begin{verbatim}
<A bicocomplex containing left modules at bidegrees [ 0 .. 3 ]x[ -3 .. 0 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ p_degrees := ObjectDegreesOfBicomplex( BC )[1];+}
\begin{verbatim}
[ 0 .. 3 ]
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ II_E := SecondSpectralSequenceWithFiltration( BC, p_degrees );+}
\begin{verbatim}
<A stable cohomological spectral sequence with sheets at levels [ 0 .. 4 ]
each consisting of left modules at bidegrees [ -3 .. 0 ]x[ 0 .. 3 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ homalgRingStatistics(Qxyz);+}
\begin{verbatim}
rec( BasisOfRowModule := 109, BasisOfColumnModule := 1,
BasisOfRowsCoeff := 48, BasisOfColumnsCoeff := 0, DecideZeroRows := 190,
DecideZeroColumns := 1, DecideZeroRowsEffectively := 49,
DecideZeroColumnsEffectively := 0, SyzygiesGeneratorsOfRows := 166,
SyzygiesGeneratorsOfColumns := 2 )
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( II_E );+}
\begin{verbatim}
The associated transposed spectral sequence:
a cohomological spectral sequence at bidegrees
[ [ 0 .. 3 ], [ -3 .. 0 ] ]
---------
Level 0:
* * * *
* * * *
* * * *
* * * *
---------
Level 1:
* * * *
. . . .
. . . .
. . . .
---------
Level 2:
s s s s
. . . .
. . . .
. . . .
Now the spectral sequence of the bicomplex:
a cohomological spectral sequence at bidegrees
[ [ -3 .. 0 ], [ 0 .. 3 ] ]
---------
Level 0:
* * * *
* * * *
* * * *
* * * *
---------
Level 1:
* * * *
* * * *
* * * *
* * * *
---------
Level 2:
* * s s
* * * *
. * * *
. . . *
---------
Level 3:
* s s s
. s s s
. . s *
. . . s
---------
Level 4:
s s s s
. s s s
. . s s
. . . s
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ filt := FiltrationBySpectralSequence( II_E, 0 );+}
\begin{verbatim}
<A descending filtration with degrees [ -3 .. 0 ] and graded parts:
-3: <A non-zero cyclic left module presented by
3 relations for a cyclic generator>
-2: <A non-zero left module presented by 17 relations for 7 generators>
-1: <A non-zero left module presented by 25 relations for 12 generators>
0: <A non-zero left module presented by 13 relations for 10 generators>
of
<A left module presented by 38 relations for 24 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ ByASmallerPresentation( filt );+}
\begin{verbatim}
<A descending filtration with degrees [ -3 .. 0 ] and graded parts:
-3: <A non-zero cyclic left module presented by
3 relations for a cyclic generator>
-2: <A non-zero left module presented by 12 relations for 4 generators>
-1: <A non-zero left module presented by 21 relations for 8 generators>
0: <A non-zero left module presented by 11 relations for 10 generators>
of
<A left module presented by 23 relations for 12 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ m := IsomorphismOfFiltration( filt );+}
\begin{verbatim}
<An isomorphism of left modules>
\end{verbatim}
}
\end{exmp}
\begin{exmp}[{\tt CodegreeOfPurity}]\label{CodegreeOfPurity}
For two torsion-free $D$-modules $V$ and $W$ of rank $2$ compute the three homological invariants
\begin{itemize}
\item projective dimension,
\item {\sc Auslander}'s degree of torsion-freeness, and
\item codegree of purity
\end{itemize}
mentioned in Subsection~\ref{codegree}. The codegree of purity is able to distinguish the two modules.
\medskip
\noindent
{\small
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ vmat := HomalgMatrix( "[ \+
\begin{verbatim}
0, 0, x,-z, \
x*z,z^2,y,0, \
x^2,x*z,0,y \
]", 3, 4, Qxyz );
\end{verbatim}
}
\begin{verbatim}
<A homalg external 3 by 4 matrix>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ V := LeftPresentation( vmat );+}
\begin{verbatim}
<A non-zero left module presented by 3 relations for 4 generators>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ wmat := HomalgMatrix( "[ \+
\begin{verbatim}
0, 0, x,-y, \
x*y,y*z,z,0, \
x^2,x*z,0,z \
]", 3, 4, Qxyz );
\end{verbatim}
}
\begin{verbatim}
<A homalg external 3 by 4 matrix>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ W := LeftPresentation( wmat );+}
\begin{verbatim}
<A non-zero left module presented by 3 relations for 4 generators>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Rank( V );+}
\begin{verbatim}
2
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Rank( W );+}
\begin{verbatim}
2
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ ProjectiveDimension( V );+}
\begin{verbatim}
2
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ ProjectiveDimension( W );+}
\begin{verbatim}
2
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ DegreeOfTorsionFreeness( V );+}
\begin{verbatim}
1
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ DegreeOfTorsionFreeness( W );+}
\begin{verbatim}
1
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ CodegreeOfPurity( V );+}
\begin{verbatim}
[ 2 ]
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ CodegreeOfPurity( W );+}
\begin{verbatim}
[ 1, 1 ]
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ filtV := PurityFiltration( V );+}
\begin{verbatim}
<The ascending purity filtration with degrees [ -2 .. 0 ] and graded parts:
0: <A codegree-[ 2 ]-pure rank 2 left module presented by
3 relations for 4 generators>
-1: <A zero left module>
-2: <A zero left module>
of
<A codegree-[ 2 ]-pure rank 2 left module presented by
3 relations for 4 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ filtW := PurityFiltration( W );+}
\begin{verbatim}
<The ascending purity filtration with degrees [ -2 .. 0 ] and graded parts:
0: <A codegree-[ 1, 1 ]-pure rank 2 left module presented by
3 relations for 4 generators>
-1: <A zero left module>
-2: <A zero left module>
of
<A codegree-[ 1, 1 ]-pure rank 2 left module presented by
3 relations for 4 generators>>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ II_EV := SpectralSequence( filtV );+}
\begin{verbatim}
<A stable homological spectral sequence with sheets at levels [ 0 .. 4 ]
each consisting of left modules at bidegrees [ -3 .. 0 ]x[ 0 .. 2 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( II_EV );+}
\begin{verbatim}
The associated transposed spectral sequence:
a homological spectral sequence at bidegrees
[ [ 0 .. 2 ], [ -3 .. 0 ] ]
---------
Level 0:
* * *
* * *
* * *
. * *
---------
Level 1:
* * *
. . .
. . .
. . .
---------
Level 2:
s . .
. . .
. . .
. . .
Now the spectral sequence of the bicomplex:
a homological spectral sequence at bidegrees
[ [ -3 .. 0 ], [ 0 .. 2 ] ]
---------
Level 0:
* * * *
* * * *
. * * *
---------
Level 1:
* * * *
* * * *
. . * *
---------
Level 2:
* . . .
* . . .
. . * *
---------
Level 3:
* . . .
. . . .
. . . *
---------
Level 4:
. . . .
. . . .
. . . s
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ II_EW := SpectralSequence( filtW );+}
\begin{verbatim}
<A stable homological spectral sequence with sheets at levels [ 0 .. 4 ]
each consisting of left modules at bidegrees [ -3 .. 0 ]x[ 0 .. 2 ]>
\end{verbatim}
{\color{blue}\verb+gap>+}{\color{OrangeRed}\verb+ Display( II_EW );+}
\begin{verbatim}
The associated transposed spectral sequence:
a homological spectral sequence at bidegrees
[ [ 0 .. 2 ], [ -3 .. 0 ] ]
---------
Level 0:
* * *
* * *
. * *
. . *
---------
Level 1:
* * *
. . .
. . .
. . .
---------
Level 2:
s . .
. . .
. . .
. . .
Now the spectral sequence of the bicomplex:
a homological spectral sequence at bidegrees
[ [ -3 .. 0 ], [ 0 .. 2 ] ]
---------
Level 0:
* * * *
. * * *
. . * *
---------
Level 1:
* * * *
. * * *
. . . *
---------
Level 2:
* . . .
. * . .
. . . *
---------
Level 3:
* . . .
. . . .
. . . *
---------
Level 4:
. . . .
. . . .
. . . s
\end{verbatim}
}
\end{exmp}
\bigskip
An alternative title for this work could have been ``Squeezing spectral sequences''.
\endinput
\section{Introduction}
The motivation behind this work was the need for algorithms to explicitly construct several natural filtrations of modules. It is already known that all these filtrations can be described in a unified way using spectral sequences of filtered complexes, which in turn suggests a unified algorithm to construct all of them. Describing this algorithm is the main objective of the present paper.
Since {\sc Verdier} it became more and more apparent that one should be studying complexes of modules rather than single modules. A single module is then represented by one of its resolutions, all quasi-isomorphic to each other. The idea is now very simple:
\smallskip
\begin{center}
\framebox{
\begin{minipage}[c]{0.95\linewidth}
If there is no direct way to construct a certain natural filtration on a module $M$, it might be simpler to explicitly realize $M$ as one of the (co)homologies $H_n(C)$ of some complex $C$ with some easy constructible (natural) filtration, such that the filtration induced on $H_n(C)$ (by the one on $C$) maps by the explicit isomorphism $H_n(C)\cong M$ onto the looked-for filtration on $M$.
\end{minipage}
}
\end{center}
\smallskip
In this work it will be shown how to compute the induced filtration on $H_n(C)$ using spectral sequences of filtered complexes, enriched with some extra data. This provides a unified approach for constructing numerous important filtrations of modules and sheaves of modules (cf.~\cite[Chap.~5]{WeiHom} and \cite[Chap.~11]{Rot}). Since we are interested in effective computations we restrict ourself for simplicity to \emph{finite type} complexes carrying \emph{finite} filtrations.
\smallskip
When talking about $D$-modules the ring $D$ is assumed associative with one.
\begin{defn}[Filtered module]
Let $M$ be a $D$-module.
\begin{enumerate}
\item[(a)] A chain of submodules $(F_p M)_{p\in\Z}$ of the module $M$ is called an \textbf{ascending filtration} if $F_{p-1} M \leq F_p M$. The $p$-th \textbf{graded part} is the subfactor module defined by $\gr_p M := F_p M / F_{p-1} M$.
\item[(d)] A chain of submodules $(F^p M)_{p\in\Z}$ of the module $M$ is called a \textbf{descending filtration} if $F^p M \geq F^{p+1} M$. The $p$-th \textbf{graded part} is the subfactor module defined by $\gr^p M := F^p M / F^{p+1} M$.
\end{enumerate}
All filtrations of modules will be assumed \textbf{exhaustive} (i.e.\ $\bigcup_p F_p M = M$), \textbf{Hausdorff} (i.e.\ $\bigcap_p F_p M= 0$), and will have \textbf{finite length} $m$ (i.e.\ the difference between the highest and the lowest stable index is at most $m$). Such filtrations are called $m$-step filtrations.
\end{defn}
We start with two examples that will be pursued in Section \ref{appl}:
\begin{enumerate}
\item[\textbf{(d)}] Let $M$ and $N$ be right $D$-modules and $M^*:=\Hom_D(M,D)$ the dual (left) $D$-module of $M$. The map
\[
\phi:
\left\{
\begin{array}{ccc}
N\otimes_D M^* &\to& \Hom_D(M,N) \\
n \otimes \alpha &\mapsto& (m \mapsto n\alpha(m))
\end{array}
\right.
\]
is in general neither injective nor surjective. In fact, $\img\phi$ is the last (graded) part of a \textbf{d}escending filtration of $\Hom(M,N)$.
\begin{equation*}\label{HomMN}
\xymatrix@C=3cm@R=0.62cm{
& *=0{\mbox{\small\textbullet}}
\jumpdir{\Hom(M,N)}{/:a(-90) +1.3cm/}
\ar@{-}[d]
\\
& *=0{{\mbox{\small\textbullet}}}
{\ar@{.}[d]}
\\
& *=0{{\mbox{\small\textbullet}}}
\ar@{-}[d]
\\
*=0{\mbox{\small\textbullet}}
\ar@{-}[d] \jumpdir{N\otimes M^*}{/:a(190) +0.5cm/}
\ar[r]
& *=0{{\mbox{\small\textbullet}}}
{\jumpdir{\rklammer{1.2cm}\coker\phi}{/:a(50)1.65cm/}}
\ar@{-}[d]
\\
*=0{{\mbox{\small\textbullet}}}
{\ar@{-}[d]}
\jumpdir{\phi}{/:a(100) +1.6cm/}
\ar[r]
{\jumpdir{\phantom{\coim\phi \big\{}}{/:a(160)+1cm/}}
{\jumpdir{\coim\phi \big\{}{/:a(161)+0.98cm/}}
& *=0{\mbox{\small\textbullet}}
{\jumpdir{\big\}\img\phi}{/:a(23)+0.85cm/}}
\\
*=0{{\mbox{\small\textbullet}}} {\jumpdir{\ker\phi \big\{}{/:a(160)+0.85cm/}}
}\end{equation*}
\item[\textbf{(a)}] Dually, let $M$ be a left module, $L$ a right module, and
\[
\varepsilon:M \to M^{**}:=\Hom(\Hom(M,D),D)
\]
the \textbf{evaluation map}. The composition $\psi$
\[
\xymatrix@1{
L\otimes_D M
\ar@/_1pc/[rr]_{\psi}
{\ar[r]^<(.25){{\mathrm{id}}\otimes\varepsilon}} &
{L\otimes M^{**} \ar[r]^>(.75)\phi} &
\Hom_D(M^*,L)
}
\]
is in general neither injective nor surjective. It will turn out that its coimage $\coim\psi$ is the last graded part of an \textbf{a}scending filtration of $L\otimes M$.
\begin{equation*}\label{MN}
\xymatrix@C=3cm@R=0.62cm{
& *=0{\mbox{\small\textbullet}}
\jumpdir{\Hom(M^*,L)}{/:a(-80) +1.3cm/}
\ar@{-}[d]
\\
*=0{\mbox{\small\textbullet}}
\ar@{-}[d] \jumpdir{L\otimes M}{/:a(190) +0.5cm/}
\ar[r]
& *=0{{\mbox{\small\textbullet}}}
{\jumpdir{\big\}\coker\psi}{/:a(20)1.07cm/}}
\ar@{-}[d]
\\
*=0{{\mbox{\small\textbullet}}}
{\ar@{-}[d]}
\jumpdir{\psi}{/:a(100) +1.6cm/}
\ar[r]
{\jumpdir{\phantom{\coim\psi \big\{}}{/:a(160)+1cm/}}
{\jumpdir{\coim\psi \big\{}{/:a(161)+0.98cm/}}
& *=0{\mbox{\small\textbullet}}
{\jumpdir{\big\}\img\psi}{/:a(22)+0.85cm/}}
\\
*=0{{\mbox{\small\textbullet}}}
{\jumpdir{\ker\psi \lklammer{0.98cm}}{/:a(200)+0.9cm/}}
{\ar@{.}[d]}
\\
*=0{{\mbox{\small\textbullet}}}
\ar@{-}[d]
\\
*=0{\mbox{\small\textbullet}}
}\end{equation*}
\end{enumerate}
Example \textbf{(a)} has a geometric interpretation.
\begin{enumerate}
\item[\textbf{(a')}] Let $D$ be a commutative {\sc Noether}ian ring with $1$. Recall that the {\sc Krull} dimension $\dim D$ is defined to be the supremum of the lengths $d$ of chains of prime ideals $D>\mathfrak{p}_0> \cdots > \mathfrak{p}_d$. For example, the \textbf{{\sc Krull} dimension} of a field $k$ is zero, $\dim \Z = 1$, and $\dim D[x_1,\ldots,x_n]=\dim D+n$.
The definition of the {\sc Krull} dimension is then extended to nontrivial $D$-modules using
\[
\dim M := \dim \frac{D}{\Ann_D(M)}.
\]
Define the \textbf{codimension} of a nontrivial module $M$ as
\[
\codim M := \dim D - \dim M
\]
and set the codimension of the zero module to be $\infty$. If for example $D$ is a (commutative) principal ideal domain which is not a field, then the finitely generated $D$-modules of codimension $1$ are precisely the finitely generated torsion modules.
\begin{defn}[Purity filtration]
Let $D$ be a commutative {\sc Noether}ian ring with $1$ and $M$ a $D$-module. Define the submodule $\tor_{-c}M$ as the biggest submodule of $M$ of codimension $\geq c$. The \emph{ascending} filtration
\[
\cdots \leq \tor_{-(c+1)}M \leq \tor_{-c}M \leq \cdots \leq \tor_{-1} M \leq \tor_0 M := M
\]
is called the \textbf{purity filtration} of $M$ \cite[Def.~1.1.4]{HL}. The graded part $M_c:=\tor_{-c}M/\tor_{-(c+1)}M$ is \textbf{pure} of codimension $c$, i.e.\ any nontrivial submodule of $M_c$ has codimension $c$. $\tor_{-1} M$ is nothing but the torsion submodule $\tor(M)$. This suggests calling $\tor_{-c} M$ the \textbf{$c$-th (higher) torsion submodule} of $M$.
\end{defn}
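To have a toy example at hand (added for orientation only): over $D=\Z$, where $\dim D=1$, the module $M=\Z\oplus\Z/4\Z$ has the purity filtration
\[
0 = \tor_{-2}M \;\leq\; \tor_{-1}M = 0\oplus\Z/4\Z \;\leq\; \tor_0 M = M,
\]
with graded parts $M_1\cong\Z/4\Z$ pure of codimension $1$ and $M_0\cong\Z$ pure of codimension $0$.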
Early references to the purity filtration are {\sc J.-E.~Roos}'s pioneering paper \cite{Roos} where he introduced the \textbf{bidualizing complex}, {\sc M.~Kashiwara}'s master thesis (December~1970) \cite[Theorem 3.2.5]{Kash} on algebraic $D$-modules, and {\sc J.-E.~Björk}'s standard reference \cite[Chap.~2, Thm.~4.15]{Bjo}. All these references address the construction of this filtration from a homological\footnote{{\sc Kashiwara} did not use spectral sequences: ``Instead of using spectral sequences, Sato devised [...] a method using associated cohomology'', \cite[Section~3.2]{Kash}.} point of view, where the assumption of commutativity of the ring $D$ can be dropped.
Under some mild conditions on the \emph{not} necessarily commutative ring $D$ one can characterize the purity filtration in the following way: There exist so-called \textbf{higher evaluation maps} $\varepsilon_c$, generalizing the standard evaluation map, such that the sequence
\[
0 \xrightarrow{} \tor_{-(c+1)}M \xrightarrow{} \tor_{-c} M \xrightarrow{\varepsilon_c} \Ext^c_D(\Ext^c_D(M,D),D)
\]
is exact (cf.~\cite{AB,QEB}). $\varepsilon_c$ can thus be viewed as a \textbf{natural transformation} between the \textbf{$c$-th torsion functor} $\tor_{-c}$ and the \textbf{$c$-th bidualizing functor} $\Ext^c(\Ext^c(-,D),D)$. In Subsection~\ref{bidual} it will be shown how to use spectral sequences of filtered complexes to construct all the higher evaluation maps $\varepsilon_c$. More generally it is evident that spectral sequences are natural birthplaces for many natural transformations.
Now to see the connection to the previous example (a) set $L=D$ as a right $D$-module. $\psi$ then becomes the evaluation map $\varepsilon$.
\end{enumerate}
There still exists a misunderstanding concerning spectral sequences of filtered complexes, and it might be appropriate to address it here. Let $C$ be a filtered complex (cf.~Def.~\ref{filt_complex} and Remark~\ref{comp}). (*) We even assume $C$ of \emph{finite type} and the filtration \emph{finite}. The filtration on $C$ \emph{induces} a filtration on its (co)homologies $H_n(C)$. It is sometimes believed that the spectral sequence $E^r_{pq}$ associated to the filtered complex $C$ cannot be used to determine the induced filtration on $H_n(C)$, but can only be used to determine its graded parts $\gr_p H_n(C)$. One might easily be led to this conclusion since the last page of the spectral sequence consists of precisely these graded parts $E^\infty_{pq}=\gr_p H_{p+q}(C)$, and computing the last page is traditionally regarded as the last step in determining the spectral sequence. It is clear that even the knowledge of the total (co)homology $H_n(C)$ as a whole (along with the knowledge of the graded parts $\gr_p H_n(C)$) is in general \emph{not} enough to determine the filtration. Another reason might be the use of the phrase ``computing a spectral sequence''. Very often this means a successful attempt to figure out the morphisms on some of the pages of the spectral sequence, or even better, working skillfully around determining most or even all of these morphisms and nevertheless deducing enough or even all information about the last page $E^\infty$. This often makes use of ingenious arguments only valid in the example or family of examples under consideration. For this reason we add the word \textbf{effective} to the above phrase, and by ``effectively computing the spectral sequence'' we mean \emph{explicitly} determining \emph{all} morphisms on \emph{all} pages of the spectral sequence. Indeed, the definition one finds in standard textbooks like \cite[Section~5.4]{WeiHom} of the spectral sequence associated to a complex of \emph{finite type} carrying a \emph{finite} filtration \emph{is constructive} in the sense that it can be implemented on a computer (see \cite{homalg-package}). The message of this work is the following:
\smallskip
\begin{center}
\framebox{
\begin{minipage}[c]{0.9\linewidth}
If the spectral sequence of a filtered complex is effectively computable, then, with some extra work, the induced filtration on the total (co)homology is effectively computable as well.
\end{minipage}
}
\end{center}
\smallskip
By definition, the objects $E^r_{pq}$ of the spectral sequence associated to the filtered complex $C$ are subfactors of the total object $C_{p+q}$ (see Sections~\ref{les} and \ref{filt}). In Section~\ref{genmor} we introduce the notion of a \textbf{generalized embedding} to keep track of this information. The central idea of this work is to use the generalized embeddings $E^\infty_{pq} \to C_{p+q}$ to filter the total (co)homology $H_{p+q}(C)$ --- also a subfactor of $C_{p+q}$. This is the content of Theorem~\ref{mainthm}.
Effectively computing the induced filtration is not a mainstream application of spectral sequences. Very often, especially in topology, the total filtered complex is not completely known, or is of \emph{infinite} type, although the (total) (co)homology is known to be of finite type. But from some page on, the objects of the spectral sequence become \emph{intrinsic} and of \emph{finite type}. Pushing the spectral sequence to convergence and determining the isomorphism type of the low degree total (co)homologies is already highly nontrivial. The reader is referred to \cite{RS} and the impressive program {\tt Kenzo} \cite{kenzo}. In its current stage, {\tt Kenzo} is able to compute $A_\infty$-structures on cohomology. The goal here is nevertheless of a different nature, namely to effectively compute the induced filtration on the \emph{a priori known} (co)homology. The shape of the spectral sequence starting from the \emph{intrinsic} page will also be used to define new numerical invariants of modules and sheaves of modules (cf.~Subsection~\ref{codegree}).
\smallskip
The approach favored here makes extensive use of \textbf{generalized maps}, a concept motivated in Section~\ref{les}, introduced in Section~\ref{genmor}, and put into action starting from Section~\ref{filt}.
\smallskip
\begin{center}
\framebox{
\begin{minipage}[c]{0.9\linewidth}
Generalized maps can be viewed as a \emph{data structure} that allows \emph{reorganizing} many algorithms in homological algebra as \emph{closed formulas}.
\end{minipage}
}
\end{center}
\medskip
Although the whole theoretical content of this work can be done over an abstract abelian category, it is sometimes convenient to be able to refer to elements. The discussion in \cite[p.~203]{Har} explains why this can be assumed without loss of generality.
\section{A generality on subobject lattices}
The following situation will be repeatedly encountered in the sequel. Let $C$ be an object in an abelian category, $Z$, $B$, and $A$ subobjects with $B\leq Z$. Then the subobject lattice\footnote{I learned drawing these pictures from Prof.~{\sc Joachim Neubüser}. He made intensive use of subgroup lattices in his courses on finite group theory to visualize arguments and even make proofs.} of $C$ is at most a \textbf{degeneration} of the one in Figure~\ref{ABZ}.
\begin{figure}[htb]
\begin{minipage}[c]{0.55\linewidth}
\centering
\psfrag{$C$}{$C$}
\psfrag{$A$}{$A$}
\psfrag{$B$}{$B$}
\psfrag{$Z$}{$Z$}
\psfrag{$A'$}{$A'$}
\includegraphics[width=0.4\textwidth]{ABZ.eps}
\end{minipage}
\caption{A general lattice with subobjects $B\leq Z$ and $A$}
\label{ABZ}
\end{figure}
This lattice makes no statement about the ``size'' of $B$ or $Z$ compared to $A$, since, in general, neither $B$ nor $Z$ is in a $\leq$-relation with $A$. The \textbf{second\footnote{Here we follow the numbering in {\sc Emmy Noether}'s fundamental paper \cite{HomSatz}.} isomorphism theorem} can be applied ten times within this lattice, two for each of the five parallelograms. The subobject $A$ leads to the \textbf{intermediate subobject} $A':=(A+B)\cap Z$ sitting between $B$ and $Z$, which in general neither coincides with $Z$ nor with $B$. Hence, a $2$-step filtration $0\leq A \leq C$ leads to a $2$-step filtration $0 \leq A'/B \leq Z/B$.
Arguing in terms of subobject lattices is a manifestation of the isomorphism theorems, all being immediate corollaries of the homomorphism theorem (cf.~\cite{HomSatz}).
\section{Long exact sequences as spectral sequences}\label{les}
Long exact sequences are in a precise sense a precursor of spectral sequences of filtered complexes. They have the advantage of being a lot easier to comprehend. The core idea around which this work is built can already be illustrated using long exact sequences, which is the aim of this section.
Long exact sequences often occur as the sequence connecting the homologies
\[
\cdots \leftarrow{} {\color{darkgreen} H_{n-1}(A)} \xleftarrow{{\color{red}\partial_*}} {\color{brown} H_n(R)}
\xleftarrow{\nu_*} {\color{blue} H_n(C)} \xleftarrow{\iota_*}
{\color{darkgreen} H_n(A)}\xleftarrow{{\color{red}\partial_*}} {\color{brown} H_{n+1}(R)} \xleftarrow{} \cdots
\]
of a \emph{short exact} sequence of complexes $0 \xleftarrow{} {\color{brown}R} \xleftarrow{\nu} {\color{blue}C} \xleftarrow{\iota} {\color{darkgreen}A} \xleftarrow{} 0$. If one views $(A,\partial_A)$ as a subcomplex of $(C,\partial)$, then $(R,\partial_R)$ can be identified with the quotient complex $C/A$. Moreover $\partial_A$ is then $\partial_{|A}$ and $\partial_R$ is the boundary operator induced by $\partial$ on the quotient $R$. The natural maps $\partial_*$ appearing in the long exact sequence are the so-called connecting homomorphisms and are, like $\partial_A$ and $\partial_R$, induced by the boundary operator $\partial$ of the total complex $C$.
To see in which sense a long exact sequence is a special case of a spectral sequence of a filtered complex we first recall the definition of a filtered complex.
\begin{defn}[Filtered complex]\label{filt_complex}
We distinguish between chain and cochain complexes:
\begin{enumerate}
\item[(a)] A chain of subcomplexes $(F_p C)_{p\in\Z}$ (i.e. $\partial(F_p C_n) \leq F_p C_{n-1}$ for all $n$)
of the chain complex $(C_\bullet,\partial)$ is called an \textbf{ascending filtration}
if $F_{p-1} C \leq F_p C$. The $p$-th \textbf{graded part} is the subfactor chain complex
defined by $\gr_p C := F_p C / F_{p-1} C$.
\item[(d)] A chain of subcomplexes $(F^p C)_{p\in\Z}$ (i.e. $\partial(F^p C^n) \leq F^p C^{n+1}$ for all $n$)
of the \emph{co}chain complex $(C^\bullet,\partial)$ is called a \textbf{descending filtration}
if $F^p C \geq F^{p+1} C$. The $p$-th \textbf{graded part} is the subfactor cochain complex
defined by $\gr^p C := F^p C / F^{p+1} C$.
\end{enumerate}
As for modules, all filtrations of complexes will be \textbf{exhaustive} (i.e.\ $\bigcup_p F_p C = C$), \textbf{Hausdorff} (i.e.\ $\bigcap_p F_p C= 0$), and will have \textbf{finite length} $m$ (i.e.\ the difference between the highest and the lowest stable index is at most $m$). Such filtrations are called $m$-step filtrations in the sequel.
Convention: For the purposes of this work, filtrations on chain complexes are always ascending, whereas filtrations on \emph{co}chain complexes are descending.
\end{defn}
\begin{rmrk}
Before continuing with the previous discussion it is important to note that
\begin{enumerate}
\item[(a)] The filtration $(F_p C_n)$ of $C_n$ \emph{induces} an ascending filtration
on the homology $H_n(C)$. Its $p$-th graded part is denoted by $\gr_p H_n(C)$.
\item[(d)] The filtration $(F^p C^n)$ of $C^n$ \emph{induces} a descending filtration
on the cohomology $H^n(C)$. Its $p$-th graded part is denoted by $\gr^p H^n(C)$.
\end{enumerate}
More precisely, $F_p H_n(C)$ is the image of the morphism $H_n(F_p C) \to H_n(C)$.
\end{rmrk}
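A toy example may be helpful: take for $C$ the chain complex $0 \xleftarrow{} \Z \xleftarrow{2} \Z \xleftarrow{} 0$, concentrated in degrees $0$ and $1$, with the $2$-step filtration given by $F_{-1}C=0$, the subcomplex $F_0 C$ with $(F_0C)_1=0$ and $(F_0C)_0=\Z$, and $F_1 C = C$. Then $H_0(F_0 C)=\Z$ maps \emph{onto} $H_0(C)=\Z/2\Z$, but certainly not injectively; the image of this map is all of $H_0(C)$, so $F_0 H_0(C)=H_0(C)$ and $\gr_1 H_0(C)=0$, although $\gr_1 C \neq 0$. This is the reason why the induced filtration is defined via the \emph{image} of $H_n(F_p C) \to H_n(C)$.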
A short exact sequence of (co)chain complexes $0 \xleftarrow{} {\color{brown}R} \xleftarrow{\nu} {\color{blue}C} \xleftarrow{\iota} {\color{darkgreen}A} \xleftarrow{} 0$ can be viewed as a $2$-step filtration $0\leq A \leq C$ of the complex $C$ with graded parts $A$ and $R$. Following the above convention the filtration is ascending or descending depending on whether $C$ is a chain or cochain complex.
The main idea behind long exact sequences is to relate the homologies of the total chain complex $C$ with the homologies of its graded parts $A$ and $R$. This is precisely the idea behind spectral sequences of filtered complexes as well, generalized to $m$-step filtrations, where $m$ may now be larger than $2$. Roughly speaking, the spectral sequence of a filtered complex measures how far the graded part $\gr_p H_n(C)$ of the filtered $n$-th homology $H_n(C)$ of the total filtered complex $C$ is away from simply being the homology $H_n(\gr_p C)$ of the $p$-th graded part of $C$. This would for example happen if the filtration $F_p C$ is induced by its own grading\footnote{In the context of long exact sequences this would mean that the short exact sequence of complexes $0 \xleftarrow{} R \xleftarrow{\nu} C \xleftarrow{\iota} A \xleftarrow{} 0$ splits.}, i.e.\ $F_p C = \bigoplus_{p' \leq p} \gr_{p'} C$, since then the homologies of $C$ will simply be the direct sum of the homologies of the graded parts $\gr_p C$. In general, $\gr_p H_n(C)$ will only be a \emph{subfactor} of $H_n(\gr_p C)$.
Long exact sequences do not have a direct generalization to $m$-step filtrations, $m>2$. The language of spectral sequences offers in this respect a better alternative. In order to make the transition to the language of spectral sequences notice that the graded parts $\coker(\iota_*)$ and $\ker(\nu_*)$ of the filtered total homology $H_n(C)$ indicated in the diagram below
\begin{equation}\label{LES}
\xymatrix@R=0.4cm{
{\color{darkgreen}H_{n-1}(A)}
&
\color{brown}H_n(R)
{\ar[l]_<(.2){\color{red}\partial_*}}
&
{\color{blue}H_n(C)}
{\ar[l]_<(.18){\nu_*}}
&
{\color{darkgreen}H_n(A)}
{\ar[l]_<(.18){\iota_*}}
&
{\color{brown}H_{n+1}(R)}
{\ar[l]_{\color{red}\partial_*}}
\\
*=0{{\color{darkgreen}\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
\\
*=0{{\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
&
*=0{{\color{brown}\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
{\color{red}\ar[l]}
\\
*=0{{\mbox{\tiny\textbullet}}}
&
*=0{{\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
{\ar[l]}
{\jumpdir{\color{red}\partial_*}{/:a(-11)1.1cm/}}
&
*=0{\color{blue}\mbox{\tiny\textbullet}}
\ar@{-}[d]
{\ar[l]}
\\
&
*=0{\mbox{{\tiny\textbullet}}}
&
*=0{{\mbox{\tiny\textbullet}}}
\ar@{-}[d]
{\ar[l]}
{\jumpdir{\nu_*}{/:a(-13)1.07cm/}}
{\jumpdir{\iota_*}{/:a(-17)-1.1cm/}}
{\jumpdir{{\color{brown}\}}\mbox{ \tiny$\coker(\iota_*)$}}{/:a(19)-0.9cm/}}
{\jumpdir{\mbox{\tiny$\ker(\nu_*)$ }{\color{darkgreen}\big\{}}{/:a(27)0.7cm/}}
&
*=0{{\color{darkgreen}\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
{\ar[l]}
\\
&
&
*=0{\mbox{\tiny\textbullet}}
&
*=0{{\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
{\ar[l]}
&
*=0{{\color{brown}\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
{\ar[l]}
\\
&
&
&
*=0{{\mbox{\tiny\textbullet}}}
&
*=0{{\mbox{\tiny\textbullet}}}
{\ar@{-}[d]}
{\ar[l]}
{\jumpdir{\color{red}\partial_*}{/:a(-15)1.1cm/}}
\\
&
&
&
&
*=0{{\mbox{\tiny\textbullet}}}
}
\end{equation}
both have an alternative description in terms of the connecting homomorphisms:
\begin{equation}\label{iota_nu}
\coker(\iota_*) \cong {{\color{brown}\ker}({\color{red}\partial_*}) \quad\quad \mbox{and} \quad \quad \ker(\nu_*) \cong \color{darkgreen}\coker}({\color{red}\partial_*}).
\end{equation}
These natural isomorphisms are nothing but the statement of the homomorphism theorem applied to $\iota_*$ and $\nu_*$.
Below we will give the definition of a spectral sequence and in Section \ref{filt} we will recall how to associate a spectral sequence to a filtered complex. But before doing so let us describe in simple words the rough picture, valid for general spectral sequences (even for those not associated to a filtered complex).
A spectral sequence can be viewed as a book with several pages $E^a$, $E^{a+1}$, $E^{a+2}$, $\ldots$ starting at some integer $a$. Each page contains a double array $E^r_{pq}$ of objects, arranged in an array of complexes. The pattern of arranging the objects in such an array of complexes depends only on the integer $a$ and is fixed by a common convention once and for all. The objects on page $r+1$ are the homologies of the complexes on page $r$. It follows that the objects $E^r_{pq}$ on page $r$ are \emph{subfactors} of the objects $E^t_{pq}$ on \emph{all} the previous pages $t<r$.
Now we turn to the morphisms of the complexes. From what we have just been saying we know that at least the source and the target of a morphism on page $r+1$ are completely determined by page $r$. This can be regarded as a sort of restriction on the morphism, and indeed, in the case when zero is the only morphism from the given source to the given target, the morphism then becomes uniquely determined. This happens for example whenever either the source or the target vanishes, but may happen of course in other situations ($\Hom_\Z(\Z/2\Z,\Z/3\Z)=0$). So now it is natural to ask whether page $r$ or any of its previous pages impose further restrictions on the morphisms on page $r+1$, apart from determining their sources and targets. The answer is, in general, no. This will become clear as soon as we construct the spectral sequence associated to a $2$-step filtered complex below (or more generally for an $m$-step filtration in Section \ref{filt}) and understand the nature of data on each page.
Summing up: Taking homology only determines the objects of the complexes on page $r+1$, but not their morphisms. Choosing these morphisms not only completes the $(r+1)$-st page, but again determines the objects on the $(r+2)$-nd page. Iterating this process finally defines a spectral sequence.
Typically, in applications of spectral sequences there exists a natural choice of the morphisms on the successive pages. This is illustrated in the following example, where we associate a spectral sequence to a $2$-filtered complex. But first we recall the definition of a spectral sequence.
\begin{defn}[Homological spectral sequence]
A \textbf{homological spectral sequence} (starting at $r_0$) in an abelian category $\mathcal{A}$ consists of
\begin{enumerate}
\item Objects $E^r_{pq} \in \mathcal{A}$, for $p,q,r\in\Z$ and $r \geq r_0\in \Z$; arranged as a sequence (indexed by $r$) of lattices (indexed by $p,q$);
\item Morphisms $\partial^r_{pq}:E^r_{pq} \to E^r_{p-r,q+r-1}$ with $\partial^r \partial^r=0$, i.e. the sequences of slope $-\frac{r-1}{r}$ in $E^r$ form a chain complex;
\item Isomorphisms between $E^{r+1}_{pq}$ and the homology $\ker \partial^r_{pq}/\img \partial^r_{p+r,q-r+1}$ of $E^r$ at the spot $(p,q)$.
\end{enumerate}
$E^r$ is called the $r$-th \textbf{sheet} (or \textbf{page}, or \textbf{term}) of the spectral sequence.
\end{defn}
Note that $E^{r+1}_{pq}$ is by definition (isomorphic to) a subfactor of $E^r_{pq}$. $p$ is called the \textbf{filtration degree} and $q$ the \textbf{complementary degree}. The sum $n=p+q$ is called the \textbf{total degree}. A morphism with source of total degree $n$, i.e.\ on the $n$-th diagonal, has target of degree $n-1$, i.e. on the $(n-1)$-st diagonal. So the total degree is \emph{decreased} by one.
\begin{figure}[htb]
\begin{minipage}[c]{1\linewidth}
\[
\xymatrix{
*=0{}
\jumpdir{q}{/:a(90).5cm/}
&
E^2_{02}
\ar@{.}[rd]
&
E^2_{12}
\ar@{.}[rd]
&
E^2_{22}
\\
&
E^2_{01}
\ar@{.}[rd]
&
E^2_{11}
\ar@{.}[rd]
&
E^2_{21}
{\ar[llu]_{{\color{red}\partial}}}
\\
&
E^2_{00}
&
E^2_{10}
&
E^2_{20}
{\ar[llu]_{{\color{red}\partial}}}
\\
*=0{} \ar[uuu] \ar[rrr]
&&&
*=0{}
\jumpdir{p}{/:a(-100).5cm/}
}
\]
\end{minipage}
\caption{$E^2$}
\label{E_2_with_arrows}
\end{figure}
\begin{defn}[Cohomological spectral sequence]
A \textbf{cohomological spectral seq\-uence} (starting at $r_0$) in an abelian category $\mathcal{A}$ consists of
\begin{enumerate}
\item Objects $E_r^{pq} \in \mathcal{A}$, for $p,q,r\in\Z$ and $r \geq r_0\in \Z$; arranged as a sequence (indexed by $r$) of lattices (indexed by $p,q$);
\item Morphisms $d_r^{pq}:E_r^{pq} \to E_r^{p+r,q-r+1}$ with $d_r d_r=0$, i.e. the sequences of slope $-\frac{r-1}{r}$ in $E_r$ form a cochain complex;
\item Isomorphisms between $E_{r+1}^{pq}$ and the cohomology of $E_r$ at the spot $(p,q)$.
\end{enumerate}
$E_r$ is called the $r$-th \textbf{sheet} of the spectral sequence.
\end{defn}
Here the total degree $n=p+q$ is \emph{increased} by one. Reflecting a cohomological spectral sequence at the origin $(p,q)=(0,0)$, for example, defines a homological one $E^r_{pq}=E_r^{-p,-q}$, and vice versa. For more details and terminology (\textbf{boundedness}, \textbf{convergence}, \textbf{fiber terms}, \textbf{base terms}, \textbf{edge homomorphisms}, \textbf{collapsing}, \textbf{$E^\infty$ term}, \textbf{regularity}) see \cite[Section~5.2]{WeiHom}.
Part of the data we have in the context of long exact sequences can be put together to construct a spectral sequence with three pages $E^0$, $E^1$, and $E^2$:
\[
\xymatrix{
E^{0}_{pq}:
&
{\color{darkgreen}A_n}
\ar@{.}[rd]
&
{\color{brown}R_{n+1}}
\\
&
{\color{darkgreen}A_{n-1}}
\ar@{.}[rd]
&
{\color{brown}R_n}
\\
&
{\color{darkgreen}A_{n-2}}
&
{\color{brown}R_{n-1}}
}
\xymatrix{
&& \\
& \ar@{~>}[r]^{\textrm{add the}}_{\textrm{arrows}} &
}
\xymatrix{
E^{0}_{pq}:
&
{\color{darkgreen}A_n}
{\ar[d]^{\partial_\colorA}}
\ar@{.}[rd]
&
{\color{brown}R_{n+1}}
{\ar@{}[d]^{\phantom{\partial_\colorR}}}
{\ar[d]^{\partial_\colorR}}
\\
&
{\color{darkgreen}A_{n-1}}
{\ar[d]^{\partial_\colorA}}
\ar@{.}[rd]
&
{\color{brown}R_n}
{\ar[d]^{\partial_\colorR}}
\\
&
{\color{darkgreen}A_{n-2}}
&
{\color{brown}R_{n-1}}
}
\]
\[
\xymatrix{
& \ar@{~>}[dl]^{\textrm{homology}}_{\textrm{take}} \\
\mbox{\phantom{$1$}} &
}
\]
\[
\xymatrix{
E^{1}_{pq}:
&
\ar@{.}[rd]
{\color{darkgreen}H_n(A)}
&
{\color{brown}H_{n+1}(R)}
\\
&
\ar@{.}[rd]
{\color{darkgreen}H_{n-1}(A)}
&
{\color{brown}H_n(R)}
\\
&
{\color{darkgreen}H_{n-2}(A)}
&
{\color{brown}H_{n-1}(R)}
}
\xymatrix{
&& \\
& \ar@{~>}[r]^{\textrm{add the}}_{\textrm{arrows}} &
}
\xymatrix{
E^{1}_{pq}:
&
\ar@{.}[rd]
{\color{darkgreen}H_n(A)}
&
{\color{brown}H_{n+1}(R)}
{\ar[l]_{\color{red}\partial_*}}
\\
&
\ar@{.}[rd]
{\color{darkgreen}H_{n-1}(A)}
&
{\color{brown}H_n(R)}
{\ar[l]_{\color{red}\partial_*}}
\\
&
{\color{darkgreen}H_{n-2}(A)}
&
{\color{brown}H_{n-1}(R)}
{\ar[l]_{\color{red}\partial_*}}
}
\]
\[
\xymatrix{
& \ar@{~>}[dl]^{\textrm{homology}}_{\textrm{take}} \\
\mbox{\phantom{$1$}} &
}
\]
\[
\xymatrix{
E^{2}_{pq}:
&
\ar@{.}[rd]
{{\color{darkgreen}\coker}({\color{red}\partial_*})}
&
{{\color{brown}\ker}({\color{red}\partial_*})}
\\
&
\ar@{.}[rd]
{{\color{darkgreen}\coker}({\color{red}\partial_*})}
&
{{\color{brown}\ker}({\color{red}\partial_*})}
\\
&
{{\color{darkgreen}\coker}({\color{red}\partial_*})}
&
{{\color{brown}\ker}({\color{red}\partial_*})}
}
\xymatrix{
&& \\
& \ar@{~>}[r]^{\textrm{no arrows}}_{\textrm{to add}} &
}
\xymatrix{
E^{2}_{pq}:
&
\ar@{.}[rd]
{{\color{darkgreen}\coker}({\color{red}\partial_*})}
&
{{\color{brown}\ker}({\color{red}\partial_*})}
\\
&
\ar@{.}[rd]
{{\color{darkgreen}\coker}({\color{red}\partial_*})}
&
{{\color{brown}\ker}({\color{red}\partial_*})}
\\
&
{{\color{darkgreen}\coker}({\color{red}\partial_*})}
&
{{\color{brown}\ker}({\color{red}\partial_*})}
}
\]
with $p,q\in\Z$, $n=p+q$. Taking the two columns over $p=0$ and $p=1$, for example, is equivalent to setting $F_{-1} C:=0$, $F_0 C:=A$, and $F_1 C:=C$.
Several remarks are in order. First note that all the arrows in the above spectral sequence are induced by $\partial$, the boundary operator of the total complex $C$. Since $\partial$ respects the filtration, i.e. $\partial(F_p C) \leq F_p C$, the induced map $\bar{\partial}:F_p C \to C/F_p C$ vanishes. So respecting the filtration means that $\partial$ cannot carry things up in the filtration. But since $\partial$ does not necessarily respect the grading induced by the filtration it may very well carry things down one or more levels. Now we can interpret the pages: $E^0$ consists of the graded parts $\gr_p C$ with boundary operators $\partial_A$ and $\partial_R$ chopping off whatever $\partial$ carries down in the filtration. $E^1$ describes what $\partial$ carries down exactly one level. This interpretation of the connecting homomorphisms $\partial_*$ puts them on the same conceptual level as $\partial_A$ and $\partial_R$. Finally, $E^2$ would describe what $\partial$ carries down exactly two levels, but since a $2$-step filtration has only two levels nothing can be carried that far, which is why $E^2$ has no arrows.
Second, as we have seen in (\ref{iota_nu}) using the homomorphism theorem, the objects of the last page $E^2$ can be naturally identified with the graded parts $\gr_p H_n(C)$ of the filtered total homology $H_n(C)$. And since the objects on each page are subfactors of the objects on the previous pages one can view the above spectral sequence as a process successively approximating the graded parts $\gr_p H_n(C)$ of the filtered total homology ${\color{blue}H_n(C)}$:
\[
({\color{darkgreen}A_n},{\color{brown}R_n}) \leadsto
({\color{darkgreen}H_n(A)},{\color{brown}H_n(R)}) \leadsto
({\color{darkgreen}\coker}({\color{red}\partial_*}),{\color{brown}\ker}({\color{red}\partial_*})).
\]
The approximation is achieved by successively taking deeper inter-level interaction into account.
Finally one can ask if the spectral sequence above captures all the information in the long exact sequence. The answer is \emph{no}. The long exact sequence additionally contains the short exact sequence
\begin{equation}\label{extension}
0 \xleftarrow{} {\color{brown}\ker}({\color{red}\partial_*})
\xleftarrow{\nu_*} {\color{blue}H_n(C)} \xleftarrow{\iota_*}
{\color{darkgreen}\coker}({\color{red}\partial_*}) \xleftarrow{} 0,
\end{equation}
explicitly describing the total homology ${\color{blue}H_n(C)}$ as an extension of its graded parts
${\color{darkgreen}\coker}({\color{red}\partial_*})$ and ${\color{brown}\ker}({\color{red}\partial_*})$.
Looking at what happens inside the subobject lattice of ${\color{blue}C_n}$ during the approximation process will help in understanding how to remedy this defect.
\begin{figure}[htb]
\begin{minipage}[c]{1\linewidth}
\centering
\psfrag{$C_n$}{$\color{blue}C_n$}
\psfrag{$Z_n(R)$}{$\color{brown}Z_n(R)$}
\psfrag{$B_n(R)$}{$\color{brown}B_n(R)$}
\psfrag{$A_n$}{$\color{darkgreen}A_n$}
\psfrag{$Z_n(A)$}{$\color{darkgreen}Z_n(A)$}
\psfrag{$B_n(A)$}{$\color{darkgreen}B_n(A)$}
\psfrag{$Z_n(C)$}{$Z_n(C)$}
\psfrag{$B_n(C)$}{$B_n(C)$}
\psfrag{$H_n(C)$}{$\color{blue}H_n(C)$}
\psfrag{$ker$}{$\cong{\color{brown}\ker}({\color{red}\partial_*})$}
\psfrag{$coker$}{$\cong{\color{darkgreen}\coker}({\color{red}\partial_*})$}
\includegraphics[width=0.5\textwidth]{LongExactSeq.eps}
\end{minipage}
\caption{The $2$-step filtration $0 \leq \colorA \leq \colorC$ and the induced
$2$-step filtration on $\color{blue}H_*(C)$}
\label{LongExactSeq}
\end{figure}
Figure~\ref{LongExactSeq} shows the $n$-th object ${\color{blue}C_n}$ in the chain complex together with the subobjects that define the different homologies: ${\color{brown}H_n(R)}:={\color{brown}Z_n(R)}/{\color{brown}B_n(R)}$, ${\color{darkgreen}H_n(A)}:={\color{darkgreen}Z_n(A)}/{\color{darkgreen}B_n(A)}$, and ${\color{blue}H_n(C)}:={\color{blue}Z_n(C)}/{\color{blue}B_n(C)}$. Here we replaced ${\color{brown}Z_n(R)}$ and ${\color{brown}B_n(R)}$ by their full preimages in ${\color{blue}C_n}$ under the canonical epimorphism ${\color{blue}C_n} \xrightarrow{\nu} {\color{brown}R_n}:={\color{blue}C_n}/{\color{darkgreen}A_n}$.
\begin{figure}[htb]
\begin{minipage}[c]{0.4\linewidth}
\centering
\psfrag{$C_n$}{$\color{blue}C_n$}
\psfrag{$E^0_{1,n-1}:=R_n$}{${\color{brown}E^0_{1,n-1}}={\color{brown}R_n}$}
\psfrag{$E^0_{0,n}:=A_n$}{${\color{darkgreen}E^0_{0,n}}={\color{darkgreen}A_n}$}
\psfrag{$H_n(C)$}{$\color{blue}H_n(C)$}
\includegraphics[width=0.4\textwidth]{E0.eps}
\caption{$E^0$}
\label{E^0}
\end{minipage}
\quad $\leadsto$
\begin{minipage}[c]{0.4\linewidth}
\centering
\psfrag{$C_n$}{$\color{blue}C_n$}
\psfrag{$A_n$}{$\color{darkgreen}A_n$}
\psfrag{$E^1_{1,n-1}:=H_n(R)$}{${\color{brown}E^1_{1,n-1}}={\color{brown}H_n(R)}$}
\psfrag{$E^1_{0,n}:=H_n(A)$}{${\color{darkgreen}E^1_{0,n}}={\color{darkgreen}H_n(A)}$}
\psfrag{$H_n(C)$}{$\color{blue}H_n(C)$}
\includegraphics[width=0.4\textwidth]{E1.eps}
\caption{$E^1$}
\label{E^1}
\end{minipage}
\quad \quad $\leadsto$
\begin{minipage}[c]{0.4\linewidth}
\vspace{0.5cm}
\centering
\psfrag{$C_n$}{$\color{blue}C_n$}
\psfrag{$A_n$}{$\color{darkgreen}A_n$}
\psfrag{$E^2_{1,n-1}$}{${\color{brown}E^2_{1,n-1}}={\color{brown}\ker}({\color{red}\partial_*})$}
\psfrag{$E^2_{0,n}$}{${\color{darkgreen}E^2_{0,n}}={\color{darkgreen}\coker}({\color{red}\partial_*})$}
\psfrag{$H_n(C)$}{$\color{blue}H_n(C)$}
\includegraphics[width=0.4\textwidth]{E2.eps}
\caption{$E^2=E^\infty$}
\label{E^2}
\end{minipage}
\\
\vspace{0.5cm}
The approximation process of the graded parts of ${\color{blue}H_n(C)}$
\label{E}
\end{figure}
Figures~\ref{E^0}-\ref{E^2} show how the graded parts of ${\color{blue}H_n(C)}$ get successively approximated by the objects in the spectral sequence $E^r_{pq}$, naturally identified with certain subfactors of ${\color{blue}C_n}$ for $n=p+q$. Figure~\ref{E^2} proves that the second isomorphism theorem provides \emph{canonical} isomorphisms between the graded parts of the total homology ${\color{blue}H_n(C)}$ and the objects ${\color{brown}E^\infty_{1,n-1}}={\color{brown}E^2_{1,n-1}}$ and ${\color{darkgreen}E^\infty_{0,n}}={\color{darkgreen}E^2_{0,n}}$ of the stable sheet. And modulo these natural isomorphisms Figure~\ref{E^2} further suggests that knowing how to identify ${\color{brown}E^\infty_{1,n-1}}$ and ${\color{darkgreen}E^\infty_{0,n}}$ with the indicated subfactors of ${\color{blue}C_n}$ will suffice to explicitly construct the extension (\ref{extension}) in the form
\begin{equation}\label{extensionE}
0 \xleftarrow{} {\color{brown}E^\infty_{1,n-1}}
\xleftarrow{} {\color{blue}H_n(C)} \xleftarrow{}
{\color{darkgreen}E^\infty_{0,n}} \xleftarrow{} 0.
\end{equation}
But since we cannot use maps to identify objects with subfactors of other objects we are led to introduce the notion of \textbf{generalized maps} in the next section. Roughly speaking, this notion enables us to interpret the pairs of horizontal arrows in Figure~\ref{Emb} as \textbf{generalized embeddings}.
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$C_n$}{$\color{blue}C_n$}
\psfrag{$A_n$}{$\color{darkgreen}A_n$}
\psfrag{$E^2_{1,n-1}$}{${\color{brown}E^\infty_{1,n-1}}$}
\psfrag{$E^2_{0,n}$}{${\color{darkgreen}E^\infty_{0,n}}$}
\psfrag{$H_n(C)$}{$\color{blue}H_n(C)$}
\includegraphics[width=0.4\textwidth]{Emb.eps}
\end{minipage}
\caption{The generalized embeddings}
\label{Emb}
\end{figure}
\section{Generalized maps}\label{genmor}
A morphism between two objects (modules, complexes, \ldots) induces a map between their lattices of subobjects, and the \textbf{homomorphism theorem} implies that this map gives rise to a bijective correspondence between the subobjects of the target lying in the image and those subobjects of the source containing the kernel. This motivates the visualization in Figure~\ref{Mor} of a morphism $T\xleftarrow{\color{red}\phi}S$ with source $S$ and target $T$. The homomorphism theorem states that the morphism ${\color{red}\phi}$, indicated by the horizontal pair of arrows in Figure~\ref{Mor}, maps $S/\ker({\color{red}\phi})$ onto the \emph{subobject} $\img({\color{red}\phi})$ in a structure-preserving way. In this sense, the exact ladder of morphisms in (\ref{LES}) visualizes part of the long exact homology sequence.
\begin{figure}[htb]
\begin{minipage}[c]{0.8\linewidth}
\centering
\psfrag{$T$}{$T$}
\psfrag{$S$}{$S$}
\psfrag{$phi$}{$\color{red}\phi$}
\psfrag{$im(phi)$}{$\img{\color{red}\phi}$}
\psfrag{$ker(phi)$}{$\ker{\color{red}\phi}$}
\includegraphics[width=0.4\textwidth]{Mor.eps}
\end{minipage}
\caption{The homomorphism theorem}
\label{Mor}
\end{figure}
The simplest motivation for the notion of a generalized morphism $T\xleftarrow{\color{red}\psi}S$ is the desire to make sense of the picture in Figure~\ref{Gen}, which ``maps'' a quotient of $S$ onto a \emph{subfactor} of $T$.
\begin{figure}[htb]
\begin{minipage}[c]{0.8\linewidth}
\centering
\psfrag{$T$}{$T$}
\psfrag{$S$}{$S$}
\psfrag{$L$}{$L$}
\psfrag{$psi$}{$\color{red}\psi$}
\psfrag{$im(psi)$}{$\img{\color{red}\psi}$}
\psfrag{$Im(psi)$}{$\Img{\color{red}\psi}$}
\psfrag{$ker(psi)$}{$\ker{\color{red}\psi}$}
\includegraphics[width=0.4\textwidth]{Gen.eps}
\end{minipage}
\caption{A generalized morphism}
\label{Gen}
\end{figure}
\begin{defn}[Generalized morphism]
Let $S$ and $T$ be two objects in an abelian category (of modules over some ring). A \textbf{generalized morphism} $\psi$ with source $S$ and target $T$ is a pair of morphisms $(\bar{\psi},\imath)$, where $\imath$ is a morphism from some third object $F$ to $T$ and $\bar{\psi}$ is a morphism from $S$ to $\coker\imath=T/\img(\imath)$. We call $\bar{\psi}$ the morphism \textbf{associated} to $\psi$ and $\imath$ the \textbf{morphism aid} of $\psi$ and denote it by $\Aid\psi$. Further we call $L:=\img\imath\leq T$ the \textbf{morphism aid subobject}. Two generalized morphisms $(\bar{\psi},\imath)$ and $(\bar{\phi},\jmath)$ with ($\img \imath = \img \jmath$ and) $\bar{\psi}=\bar{\phi}$ will be identified.
\end{defn}
Philosophically speaking, this definition frees one from the ``conservative'' standpoint of viewing $\psi$ as a morphism to the quotient $T/\img\imath$. Instead it allows one to view $\psi$ as a ``morphism'' to the full object $T$ by directly incorporating $\imath$ in the very definition of $\psi$. The intuition behind the notion ``morphism aid'' (resp.\ ``morphism aid subobject'') is that $\imath$ (resp.\ $L=\img\imath$) \emph{aids} $\psi$ to become a (well-defined) morphism. Figure~\ref{GenMor} visualizes the generalized morphism $\psi$ as a pair $(\bar{\psi},\imath)$.
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$F$}{$F$}
\psfrag{$T$}{$T$}
\psfrag{$S$}{$S$}
\psfrag{$T/im(alpha)$}{$T/\img \imath$}
\psfrag{$barpsi$}{$\bar{\psi}$}
\psfrag{$pi$}{$\pi_\imath$}
\psfrag{$alpha$}{$\imath$}
\psfrag{$im(psi)$}{$\img{\color{red}\psi}$}
\psfrag{$im(barpsi)$}{$\img\bar{\psi}$}
\psfrag{$im(alpha)$}{$L=\img\imath$}
\psfrag{$ker(barpsi)$}{$\ker\bar{\psi}$}
\psfrag{$pi^{-1}(im(barpsi))$}{$\pi_\imath^{-1}(\img \bar{\psi}) =: \Img{\color{red}\psi}$}
\includegraphics[width=0.6\textwidth]{GenMor.eps}
\end{minipage}
\caption{The morphism aid $\imath$ and the associated morphism $\bar{\psi}$}
\label{GenMor}
\end{figure}
Note that replacing $\imath$ by a morphism with the same image does not alter the generalized morphism. We will therefore often write $(\bar{\psi},L)$ for the generalized morphism $(\bar{\psi},\imath)$, where $\imath$ is any morphism with $\img \imath = L \leq T$. The most natural choice would be the embedding $\imath:L \to T$. Figure~\ref{Gen} visualizes the generalized morphism $\psi$ as a pair $(\bar{\psi},L)$. It also reflects the idea behind the definition more than the ``expanded'' Figure~\ref{GenMor} does.
If $L=\img\imath$ vanishes, then $\psi$ is nothing but the (ordinary) morphism $\bar{\psi}$. Conversely, any morphism can be viewed as a generalized morphism with trivial morphism aid subobject $L=0$.
\begin{defn}[Terminology for generalized morphisms]
Let $\psi=(\bar{\psi},\imath): S \to T$ be a generalized morphism. Define the \textbf{kernel} $\ker(\psi):=\ker \bar{\psi}$, the kernel of the associated map. If $\pi_\imath$ denotes the natural epimorphism $T\to T/\img\imath$, then define the \textbf{combined image} $\Img\psi$ to be the \emph{submodule} $\pi_\imath^{-1}(\img \bar{\psi})$ of $T$. In general it differs from the \textbf{image} $\img\psi$ which is defined as the \emph{subfactor} $\Img\psi/\img \imath$ of $T$ (cf.~Figure~\ref{GenMor}). We call $\psi$ a \textbf{generalized monomorphism} (resp.\ \textbf{generalized epimorphism}, \textbf{generalized isomorphism}) if the associated map $\bar{\psi}$ is a monomorphism (resp.\ epimorphism, isomorphism).
\end{defn}
Sometimes we use the terminology \textbf{generalized map} instead of generalized morphism and \textbf{generalized embedding} instead of generalized monomorphism, especially when the abelian category is a category of modules (or complexes of modules, etc.).
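To make the bookkeeping concrete, here is a minimal sketch of this data structure in the simplest possible setting, namely for finite dimensional vector spaces over the rationals, where every object and every subobject can be written down as a matrix. The sketch is an ad hoc illustration (in particular it is \emph{not} the {\tt homalg} implementation) and assumes the Python library {\tt sympy}; a generalized morphism is stored as a \emph{lift} of its associated map together with a matrix whose columns span the morphism aid subobject.
\begin{verbatim}
from sympy import Matrix

def colspan(M):
    cs = M.columnspace()
    return Matrix.hstack(*cs) if cs else Matrix.zeros(M.rows, 1)

class GeneralizedMorphism:
    """psi: S -> T, stored as a lift `mat' of the associated map S -> T/L
    together with a matrix `aid' whose columns span L <= T."""
    def __init__(self, mat, aid=None):
        self.mat = mat
        self.aid = aid if aid is not None else Matrix.zeros(mat.rows, 1)

    def kernel(self):
        # ker(psi) = {s : mat*s lies in L}: project the nullspace of
        # [mat | -aid] onto the first block of coordinates
        gens = [v[:self.mat.cols, :]
                for v in self.mat.row_join(-self.aid).nullspace()]
        return colspan(Matrix.hstack(*gens)) if gens else Matrix.zeros(self.mat.cols, 1)

    def combined_image(self):
        # Img(psi) = full preimage of img(psi_bar) in T, i.e. col(mat) + L
        return colspan(self.mat.row_join(self.aid))

# example: S = Q^2, T = Q^3, aid subobject L = <e3>, lift (a,b) |-> (a,a,b)
psi = GeneralizedMorphism(Matrix([[1,0],[1,0],[0,1]]), aid=Matrix([[0],[0],[1]]))
print(psi.kernel())          # spanned by (0,1): e2 is mapped into L
print(psi.combined_image())  # spanned by (1,1,0) and (0,0,1)
\end{verbatim}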
\bigskip
As a first application of the notion of generalized embeddings we state the following definition, which is central for this work.
\begin{defn}[Filtration system]\label{system}
Let $\mathcal{I}=(p_0,\ldots,p_{m-1})$ be a finite interval in $\Z$, i.e.\ $p_{i+1}=p_i+1$. \\
A finite sequence of generalized embeddings $\psi_p=(\bar{\psi}_p,L_p)$, $p\in\mathcal{I}$ with common target $M$ is called an \textbf{ascending $m$-filtration system} of $M$ if
\begin{enumerate}
\item $\psi_{p_0}$ is an ordinary monomorphism, i.e.\ $L_{p_0}$ vanishes;
\item $L_p=\Img\psi_{p-1}$, for $p=p_1,\ldots,p_{m-1}$;
\item $\psi_{p_{m-1}}$ is a generalized isomorphism, i.e.\ $\Img\psi_{p_{m-1}}=M$.
\end{enumerate}
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$psi_0$}{$\psi_{p_0}$}
\psfrag{$psi_1$}{$\psi_{p_1}$}
\psfrag{$psi_{m-2}$}{$\psi_{p_{m-2}}$}
\psfrag{$psi_{m-1}$}{$\psi_{p_{m-1}}$}
\psfrag{$L_1$}{$L_{p_1}$}
\psfrag{$L_2$}{$L_{p_2}$}
\psfrag{$L_{m-2}$}{$L_{p_{m-2}}$}
\psfrag{$L_{m-1}$}{$L_{p_{m-1}}$}
\psfrag{$M$}{$M$}
\includegraphics[width=0.55\textwidth]{System.eps}
\end{minipage}
\caption{An ascending $m$-filtration system}
\label{System}
\end{figure}
A finite sequence of generalized embeddings $\psi^p=(\bar{\psi}^p,L^p)$, $p\in\mathcal{I}$ with common target $M$ is called a \textbf{descending $m$-filtration system} of $M$ if
\begin{enumerate}
\item $\psi^{p_0}$ is a generalized isomorphism, i.e.\ $\Img\psi^{p_0}=M$;
\item $L^p=\Img\psi^{p+1}$, for $p=p_0,\ldots,p_{m-2}$;
\item $\psi^{p_{m-1}}$ is an ordinary monomorphism, i.e.\ $L^{p_{m-1}}$ vanishes.
\end{enumerate}
We say $(\psi_p)$ \textbf{computes} a given filtration $(F_p M)$ if $\Img \psi_p = F_p M$ for all $p$.
\end{defn}
\bigskip
Now we come to the definition of the basic operations for generalized morphisms. Two generalized maps $\psi=(\bar{\psi},\imath)$ and $\phi=(\bar{\phi},\jmath)$ are summable only if $\img\imath=\img\jmath$; in this case we set $\psi\pm\phi:=(\bar{\psi}\pm\bar{\phi},\imath)$.
The following notational convention will prove useful: It will often happen that one wants to alter a generalized morphism $\psi=(\bar{\psi},L_\psi)$ with target $T$ by replacing $L_\psi$ with a larger subobject $L$, i.e. a subobject $L \leq T$ containing $L_\psi$. We will sloppily write $\widetilde{\psi}=(\bar{\psi},L)$, where $\bar{\psi}$ now stands for the composition of $\bar{\psi}$ with the natural epimorphism $T/L_\psi \to T/L$. We will say that $\psi$ was \textbf{coarsened} to $\widetilde{\psi}$ to refer to the passage from $\psi=(\bar{\psi},L_\psi)$ to $\widetilde{\psi}=(\bar{\psi},L)$ with $L_\psi\leq L \leq T$. As Figure~\ref{Coarsen} shows, coarsening $\psi$ might very well enlarge its combined image $\Img\psi$. The word ``coarse'' refers to the fact that the image $\img\widetilde{\psi}$ is naturally isomorphic to a \emph{quotient} of $\img\psi$, and Figure~\ref{Coarsen} shows that this natural isomorphism is given by the second isomorphism theorem. We say that the coarsening $\widetilde{\psi}=(\bar{\psi},L)$ of $\psi=(\bar{\psi},L_\psi)$ is \textbf{effective}, if $\Img\psi \cap L = L_\psi$. Figure~\ref{Coarsen} shows that in this case the images $\img\psi$ and $\img\widetilde{\psi}$ are naturally \emph{isomorphic}.
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$T$}{$T$}
\psfrag{$S$}{$S$}
\psfrag{$L$}{$L$}
\psfrag{$L_psi$}{$L_\psi$}
\psfrag{$psi$}{$\psi$}
\psfrag{$phi$}{$\widetilde{\psi}$}
\psfrag{$Im(psi)$}{$\Img\psi$}
\psfrag{$Im(phi)$}{$\Img\widetilde{\psi}$}
\psfrag{$im(psi)$}{$\img\psi$}
\psfrag{$im(phi)$}{$\img\widetilde{\psi}$}
\psfrag{$ker(psi)$}{$\ker\psi$}
\psfrag{$ker(phi)$}{$\ker\widetilde{\psi}$}
\includegraphics[width=0.6\textwidth]{Coarsen.eps}
\end{minipage}
\caption{Coarsening the generalized map $\psi=(\bar{\psi},L_\psi)$ to $\widetilde{\psi}=(\bar{\psi},L)$}
\label{Coarsen}
\end{figure}
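In the lift representation of the previous sketch, coarsening $\psi=(\bar{\psi},L_\psi)$ to $\widetilde{\psi}=(\bar{\psi},L)$ simply replaces the aid matrix by (a spanning matrix of) the larger $L$, and the effectiveness condition $\Img\psi \cap L = L_\psi$ becomes a rank comparison, since $L_\psi$ is automatically contained in $\Img\psi \cap L$. Again a hedged sketch over the rationals, assuming {\tt sympy} and using ad hoc helper names:
\begin{verbatim}
from sympy import Matrix

def colspan(M):
    cs = M.columnspace()
    return Matrix.hstack(*cs) if cs else Matrix.zeros(M.rows, 1)

def span_intersection(A, B):
    gens = [A * v[:A.cols, :] for v in A.row_join(-B).nullspace()]
    return colspan(Matrix.hstack(*gens)) if gens else Matrix.zeros(A.rows, 1)

def is_effective(psi, L):
    # psi = (mat, aid); coarsening to L (with col(aid) <= col(L)) is effective
    # iff (col(mat) + col(aid)) cap col(L) equals col(aid); compare ranks
    mat, aid = psi
    Img = colspan(mat.row_join(aid))
    return span_intersection(Img, L).rank() == aid.rank()

psi = (Matrix([[1],[0],[0]]), Matrix([[0],[1],[0]]))   # lift e1, aid <e2> in Q^3
print(is_effective(psi, Matrix([[0,0],[1,0],[0,1]])))  # L  = <e2,e3>: True
print(is_effective(psi, Matrix([[1,0],[0,1],[0,0]])))  # L' = <e1,e2>: False
\end{verbatim}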
For the composition $\psi\circ\phi$ of $S_\phi \xrightarrow{\phi} T_\phi=S_\psi \xrightarrow{\psi} T_\psi$ follow the filled area in Figure~\ref{Composition} from left to right.
\begin{figure}[htb]
\begin{minipage}[c]{1.06\linewidth}
\centering
\psfrag{$S_phi$}{$S_\phi$}
\psfrag{$T_phi=S_psi$}{$T_\phi=S_\psi$}
\psfrag{$T_psi$}{$T_\psi$}
\psfrag{$T_psi$}{$T_\psi$}
\psfrag{$phi$}{$\phi$}
\psfrag{$ker(phi)$}{$\ker\phi$}
\psfrag{$ker(tildephi)$}{$\ker \psi\circ\phi=\ker\widetilde{\phi}$}
\psfrag{$Im(phi)$}{$\Img\phi$}
\psfrag{$im(phi)$}{$\img\phi$}
\psfrag{$im(jota)$}{$\img\jmath$}
\psfrag{$K$}{$K$}
\psfrag{$psi$}{$\psi$}
\psfrag{$ker(psi)$}{$\ker\psi$}
\psfrag{$Im(psi)$}{$\Img\psi$}
\psfrag{$im(psi phi)$}{$\color{magenta}\img\psi\circ\phi$}
\psfrag{$Im(psi phi)$}{$\color{magenta}\Img\psi\circ\phi$}
\psfrag{$L$}{$\color{magenta} L := \pi_\imath^{-1}(\img (\bar{\psi} \circ \jmath))$}
\psfrag{$im(iota)$}{$\img\imath$}
\includegraphics[width=0.7\textwidth]{Composition.eps}
\end{minipage}
\caption{The composition $\psi\circ\phi$}
\label{Composition}
\end{figure}
Formally, first coarsen $\phi=(\bar{\phi},\jmath) \to \widetilde{\phi}=(\bar{\phi},K)$, where
\[
K:=\img\jmath + \ker\psi \leq T_\phi.
\]
Then coarsen $\psi=(\bar{\psi},\imath) \to \widetilde{\psi}=(\bar{\psi},L)$, where
\[
L := \pi_\imath^{-1}(\img (\bar{\psi} \circ \jmath)) = \pi_\imath^{-1}(\bar{\psi}(K)) \leq T_\psi
\]
and $\pi_\imath$ as above. Now set
\[
\psi\circ\phi := (\bar{\psi}\circ \bar{\phi},L).
\]
Note that $\ker \psi\circ \phi = \ker\widetilde{\phi}$.
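In the lift representation used in the sketches above a generalized map is just a pair (lift, aid matrix), and the composition recipe collapses into a one-liner: the lift of $\psi\circ\phi$ is the product of the lifts, and its aid subobject is spanned by the columns of the lift of $\psi$ applied to the aid of $\phi$ together with the columns of the aid of $\psi$. The following sketch assumes {\tt sympy}, uses ad hoc names, and makes no attempt to reduce the spanning set of the resulting aid:
\begin{verbatim}
from sympy import Matrix

def compose(psi, phi):
    # (mat_psi, aid_psi) o (mat_phi, aid_phi)
    #   = (mat_psi*mat_phi, [mat_psi*aid_phi | aid_psi])
    mat_psi, aid_psi = psi
    mat_phi, aid_phi = phi
    return (mat_psi * mat_phi, (mat_psi * aid_phi).row_join(aid_psi))

# phi: Q^1 -> Q^2 with aid <e2>;  psi: Q^2 -> Q^2 an ordinary map (zero aid)
phi = (Matrix([[1],[0]]), Matrix([[0],[1]]))
psi = (Matrix([[1,1],[0,1]]), Matrix([[0],[0]]))
print(compose(psi, phi))   # lift (1,0)^T, aid spanned by psi(e2) = (1,1)^T
\end{verbatim}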
\smallskip
Finally we define the division $\beta^{-1} \circ \gamma$ of two generalized maps $S_\gamma\xrightarrow{\gamma} T \xleftarrow{\beta}S_\beta$ under the conditions of the next definition.
\begin{defn}[The lifting condition]
Let $\gamma=(\bar{\gamma},L_\gamma)$ and $\beta=(\bar{\beta},L_\beta)$ be two generalized morphisms with the same target $N$.
\[
\xymatrix{
M' \ar[rd]^\gamma & \\
N' \ar[r]_\beta & N.
}
\]
Consider the \textbf{common coarsening} of the generalized maps $\beta$ and $\gamma$, i.e. the generalized maps $\widetilde{\beta}:=(\bar{\beta},L)$ and $\widetilde{\gamma}:=(\bar{\gamma},L)$, where $L=L_\gamma+L_\beta \leq N$. We say $\beta$ \textbf{lifts} $\gamma$ (or \textbf{divides} $\gamma$) if the following two conditions are satisfied:
\begin{enumerate}
\item[\textbf{(im)}] The combined image of $\widetilde{\beta}$ contains the combined image of $\widetilde{\gamma}$:
\[
\Img\widetilde{\gamma} \leq \Img\widetilde{\beta}.
\]
\item[\textbf{(eff)}] The coarsening $\gamma \to \widetilde{\gamma}$ is effective, i.e.\ $\Img\gamma \cap L = L_\gamma$.
\end{enumerate}
\end{defn}
We will refer to $\widetilde{\gamma}$ as \textbf{the effective coarsening of $\gamma$ with respect to $\beta$}.
The following lemma justifies this definition. Both the definition and the lemma are visualized in Figure~\ref{Lemma}. To state the lemma one last notion is needed: Define two generalized morphisms $\psi=(\bar{\psi},L_\psi)$ and $\phi=(\bar{\phi},L_\phi)$ to be \textbf{equal up to effective common coarsening} or \textbf{quasi-equal} if their common coarsenings $\widetilde{\psi}:=(\overline{\psi},L)$ and $\widetilde{\phi}:=(\overline{\phi},L)$ coincide \emph{and} are \emph{both} effective. We write $\psi\triangleq\phi$.
\begin{figure}[htb]
\begin{minipage}[c]{1\linewidth}
\centering
\psfrag{$N$}{$N$}
\psfrag{$M'$}{$M'$}
\psfrag{$N'$}{$N'$}
\psfrag{$L$}{$L$}
\psfrag{$Im(alpha)$}{$\Img\alpha$}
\psfrag{$im(alpha)$}{$\img\alpha$}
\psfrag{$L_alpha$}{$L_\alpha$}
\psfrag{$L_beta$}{$L_\beta$}
\psfrag{$L_gamma$}{$L_\gamma$}
\psfrag{$Im(tildebeta)$}{$\Img\widetilde{\beta}$}
\psfrag{$Im(tildegamma)$}{$\Img\widetilde{\gamma}$}
\psfrag{$beta$}{$\beta$}
\psfrag{$ker(beta)$}{$\ker\beta$}
\psfrag{$Im(beta)$}{$\Img\beta$}
\psfrag{$gamma$}{$\gamma$}
\psfrag{$ker(gamma)$}{$\ker\gamma$}
\psfrag{$Im(gamma)$}{$\Img\gamma$}
\psfrag{$phi$}{$\widetilde{\psi}$}
\psfrag{$Im(psi)$}{$\Img\psi$}
\psfrag{$Im(phi)$}{$\Img\widetilde{\psi}$}
\psfrag{$im(psi)$}{$\img\psi$}
\psfrag{$im(phi)$}{$\img\widetilde{\psi}$}
\psfrag{$ker(psi)$}{$\ker\psi$}
\psfrag{$ker(phi)$}{$\ker\widetilde{\psi}$}
\includegraphics[width=0.8\textwidth]{Lemma.eps}
\end{minipage}
\caption{The lifting condition and the lifting lemma}
\label{Lemma}
\end{figure}
\begin{lemma}[The lifting lemma]\label{lifting_lemma}
Let $\gamma=(\bar{\gamma},L_\gamma)$ and $\beta=(\bar{\beta},L_\beta)$ be two generalized morphisms with the same target $N$. Suppose that $\beta$ lifts $\gamma$. Then there exists a generalized morphism $\alpha:M' \to N'$ with $\beta\circ\alpha \triangleq \gamma$,
\[
\xymatrix{
M' \ar[rd]^\gamma \ar[d]^\alpha & \\
N' \ar[r]_\beta & N.
}
\]
i.e.\ $\beta\circ\alpha$ is equal to $\gamma$ up to effective common coarsening. $\alpha$ is called \textbf{a lift} of $\gamma$ along $\beta$. \\
Further let $\widetilde{\gamma}:=(\bar{\gamma},L_{\widetilde{\gamma}})$ be the effective coarsening of $\gamma$ with respect to $\beta$, i.e.\ $L_{\widetilde{\gamma}}=L=L_\gamma+L_\beta$. Then there exists a \emph{unique} lift $\alpha=(\bar{\alpha},L_\alpha)$ satisfying
\begin{enumerate}
\item[(a)] $\Img\alpha = \bar{\beta}^{-1}(\Img\widetilde{\gamma})$ and
\item[(b)] $L_\alpha = \bar{\beta}^{-1}(L_{\widetilde{\gamma}})$.
\end{enumerate}
This $\alpha$ is called \textbf{the lift} of $\gamma$ along $\beta$, or \textbf{the quotient} of $\gamma$ by $\beta$ and is denoted by $\beta^{-1}\circ \gamma$ or by ${\gamma}/{\beta}$.
\end{lemma}
\begin{proof}
The subobject lattice(s) in Figure~\ref{Lemma} describes the most general setup imposed by conditions (im) and (eff), in the sense that all other subobject lattices of configurations satisfying these two conditions are at most degenerations of the one in Figure~\ref{Lemma}. Now to construct the unique $\alpha$ simply follow the filled area from right to left.
\end{proof}
The reader may have already noticed that the choice of the symbol $\triangleq$ for quasi-equality was motivated by Figure~\ref{Lemma}, with $L$ at the tip of the pyramid. The proof makes it clear that the lifting lemma is yet another incarnation of the homomorphism theorem.
\begin{rmrk}[Effective computability]\label{comp}
Note that the lift $\alpha=(\bar{\alpha},L_\alpha)$ sees from $N'$ only its subfactor $N'/L_\alpha$. Replacing $N'$ by its subfactor $N'/L_\alpha$ turns $\beta$ into a generalized embedding, which we again denote by $\beta$. Now $\gamma$ and \emph{this} $\beta$ have effective common coarsenings $\widetilde{\gamma}=(\bar{\gamma},L)$ and $\widetilde{\beta}=(\bar{\beta},L)$, which see from $N$ only $N/L$, where $L=L_\gamma+L_\beta$. And modulo $L$ the generalized morphism $\widetilde{\gamma}$ becomes a morphism and the generalized embedding $\widetilde{\beta}$ becomes an (ordinary) embedding. So from the point of view of effective computations the setup can be reduced to the following situation: $\gamma:M'\to N$ is a morphism and $\beta:N' \to N$ is a \emph{monomorphism}. When $M'$, $N'$, and $N$ are finitely presented modules over a \textbf{computable ring} (cf.~Def.~\ref{compdef}) it was shown in \cite[Subsection~3.1.1]{BR} that in this case the unique morphism $\alpha:M'\to N'$ is \textbf{effectively} computable.
\end{rmrk}
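For the reduced situation just described ($\gamma$ a morphism, $\beta$ a monomorphism) the computation of the unique lift becomes ordinary linear algebra when the modules happen to be finite dimensional vector spaces over the rationals. A hedged sketch, assuming {\tt sympy} and with purely illustrative matrices: since $\beta$ has full column rank, $\beta^{\mathrm{tr}}\beta$ is invertible and $\alpha = (\beta^{\mathrm{tr}}\beta)^{-1}\beta^{\mathrm{tr}}\gamma$ recovers the unique $\alpha$ with $\beta\alpha=\gamma$, provided such a lift exists (which the final assertion checks).
\begin{verbatim}
from sympy import Matrix

beta  = Matrix([[1, 0], [0, 1], [1, 1]])   # a monomorphism N' = Q^2 -> N = Q^3
gamma = Matrix([[1, 2], [1, 0], [2, 2]])   # a morphism     M' = Q^2 -> N = Q^3

# beta has full column rank, so beta^T * beta is invertible over Q and
# alpha = (beta^T beta)^{-1} beta^T gamma is the unique candidate lift
alpha = (beta.T * beta).inv() * beta.T * gamma
assert beta * alpha == gamma               # gamma factors as beta o alpha
print(alpha)                               # the unique lift alpha: M' -> N'
\end{verbatim}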
With the notion of a generalized embedding at our disposal we can finally give the horizontal arrows in Figure~\ref{Emb} a meaning. Now consider the three generalized embeddings $\iota: \color{blue}H_n(C) \to \color{blue}C_n$, $\iota_0: {\color{darkgreen}E^\infty_{0,n}} \to \color{blue}C_n$, and $\iota_1: {\color{brown}E^\infty_{1,n-1}} \to \color{blue}C_n$ in Figure~\ref{Lift}. $\iota_p$ is called the \textbf{total embedding} of $E^\infty_{p,n-p}$.
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$C_n$}{$\color{blue}C_n$}
\psfrag{$A_n$}{$\color{darkgreen}A_n$}
\psfrag{$E^2_{1,n-1}$}{${\color{brown}E^\infty_{1,n-1}}$}
\psfrag{$E^2_{0,n}$}{${\color{darkgreen}E^\infty_{0,n}}$}
\psfrag{$H_n(C)$}{$\color{blue}H_n(C)$}
\psfrag{$iota$}{$\iota$}
\psfrag{$iota0$}{$\iota_0$}
\psfrag{$iota1$}{$\iota_1$}
\includegraphics[width=0.55\textwidth]{Lift.eps}
\end{minipage}
\caption{$\iota$ lifts $\iota_0$ and $\iota_1$}
\label{Lift}
\end{figure}
\begin{coro}\label{coro_2-filt}
The generalized embedding $\iota$ in Figure~\ref{Lift} lifts both total embeddings $\iota_0$ and $\iota_1$. Thus the two lifts $\epsilon_0 := {\iota_0}/{\iota}$ and $\epsilon_1 := {\iota_1}/{\iota}$ are generalized embeddings that form a filtration system of $H_n(C)$, visualized in Figure~\ref{2-Filtration}. More precisely, $\epsilon_0$ is an (ordinary) embedding and $\epsilon_1$ is a generalized isomorphism.
\end{coro}
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$E^2_{1,n-1}$}{${\color{brown}E^\infty_{1,n-1}}$}
\psfrag{$E^2_{0,n}$}{${\color{darkgreen}E^\infty_{0,n}}$}
\psfrag{$H_n(C)$}{$\color{blue}H_n(C)$}
\psfrag{$epsilon0$}{$\epsilon_0 = \iota_0 / \iota$}
\psfrag{$epsilon1$}{$\epsilon_1 = \iota_1 / \iota$}
\includegraphics[width=0.55\textwidth]{2-Filtration.eps}
\end{minipage}
\caption{The filtration of $H_n(C)$ given by the $2$-filtration system $\epsilon_0$, $\epsilon_1$}
\label{2-Filtration}
\end{figure}
\begin{proof}
There are two obvious degenerations of the subobject lattice(s) in Figure~\ref{Lemma}, both leading to a sublattice of the lattice in Figure~\ref{Lift}, one for the pair $(\beta,\gamma)=(\iota,\iota_0)$ and the other for $(\beta,\gamma)=(\iota,\iota_1)$. In other words: Following the two filled areas from right to left constructs $\epsilon_0:= {\iota}^{-1}\circ{\iota_0}$ and $\epsilon_1:={\iota}^{-1}\circ{\iota_1}$.
\end{proof}
\begin{coro}[Generalized inverse]\label{geninv}
Let $\psi:S \to T$ be a generalized epimorphism. Then there exists a \emph{unique} generalized epimorphism $\psi^{-1}:T \to S$, such that $\psi^{-1}\circ\psi = ({\mathrm{id}}_{S},\ker\psi)$ and $\psi\circ\psi^{-1} = ({\mathrm{id}}_{T},\Aid\psi)$. $\psi^{-1}$ is called the \textbf{generalized inverse} of $\psi$. In particular, if $\psi$ is an (ordinary) epimorphism, then $\psi^{-1}$ is a generalized isomorphism, and vice versa.
\end{coro}
\begin{proof}
Since $\psi$ lifts ${\mathrm{id}}_T$ define $\psi^{-1}:={\mathrm{id}}_T/\psi$.
\end{proof}
Rephrasing short exact sequences (also called $1$-extensions) in terms of $2$-filtration systems is now an easy application of this corollary. In particular, the information in the short exact sequence (\ref{extensionE}) is fully captured by the $2$-filtration system in Figure~\ref{2-Filtration}. This is the last step in remedying the defect mentioned while introducing the short exact sequence (\ref{extension}) in Section~\ref{les}.
\section{Spectral sequences of filtered complexes}\label{filt}
Everything substantial already happened in Sections~\ref{les} and \ref{genmor}. Here we only show how the ideas already developed for $2$-filtrations and their $2$-step spectral sequences easily generalize to $m$-filtrations and their $m$-step spectral sequences.
We start by recalling the construction of the \textbf{spectral sequence associated to a filtered complex}. The exposition up to Theorem~\ref{mainthm} closely follows \cite[Section~5.4]{WeiHom}. We also remain loyal to our use of subobject lattices as they are able to sum up a considerable number of relations in one picture.
Consider a chain complex $C$ with (an ascending) filtration $F_p C$. The complementary degree $q$ and the total degree $n$ are dropped for better readability. Define the natural projection $\eta_p: F_pC \to F_p C/F_{p-1} C =: E^0_p$. It is elementary to check that the \textbf{subobjects of $r$-approximate cycles}
\[
A^r_p := \ker (F_p C \to F_p C / F_{p-r} C) = \{ c \in F_p C \mid \partial c \in F_{p-r} C\}
\]
satisfy the relations of Figure~\ref{E_r}, with $Z^r_p := A^r_p + F_{p-1} C$, $B^r_p := \partial A^{r-1}_{p+(r-1)} + F_{p-1} C$, and $E^r_p := Z^r_p / B^r_p$. These definitions deviate a bit from those in \cite[Section~5.4]{WeiHom}: here $Z^r_p$ and $B^r_p$ sit between $F_p C$ and $F_{p-1} C$, whereas the $Z^r_p$ and $B^r_p$ of \cite{WeiHom} are the projections under $\eta_p$ onto $E^0_p:=F_p C / F_{p-1} C$ of the ones here, and hence sit in the objects of the $0$-th sheet $E^0_p$. The subobject lattice in Figure~\ref{E_r} should by now be considered an old friend as it is ubiquitous throughout all our arguments.
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$F_p C$}{$F_p C$}
\psfrag{$F_{p-1} C$}{$F_{p-1} C$}
\psfrag{$E^r_p$}{$E^r_p$}
\psfrag{$\iota^r_p$}{}
\psfrag{$Z^r_p$}{$Z^r_p$}
\psfrag{$B^r_p$}{$B^r_p$}
\psfrag{$A^r_p$}{$A^r_p$}
\psfrag{$A^{r-1}_{p-1}$}{$A^{r-1}_{p-1}$}
\psfrag{$partial(A^{r-1})$}{$\partial A^{r-1}_{p+(r-1)}$}
\psfrag{$partial(A^r)$}{$\partial A^r_{p-1+(r)}$}
\includegraphics[width=0.4\textwidth]{E_r.eps}
\end{minipage}
\caption{The fundamental subobject lattice}
\label{E_r}
\end{figure}
\bigskip
Setting $Z^\infty_p := \cap_{r=0}^\infty Z^r_p$ and $B^\infty_p := \cup_{r=0}^\infty B^r_p$ completes the
\textbf{tower} of subobjects
\[
F_{p-1} C =
B^0_p \leq B^1_p \leq \cdots \leq B^r_p \leq \cdots \leq
B^\infty_p \leq Z^\infty_p
\leq \cdots \leq Z^r_p \leq \cdots \leq Z^1_p \leq Z^0_p
= F_p C
\]
between $F_{p-1}C$ and $F_p C$.
From Figure~\ref{E_r} it is immediate that
\[
E^r_p := \frac{Z^r_p}{B^r_p} \cong \frac{A^r_p}{\partial A^{r-1}_{p+(r-1)}+A^{r-1}_{p-1}}.
\]
It is now routine to verify that the total boundary operator $\partial$ induces morphisms
\[
\partial^r_p:E^r_p \to E^r_{p-r}.
\]
And as mentioned in Section~\ref{les} these morphisms decrease the filtration degree by $r$. They complete the definition of the $r$-th sheet.
From the point of view of effective computations the above definition of $\partial^r_p$ \emph{is constructive}, as long as all involved objects are of \emph{finite type}. In fact, it can easily be turned into an algorithm using generalized maps. But since the filtered complexes relevant to our applications are total complexes of bicomplexes, the description of this algorithm is deferred to Section~\ref{bicomplexes}, where the bicomplex structure will be exploited.
To see that $(E^r)$ indeed defines a spectral sequence it remains to show that taking homology in $E^r$ reproduces the objects of $E^{r+1}$ up to (natural) isomorphisms. For this purpose one uses the statements encoded in Figure~\ref{E_r} to deduce that
\begin{enumerate}
\item[(a)] ${Z^r_p}/{Z^{r+1}_p} \cong {B^{r+1}_{p-r}}/{B^r_{p-r}}$,
\item[(b)] $\ker \partial^r_p \cong {Z^{r+1}_p}/{B^r_p}$,
\item[(c)] $\img \partial^r_{p+r} \cong {B^{r+1}_p}/{B^r_p}$, and finally
\item[(d)] $E^{r+1}_p \cong \ker \partial^r_p / \img \partial^r_{p+r}$.
\end{enumerate}
(c) follows from (a) and (b) since they state that $\partial^r_p$ decomposes as
\[
E^r_p := {Z^r_p}/{B^r_p} \xrightarrow{\text{(b)}} {Z^r_p}/{Z^{r+1}_p}
\xrightarrow{\text{(a)}} {B^{r+1}_{p-r}}/{B^r_{p-r}} \hookrightarrow {Z^r_{p-r}}/{B^r_{p-r}} =: E^r_{p-r},
\]
showing that $\img \partial^r_p \cong {B^{r+1}_p}/{B^r_p}$. Now replace $p$ by $p+r$. (d) is the first isomorphism theorem applied to $E^{r+1}_p:=Z^{r+1}_p/B^{r+1}_p$ using (b) and (c). For (a) and (b) see \cite[Lemma 5.4.7 and the subsequent discussion]{WeiHom}.
Before stating the main theorem we make some remarks about convergence. Recall that all our filtrations are assumed finite of length $m$. This means that $E^m$ runs out of arrows and thus stabilizes, i.e. $E^m = E^{m+1} = \cdots$. We already saw this for $m=2$ in Section~\ref{les}. As customary, the stable sheet is denoted by $E^\infty$. The stable form of Figure~\ref{E_r} is Figure~\ref{E_infty}, where $A_p^\infty:=\cap_{r=0}^\infty A^r_p$ and $A^\infty_{p+\infty}:=\cup_{r=0}^\infty A^r_{p+r}$.
\begin{figure}[htb]
\begin{minipage}[c]{1.1\linewidth}
\centering
\psfrag{$F_p C$}{$F_p C$}
\psfrag{$F_{p-1} C$}{$F_{p-1} C$}
\psfrag{$E^r_p$}{$E^\infty_p$}
\psfrag{$\iota^r_p$}{$\iota_p$}
\psfrag{$Z^r_p$}{$Z^\infty_p$}
\psfrag{$B^r_p$}{$B^\infty_p$}
\psfrag{$A^r_p$}{$A^\infty_p$}
\psfrag{$A^{r-1}_{p-1}$}{$A^\infty_{p-1}$}
\psfrag{$partial(A^{r-1})$}{$\partial A^\infty_{p+\infty}$}
\psfrag{$partial(A^r)$}{$\partial A^\infty_{p-1+\infty}$}
\includegraphics[width=0.4\textwidth]{E_r.eps}
\end{minipage}
\caption{The stable fundamental subobject lattice}
\label{E_infty}
\end{figure}
The identities
\begin{equation}\label{infty}
A^\infty_p = \ker \partial_{\mid F_p C} = \{ c \in F_p C \mid \partial c = 0 \}
\end{equation}
and
\begin{equation}\label{p+infty}
\partial A^\infty_{p+\infty} = \img \partial_{\mid F_p C} = \partial C \cap F_p C
\end{equation}
are direct consequences of the respective definitions.
\begin{thm}[Beyond $E^\infty$]\label{mainthm}
Let $C$ be a chain complex with an ascending $m$-step filtration. The generalized embedding $\iota:H(C) \to C$ divides all the generalized embeddings $\iota_p: E^\infty_p \to C$, each called the \textbf{total embedding} of the respective $E^\infty_p$. The quotients $\epsilon_p := \iota_p / \iota$ form an $m$-filtration system which computes the induced filtration on $H(C)$.
\end{thm}
\begin{proof}
We only need to verify the two lifting conditions for the pairs $(\iota,\iota_p)$. Everything else is immediate. For the morphism aid subobjects of $\iota_p$ and $\iota$ we have
\[
L_{\iota_p}=\partial A^\infty_{p+\infty}+F_{p-1} C
\] (see Figure~\ref{E_infty}) and
\[
L_\iota = \partial C.
\]
Define
\[
L:=L_{\iota_p}+L_\iota=(\partial A^\infty_{p+\infty}+F_{p-1} C) + \partial C = \partial C + F_{p-1} C.
\]
\textbf{Condition (im)}: Since $\Img \iota_p = A^\infty_p + F_{p-1} C$ and $\Img \iota = \ker \partial$ we obtain
\begin{eqnarray*}
\Img \widetilde{\iota}_p \leq \Img \widetilde{\iota} & \iff & (A^\infty_p + F_{p-1} C) + L \leq \ker\partial + L \\
&\iff& A^\infty_p + \partial C+ F_{p-1} C \leq \ker\partial + F_{p-1} C.
\end{eqnarray*}
Now $\partial C \leq \ker \partial$ since $\partial$ is a boundary operator, and $A^\infty_p \leq \ker \partial$ by (\ref{infty}). \\
\textbf{Condition (eff)}:
\begin{eqnarray*}
\Img\iota_p \cap L &=& (\partial C + F_{p-1} C) \cap (A^\infty_p + F_{p-1} C) \\
& \stackrel{\text{(\ref{infty})}} {=} &(\partial C \cap F_p C) + F_{p-1} C \\
& \stackrel{\text{(\ref{p+infty})}}{=} & \partial A^\infty_{p+\infty} + F_{p-1} C \\
& = & L_{\iota_p}.
\end{eqnarray*}
The lifting lemma \ref{lifting_lemma} is now applicable, yielding the generalized embeddings $\epsilon_p := \iota_p / \iota$.
\end{proof}
Corollary \ref{coro_2-filt} is the special case $m=2$. In light of Remark \ref{comp} the theorem thus states that the induced filtration on the total (co)homology is effectively computable, as long as the generalized embeddings $\iota$ and $\iota_p$ are effectively computable for all $p$. Hence, it can be viewed as a (more) constructive version of the \textbf{classical convergence theorem} of spectral sequences of filtered complexes, a version that makes use of generalized embeddings:
\begin{thm}[Classical convergence theorem {\cite[Thm.~5.5.1]{WeiHom}}]
Let $C$ be a chain complex with a finite filtration $(F_pC)$. Then the associated spectral sequence converges to $H_*(C)$:
\[
E^0_{pq} := F_p C_{p+q} / F_{p-1} C_{p+q} \Longrightarrow H_{p+q}(C).
\]
\end{thm}
Everything in this section can be reformulated for \emph{co}chain complexes and cohomological spectral sequences.
\section{Spectral sequences of bicomplexes}\label{bicomplexes}
Bicomplexes are one of the main sources for filtered complexes in algebra. They are less often encountered in topology. A \textbf{homological bicomplex} is a lattice $B=(B_{pq})$ ($p,q\in\Z$) of objects connected with \textbf{vertical} morphisms $\partial^\mathrm{v}$ pointing \emph{down} and \textbf{horizontal} morphisms $\partial^\mathrm{h}$ pointing \emph{left}, such that $\partial^\mathrm{v}\partial^\mathrm{h}+\partial^\mathrm{h}\partial^\mathrm{v}=0$.
\[
\xymatrix{
*=0{}
\jumpdir{q}{/:a(90).5cm/}
&
B_{02}
\ar@{.}[rd]
\ar[d]^{\partial_\mathrm{v}}
&
B_{12}
\ar@{.}[rd]
\ar[d]^{\partial_\mathrm{v}}
\ar[l]_{\partial_\mathrm{h}}
&
B_{22}
\ar[d]^{\partial_\mathrm{v}}
\ar[l]_{\partial_\mathrm{h}}
\\
&
B_{01}
\ar@{.}[rd]
\ar[d]^{\partial_\mathrm{v}}
&
B_{11}
\ar@{.}[rd]
\ar[d]^{\partial_\mathrm{v}}
\ar[l]_{\partial_\mathrm{h}}
&
B_{21}
\ar[d]^{\partial_\mathrm{v}}
\ar[l]_{\partial_\mathrm{h}}
\\
&
B_{00}
&
B_{10}
\ar[l]_{\partial_\mathrm{h}}
&
B_{20}
\ar[l]_{\partial_\mathrm{h}}
\\
*=0{} \ar[uuu] \ar[rrr]
&&&
*=0{}
\jumpdir{p}{/:a(-100).5cm/}
}
\]
The \textbf{sign trick} $\hat{\partial}_{pq}:=(-1)^p \partial^\mathrm{v}_{pq}$ converts the \textbf{anticommutative} squares into commutative ones, and hence turns the bicomplex into a \textbf{complex of complexes} connected with chain maps as morphisms, and vice versa.
The direct sum of objects $\Tot(B)_n := \bigoplus_{p+q=n} B_{pq}$ together with the \textbf{total boundary operator} $\partial_n := \sum_{p+q=n} (\partial^\mathrm{v}_{pq}+\partial^\mathrm{h}_{pq})$ form a chain complex called \textbf{the total complex} associated to the bicomplex $B$. $\partial \partial=0$ is a direct consequence of the anticommutativity.
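As a toy sanity check over the rationals, the following sketch (assuming {\tt sympy}, with arbitrary illustrative matrices) assembles the total boundary operators of a bicomplex concentrated in the four spots $(0,0)$, $(1,0)$, $(0,1)$, $(1,1)$ and verifies that anticommutativity of the single square indeed forces $\partial\partial=0$:
\begin{verbatim}
from sympy import Matrix

d_v11 = Matrix([[1]])    # B_{11} -> B_{10}
d_h11 = Matrix([[1]])    # B_{11} -> B_{01}
d_v01 = Matrix([[1]])    # B_{01} -> B_{00}
d_h10 = Matrix([[-1]])   # B_{10} -> B_{00}

# anticommutativity of the single square
assert d_v01 * d_h11 + d_h10 * d_v11 == Matrix.zeros(1, 1)

# Tot_2 = B_{11},  Tot_1 = B_{01} (+) B_{10},  Tot_0 = B_{00};
# the total boundaries are assembled blockwise (summands ordered B_{01}, B_{10})
d2 = d_h11.col_join(d_v11)             # Tot_2 -> Tot_1
d1 = d_v01.row_join(d_h10)             # Tot_1 -> Tot_0
assert d1 * d2 == Matrix.zeros(1, 1)   # the total boundary squares to zero
\end{verbatim}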
The vertical morphisms $d_\mathrm{v}$ of a \textbf{cohomological bicomplex} $(B^{pq})$ point \emph{up} and the horizontal $d_\mathrm{h}$ point \emph{right}. We assume all bicomplexes bounded, i.e.\ only finitely many objects $B_{pq}$ are different from zero.
There exists a natural so-called \textbf{column filtration} of the total complex $\Tot(B)$ such that the $0$-th page $E^0=(E^0_{pq}) = (B_{pq})$ of the spectral sequence associated to this filtration consists of the vertical arrows of $B$ and the $1$-st page $E^1$ contains morphisms induced by the horizontal ones. Its associated spectral sequence is called the \textbf{first spectral sequence} of the bicomplex $B$ and is often denoted by ${}^\mathrm{I}E$. For a formal definition see \cite[Def.~5.6.1]{WeiHom}. The \textbf{second spectral sequence} is the (first) spectral sequence of the \textbf{transposed bicomplex} $^\mathrm{tr} B = (^\mathrm{tr} B_{pq}) := (B_{qp})$. It is denoted by ${}^\mathrm{II}E$. Note that $\Tot(B)=\Tot(^\mathrm{tr}B)$, only the two corresponding filtrations and their induced filtrations on the total homology $H_*(\Tot(B))$ differ in general. So the short notation
\[
{}^\mathrm{I}E^a_{pq} \Longrightarrow H_{p+q}(\Tot(B)) \Longleftarrow {}^\mathrm{II}E^a_{pq}
\]
refers in general to two different filtrations of $H_{p+q}(\Tot(B))$.
Here is an algorithm using generalized maps to compute the arrows
\[
\partial^r_{pq}:E^r_{pq} \to E^r_{p-r,q+r-1}
\]
of the $r$-th term of the homological (first) spectral sequence $E^r$. Again, everything can be easily adapted for the cohomological case. Denote by
\[
\alpha_S: E^r_{pq} \to B_{pq} \quad \mbox{resp.} \quad \alpha_T: E^r_{p-r,q+r-1} \to B_{p-r,q+r-1}
\]
the generalized embedding of the \emph{s}ource resp.\ \emph{t}arget of $\partial^r_{pq}$ into the object $B_{pq}=E^0_{pq} \leq \Tot(B)_{p+q}$ resp.\ $B_{p-r,q+r-1}\leq\Tot(B)_{p+q-1}$. These so-called \textbf{absolute embeddings} are the successive compositions of the \textbf{relative embeddings} $E^r_{pq} \to E^{r-1}_{pq}$. For the sake of completeness we also mention the \textbf{total embeddings}
\[
\iota_S: E^r_{pq} \to \Tot(B)_{p+q} \quad \mbox{resp.} \quad \iota_T: E^r_{p-r,q+r-1} \to \Tot(B)_{p+q-1},
\]
the compositions of $\alpha_S$ resp.\ $\alpha_T$ with the \emph{generalized} embeddings\footnote{It identifies $B_{pq}$ with the \emph{subfactor} of $\Tot(B)_{p+q}$ dictated by the filtration.} $B_{pq} \to \Tot(B)_{p+q}$ resp.\ $B_{p-r,q+r-1} \to \Tot(B)_{p+q-1}$.
\begin{figure}[htb]
\begin{minipage}[c]{1\linewidth}
\centering
\psfrag{$E^infty_{pq}$}{$E^\infty_{pq}$}
\psfrag{$E^r_{pq}$}{$E^r_{pq}$}
\psfrag{$E^0_{pq}$}{$E^0_{pq}$}
\psfrag{$C_{p+q}$}{$C_{p+q}=\Tot(B)_{p+q}$}
\psfrag{$alpha$}{$\alpha_{pq}$}
\psfrag{$iota$}{$\iota_{pq}$}
\psfrag{$...$}{$\cdots$}
\includegraphics[width=0.6\textwidth]{TotEmb.eps}
\end{minipage}
\caption{The relative, absolute, and total embeddings}
\label{TotEmb}
\end{figure}
For $r>1$ let
\[
h^r_{pq}: {\color{brown} B_{pq}} \to {\color{blue} \bigoplus_{i=1}^{r-1} B_{p-i,q+i-1}}
\quad \mbox{and} \quad
v^r_{p-r+1,q+r-1}: {\color{red} B_{p-r+1,q+r-1}} \to {\color{blue} \bigoplus_{i=1}^{r-1} B_{p-i,q+i-1}}
\]
be the restrictions of the total boundary operator $\partial_{p+q}$ to the specified sources and targets. Similarly, for $r>2$ let
\[
l^r_{pq}:{\color{magenta} \bigoplus_{i=1}^{r-2} B_{p-i,q+i}} \to {\color{blue}\bigoplus_{i=1}^{r-1} B_{p-i,q+i-1}},
\]
again the restriction of the total boundary operator $\partial_{p+q}$ to the specified source and target.
\[
\xymatrix@!C=5.5pc{
E^r_{p-r,q+r-1} \ar@{_{(}->}[d]_{\alpha_T} \\
{\color{darkgreen} B_{p-r,q+r-1}} & {\color{red} B_{p-r+1,q+r-1}} \ar[l]_{\partial^\mathrm{h}} \ar[d]^{\partial^\mathrm{v}}\\
& {\color{blue} B_{p-r+1,q+r-2}} \ar@{.}[rd] & {\color{magenta} B_{p-r+2,q+r-2}} \ar[l] \ar@{.>}[d] \ar@{--}[rd] \\
& & {\color{blue} \ddots} \ar@{.}[rd] & {\color{magenta} B_{p-1,q+1}} \ar@{.>}[l] \ar[d] \\
& & & {\color{blue} B_{p-1,q}} & \ar[l]_<(.25){\partial^\mathrm{h}} {\color{brown} B_{pq}} \\
& & & & E_{pq} \ar@{^{(}->}[u]^{\alpha_S}
}
\]
We distinguish four cases $r=0,1,2$, and $r>2$.
\begin{enumerate}
\item[$r=0$:] $\partial^0_{pq}:=\partial^\mathrm{v}_{pq}$. Note that $E^0_{pq}:=B_{pq}$.
\item[$r=1$:] $\partial^1_{pq}:=\alpha_T ^{-1} \circ (\partial^\mathrm{h}_{pq}\circ\alpha_S)$.
\item[$r=2$:] $\partial^2_{pq}:=\alpha_T ^{-1} \circ (\partial^\mathrm{h}_{p-1,q+1} \circ (-\beta^{-1} \circ (h^2_{pq}\circ\alpha_S)))$, where $\beta:=v^2_{p-1,q+1}$. Note that $h^2_{pq} = \partial^\mathrm{h}_{pq}$ and $v^2_{p-1,q+1}=\partial^\mathrm{v}_{p-1,q+1}$.
\item[$r>2$:] $\partial^r_{pq}:=\alpha_T^{-1} \circ (\partial^\mathrm{h}_{p-r+1,q+r-1} \circ (-\beta^{-1} \circ (h^r_{pq}\circ\alpha_S)))$, with $\beta:=(v^r_{p-r+1,q+r-1},l^r_{pq})$, the coar\-sening of $v^r_{p-r+1,q+r-1}$ with aid $l^r_{pq}$. We say: $v^r_{p-r+1,q+r-1}$ aided by $l^r_{pq}$ lifts $h^r_{pq}\circ\alpha_S$.
\end{enumerate}
\smallskip
We announced an algorithm but ended up providing closed formulas; this is the true value of the generalized maps mentioned in the Introduction. As an easy exercise, the reader might try to rephrase the diagram chasing of the snake lemma as a closed formula in terms of generalized maps. The concept of a generalized map evolved during the implementation of the {\tt homalg} package in {\sf GAP} \cite{homalg-package}.
It follows from Remark~\ref{comp} that the spectral sequence of a finite type bounded bicomplex (in fact, of a finite type complex with finite filtration) over a computable ring is effectively computable (cf.~Def.~\ref{compdef}). The {\tt homalg} package \cite{homalg-package} contains routines to compute spectral sequences of bicomplexes.
\medskip
We end this section with a simple example from linear algebra. Let $k$ be a field and $\lambda\in k$ a field element. The {\sc Jordan}-form matrix
\[
J(\lambda) =
\left(\begin{matrix}
\lambda & 1 & \cdot \\
\cdot & \lambda & 1 \\
\cdot & \cdot & \lambda
\end{matrix}\right) \in k^{3\times 3}
\]
turns $V := k^{1\times 3}$ into a left $k[x]$-module (of finite length), where $x$ acts via $J(\lambda)$, i.e.\ $x v := v J(\lambda)$ for all $v\in V$. The $k[x]$-module $V$ is filtered and the filtration stems from a bicomplex:
\begin{exmp}[\textbf{Spectrum} of an endomorphism]
Let $k$ be a field and $\lambda\in k$. Consider the second quadrant bicomplex $B_\lambda$
\[
\xymatrix{
B_{-2,3} \ar[d]_{\left( \begin{smallmatrix} x-\lambda \end{smallmatrix} \right)} \\
B_{-2,2} & B_{-1,2} \ar[l]_{\left( \begin{smallmatrix} -1 \end{smallmatrix} \right)}
\ar[d]_{-\left( \begin{smallmatrix} x-\lambda \end{smallmatrix} \right)} \\
& B_{-1,1} & B_{0,1} \ar[l]_{\left( \begin{smallmatrix} -1 \end{smallmatrix} \right)}
\ar[d]_{\left( \begin{smallmatrix} x-\lambda \end{smallmatrix} \right)} \\
& & B_{0,0}
}
\]
with $B_{0,0}=B_{0,1}=B_{-1,1}=B_{-1,2}=B_{-2,2}=B_{-2,3}=k[x]$, all other spots being zero. The total complex contains exactly two nontrivial $k[x]$-modules at degrees $0$ and $1$ and a single nontrivial morphism
\[
\partial_1(\lambda) : \xymatrix@1{ \Tot(B)_1=k[x]^{1\times 3} \ar[r] & k[x]^{1\times 3} = \Tot(B)_0}
\]
with matrix
\[
x\mathrm{Id} - J(\lambda) =
\left(\begin{matrix}
x-\lambda & -1 & \cdot \\
\cdot & x-\lambda & -1 \\
\cdot & \cdot & x-\lambda
\end{matrix}\right).
\]
The first spectral sequence ${}^\mathrm{I}E$ lives in the second quadrant and stabilizes already at ${}^\mathrm{I}E^1 =: {}^\mathrm{I}E^\infty$
\[
\xymatrix@!=0.8pc{
\cdot\ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
{}^\mathrm{I}E^1_{-2,2} \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot\\
\cdot \ar@{.}[rd] & {}^\mathrm{I}E^1_{-1,1} \ar@{.}[rd] & \cdot \\
\cdot & \cdot & {}^\mathrm{I}E^1_{0,0}
}
\]
with ${}^\mathrm{I}E^\infty_{0,0}={}^\mathrm{I}E^\infty_{-1,1}={}^\mathrm{I}E^\infty_{-2,2}=k[x]/\langle x - \lambda \rangle$.
The second spectral sequence ${}^\mathrm{II}E$ lives in the fourth quadrant, has only zero arrows at levels $1$ and $2$
\[
\xymatrix@!=0.9pc{
{}^\mathrm{II}E^1_{0,0} \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
\cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
\cdot & \cdot & \cdot & {}^\mathrm{II}E^1_{3,-2}
} \quad\quad
\xymatrix@!=0.8pc{
{}^\mathrm{II}E^2_{0,0} \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
\cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
\cdot & \cdot & \cdot & {}^\mathrm{II}E^2_{3,-2}
}
\]
with ${}^\mathrm{II}E^1_{0,0}={}^\mathrm{II}E^1_{3,-2}=k[x]$, and hence ${}^\mathrm{II}E^2_{0,0}={}^\mathrm{II}E^2_{3,-2}=k[x]={}^\mathrm{II}E^3_{0,0}={}^\mathrm{II}E^3_{3,-2}$. At level $3$ there exists a single nonzero arrow $\partial^3_{3,-2}$ with matrix $\left(\begin{matrix}(x-\lambda)^3\end{matrix}\right)$:
\[
\xymatrix@!=0.9pc{
{}^\mathrm{II}E^3_{0,0} \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
\cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot\\
\cdot & \cdot & \cdot & {}^\mathrm{II}E^3_{3,-2} \ar[uulll] |{\partial^3_{3,-2}}
}
\]
${}^\mathrm{II}E$ finally collapses to its $p$-axis at ${}^\mathrm{II}E^4 =: {}^\mathrm{II}E^\infty$
\[
\xymatrix@!=0.9pc{
{}^\mathrm{II}E^4_{0,0} \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
\cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \ar@{.}[rd] & \cdot \\
\cdot & \cdot & \cdot & \cdot
}
\]
with ${}^\mathrm{II}E^\infty_{0,0} = k[x] / \langle ( x - \lambda )^3\rangle$, providing a spectral sequence proof for the elementary fact that
\[
\coker\partial_1(\lambda) \cong k[x]/\langle ( x - \lambda )^3\rangle.
\]
Conversely, this isomorphism implies that the matrix of the morphism $\partial^3_{3,-2}$ is equal to $\left(\begin{matrix}( x - \lambda )^3\end{matrix}\right)$, up to a unit $a\in k^\times$; see also the computational sketch below.
\end{exmp}
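Outside the {\tt homalg} framework, this last fact is also easy to check for a concrete value of $\lambda$ with a few lines of computer algebra. The following sketch (in Python using {\tt SymPy}; it is ours and not part of the package, and the choice $\lambda=1/2$ is arbitrary) computes the invariant factors of $x\mathrm{Id}-J(\lambda)$ over $\mathbb{Q}[x]$ via gcds of minors, recovering the {\sc Smith} normal form $\mathrm{diag}(1,1,(x-\lambda)^3)$ and hence $\coker\partial_1(\lambda)\cong k[x]/\langle ( x - \lambda )^3\rangle$:
\begin{verbatim}
# Verify coker(x*Id - J(lambda)) ~ k[x]/<(x - lambda)^3> for a concrete lambda
# by computing the invariant factors of x*Id - J(lambda) via gcds of minors.
from functools import reduce
from itertools import combinations
import sympy as sp

x = sp.symbols('x')
lam = sp.Rational(1, 2)          # arbitrary concrete field element

A = sp.Matrix([[x - lam, -1,       0      ],
               [0,       x - lam, -1      ],
               [0,       0,        x - lam]])

def gcd_of_minors(M, k):
    """Monic gcd of all k x k minors of M, as a polynomial in x."""
    n = M.rows
    minors = [M[list(r), list(c)].det()
              for r in combinations(range(n), k)
              for c in combinations(range(n), k)]
    return sp.Poly(reduce(sp.gcd, minors), x).monic()

g1, g2, g3 = (gcd_of_minors(A, k) for k in (1, 2, 3))
d1, d2, d3 = g1, sp.div(g2, g1)[0], sp.div(g3, g2)[0]
print(d1.as_expr(), d2.as_expr(), d3.as_expr())
# prints: 1 1 (x - 1/2)**3, i.e. the only nontrivial invariant factor is
# (x - lambda)^3, as predicted by the spectral sequence argument.
\end{verbatim}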
\section{The {\sc Cartan-Eilenberg} resolution of a complex}
The \textbf{{\sc Cartan-Eilenberg} resolution} generalizes the \textbf{horse shoe lemma} in the following sense: The horse shoe lemma produces a \textbf{simultaneous} projective resolution\footnote{We will only refer to projective resolutions as they are more relevant to effective computations.}
\[
\xymatrix{
& 0 \ar[d] & 0 \ar[d] & \cdots & 0 \ar[d] \\
0 & M' \ar[l] \ar[d] & P'_0 \ar[l] \ar[d] & \cdots \ar[l] & P'_d \ar[l] \ar[d] & 0 \ar[l] \\
0 & M \ar[l] \ar[d] & P_0 \ar[l] \ar[d] & \cdots \ar[l] & P_d \ar[l] \ar[d] & 0 \ar[l] \\
0 & M'' \ar[l] \ar[d] & P''_0 \ar[l] \ar[d] & \cdots \ar[l] & P''_d \ar[l] \ar[d] & 0 \ar[l] \\
& 0 & 0 & \cdots & 0
}
\]
of a short exact sequence $0 \xleftarrow{} M'' \xleftarrow{} M \xleftarrow{} M' \xleftarrow{} 0$, where simultaneous means that each row is a projective resolution and all columns are exact. Now let us look at this threefold resolution in the following way: The short exact sequence defines a $2$-step filtration of the object $M$ with graded parts $M'$ and $M''$ and the horse shoe lemma states that any resolutions of the graded parts can be put together to a resolution of the total object $M$. In fact, as $P_i''$ is projective, it follows that the total object $P_i$ must even be the direct sum of the graded parts $P_i'$ and $P_i''$. The non-triviality of the filtration on $M$ is reflected in the fact that the morphisms of the total resolution $P_*$ are in general not merely the direct sum of the morphisms in the resolutions $P_*'$ and $P_*''$ of the graded parts $M'$ and $M''$. This statement can now be generalized to $m$-step filtrations simply by applying the ($2$-step) horse shoe lemma inductively.
Now consider a complex $(C,\partial)$, which is not necessarily exact. On each object $C_n$ the complex structure induces a $3$-step filtration $0\leq B_n \leq Z_n \leq C_n$, with boundaries $B_n := \img \partial_{n+1}$ and cycles $Z_n:=\ker \partial_n$. The above discussion now applies to the three graded parts $B_n$, $H_n:=Z_n/B_n$ and $C_n/Z_n$ and any three resolutions thereof can be put together to a resolution of the total object $C_n$. If one takes into account the fact that $\partial_{n+1}$ induces an isomorphism between $C_{n+1}/Z_{n+1}$ and $B_n$ (for all $n$, by the homomorphism theorem), then all total resolutions of all the $C_n$'s can be constructed in a compatible way so that they fit together in one complex of complexes. This complex is called the {\sc Cartan-Eilenberg} resolution of the complex $C$.
A formal version of the above discussion can be found in \cite[Lemma~9.4]{HS} or \cite[Lemma~5.7.2]{WeiHom}. Since the projective horse shoe lemma is constructive, the projective {\sc Cartan-Eilenberg} resolution is so as well.
\section{{\sc Grothendieck}'s spectral sequences}
Let $\mathcal{C} \xleftarrow{F} \mathcal{B} \xleftarrow{G} \mathcal{A}$ be composable functors of abelian categories. The so-called {\sc Gro\-then\-dieck} spectral sequence relates, under mild assumptions, the composition of the derivations of $F$ and $G$ with the derivation of their composition $F\circ G$. There are 16 versions of the {\sc Grothendieck} spectral sequence, depending on whether $F$ resp.\ $G$ is co- or contravariant, and whether $F$ resp.\ $G$ is being left or right derived. Four of them do not use injective resolutions and are therefore rather directly accessible to a computer. In this section two versions out of the four are reviewed: The filtrations of $L\otimes_D M$ and $\Hom_D(M,N)$ mentioned in the Introduction are recovered in the next section as the spectral filtrations induced by these two {\sc Grothen\-dieck} spectral sequences, after appropriately choosing the functors $F$ and $G$.
\begin{axiom}[{\sc Grothendieck} spectral sequence, {\cite[Thm.~11.41]{Rot}}]\label{Gr1}
Let $F$ and $G$ be contravariant functors and let every object in $\mathcal{A}$ and $\mathcal{B}$ have a \emph{finite} projective resolution. Under the assumptions that
\begin{enumerate}
\item $G$ maps projective objects to $F$-acyclic objects and that
\item $F$ is left exact,
\end{enumerate}
there exists a second quadrant homological spectral sequence with
\[
E^2_{pq}=\RR^{-p}F \circ \RR^qG \Longrightarrow \LL_{p+q}(F\circ G).
\]
\end{axiom}
\begin{proof}
Let $M$ be an object in $\mathcal{A}$ and $P_\bullet=(P_p)$ a finite projective resolution of $M$. Denote by $CE=(CE^{p,q})$ the projective {\sc Cartan-Eilenberg} resolution of the cocomplex $(Q^p):=(G(P_p))$. It exists since $\mathcal{B}$ has enough projectives. Note that $q\leq 0$ since $CE$ is a cohomological bicomplex. Define the homological bicomplex $B=(B_{p,q}):=\left(F(CE^{p,q})\right)$. We call $B$ the \textbf{Grothendieck bicomplex} associated to $M$, $F$, and $G$. It lives in the fourth quadrant and is bounded in both directions. \\
\textbf{The first spectral sequence $^\mathrm{I}E$}: \\
For fixed $p$ the vertical cocomplex $CE^{p,\bullet}$ is, by construction, a projective resolution of $G(P_p)$. Hence $^\mathrm{I}E^1_{pq}=\RR^{-q}F(G(P_p))$. But since $G(P_p)$ is $F$-acyclic by assumption (1), the first sheet collapses to the $0$-th row. The left exactness of $F$ implies that $\RR^0F=F$ and hence $^\mathrm{I}E^1_{p0}=(F\circ G)(P_p)$. I.e.\ the $0$-th row of $^\mathrm{I}E^1$ is nothing but the covariant functor $F\circ G$ applied to the projective resolution $(P_p)$ of $M$. The first spectral sequence of $B$ thus stabilizes at level $2$ with the single row $^\mathrm{I}E^2_{n,0}=\LL_n(F\circ G)(M)$. \\
\textbf{The second spectral sequence $^\mathrm{II}E$}: \\
The second spectral sequence of the bicomplex $B$ is by definition the spectral sequence of its transposed $(^\mathrm{tr}B_{pq}):=(B_{qp})$, a second quadrant bicomplex. Obviously $^\mathrm{tr}B=F(^\mathrm{tr}CE)$. By definition, the $q$-th row ${}^\mathrm{II}E^1_{\bullet,q}:=H^\mathrm{vert}_{\bullet,q}(^\mathrm{tr}B)=H^\mathrm{vert}_{\bullet,q}(F(^\mathrm{tr}CE))=F(H_\mathrm{vert}^{\bullet,q}(^\mathrm{tr}CE))$, where the last equality follows from the properties of the {\sc Cartan-Eilenberg} resolution and the additivity of $F$. Now recall that the vertical cohomologies $H_\mathrm{vert}^{\bullet,q}(^\mathrm{tr}CE)$ are for fixed $q$, again by construction, projective resolutions of the cohomology $H^q(G(P_\bullet))=:\RR^q G(M)$. Hence $^\mathrm{II}E^2_{pq} = \RR^{-p}F (\RR^q G(M))$.
\end{proof}
The proof shows that assumptions (1) and (2) only involve the first spectral sequence. Assumption (1) guaranteed the collapse of the first spectral sequence at the first level, while (2) ensures that the natural transformation $F \to \RR^0 F$ is an equivalence. In other words, dropping (2) means replacing $\LL_{p+q}(F\circ G)$ by $\LL_{p+q}(\RR^0F\circ G)$.
\begin{axiom}[{\sc Grothendieck} spectral sequence]\label{Gr2}
Let $F$ be a covariant and $G$ a contravariant functor and let every object in $\mathcal{A}$ and $\mathcal{B}$ have a \emph{finite} projective resolution. Under the assumptions that
\begin{enumerate}
\item $G$ maps projective objects to $F$-acyclic objects and that
\item $F$ is right exact,
\end{enumerate}
there exists a second quadrant cohomological spectral sequence with
\[
E^2_{pq}=\LL_{-p}F \circ \RR^qG \Longrightarrow \RR^{p+q}(F\circ G).
\]
\end{axiom}
\begin{proof}
Again the first spectral sequence is a fourth quadrant spectral sequence while the second lives in the second quadrant. Assumption (2) ensures that the natural transformation $\LL_0 F \to F$ is an equivalence. The above proof and the subsequent remark can be copied with the obvious modifications.
\end{proof}
\begin{rmrk}[One sided boundedness]\label{bounded}
The existence of finite projective resolutions in $\mathcal{A}$ and $\mathcal{B}$ led the spectral sequences to be bounded in both directions. In order to avoid convergence subtleties it would suffice to assume boundedness in just one direction by requiring that either $\mathcal{A}$ or $\mathcal{B}$ allows finite projective resolutions while the other has enough projectives. The assumption of the existence of \emph{finite} projective resp.\ injective resolutions can be dropped when dealing with the versions of the {\sc Grothendieck} spectral sequences that live in the first resp.\ third quadrant.
\end{rmrk}
\section{Applications}\label{appl}
This section recalls how the natural filtrations mentioned in examples (a), (a'), and (d) of the Introduction can be recovered as \textbf{spectral filtrations}.
Theorems~\ref{Gr1} and \ref{Gr2} admit an obvious generalization. The composed functor $F\circ G$ can be replaced by a functor $H$ that coincides with $F\circ G$ on projectives (for other versions of the {\sc Grothendieck} spectral sequence the ``projectives'' has to be replaced by ``injectives''). As usual, $D$ is an associative ring with $1$. $\Ext^n_D$ and $\Tor_n^D$ are abbreviated as $\Ext^n$ and $\Tor_n$.
\bigskip
\noindent
\textbf{Assumption:} In this section the left \emph{or} right global dimension\footnote{
Recall, the \textbf{left} \textbf{global (homological) dimension} is the supremum over all projective dimensions of \emph{left} $D$-modules (see Subsection~\ref{codegree}). If $D$ is left {\sc Noether}ian, then the left global dimension of $D$ coincides with the \textbf{weak global (homological) dimension}, which is the largest integer $\mu$ such that $\Tor^D_\mu(M,N) \neq 0$ for some right module $M$ and left module $N$, otherwise infinity (cf.~\cite[7.1.9]{MR}). This last definition is obviously left-right symmetric. The same is valid if ``left'' is replaced by ``right''.} of $D$ is assumed finite. The involved spectral sequences will then be bounded in (at least) one direction (see Remark~\ref{bounded}).
\subsection{\texorpdfstring{The double-$\Ext$ spectral sequence and the filtration of $\Tor$}{The double-Ext spectral sequence and the filtration of Tor}}
\begin{coro}[The double-$\Ext$ spectral sequence]\label{doubleExt}
Let $M$ be a left $D$-module and $L$ a right $D$-module. Then there exists a second quadrant homological spectral sequence with
\[
E^2_{pq}=\Ext^{-p}(\Ext^q(M,D),L) \Longrightarrow \Tor_{p+q}(L,M).
\]
In particular, there exists an ascending filtration of $\Tor_{p+q}(L,M)$ with $\gr_p \Tor_{p+q}(L,M)$ naturally isomorphic to a subfactor of $\Ext^{-p}(\Ext^q(M,D),L)$, $p\leq 0$.
\end{coro}
The special case $p+q=0$ recovers the filtration of $L\otimes M$ mentioned in Example (a) of the Introduction via the natural isomorphism $L\otimes M \cong \Tor_0 (L,M)$.
\subsubsection{Using the {\sc Grothendieck} bicomplex}\label{HGB}
Corollary~\ref{doubleExt} is a consequence of Theorem~\ref{Gr1} for $F:=\Hom_D(-,L)$ and $G:=\Hom_D(-,D)$, since $F\circ G$ coincides with $L\otimes_D -$ on projectives.
To be able to effectively compute the double-$\Ext$ groups in the {\sc Grothendieck} bicomplex, the ring $D$ must be computable in the sense that \emph{two-sided} inhomogeneous linear systems $A_1 X_1 + X_2 A_2 = B$ must be effectively solvable, where $A_1$, $A_2$, and $B$ are matrices over $D$ (see~\cite[Subsection~6.2.4]{BR}). This is immediate for computable commutative rings (cf.~Def.~\ref{compdef}). In \ref{ExtExt} an example over a commutative ring is treated.
\subsubsection{Using the bicomplex $I_L\otimes P^M$}
The \textbf{bifunctoriality} of $\otimes$ leads to the following homological bicomplex
\[
B := I_L\otimes P^M \cong \Hom(\Hom(P^M,D),I_L),
\]
where $P^M$ is a projective resolution of $M$ and $I_L$ is an injective resolution of $L$. Starting from $r=2$ the first and second spectral sequence of $B$ coincide with those of the {\sc Grothendieck} bicomplex associated to $M$, $F:=\Hom_D(-,L)$, and $G:=\Hom_D(-,D)$. In contrast to the {\sc Grothendieck} bicomplex, the bicomplex $B$ is, over most rings of interest, in general highly nonconstructive, as an injective resolution enters its definition. In \cite[Lemma~1.1.8]{HL} a sheaf variant of this bicomplex was used to ``compute'' the purity filtration (see below).
\subsubsection{The bidualizing complex}\label{bidual}
Taking $L=D$ as a right $D$-module in Corollary~\ref{doubleExt} recovers the \textbf{bidualizing spectral sequence} of {\sc J.-E.~Roos} \cite{Roos}.
\[
E^2_{pq}=\Ext^{-p}(\Ext^q(M,D),D) \Longrightarrow \left\{\begin{array}{cc} M & \mbox{ for } p+q = 0, \\ 0 & \mbox{ otherwise.} \end{array}\right.
\]
The {\sc Grothendieck} bicomplex is then known as the \textbf{bidualizing complex}. The case $p+q=0$ defines the \textbf{purity filtration}\footnote{Unlike \cite[Chap.~2, Subsection~4.15]{Bjo}, we only make the weaker assumption stated at the beginning of the section.} $(\tor_{-c} M)$ of $M$, which was motivated in Example (a') of the Introduction. For more details cf.~\cite[Chap.~2, §5,7]{Bjo}.
The module $M_c = E^\infty_{-c,c}$ is for $c=0$ and $c=1$ a submodule of $\Ext^c(\Ext^c(M,D),D)=E^2_{-c,c}$ and for $c\geq 2$ in general only a subfactor. All this is obvious from the shape of the bidualizing spectral sequence.
Since $M_c = \tor_{-c} M / \tor_{-(c+1)}M$ it follows that the \textbf{higher evaluation maps} $\varepsilon_c$
\[
0 \xrightarrow{} \tor_{-(c+1)}M \xrightarrow{} \tor_{-c} M \xrightarrow{\varepsilon_c} \Ext^c_D(\Ext^c_D(M,D),D)
\]
mentioned in the Introduction are only a different way of writing the generalized embeddings
\[
\bar{\varepsilon}_c:M_c \to \Ext^c(\Ext^c(M,D),D).
\]
So without further assumptions $\varepsilon_c$ (resp.\ $\bar{\varepsilon}_c$) is known to be an ordinary morphism (resp.\ embedding) only for $c=0$ and $c=1$. Now assuming that $E^2_{pq}:=\Ext^{-p}(\Ext^q(M,D),D)$ vanishes\footnote{This condition is satisfied for an \textbf{{\sc Auslander} regular} ring $D$: $\Ext^{-p}(\Ext^q(M,D),D) = 0$ for all $p+q>0$ and all $D$-modules $M$. See \cite[Chap.~2: Cor.~5.18, Cor.~7.5]{Bjo}.} for $p+q=1$, then all arrows ending at total degree $p+q=0$ vanish (as they all start at total degree $p+q=1$). It follows that for all $c$ the module $M_c$ is not merely a subfactor of $\Ext^c(\Ext^c(M,D),D)$ but a submodule, or, equivalently, $\varepsilon_c$ (resp.\ $\bar{\varepsilon}_c$) is an ordinary morphism (resp.\ embedding).
In any case the module $\Ext^c(\Ext^c(M,D),D)$ is called the \textbf{reflexive hull} of the \textbf{pure} subfactor $M_c$.
\begin{defn}[Pure, reflexively pure]
A module $M$ is called \textbf{pure} if it consists of exactly one nontrivial pure subfactor $M_c$ or is zero. A nontrivial module $M$ is called \textbf{reflexively pure} if it is pure and if the generalized embedding $M=M_c \to \Ext^c(\Ext^c(M,D),D)$ is an isomorphism. Define the zero module to be reflexively pure.
\end{defn}
If $M$ is a finitely generated $D$-module, then all ingredients of the bidualizing complex are again finitely generated (projective) $D$-modules, even if the ring $D$ is \emph{non}commutative. It follows that the purity filtration over a computable ring $D$ is effectively computable. A commutative and a noncommutative example are given in \ref{PurityFiltration} and \ref{PurityFiltration:A3} respectively. The latter demonstrates how the purity filtration (as a filtration that always exists) can be used to transform a linear system of PDEs into a triangular form where now a cascade integration strategy can be used to obtain exact solutions. The idea of viewing a linear system of PDEs as a module over an appropriate ring of differential operators was emphasized by {\sc B.~Malgrange} in the late 1960's and according to him goes back to {\sc Emmy Noether}.
\subsubsection{Criteria for reflexive purity}\label{reflexive}
This subsection lists some simple criteria for reflexive purity of modules.
First note that the existence of the bidualizing spectral sequence immediately implies that the set $c(M):=\{c \geq 0 \mid \Ext^c_D(M,D)\neq 0\}$ is empty only if $M=0$. Recall that if $c(M)$ is nonempty, then its minimum is called the \textbf{grade} or \textbf{codimension} of $M$ and denoted by $j(M)$ or $\codim M$. The codimension of the zero module is set to be $\infty$. Further define $\bar{q}(M):=\sup c(M)$ in case $c(M)\neq \emptyset$, and $\infty$ otherwise.
All of the following arguments make use of the shape of the bidualizing spectral sequence in the respective situation.
$\bullet$ If $c(M)$ contains a single element, i.e.\ if $\codim M = \bar{q}(M) =: \bar{q} < \infty$, then $M=M_{\bar{q}}$ is reflexively pure of codimension $\bar{q}$, giving a simple spectral sequence proof of \cite[Thm.~7]{QEB}.
For the remaining criteria assume that $\Ext^{-p}(\Ext^q(M,D),D) = 0$ for $p+q=1$:
$\bullet$ If $\bar{q}:=\bar{q}(M)$ is finite, then $E^2_{-\bar{q},\bar{q}}=E^\infty_{-\bar{q},\bar{q}}$, i.e.\ $M_{\bar{q}}$ is reflexively pure (possibly zero). This generalizes the above criterion (under the assumption just made).
$\bullet$ Now if $M$ is a left (resp.\ right) $D$-module, then assume further that the right (resp.\ left) global dimension $d$ of the ring $D$ is finite. It follows that $E^2_{-c,c}=E^\infty_{-c,c}$ for $c=d$ and $c=d-1$. This means that under the above assumptions the subfactors $M_d$ and $M_{d-1}$ are always reflexively pure\footnote{In case $D=A_n$, the $n$-th {\sc Weyl} algebra over a field, this says that \textbf{holonomic} and \textbf{subholonomic} modules are reflexively pure. See~\cite[Chap.~2, §7]{Bjo}.}.
\subsubsection{Codegree of purity}\label{codegree}
As a {\sc Grothendieck} spectral sequence the bidualizing spectral sequence becomes intrinsic at level $2$. Each $E^2_{-c,c}$ starts to ``shrink'' until it stabilizes at $E^\infty_{-c,c}=M_c$. Motivated by this, define the \textbf{codegree of purity} $\cp M$ of a module $M$ as follows: Set $\cp M$ to $\infty$ if $M$ is not pure. Otherwise $\cp M$ is a tuple of nonnegative integers, the length of which is one plus the number of times $E^a_{-c,c}$ shrinks (nontrivially\footnote{i.e.\ passes to a \emph{true} subfactor.}) for $a\geq 2$ until it stabilizes at $M_c$. The entries of this tuple are the numbers of pages between the drops, i.e.\ the width of the steps in the staircase of objects $(E^a_{-c,c})_{a\geq 2}$. It follows that the sum over the entries of $\cp M$ is the number of pages it takes for $E^2_{-c,c}$ until it reaches $M_c$. In particular, a module is \emph{reflexively pure} if and only if $\cp M = (0)$.
The codegree of purity appears in Examples \ref{PurityFiltration} and \ref{PurityFiltration:A3}. In Example~\ref{CodegreeOfPurity} the codegree of purity is compared with two other classical homological invariants: \\
Recall, the \textbf{projective dimension} of a module $M$ is defined to be the length $d$ of the shortest projective resolution $0\xleftarrow{}M\xleftarrow{}P_0\xleftarrow{} \cdots \xleftarrow{} P_d\xleftarrow{} 0$. \textbf{{\sc Auslander}'s degree of torsion-freeness} of a module $M$ is defined following \cite[Def.~on~p.~2 \& Def.~2.15(b)]{AB} to be the smallest \emph{nonnegative} integer $i$, such that $\Ext^{i+1}(\mathrm{A}(M),D)\neq 0$, otherwise $\infty$, where $\mathrm{A}(M)$ is the so-called \textbf{{\sc Auslander} dual} of $M$ (see also \cite[Def.~5]{QEB}, \cite[Def.~19]{CQR05}). To construct $\mathrm{A}(M)$ take a projective presentation $0\xleftarrow{} M \xleftarrow{} P_0 \xleftarrow{d_1} P_1$ of $M$ and set
\[
\mathrm{A}(M):=\coker(P_0^* \xrightarrow{d_1^*} P_1^*),
\]
where $d_1^* := \Hom(d_1, D)$ (cf.~\cite[p.~1 \& Def.~2.5]{AB}). Like the syzygy modules, it is proved in \cite[Prop.~2.6(b)]{AB} that $\mathrm{A}(M)$ is uniquely determined by $M$ up to \textbf{projective equivalence} (see also \cite{Q} and \cite[Thm.~2]{PQ00}). In particular, the degree of torsion-freeness is well-defined. The fundamental exact sequence \cite[(0.1) \& Prop.~2.6(a)]{AB}
\[
0 \xrightarrow{} \Ext^1_D(\mathrm{A}(M), -) \xrightarrow{}
M \otimes_D - \xrightarrow{} \Hom_{D}( M^*, - )
\xrightarrow{} \Ext^2_{D}(\mathrm{A}(M),-) \xrightarrow{} 0,
\]
applied to $D$, characterizes torsion-freeness and reflexivity of the module $M$ (see also \cite[Exercise~IV.7.3]{HS}, \cite[Thm.~6]{CQR05}). For a characterization of projectivity using the degree of torsion-freeness see \cite[Thm.~7]{CQR05}.
The codegree of purity can be defined for quasi-coherent sheaves of modules replacing $D$ by the structure sheaf $\mathcal{O}_X$ or by the dualizing sheaf\footnote{It may even be defined for objects in an abelian category with a dualizing object.} if it exists. It is important to note that the codegree of purity of a coherent sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules on a projective scheme $X=\Proj(S)$ may differ from the codegree of purity of a graded $S$-module $M$ used to represent $\mathcal{F}=\widetilde{M}=\Proj{M}$. This is mainly due to the fact that $\mathcal{F}=\widetilde{M}$ vanishes for {\sc Artin}ian modules $M$.
There are several obvious ways how one can refine the codegree of purity to get sharper invariants. The codegree of purity is an example of what can be called a \textbf{spectral invariant}.
\subsection{\texorpdfstring{The $\Tor$-$\Ext$ spectral sequence and the filtration of $\Ext$}{The Tor-Ext spectral sequence and the filtration of Ext}}
\begin{coro}[The $\Tor-\Ext$ spectral sequence]\label{TorExt}
Let $M$ and $N$ be left $D$-modules. Then there exists a second quadrant cohomological spectral sequence with
\[
E_2^{pq}=\Tor_{-p}(\Ext^q(M,D),N) \Longrightarrow \Ext^{p+q}(M,N).
\]
In particular, there exists a descending filtration of $\Ext^{p+q}(M,N)$ with $\gr^p \Ext^{p+q}(M,N)$ naturally isomorphic to a subfactor of $\Tor_{-p}(\Ext^q(M,D),N)$, $p\leq 0$.
\end{coro}
The special case $p+q=0$ recovers the filtration of $\Hom(M,N)$ mentioned in Example (d) of the Introduction via the natural isomorphism $\Hom(M,N) \cong \Ext^0 (M,N)$.
\bigskip
For \textbf{holonomic} modules $M$ over the {\sc Weyl} $k$-algebra $D:=A_n$ the special case formula
\[
\Hom(M,N) \cong \Tor_n(\Ext^n(M,D),N)
\]
(cf.~\cite[Chap.~2, Thm.~7.15]{Bjo}) was used by {\sc H.~Tsai} and {\sc U.~Walther} in the case when also $N$ is holonomic to compute the finite dimensional $k$-vector space of homomorphisms \cite{TW}.
The induced filtration on $\Ext^1(M,N)$ can be used to attach a numerical invariant to each extension of $M$ with submodule $N$. This gives another example of a \textbf{spectral invariant}.
\subsubsection{Using the {\sc Grothendieck} bicomplex}\label{CGB}
Corollary~\ref{TorExt} is a consequence of Theorem~\ref{Gr2} for $F:=-\otimes_D N$ and $G:=\Hom_D(-,D)$ since $F\circ G$ coincides with $\Hom_D(-,N)$ on projectives. See Example~\ref{TorExt:Grothendieck}.
\subsubsection{Using the bicomplex $\Hom(P^M,P^N)$}\label{HPP}
The \textbf{bifunctoriality} of $\Hom$ leads to the following cohomological bicomplex
\[
B := \Hom(P^M,P^N) \cong \Hom(P^M,D)\otimes P^N,
\]
where $P^L$ denotes a projective resolution of the module $L$. It is an easy exercise (cf.~\cite[Chap.~2, §4.14]{Bjo}) to show that starting from $r=2$ the first and second spectral sequence of $B$ coincide with those of the {\sc Grothendieck} bicomplex associated to $M$, $F:=-\otimes_D N$ and $G:=\Hom_D(-,D)$. Both bicomplexes are constructive as only projective resolutions enter their definitions. The bicomplex $B$ has the computational advantage of avoiding the rather expensive {\sc Cartan-Eilenberg} resolution used to define the {\sc Grothendieck} bicomplex. See Example~\ref{TorExt:Bifunctor}. Compare the output of the command {\tt homalgRingStatistics} in Example~\ref{TorExt:Bifunctor} with corresponding output in Example~\ref{TorExt:Grothendieck}.
Since the first spectral sequence of the bicomplex $B:=\Hom(P^M,P^N)$ collapses, a small part of it is often used to compute $\Hom(M,N)$ over a \emph{commutative} ring $D$, as then all arrows of $B$ are again morphisms of $D$-modules. See \cite[p.~104]{GP} and \cite[Subsection~6.2.3]{BR}.
If the ring $D$ is \emph{not} commutative, then the above bicomplex and the {\sc Grothendieck} bicomplex in the previous subsection fail to be $D$-bicomplexes (unless when $M$ or $N$ is a $D$-bimodule). The bicomplexes are even in a lot of applications of interest \emph{not} of finite type over their natural domain of definition. In certain situations there nevertheless exist \emph{quasi-isomorphic} subfactor (bi)complexes which can be used to perform effective computations. In \cite{TW}, cited above, and in the pioneering work \cite{OT} {\sc Kashiwara}'s so-called $V$-filtration is used to extract such subfactors.
\section{Introduction}
\label{sec:intro}
Robust statistics methods arise in a wide range of applications; in particular, the words ``robust'' and ``robustness'' frequently appear in the context of image and signal processing. Filters based on robust statistics, such as the median filter, provide basic mechanisms in image processing (\cite{Ays}; \cite{Hua}; \cite{Tzu}). In general, image processing based on robust strategies has shown remarkable ability in image restoration and segmentation (see, for example, \cite{Ben}; \cite{Ji}; \cite{Tar}).
High-level image analysis, or vision, also benefits from the use of robust estimation techniques, as it can be seen in \cite{Com}; \cite{Got}; \cite{Kim}; \cite{Pra} and \cite{Sin}.
There is a large number of examples such as signal analysis and processing on the plane, imaging remote sensing and design of experiments in agronomy, in which data is recorded in a grid or lattice in $\mathbb{Z}^2$. A class of 2D autoregressive processes has been proposed (\cite{Whi}) as a set of plausible models for the spatial correlation in such data (\cite{Tjo}). These models are natural extensions of the autoregressive processes used in time series analysis (\cite{Bas}).
Thus, most of the robust procedures suggested for time series parametric models have been generalized for spatial parametric models when the process has been contaminated with innovation or additive outliers (\cite{Kas}). Because a single extreme value is capable of introducing significant distortions in estimators, most initiatives are focused on providing estimators that are robust to the appearance of anomalous data.
In this context, there are at least three classes of robust estimators that have been studied: the M, GM and RA estimators. The M estimators were successfully used to develop and implement a computational algorithm for image restoration, based on 2D autoregressive models (\cite{Kas}). Later, \cite{All1} analyzed the development and implementation of Generalized M (GM) estimators for the same class of models. The robust estimators of Residual Autocovariance (RA) were presented by \cite{Bus3} in the context of time series, where, in the recursive estimation procedure, the residuals are cleaned through the application of a robust function. The extension of the RA estimators to two-dimensional autoregressive models and their corresponding computational treatment were developed by \cite{Oje4}. Monte Carlo simulation studies show that the performance of the RA estimator is better than that of the M estimator and slightly better than that of the GM estimator when the model has been contaminated with additive type outliers. In addition, \cite{Bus5} studied the asymptotic behavior of the RA estimator for unilateral two-dimensional autoregressive processes, generalizing the results on the asymptotic behavior of one-dimensional series previously established by \cite{Bus4}. We do not yet know the asymptotic properties associated with the M and GM estimators when the model is contaminated.
One of the reasons why the spatial autoregressive model (AR-2D) has been extensively used in image analysis and processing is its remarkable ability to represent a variety of real scenarios without the need to use a large number of parameters. However, the robust estimators developed so far for the parameters of the AR-2D model have been constructed only under the assumption of innovative or additive type random noise; there are no proposals for parameter estimation when the model is contaminated with another, more general pattern of noise. In this work we define a new class of robust estimators for contaminated AR-2D models. This class of estimators is robust under replacement contamination, which includes additive type contamination. To avoid the propagation of the effect of an outlier when calculating the innovative residuals in the AR-2D model, we propose a new approach that consists of defining these residuals using an auxiliary model. For this reason, bounded innovation propagation AR-2D (BIP-AR-2D) models are proposed. With the help of these models we suggest an estimator for the AR-2D model that is robust when the process contains outliers.
Our proposal is the generalization to the two-dimensional case of the BMM estimators developed by \cite{Mull} for ARMA time series models. The approach of generalizing one-dimensional proposals to the case of two or more dimensions is the strategy that is naturally used in many areas of science. The work is relevant because it offers a new way of robustly estimating the parameters in the two-dimensional autoregressive model. It is a contribution that can be beneficial in the field of signal processing, spatial statistics and, in general, in areas of applied mathematics that use this model intensively, for example to represent textured images in applications such as robotic vision, pattern recognition and matching problems. Although several estimators of the model parameters have been developed so far, the central objective of this work is to provide a new estimator that improves on the estimation proposals already known.
The rest of the paper is organized as follows. In Section \ref{sec:preli}, the basic definitions are presented. We first presented some background material on bidimensional autoregressive processes (AR-2D) and parameter estimators of the model. We also define procedures for generating replacement contamination in such models. In Section \ref{sec:BMM}, the new model BIP-AR-2D for spatial processes and the new estimator of the AR-2D model parameters are presented. In Section \ref{sec:MC}, several Monte Carlo studies are carried out to evaluate the performance of the new estimator against different contamination schemes, compared to the LS, M, GM and RA estimators. Section \ref{sec:apli} presents two applications to real images that demonstrate the capabilities of the BMM-estimator to represent, segment and restore contaminated images. Conclusions and future works appear in Section \ref{sec:concl}. The results of the Monte Carlo studies (Section \ref{sec:MC}) are shown in the Appendix.
\section{Preliminaries}
\label{sec:preli}
\subsection{The spatial ARMA models}
A great variety of textures can be generated through the two-dimensional autoregressive moving average models (ARMA-2D models). For example, Figure \ref{figTexturas} shows textures generated by a particular case of the ARMA-2D processes, the AR-2D process, with two and three parameters (Fig. \ref{figTexturas} (a)-(b) and (c)-(d) respectively). Besides, in recent years, several theoretical properties have been studied, and multiple applications have been developed for these processes (\cite{Bar,Vall3,Bus6,Qui,Zie,Yao,Sad}).
\begin{figure}[h!]
\begin{center}
\begin{tabular}[c]{cccc}
\begin{minipage}{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{tex1_2p_05_04999_0-eps-converted-to.pdf}
\begin{center}
(a)
\end{center}
\end{minipage} &
\begin{minipage}{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{tex1_2p_05_04_0-eps-converted-to.pdf}
\begin{center}
(b)
\end{center}
\end{minipage} &
\begin{minipage}{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{tex1_3p_001_001_08-eps-converted-to.pdf}
\begin{center}
(c)
\end{center}
\end{minipage} &
\begin{minipage}{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{tex1_3p_015_017_02-eps-converted-to.pdf}
\begin{center}
(d)
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Autoregressive Processes. (a) $\phi_1=0.5$, $\phi_2=0.4999$, $\phi_3=0$; (b) $\phi_1=0.5$, $\phi_2=0.4$, $\phi_3=0$; (c) $\phi_1=0.01$, $\phi_2=0.01$, $\phi_3=0.8$; (d) $\phi_1=0.15$, $\phi_2=0.17$, $\phi_3=0.2$.}
\label{figTexturas}
\end{center}
\end{figure}
Spatial autoregressive moving average (ARMA) processes can be defined over random fields indexed over $\mathbb{Z}^d, d \geq 2,$ where $\mathbb{Z}^d$ is endowed with the usual partial order, i.e., for $s=(s_1,s_2,...,s_d), u=(u_1,u_2,..., u_d)$ in $\mathbb{Z}^d,$ $s\leq u$ if $s_i \leq u_i$ for $i=1,2,..., d$. We define $S[a,b]=\{x \in \mathbb{Z}^d|a \leq x \leq b\}$ and $S\langle a,b]=S[a,b]\backslash \{a\}$ for $a,b \in \mathbb{Z}^d,$ such that $a\leq b$ and $a\neq b$.
A random field $(Y_s)_{s \in \mathbb{Z}^d}$ is said to be a spatial ARMA of order $p,q\in \mathbb{Z}^d$ if it is weakly stationary and satisfies the equation
\begin{equation}\label{arma_model}
Y_s-\sum_{j\in S\langle 0,p]}\phi_jY_{s-j}=\varepsilon_s+\sum_{k\in
S\langle 0,q]}\theta_k\varepsilon_{s-k},
\end{equation}
where $(\phi_j)_{j\in S\langle 0,p]}$ and $(\theta_k)_{k\in S\langle 0,q]}$ denote the autoregressive and moving average parameters respectively with $\phi_0=\theta_0=1,$ and $(\varepsilon_s)_{s \in \mathbb{Z}^d}$ denotes a family of i.i.d. random variables with variance $\sigma^2.$ Note that if $q=0$ the process is called spatial autoregressive AR($p$) random field. An ARMA random field is called causal if it can be represented by the following equation:
\begin{equation}
Y_s=\sum_{j\in S[ 0,\infty]}\psi_j\varepsilon_{s-j},
\label{MAinf}
\end{equation}
with $\sum_j |\psi_j|<\infty.$
Similar to the time series case, there are conditions on the (AR or MA) polynomials for them to be stationary and invertible respectively (\cite{Bas}). As an example, consider a first-order autoregressive process as in (\ref{arma_model}) with $d=2$, $p=(1,1)$ and $q=(0,0)$. Then $S=\langle (0,0), (1,1)]=\{(1,0), (0,1), (1,1)\}$ and the model is of the form
\begin{equation}
\label{modeloAR}
Y_{i,j}=\phi_{1}Y_{i-1,j}+\phi_{2}Y_{i,j-1}+\phi_{3}Y_{i-1,j-1}+\varepsilon_{i,j}
\end{equation}
where to simplify the notation it took $\phi_1=\phi_{1,0}$, $\phi_2=\phi_{0,1}$ and $\phi_3=\phi_{1,1}$. In equivalent form, (\ref{modeloAR}) can be expressed as
\begin{equation}
\label{modeloARresum}
\Phi(B_1,B_2)Y_{i,j}=\varepsilon_{i,j}
\end{equation}
where $B_1$ and $B_2$ are the backward operators given by $B_1Y_{i,j}=Y_{i-1,j}$, $B_2Y_{i,j}=Y_{i,j-1}$ and in (\ref{modeloARresum}), $\Phi(B_1,B_2)=(1-\phi_{1} B_{1}-\phi_{2}B_{2}-\phi_{3} B_{1}B_{2})$. In the case that $\Phi(B_1,B_2)$ has an inverse, we can write equation (\ref{MAinf}) as
\begin{equation*}
Y_{i,j}=\Phi(B_1,B_2)^{-1}\varepsilon_{i,j}
\end{equation*}
\cite{Bas} studied the correlation structure of a process like (\ref{modeloAR}). They obtained conditions to guarantee the existence of the stationary representation of the model (\ref{modeloAR}) as in (\ref{MAinf}). In that case, the use of a multinomial expansion for $\Phi(B_1,B_2)^{-1}$ implies the convergent representation
\begin{equation}
\label{MAinf2}
Y_{i,j}=\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\sum_{r=0}^{\infty}\lambda_{klr}\varepsilon_{i-k-r,j-l-r}
\end{equation}
where $\lambda_{klr}=\frac{(k+l+r)!}{k!l!r!}\phi_{1}^k\phi_{2}^l\phi_{3}^r$, with $k,l,r\in \mathbb{N}\cup\{0\}$, are the coefficients of this multinomial expansion. Increasing the number of parameters in the model also increases the diversity of possible textures but, in turn, the calculations become more complex. In this paper we work with the three-parameter AR-2D model as in (\ref{modeloAR}).\\
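As an illustration (ours, not taken from a particular software package), the model (\ref{modeloAR}) can be simulated on a finite grid by running the recursion row by row. Since the exact simulation of the stationary field requires some care at the boundary, the sketch below simply starts from zero initial values and discards a burn-in border, which is a common approximation:
\begin{verbatim}
# Minimal simulation sketch of the three-parameter AR-2D model:
# Y[i,j] = phi1*Y[i-1,j] + phi2*Y[i,j-1] + phi3*Y[i-1,j-1] + eps[i,j].
# A border of "burn-in" rows/columns is discarded to reduce boundary effects.
import numpy as np

def simulate_ar2d(phi, shape, burn=50, sigma=1.0, rng=None):
    phi1, phi2, phi3 = phi
    rng = np.random.default_rng(rng)
    n, m = shape[0] + burn, shape[1] + burn
    eps = rng.normal(scale=sigma, size=(n, m))
    y = np.zeros((n, m))
    for i in range(1, n):
        for j in range(1, m):
            y[i, j] = (phi1 * y[i - 1, j] + phi2 * y[i, j - 1]
                       + phi3 * y[i - 1, j - 1] + eps[i, j])
    return y[burn:, burn:]

texture = simulate_ar2d((0.15, 0.17, 0.2), shape=(57, 57), rng=0)
\end{verbatim}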
\subsection{Types of contamination in AR-2D processes}
\cite{Maro} described (Chapter 8) some probability models for time series outliers, including additive outliers (AOs), replacement outliers (ROs) and innovation outliers (IOs).
In this section we generalize the notion of replacement outliers to spatial processes. Let $Y$ be a stationary process, for example an AR-2D process as in (\ref{modeloAR}), and let $Z$ be the observed process. The process $Z$ is said to follow a two-dimensional Replacement Outlier (RO) model if it is given by
\begin{equation}
\label{eqContam}
Z_{i,j}=(1-\xi_{i,j}^{\alpha})Y_{i,j}+\xi_{i,j}^{\alpha}W_{i,j}
\end{equation}
where $\xi^{\alpha}$ is a zero-one process such that $P(\xi_{i,j}^{\alpha}=1)=\alpha$ and $P(\xi_{i,j}^{\alpha}=0)=1-\alpha$ and $W$ is a replacement process that is not necessarily independent of $Y$. The fraction $\alpha$ is positive and small.\\
A particular case of the RO models is the two-dimensional Additive Outlier model (AO), in which
\begin{equation*}
W_{i,j}=Y_{i,j}+\nu_{i,j},
\end{equation*}
$\nu$ is a stationary process independent of $Y$ and $\xi^{\alpha}$ is a Bernoulli process. This type of contamination is very important for satellite image processing; for example, it is present in optical images such as those from Landsat satellites. When $W$ does not follow the pattern of AO, we say that the contamination process follows a Strictly Replacement Outlier model (SRO).
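For illustration purposes only, the following sketch (ours) generates replacement contamination according to (\ref{eqContam}), together with the AO special case with $Var(\nu_{i,j})=50$ that is used later in the simulations:
\begin{verbatim}
# Replacement contamination: each site is replaced, independently with
# probability alpha, by a value of the contaminating process W.
import numpy as np

def contaminate_ro(y, w, alpha, rng=None):
    rng = np.random.default_rng(rng)
    xi = rng.random(y.shape) < alpha        # the zero-one process xi^alpha
    return np.where(xi, w, y)

# Additive-outlier (AO) special case: W = Y + nu, nu ~ N(0, 50) indep. of Y.
def contaminate_ao(y, alpha, nu_sd=np.sqrt(50.0), rng=None):
    rng = np.random.default_rng(rng)
    nu = rng.normal(scale=nu_sd, size=y.shape)
    return contaminate_ro(y, y + nu, alpha, rng)
\end{verbatim}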
\subsection{Robust Parametric Estimation}
Because LS estimators are very sensitive to the presence of atypical values (Martin, 1980), several alternative estimators arise to mitigate the impact of contaminated observations on estimates. Most of these proposals are natural extensions of robust estimators studied in time series.
Robust estimators have been defined for models with a small number of parameters. Here, we summarize the well-known robust estimators for the model (\ref{modeloAR}); however, a more general development can be found for the AR and MA models in \cite{Kas}, \cite{All4}, \cite{Oje4}, \cite{Vall3} and \cite{Bus5}.
Note that model (\ref{modeloAR}) can be rewritten in the linear model form:
\begin{equation*}
Y_{i,j} = {\bm \phi}^T {Z}_{i, j} + \epsilon_{i,j},
\end{equation*}
where $\mbox{\boldmath{$\phi$}} ^{T}=( \phi_{1},\phi_{2},\phi_{3})$ is a parameter vector and $ {Z}_{i,j}^T = (Y_{i-1,j}; Y_{i,j-1}; Y_{i-1,j-1}).$
To obtain the LS estimator of $\bm \phi$, we minimize the following function:
\begin{equation*}
\sum_{i,j}\left[ \varepsilon_{i,j}({\bm \phi})\right]^2,
\end{equation*}
where
\begin{equation}
\label{resAR}
\varepsilon_{i,j}({\bm \phi}) = Y_{i,j}-{\bm \phi}^T{Z}_{i, j} = Y_{i,j}-\phi_{1}Y_{i-1,j}-\phi_{2}Y_{i,j-1}-\phi_{3}Y_{i-1,j-1}
\end{equation}
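As an illustration (ours, not part of the original implementations), the LS estimator of ${\bm \phi}$ can be obtained by stacking the lattice into an ordinary linear regression problem:
\begin{verbatim}
# Least-squares fit of phi = (phi1, phi2, phi3) for the AR-2D model:
# stack the regressors Z[i,j] = (Y[i-1,j], Y[i,j-1], Y[i-1,j-1]) row by row.
import numpy as np

def ls_ar2d(y):
    resp = y[1:, 1:].ravel()
    Z = np.column_stack([y[:-1, 1:].ravel(),    # Y[i-1, j]
                         y[1:, :-1].ravel(),    # Y[i, j-1]
                         y[:-1, :-1].ravel()])  # Y[i-1, j-1]
    phi, *_ = np.linalg.lstsq(Z, resp, rcond=None)
    return phi
\end{verbatim}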
Similarly, the class of M estimators for 2D autoregressive processes (\cite{Kas}) is defined by the minimization of the function
\begin{equation}
M_{nm}({\bm \phi})=\frac{1}{(n-1)(m-1)}\sum_{i=2}^{n}\sum_{j=2}^{m}\rho\left(\frac{\varepsilon_{i,j}({\bm \phi})}{\hat{\sigma}}\right)
\label{FuncObjAR}
\end{equation}
Because the M estimators are very sensitive when the process is contaminated with additive outliers, other robust estimators have been proposed to reduce the effect of such outliers. In this direction, \cite{All4} developed the generalized M (GM) estimators for spatial AR processes. A GM estimator of $\bm \phi$ is the solution to the problem of minimizing the non-quadratic function defined by:
\begin{equation*}
Q(\phi,\sigma)=\sum_{i,j}l_{ij}t_{ij}\left[\rho\left(\frac{Y_{i,j}-{\bm\phi}^T{Z}_{i, j}}{l_{ij}\sigma}\right)+\frac{1}{2}\right]\sigma,
\end{equation*}
where $\rho$ is a function like the one used in (\ref{FuncObjAR}), and $t_{ij}$ and $l_{ij}$ are weights corresponding to the respective $Z_{i,j}.$
In \cite{Oje4}, the authors presented the robust autocovariance (RA) estimator for AR-2D processes which was first defined for time series models by \cite{Bus3}. This estimator is determined by the following equations
\begin{equation*}
\label{RA}
\sum_{k,l,r=0}^\infty p_{ \bm \phi}(k,l,r) f(k,l,r) = 0
\end{equation*}
\begin{equation*}
\label{RA2}
\sum_{(m,n) \in (W_M \setminus S\langle 0,(1,1)])}\psi\left(\frac{r(m,n)}{\hat{\sigma}}\right)=0,
\end{equation*}
\noindent where $f$ is a function that depends on the residuals, $p_{\bm \phi}$ are coefficients that depend on the parameters, and $\hat{\sigma}$ is an independent estimate of $\sigma$.
\section{A new approach}
\label{sec:BMM}
\subsection{BIP-AR 2D models}
A new class of bounded nonlinear AR-2D models is presented in this work: the bounded innovation propagation AR-2D model (BIP-AR 2D). This model arises from the need to estimate, as well as possible, the parameters of a central autoregressive model when a contaminated process is observed. The BIP-AR 2D model is a two-dimensional generalization of the model presented for time series by \cite{Mull}.\\
A stationary and invertible AR-2D model as in (\ref{modeloAR}) admits a stationary representation as in (\ref{MAinf2}). We define the BIP-AR 2D auxiliary model as:
\begin{equation}
\label{modeloBIPgen}
Y_{i,j}=\sum_{(k,l,r)\in D}\lambda_{klr}\sigma\eta \left( \frac{\varepsilon_{i-k-r,j-l-r}}{\sigma}\right) +\varepsilon_{i,j}
\end{equation}
with $D=\{(k,l,r)\in \mathbb{N}^3_0\}\setminus \{(0,0,0)\}$, $\varepsilon_{i,j}$'s are i.i.d. random variables with symmetric distribution and $\eta(x)$ is an odd and bounded function. Besides, $\sigma$ is a robust M-scale of $\varepsilon_{i,j}$, which coincides with the standard deviation if $\varepsilon_{i,j}$ are normal, and is defined as the solution of the equation $E(\rho(\varepsilon_{i,j}/\sigma))=b$.\\
\noindent Note that (\ref{modeloBIPgen}) can also be written as
\begin{align*}
Y_{i,j} &= \sum_{\{k, l, r\geq 0\}}\lambda_{klr}\sigma\eta \left( \frac{\varepsilon_{i-k-r,j-l-r}}{\sigma} \right) -\sigma\eta \left( \frac{\varepsilon_{i,j}}{\sigma} \right) +\varepsilon_{i,j}\notag \\
&= \sigma\Phi(B_1,B_2)^{-1}\eta \left( \frac{\varepsilon_{i,j}}{\sigma} \right) -\sigma\eta \left( \frac{\varepsilon_{i,j}}{\sigma} \right) +\varepsilon_{i,j}
\end{align*}
and multiplying both members by $\Phi(B_1,B_2)$, we get
\begin{equation*}
\Phi(B_1,B_2)Y_{i,j} = \sigma\eta \left( \frac{\varepsilon_{i,j}}{\sigma} \right) -\sigma\Phi(B_1,B_2)\eta \left( \frac{\varepsilon_{i,j}}{\sigma} \right) +\Phi(B_1,B_2)\varepsilon_{i,j}
\end{equation*}
\noindent which is equivalent to
\begin{align*}
Y_{i,j} &=\phi_{1}Y_{i-1,j}+\phi_{2}Y_{i,j-1}+\phi_{3}Y_{i-1,j-1}+ \sigma\phi_{1}\eta \left( \frac{\varepsilon_{i-1,j}}{\sigma} \right) \nonumber\\
& \quad+\sigma\phi_{2}\eta \left( \frac{\varepsilon_{i,j-1}}{\sigma} \right) +\sigma\phi_{3}\eta \left( \frac{\varepsilon_{i-1,j-1}}{\sigma} \right)+\varepsilon_{i,j}\nonumber\\
& \quad -\phi_{1}\varepsilon_{i-1,j}-\phi_{2}\varepsilon_{i,j-1}-\phi_{3}\varepsilon_{i-1,j-1}
\end{align*}
\subsection{BMM estimator for AR-2D processes}
In time series, \cite{Mull} introduced the MM-estimators for ARMA models based on the definition of MM-estimates for regression, where the residuals are calculated as in the BIP-ARMA model instead of as in the pure ARMA model. The idea of MM-estimators in regression is to compute a highly robust estimator of the error scale in a first stage; this estimated scale is then used to calculate an M-estimator of the regression parameters in a second stage. However, in time series the situation differs somewhat, because an MM-estimate alone is not enough to guarantee robustness.\\
In the same way as the residuals of the AR-2D model are defined in (\ref{resAR}), residuals can be obtained from the BIP-AR 2D model:
\begin{align}
\label{resBIP}
\varepsilon_{i,j}^b({\bm \phi},\sigma) & = Y_{i,j}-\phi_{1}Y_{i-1,j}-\phi_{2}Y_{i,j-1}-\phi_{3}Y_{i-1,j-1}\nonumber\\
& \quad - \sigma \phi_{1}\eta \left( \frac{\varepsilon_{i-1,j}^b({\bm \phi},\sigma)}{\sigma} \right) - \sigma \phi_{2}\eta \left( \frac{\varepsilon_{i,j-1}^b({\bm \phi},\sigma)}{\sigma} \right) \nonumber\\
& \quad - \sigma \phi_{3}\eta \left( \frac{\varepsilon_{i-1,j-1}^b({\bm \phi},\sigma)}{\sigma}\right) + \phi_{1}\varepsilon_{i-1,j}^b({\bm \phi},\sigma) \nonumber\\
& \quad +\phi_{2}\varepsilon_{i,j-1}^b({\bm \phi},\sigma)+\phi_{3}\varepsilon_{i-1,j-1}^b({\bm \phi},\sigma)
\end{align}
for all $i,j\geq 2$. With these residuals we define the objective function that must be minimized to obtain the M-estimator of the parameters under a BIP-AR 2D model:
\begin{equation}
M_{nm}^b({\bm \phi})=\frac{1}{(n-1)(m-1)}\sum_{i=2}^{n}\sum_{j=2}^{m}\rho\left(\frac{\varepsilon_{i,j}^b({\bm \phi},\hat{\sigma})}{\hat{\sigma}}\right)
\label{FuncObjBIP}
\end{equation}
\noindent where $\hat{\sigma}$ is a robust estimate of $\sigma$.\\
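A direct implementation of the recursion (\ref{resBIP}) is straightforward. The following sketch (ours) computes the BIP-AR 2D residuals on a finite grid, initializing the residuals on the first row and column to zero (an assumption made only for this illustration) and taking $\eta$ as a generic bounded odd function passed as an argument:
\begin{verbatim}
# BIP-AR 2D residual recursion: residuals on the first row/column are set to
# zero; eta is a bounded odd function (e.g. a redescending psi function).
import numpy as np

def bip_residuals(y, phi, sigma, eta):
    phi1, phi2, phi3 = phi
    n, m = y.shape
    eb = np.zeros((n, m))
    for i in range(1, n):
        for j in range(1, m):
            eb[i, j] = (y[i, j]
                        - phi1 * y[i - 1, j] - phi2 * y[i, j - 1]
                        - phi3 * y[i - 1, j - 1]
                        - sigma * (phi1 * eta(eb[i - 1, j] / sigma)
                                   + phi2 * eta(eb[i, j - 1] / sigma)
                                   + phi3 * eta(eb[i - 1, j - 1] / sigma))
                        + phi1 * eb[i - 1, j] + phi2 * eb[i, j - 1]
                        + phi3 * eb[i - 1, j - 1])
    return eb
\end{verbatim}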
One way of robustly estimating the scale was introduced in 1964 by \cite{Hub} as follows: Given a sample $\bm{u}=(u_1,...,u_n)$, with $u_i\in \mathbb{R}$, an M-estimate of scale $S_n(\bm{u})$ is defined by any value $s\in(0,\infty)$ satisfying
\begin{equation}
\frac{1}{n}\sum_{i=1}^n \rho\left( \frac{u_i}{s}\right)=b
\label{eqScale}
\end{equation}
where $\rho$ is a continuous, non-constant function, non-decreasing in $|x|$ and symmetric around zero. To make the M-scale estimate consistent with the standard deviation when the data are normal, we require that $E(\rho(x))=b$ under the standard normal distribution. Taking $b=\max(\rho)/2$, we get a maximum breakdown point of 0.5. With all this, we can define the new BMM-2D estimator by following the two steps given below:\\
\vspace{0.5cm}
\textbf{\textsl{First Step:}}
At this stage, an estimate of $\sigma$ is obtained. For this purpose, two $\sigma$ estimates are considered: one using an AR-2D model, another using a BIP-AR 2D model; then we choose the smallest of them.\\
Let $\rho_1$ be a continuous, non-constant, bounded and symmetric function, non-decreasing in $|x|$, and such that $b=E(\rho_1(u))$ satisfies $b/\max(\rho_1)=0.5$. This guarantees that, for a normal random sample, the M-scale estimator $s$ based on $\rho_1$ converges to the standard deviation and that the breakdown point of $s$ is 0.5. Put
\begin{equation*}
\mathcal{B}=\{\bm{\phi}\in\mathbb{R}^3 : \ \text{if} \ (z_1,z_2)\in\mathbb{C}^2 \ \text{is such that} \ \bm{\Phi}(z_1,z_2)=0 \ \text{then} \ |z_1|\geq 1+\zeta \wedge |z_2|\geq 1+\zeta \}
\end{equation*}
\noindent for some $\zeta>0$. Then, we defined an estimate of ${\bm \phi}\in\mathcal{B}$:
\begin{equation*}
\hat{{\bm \phi}}_S=arg\min_{{\bm \phi}\in\mathcal{B}}S_{nm}({\boldsymbol\varepsilon}_{nm}({\bm \phi}))
\end{equation*}
and the corresponding estimate of $\sigma$:
\begin{equation}
s_{nm}=S_{nm}({\boldsymbol\varepsilon}_{nm}(\hat{{\bm \phi}}_S))
\label{eqDesvio}
\end{equation}
\noindent where ${\boldsymbol\varepsilon}_{nm}({\bm \phi})=(\varepsilon_{22}({\bm \phi}),...,\varepsilon_{n2}({\bm \phi}),...,\varepsilon_{2m}({\bm \phi}),...,\varepsilon_{nm}({\bm \phi}))$,\\
with $\varepsilon_{ij}({\bm \phi})$ as in (\ref{resAR}), and $S_{nm}$ is the M-estimate of scale based on $\rho_1$ and $b$ defined as in (\ref{eqScale}). Next, we describe the estimate corresponding to the BIP-AR 2D model. Define $\hat{{\bm \phi}}_S^b$ by the minimization of $S_{nm}({\boldsymbol\varepsilon}_{nm}^b({\bm \phi},\hat{\sigma}({\bm \phi})))$ over $\mathcal{B}$.
The value $\hat{\sigma}({\bm \phi})$ is an estimate of $\sigma$ computed as if ${\bm \phi}$ were the true parameters and the $\varepsilon_{i,j}$'s were normal. Then, from (\ref{modeloBIPgen}), because in this case the M-scale $\sigma$ coincides with the
standard deviation of $\varepsilon_{i,j}$, we have:
\begin{equation*}
\sigma^2=\frac{\sigma^2_Y}{1+\kappa^2\sum_{k,l,r\geq0}\lambda_{klr}^2}
\end{equation*}
where $\kappa^2=Var(\eta(\frac{\varepsilon_{i,j}}{\sigma}))$ and $\sigma^2_Y=Var(Y_{i,j})$. Let $\hat{\sigma}^2_Y$ be a robust estimate of $\sigma^2_Y$ and take $\kappa^2=Var(\eta(Z))$ with $Z\sim N(0,1)$. Then, we define
\begin{equation*}
\hat{\sigma}^2(\bm \phi)=\frac{\hat{\sigma}^2_Y}{1+\kappa^2\sum_{k,l,r\geq0}\lambda_{klr}^2({\bm \phi})}
\end{equation*}
\noindent The scale estimate $s_{nm}^b$ corresponding to the BIP-AR-2D model is defined by
\begin{equation*}
\hat{{\bm \phi}}_S^b=arg\min_{{\bm \phi}\in\mathcal{B}}S_{nm}({\boldsymbol\varepsilon}_{nm}^b({\bm \phi},\hat{\sigma}({\bm \phi})))
\end{equation*}
and
\begin{equation*}
s_{nm}^b=S_{nm}({\boldsymbol\varepsilon}_{nm}^b(\hat{{\bm \phi}}_S^b,\hat{\sigma}(\hat{{\bm \phi}}_S^b)))
\end{equation*}
\noindent where, to simplify notation, we denote $\tilde{\sigma}=\hat{\sigma} (\bm \phi)$, and
\begin{equation*}
{\boldsymbol\varepsilon}_{nm}^b({\bm \phi},\tilde{\sigma})=(\varepsilon_{22}^b({\bm \phi},\tilde{\sigma}),\ldots,\varepsilon_{n2}^b({\bm \phi},\tilde{\sigma}),\ldots,\varepsilon_{2m}^b({\bm \phi},\tilde{\sigma}),\ldots,\varepsilon_{nm}^b({\bm \phi},\tilde{\sigma}))
\end{equation*}
\noindent with $\varepsilon_{ij}^b({\bm \phi},\tilde{\sigma})$ defined as in (\ref{resBIP}). Finally, our estimate of $\sigma$ is
\begin{equation*}
s_{nm}^*=\min(s_{nm},s_{nm}^b)
\end{equation*}
\vspace{0.5cm}
\textbf{\textsl{Second Step:}}
We consider a bounded function $\rho_2$ that satisfies the same properties as $\rho_1$ and, in addition, $\rho_2\leq\rho_1$. This function is chosen such that the corresponding M-estimator is highly efficient under normal innovations. Consider the objective functions defined in (\ref{FuncObjAR}) and (\ref{FuncObjBIP}), now evaluated with the scale obtained in the first step ($s_{nm}^*$):
\begin{equation*}
\label{funcObjBMM1}
M_{nm}({\bm \phi})=\frac{1}{(n-1)(m-1)}\sum_{i=2}^{n}\sum_{j=2}^{m}\rho_2 \left( \frac{\varepsilon_{i,j}({\bm \phi})}{s_{nm}^*} \right)
\end{equation*}
and
\begin{equation*}
\label{funcObjBMM2}
M_{nm}^b({\bm \phi})=\frac{1}{(n-1)(m-1)}\sum_{i=2}^{n}\sum_{j=2}^{m}\rho_2 \left(\frac{\varepsilon_{i,j}^b({\bm \phi},s_{nm}^*) }{s_{nm}^*} \right)
\end{equation*}
\noindent The corresponding M-estimators of the parameters for each function are:
\begin{equation*}
\hat{{\bm \phi}}_M=arg\min_{{\bm \phi}\in\mathcal{B}}M_{nm}({\bm \phi}) \quad \mbox{and} \quad \hat{{\bm \phi}}_M^b=arg\min_{{\bm \phi}\in\mathcal{B}}M_{nm}^b({\bm \phi})
\end{equation*}
\noindent Then, we defined the BMM-estimator 2D as:
\[\hat{{\bm \phi}}^*_M=
\begin{cases}
\hat{{\bm \phi}}_M, & \text{if } M_{nm}(\hat{{\bm \phi}}_M)\le M^b_{nm}(\hat{{\bm \phi}}^b_M)\\
\hat{{\bm \phi}}^b_M, & \text{if } M_{nm}(\hat{{\bm \phi}}_M)> M^b_{nm}(\hat{{\bm \phi}}^b_M)\\
\end{cases}\]
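To fix ideas, the following sketch (ours; it is not the implementation used in Section \ref{sec:MC}) illustrates the second step and the final selection rule. The minimization over $\mathcal{B}$ is replaced by a crude grid search over a list of candidate parameter vectors, and it is assumed that a robust scale $s_{nm}^*$, a bounded loss $\rho$ and a bounded odd function $\eta$ are given:
\begin{verbatim}
# Second BMM step: minimize both M-objectives over a candidate set and keep
# the minimizer whose own objective value is smaller.  bip_residuals repeats
# the recursion of the previous sketch so that this block is self-contained.
import itertools
import numpy as np

def ar2d_residuals(y, phi):
    phi1, phi2, phi3 = phi
    return (y[1:, 1:] - phi1 * y[:-1, 1:]
            - phi2 * y[1:, :-1] - phi3 * y[:-1, :-1])

def bip_residuals(y, phi, sigma, eta):
    phi1, phi2, phi3 = phi
    n, m = y.shape
    eb = np.zeros((n, m))
    for i in range(1, n):
        for j in range(1, m):
            eb[i, j] = (y[i, j]
                        - phi1 * y[i - 1, j] - phi2 * y[i, j - 1]
                        - phi3 * y[i - 1, j - 1]
                        - sigma * (phi1 * eta(eb[i - 1, j] / sigma)
                                   + phi2 * eta(eb[i, j - 1] / sigma)
                                   + phi3 * eta(eb[i - 1, j - 1] / sigma))
                        + phi1 * eb[i - 1, j] + phi2 * eb[i, j - 1]
                        + phi3 * eb[i - 1, j - 1])
    return eb

def bmm_second_step(y, s_star, rho, eta, candidates):
    m_obj = lambda res: np.mean(rho(res / s_star))
    obj_ar = [m_obj(ar2d_residuals(y, p)) for p in candidates]
    obj_bip = [m_obj(bip_residuals(y, p, s_star, eta)[1:, 1:])
               for p in candidates]
    i_ar, i_bip = int(np.argmin(obj_ar)), int(np.argmin(obj_bip))
    return (candidates[i_ar] if obj_ar[i_ar] <= obj_bip[i_bip]
            else candidates[i_bip])

# hypothetical coarse candidate grid around the origin
candidates = list(itertools.product(np.linspace(-0.4, 0.4, 9), repeat=3))
\end{verbatim}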
\section{Monte Carlo Results}
\label{sec:MC}
The aim of this section is to analyze the performance of the new BMM estimator to estimate the parameters in the model (\ref{modeloAR}) compared to the LS, M, GM and RA estimators. We performed several experiments. Each experiment is based on different Monte Carlo studies. We set the parameter values of
(\ref{modeloAR}) as:
\begin{equation}
\label{modeloARE}
Y_{i,j}=0.15 Y_{i-1,j}+ 0.17Y_{i,j-1}+0.2Y_{i-1,j-1}+\varepsilon_{i,j}
\end{equation}
It is important to mention that the parameter values were chosen at random, satisfying the invertibility conditions of \cite{Bas}.
We performed our study under five different conditions of the model (Cases); in Case I, the model was non-contaminated, while in Cases II, III, IV and V, the model was contaminated according to (\ref{eqContam}):
\begin{itemize}
\item Case I) Non-contaminated model as in (\ref{modeloARE}), where $\varepsilon$ is a normally distributed process with $Var(\varepsilon_{i,j})=1$ and $E(\varepsilon_{i,j})=0$ for all $i,j$.
\item Case II) AO, where the $\nu$ process is independent of the $Y$ process and follows a normal distribution with zero mean and variance 50.
\item Case III) SRO, where the replacement process $W$ follows a Student's $t$ distribution with 2.3 degrees of freedom.
\item Case IV) SRO, where the replacement process $W$ is another autoregressive process, independent of the $Y$ process, with parameters $\tilde{\phi_1}=0.1$, $\tilde{\phi_2}=0.2$ and $\tilde{\phi_3}=0.3$.
\item Case V) SRO, where the replacement process $W$ is a white noise process with normal distribution with zero mean and variance 50.
\end{itemize}
In each of the five variants of the model (\ref{modeloARE}), the parameters $\phi_1, \phi_2$ and $\phi_3$ were estimated by the five procedures presented in the previous sections. In each experiment, 500 simulations of the model were generated, and the mean value, the mean square error (MSE) and the sample variance were computed. For the contaminated models we considered four levels of contamination: 5\%, 10\%, 15\% and 20\%. Besides, we performed our study considering different window sizes: $8\times8$, $16\times16$, $32\times32$, and $57\times57$. For the calculation of the BMM estimator, a robust estimator of the scale was obtained as in (\ref{eqDesvio}). The function $\rho_1(x)=\rho_2(\frac{x}{0.405})$ was selected according to the same criterion used in the definition of the BMM estimator for time series presented in \cite{Mull}, where the function $\rho_2$ is given by:
\vspace{0.5cm}
\begin{equation*}
\label{funcRho2}
\rho_2(x)=
\begin{cases}
0.5x^2, & \text{if } |x|\leq 2;\\
0.002x^8-0.052x^6+0.432x^4-0.972x^2+1.792, & \text{if } 2<|x|\leq 3;\\
3.25, & \text{if } 3<|x|
\end{cases}
\end{equation*}
\vspace{0.5cm}
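The piecewise definition above translates directly into code; the following Python sketch (ours, for illustration only) implements $\rho_2$ and $\rho_1(x)=\rho_2(x/0.405)$.
\begin{verbatim}
# Illustrative implementation of the bounded rho_2 function defined above.
import numpy as np

def rho2(x):
    x = np.atleast_1d(np.abs(np.asarray(x, dtype=float)))
    out = np.full_like(x, 3.25)                       # |x| > 3
    mid = (x > 2) & (x <= 3)
    out[mid] = (0.002 * x[mid]**8 - 0.052 * x[mid]**6
                + 0.432 * x[mid]**4 - 0.972 * x[mid]**2 + 1.792)
    low = x <= 2
    out[low] = 0.5 * x[low]**2
    return out

def rho1(x):
    return rho2(np.asarray(x) / 0.405)
\end{verbatim}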
The same $\rho_2$ function was used to calculate the M estimators. In addition, for the implementation of the GM estimator the weights were set:
\begin{equation*}
\vspace{0.5cm}
l_{i,j}=1 \ \ \forall \ i,j
\end{equation*}
\begin{equation*}
t_{i,j}=\frac{\psi_H\left((Y_{i-1,j}^2+Y_{i,j-1}^2+Y_{i-1,j-1}^2)/3\right)}{(Y_{i-1,j}^2+Y_{i,j-1}^2+Y_{i-1,j-1}^2)/3}
\end{equation*}
\vspace{0.5cm}
where $\psi_H$ is the following version of the Huber function:
\begin{equation*}
\label{funcPsiH}
\psi_H(x)=
\begin{cases}
x, & \text{if } |x|\leq 1.5;\\
1.5, & \text{if } 1.5<x;\\
-1.5, & \text{if } x<-1.5
\end{cases}
\end{equation*}
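For illustration, a possible Python rendering of $\psi_H$ and of the GM weights $t_{i,j}$ is sketched below; the function names and the guard against a zero denominator are our own additions.
\begin{verbatim}
# Illustrative sketch of the clipped Huber-type function psi_H and of the
# GM weights t_{i,j} built from the three neighbouring observations.
import numpy as np

def psi_h(x, k=1.5):
    return np.clip(x, -k, k)

def gm_weight(y_left, y_up, y_diag):
    u = (y_left**2 + y_up**2 + y_diag**2) / 3.0
    return 1.0 if u == 0 else float(psi_h(u)) / u
\end{verbatim}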
Finally the RA estimators were implemented according to the details formulated in \cite{Oje}.
To facilitate reading, only the boxplots of the simulations are included in the body of the paper; the numerical Monte Carlo results are reported in the Appendix.
\subsection{Experiments}
In a first experiment, we studied the performance of the BMM estimator for the non-contaminated model (Case I). All the methods estimated the parameters quite well. Table \ref{tabSinCont} shows the results obtained for the four different window sizes considered. In Figure \ref{figBoxSinCont}, the corresponding boxplots are shown. In this case, it is convenient to use the LS method due to its simplicity and calculation speed.\\
The second experiment was developed in the context of Case II. We analyzed the ability of the BMM method to estimate the parameters of the model, considering 10\% additive contamination and window sizes $8\times 8$, $16\times 16$, $32\times 32$ and $57\times 57$, in comparison with the LS, M, GM and RA methods. Table \ref{tabContAdit} shows the estimated values for $\phi_1$, $\phi_2$ and $\phi_3$, using the five procedures analyzed. Figure \ref{figBoxContAdit} exhibits the corresponding boxplots. For window sizes $32\times 32$ and $57\times 57$, the BMM estimator is the best because its values are closer to the true parameters than the estimates produced by the other methods. In addition, the BMM estimator has the lowest variance and the lowest MSE. When the window size was $8\times 8$ or $16\times 16$, the best performance corresponded to the GM and RA estimators; however, the BMM estimates were similar to the RA and GM ones. An analogous statement holds for the sample variance and MSE of the BMM estimator. We also noted that, for any window size, the M estimator had a very small sample variance, but its estimates were poor compared with those of the other methods.\\
\clearpage
\begin{multicols}{2}
{\centering
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta1_sin_cont_todas_vent-eps-converted-to.pdf}
\centerline{(a)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta2_sin_cont_todas_vent-eps-converted-to.pdf}
\centerline{(b)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta3_sin_cont_todas_vent-eps-converted-to.pdf}
\centerline{(c)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\captionof{figure}{Case (I): LS, M, GM, RA and BMM estimation boxplots for (a) $\phi_1=0.15$, (b) $\phi_2=0.17$ and (c) $\phi_3=0.2$; model (\ref{modeloARE}) without contamination, varying the window sizes.}\label{figBoxSinCont}
\end{minipage}
}
{\centering
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta1_cont_adi_var50_10porc_todas_vent-eps-converted-to.pdf}
\centerline{(a)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta2_cont_adi_var50_10porc_todas_vent-eps-converted-to.pdf}
\centerline{(b)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta3_cont_adi_var50_10porc_todas_vent-eps-converted-to.pdf}
\centerline{(c)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\captionof{figure}{Case (II): LS, M, GM, RA and BMM estimation boxplots for (a) $\phi_1=0.15$, (b) $\phi_2=0.17$ and (c) $\phi_3=0.2$, varying the window sizes. Model (\ref{modeloARE}) with additive contamination $10\%$ level, with a normal noise.}\label{figBoxContAdit}
\end{minipage}
}
\end{multicols}
\clearpage
\begin{multicols}{2}
{\centering
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta1_cont_adi_var50_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(a)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta2_cont_adi_var50_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(b)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta3_cont_adi_var50_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(c)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\captionof{figure}{Case (II): LS, M, GM, RA and BMM estimation boxplots, for (a) $\phi_1=0.15$, (b) $\phi_2=0.17$ and (c) $\phi_3=0.2$ in model \ref{modeloARE}, with additive contamination, varying the contamination level, with a $32\times32$ window size. The contamination process is a normal noise, with $\sigma^2=50$. }\label{figBoxContAditPorc}
\end{minipage}
}
{\centering
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta1_cont_tstudent_2y3gl_vent57_todos_porc-eps-converted-to.pdf}
\centerline{(a)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta2_cont_tstudent_2y3gl_vent57_todos_porc-eps-converted-to.pdf}
\centerline{(b)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta3_cont_tstudent_2y3gl_vent57_todos_porc-eps-converted-to.pdf}
\centerline{(c)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\captionof{figure}{Case (III): LS, M, GM, RA and BMM estimation boxplots, for (a) $\phi_1=0.15$, (b) $\phi_2=0.17$ and (c) $\phi_3=0.2$ in model \ref{modeloARE}, with contamination of replacement, varying the contamination level, with a $57\times 57$ window size. The process of contamination follows a t-Student distribution with 2.3 d.f.}\label{figBoxContStudPorc}
\end{minipage}
}
\end{multicols}
\clearpage
The third experiment also refers to Case II. We set the window size at $32\times32$ and varied the additive contamination level, considering four levels: $5\%$, $10\%$, $15\%$ and $20\%$.
The BMM method was the best in most of the cases studied, followed by the RA estimator. This behavior follows from comparing the values estimated by the BMM method with the respective estimates obtained by the other procedures. The dispersion measures also indicate that the BMM estimator is the most accurate methodology. These results can be seen in Table \ref{tabContAditPorc}. In addition, from Figure \ref{figBoxContAditPorc} we note that, with any of the five estimators, the parameter $\phi_3$ was estimated with less precision than $\phi_1$ and $\phi_2$ as the percentage of contamination increases. Moreover, while $\phi_3$ was underestimated by all methods for all levels of contamination, the RA estimator was the only method that overestimated $\phi_1$ and $\phi_2$, independently of the contamination level.
The fourth experiment is related to Case III. The contamination is a replacement contamination where the replacement process $W$ follows a Student's $t$ distribution with 2.3 d.f. The simulations were performed for a $57 \times 57$ window. Table \ref{tabContStudent} and Figure \ref{figBoxContStudPorc} show the results. The boxplots show that the BMM method is the best performing estimator, followed by the RA, GM, M and LS methods, in that order. We also noted that for all methods the estimates deteriorate as the level of contamination increases. Additionally, the classic LS estimator exhibited the greatest dispersion.
The fifth experiment was performed in the context of Case IV, where the replacement process $W$ was an autoregressive process. We set the window size at $32\times 32$, varying the level of contamination ($5\%,$ $10\%,$ $15\%$ and $20\%$). Table \ref{tabContReemARPorc} shows these results, and Figure \ref{figBoxContReemARPorc} displays the corresponding boxplots.
We observe a pattern similar to that of the fourth experiment, except that in this case the variances of the different estimators are very similar.
Finally, the sixth experiment was carried out according to Case V. The replacement process of the contamination was a white noise with variance 50. As in the fifth experiment, we set the window size at $32\times 32$, varying the level of contamination. Table \ref{tabContReemRBPorc} presents the estimated values obtained, and the corresponding boxplots are shown in Figure \ref{figBoxContReemRBPorc}. The parameters $\phi_1,$ $\phi_2$ and $\phi_3$ were underestimated by all methods, except for the RA estimator, which overestimated $\phi_1$ and $\phi_2$. The BMM estimator was less affected by the contamination process. The LS and M estimators are less accurate than the GM, RA and BMM estimators. Comparatively, the RA estimator presented the highest variance, whereas the GM estimator, although quite accurate, deteriorates more than the BMM estimator as the level of contamination increases.
\vspace{-0.6cm}
\subsection{Computational Time Evaluation}
\vspace{-0.3cm}
All the computational routines were developed in R and were carried out on the JupiterAce server of FaMAF-UNC, which has a 12-core 2.40GHz Intel Xeon E5-2620v3 processor and 128 GiB of 2133MHz DDR4 RAM. The running time (in logarithmic scale) of a single simulation for each estimator versus the window size in Case I is presented in Figure \ref{figTiempos}. Time is expressed in seconds. The graph shows that the computational cost of the RA estimator is the highest; for example, for a $32\times32$ window size, the RA running time was 43.812 seconds, while the BMM, GM, M and LS computational costs were 2.936, 0.552, 0.516 and 0.436 seconds, respectively. These results show that, although the RA estimator is one of the major competitors of the BMM estimator due to its accuracy and good asymptotic properties, its computational cost is a disadvantage. This makes RA an unattractive estimator for the processing of large images.
\vspace{-0.3cm}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8cm, height=4.7cm]{Tiempo_final-eps-converted-to.pdf}%
\caption{Logarithm of the estimation time (in seconds) when the process has additive contamination of $\sigma^2=50$ according to the window size.}
\label{figTiempos}
\end{center}
\end{figure}
\clearpage
\begin{multicols}{2}
{\centering
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta1_cont_reemAR_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(a)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta2_cont_reemAR_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(b)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta3_cont_reemAR_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(c)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\captionof{figure}{Case (IV): LS, M, GM, RA and BMM estimation boxplots, for (a) $\phi_1=0.15$, (b) $\phi_2=0.17$ and (c) $\phi_3=0.2$ in model \ref{modeloARE}, varying the contamination level, with a $32\times 32$ window size. The contamination process is of replacement type, by an AR process with $\tilde{\phi}_1=0.1$, $\tilde{\phi}_2=0.2$ and $\tilde{\phi}_3=0.3$ parameters. }\label{figBoxContReemARPorc}
\end{minipage}
}
{\centering
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta1_cont_reemRB_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(a)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta2_cont_reemRB_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(b)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\includegraphics[width=7.75cm, height=5cm]{boxplot_beta3_cont_reemRB_vent32_todos_porc-eps-converted-to.pdf}
\centerline{(c)}
\smallskip
\end{minipage}\\
\begin{minipage}[c]{7.75cm}
\captionof{figure}{Case (V): LS, M, GM, RA and BMM estimation boxplots, for (a) $\phi_1=0.15$, (b) $\phi_2=0.17$ and (c) $\phi_3=0.2$ in model \ref{modeloARE}, varying the contamination level, with a $32\times 32$ window size. The contamination process is of replacement type, with a white noise, of variance 50.}\label{figBoxContReemRBPorc}
\end{minipage}
}
\end{multicols}
\section{Application to real images}
\label{sec:apli}
The analysis of contaminated images is of great interest in several areas of research. For example, the reconstruction of contaminated images is relevant in image modeling (\cite{All2}; \cite{Vall2}), and, in general, the reduction of the noise produced by interference during the acquisition of the physical image and its electronic transmission plays an important role in the literature (\cite{Bus1}).\\
In \cite{Oje2}, two algorithms for image processing based on the unilateral AR-2D model with two parameters were presented. The foundations of the algorithms are random field theory and robustness for spatial autoregressive processes. The first one produces a local approximation of images, and the second one is a segmentation algorithm. In this work, we propose a variant of these algorithms that uses a unilateral AR-2D process with three parameters (model (\ref{modeloAR})), instead of the two parameters originally proposed. We refer to the modified algorithms as Algorithm 1 and Algorithm 2. We applied them to the reconstruction and segmentation of images using the LS, GM and BMM estimators of the parameters in the model (\ref{modeloAR}). We then inspected and compared the performance of these estimators in Algorithms 1 and 2 on contaminated images. To compare the images generated by the algorithms and, therefore, the performance of the different estimators, we calculated three indices used in the literature: the SSIM index (\cite{Wan}), the CQ index (\cite{Oje3}), and the CQmax index (\cite{Pis}). Next, we present two numerical experiments using the image ``Lenna'', which was taken from the USC-SIPI image database http://sipi.usc.edu/database/. In Figure \ref{figLenRecVent}-(I), the original $512\times512$ image is shown.\\
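As a rough illustration of how such a comparison can be computed, the following Python sketch evaluates the SSIM index between an original image and a reconstruction using scikit-image; this is our own example, not the implementation used in this work, and the CQ and CQ$_{\max}$ indices of the cited references are not reproduced here.
\begin{verbatim}
# Illustrative computation of the SSIM index between two grey-level images.
import numpy as np
from skimage.metrics import structural_similarity

def ssim_index(original, reconstructed):
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    data_range = original.max() - original.min()
    return structural_similarity(original, reconstructed,
                                 data_range=data_range)
\end{verbatim}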
In the following, we present Algorithms \ref{alg1} and \ref{alg2}; for more details about the notation, see \cite{Oje2}.
\begin{algorithm}[h!]
\begin{algorithmic}[1]
\REQUIRE Original image $Z$.
\ENSURE Approximated image $\hat{Z}$ of the original image $Z$\\
\STATE Define $X$ as $X=Z-\overline{Z}$
\STATE Generate block $B_X(i_b,j_b)$
\STATE Compute the estimations $\hat{\phi}_1^{(i_b,j_b)}$, $\hat{\phi}_2^{(i_b,j_b)}$, $\hat{\phi}_3^{(i_b,j_b)}$ of $\phi_1$, $\phi_2$ and $\phi_3$ corresponding to the block $B_X(i_b,j_b)$ extended to $B'_X(i_b,j_b)=[X_{r,s}]_{(k-1)(i_b-1)\leq r\leq (k-1)i_b, (k-1)(j_b-1)\leq s\leq (k-1)j_b}$
\STATE Define $\hat{X}$ on the block $B_X(i_b,j_b)$ by\\
$$\hat{X}_{r,s}=\hat{\phi}_1^{(i_b,j_b)}X_{r-1,s}+\hat{\phi}_2^{(i_b,j_b)}X_{r,s-1}+\hat{\phi}_3^{(i_b,j_b)}X_{r-1,s-1}$$
where $(k-1)(i_b-1)+1\leq r\leq (k-1)i_b$ and $(k-1)(j_b-1)+1\leq s\leq (k-1)j_b$\\
\STATE Define $\hat{Z}$ as $\hat{Z}=\hat{X}+\overline{Z}$\\
\end{algorithmic}
\caption{Local approximation of images by using AR-2D processes}\label{alg1}
\end{algorithm}
\begin{algorithm}[h]
\begin{algorithmic}[1]
\REQUIRE Original image $Z$
\ENSURE Segmentated image $W$
\STATE Generate an approximated image $\hat{Z}$ of $Z$ with the Algorithm 1.\\
\STATE Compute the residual image $W$ defined as $W=Z-\hat{Z}$.
\end{algorithmic}
\caption{Segmentation}\label{alg2}
\end{algorithm}
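For illustration, a possible Python rendering of Algorithms \ref{alg1} and \ref{alg2} is sketched below; \texttt{estimate\_phi} is a placeholder for any of the LS, GM or BMM estimators applied to one block, and the block indexing and boundary handling are simplified with respect to the formal description above.
\begin{verbatim}
# Illustrative sketch of Algorithms 1 and 2 (block-wise AR-2D approximation
# and residual-based segmentation); k is the block side length.
import numpy as np

def approximate(Z, k, estimate_phi):
    X = Z - Z.mean()
    X_hat = np.zeros_like(X)
    n, m = X.shape
    for ib in range(0, n, k):
        for jb in range(0, m, k):
            block = X[ib:ib + k, jb:jb + k]
            p1, p2, p3 = estimate_phi(block)
            for r in range(ib + 1, min(ib + k, n)):
                for s in range(jb + 1, min(jb + k, m)):
                    X_hat[r, s] = (p1 * X[r - 1, s] + p2 * X[r, s - 1]
                                   + p3 * X[r - 1, s - 1])
    return X_hat + Z.mean()      # Algorithm 1: approximated image

def segment(Z, k, estimate_phi):
    return Z - approximate(Z, k, estimate_phi)   # Algorithm 2: residual image
\end{verbatim}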
In the first experiment, Algorithm 1 was applied to image representation. We locally adjusted an AR-2D process to the original image for different window sizes, and estimated the parameters of the model with the BMM estimator. Fig. \ref{figLenRecVent} (a), (b), (c) and (d) exhibit the BMM reconstructed images obtained using the window sizes $8\times 8$, $16\times 16$, $32\times 32$ and $57\times 57$, respectively. For all window sizes, the BMM reconstructed images are visually good, although a quantitative analysis of the similarity between each BMM reconstructed image and the original image showed differences. We calculated the SSIM, CQ(1,1) and CQ$_{\max}$ indices between each reconstructed image and the original image. The three indices revealed that the similarity decreases as the size of the window increases (Table \ref{tabSimilLenavsRec}); thus, the best fits were obtained with small window sizes. This result reflects the assumption that the two-dimensional autoregressive model is a local adjustment model. Next, we applied Algorithm 2 and generated the four difference images (e), (f), (g) and (h) shown in Fig. \ref{figLenRecVent}. We observed that the difference image (h) highlights the edges more than the others do. This shows that when we performed the reconstruction with a $57\times 57$ window size (Fig. \ref{figLenRecVent} (d)), a lot of information was lost, and this is reflected by the difference image (Fig. \ref{figLenRecVent} (h)).
\begin{figure}[h!]
\begin{center}
\begin{tabular}[c]{p{3.5cm} p{3.5cm} p{3.5cm} p{3.5cm}}
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_sin_cont_vent8-eps-converted-to.pdf}
\centerline{(I)}
\end{minipage} &
\begin{minipage}[h]{10.5cm}
\caption{Image (I): original Lena image. (Right) The first row shows the reconstructions obtained with the BMM estimator using window sizes $8\times 8$, $16\times 16$, $32\times 32$ and $57\times 57$, respectively ((a) to (d)). The second row shows the respective differences ((e) to (h)) with respect to the original image (I).}\label{figLenRecVent}
\end{minipage}\\
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconBMM_sin_cont_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(a)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconBMM_sin_cont_vent16_Tpor3-eps-converted-to.pdf}
\centerline{(b)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconBMM_sin_cont_vent32_Tpor3-eps-converted-to.pdf}
\centerline{(c)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconBMM_sin_cont_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(d)}
\end{minipage}\\
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconBMM_sin_cont_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(e)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconBMM_sin_cont_vent16_Tpor3-eps-converted-to.pdf}
\centerline{(f)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconBMM_sin_cont_vent32_Tpor3-eps-converted-to.pdf}
\centerline{(g)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconBMM_sin_cont_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(h)}
\end{minipage}\\
\end{tabular}
\end{center}
\end{figure}
\begin{table}[h!]
\begin{center}
\caption{SSIM, CQ and CQ$_{max}$ index between the original image and each one of the BMM reconstructed images (a), (b), (c) and (d) in Figure \ref{figLenRecVent}.}
\begin{tabular}[c]{c c c c}
\hline
Window size & SSIM & CQ(1,1) & CQ$_{\max}$ \\
\hline
$8\times 8$ & 0.9948914 & 0.8582201 & 0.9706984 \\
$16\times 16$ & 0.9827996 & 0.8309626 & 0.9544317 \\
$32\times 32$ & 0.9779204 & 0.8151581 & 0.9462133 \\
$57\times 57$ & 0.9762065 & 0.8073910 & 0.9423786 \\ \hline
\end{tabular}
\label{tabSimilLenavsRec}
\end{center}
\end{table}
\newpage
In the second experiment, the original image was 10\% additively contaminated (Fig. \ref{figLenaRec8y57} (II)), and we used it as input to Algorithm \ref{alg1}. We obtained the reconstructed images using the LS, GM and BMM estimators. Next, Algorithm \ref{alg2} was performed. The studies were carried out considering $8\times8$ and $57\times 57$ window sizes. In the first two columns of Figure \ref{figLenaRec8y57}, we can observe the results obtained considering $8\times 8$ windows. Visually, there are no great differences between the reconstructed images. Table \ref{tabSimilLenavsRecVent8_cont} confirms this, since the measured indices are comparable to each other. On the other hand, the third and fourth columns of Figure \ref{figLenaRec8y57} show the results obtained by adjusting $57\times 57$ windows. It is observed that the image (l), corresponding to the difference between the image restored with BMM (i) and the one contaminated with additive noise, highlights the edges slightly more.\\
\begin{figure}[h!]
\begin{center}
\begin{tabular}[c]{p{3.5cm} p{3.5cm} p{3.5cm} p{3.5cm}}
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_sin_cont_vent8-eps-converted-to.pdf}
\centerline{(I)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_contAdi_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(II)}
\end{minipage} &
\begin{minipage}[c]{7cm}
\caption{Image (I) Lena original image; Image (II) image with 10\% additive contamination and $\sigma^2=50$. In the first row, adjustments made with LS; in the second row, with GM; and in the third row, with BMM. Columns 1 and 2 correspond to adjustments with $8\times 8$ windows, and columns 3 and 4 to $57\times 57$ windows. Columns 1 and 3 are the reconstruction for Algorithm 1 and columns 2 and 4 are the segmented images for Algorithm 2.}\label{figLenaRec8y57}
\
\end{minipage}\\
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconLS_contAdi_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(a)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconLS_contAdi_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(d)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconLS_contAdi_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(g)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconLS_contAdi_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(j)}
\end{minipage}\\
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconGM_contAdi_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(b)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconGM_contAdi_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(e)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconGM_contAdi_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(h)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconGM_contAdi_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(k)}
\end{minipage}\\
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconBMM_contAdi_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(c)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconBMM_contAdi_vent8_Tpor3-eps-converted-to.pdf}
\centerline{(f)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2lenna_reconBMM_contAdi_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(i)}
\end{minipage} &
\begin{minipage}[c]{3.5cm}
\includegraphics[width=3.5cm, height=3.5cm]{2Dif_lenna_reconBMM_contAdi_vent57_Tpor3-eps-converted-to.pdf}
\centerline{(l)}
\end{minipage}\\
\end{tabular}
\end{center}
\end{figure}
\begin{table}[h!]
\begin{center}
\caption{Similarity between the original image and the reconstructions of Lena with additive contamination using $8\times 8$ window size (Figure \ref{figLenaRec8y57}-(I) vs. figures \ref{figLenaRec8y57}-(a),(b),(c)).}
\begin{tabular}[c]{c c c c}
\hline
Estimate & SSIM & CQ(1,1) & CQ$_{\max}$ \\
\hline
LS & 0.9836079 & 0.8416351 & 0.9588826 \\
GM & 0.9390820 & 0.7821954 & 0.9103257 \\
BMM & 0.9846007 & 0.8328356 & 0.9577393 \\ \hline
\end{tabular}
\label{tabSimilLenavsRecVent8_cont}
\end{center}
\end{table}
\section{Conclusions and discussions}
\label{sec:concl}
A new estimator called BMM was proposed to estimate the parameters in first-order two-dimensional autoregressive models with three parameters. The new estimator is a two-dimensional extension of the BMM estimator proposed by \cite{Mull} for autoregressive time series models. We also extended the definition of replacement contamination, given for one-dimensional models (\cite{Maro}), to the case of AR-2D models; this type of contamination includes additive-type contamination. The performance of the proposed estimator for AR-2D models with replacement contamination and without contamination was analyzed. Besides, the new estimator was compared with the classical least squares estimator (LS) and the robust estimators M, GM and RA. The comparative analysis was based on six experiments, each of which involved several Monte Carlo studies, considering different replacement contamination patterns and varying the level of contamination and the window sizes of observation. The LS estimator produced estimates that are very sensitive to the presence of atypical values, while the other estimators had better results. Using Monte Carlo simulation, we observed that the GM, RA and BMM estimators are highly superior to the M and LS estimators. However, the new estimator showed the best behavior, in both accuracy and precision, followed by the RA estimator in accuracy and the GM estimator in precision. An analysis of the computational cost showed that the RA estimator is the most expensive, followed by the BMM, GM, M and LS estimators, in that order. Finally, in an application to real data, we introduced a variant of the algorithm developed by \cite{Oje2} to perform image segmentation, using an AR-2D model with three parameters and BMM estimators. In the light of the examples shown in Section \ref{sec:apli}, we conclude that the adapted algorithm is able to highlight the borders and contours in the images.\\
The following proposals outline some directions for future work. In \cite{Oje}, the author established the asymptotic normality and consistency of the robust RA estimator for the parameter ${\bm \phi}$ of a two-dimensional autoregressive process. Although the estimators M, GM and BMM are reasonable for estimating the parameter ${\bm \phi}$, their asymptotic behavior is still an open problem. In the context of image processing, in this work, the difference between a real image and a BMM approximated image was computed. The resulting image could contribute to solving problems such as border detection, classification and restoration of images. It would be interesting to explore the limitations of a segmentation method based on the difference image between a real and a fitted image. It is also important to analyze the behavior of the BMM estimator in combination with image restoration techniques. Equally relevant is the study of the properties of the BMM estimator, in particular, and of robust estimators, in general, as alternatives to least squares estimators under non-causal and semi-causal AR-2D models.
\section*{Acknowledgement}
We thank Dr. Oscar Bustos and Dr. Ronny Vallejos for helpful comments and suggestions. The authors were supported by a Secyt-UNC grant (Res. Secyt 313/2016), Argentina. The first author was partially supported by CIEM-CONICET, Argentina.
\section{Introduction}
\begin{figure}[b]
\centering
\includegraphics[width=.8\columnwidth]{Architecture.pdf}
\caption{Overall architecture of the proposed system}\label{fig:overallArchitecture}
\end{figure}
The term fourth industrial revolution, often abbreviated to Industry 4.0, refers to the current trend of automation in manufacturing technologies. One of the promises of Industry 4.0 is to facilitate the production of small batches of highly customised commodities \cite{Wang2016}. Even if not 100\% correct from the orthodox standpoint, preparing dishes in a restaurant may be viewed as consistent with the primary assumptions of Industry 4.0. The orders are released at random moments, the batches are extremely short (up to the number of people in a group) and highly customised (e.g., blue, rare, medium rare, medium or well-done steaks). Consequently, being a chef is often reported as one of the most stressful jobs \cite{Gibbons2007}.
Digital gastronomy aims to ease the chef's life by enhancing traditional cooking with new capabilities rather than replacing the chef with an autonomous machine
\cite{Mizrahi2016}. Following this trend, modern technologies are increasingly popular in restaurants. At the moment, the Internet of Things (IoT) is used to monitor the equipment that cooks, cleans or stores food \cite{Kiesel18}. However, in the near future technology will probably be applied to more sophisticated tasks. For example, the recent progress in developing so-called electronic tongues and noses can facilitate the automation of the quality estimation of food samples \cite{Podrazka18}. Such sensors can be used to fill the gap between recipes and actual cooking activities, identified in \cite{Sato2014}. Even Kinect-style cameras can be applied for fine-grained recognition of kitchen activities \cite{Lei2012}. Not only the sensors but also the actuators in smart kitchen appliances can act as things connected to the Internet, as for example a recently presented smart oven from Electrolux in which the cooking duration or temperature can be remotely altered \cite{Electrolux18}. Even if a certain task in a cooking process has to be done manually (e.g., adding ingredients to a pot), it can be guided by robot speech following a recipe \cite{Suzuki2012} or even by a cognitive conversational agent connected to a smart fridge \cite{Angara2017}. However, those systems process one recipe at a time. This is in contrast to \cite{Hamada2005}, where the cooking process has been treated as an optimisation problem. That system applied the list scheduling algorithm to minimise the food preparation time and maximise food quality, benefiting from the fact that some actions can be executed in parallel, reducing the cooking duration.
The above observations encourage us to apply a generic factory reconfiguration system, described in \cite{Dziurzanski2019EvoApp}, to food preparation in restaurants. In particular, similarly to a smart factory, a kitchen can receive a new order at any time. For such an order, process planning and scheduling need to be performed with no delay \cite{Dziurzanski2019ICIT}. Process planning and scheduling have to be re-executed in case any unexpected event occurs in the kitchen, for example, when a failure of a device (treated as a smart thing) is detected \cite{Wan19}. If the number of devices in a kitchen is considerable, process planning and scheduling will have a computation cost similar to that in a smart factory, which is rather significant \cite{Sobeyko2017}. Between subsequent executions of the process planning and scheduling, the computational power is not needed. Consequently, the workload related to process planning and scheduling in a restaurant follows an on-and-off workload pattern. When using a process planning and scheduling approach that can be executed in a distributed way, for example the island model of a Genetic Algorithm (GA) as in \cite{Zhao19}, the workload satisfies all the criteria for suitability for the public cloud provided in \cite{Geyer2012}, namely an unpredictable load, different computational power requirements at different time intervals and horizontal scalability.
The rest of this paper is organised as follows. Section \ref{sec:generic} outlines our generic service for optimisation of smart factories. Application of this service to a smart kitchen is described in Section \ref{sec:applying}. In Section \ref{sec:experimentalResults}, experimental evaluation of the proposed approaches is carried out. Section \ref{sec:ConcludingRemark} concludes the paper.
\section{Generic service for smart factories re-configuration}\label{sec:generic}
The class of optimisation problems analysed in this paper concerns integrated process planning and scheduling in smart kitchens performed on demand. The optimisation is carried out by the module named Optimisation Engine (OE), which is a part of the larger system presented in Figure \ref{fig:overallArchitecture}. The operation of this system is triggered by the data ingested from assorted devices (things) such as smart hobs. This data is collected by the Situation Determination (SD) module, which derives the current situation of resources, products and processes, using a custom use-case situation model based on a common situation model. SD monitors the raw data provided by things and the outcomes of the Predictive Analytics (PA) module and then determines the current state of the kitchen. In case any relevant change of the kitchen state is detected, the process plan and schedule are recomputed.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{FigureLayers.pdf}
\caption{Layers in the proposed approach}\label{fig:designFlow}
\end{figure}
In the proposed system, the popular streaming platform Kafka is used for the communication between modules. The messages sent via Kafka follow a well-defined textual protocol named Metrics API, which defines three types of numeric or nominal values: key objectives to be optimised, control metrics that can be mutated to obtain various candidate solutions, and observable metrics informing about situations relevant to the optimisation process, such as the unavailability of a certain resource. This system architecture is compliant with the scheme proposed in \cite{Alsafi2010}, where three layers of an intelligent factory reconfiguration system have been identified. The lowest layer, named the \emph{Specification layer}, includes the knowledge description regarding the factory based on General system ontology. The middle layer, the \emph{Analysing \& modelling layer}, includes both the SD and PA modules. OE operates in the topmost layer, which is named the \emph{Decision layer}. These layers are visualised in Figure \ref{fig:designFlow}. From this layered architecture, it follows that the appropriate operation of both the SD and PA modules is crucial for performing effective optimisation. However, the detailed description of these two modules is out of the scope of this paper.
The overall specification of reconfiguration capability is most readily explained via division into two component parts: the Metrics API and Optimisation Engine. The Metrics API provides a complete configuration description for the variables associated with the food order and kitchen temporal state. The elements of the configuration data schema are termed metrics, i.e. either measurable physical values corresponding to appliance sensors or else key objective (quality) measures derived from these. The chief functionality of OE is to generate a food cooking plan and schedule in response to reconfiguration requests issued by SD. The quality of the candidate plans and schedules is determined by the objective function. This function is generated automatically based on a kitchen configuration and applies a digital twin of the corresponding smart kitchen specified with Interval Algebra \cite{Dziurzanski2019ICIT}.
The optimisation used by OE is based on \cite{Dziurzanski2019EvoApp}. We are given a set of recipes, a set of resources and an order. OE assigns resources and priorities to a multisubset (i.e. a combination with repetitions) of recipes so that the total processing time (makespan) and the energy are minimised and the food quality is maximised. In genetic algorithms, candidate solutions are treated as individuals. During the optimisation process, these individuals are evolved using a set of bio-inspired operators such as selection, crossover and mutation. The solution to the problem considered in this paper can then be described with a chromosome whose odd genes identify the recipe to be applied and whose even genes denote the priority of that recipe instance. The aim of introducing priorities is to determine the processing order of recipes allocated to the same resource and thus to determine the temporal scheduling. This ordering does not change the amount of cooked food but can influence the makespan.
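For illustration only, the following Python sketch shows one possible way of decoding such a chromosome into per-resource task queues; the representation of allocations as resource names, the ``No allocation'' marker and the function name are our own assumptions rather than the exact encoding used by OE.
\begin{verbatim}
# Illustrative decoding of a chromosome [alloc_0, prio_0, alloc_1, prio_1, ...]
# into per-resource queues ordered by priority.
from collections import defaultdict

def decode(chromosome):
    queues = defaultdict(list)
    for idx in range(0, len(chromosome), 2):
        allocation, priority = chromosome[idx], chromosome[idx + 1]
        if allocation == "No allocation":
            continue                  # this recipe instance is not executed
        queues[allocation].append((priority, idx // 2))
    for resource in queues:
        queues[resource].sort()       # lower priority value is scheduled first
    return queues
\end{verbatim}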
The problem analysed in this paper is characterised by multiple conflicting objectives. End-users should therefore be informed about a wide set of Pareto-optimal solutions so that they can select the final solution based on their knowledge of the problem. The set of alternative solutions presented to the end-users should thus be diverse and, favourably, distributed over the entire Pareto front. This expectation is in line with the properties of the MOEA/D algorithm proposed by Zhang and Li in \cite{Zhang2007}, which is therefore used in OE.
\section{Applying generic re-configuration service for smart kitchen}\label{sec:applying}
The optimisation process is executed after sending a serialised configuration to a predefined Kafka topic. The configuration includes the list of dishes to be cooked together with the list of the recipes and the parameters of the available hobs. An example list of recipes used in this paper is provided in Table \ref{tab:Appendix}. For example, the list of compatible cooking zones (e.g. of an area sufficient for a certain pot or food amount) is provided in the domain of a controlled metric type, which is shown in Figure \ref{fig:22} for two examples related to boiled water. The resource name is composed of two parts: the actual cooking zone name and the pot type. Each recipe can be executed a number of times, thus the recipe name is followed by an instance index (e.g. suffix 0 in Boiled water A 0). As shown in this figure, the first recipe, Boiled water A 0, can be executed using Pot(1) and cooking zones 1-4 on the hob named Hob, or not selected for execution (i.e., No allocation). In the assumed hob, several cooking zones can be used simultaneously to cook a dish in a larger pot, as shown in Figure \ref{fig:23}. In this figure, four cooking zones can be used with Pot(1) (i.e., the circles on the hob) independently (left column), or the two upper or the two lower cooking zones can be combined and used with Pot(2) (middle column). Notice that combining the two middle cooking zones in this way is impossible. Finally, the three upper zones can be used simultaneously with Pot(3) (right column). In the configuration, all cooking zones Hob(1)-Hob(7) are provided as separate resources, but the fact that certain resources cannot be used simultaneously (e.g. Hob(1) and Hob(5)) is specified using the mutual exclusiveness of resources, as explained in \cite{Dziurzanski2019ICIT}.
\begin{figure}
\footnotesize
\begin{verbatim}
ControlledMetricType[name=Boiled water A 0,
allocation,valueType=ValueType.Nominal[name=
Boiled water A 0 allocation type ,values={Hob(1)
Pot(1), Hob(2) Pot(1), Hob(3) Pot(1), Hob(4) Pot(1),
No allocation},typ=NOMINAL]
ControlledMetricType[name= Boiled water B 0
allocation,valueType=ValueType.Nominal[name=
Boiled water B 0 allocation type ,values={Hob(5)
Pot(2), Hob(6) Pot(2), No Allocation},typ=NOMINAL],
units=n/a]
\end{verbatim}
\caption{Two example controlled metrics}\label{fig:22}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig23.pdf}
\caption{Example of possible configuration of four cooking zones to heat pots of different sizes}\label{fig:23}
\end{figure}
The observable metrics are used to denote the availability of certain resources. Due to the compound naming structure of these resources, either a certain cooking zone or a certain pot type can be signalled as unavailable. For example, a temporary lack of Pot(1) should result in the unavailability of all resources that are used in combination with that pot. The first example in Figure \ref{fig:24} signals the unavailability of a certain cooking zone. The second and third observables inform OE about the cooking time of a certain recipe (here: Boiled water A) with a certain cooking zone (here: Hob(1)) and pot (here: Pot(1)), as determined by the predictive analytics or situation determination modules. This cooking time is taken into consideration when a schedule is determined. Finally, the recipe quality can be updated based on user feedback.
\begin{figure}
\footnotesize
\begin{verbatim}
ObservableMetricType[name= Hob(6) availability,
valueType=ValueType.Integer[min=0,max=0,typ=INT],
units=n/a,sampleRate=SampleRate.EventDriven[]],
ObservableMetricType[name=Boiled water A Hob(1)
Pot(1) start, valueType=ValueType.Integer[min=0,
max=0,typ=INT]
ObservableMetricType[name=Boiled water A Hob(1)
Pot(1) end, valueType=ValueType.Integer[min=40,
max=40,typ=INT]
\end{verbatim}
\caption{Example observable metrics in the Electrolux use case}\label{fig:24}
\end{figure}
\begin{figure}
\footnotesize
\begin{verbatim}
Optimisation took: 19.942 seconds
Schedule: Status: Succeeded.
Hob(2) Pot(1) -> [
Rice A 1_1 [35,37),
Rice A 1 [103,128),
Beef A 2_1 [188,200),
Beef A 2 [692,812),
DependentSetUp from Beef A
to Boiled Water A [812,822),
Boiled Water A 0 [1094,1109),
DependentSetUp from Boiled Water A
to Pasta A [1109, 1119),
Pasta A 0_1 [1297,1299),
Pasta A 0 [1299,1319)
]
…
makespan: 28:57.00
\end{verbatim}
\caption{Extract from an example OE report for the Electrolux use case}\label{fig:25}
\end{figure}
\section{Experimental results}\label{sec:experimentalResults}
In the considered scenario, the following amounts of food are required to be cooked: Boiled water - 5000g, Pasta - 1000g, Rice - 1500g, Meat (beef) - 1000g, Vegetable (potatoes) - 1000g, Mushrooms (with oil) - 500g.
These values are provided to OE via Kafka and then the optimisation process is executed. An example report of the optimisation process is presented in Figure \ref{fig:25}. As shown in the figure, the tasks are executed concurrently on all available resources, and the finish time of the last task indicates the makespan of the resulting schedule. Note that the start and end times of each task are relative to the starting time point 0. For production that requires certain pre-cooking, subtasks are introduced and executed before cooking the required production (e.g., task Rice A 1\_1 precedes cooking rice with task Rice A 1). In addition, a dependent setup is required when different products are processed on the same resource (see the DependentSetUp tasks).
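As a small illustration of how such a report can be post-processed, the sketch below computes the makespan from a schedule represented as a mapping from resources to (task, start, end) entries; the data structure and the values are our own, shaped after the report above.
\begin{verbatim}
# Illustrative makespan computation over a per-resource schedule.
def makespan(schedule):
    return max(end for tasks in schedule.values()
                   for (_task, _start, end) in tasks)

example = {"Hob(2) Pot(1)": [("Rice A 1_1", 35, 37),
                             ("Pasta A 0", 1299, 1319)]}
assert makespan(example) == 1319
\end{verbatim}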
\begin{figure}
\includegraphics[width=\columnwidth]{oneHub.pdf}
\caption{Pareto front approximation (Makespan, Quality and Energy Cost) while cooking recipes from Table \ref{tab:Appendix}}\label{fig:oneHub}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fourHub.pdf}
\caption{Pareto front approximation (Makespan, Quality and Energy Cost) while cooking scaled recipes from Table \ref{tab:Appendix} in a commercial scale scenario}\label{fig:fourHub}
\end{figure}
The trade-off between the three conflicting objectives, makespan, energy cost and deficiency (the reverse of quality), has been investigated for the example scenario and visualised in Figure \ref{fig:oneHub} in the form of a Pareto front approximation. The chart demonstrates that decreasing the makespan is obtained by sacrificing quality (i.e., increasing deficiency). Note that, during the experiment, we observed that a higher makespan does not necessarily lead to a lower deficiency value (which indicates a higher food quality). This is because the recipes can be executed in parallel (based on the allocation decision made during optimisation). Given the same recipes, executing them in parallel can decrease the makespan while the deficiency objective remains unchanged. Therefore, during this evaluation, we often observed new optimisation results whose makespan and deficiency values are both lower than those of certain previous optimisation results. Such a phenomenon also keeps the final number of optimisation results low, as optimisation results that are strictly dominated are removed from the Pareto front approximation. However, in general, we observe the trend that the deficiency value decreases (i.e., better quality is obtained) as the makespan increases.
In addition, the trade-off between makespan and energy cost has been investigated for the example scenario and is also shown in Figure \ref{fig:oneHub} as part of the Pareto front approximation. The chart indicates that a decrease in makespan can be achieved at the cost of consuming more energy.
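The dominance filtering mentioned above can be made concrete in a few lines; the following Python sketch (our own illustration) keeps only the non-dominated (makespan, energy cost, deficiency) triples, all three objectives being minimised.
\begin{verbatim}
# Illustrative Pareto filtering of objective triples (all minimised).
def pareto_filter(points):
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
\end{verbatim}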
The proposed method is applicable to large restaurants or company cafeterias, where trade-offs between conflicting objectives such as energy, makespan and deficiency need to be investigated, rather than to single hobs treated as home appliances. Hence, a larger scenario, both in terms of the number of available resources and the amount of ordered food, needs to be considered to demonstrate the scalability of the proposed approach. In the second scenario, the presence of 4 hobs from Figure \ref{fig:23} is therefore assumed. In addition, the amount of the ordered food is 4 times larger than in the previous case.
\begin{sidewaystable*}
\footnotesize
\centering
\caption{Parameters of food cooking using various cooking zones, pots and recipes}\label{tab:Appendix}
\begin{tabular}{l|lllllllll}
Food type & Recipe name & Predecessor & Amount (g) & Cooking zone & Pot type & Energy (kJ) & Cooking time (min) & Monetary cost (€) & Deficiency \\ \hline
Boiled water & A & - & 1000 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 350 & 15 & 0.03 & 5 \\
& B & - & 2000 & Hob(5), Hob(6) & Pot 2 & 1400 & 10 & 0.12 & 8 \\
& C & - & 3000 & Hob(7) & Pot 3 & 3150 & 5 & 0.27 & 11 \\ \hline
Pasta & A & Boiled water A & 100 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 840 & 30 & 0.021 & 2 \\
& B & Boiled water A & 100 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 770 & 25 & 0.018 & 9 \\
& C & Boiled water B & 200 & Hob(5), Hob(6) & Pot 2 & 1120 & 20 & 0.021 & 14 \\
& D & Boiled water B & 200 & Hob(5), Hob(6) & Pot 2 & 1190 & 15 & 0.018 & 19 \\
& E & Boiled water C & 300 & Hob(7) & Pot 3 & 1520 & 10 & 0.021 & 22 \\
& F & Boiled water C & 300 & Hob(7) & Pot 3 & 1590 & 5 & 0.018 & 25 \\ \hline
Rice & A & Boiled water A & 200 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 1260 & 50 & 0.045 & 7 \\
& B & Boiled water A & 200 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 1400 & 45 & 0.039 & 15 \\
& C & Boiled water B & 400 & Hob(5), Hob(6) & Pot 2 & 1610 & 40 & 0.045 & 19 \\
& D & Boiled water B & 400 & Hob(5), Hob(6) & Pot 2 & 1750 & 35 & 0.039 & 22 \\
& E & Boiled water C & 600 & Hob(7) & Pot 3 & 1960 & 15 & 0.045 & 28 \\
& F & Boiled water C & 600 & Hob(7) & Pot 3 & 2100 & 13 & 0.039 & 33 \\ \hline
Meat (beef) & A & Boiled water A & 250 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 4550 & 120 & 0.27 & 5 \\
& B & Boiled water A & 250 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 6650 & 110 & 0.18 & 9 \\
& C & Boiled water B & 500 & Hob(5), Hob(6) & Pot 2 & 6900 & 90 & 0.27 & 12 \\
& D & Boiled water B & 500 & Hob(5), Hob(6) & Pot 2 & 7000 & 85 & 0.18 & 16 \\
& E & Boiled water C & 750 & Hob(7) & Pot 3 & 7350 & 60 & 0.27 & 21 \\
& F & Boiled water C & 750 & Hob(7) & Pot 3 & 7550 & 55 & 0.18 & 27 \\ \hline
Vegetable & A & Boiled water A & 200 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 1750 & 42 & 0.066 & 3 \\
(potatoes) & B & Boiled water A & 200 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 1890 & 40 & 0.06 & 11 \\
& C & Boiled water B & 400 & Hob(5), Hob(6) & Pot 2 & 2100 & 32 & 0.066 & 19 \\
& D & Boiled water B & 400 & Hob(5), Hob(6) & Pot 2 & 2240 & 30 & 0.06 & 23 \\
& E & Boiled water C & 600 & Hob(7) & Pot 3 & 2450 & 22 & 0.066 & 26 \\
& F & Boiled water C & 600 & Hob(7) & Pot 3 & 2590 & 20 & 0.06 & 31 \\ \hline
Cooking & A & - & 200 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 700 & 38 & 0.072 & 11 \\
with oil & B & - & 200 & Hob(1), Hob(2), Hob(3), Hob(4) & Pot 1 & 840 & 36 & 0.06 & 16 \\
(mushroom) & C & - & 300 & Hob(5), Hob(6) & Pot 2 & 910 & 25 & 0.09 & 19 \\
& D & - & 300 & Hob(5), Hob(6) & Pot 2 & 1050 & 23 & 0.078 & 20 \\
& E & - & 400 & Hob(7) & Pot 3 & 1120 & 12 & 0.108 & 26 \\
& F & - & 400 & Hob(7) & Pot 3 & 1260 & 10 & 0.096 & 29
\end{tabular}
\end{sidewaystable*}
\section{Concluding remark}\label{sec:ConcludingRemark}
In this paper, a real-world food cooking planning and scheduling problem in a commercial smart kitchen has been described. Its goal is not only to minimise the cooking time but also to minimise the energy dissipation and to maximise the food quality by selecting the multisubset of recipes to be executed. A typical multi-objective genetic algorithm, MOEA/D, has been used. The experiments have demonstrated the applicability of the proposed approach, which has been able to determine the trade-offs between the conflicting objectives. The proposed algorithm is scalable enough to be applied to a relatively large kitchen and a high production volume.
\section*{Acknowledgement}
The authors acknowledge the support of the EU H2020 SAFIRE project
(Ref. 723634).
The authors would like to thank Andrea De Angelis and Claudio Cenedese from ELECTROLUX AS for their support in defining the scenario described in this paper.
The icons used in this paper have been created by Icons8, https://icons8.com.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Discrete time Markov chains ({\textnormal{MC}}s for short) are a standard
probabilistic modeling formalism that has been extensively used
in the literature
to reason about software~\cite{whittaker1994markov} and real-life
systems~\cite{Husmeier2010}. However, when modeling real-life systems, the exact
value of transition probabilities may not be known precisely. Several
formalisms abstracting {\textnormal{MC}}s have therefore been
developed. Parametric Markov chains~\cite{Alur93} ({\textnormal{pMC}}s for short)
extend {\textnormal{MC}}s by allowing parameters to appear in transition
probabilities. In this formalism, parameters are variables and
transition probabilities may be expressed as polynomials over these
variables. A given {\textnormal{pMC}} therefore represents a potentially
infinite set of {\textnormal{MC}}s, obtained by replacing each parameter by a
given value. {\textnormal{pMC}}s are particularly useful to represent systems
where dependencies between transition probabilities are
required. Indeed, a given parameter may appear in several distinct
transition probabilities, therefore requiring that the same value is
given to all its occurrences. Interval Markov chains~\cite{JonssonL91} ({\textnormal{IMC}}
for short)
extend {\textnormal{MC}}s by allowing precise
transition probabilities to be replaced by intervals, but cannot represent dependencies between distinct transitions. {\textnormal{IMC}}s have mainly
been studied under two distinct semantic interpretations. Under the
{\em once-and-for-all} semantics, a given {\textnormal{IMC}} represents a
potentially infinite number of {\textnormal{MC}}s where transition probabilities
are chosen inside the specified intervals while keeping the same
underlying graph structure. The {\em at-every-step} semantics, which
was the original semantics given to {\textnormal{IMC}}s in~\cite{JonssonL91}, does not
require {\textnormal{MC}}s to preserve the underlying graph structure of the
original {\textnormal{IMC}} but instead allows an ``unfolding'' of the original
graph structure where different probability values may be chosen
(inside the specified interval) at each occurrence of the given
transition.
Model-checking algorithms and tools have been developed
in the context of
{\textnormal{pMC}}s~\cite{Prophesy,DBLP:conf/cav/HahnHWZ10,DBLP:conf/cav/KwiatkowskaNP11}
and {\textnormal{IMC}}s with the once-and-for-all
semantics~\cite{Chakraborty2015,benedikt2013ltl}. State of the art tools~\cite{Prophesy} for {\textnormal{pMC}}
verification compute a rational function on the parameters that
characterizes the probability of satisfying a given property, and then
use external tools such as SMT solving~\cite{Prophesy} for computing the
satisfying parameter values. For these methods to be viable in
practice, the number of parameters used is quite limited. On the other
hand, the model-checking procedure for {\textnormal{IMC}}s presented
in~\cite{benedikt2013ltl} is adapted from machine learning and builds
successive refinements of the original {\textnormal{IMC}}s that optimize the
probability of satisfying the given property. This algorithm
converges, but not necessarily to a global optimum. It is worth noticing that
existing model checking procedures for {\textnormal{pMC}}s and {\textnormal{IMC}}s
strongly rely on their underlying graph structure. As a consequence,
to the best of our knowledge, no solutions for model-checking {\textnormal{IMC}}s
with the at-every-step semantics have been proposed yet.
In this paper, we focus on Parametric interval Markov
chains~\cite{DelahayeLP16} ({\textnormal{pIMC}}s for short), which generalize both
{\textnormal{IMC}}s and {\textnormal{pMC}}s by allowing parameters to appear in the endpoints
of the intervals specifying transition probabilities, and we provide four main contributions.
First, we formally compare abstraction formalisms for
{\textnormal{MC}}s in terms of succinctness: we show in particular that {\textnormal{pIMC}}s
are {\em strictly more succinct} than both {\textnormal{pMC}}s and {\textnormal{IMC}}s when
equipped with the right semantics. In other words, everything that can
be expressed using {\textnormal{pMC}}s or {\textnormal{IMC}}s can also be expressed using
{\textnormal{pIMC}}s while the reverse does not hold. Second, we prove that the
once-and-for-all and the at-every-step semantics are equivalent w.r.t.
reachability properties, both in the {\textnormal{IMC}} and in the {\textnormal{pIMC}}
settings. Notably, this result gives theoretical backing to the
generalization of existing works on the verification of {\textnormal{IMC}}s to the
at-every-step semantics. Third, we study the parametric verification of
fundamental properties at the {\textnormal{pIMC}} level: consistency, qualitative
reachability, and quantitative reachability. Given the expressivity of
the {\textnormal{pIMC}} formalism, the risk of producing a {\textnormal{pIMC}} specification
that is incoherent and therefore does not model any concrete {\textnormal{MC}}
is high. We therefore propose constraint encodings for deciding
whether a given {\textnormal{pIMC}} is consistent and, if so, synthesizing
parameter values ensuring consistency. We then extend these encodings
to qualitative reachability, \ie ensuring that given state
labels are reachable in {\em all} (resp. {\em none}) of the {\textnormal{MC}}s
modeled by a given {\textnormal{pIMC}}. Finally, we focus on the quantitative
reachability problem, \ie synthesizing parameter values such
that the probability of reaching given state labels satisfies fixed
bounds in {\em at least one} (resp. {\em all}) {\textnormal{MC}}s modeled by a
given {\textnormal{pIMC}}. While consistency and qualitative reachability for
{\textnormal{pIMC}}s have already been studied in~\cite{DelahayeLP16}, the
constraint encodings we propose in this paper are significantly
smaller (linear instead of exponential). To the best of our knowledge,
our results provide the first solution to the quantitative reachability problem for {\textnormal{pIMC}}s.
Our last contribution is the implementation of all our verification algorithms in a prototype tool
that generates the required constraint encodings and can be plugged into
any SMT solver for their resolution.
Due to space limitation, the proofs of our results are given in Appendix.
\section{Background}\label{sec:background}
In this section we introduce notions and notations that will be used
throughout the paper. Given a finite set of variables $X = \{x_1, \ldots, x_k\}$, we write
$D_x$ for the domain of the variable $x \in X$ and $D_X$ for the set
of domains associated to the variables in $X$. A valuation
$v$ over $X$ is a set $v = \{(x,d) | x \in X, d \in D_x\}$ of
elementary valuations $(x,d)$ where for each $x \in X$ there exists a
unique pair of the form $(x, d)$ in $v$. When clear from the context,
we write $v(x) = d$ for the value given to variable $x$ according to
valuation $v$. A rational function $f$ over $X$ is a quotient of two
(multivariate) polynomials $g_1$ and $g_2$ over $X$ with rational
coefficients, \ie $f = g_1 / g_2$. We write ${\Qset}$ for the set of
rational numbers and ${\Qset}_X$ for the set of
rational functions over $X$. The evaluation $v(g)$ of a polynomial
$g$ under the valuation $v$ replaces each variable $x \in X$ by its value $v(x)$.
An {\em atomic constraint} over $X$ is a Boolean expression of the
form $f(X) \bowtie g(X)$, with ${\bowtie} \in \{\le, \ge, <, >, =\}$ and
$f$ and $g$ two functions over variables in $X$ and constants. A
constraint is {\em linear} if the functions $f$ and $g$ are
linear.
A {\em constraint} over $X$ is a
Boolean combination of atomic constraints over $X$.
Given a finite set of states $S$, we write $\ensuremath{\mathsf{Dist}}(S)$ for the set of
probability distributions over $S$, \ie the set of functions $\mu : S
\to [0,1]$ such that $\sum_{s\in S}\mu(s) = 1$. We write $\Inter$
for the set containing all the interval subsets of $[0,1]$. In the
following, we consider a universal set of symbols $A$ that we use for
labelling the states of our structures. We call these symbols {\em
atomic propositions}. We will use the Latin alphabet for naming states
and the Greek alphabet for naming atomic propositions.
\custompar{Constraints.}
Constraints are first order logic predicates used to model and solve combinatorial problems \cite{Rossi2006HCP}.
A problem is described with a list of variables, each in a given domain of possible values, together
with a list of constraints over these variables. Such problems are then sent to solvers, which decide whether the problem is
satisfiable, \ie whether there exists a valuation of the variables
satisfying all the constraints, and, if so, compute such a valuation as a solution.
Checking satisfiability of constraint problems is difficult in general, as the space of all possible valuations has a size exponential in the number of variables.
Formally, a Constraint Satisfaction Problem
(\textnormal{CSP}) is a tuple $\Omega = (X, D, C)$ where $X$ is a finite set of
variables, $D = D_X$ is the set of all the domains associated to the
variables from $X$, and $C$ is a set of constraints over $X$. We say
that a valuation over $X$ satisfies $\Omega$ if and only if it
satisfies all the constraints in $C$. We write $v(C)$ for the
satisfaction result of the valuation of the constraints $C$ according
to $v$ (\ie true or false).
In the following we call {\em {\textnormal{CSP}} encoding} a scheme for formulating a given problem into a {\textnormal{CSP}}.
The size of a {\textnormal{CSP}} corresponds to the number of variables and atomic constraints appearing in the problem.
Note that, in constraint programming, having fewer variables or fewer constraints in the encoding does not necessarily imply
faster solving times.
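For illustration purposes, the following Python sketch shows how such a problem can be stated and solved with an off-the-shelf SMT solver (here via the \texttt{z3-solver} package); the two variables and the constraints form an arbitrary toy example chosen by us, not one of the encodings presented later in this paper.
\begin{verbatim}
# A minimal CSP solved with an SMT solver (sketch, assuming the z3-solver package).
from z3 import Reals, Solver, sat

x, y = Reals("x y")
solver = Solver()
# Domains and constraints: x, y in [0, 1], x + y = 1, and x <= 2 * y.
solver.add(0 <= x, x <= 1, 0 <= y, y <= 1)
solver.add(x + y == 1, x <= 2 * y)

if solver.check() == sat:
    print(solver.model())  # one valuation satisfying all the constraints
else:
    print("unsatisfiable")
\end{verbatim}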
\custompar{Discrete Time Markov Chains.}
A Discrete Time Markov Chain ({DTMC} or {\textnormal{MC}} for short) is a tuple
$\mathcal{M}$ $=$ $(S,$ $s_0,$ $p,$ $V)$, where $S$ is a finite
set of states containing the initial state $s_0$,
$V : S \to 2^A$ is a labelling function, and
$p : S \rightarrow \ensuremath{\mathsf{Dist}}(S)$ is a probabilistic transition
function. We write
{\mcSet} for the set containing all the discrete time Markov chains.
A Markov Chain can be seen as a directed graph
where the nodes correspond to the states of the {\textnormal{MC}} and
the edges are labelled with the probabilities given by the transition function of the {\textnormal{MC}}.
In this representation, a missing transition between two states
represents a transition probability of zero.
As usual, given a {\textnormal{MC}}~$\mathcal{M}$, we call a {\em path} of
$\mathcal{M}$ a sequence
of states obtained from executing $\mathcal{M}$, \ie a sequence
$\omega = s_1, s_2,\ldots $ s.t. the probability of taking the transition from $s_i$ to $s_{i+1}$ is strictly positive, \ie $p(s_i)(s_{i+1}) > 0$, for all $i$.
A path $\omega$ is finite iff it belongs to $S^*$, \ie it
represents a finite sequence of transitions from $\mathcal{M}$.
\begin{example}\label{ex:mc}
Figure \ref{fig:example_mc} illustrates the Markov chain
$\mathcal{M}_1 = (S, s_0, p, V) \in \mcSet$ where
the set of states $S$ is given by $\{s_0,s_1,s_2,s_3,s_4\}$,
the atomic propositions are restricted to $\{\alpha, \beta\}$,
the initial state is $s_0$, and the labelling
function $V$ corresponds to $\{(s_0,\emptyset), (s_1,\alpha), (s_2,\beta), (s_3,\{\alpha, \beta\}), (s_4,\alpha)\}$.
The sequences of states $(s_0,s_2)$, $(s_0,s_2,s_2)$, and $(s_0,s_2,s_2,s_2)$
are three (finite) paths from the initial state $s_0$ to the state $s_2$.
\end{example}
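The definition of a path is easy to make concrete. The following Python sketch stores $\mathcal{M}_1$ as a transition dictionary and checks whether a sequence of states is a path, \ie whether every consecutive transition has positive probability; the state names and the dictionary layout are illustrative choices of ours, not part of the formalism.
\begin{verbatim}
# The MC M1 of Figure 1 as a transition dictionary.
P = {
    "s0": {"s1": 0.7, "s2": 0.3},
    "s1": {"s1": 0.5, "s3": 0.5},
    "s2": {"s1": 0.5, "s2": 0.5},
    "s3": {"s3": 1.0},
    "s4": {"s4": 1.0},
}

def is_path(sequence):
    """A sequence of states is a path iff every consecutive transition has positive probability."""
    return all(P[s].get(t, 0.0) > 0 for s, t in zip(sequence, sequence[1:]))

print(is_path(["s0", "s2", "s2", "s2"]))  # True: a finite path of M1
print(is_path(["s0", "s1", "s4"]))        # False: there is no transition from s1 to s4
\end{verbatim}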
\custompar{Reachability.}
A Markov chain $\mathcal{M}$ defines a unique probability measure
$\mathbb{P}^{\mathcal{M}}$ over the paths from
$\mathcal{M}$. According to this measure, the
probability of a finite path $\omega = s_0, s_1, \ldots, s_n$ in
$\mathcal{M}$ is the product of the probabilities of the transitions
executed along this path, \ie $\mathbb{P}^{\mathcal{M}}(\omega) =
p(s_0)(s_1) \cdot p(s_1)(s_2)\cdot \ldots \cdot p(s_{n-1})(s_n)$. This
distribution naturally extends to infinite paths (see
\cite{Baier2008PMC}) and to sequences of states over $S$
that are not paths of $\mathcal{M}$ by giving them a zero probability.
Given a \textnormal{MC}\ $\mathcal{M}$, the overall probability of reaching a
given state $s$ from the initial state $s_0$ is called the {\em
reachability probability} and written
$\mathbb{P}^{\mathcal{M}}_{s_0}(\ensuremath{\Diamond} s)$ or
$\mathbb{P}^{\mathcal{M}}(\ensuremath{\Diamond} s)$ when clear from the
context. This probability is computed as the sum of the probabilities
of all finite paths starting in the initial state and reaching this
state for the first time. Formally, let $\mathsf{reach}_{s_0}(s) =
\{\omega \in S^{*} \ensuremath{\ | \ } \omega = s_0, \ldots s_n \mbox{ with } s_n = s
\mbox{ and } s_i \ne s \ \forall 0 \le i < n\}$ be the set of such
paths. We then define $\mathbb{P}^{\mathcal{M}}(\ensuremath{\Diamond} s) =
\sum_{\omega \in \mathsf{reach}_{s_0}(s)} \mathbb{P}^{\mathcal{M}}(\omega)$
if $s \ne s_0$ and $1$ otherwise. This notation naturally extends to
the reachability probability of a state $s$ from a state $t$ that is
not $s_0$, written $\mathbb{P}^{\mathcal{M}}_{t}(\ensuremath{\Diamond} s)$
and to the
probability of reaching a label $\alpha \subseteq A$ written
$\mathbb{P}^{\mathcal{M}}_{s_0}(\ensuremath{\Diamond} \alpha)$.
In the following, we say that a state $s$ (resp. a label $\alpha \subseteq A$) is reachable in $\mathcal{M}$
iff the reachability probability of this state (resp. label)
from the initial state is strictly positive.
\begin{example}[Example \ref{ex:mc} continued]
In Figure \ref{fig:example_mc} the probability of the path $(s_0,$
$s_2,$ $s_1,$ $s_1,$ $s_3)$ is $0.3 \cdot 0.5 \cdot 0.5 \cdot 0.5 =
0.0375$ and the probability of reaching the state $s_1$ is
$\Proba^{\mathcal{M}_1}(\ensuremath{\Diamond} s_1) = p(s_0)(s_1) +
\Sigma_{i=0}^{+\infty}{p(s_0)(s_2){\cdot}p(s_2)(s_2)^i{\cdot}p(s_2)(s_1)}
= p(s_0)(s_1) + p(s_0)(s_2){\cdot}p(s_2)(s_1){\cdot}(1/(1-p(s_2)(s_2)))
= 1$. Furthermore, the probability of reaching $\beta$ corresponds to
the probability of reaching the state $s_2$.
\end{example}
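These reachability probabilities can also be approximated numerically. The following Python sketch iterates the defining equations to a fixed point (a standard value-iteration scheme) on $\mathcal{M}_1$; the dictionary encoding of the chain, the number of iterations, and the helper name are illustrative choices of ours.
\begin{verbatim}
# The MC M1 of Figure 1, as in the previous sketch.
P = {
    "s0": {"s1": 0.7, "s2": 0.3},
    "s1": {"s1": 0.5, "s3": 0.5},
    "s2": {"s1": 0.5, "s2": 0.5},
    "s3": {"s3": 1.0},
    "s4": {"s4": 1.0},
}

def reach_probability(targets, iterations=1000):
    """Approximate, for every state, the probability of eventually reaching `targets`."""
    x = {s: (1.0 if s in targets else 0.0) for s in P}
    for _ in range(iterations):
        x = {s: 1.0 if s in targets
             else sum(p * x[t] for t, p in P[s].items())
             for s in P}
    return x

print(round(reach_probability({"s1"})["s0"], 6))  # 1.0: probability of reaching s1, as above
print(round(reach_probability({"s2"})["s0"], 6))  # 0.3: probability of reaching label beta
\end{verbatim}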
\begin{figure*}[t]
\mbox{
\begin{minipage}[t]{.31\textwidth}
\begin{center}
{\tiny
\begin{tikzpicture}[scale=1.5]
\node[vertex] (n0) at (0,0.5) {$s_0$};
\node[vertex, label={[label distance=-0.03cm]-45 :$\alpha$}] (n1) at (0.9,1) {$s_1$};
\node[vertex, label={[label distance=-0.10cm]+45 :$\beta$}] (n2) at (0.9,0) {$s_2$};
\node[vertex, label={[label distance=-0.10cm]-135:$\alpha,\beta$}] (n3) at (1.8,1) {$s_3$};
\node[vertex, label={[label distance=-0.10cm]+135:$\alpha$}] (n4) at (1.8,0) {$s_4$};
\draw[edge] (n0) to node[above left] {$0.7$} (n1);
\draw[edge] (n0) to node[below left] {$0.3$} (n2);
\draw[edge] (n1) to[out=45,in=45+90,min distance=4mm] node[above] {$0.5$} (n1);
\draw[edge] (n1) to node[above] {$0.5$} (n3);
\draw[edge] (n2) to node[right] {$0.5$} (n1);
\draw[edge] (n2) to[out=-45,in=-45-90,min distance=4mm] node[below] {$0.5$} (n2);
\draw[edge] (n3) to[out=45,in=45+90,min distance=4mm] node[above] {$1$} (n3);
\draw[edge] (n4) to[out=-45,in=-45-90,min distance=4mm] node[below] {$1$} (n4);
\end{tikzpicture}}
\caption{{\textnormal{MC}} $\mathcal{M}_1$}\label{fig:example_mc}%
\end{center}
\end{minipage}}
\mbox{
\begin{minipage}[t]{.3\textwidth}
\begin{center}
{\tiny
\begin{tikzpicture}[scale=1.5]
\node[vertex] (n0) at (0,0.5) {$s_0$};
\node[vertex, label={[label distance=-0.03cm]-45 :$\alpha$}] (n1) at (0.9,1) {$s_1$};
\node[vertex, label={[label distance=-0.10cm]+45 :$\beta$}] (n2) at (0.9,0) {$s_2$};
\node[vertex, label={[label distance=-0.10cm]-135:$\alpha,\beta$}] (n3) at (1.8,1) {$s_3$};
\node[vertex, label={[label distance=-0.10cm]+135:$\alpha$}] (n4) at (1.8,0) {$s_4$};
\draw[edge] (n0) to node[above left] {$0.7$} (n1);
\draw[edge] (n0) to node[below left] {$0.3$} (n2);
\draw[edge] (n1) to[out=45,in=45+90,min distance=4mm] node[above] {$1-p$} (n1);
\draw[edge] (n1) to node[above] {$p$} (n3);
\draw[edge] (n2) to node[right] {$p$} (n1);
\draw[edge] (n2) to[out=-45,in=-45-90,min distance=4mm] node[below] {$1-p$} (n2);
\draw[edge] (n3) to[out=45,in=45+90,min distance=4mm] node[above] {$1$} (n3);
\draw[edge] (n4) to[out=-45,in=-45-90,min distance=4mm] node[below] {$1$} (n4);
\end{tikzpicture}}
\caption{\textnormal{pMC}\ $\mathcal{I}^\prime$}\label{fig:example_pmc}%
\end{center}
\end{minipage}}
\mbox{
\begin{minipage}[t]{.35\textwidth}
\begin{center}
{\tiny
\hspace*{-0.1cm}
\begin{tikzpicture}[scale=1.5]
\node[vertex] (n0) at (0,0.5) {$s_0$};
\node[vertex, label={[label distance=-0.03cm]-45 :$\alpha$}] (n1) at (0.9,1) {$s_1$};
\node[vertex, label={[label distance=-0.10cm]+45 :$\beta$}] (n2) at (0.9,0) {$s_2$};
\node[vertex, label={[label distance=-0.10cm]-135:$\alpha, \beta$}] (n3) at (2.1,1) {$s_3$};
\node[vertex, label={[label distance=-0.10cm]+135:$\alpha$}] (n4) at (2.1,0) {$s_4$};
\draw[edge] (n0) to node[above left] {$[0,1]$} (n1);
\draw[edge] (n0) to node[below left] {$[0,1]$} (n2);
\draw[edge] (n1) to[out=45,in=45+90,min distance=4mm] node[above] {$[0.5,1]$} (n1);
\draw[edge] (n1) to node[above] {$[0.3,0.5]$} (n3);
\draw[edge] (n2) to node[right] {$[0,0.6]$} (n1);
\draw[edge] (n2) to[out=-45,in=-45-90,min distance=4mm] node[below] {$[0.2,0.6]$} (n2);
\draw[edge] (n2) to node[below] {$[0,0.5]$} (n4);
\draw[edge] (n3) to[out=45,in=45+90,min distance=4mm] node[above] {$1$} (n3);
\draw[edge] (n4) to node[right] {$[0,0.5]$} (n3);
\draw[edge] (n4) to[out=-45,in=-45-90,min distance=4mm] node[below] {$[0.5,0.6]$} (n4);
\end{tikzpicture}}
\caption{{\textnormal{IMC}} $\mathcal{I}$}\label{fig:example_imc}
\end{center}
\end{minipage}}
\vspace*{-0.4cm}
\end{figure*}
\section{Markov Chains Abstractions}
\label{sec:abstraction-models}
Modelling an application as a Markov Chain requires knowing the exact
probability for each possible transition of the system. However, this
can be difficult to compute or to measure in the case of a real-life
application (\eg precision errors, limited knowledge). In this
section, we start with a generic definition of Markov chain
abstraction models. Then we recall three abstraction models from the
literature, respectively {\textnormal{pMC}}, {\textnormal{IMC}}, and {\textnormal{pIMC}}, and finally we
present a comparison of these existing models in terms of succinctness.
\begin{definition}[Markov chain Abstraction Model]\label{def:abstract_model}
A Markov chain abstraction model (an abstraction model for short) is a pair $(\ensuremath{{\tt L}}, \ensuremath{\models})$
where $\ensuremath{{\tt L}}$ is a nonempty set and $\ensuremath{\models}$ is a relation between ${\mcSet}$ and $\ensuremath{{\tt L}}$.
Let $\mathcal{P}$ be in $\ensuremath{{\tt L}}$ and $\mathcal{M}$ be in {\mcSet}
we say that $\mathcal{M}$ implements $\mathcal{P}$
iff $(\mathcal{M}, \mathcal{P})$ belongs to $\ensuremath{\models}$ (\ie $\mathcal{M} \ensuremath{\models} \mathcal{P}$).
When the context is clear, we do not mention the satisfaction relation $\ensuremath{\models}$
and only use $\ensuremath{{\tt L}}$ to refer to the abstraction model $(\ensuremath{{\tt L}}, \ensuremath{\models})$.
\end{definition}
A {\em Markov chain Abstraction Model} is a specification theory for
{\textnormal{MC}}s. It consists in a set of abstract objects, called {\em
specifications}, each of which representing a (potentially infinite)
set of {\textnormal{MC}}s -- {\em implementations} -- together with a satisfaction
relation defining the link between implementations and specifications.
As an example, consider the powerset of {\mcSet} (\ie the set
containing all the possible sets of Markov chains). Clearly,
$(2^{\mcSet}, \in)$ is a Markov chain abstraction model, which we call
the {\em canonical abstraction model}. This abstraction model has the
advantage of representing all the possible sets of Markov chains but
it also has the disadvantage that some Markov chain abstractions can
only be represented by an infinite enumeration of their elements. Indeed,
recall that there exist subsets of $[0,1] \subseteq \Rset$ that
cannot be finitely represented (e.g., the Cantor set
\cite{Cantor1883}).
We now present existing {\textnormal{MC}} abstraction models from the literature.
\subsection{Existing MC Abstraction Models}
\custompar{Parametric Markov Chain}
is a {\textnormal{MC}} abstraction model
from \cite{Alur93}
where a transition can be annotated by a
rational function over {\em parameters}.
We write $\ensuremath{\tt{pMC}}$ for the set containing
all the parametric Markov chains.
\begin{definition}[Parametric Markov Chain]\label{def:pmc}
A Parametric Markov Chain ({\textnormal{pMC}} for short)
is a tuple $\mathcal{I} = (S,s_0,P,V,Y)$
where $S$, $s_0$, and $V$ are defined as for \textnormal{MC}{s},
$Y$ is a set of variables (parameters),
and $P: S \times S \to \Qset_Y$ associates with each potential transition
a parameterized probability.
\end{definition}
Let $\mathcal{M} = (S,s_0,p,V)$ be a $\textnormal{MC}$ and $\mathcal{I} =
(S,s_0,P,V,Y)$ be a $\textnormal{pMC}$. The satisfaction relation
$\ensuremath{\models_{\tt{p}}}$ between $\mcSet$ and $\ensuremath{\tt{pMC}}$ is defined by
$\mathcal{M} \ensuremath{\models_{\tt{p}}} \mathcal{I}$ iff there exists a
valuation $v$ of $Y$ s.t. $p(s)(s^\prime)$ equals $v(P(s,s^\prime))$
for all $s,s^\prime$ in $S$.
\begin{example}
Figure~\ref{fig:example_pmc} shows a {\textnormal{pMC}} $\mathcal{I}^\prime = (S,s_0,P,V,Y)$
where $S$, $s_0$, and $V$ are the same as in the {\textnormal{MC}} $\mathcal{M}_1$
from Figure~\ref{fig:example_mc}, the set of variables $Y$ contains only one variable $p$,
and the parametric transitions in $P$ are given by the edge labelling
(\eg $P(s_0,s_1) = 0.7$, $P(s_1,s_3) = p$, and $P(s_2,s_2) = 1 - p$).
Note that the {\textnormal{pMC}} $\mathcal{I}^\prime$ is a specification
containing the {\textnormal{MC}} $\mathcal{M}_1$ from Figure~\ref{fig:example_mc} (obtained with the valuation $p = 0.5$).
\end{example}
\custompar{Interval Markov Chains}
extend {\textnormal{MC}}s by allowing to label transitions with intervals of possible
probabilities instead of precise probabilities.
We write
$\ensuremath{\tt{IMC}}$ for the set containing all the interval Markov chains.
\begin{definition}[Interval Markov Chain \cite{JonssonL91}]\label{def:imc}
An Interval Markov Chain ({\textnormal{IMC}} for short) is a tuple $\mathcal{I} = (S,s_0,P,V)$,
where $S$, $s_0$, and $V$ are defined as for \textnormal{MC}{s},
and $P : S \times S \to \Inter$ associates
with each potential transition an interval of probabilities.
\end{definition}
\begin{example}\label{ex:imc}
Figure \ref{fig:example_imc} illustrates \textnormal{IMC}\ $\mathcal{I} = (S, s_0, P, V)$ where
$S$, $s_0$, and $V$ are similar to the \textnormal{MC}\ given in Figure \ref{fig:example_mc}.
By observing the edge labelling
we see that $P(s_0,s_1)=[0,1]$, $P(s_1,s_1)=[0.5,1]$, and $P(s_3,s_3)=[1, 1]$.
On the other hand, the intervals of probability for missing transitions are reduced to $[0,0]$, e.g., $P(s_0,s_0)=[0,0]$, $P(s_0,s_3)=[0,0]$, $P(s_1,s_4)=[0,0]$.
\end{example}
In the literature, {\textnormal{IMC}}s have been mainly used with two distinct
semantics: {\em at-every-step} and {\em once-and-for-all}. Both
semantics are associated with distinct satisfaction relations which we
now introduce.
The {\em once-and-for-all} {\textnormal{IMC}} semantics (\cite{Prophesy,tulip,puggelli13}) is akin to the \textnormal{pMC}\ semantics
introduced above. The associated satisfaction relation
$\ensuremath{\models^{o}_{\tt{I}}}$ is defined as follows: a {\textnormal{MC}} $\mathcal{M} =
(T, t_0, p, V^M)$ satisfies an {\textnormal{IMC}} $\mathcal{I} =
(S,s_0,P,V^I)$ iff $(T,t_0,V^M) = (S,s_0,V^I)$ and for every
reachable state $s$ and every state $s' \in S$, $p(s)(s') \in
P(s,s')$. In this sense, we say that {\textnormal{MC}} implementations using the
once-and-for-all semantics need to have the same structure as the
{\textnormal{IMC}} specification.
On the other hand, the {\em at-every-step} {\textnormal{IMC}} semantics, first introduced
in~\cite{JonssonL91}, operates as a simulation relation based on the
transition probabilities and state labels, and therefore allows {\textnormal{MC}}
implementations to have a different structure than the {\textnormal{IMC}}
specification. The associated satisfaction relation $\ensuremath{\models^{a}_{\tt{I}}}$
is defined as follows: A {\textnormal{MC}} $\mathcal{M}$ $=$ $(T,
t_0,$ $p,$ $V^M)$ satisfies an {\textnormal{IMC}} $\mathcal{I} = (S,s_0,P,V^I)$ iff
there exists a relation $\mathcal{R} \subseteq T \times S$ such that
$(t_0, s_0) \in \mathcal{R}$ and
whenever $(t,s) \in \mathcal{R}$, we have \begin{enumerate*}
\item the labels of $s$ and $t$ correspond: $V^M(t) = V^I(s)$,
\item there exists a correspondence function $\delta: T \to (S \to [0, 1])$ s.t.
\begin{enumerate*}
\item $\forall t^\prime \in T$ if $p(t)(t^\prime) > 0$ then $\delta(t^\prime)$ is a distribution on $S$
\item $\forall s^\prime \in S:
(\Sigma_{t^\prime \in T} p(t)(t^\prime) \cdot \delta(t^\prime)(s^\prime)) \in P(s,s^\prime)$, and
\item $\forall (t^\prime,s^\prime) \in T \times S$, if $\delta(t^\prime)(s^\prime) > 0$,
then $(t^\prime, s^\prime) \in \mathcal{R}$.
\end{enumerate*}
\end{enumerate*}
By construction, it is clear that $\ensuremath{\models^{a}_{\tt{I}}}$ is more general
than $\ensuremath{\models^{o}_{\tt{I}}}$, \ie that whenever $\mathcal{M}
\ensuremath{\models^{o}_{\tt{I}}} \mathcal{I}$, we also have $\mathcal{M}
\ensuremath{\models^{a}_{\tt{I}}} \mathcal{I}$. The reverse is obviously not true in
general, even when the underlying graphs of $\mathcal{M}$ and
$\mathcal{I}$ are isomorphic (see Appendix~\ref{ap:compare_imcs_satisfaction_relations} for details).
\begin{figure*}[t]
\mbox{
\begin{minipage}[t]{.48\textwidth}
\begin{center}
{\tiny
\begin{tikzpicture}[scale=1.5]
\node[vertex] (n0) at (0,0.5) {$t_0$};
\node[vertex, label={[label distance=-0.03cm]-105 :$\alpha$}] (n1) at (0.9,1) {$t_1$};
\node[vertex, label={[label distance=-0.05cm]0 :$\beta$}] (n2) at (0.9,0) {$t_2$};
\node[vertex, label={[label distance=-0.05cm]0 :$\alpha,\beta$}] (n3) at (2,1.3) {$t_3$};
\node[vertex, label={[label distance=-0.05cm]0 :$\alpha,\beta$}, text height=1.3ex,text width=0.6em] (n3p) at (2,0.7) {$t_{3^\prime}$};
\draw[edge] (n0) to node[above left] {$0.7$} (n1);
\draw[edge] (n0) to node[below left] {$0.3$} (n2);
\draw[edge] (n1) to[out=45,in=45+90,min distance=4mm] node[above] {$0.5$} (n1);
\draw[edge] (n1) to node[above] {$0.3$} (n3);
\draw[edge] (n1) to node[below] {$0.2$} (n3p);
\draw[edge] (n2) to node[right] {$0.8$} (n1);
\draw[edge] (n2) to[out=-45,in=-45-90,min distance=4mm] node[below] {$0.2$} (n2);
\draw[edge] (n3) to[out=45+90,in=45,min distance=4mm] node[above] {$0.2$} (n3);
\draw[edge] (n3) to node[right] {$0.8$} (n3p);
\draw[edge] (n3p) to[out=-45-90,in=-45,min distance=4mm] node[below] {$1$} (n3p);
\end{tikzpicture}}
\caption{\textnormal{MC}\ $\mathcal{M}_2$ satisfying the \textnormal{IMC}\ $\mathcal{I}$ from Figure \ref{fig:example_imc} with a different structure}\label{fig:example_mc_not_iso}
\end{center}
\end{minipage}}
\hspace{.005\textwidth}
\mbox{
\begin{minipage}[t]{.46\textwidth}
\begin{center}
{\tiny
\begin{tikzpicture}[scale=1.4]
\node[vertex] (n0) at (0,0.5) {$s_0$};
\node[vertex, label={[label distance=-0.03cm]-45 :$\alpha$}] (n1) at (0.9,1) {$s_1$};
\node[vertex, label={[label distance=-0.10cm]+45 :$\beta$}] (n2) at (0.9,0) {$s_2$};
\node[vertex, label={[label distance=-0.10cm]-135:$\alpha, \beta$}] (n3) at (2.1,1) {$s_3$};
\node[vertex, label={[label distance=-0.10cm]+135:$\alpha$}] (n4) at (2.1,0) {$s_4$};
\draw[edge] (n0) to node[above left] {$[0,1]$} (n1);
\draw[edge] (n0) to node[below left] {$[0,1]$} (n2);
\draw[edge] (n1) to[out=45,in=45+90,min distance=4mm] node[above] {$[q,1]$} (n1);
\draw[edge] (n1) to node[above] {$[0.3,q]$} (n3);
\draw[edge] (n2) to node[right] {$[0,p]$} (n1);
\draw[edge] (n2) to[out=-45,in=-45-90,min distance=4mm] node[below] {$[0.2,p]$} (n2);
\draw[edge] (n2) to node[below] {$[0,0.5]$} (n4);
\draw[edge] (n3) to[out=45,in=-45,min distance=4mm] node[right] {$1$} (n3);
\draw[edge] (n4) to node[right] {$[0,0.5]$} (n3);
\draw[edge] (n4) to[out=-45,in=45,min distance=4mm] node[right] {$[0.5,p]$} (n4);
\end{tikzpicture}}
\caption{{\textnormal{pIMC}} $\mathcal{P}$}\label{fig:example_pimc}
\end{center}
\end{minipage}}
\vspace*{-0.4cm}
\end{figure*}
\begin{example}[Example \ref{ex:imc} continued]\label{ex:mcs_satify_imc}
Consider the {\textnormal{MC}} $\mathcal{M}_1$ with state space $S$ from Figure \ref{fig:example_mc} and
the {\textnormal{MC}} $\mathcal{M}_2$ with state space $T$ from Figure \ref{fig:example_mc_not_iso}. They both
satisfy the \textnormal{IMC}\ $\mathcal{I}$ with state space $S$ given in Figure \ref{fig:example_imc}.
Furthermore, $\mathcal{M}_1$ satisfies $\mathcal{I}$ with the same structure.
On the other hand, for the {\textnormal{MC}} $\mathcal{M}_2$ given in Figure \ref{fig:example_mc_not_iso},
the state $s_3$ from $\mathcal{I}$ has been ``split'' into two states $t_3$ and $t_{3^\prime}$ in $\mathcal{M}_2$
and the state $t_1$ from $\mathcal{M}_2$ ``aggregates'' states $s_1$ and $s_4$ in $\mathcal{I}$.
The relation $\mathcal{R} \subseteq T \times S$ containing the pairs
$(t_0,s_0)$, $(t_1,s_1)$, $(t_1,s_4)$, $(t_2,s_2)$, $(t_3,s_3)$, and $(t_{3^\prime}, s_3)$
is a satisfaction relation between $\mathcal{M}_2$ and $\mathcal{I}$.
\end{example}
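The once-and-for-all relation is straightforward to check mechanically when the structures coincide. The following Python sketch verifies that $\mathcal{M}_1$ satisfies $\mathcal{I}$ with the same structure; the dictionary encodings of the two models are our own illustrative choices, missing intervals stand for $[0,0]$, and the set of reachable states is supplied by the caller.
\begin{verbatim}
# The MC M1 (Fig. 1) and the IMC I (Fig. 3); intervals are (low, high) pairs.
M1 = {
    "s0": {"s1": 0.7, "s2": 0.3},
    "s1": {"s1": 0.5, "s3": 0.5},
    "s2": {"s1": 0.5, "s2": 0.5},
    "s3": {"s3": 1.0},
    "s4": {"s4": 1.0},
}
I = {
    "s0": {"s1": (0.0, 1.0), "s2": (0.0, 1.0)},
    "s1": {"s1": (0.5, 1.0), "s3": (0.3, 0.5)},
    "s2": {"s1": (0.0, 0.6), "s2": (0.2, 0.6), "s4": (0.0, 0.5)},
    "s3": {"s3": (1.0, 1.0)},
    "s4": {"s3": (0.0, 0.5), "s4": (0.5, 0.6)},
}

def satisfies_once_and_for_all(mc, imc, reachable):
    """Every outgoing probability of a reachable state must lie in the specified interval."""
    for s in reachable:
        for t in set(mc[s]) | set(imc[s]):
            low, high = imc[s].get(t, (0.0, 0.0))   # a missing interval stands for [0, 0]
            if not (low <= mc[s].get(t, 0.0) <= high):
                return False
    return True

print(satisfies_once_and_for_all(M1, I, {"s0", "s1", "s2", "s3"}))  # True
\end{verbatim}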
\custompar{Parametric Interval Markov Chains},
as introduced in \cite{DelahayeLP16}, abstract {\textnormal{IMC}}s by allowing (combinations of)
parameters to be used as interval endpoints in {\textnormal{IMC}}s. Under a given
parameter valuation the {\textnormal{pIMC}} yields an {\textnormal{IMC}} as introduced above.
{\textnormal{pIMC}}s therefore allow the representation, in a compact way and with a
finite structure, of a potentially infinite number of {\textnormal{IMC}}s. Note that
one parameter can appear in several transitions at once, requiring the
associated transition probabilities to depend on one another.
Let $Y$ be a finite set of
parameters and $v$ be a valuation over $Y$. By combining the notations
used for $\textnormal{IMC}$s and $\textnormal{pMC}$s, the set $\Inter(\Qset_Y)$ contains all
parametrized intervals over $[0,1]$, and for all $I
= [f_1, f_2] \in \Inter(\Qset_Y)$, $v(I)$ denotes the interval
$[v(f_1), v(f_2)]$ if $0 \le v(f_1) \leq v(f_2) \le 1$ and the empty set
otherwise\footnote{Indeed, when $0 \le v(f_1) \leq v(f_2) \le 1$ is not respected, the interval is inconsistent and therefore empty.}.
We write $\ensuremath{\tt{pIMC}}$ for the set containing all the parametric interval Markov chains.
\begin{definition}[Parametric Interval Markov Chain \cite{DelahayeLP16}]\label{def:pimc}
A Parametric Interval Markov Chain ({\textnormal{pIMC}} for short) is a tuple $\mathcal{P} = (S,s_0,P,V,Y)$,
where $S$, $s_0$, $V$ and $Y$ are defined as for {\textnormal{pMC}}s,
and
$P : S \times S \to \Inter(\Qset_Y)$
associates with each potential transition a (parametric) interval.
\end{definition}
In \cite{DelahayeLP16} the authors introduced {\textnormal{pIMC}}s where parametric interval
endpoints are limited to linear combinations of parameters. In this paper we extend the {\textnormal{pIMC}} model by allowing rational
functions over parameters as endpoints of parametric intervals. Given
a \textnormal{pIMC}\ $\mathcal{P} =(S,s_0,P,V,Y)$ and a valuation $v$, we write
$v(\mathcal{P})$ for the \textnormal{IMC}\ $(S,s_0,P_v,V)$ obtained by replacing
the transition function $P$ from $\mathcal{P}$ with the function $P_v
: S \times S \to \Inter$ defined by $P_v(s,s^\prime) =
v(P(s,s^\prime))$ for all $s,s^\prime \in S$.
The \textnormal{IMC}\ $v(\mathcal{P})$ is called an {\em instance} of
\textnormal{pIMC}\ $\mathcal{P}$. Finally, depending on the semantics chosen for
{\textnormal{IMC}}s, two satisfaction relations can be defined between
{\textnormal{MC}}s and {\textnormal{pIMC}}s. They are written $\ensuremath{\models^a_{\tt{pI}}}$ and
$\ensuremath{\models^o_{\tt{pI}}}$ and defined as follows: $\mathcal{M}
\ensuremath{\models^a_{\tt{pI}}} \mathcal{P}$ (resp. $\ensuremath{\models^o_{\tt{pI}}}$)
iff there exists an \textnormal{IMC}\ $\mathcal{I}$ instance of
$\mathcal{P}$ s.t. $\mathcal{M} \ensuremath{\models^{a}_{\tt{I}}}
\mathcal{I}$ (resp. $\ensuremath{\models^{o}_{\tt{I}}}$).
\begin{example}
Consider the {\textnormal{pIMC}} $\mathcal{P} = (S,s_0,P,V,Y)$ given in Figure \ref{fig:example_pimc}.
The set of states $S$ and the labelling function
are the same as in the ${\textnormal{MC}}$ and the ${\textnormal{IMC}}$ presented in Figures~\ref{fig:example_mc}~and~\ref{fig:example_imc} respectively.
The set of parameters $Y$ has two elements $p$ and $q$.
Finally, the parametric intervals from the transition function $P$
are given by the edge labelling
(\eg $P(s_1,s_3)=[0.3, q]$, $P(s_2,s_4)=[0,0.5]$, and $P(s_3,s_3)=[1,1]$).
Note that the {\textnormal{IMC}} $\mathcal{I}$ from Figure \ref{fig:example_imc}
is an instance of $\mathcal{P}$
(by assigning the value $0.6$ to the parameter $p$ and $0.5$ to $q$).
Furthermore, as said in Example \ref{ex:mcs_satify_imc},
the Markov Chains $\mathcal{M}_1$ and $\mathcal{M}_2$ (from Figures \ref{fig:example_mc} and \ref{fig:example_mc_not_iso} respectively)
satisfy $\mathcal{I}$, therefore $\mathcal{M}_1$ and $\mathcal{M}_2$ satisfy $\mathcal{P}$.
\end{example}
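To illustrate instantiation, the following Python sketch evaluates the parametric endpoints of $\mathcal{P}$ (Figure~\ref{fig:example_pimc}) under a valuation; representing each endpoint as a Python callable over the valuation is an encoding choice of ours, not part of the formal definition.
\begin{verbatim}
# The pIMC P of Figure 5; each endpoint is a callable over the parameter valuation.
PIMC = {
    "s0": {"s1": (lambda v: 0.0, lambda v: 1.0),
           "s2": (lambda v: 0.0, lambda v: 1.0)},
    "s1": {"s1": (lambda v: v["q"], lambda v: 1.0),
           "s3": (lambda v: 0.3, lambda v: v["q"])},
    "s2": {"s1": (lambda v: 0.0, lambda v: v["p"]),
           "s2": (lambda v: 0.2, lambda v: v["p"]),
           "s4": (lambda v: 0.0, lambda v: 0.5)},
    "s3": {"s3": (lambda v: 1.0, lambda v: 1.0)},
    "s4": {"s3": (lambda v: 0.0, lambda v: 0.5),
           "s4": (lambda v: 0.5, lambda v: v["p"])},
}

def instantiate(pimc, valuation):
    """Evaluate every parametric endpoint; ill-formed intervals become None (the empty set)."""
    imc = {}
    for s, row in pimc.items():
        imc[s] = {}
        for t, (lo, hi) in row.items():
            low, high = lo(valuation), hi(valuation)
            imc[s][t] = (low, high) if 0.0 <= low <= high <= 1.0 else None
    return imc

print(instantiate(PIMC, {"p": 0.6, "q": 0.5})["s1"]["s3"])  # (0.3, 0.5), as in the IMC of Fig. 3
\end{verbatim}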
In the following, we consider that the size of a {\textnormal{pMC}}, {\textnormal{IMC}}, or {\textnormal{pIMC}}
corresponds to its number of states plus its number of transitions not reduced to $0$, $[0,0]$ or $\emptyset$.
We will also often need to consider the predecessors (\ensuremath{{\tt{Pred}}}),
and the successors (\ensuremath{{\tt{Succ}}}) of some given states.
Given a \textnormal{pIMC}\ with a set of states $S$, a state $s$ in $S$, and a subset $S^\prime$ of $S$, we write:
\smallskip
\begin{minipage}{\textwidth}
\hspace{-.8cm}
\begin{minipage}{.58\textwidth}
\begin{itemize}
\item { $\ensuremath{{\tt{Pred}}}(s) = \{s^\prime \in S \mid P(s^\prime, s)
\notin \{\emptyset, [0, 0]\}\}$}
\item { $\ensuremath{{\tt{Succ}}}(s) = \{s^\prime \in S \mid P(s,
s^\prime) \notin \{\emptyset, [0,
0] \}\}$}
\end{itemize}
\end{minipage}
\begin{minipage}{.41\textwidth}
\begin{itemize}
\item { $\ensuremath{{\tt{Pred}}}(S^\prime) = \bigcup_{s^\prime \in
S^\prime} \ensuremath{{\tt{Pred}}}(s^\prime)$}
\item { $\ensuremath{{\tt{Succ}}}(S^\prime) = \bigcup_{s^\prime \in S^\prime}
\ensuremath{{\tt{Succ}}}(s^\prime)$}
\end{itemize}
\end{minipage}
\end{minipage}
\smallskip
\subsection{Abstraction Model Comparisons}
${\textnormal{IMC}}$, ${\textnormal{pMC}}$, and ${\textnormal{pIMC}}$ are three Markov chain Abstraction Models.
In order to compare their expressiveness and compactness, we introduce
the comparison operators $\ensuremath{\sqsubseteq}$ and $\equiv$.
Let $(\ensuremath{{\tt L}}_1, \models_1)$ and $(\ensuremath{{\tt L}}_2,\models_2\nobreak)$ be two Markov chain abstraction models
containing respectively the specifications $\mathcal{L}_1$ and $\mathcal{L}_2$.
We say that $\mathcal{L}_1$ is entailed by $\mathcal{L}_2$,
written $\mathcal{L}_1 \ensuremath{\sqsubseteq} \mathcal{L}_2$,
iff all the {\textnormal{MC}}s satisfying $\mathcal{L}_1$ satisfy $\mathcal{L}_2$
modulo bisimilarity.
(\ie
$\forall \mathcal{M} \models_1 \mathcal{L}_1, \exists \mathcal{M}^\prime \models_2 \mathcal{L}_2$ s.t. $\mathcal{M}$ is bisimilar to $\mathcal{M}^\prime$).
We say that $\mathcal{L}_1$ is (semantically) equivalent to $\mathcal{L}_2$,
written $\mathcal{L}_1 \equiv \mathcal{L}_2$,
iff $\mathcal{L}_1 \ensuremath{\sqsubseteq} \mathcal{L}_2$ and $\mathcal{L}_2 \ensuremath{\sqsubseteq} \mathcal{L}_1$.
Definition~\ref{def:succinctness} introduces succinctness based on the sizes of the abstractions.
\begin{definition}[Succinctness]\label{def:succinctness}
Let $(\ensuremath{{\tt L}}_1, \models_1)$ and $(\ensuremath{{\tt L}}_2, \models_2)$ be two Markov chain abstraction models.
$\ensuremath{{\tt L}}_1$ is at least as succinct as $\ensuremath{{\tt L}}_2$,
written $\ensuremath{{\tt L}}_1 \leq \ensuremath{{\tt L}}_2$,
iff there exists a polynomial $p$ such that for every
$\mathcal{L}_2 \in \ensuremath{{\tt L}}_2$,
there exists $\mathcal{L}_1 \in \ensuremath{{\tt L}}_1$
s.t. $\mathcal{L}_1 \equiv \mathcal{L}_2$ and
$|\mathcal{L}_1| \leq p(|\mathcal{L}_2|)$.\footnote{
$|\mathcal{L}_1|$ and $|\mathcal{L}_2|$ are the sizes of $\mathcal{L}_1$ and
$\mathcal{L}_2$, respectively.}
Furthermore, $\ensuremath{{\tt L}}_1$ is strictly more succinct than $\ensuremath{{\tt L}}_2$,
written $\ensuremath{{\tt L}}_1 < \ensuremath{{\tt L}}_2$, iff
$\ensuremath{{\tt L}}_1 \leq \ensuremath{{\tt L}}_2$
and $\ensuremath{{\tt L}}_2 \not\leq \ensuremath{{\tt L}}_1$.
\end{definition}
We start with a comparison of the succinctness of the {\textnormal{pMC}} and {\textnormal{IMC}}
abstractions. Since {\textnormal{pMC}}s allow the expression of dependencies
between the probabilities assigned to distinct transitions while
{\textnormal{IMC}}s allow all transitions to be independent, it is clear that
there are {\textnormal{pMC}}s without any equivalent {\textnormal{IMC}}s (regardless of the
{\textnormal{IMC}} semantics used), therefore $(\ensuremath{\tt{IMC}},\ensuremath{\models^{o}_{\tt{I}}}) \not
\le \ensuremath{\tt{pMC}}$ and $(\ensuremath{\tt{IMC}},\ensuremath{\models^{a}_{\tt{I}}}) \not \le \ensuremath{\tt{pMC}}$. On
the other hand, {\textnormal{IMC}}s imply that transition probabilities need to
satisfy linear inequalities in order to fit given intervals. However,
these types of constraints are not allowed in {\textnormal{pMC}}s. It is therefore
easy to exhibit {\textnormal{IMC}}s that, regardless of the semantics considered,
do not have any equivalent {\textnormal{pMC}} specification. As a consequence, $
\ensuremath{\tt{pMC}} \not \le (\ensuremath{\tt{IMC}},\ensuremath{\models^{o}_{\tt{I}}})$ and $ \ensuremath{\tt{pMC}} \not
\le (\ensuremath{\tt{IMC}},\ensuremath{\models^{a}_{\tt{I}}})$.
We now compare {\textnormal{pMC}}s and {\textnormal{IMC}}s to {\textnormal{pIMC}}s. Recall that the
{\ensuremath{\tt{pIMC}}} model is a Markov chain abstraction model allowing to
declare parametric interval transitions, while the {\ensuremath{\tt{pMC}}} model
allows only parametric transitions (without intervals), and the
{\ensuremath{\tt{IMC}}} model allows interval transitions without parameters.
Clearly, any {\textnormal{pMC}} and any {\textnormal{IMC}} can be translated into a {\textnormal{pIMC}}
with the right semantics (once-and-for-all for {\textnormal{pMC}}s and the chosen
{\textnormal{IMC}} semantics for {\textnormal{IMC}}s). This means that
$({\ensuremath{\tt{pIMC}}},\ensuremath{\models^o_{\tt{pI}}})$ is at least as succinct as {\ensuremath{\tt{pMC}}}
and {\ensuremath{\tt{pIMC}}} is at least as succinct as {\ensuremath{\tt{IMC}}} for both semantics.
Furthermore, since {\ensuremath{\tt{pMC}}} and {\ensuremath{\tt{IMC}}} are not comparable due to
the above results, we have that the {\ensuremath{\tt{pIMC}}}
abstraction model is strictly more succinct than the {\ensuremath{\tt{pMC}}}
abstraction model and than the {\ensuremath{\tt{IMC}}} abstraction model with the
right semantics. Our comparison results are presented in Proposition~\ref{prop:sunccinctness_pimc_pmc_imc}.
Further explanations and examples
are given in Appendix~\ref{ap:model_comparison}.
\begin{proposition}\label{prop:sunccinctness_pimc_pmc_imc}
The Markov chain abstraction models can be ordered as follows w.r.t.
succinctness:
$({\ensuremath{\tt{pIMC}}},\ensuremath{\models^o_{\tt{pI}}}) < (\ensuremath{\tt{pMC}}, \ensuremath{\models_{\tt{p}}})$,
$({\ensuremath{\tt{pIMC}}},\ensuremath{\models^o_{\tt{pI}}}) < (\ensuremath{\tt{IMC}}, \ensuremath{\models^{o}_{\tt{I}}})$ and
$({\ensuremath{\tt{pIMC}}},\ensuremath{\models^a_{\tt{pI}}}\nobreak) < (\ensuremath{\tt{IMC}}, \ensuremath{\models^{a}_{\tt{I}}})$.
\end{proposition}
Note that $(\ensuremath{\tt{pMC}}, \ensuremath{\models_{\tt{p}}}) \le (\ensuremath{\tt{IMC}}, \ensuremath{\models^{o}_{\tt{I}}})$ could be achieved by adding unary constraints on the parameters of a {\textnormal{pMC}}, which is not allowed here. However, this would not have any impact on our other results.
\section{Qualitative Properties}\label{sec:qualitative-reachability}
As seen above, {\textnormal{pIMC}}s are a succinct abstraction formalism for
{\textnormal{MC}}s. The aim of this section is to investigate qualitative
properties for {\textnormal{pIMC}}s, \ie properties that can be evaluated at the
specification ({\textnormal{pIMC}}) level, but that entail properties on its {\textnormal{MC}}
implementations.
{\textnormal{pIMC}} specifications are very expressive as they allow the abstraction of
transition probabilities using both intervals and
parameters. Unfortunately, as is the case for {\textnormal{IMC}}s, this allows the
expression of incorrect specifications. In the {\textnormal{IMC}} setting, this is
the case either when some intervals are ill-formed or when there is no
probability distribution matching the interval constraints of the
outgoing transitions of some reachable state. In this case, no {\textnormal{MC}}
implementation exists that satisfies the {\textnormal{IMC}} specification.
Deciding
whether an implementation
that satisfies a given specification
exists is
called the consistency problem. In the {\textnormal{pIMC}} setting, the
consistency problem is made more complex because of the parameters
which can also induce inconsistencies in some cases. One could also be
interested in verifying whether there exists an implementation that
reaches some target states/labels, and if so, propose a parameter
valuation ensuring this property. Both the consistency and the
consistent reachability problems have already been investigated in the
{\textnormal{IMC}} and {\textnormal{pIMC}} setting~\cite{Delahaye15,DelahayeLP16}. In this section,
we briefly recall these problems and propose new solutions based on CSP encodings. Our encodings are linear in the size
of the original {\textnormal{pIMC}}s whereas the algorithms
from~\cite{Delahaye15,DelahayeLP16} are exponential.
\subsection{Existential Consistency}
A {\textnormal{pIMC}} $\mathcal{P}$ is existential consistent iff
there exists a {\textnormal{MC}} $\mathcal{M}$ satisfying $\mathcal{P}$ (\ie
there exists a {\textnormal{MC}} $\mathcal{M}$ satisfying an {\textnormal{IMC}} $\mathcal{I}$ instance of $\mathcal{P}$).
As seen in Section~\ref{sec:background}, {\textnormal{pIMC}}s are equipped with two
semantics: once-and-for-all ($\ensuremath{\models^o_{\tt{pI}}}$) and
at-every-step ($\ensuremath{\models^a_{\tt{pI}}}$). Recall that
$\ensuremath{\models^o_{\tt{pI}}}$ imposes that the underlying graph structure of
implementations needs to be isomorphic to the graph structure of the
corresponding specification. In contrast, $\ensuremath{\models^a_{\tt{pI}}}$ allows
implementations to have a different graph structure. It therefore
seems that some {\textnormal{pIMC}}s could be inconsistent w.r.t
$\ensuremath{\models^o_{\tt{pI}}}$ while being consistent w.r.t
$\ensuremath{\models^a_{\tt{pI}}}$. On the other hand, checking the consistency w.r.t
$\ensuremath{\models^o_{\tt{pI}}}$ seems easier because of the fixed graph
structure.
In \cite{Delahaye15}, the author first proved that both semantics are equivalent
w.r.t. existential consistency, and proposed a {\textnormal{CSP}} encoding for verifying this property
which is exponential in the size of the {\textnormal{pIMC}}.
Based on this equivalence of the semantics
w.r.t. existential consistency from \cite{Delahaye15},
we propose a new {\textnormal{CSP}} encoding, written
{\Mec}, for verifying the existential consistency property for
{\textnormal{pIMC}}s.
\begin{figure*}[t]
\mbox{
\begin{minipage}[t]{.48\textwidth}
\begin{center}
{\tiny
\begin{tikzpicture}[scale=1.4]
\node[vertex] (n0) at (0,0.5) {$\rho_0$};
\node[vertex] (n1) at (1.2,1) {$\rho_1$};
\node[vertex] (n2) at (1.2,0) {$\rho_2$};
\node[vertex] (n3) at (2.4,1) {$\rho_3$};
\node[vertex] (n4) at (2.4,0) {$\rho_4$};
\node[vertex,draw=none,fill=none,minimum size=0em] (nP) at (3.5,0.66) {$\pi_p \in [0,1]$};
\node[vertex,draw=none,fill=none,minimum size=0em] (nQ) at (3.5,0.33) {$\pi_q \in [0,1]$};
\draw[edge] (n0) to node[above left] {$\transition{0}{1}$} (n1);
\draw[edge] (n0) to node[below left] {$\transition{0}{2}$} (n2);
\draw[edge] (n1) to[out=45,in=45+90,min distance=4mm] node[above] {$\transition{1}{1}$} (n1);
\draw[edge] (n1) to node[above] {$\transition{1}{3}$} (n3);
\draw[edge] (n2) to node[right] {$\transition{2}{1}$} (n1);
\draw[edge] (n2) to[out=-45,in=-45-90,min distance=4mm] node[below] {$\transition{2}{2}$} (n2);
\draw[edge] (n2) to node[below] {$\transition{2}{4}$} (n4);
\draw[edge] (n3) to[out=45,in=-45,min distance=4mm] node[right] {$\transition{3}{3}$} (n3);
\draw[edge] (n4) to node[right] {$\transition{4}{3}$} (n3);
\draw[edge] (n4) to[out=-45,in=45,min distance=4mm] node[right] {$\transition{4}{4}$} (n4);
\end{tikzpicture}}
\caption{Variables in the {\textnormal{CSP}} produced by {\Mec} for the {\textnormal{pIMC}} $\mathcal{P}$ from Fig. \ref{fig:example_pimc}}\label{fig:variables_consistency}
\end{center}
\end{minipage}}
\hspace{.005\textwidth}
\mbox{
\begin{minipage}[t]{.48\textwidth}
\begin{center}
{\tiny
\hspace{-0.4cm}
\begin{tikzpicture}[scale=1.4]
\node[vertex] (n0) at (0,0.5) {$\top$};
\node[vertex] (n1) at (1.2,1) {$\top$};
\node[vertex] (n2) at (1.2,0) {$\top$};
\node[vertex] (n3) at (2.4,1) {$\top$};
\node[vertex] (n4) at (2.4,0) {$\bot$};
\node[vertex,draw=none,fill=none,minimum size=0em] (nP) at (3.4,0.66) {$\pi_p = 0.5$};
\node[vertex,draw=none,fill=none,minimum size=0em] (nQ) at (3.4,0.33) {$\pi_q = 0.5$};
\draw[edge] (n0) to node[above left] {$0.7$} (n1);
\draw[edge] (n0) to node[below left] {$0.3$} (n2);
\draw[edge] (n1) to[out=45,in=45+90,min distance=4mm] node[above] {$0.5$} (n1);
\draw[edge] (n1) to node[above] {$0.5$} (n3);
\draw[edge] (n2) to node[right] {$0.5$} (n1);
\draw[edge] (n2) to[out=-45,in=-45-90,min distance=4mm] node[below] {$0.5$} (n2);
\draw[edge] (n2) to node[below] {$0$} (n4);
\draw[edge] (n3) to[out=45,in=-45,min distance=4mm] node[right] {$1$} (n3);
\draw[edge] (n4) to node[right] {$0$} (n3);
\draw[edge] (n4) to[out=-45,in=45,min distance=4mm] node[right] {$0$} (n4);
\end{tikzpicture}}
\caption{A solution to the {\textnormal{CSP}} {\Mec}($\mathcal{P}$) for the {\textnormal{pIMC}} $\mathcal{P}$ from Fig. \ref{fig:example_pimc}}\label{fig:solution_consistency}
\end{center}
\end{minipage}}
\vspace*{-0.4cm}
\end{figure*}
Let $\mathcal{P}$ $=$ $(S,$$s_0,$$P,$$V,$$Y)$ be a {\textnormal{pIMC}}. We
write \Mec($\mathcal{P}$) for the {\textnormal{CSP}} produced by {\Mec} according
to $\mathcal{P}$. Any solution of \Mec($\mathcal{P}$) will
correspond to a {\textnormal{MC}} satisfying $\mathcal{P}$. In
\Mec($\mathcal{P}$), we use one variable $\pi_p$ with domain $[0,1]$
per parameter $p$ in $Y$; one variable $\transition{s}{s^\prime}$ with
domain $[0, 1]$ per transition $(s, s^\prime)$ in $\{ \{s\} \times
\ensuremath{{\tt{Succ}}}(s) \mid s \in S\}$; and
one Boolean variable $\rho_s$ per state $s$ in $S$.
These Boolean variables will indicate for each state whether it appears in the {\textnormal{MC}} solution of the {\textnormal{CSP}} (\ie in the {\textnormal{MC}} satisfying the {\textnormal{pIMC}} $\mathcal{P}$).
For each state $s \in S$, the constraints are as follows:
\begin{minipage}{\textwidth}
\hspace{-.4cm}
\begin{minipage}[t]{.54\textwidth}
\begin{enumerate}
\enumerateConstraints
\item {$\rho_{s}$,
if $s = s_0$}\label{encoding_ec_init_state}%
\setcounter{enumi}{2}
\item {$\neg \rho_s \Leftrightarrow
\Sigma_{s^\prime \in \ensuremath{{\tt{Pred}}}(s) \setminus \{s\}} \transition{s^\prime}{s} = 0$,
if $s \ne s_0$}\label{encoding_ec_cstr_reach_propag}%
\setcounter{enumi}{4}
\item { $\rho_s \Rightarrow
\transition{s}{s^\prime} \in P(s,s^\prime)$,
for all $s^\prime \in \ensuremath{{\tt{Succ}}}(s)$}\label{encoding_ec_cstr_intervals}%
\end{enumerate}
\end{minipage}
\hspace{.04\textwidth}
\begin{minipage}[t]{.4\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{1}
\item { $\rho_s \Leftrightarrow
\Sigma_{s^\prime \in \ensuremath{{\tt{Succ}}}(s)} \transition{s}{s^\prime} = 1$}\label{encoding_ec_sum_to_one}%
\setcounter{enumi}{3}
\item { $\neg \rho_s \Leftrightarrow
\Sigma_{s^\prime \in \ensuremath{{\tt{Succ}}}(s)} \transition{s}{s^\prime} = 0$}\label{encoding_ec_sum_to_zero}%
\end{enumerate}
\end{minipage}
\end{minipage}
\smallskip
Recall that given a {\textnormal{pIMC}} $\mathcal{P}$ the objective of
the {\textnormal{CSP}} $\Mec(\mathcal{P})$ is to construct a {\textnormal{MC}} $\mathcal{M}$
satisfying $\mathcal{P}$. Constraint~\ref{encoding_ec_init_state} states that the initial
state $s_0$ appears in $\mathcal{M}$. Constraint~\ref{encoding_ec_cstr_reach_propag} ensures
that for each non-initial state $s$, variable $\rho_s$ is set to
{\false} iff $s$ is not reachable from its predecessors.
Constraint~\ref{encoding_ec_sum_to_one} ensures that if a
state $s$ appears in $\mathcal{M}$, then its outgoing transitions
form a probability distribution.
On the contrary, Constraint~\ref{encoding_ec_sum_to_zero} propagates
non-appearing states (\ie if a state $s$
does not appear in $\mathcal{M}$ then all its outgoing transitions
are set to zero). Finally, Constraint~\ref{encoding_ec_cstr_intervals} states
that, for all appearing states, the outgoing transition
probabilities must be selected inside the specified intervals.
\begin{example}\label{ex:model_consistency} Consider the {\textnormal{pIMC}} $\mathcal{P}$ given in Figure \ref{fig:example_pimc}.
Figure \ref{fig:variables_consistency} describes the variables in $\Mec(\mathcal{P})$:
one variable per transition
(\eg $\transition{0}{1}$, $\transition{0}{2}$, $\transition{1}{1}$),
one Boolean variable per state
(\eg $\rho_0$, $\rho_1$),
and one variable per parameter ($\pi_p$ and $\pi_q$).
The following constraints
correspond to the Constraints~\ref{encoding_ec_sum_to_one}, \ref{encoding_ec_cstr_reach_propag}, \ref{encoding_ec_sum_to_zero}, and \ref{encoding_ec_cstr_intervals} generated by our encoding $\Mec$
for the state $2$ of $\mathcal{P}$:
\hspace*{-0.7cm}
\begin{minipage}[t]{.34\textwidth}
\begin{enumerate}
\item[] $\neg \rho_2 \Leftrightarrow \transition{0}{2} = 0$
\item[] $\neg \rho_2 \Leftrightarrow \transition{2}{1} + \transition{2}{2} + \transition{2}{4} = 0$
\end{enumerate}
\end{minipage}
\begin{minipage}[t]{.32\textwidth}
\begin{enumerate}
\item[] $\rho_2 \Leftrightarrow \transition{2}{1} + \transition{2}{2} + \transition{2}{4} = 1$
\item[] $\rho_2 \Rightarrow 0 \leq \transition{2}{1} \leq \pi_p$
\end{enumerate}
\end{minipage}
\begin{minipage}[t]{.32\textwidth}
\begin{enumerate}
\item[] $\rho_2 \Rightarrow 0.2 \leq \transition{2}{2} \leq \pi_p$
\item[] $\rho_2 \Rightarrow 0 \leq \transition{2}{4} \leq 0.5$
\end{enumerate}
\end{minipage}
\end{example}
Finally, Figure \ref{fig:solution_consistency} describes a solution for the {\textnormal{CSP}} $\Mec(\mathcal{P})$.
Note that, given a solution of the {\textnormal{CSP}} produced by $\Mec$ for a {\textnormal{pIMC}},
one can construct a {\textnormal{MC}} satisfying this {\textnormal{pIMC}}
by keeping all the states $s$ s.t. $\rho_s$ is equal to {\true} and
considering the transition function given by the probabilities in the
$\transition{s}{s^\prime}$ variables. We now show that
our encoding works as expected.
\begin{proposition}\label{prop:csp_existential_consistency}
A {\textnormal{pIMC}} $\mathcal{P}$ is existential consistent
iff $\Mec(\mathcal{P})$ is satisfiable.
\end{proposition}
Our existential consistency encoding is linear in the size of the {\textnormal{pIMC}}, whereas the encoding from~\cite{DelahayeLP16} is exponential: it enumerates the powerset of the states of the {\textnormal{pIMC}},
resulting in deeply nested conjunctions and disjunctions.
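For concreteness, Constraints~\ref{encoding_ec_init_state}--\ref{encoding_ec_cstr_intervals} can be handed directly to an SMT solver. The following Python sketch does so for the {\textnormal{pIMC}} $\mathcal{P}$ of Figure~\ref{fig:example_pimc} using the \texttt{z3-solver} API; it is only an illustration of the encoding, not our prototype tool, and the dictionary representation of $\mathcal{P}$ and all identifiers are our own choices.
\begin{verbatim}
from z3 import And, Bool, Implies, Not, Real, Solver, Sum, sat

# The pIMC P of Figure 5; parameters appearing as endpoints are given as strings.
succ = {
    "s0": {"s1": (0.0, 1.0), "s2": (0.0, 1.0)},
    "s1": {"s1": ("q", 1.0), "s3": (0.3, "q")},
    "s2": {"s1": (0.0, "p"), "s2": (0.2, "p"), "s4": (0.0, 0.5)},
    "s3": {"s3": (1.0, 1.0)},
    "s4": {"s3": (0.0, 0.5), "s4": (0.5, "p")},
}
states, init = list(succ), "s0"
param = {y: Real("pi_" + y) for y in ("p", "q")}
rho = {s: Bool("rho_" + s) for s in states}
theta = {(s, t): Real("theta_" + s + "_" + t) for s in states for t in succ[s]}

def ev(e):
    """Map an interval endpoint (a constant or a parameter name) to a z3 term."""
    return param[e] if isinstance(e, str) else e

solver = Solver()
for y in param.values():
    solver.add(0 <= y, y <= 1)
solver.add(rho[init])                                                   # Constraint (1)
for s in states:
    out = Sum([theta[(s, t)] for t in succ[s]])
    solver.add(rho[s] == (out == 1))                                    # Constraint (2)
    solver.add(Not(rho[s]) == (out == 0))                               # Constraint (4)
    if s != init:
        incoming = [theta[(u, s)] for u in states if u != s and s in succ[u]]
        solver.add(Not(rho[s]) == (Sum(incoming) == 0))                 # Constraint (3)
    for t, (lo, hi) in succ[s].items():
        solver.add(0 <= theta[(s, t)], theta[(s, t)] <= 1)
        solver.add(Implies(rho[s], And(ev(lo) <= theta[(s, t)],
                                       theta[(s, t)] <= ev(hi))))       # Constraint (5)

print(solver.check() == sat)  # True: P is existential consistent
\end{verbatim}
A satisfying assignment returned by the solver (such as the one of Figure~\ref{fig:solution_consistency}) directly yields a consistent parameter valuation and a witness {\textnormal{MC}}.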
\subsection{Qualitative Reachability}
Let $\mathcal{P} = (S,s_0,P,V,Y)$ be a \textnormal{pIMC}\ and
$\alpha \subseteq A$ be a state label.
We say that $\alpha$ is {\em existential reachable} in $\mathcal{P}$
iff there exists an implementation $\mathcal{M}$ of $\mathcal{P}$
where $\alpha$ is reachable
(\ie $\Proba^{\mathcal{M}}(\ensuremath{\Diamond} \alpha)>0$).
In a dual way,
we say that $\alpha$ is {\em universal reachable} in $\mathcal{P}$
iff $\alpha$ is reachable in any implementation $\mathcal{M}$ of $\mathcal{P}$.
As for existential consistency, we use a result from~\cite{Delahaye15}
that states that both {\textnormal{pIMC}} semantics are equivalent w.r.t.
existential (and universal) reachability. We therefore
propose a new CSP encoding, written $\ensuremath{{\bf C_{{\exists}r}}}$, that extends $\Mec$, for verifying these properties. Formally,
{\textnormal{CSP}} $\ensuremath{{\bf C_{{\exists}r}}}(\mathcal{P}) = (X \cup X^\prime,D \cup D^\prime,C \cup C^\prime)$ is such that
$(X,D,C) = \Mec(\mathcal{P})$,
$X^\prime$~contains one integer variable $\omega_s$ with domain $[0, |S|]$ per state $s$ in $S$,
$D^\prime$~contains the domains of these variables, and
$C^\prime$ is composed of the following constraints for each state $s \in S$:
\smallskip
\begin{minipage}{\textwidth}
\hspace*{.2cm}
\begin{minipage}[t]{.3\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{5}
\item {$\omega_{s} = 1$, if $s = s_0$}\label{encoding_er_init_state}%
\end{enumerate}
\end{minipage}
\hspace{.02\textwidth}
\begin{minipage}[t]{.3\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{6}
\item { $\omega_s \neq 1$, if $s \neq s_0$}\label{encoding_er_non_init_state}%
\end{enumerate}
\end{minipage}
\hspace{.02\textwidth}
\begin{minipage}[t]{.28\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{7}
\item { $\rho_s \Leftrightarrow (\omega_s \neq 0)$}\label{encoding_er_bool_var}%
\end{enumerate}
\end{minipage}
\hspace*{0.25cm}
\begin{minipage}[t]{.93\textwidth}
\vspace*{-0.25cm}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{8}
\item { $\omega_s > 1 \Rightarrow \bigvee_{s^\prime \in \ensuremath{{\tt{Pred}}}(s) \setminus \{ s \} }(\omega_s = \omega_{s^\prime} + 1) \wedge (\transition{s^\prime}{s} > 0)$,
if $s \neq s_0$}\label{encoding_er_propag_reach}%
\item { $\omega_s = 0 \Leftrightarrow \bigwedge_{s^\prime \in \ensuremath{{\tt{Pred}}}(s) \setminus \{ s \} }(\omega_{s^\prime} = 0) \vee (\transition{s^\prime}{s} = 0)$,
if $s \neq s_0$}\label{encoding_er_propag_non_reach}%
\end{enumerate}
\end{minipage}
\end{minipage}
\smallskip
Recall first that {\textnormal{CSP}} $\Mec(\mathcal{P})$ constructs a Markov chain $\mathcal{M}$ satisfying $\mathcal{P}$.
Informally, for each state $s$ in $\mathcal{M}$, Constraints~\ref{encoding_er_init_state}, \ref{encoding_er_non_init_state}, \ref{encoding_er_propag_reach} and \ref{encoding_er_propag_non_reach} in $\ensuremath{{\bf C_{{\exists}r}}}$ ensure that
if $\omega_s = k \neq 0$ then there exists in $\mathcal{M}$
a path with non-zero probability of length $k-1$ from the initial state to $s$;
and that state $s$ is not reachable in $\mathcal{M}$ from the initial state $s_0$ iff $\omega_s$ equals $0$. Finally, Constraint~\ref{encoding_er_bool_var} enforces the Boolean reachability indicator variable $\rho_s$
to be set to {\true} iff there exists a path with non-zero probability in $\mathcal{M}$
from the initial state $s_0$ to $s$ (\ie $\omega_s \neq
0$).
Let $S_\alpha$ be the set of states from $\mathcal{P}$ labeled with $\alpha$.
$\ensuremath{{\bf C_{{\exists}r}}}(\mathcal{P})$ therefore produces a Markov chain satisfying
$\mathcal{P}$ where reachable states $s$ are such that $\rho_s =
\true$.
As a consequence, $\alpha$ is existential reachable in
$\mathcal{P}$ iff $\ensuremath{{\bf C_{{\exists}r}}}(\mathcal{P})$ admits a solution such that
$\bigvee_{s \in S_\alpha} \rho_s$; and $\alpha$ is universal reachable in
$\mathcal{P}$ iff $\ensuremath{{\bf C_{{\exists}r}}}(\mathcal{P})$ admits no solution such that
$\bigwedge_{s \in S_\alpha} \neg\rho_s$. This is formalised in the following
proposition.
\begin{proposition}\label{prop:model_existential_reachability}
Let $\mathcal{P} = (S, s_0 , P, V, Y)$ be a \textnormal{pIMC},
$\alpha \subseteq A$ be a state label,
$S_\alpha = \{s \ | \ V(s) = \alpha\}$,
and $(X,D,C)$ be the {\textnormal{CSP}} $\ensuremath{{\bf C_{{\exists}r}}}(\mathcal{P})$.
\vspace*{-0.15cm}
\begin{itemize}
\item
\textnormal{CSP}\ $(X,D,C \cup \bigvee_{s \in S_\alpha} \rho_s)$
is satisfiable iff
$\alpha$ is existential reachable in $\mathcal{P}$
\item
\textnormal{CSP}\ $(X,D,C \cup \bigwedge_{s \in S_\alpha} \neg\rho_s)$
is unsatisfiable iff
$\alpha$ is universal reachable in $\mathcal{P}$
\end{itemize}
\end{proposition}
As for the existential consistency problem,
we have an exponential gain in terms of size of the encoding
compared to~\cite{DelahayeLP16}:
the number of constraints and variables in {\ensuremath{{\bf C_{{\exists}r}}}} is linear
in terms of the size of the encoded {\textnormal{pIMC}}.
\custompar{Remark.}
In $\ensuremath{{\bf C_{{\exists}r}}}$ Constraints~\ref{encoding_ec_cstr_reach_propag} inherited from $\Mec$
are entailed by Constraints~\ref{encoding_er_bool_var} and \ref{encoding_er_propag_non_reach} added to $\ensuremath{{\bf C_{{\exists}r}}}$.
Thus, in practice, one may drop Constraints~\ref{encoding_ec_cstr_reach_propag} from $\Mec$
if they do not improve the solver performance.
\section{Quantitative Properties}\hspace*{0cm}
\label{sec:quantitative}
We now move to the verification of quantitative reachability
properties in {\textnormal{pIMC}}s. Quantitative reachability has already been
investigated in the context of {\textnormal{pMC}}s and {\textnormal{IMC}}s with the
once-and-for-all semantics. Due to the complexity of allowing
implementation structures to differ from the structure of the
specifications, quantitative reachability in {\textnormal{IMC}}s with the
at-every-step semantics has, to the best of our knowledge, never been
studied. In this section, we propose our main theoretical contribution: a theorem
showing that both {\textnormal{IMC}} semantics are equivalent with respect to
quantitative reachability, which allows the extension of all results
from~\cite{tulip,benedikt2013ltl} to the at-every-step semantics. Based on this result, we
also extend the CSP encodings introduced in
Section~\ref{sec:qualitative-reachability} in order to solve quantitative
reachability properties on {\textnormal{pIMC}}s regardless of their semantics.
\subsection{Equivalence of $\ensuremath{\models^{o}_{\tt{I}}}$ and
$\ensuremath{\models^{a}_{\tt{I}}}$ w.r.t quantitative reachability}\label{sec:equiv_imc_semantics}
Given an {\textnormal{IMC}} $\mathcal{I} = (S,s_0,P,V)$ and a state label
$\alpha \subseteq A$, a quantitative reachability property on
$\mathcal{I}$ is a property of the type
$\mathbb{P}^{\mathcal{I}}(\ensuremath{\Diamond} \alpha) {\sim} p$, where
$0<p<1$ and ${\sim} \in \{\le, <, >, \ge\}$. Such a property is
verified iff there exists an {\textnormal{MC}} $\mathcal{M}$ satisfying $\mathcal{I}$ (with the chosen semantics) such that
$\mathbb{P}^{\mathcal{M}}(\ensuremath{\Diamond} \alpha) {\sim} p$.
As explained above, all existing techniques and tools for verifying
quantitative reachability properties on {\textnormal{IMC}}s only focus on the
once-and-for-all semantics. Indeed, in this setting, quantitative
reachability properties are easier to compute because the underlying
graph structure of all implementations is known. However, to the best
of our knowledge, there are no works addressing the same problem with
the at-every-step semantics or showing that addressing the problem in
the once-and-for-all setting is sufficiently general. The following
theorem fills this theoretical gap by proving that both semantics are
equivalent w.r.t quantitative reachability. In other words, for all
{\textnormal{MC}} $\mathcal{M}$ such that $\mathcal{M} \ensuremath{\models^{a}_{\tt{I}}}
\mathcal{I}$ and all state label $\alpha$, there exist {\textnormal{MC}}s
$\mathcal{M}_\le$ and $\mathcal{M}_{\ge}$ such that $\mathcal{M}_{\le}
\ensuremath{\models^{o}_{\tt{I}}} \mathcal{I}$, $\mathcal{M}_{\ge}
\ensuremath{\models^{o}_{\tt{I}}} \mathcal{I}$ and
$\mathbb{P}^{\mathcal{M}_{\le}}(\ensuremath{\Diamond} \alpha) \le
\mathbb{P}^{\mathcal{M}}(\ensuremath{\Diamond} \alpha) \le
\mathbb{P}^{\mathcal{M}_{\ge}}(\ensuremath{\Diamond} \alpha)$. This is
formalized in the following theorem.
\begin{theorem}\label{thm:reachability-semantics-equivalence-imcs}
Let $\mathcal{I} = (S,s_0,P,V)$ be an {\textnormal{IMC}}, $\alpha
\subseteq A$ be a state label, ${\sim} \in \{\le,<,>,\ge\}$
and $0<p<1$. $\mathcal{I}$ satisfies
$\mathbb{P}^{\mathcal{I}}(\ensuremath{\Diamond} \alpha) {\sim} p$ with
the once-and-for-all semantics iff $\mathcal{I}$ satisfies
$\mathbb{P}^{\mathcal{I}}(\ensuremath{\Diamond} \alpha) {\sim} p$ with
the at-every-step semantics.
\end{theorem}
The proof is constructive (see
Appendix~\ref{ap:equiv_imc_semantics}): we use the structure of the relation $\mathcal{R}$ from the
definition of $\ensuremath{\models^{a}_{\tt{I}}}$ in order to build the {\textnormal{MC}}s
$\mathcal{M}_{\le}$ and $\mathcal{M}_{\ge}$.
\subsection{Constraint Encodings}
Note that the result from
Theorem~\ref{thm:reachability-semantics-equivalence-imcs} naturally
extends to {\textnormal{pIMC}}s. We therefore exploit this result to construct a
{\textnormal{CSP}} encoding for verifying quantitative reachability properties in
{\textnormal{pIMC}}s.
As in Section~\ref{sec:qualitative-reachability}, we extend the
CSP $\Mec$, that produces a correct $\textnormal{MC}$ implementation for the given
{\textnormal{pIMC}}, by imposing that this $\textnormal{MC}$ implementation satisfies the
given quantitative reachability property. In order to compute the
probability of reaching state label $\alpha$ at the {\textnormal{MC}} level, we
use standard techniques from~\cite{Baier2008PMC} that require the
partitioning of the state space into three sets $S_{\top}$,
$S_{\bot}$, and $S_?$ that correspond to states reaching
$\alpha$ with probability $1$, states from which $\alpha$ cannot be
reached, and the remaining states, respectively. Once this partition is chosen, the
reachability probabilities of all states in $S_?$ are computed as the
unique solution of a linear equation system (see~\cite{Baier2008PMC},
Theorem 10.19, p.766). We now explain how we identify states from
$S_\bot, S_\top$ and $S_?$ and how we encode the
linear equation system, which leads to the resolution of quantitative
reachability.
Let $\mathcal{P} = (S,s_0,P,V,Y)$ be a \textnormal{pIMC}\ and $\alpha \subseteq
A$ be a state label. We start by setting $S_\top = \{s \ |\ V(s) =
\alpha\}$. We then extend $\ensuremath{{\bf C_{{\exists}r}}}(\mathcal{P})$ in order to identify the
set $S_\bot$. Let $\ensuremath{{\bf C^{\prime}_{{\exists}r}}}(\mathcal{P}, \alpha) = (X \cup X^\prime,D
\cup D^\prime,C \cup C^\prime)$ be such that $(X,D,C) =
\ensuremath{{\bf C_{{\exists}r}}}(\mathcal{P})$, $X^\prime$~contains one Boolean variable
$\lambda_s$ and one integer variable $\alpha_s$ with domain $[0, |S|]$
per state $s$ in $S$, $D^\prime$~contains the domains of these
variables, and $C^\prime$ is composed of the following constraints for
each state $s \in S$:
\begin{minipage}{\textwidth}
\hspace{-0.4cm}
\begin{minipage}[t]{.3\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{10}
\item {$\alpha_s = 1$,
if $\alpha = V(s)$}\label{encoding_erprime_target_state}%
\end{enumerate}
\end{minipage}
\hspace*{0.02\textwidth}
\begin{minipage}[t]{.3\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{11}
\item { $\alpha_s \neq 1$,
if $\alpha \ne V(s)$}\label{encoding_erprime_non_target_state}%
\end{enumerate}
\end{minipage}
\hspace*{0.02\textwidth}
\begin{minipage}[t]{.34\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{12}
\item { $\lambda_s \Leftrightarrow (\rho_s \wedge (\alpha_s \neq 0))$}\label{encoding_erprime_bool_var}%
\end{enumerate}
\end{minipage}
\hspace*{-0.4cm}
\begin{minipage}[t]{0.99\textwidth}
\vspace*{-0.2cm}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{13}
\item { $\alpha_s > 1 \Rightarrow \bigvee_{s^\prime \in \ensuremath{{\tt{Succ}}}(s) \setminus \{ s \} }(\alpha_s = \alpha_{s^\prime} + 1) \wedge (\transition{s}{s^\prime} > 0)$,
if $\alpha \ne V(s)$}\label{encoding_erprime_propag_target}%
\item { $\alpha_s = 0 \Leftrightarrow \bigwedge_{s^\prime \in \ensuremath{{\tt{Succ}}}(s) \setminus \{ s \} }(\alpha_{s^\prime} = 0) \vee (\transition{s}{s^\prime} = 0)$,
if $\alpha \ne V(s)$}\label{encoding_erprime_propag_non_target}%
\end{enumerate}
\end{minipage}
\end{minipage}
\smallskip
Note that the variables $\alpha_s$ play a role symmetric to that of the
variables $\omega_s$ from $\ensuremath{{\bf C_{{\exists}r}}}$: instead of indicating the existence
of a path from $s_0$ to $s$, they characterize the existence of a path
from $s$ to a state labeled with $\alpha$. In addition, due to
Constraint~\ref{encoding_erprime_bool_var}, the variables $\lambda_s$ are set to {\true} iff
there exists a path with non-zero probability from the initial state
$s_0$ to a state labeled with $\alpha$ passing by $s$. Thus, $\alpha$ cannot be reached from states such that
$\lambda_s = \false$.
Therefore, $S_\bot = \{s \ |\ \lambda_s = \false\}$.
Finally, we encode the equation system from~\cite{Baier2008PMC} in a
last {\textnormal{CSP}} encoding that extends $\ensuremath{{\bf C^{\prime}_{{\exists}r}}}$. Let
$\ensuremath{{\bf C_{{\exists}\bar{r}}}}(\mathcal{P}, \alpha) = (X \cup X^\prime,D \cup D^\prime,C
\cup C^\prime)$ be such that $(X,D,C) = \ensuremath{{\bf C^{\prime}_{{\exists}r}}}(\mathcal{P},
\alpha)$, $X^\prime$~contains one variable $\pi_s$ per state $s$ in
$S$ with domain $[0, 1]$, $D^\prime$~contains the domains of these
variables, and $C^\prime$ is composed of the following constraints for
each state $s \in S$:
\begin{minipage}{0.93\textwidth}
\begin{minipage}[t]{.45\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{15}
\item {$\neg\lambda_s \Rightarrow \pi_s = 0$}%
\end{enumerate}
\end{minipage}
\begin{minipage}[t]{.45\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{16}
\item { $\lambda_s \Rightarrow \pi_s = 1$,
if $\alpha = V(s)$}%
\end{enumerate}
\end{minipage}
\begin{minipage}[t]{0.67\textwidth}
\begin{enumerate}
\enumerateConstraints
\setcounter{enumi}{17}
\item { $
\lambda_s \Rightarrow \pi_s = \sum_{s^{\prime} \in \ensuremath{{\tt{Succ}}}(s)} \pi_{s^\prime}\, \transition{s}{s^\prime}$,
\hfill if $\alpha \ne V(s)$}%
\end{enumerate}
\end{minipage}
\end{minipage}
\smallskip
As a consequence, the variables $\pi_s$ encode the probability with which
state $s$ eventually reaches $\alpha$ when $s$ is reachable from the
initial state, and $0$ otherwise.
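For illustration, the fragment below shows how such constraints look once expressed in the Z3 Python API for a single state $s$ with two successors $s_1$ and $s_2$; all identifiers are illustrative and do not come from our implementation:
\begin{verbatim}
from z3 import Real, Bool, Solver, Implies, Not

pi_s, pi_s1, pi_s2 = Real('pi_s'), Real('pi_s1'), Real('pi_s2')
p_s_s1, p_s_s2 = Real('p_s_s1'), Real('p_s_s2')   # transition variables
lam_s = Bool('lambda_s')

solver = Solver()
solver.add(Implies(Not(lam_s), pi_s == 0))            # s not relevant
# if V(s) = alpha:  solver.add(Implies(lam_s, pi_s == 1))
# if V(s) != alpha: pi_s is the weighted sum over the successors of s
solver.add(Implies(lam_s, pi_s == pi_s1*p_s_s1 + pi_s2*p_s_s2))
print(solver.check())
\end{verbatim}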
Let $p \in [0, 1] \subseteq \Rset$ be a probability bound. Adding the
constraint $\pi_{s_0} \leq p$ (resp. $\pi_{s_0} \geq p$) to the
previous {\ensuremath{{\bf C_{{\exists}\bar{r}}}}} encoding allows us to determine whether there exists an
{\textnormal{MC}} $\mathcal{M} \ensuremath{\models^a_{\tt{pI}}} \mathcal{P}$ such that
$\mathbb{P}^{\mathcal{M}} (\ensuremath{\Diamond} \alpha) \le p$ (resp.\ $\ge p$).
Formally, let ${\sim} \in \{\leq, <, \geq, >\}$ be a comparison operator;
we write $\not\sim$ for its negation (\eg $\not\leq$ is $>$).
This leads to the following theorem.
\begin{theorem}\label{thm:pimc_reachability_in_cp}
Let $\mathcal{P} = (S, s_0 , P, V, Y)$ be a \textnormal{pIMC},
$\alpha \subseteq A$ be a label,
$p \in [0, 1]$,
${\sim} \in \{\leq,<, \geq,>\}$ be a comparison operator,
and $(X,D,C)$ be $\ensuremath{{\bf C_{{\exists}\bar{r}}}}(\mathcal{P}, \alpha)$:
\vspace*{-0.05cm}
\begin{itemize}
\item
\textnormal{CSP}\ $(X,D,C \cup (\pi_{s_0} \sim p))$
is satisfiable iff
$\exists \mathcal{M} \ensuremath{\models^a_{\tt{pI}}} \mathcal{P}$ s.t. $\Proba^\mathcal{M}(\ensuremath{\Diamond} \alpha) \sim p$
\item
\textnormal{CSP}\ $(X,D,C \cup (\pi_{s_0} \not\sim p))$
is unsatisfiable iff
$\forall \mathcal{M} \ensuremath{\models^a_{\tt{pI}}} \mathcal{P}$: $\Proba^\mathcal{M}(\ensuremath{\Diamond} \alpha) \sim p$
\end{itemize}
\end{theorem}
\section{Prototype Implementation}
Our results have been implemented in a prototype tool\footnote{All
resources, benchmarks, and source code are
available online as a Python library at \url{https://github.com/anicet-bart/pimc_pylib}}
which generates the above CSP encodings, as well as the
CSP encodings from~\cite{DelahayeLP16}. Given a {\textnormal{pIMC}} in a text
format inspired by \cite{tulip},
our tool produces the desired {\textnormal{CSP}} as an SMT instance in the QF\_NRA logic (Quantifier-Free Nonlinear Real
Arithmetic). This instance can then be fed to any
solver accepting the SMT-LIB format with the QF\_NRA logic \cite{BarFT-SMTLIB}. For our
benchmarks, we chose Z3 \cite{Z3} (latest version: 4.5.0).
QF\_NRA does not deal with integer variables. In practice,
logics mixing integers and reals are harder to solve than those over reals only. We therefore obtained better results by encoding integer
variables as real ones. In our implementation, each integer variable $x$ is declared as a real variable whose domain bounds are its original integer domain bounds; we also add the constraint $x < 1 \Rightarrow x = 0$. Since $x$ is only ever incremented, this preserves the same set of solutions.
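For instance, with the Z3 Python API this encoding of an integer variable reads as follows (the variable name and the bound $|S|=10$ are made up for the example):
\begin{verbatim}
from z3 import Real, Solver, Implies, sat

omega_s = Real('omega_s')            # originally an integer variable
s = Solver()
s.add(omega_s >= 0, omega_s <= 10)   # original integer domain bounds
s.add(Implies(omega_s < 1, omega_s == 0))
print(s.check() == sat)
\end{verbatim}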
In order to evaluate our prototype, we extend the
\bench{nand} model from \cite{NPKS05}\footnote{Available online at
\texttt{http://www.prismmodelchecker.com}}. The original {\textnormal{MC}}
\bench{nand} model has already been extended as a {\textnormal{pMC}}
in~\cite{Prophesy}, where the authors consider a single parameter $p$
for the probability that each of the $N$ $nand$ gates fails during the
multiplexing. We extend this model to a {\textnormal{pIMC}} by considering intervals
for the probability that the initial inputs are stimulated, and by using
one parameter per $nand$ gate to represent the
probability that it fails. The {\textnormal{pIMC}}s in text format are
automatically generated from the PRISM model.
Table~\ref{tab:xp} summarizes the size of the considered instances of
the model (in terms of states, transitions, and parameters) and of the
corresponding CSP problems (in terms of number of variables and
constraints). In addition, we also present the resolution time of the
given CSPs using the Z3 solver. Our experiments were performed on a $2.4$ GHz Intel Core
i5 processor with the timeout set to $10$ minutes and the memory limit set to
$2$\,GB.
\begin{table}[t]
{ \scriptsize
\begin{center}
\scalebox{.9}{\begin{tabular}{|ll||ccc|ccc|ccc|ccc|c|}
\hline
& & \multicolumn{3}{c|}{\textnormal{pIMC}} & \multicolumn{3}{c|}{\Mec} & \multicolumn{3}{c|}{\ensuremath{{\bf C_{{\exists}r}}}}
& \multicolumn{3}{c|}{\ensuremath{{\bf C_{{\exists}\bar{r}}}}} \\
\multicolumn{2}{|l||}{Benchmark} &
\#states & \#trans. & \#par. &
\#var. & \#cstr. & time &
\#var. & \#cstr. & time &
\#var. & \#cstr. & time \\
\hline \hline
\bench{nand} & \bench{K=1; N=2} & 104 & 147 & 4 & 255 & 1,526 & 0.17s & 170 & 1,497 & 0.19s & 296 & 2,457 & 69.57s \\
\bench{nand} & \bench{K=1; N=3} & 252 & 364 & 5 & 621 & 3,727 & 0.24s & 406 & 3,557 & 0.30s & 703 & 5,828 & 31.69s \\
\bench{nand} & \bench{K=1; N=5} & 930 & 1,371 & 7 & 2,308 &
13,859 & 0.57s & 1,378 & 12,305 & 0.51s & 2,404 & 20,165 & T.O. \\
\bench{nand} & \bench{K=1; N=10} & 7,392 & 11,207 & 12 & 18,611 & 111,366 & 9.46s & 9,978 & 89,705 & 13.44s & 17,454 & 147,015 & T.O. \\\hline
\end{tabular}}
\end{center}}
\caption{Benchmarks}\label{tab:xp}\vspace{-1cm}
\end{table}
\section{Conclusion and future work}
In this paper, we have compared several Markov Chain abstractions in
terms of succinctness and we have shown that Parametric Interval Markov Chains
are a strictly more succinct abstraction formalism than other existing
formalisms such as Parametric Markov Chains and Interval Markov
Chains. In addition, we have proposed constraint encodings for
checking several properties over {\textnormal{pIMC}}s. In the context of
qualitative properties such as existential consistency or consistent
reachability, the size of our encodings is significantly smaller than
that of other existing solutions. In the quantitative setting, we have
compared the two usual semantics for {\textnormal{IMC}}s and {\textnormal{pIMC}}s and shown
that both semantics are equivalent with respect to quantitative
reachability properties. As a side effect, this result ensures that
all existing tools and algorithms solving reachability problems in
{\textnormal{IMC}}s under the once-and-for-all semantics can safely be extended to
the at-every-step semantics with no changes. Based on this result, we have then proposed
{\textnormal{CSP}} encodings addressing quantitative reachability in the context of
{\textnormal{pIMC}}s regardless of the chosen semantics. Finally, we have
developed a prototype tool that automatically generates our {\textnormal{CSP}}
encodings and that can be plugged into any constraint solver accepting
the SMT-LIB format as input.
We plan to extend our tool for {\textnormal{pIMC}}
verification in order to manage other, more complex, properties (\eg
supporting the LTL language in the spirit of what Tulip \cite{tulip}
does). We also plan to investigate a practical way of computing and
representing the set of {\em all solutions} to the parameter synthesis
problem.
\bibliographystyle{splncs03}
\section{Introduction}
One of the most important recent results of observational
cosmology is the conclusion that the expansion of the Universe is speeding up
rather than slowing down. The combined analysis of type Ia
supernovae, galaxy cluster measurements and WMAP (Wilkinson
Microwave Anisotropy Probe) data gives strong evidence for
accelerated cosmic expansion~\cite{Riess1,Spergel}.
The cosmological acceleration suggests that the present-day
Universe is dominated by a smoothly distributed, slowly varying
cosmic fluid with negative pressure, the so-called dark energy
(DE)~\cite{Tegmark,Astier,Copeland,Gong,ASS,VaStar}. To specify a
component of the cosmic fluid one usually uses a phenomenological
relation between the pressure $p$ and the energy density $\varrho$
of each component of the fluid,
$p=w\varrho$. The function $w$ is called the state parameter.
Contemporary
experiments~\cite{Riess1,Spergel,Tegmark,Astier} strongly
support the statement that the Universe is approximately spatially flat and that the
DE state parameter $w_{DE}^{\vphantom {27}}$ is currently close to
$-1$:
\begin{equation}
w_{DE}^{\vphantom {27}}={}-1\pm 0.2.
\end{equation}
The state parameter $w_{DE}^{\vphantom {27}}\equiv -1$ corresponds
to the cosmological constant. From the theoretical point of view
(see~\cite{VaStar,AKV} and references therein) this domain of
$w_{DE}^{\vphantom {27}}$ covers three essentially different
cases: $w_{DE}^{\vphantom {27}}>-1$, $w_{DE}^{\vphantom
{27}}\equiv {}-1$ and $w_{DE}^{\vphantom {27}}<-1$. From the
observations there is no barrier between these three
possibilities. Moreover, it has been shown in~\cite{VarunStar} that
the state parameter $w_{DE}^{\vphantom {27}}$, which gives the
best fit to the experimental data, evolves from $w_{DE}^{\vphantom
{27}}\simeq 0$ to $w_{DE}^{\vphantom {27}}\leqslant -1$, and for a
large region in parameter space an evolving state parameter
$w_{DE}^{\vphantom {27}}$ is favoured over $w_{DE}^{\vphantom
{27}}\equiv -1$.
The standard way to obtain an evolving state parameter
$w_{DE}^{\vphantom {27}}$ is to include scalar fields into a
cosmological model. Under general assumptions, within single-scalar-field
four-dimensional models one can realize only one of
the following possibilities: $w_{DE}^{\vphantom {27}}\geqslant-1$
(quintessence models) or $w_{DE}^{\vphantom {27}}\leqslant-1$
(phantom models)~\cite{Vikman}. Two-field models with a
crossing of the cosmo\-logical constant barrier $w_{DE}^{\vphantom
{27}}\equiv{}-1$ are known as quintom models and include one
phantom scalar field and one usual scalar field. Note that
most of the phenomenological models describing the crossing of the
cosmological constant barrier~\cite{across-1,Andrianov,AK} use
several scalar fields or modified gravity.
Nowadays string and D-brane theories have found cosmological
applications related to the acceleration of the Universe. In
phenomenological models describing the case $w_{DE} < -1$, all
standard energy conditions are violated and there are problems
with stability at the classical and quantum levels
(see~\cite{Copeland,wless-1,AreVo} and references therein).
A possible way to evade the instability problem for models with
$w_{DE}<-1$ is to treat the phantom model as an effective one,
arising from a more fundamental theory with a normal sign of the
kinetic term. In particular, if we consider a model with higher
derivatives such as $\phi e^{-\Box}\phi$, then in the first
nontrivial approximation we obtain $\phi
e^{-\Box}\phi\simeq\phi^2-\phi\Box\phi$, and such a model gives a
kinetic term with a ghost sign. It turns out that such a
possibility does appear in the string field theory (SFT)
framework~\cite{Arefeva} (see also~\cite{AK,AreVo}), namely in the
theory of the fermionic NSR string with the GSO$-$ sector. According to
Sen's conjecture (see~\cite{SFT-review} for a review), the scalar
field $\phi$ is an open string theory tachyon, describing the
brane decay. The four-dimensional gravitational model with a
phantom scalar field is considered as a string theory
approximation, which makes it possible to avoid the instability
problems.
In this paper we consider an SFT inspired gravitational model with
two scalar fields and a polynomial potential, which is a
generalization of the one-field cosmological model described
in~\cite{AKV}. The first two-field generalization of this
one-field model was proposed in~\cite{AKV3} as a polynomial
model, which has a one-parameter set of exact solutions with a
state parameter $w_{DE}$ that crosses the barrier
$w_{DE}^{\vphantom {27}}=-1$ at large times and reaches $-1$ from
below at infinity. In this paper we construct a new model with a
two-parameter set of exact solutions; for some values of the
parameters we obtain $w_{DE}^{\vphantom {27}}<-1$ at large times,
whereas for others $w_{DE}^{\vphantom {27}}>-1$ at large times. Note
that the different behaviors of $w_{DE}^{\vphantom {27}}$ at large
times correspond to one and the same potential and asymptotic
conditions of the fields.
We study different possibilities for using the superpotential method
and demonstrate that it is very useful not only for constructing
the potential for given exact solutions, but also for seeking new
exact solutions. To demonstrate that the superpotential method
allows one to find a form of a polynomial potential and solutions for
a given Hubble parameter, we construct a toy two-field model for
the Hubble parameter proposed in the SFT inspired model with higher
derivatives~\cite{AK}.
\section{String Field Theory Inspired Two-Field Model}
We consider a model of Einstein gravity interacting with a single
phantom scalar field $\phi$ and one standard scalar field $\xi$ in
the spatially flat Friedmann Universe. In typical cases a phantom
scalar field represents the open string tachyon, whereas the usual
scalar field corresponds to the closed string
tachyon~\cite{Arefeva,AKV3,AJ,LY}. Since the origin of the scalar
fields is connected with the string field theory, the action
contains the typical string mass $M_s$ and a dimensionless open
string coupling constant $g_o$:
\begin{equation}
S=\!\int\! d^4x \sqrt{-g}\left(\frac{M_P^2}{2M_s^2}R+
\frac1{g_o^2}\left(\frac1{2}g^{\mu\nu}(\partial_{\mu}\phi\partial_{\nu}\phi
-\partial_{\mu}\xi\partial_{\nu}\xi) -V(\phi,\xi)\right)\right),
\label{action}
\end{equation}
where $M_P$ is the Planck mass. The Friedmann metric $g_{\mu\nu}$
is spatially flat:
\begin{equation*}
ds^2={}-dt^2+a^2(t)\left(dx_1^2+dx_2^2+dx_3^2\right),
\end{equation*}
where $a(t)$ is a scale factor. The coordinates $(t,x_i)$ and
fields $\phi$ and $\xi$ are dimensionless.
If the scalar fields depend only on time,
then the equations of motion are as follows
\begin{eqnarray}
&H^2=\displaystyle\frac{1}{3m_p^2}\left({}-\frac12\dot\phi^2
+\frac12\dot\xi^2+V\right),
\label{eom1}
\\
&\dot
H=\displaystyle\frac{1}{2m_p^2}\left(\dot\phi^2-\dot\xi^2\right),
\label{eom2}
\\
&\ddot\phi+3H\dot\phi=\displaystyle{}\frac{\partial
V}{\partial\phi}, \label{eom3}
\\
&\ddot\xi+3H\dot\xi=\displaystyle{}-\frac{\partial
V}{\partial\xi}. \label{eom4}
\end{eqnarray}
For brevity, hereafter we use the dimensionless parameter $m_p$:
$m_p^2=g_o^2M_P^2/M_s^2$. A dot denotes the time derivative. The
Hubble parameter is $H\equiv \dot a(t)/a(t)$. Note that only three of
the four differential equations (\ref{eom1})--(\ref{eom4}) are
independent. Equation (\ref{eom4}) is a consequence of
(\ref{eom1})--(\ref{eom3}).
The DE state parameter can be expressed in terms of the Hubble
parameter:
\begin{equation}
\label{w} w_{DE}=-1-\frac23\frac{\dot H}{H^2}.
\end{equation}
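This relation follows directly from the definitions: for the fields in action (\ref{action}) the total energy density and pressure (in the dimensionless units used here) are
$\varrho={}-\frac12\dot\phi^2+\frac12\dot\xi^2+V$ and
$p={}-\frac12\dot\phi^2+\frac12\dot\xi^2-V$, so that (\ref{eom1}) and (\ref{eom2}) give
$\varrho=3m_p^2H^2$ and $\varrho+p={}-2m_p^2\dot H$, and hence
\begin{equation*}
w_{DE}=\frac{p}{\varrho}=-1+\frac{\varrho+p}{\varrho}=-1-\frac23\frac{\dot H}{H^2}.
\end{equation*}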
The crossing of the cosmological constant barrier $w_{DE}=-1$
corresponds to a change of sign of $\dot H$. Phantom-like
behavior corresponds to an increasing Hubble parameter. If we
know the explicit form of the fields $\phi(t)$ and $\xi(t)$ but do not
know the potential $V(\phi,\xi)$, then, using eq.~(\ref{eom2}), we
can obtain $H(t)$ up to a constant:
\begin{equation}
\label{Ht2}
H(t)=\frac{1}{2m_p^2}\left(\int\limits^t\dot\phi^2(\tau) d\tau
-\int\limits^t\dot\xi^2(\tau) d\tau\right) +C.
\end{equation}
At the same time, if we know $H(t)$, we can find the potential as a
function of time:
\begin{equation}
\label{Vt}
V(t)=m_p^2\left(3H(t)^2+\dot H(t)\right).
\end{equation}
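Indeed, eq.~(\ref{eom1}) gives $V=3m_p^2H^2+\frac12\dot\phi^2-\frac12\dot\xi^2$, and substituting $\frac12\left(\dot\phi^2-\dot\xi^2\right)=m_p^2\dot H$ from eq.~(\ref{eom2}) immediately yields (\ref{Vt}).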
The Aref'eva DE model~\cite{Arefeva} (see
also~\cite{AKV,AK,AKV3,AKV2,AKVBulg}) assumes that our Universe is
a slowly decaying D3-brane and its dynamics is described by the
open string tachyon mode. To describe the open string tachyon
dynamics a level truncated open string field theory is used. The
notable feature of such tachyon dynamics is a non-local polynomial
interaction~\cite{SFT-review,ABKM1,ABKM2,Witten-SFT,AMZ-PTY,BSZ}.
It has been found that the open string tachyon behavior is
effectively modelled by a scalar field with a negative kinetic
term~\cite{AJK}. In this paper we consider local models with
effective potentials $V(\phi,\xi)$. The form of these potentials
is assumed to be given by the string field theory within the
level truncation scheme. Usually, for a finite order truncation the
potential is a polynomial and its particular form depends on the
string type. The level truncated cubic open string field theory
fixes the form of the interaction of local fields to be a cubic
polynomial with non-local form-factors. Integrating out low-lying
auxiliary fields one gets a fourth degree
polynomial~\cite{ABKM1,ABKM2}. Higher order auxiliary fields may
change the coefficients of lower degree terms and produce higher
degree monomials.
The back reaction of this brane is incorporated in the dynamics of
the closed string tachyon. The scalar field $\xi$ comes from the
closed string sector, similar to~\cite{Oh} and its effective local
description is given by an ordinary kinetic term~\cite{LY} and,
generally speaking, a non-polynomial self-interaction~\cite{BZ}.
An exact form of the open-closed tachyon interaction is not known
and, following~\cite{AKV3}, we consider the simplest polynomial
interaction.
More exactly we impose the following restrictions on the potential
$V(\phi,\xi)$:
\begin{itemize}
\item the potential is the sixth degree polynomial:
\begin{equation}
\label{potenV}
V(\phi,\xi)=\sum_{k=0}^{6}\sum_{j=0}^{6-k}c_{kj}\phi^k\xi^j,
\end{equation}
\item the coefficients in front of the fifth and sixth powers are of
order $1/m_p^2$, and the limit $m_p^2\to \infty$ gives a nontrivial
fourth degree potential,
\item the potential is even: $V(\phi,\xi)=V(-\phi,-\xi)$. It means
that if $k+j$ is odd, then $c_{kj}=0$.
\end{itemize}
From the SFT we can also assume asymptotic conditions for
solutions. To specify the asymptotic conditions for scalar fields
let us recall that we have in mind the following picture. We
assume that the phantom field $\phi(t)$ smoothly rolls from the
unstable perturbative vacuum ($\phi=0$) to a nonperturbative one,
for example $\phi=1$, and stops there. The field $\xi(t)$
corresponds to the closed string and is expected to go asymptotically
to zero in the infinite future. In other words, we seek such a
function $\phi(t)$ that $\phi(0)=0$ and it has a non-zero
asymptotic value at $t\to +\infty$: $\phi(+\infty)=A$. The function
$\xi(t)$ should have a zero asymptotic value at $t\to +\infty$. At the
same time we cannot calculate the explicit form of the solutions in
the string field theory framework.
In this paper we show how, using the superpotential method, one can
construct a potential and exact solutions which satisfy the
conditions obtained in the SFT framework.
\section{The Method of Superpotential}
Gravitational models with one or a few scalar fields play an
important role in cosmology and in theories with extra dimensions.
One of the main problems in the investigation of such models is to
construct exact solutions of the equations of motion. System
(\ref{eom1})--(\ref{eom4}) with a polynomial potential
$V(\phi,\xi)$ is not integrable. Moreover, we cannot integrate
even models with one scalar field and a polynomial potential.
The superpotential method has been proposed for the construction of a
potential that corresponds to given exact solutions in
five-dimensional gravitational models~\cite{DeWolfe}. The main
ideas of this method are to consider the function $H(t)$ (the Hubble
parameter in cosmology) as a function (superpotential) of the scalar
fields and to construct the potential for special solutions
given in explicit form.
Let
\begin{equation}
H(t)=W\bigl(\phi(t),\xi(t)\bigr).
\end{equation}
Equation (\ref{eom2}) can be rewritten as follows
\begin{equation}
\frac{\partial W}{\partial\phi}\dot\phi+\frac{\partial
W}{\partial\xi}\dot\xi=\frac1{2m_p^2}\left(\dot\phi^2-\dot\xi^2\right).
\label{alexey_W_equation}
\end{equation}
If one finds a $W(\phi,\xi)$ such that the relations
\begin{equation}
\dot\phi=2m_p^2\frac{\partial W}{\partial\phi},
\label{deWolfe_method1}
\end{equation}
\begin{equation}
\dot\xi={}-2m_p^2\frac{\partial W}{\partial\xi},
\label{deWolfe_method2}
\end{equation}
\begin{equation}
V =3m_p^2W^2+2m_p^4\left(\left(\frac{\partial W}{\partial
\phi}\right)^2-\left(\frac{\partial W}{\partial
\xi}\right)^2\right) \label{deWolfe_potential}
\end{equation}
are satisfied, then the corresponding $\phi(t)$, $\xi(t)$ and
$H(t)$ are a solution of system (\ref{eom1})--(\ref{eom4}).
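Indeed, substituting (\ref{deWolfe_method1}) and (\ref{deWolfe_method2}) into
$\dot H=\frac{\partial W}{\partial\phi}\,\dot\phi+\frac{\partial W}{\partial\xi}\,\dot\xi$
immediately reproduces (\ref{eom2}),
\begin{equation*}
\dot H=2m_p^2\left(\left(\frac{\partial W}{\partial\phi}\right)^2
-\left(\frac{\partial W}{\partial\xi}\right)^2\right)
=\frac1{2m_p^2}\left(\dot\phi^2-\dot\xi^2\right),
\end{equation*}
while (\ref{deWolfe_potential}), rewritten in terms of $\dot\phi$ and $\dot\xi$, reads
$V=3m_p^2W^2+\frac12\dot\phi^2-\frac12\dot\xi^2$, which is exactly (\ref{eom1}).
Equations (\ref{eom3}) and (\ref{eom4}) then follow by differentiating
(\ref{deWolfe_method1}) and (\ref{deWolfe_method2}) with respect to time.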
The superpotential method separates system
(\ref{eom1})--(\ref{eom4}) into two parts: system
(\ref{deWolfe_method1})--(\ref{deWolfe_method2}), which is, as a
rule, integrable for a given polynomial $W(\phi,\xi)$, and
equation (\ref{deWolfe_potential}), which is not integrable if
$V(\phi,\xi)$ is a polynomial, but has special polynomial
solutions. The standard way of using the superpotential method does not
include solving eq.~(\ref{deWolfe_potential}): the potential
$V(\phi,\xi)$ is constructed by means of the given $W(\phi,\xi)$.
There are a few ways to use the superpotential method. The
standard way~\cite{DeWolfe} is to construct the potential for the
solutions given in the explicit form. We assume an explicit form
of solutions, find the superpotential $W$ and use
(\ref{deWolfe_potential}) to obtain the corresponding potential
$V$. If we consider one-field models, putting, for example,
$\xi\equiv 0$ in (\ref{eom1})--(\ref{eom4}), then from
(\ref{deWolfe_method1}) we obtain $W(\phi)$ up to a constant. At
the same time solving (\ref{deWolfe_method1}) we obtain a
one-parameter set of solutions: $\phi(t-t_0)$. So in the case of
one-field models we have the following correspondence
\begin{equation}
\phi(t-t_0) \quad \leftrightarrow \quad W(\phi)+C,
\label{phiW}
\end{equation}
where $t_0$ and $C$ are arbitrary constants.
In two-field models the correspondence (\ref{phiW}) does not
exist, and the superpotential method makes it possible to find
new solutions. Indeed,
equations~(\ref{deWolfe_method1})--(\ref{deWolfe_method2}) form
a second-order system of differential equations. If this system
is integrable, then we obtain a two-parameter set of solutions.
Assuming some explicit form of the solutions means fixing a
one-parameter set of solutions. The superpotential method allows
one to generalize this set of solutions up to a two-parameter set. On
the other hand, we can construct different forms of the superpotential
and the potential that correspond to one and the same one-parameter
set of solutions.
The idea to consider the Hubble
parameter as a function of scalar fields and to transform
(\ref{eom1})--(\ref{eom4}) into
(\ref{deWolfe_method1})--(\ref{deWolfe_potential}) has been used
in the Hamilton--Jacobi formulation of the Friedmann
equations~\cite{Muslimov,Salopek} (see also~\cite{Liddle}) and is
not connected with supersymmetric and supergravity theories. At
the same time, the idea to apply system
(\ref{deWolfe_method1})--(\ref{deWolfe_potential}) instead of the
original equations of motion and to seek in this way exact
special solutions is actively used in two-dimensional field
models~\cite{Bazeia95,Bazeia99}, supergravity~\cite{Brandhuber}
and supersymmetric models with BPS states.
Equations~(\ref{deWolfe_method1}) and (\ref{deWolfe_method2}) are
known as the Bogomol'nyi equations~\cite{Bogomolnyi} (see
also~\cite{Bazeia99}). The superpotential method is a combination
and a natural extension of these two approaches. At present the
superpotential method is actively used in
cosmology~\cite{AKV,AKV3,BazeiaDE,BazeiaCDM}. Let us note
generalizations of this method to the equations of motion for the
closed and open Friedmann universes~\cite{BazeiaDE}, to systems with
cold dark matter~\cite{BazeiaCDM} and to the Brans--Dicke
theory~\cite{MMVS}.
\section{The construction of potentials for the given solutions}
\subsection{Non-polynomial potential}
In this section we demonstrate that one and the same solutions can
correspond to both polynomial and non-polynomial potentials. In
the next section we show that the superpotential method allows one to
find different exact solutions, which correspond to different
behaviors of the Hubble parameter but to one and the same potential.
From the asymptotic conditions we assume the following explicit
form of solution:
\begin{equation}
\label{sol2} \phi(t)=A\tanh(\omega t) \qquad \mbox{and}\qquad
\xi(t)=\frac{A\sqrt{2(1+b)}}{\cosh(\omega t)},
\end{equation}
where $A>0$, $\omega>0$ and $b>-1$.
From (\ref{Ht2}) we obtain
\begin{equation}
\label{Ht1}
H(t)=\frac{A^2\omega }{6m_p^2}\Bigl(3\tanh(\omega t)-(3+2b)\tanh^3(\omega t)\Bigr).
\end{equation}
Note that this kink-lump solution is a natural generalization of
the kink solution for the one-field phantom model~\cite{AKV}. The
behavior of the Hubble parameter at large time depends on the
parameter $b$. From the contemporary experimental data it follows
that the present-day Universe is expanding, which corresponds
to $H>0$ at large times. The condition $\lim\limits_{t\to+\infty}H>0$
is equivalent to $b<0$. Hence, we require $-1<b<0$.
On the other hand, in the past there were eras of accelerated
and decelerated expansion of the Universe; this means that the Hubble
parameter $H$ cannot be a monotonic function and should have an
extremum at some point $t_c>0$. From (\ref{Ht1}) we obtain that
\begin{equation}
t_{c}=\frac{1}{\omega}\mathop{\mathrm{arccosh}}\left(\pm
\frac{\sqrt{2(b+1)(2b+3)}}{2(b+1)} \right). \label{tc0}
\end{equation}
We have assumed that $b>-1$, so the sign ``$+$'' corresponds to a real
$t_{c}$. At $t>0$ the Hubble parameter $H$ has one extremum,
namely a maximum. The corresponding DE state parameter $w_{DE}$ is
given by
\begin{equation}
w_{DE}=-1+12m_p^2\frac{\left(2(b+1)\cosh(\omega
t)^2-3-2b\right)\cosh(\omega t)^2} {A^2\sinh(\omega
t)^2(2b\sinh(\omega t)^2-3)^2}.
\end{equation}
It is easy to see that at large times $w_{DE}>-1$, so we obtain
quintessence-like behavior of the Universe\footnote{It has been
shown~\cite{AKVBulg} that if we consider another pair of scalar
fields
\begin{equation}
\label{sol2bulg} \tilde{\phi}(t)=\tanh(t), \qquad
\tilde{\xi}(t)=\frac{\sqrt{(1+b)}}{\cosh(2 t)},
\end{equation}
then for some values of parameter $b$, for example $b=-0.01$, the
corresponding Hubble parameter has both a maximum and a minimum at
$t>0$ and increases at large time. Note that the polynomial
potential, which corresponds to solutions~(\ref{sol2bulg}), is not
known.}.
Let us construct a potential that corresponds to the fields
(\ref{sol2}). The functions $\phi(t)$ and $\xi(t)$ are solutions
of the following system of differential equations:
\begin{equation}
\label{equphixi} \left\{
\begin{split}
\dot\phi&=A\omega-\frac{\omega}{A}\phi^2, \\
\dot\xi&=\omega\xi\sqrt{1-\frac{\xi^2}{2(1+b)A^2}}.
\end{split}
\right.
\end{equation}
The straightforward use of the superpotential method gives
\begin{equation}
\label{DW} \frac{\partial
W}{\partial\phi}=\frac{\omega}{2m_p^2}\Bigl(A-\frac{1}{A}\phi^2\Bigr),
\qquad \frac{\partial
W}{\partial\xi}={}-\frac{\omega\xi}{2m_p^2}\sqrt{1-\frac{\xi^2}{2(1+b)A^2}}.
\end{equation}
Therefore,
\begin{equation}
\label{W1}
H\equiv W=\frac{\omega}{6m_p^2}\left(3A\phi-\frac{\phi^3}{A}-
\sqrt{\frac{\left(2(1+b)A^2-\xi^2\right)^3}{2(1+b)A^2}} + H_0\right),
\end{equation}
where $H_0$ is an arbitrary constant. Different values of $H_0$
correspond to different $V(\phi,\xi)$. The obtained potentials
\begin{equation}
\label{V1}
V=\omega^2\left(\frac{\left(A^2-\phi^2\right)^2}{2A}
-\frac{\xi^2}{2}+ \frac{\xi^4}{4(1+b)A^2}\right)+3m_p^2W^2
\end{equation}
are polynomial only in the flat space-time limit ($m_p^2=\infty$)
and do not satisfy the conditions of Section 2.
The goal of this paper is to construct a polynomial potential
model with such a set of exact solutions that the quintessence large-time
behavior corresponds to some solutions and the phantom large-time
behavior corresponds to others. The potential and the
solutions should satisfy the conditions from Section 2; in other words,
our model should be an SFT inspired one. We make this
construction in two steps. At the first step we construct a
polynomial potential for (\ref{sol2}). At the second step we find
new solutions for the obtained polynomial potential.
\subsection{New polynomial potentials for the given solutions} Let us
construct for the functions (\ref{sol2}) such a superpotential
that the corresponding potential has a polynomial form.
The functions (\ref{sol2}) satisfy not only system (\ref{equphixi}),
but also the following system of differential equations:
\begin{equation}
\left\{
\begin{split}
\dot\phi&=\displaystyle A\omega b\left(\frac{\phi^2}{A^2}-1\right)+\frac{\omega\xi^2}{2A},\\
\dot\xi&=\displaystyle {}-\frac{\omega}{A}\phi\,\xi.
\end{split}
\right.
\label{time_dependence-anzats-1}
\end{equation}
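The system (\ref{time_dependence-anzats-1}) can be checked against (\ref{sol2}) directly with a computer algebra system; the following SymPy fragment is a minimal sketch of such a check (it is not part of the analytical derivation, and the particular simplification strategy is just one possible choice):
\begin{verbatim}
import sympy as sp

t, w, A, b = sp.symbols('t omega A b', real=True)
phi = A*sp.tanh(w*t)
xi  = A*sp.sqrt(2*(1 + b))/sp.cosh(w*t)

# residuals of the two equations of the system; both should vanish
eq1 = sp.diff(phi, t) - (A*w*b*(phi**2/A**2 - 1) + w*xi**2/(2*A))
eq2 = sp.diff(xi, t) + (w/A)*phi*xi

print(sp.simplify(eq1.rewrite(sp.exp)))   # 0
print(sp.simplify(eq2.rewrite(sp.exp)))   # 0
\end{verbatim}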
The corresponding Hubble parameter (superpotential) is given by
\begin{equation}
H=\tilde{W}=\frac{\omega\phi}{2m_p^2}\left(Ab\left(\frac{\phi^2}{3A^2}-1\right)+
\frac{\xi^2}{2A}\right)+H_0. \label{W2}
\end{equation}
To obtain an even potential we put $H_0=0$:
\begin{equation}
\tilde{V}=\frac{\omega^2}{2}\left(b\left(\phi^2-1\right)+\frac{1}
{2}\xi^2\right)^2 -\frac{\omega^2}{2A^2}\phi^2\xi^2+
\frac{3\omega^2\phi^2}{4m_p^2}\left(Ab\left(\frac{\phi^2}{3A^2}-1\right)+
\frac{\xi^2}{2A}\right)^2. \label{alexey_V}
\end{equation}
This example shows that the same functions $\phi(t)$ and $\xi(t)$
can correspond to essentially different potentials $V(\phi,\xi)$.
So, we conclude that in two-field models one has more freedom to
choose the potential without changing the solutions than in one-field
models. Moreover, the solutions do not change if we add to the
potential $\tilde{V}$ (or $V$) a function $\delta V$, which is
such that $\delta V$, $\partial(\delta V)/\partial\phi$ and
$\partial(\delta V)/\partial\xi$ are zero on the solution. For
example, we can add
\begin{equation}
\delta
V=K(\phi,\xi)\left[\phi^2+\frac{1}{2(1+b)}\xi^2-A^2\right]^2,
\label{deltaV}
\end{equation}
where $K(\xi,\phi)$ is a smooth function. So, we can obtain new
potentials, which correspond to the given exact
solutions~(\ref{sol2}), without constructing new
superpotentials.
\section{Construction of new solutions via the superpotential method}
In the previous section we have shown how one can choose a potential for
given solutions. In this section we demonstrate the
possibility of finding new exact solutions (maybe in quadratures)
using the superpotential method. Let us consider the model with the
potential (\ref{alexey_V}). It is easy to see that system
(\ref{time_dependence-anzats-1}) has not only the solutions
(\ref{sol2}), but also the trivial solutions $\{\phi(t)=\pm A,
\quad \xi(t)=0\}$ and the solution
\begin{equation}
\label{solxi0}
\phi(t)=-A\tanh(\omega b(t-t_0)), \qquad \xi(t)=0.
\end{equation}
If $\xi(t)\not\equiv 0$, then, using the second equation of
(\ref{time_dependence-anzats-1}), we obtain the second order
differential equation in $\xi(t)$:
\begin{equation}
\label{equ2xi}
\ddot \xi(t)=\omega^2b\,\xi(t)-\frac{\omega^2 \xi^3(t)}{2A^2}
+\frac{(1-b)\dot \xi^2(t)}{\xi(t)}.
\end{equation}
The solutions of eq. (\ref{equ2xi}) with $b>-1$ are defined in
quadratures
\begin{equation}
\label{solxi}
t-t_0=\pm\int\frac{A\sqrt{2(1+b)}\xi^{b-1}}
{\omega\sqrt{2A^2\xi^{2b}+2A^2b\xi^{2b}+\xi^{2b+2}+2A^2C+2A^2bC}}\,d\xi,
\end{equation}
where $C$ and $t_0$ are arbitrary constants. For some values of the
parameter $b$ the general solution to
(\ref{time_dependence-anzats-1}) can be written in explicit
form; for example, at $b={}-1/2$ we obtain:
\begin{equation}
\label{solphixi_1_2}
\begin{split}
\phi(t)&=\frac{A\left(\left(C_1^2C_2^2+4A^2\right)e^{\omega t}
-C_1^2e^{-\omega t}\right)}
{\left(C_1^2C_2^2+4A^2\right)e^{\omega t}+2C_1^2C_2+C_1^2e^{-\omega t}},\\
\xi(t)&= \frac{4C_1A^2}{\left(C_1^2C_2^2+4A^2\right)e^{\omega
t}+2C_1^2C_2+C_1^2e^{-{\omega t}}},
\end{split}
\end{equation}
where $C_1$ and $C_2$ are arbitrary parameters. It is easy to
check that for all values of $C_1$ except $C_1=0$, and for all $C_2$,
the solutions (\ref{solphixi_1_2}) and the Hubble parameter $H(t)$
satisfy the following asymptotic conditions:
\begin{equation}
\phi(\pm\infty)=\pm A,\qquad \xi(\pm\infty)=0,\qquad
H(\pm \infty)={}\pm \frac{A^2\omega}{6m_p^2}.
\end{equation}
So we have constructed a gravitational model with a two-parameter
set of exact solutions. The potential and the solutions satisfy the
conditions imposed by the string field theory considerations (see
Section 2).
Let us analyze the properties of the obtained solutions and their
cosmological consequences. System (\ref{time_dependence-anzats-1})
is invariant under the change of $\xi(t)$ to $-\xi(t)$, so each solution
$\phi(t)$ corresponds to two solutions ${}\pm\xi(t)$. Note that
the function $\phi(t)$ is invariant under the change $C_1 \rightarrow
{}-C_1$, whereas the function $\xi(t)$ changes sign. The Hubble
parameter depends on $\xi^2$, so, without loss of generality, we
can put $C_1>0$.
System~(\ref{time_dependence-anzats-1}) is an autonomous one, so if
there exists a solution $\{\tilde{\phi}(t), \tilde{\xi}(t)\}$,
then the pair of functions
$\{\tilde{\phi}(t-t_0),\tilde{\xi}(t-t_0)\}$, where $t_0\in
\mathbb{C}$, has to be a solution as well. It is convenient to use
in (\ref{solphixi_1_2}) such parameters that one of them
corresponds to a shift of the solutions in time. We put
$C_1=\exp(t_0)$. Using the restriction $t_0\in \mathbb{R}$ we come
to the condition $C_1>0$. For brevity we introduce the new parameter
$C\equiv C_1C_2$ instead of $C_2$. Solutions (\ref{solphixi_1_2})
take the following form:
\begin{equation} \label{phixi+C}
\begin{split}
\phi(t)&={}\frac{A\left((C^2+4A^2)e^{\omega(t-t_0)}-e^{-\omega(t-t_0)}\right)}
{(C^2+4A^2)e^{\omega(t-t_0)}+2C+e^{-\omega(t-t_0)}},\\ \xi(t)&=
\frac{{}4A^2}{(C^2+4A^2)e^{\omega(t-t_0)}+2C+e^{-\omega(t-t_0)}}.
\end{split}
\end{equation}
To compare the obtained solutions with the initial solution
(\ref{sol2}), we introduce the new parameter $t_1\equiv t_0+t_{00}$,
where
\begin{equation}
\label{t00}
t_{00}\equiv{}-\frac{1}{2\omega}\ln\left(C^2+4A^2\right).
\end{equation}
Now functions $\phi(t)$ and $\xi(t)$ are
\begin{equation}
\label{phixi2}
\begin{split}
\phi(t)&={}\frac{A\left(e^{\omega(t-t_1)}-e^{-\omega(t-t_1)}\right)}
{e^{\omega(t-t_1)}+\frac{2C}{\sqrt{C^2+4A^2}}+e^{-\omega(t-t_1)}},\\
\xi(t)&=
\frac{4A^2}{\sqrt{C^2+4A^2}\left(e^{\omega(t-t_1)}+\frac{2C}{\sqrt{C^2+4A^2}}
+e^{-\omega(t-t_1)}\right)}.
\end{split}
\end{equation}
Let us consider solutions with $t_1=0$. It is easy to see that in
this case
\begin{equation}
\phi(0)=0,\qquad \dot\phi(0)=\frac{A\omega\sqrt{C^2+4A^2}}{C+\sqrt{C^2+4A^2}}>0
\qquad \mbox{and} \qquad \dot\xi(0)=0.
\end{equation}
From (\ref{eom2}) it follows that $\dot H(0)>0$ and from
(\ref{W2}) it follows that $H(0)=0$. Therefore, solutions with
$t_1=0$ and an arbitrary $C$ are cosmological bounce solutions (see, for
example,~\cite{Cai07}); in other words, $a(t)$ has a bounce at the point $t=0$.
Let us consider how the behavior of the Hubble parameter $H(t)$
depends on $C$.
In the case $C=0$ we have solutions
\begin{equation}
\label{sol2t1}
\phi_0(t)=A\tanh(\omega(t-t_1)) \qquad \mbox{and} \qquad
\xi_0(t)={}\frac{A}{\cosh(\omega(t-t_1))}.
\end{equation}
At $t_1=0$ these solutions coincide with solutions (\ref{sol2}) for $b=-1/2$.
The corresponding Hubble parameter
\begin{equation}
\label{Ht_th_ch}
H_0=\frac{A^2\omega}{6m_p^2}\Bigl(3\tanh(\omega t)-2\tanh^3(\omega t)\Bigr)
\end{equation}
has a maximum at the point
$t_{max}={}-\ln\left(\sqrt{2}-1\right)/\omega\simeq 0.881/\omega$
and exhibits quintessence-like large-time behaviour. The solutions $\phi_0$
and $\xi_0$, the Hubble parameter $H_0$ and the state parameter
$w_{DE}^{\vphantom {27}}$ are presented in \Fref{H_th_ch} (we put
$A=1$, $\omega=1$ and $m_p^2=1/6$).
\begin{figure}[h]
\centering
\includegraphics[width=50mm]{figure1a.eps} { \ \ }
\includegraphics[width=50mm] {figure1b.eps} { \ \ }
\includegraphics[width=50mm]{figure1c.eps}
\caption{The fields $\phi$ and $\xi$ (left), the Hubble parameter
$H$ (center) and the state parameter $w_{DE}$ (right) at $C=0$ and
$t_1=0$.} \label{H_th_ch}
\end{figure}
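The qualitative behaviour shown in \Fref{H_th_ch} can be reproduced numerically from (\ref{Ht_th_ch}) and (\ref{w}) alone; the short script below is an illustrative sketch that simply evaluates these two formulas for the same parameters $A=1$, $\omega=1$, $m_p^2=1/6$ and is not part of the analytical treatment:
\begin{verbatim}
import numpy as np

A, w, mp2 = 1.0, 1.0, 1.0/6.0
t = np.linspace(0.01, 6.0, 600)             # t > 0, where H > 0

th  = np.tanh(w*t)
H   = A**2*w/(6*mp2)*(3*th - 2*th**3)        # eq. (Ht_th_ch), C = 0
dH  = A**2*w**2/(6*mp2)*(3 - 6*th**2)/np.cosh(w*t)**2
wDE = -1.0 - (2.0/3.0)*dH/H**2               # eq. (w)

print(t[np.argmax(H)])    # maximum of H near 0.881/omega
print(wDE[-1] > -1)       # quintessence-like large-time behaviour: True
\end{verbatim}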
For arbitrary $C$ the Hubble parameter is as follows:
\begin{equation}
\label{Hgen}
\begin{split}
H&=\frac{A^2\omega\left(e^{\omega(t_0-t)}+\left(C^2+4A^2\right)e^{\omega(t-t_0)}
\right)} {6m_p^2\left(e^{\omega(t_0-t)}
+2C+\left(C^2+4A^2\right)e^{\omega(t-t_0)}\right)^3}
\Bigl(e^{2\omega(t_0-t)}+6Ce^{\omega(t_0-t)}+{}\\&{}+10\left(C^2+4A^2\right)
+6C\left(C^2+4A^2\right)e^{\omega(t-t_0)}+\left(C^2+4A^2\right)^2e^{2\omega(t-t_0)}\Bigr).
\end{split}
\end{equation}
Straightforward calculations show that for all $C$ except
$C=\pm2A$, $\dot H(t)=0$ at the four points
\begin{equation}
\label{tmax}
t_{m_k^{\vphantom{27}}}=t_0-\frac{1}{\omega}\ln\left(-\frac{4A^2+C^2\pm2A\sqrt{8A^2+2C^2}}{(C\pm
2A)(C^2+4A^2)}\right), \qquad k=1,\dots,4,
\end{equation}
where the two signs ``$\pm$'' are independent. Note that if $C\neq \pm
2A$, then $\ddot H(t_{m_k^{\vphantom{27}}})\neq 0$. Therefore, the
Hubble parameter $H(t)$ has extrema at the points
$t_{m_k^{\vphantom{27}}}$.
At $C>2A$ none of the four points $t_{m_k^{\vphantom{27}}}$ belongs
to the real axis.
If $C=2A$, then $\dot H(t)=0$ at two points, which do not belong
to the real axis:
\begin{equation}
\tilde{t}_{m_1^{\vphantom{27}}}=t_0-\frac{1}{\omega}\ln(-2A)
\qquad\mbox{and} \qquad
\tilde{t}_{m_2^{\vphantom{27}}}=t_0-\frac{1}{\omega}\ln(-4A).
\end{equation}
So, at $C\geqslant 2A$ the Hubble parameter $H(t)$ is a
monotonically increasing function and its behavior is close to the
behavior of the Hubble parameter in the one-field model~\cite{AKV}.
At $-2A<C<2A$ the function $H(t)$ has extrema at two points. If
$t_1=0$, then $\phi(t)$ is an odd function, whereas $\xi(t)$ is an
even one. Therefore the corresponding Hubble parameter, calculated
by means of (\ref{W2}), is an odd function. It is easy to check
that on the semi-axis $t>0$ the Hubble parameter $H(t)$ is positive
and, hence, has a maximum at $C<2A$ (see~\Fref{Hpl}). Thus, the
behavior of $H(t)$ in the case $-2A<C<2A$ looks like its behavior
at $C=0$.
\begin{figure}[h]
\centering
\includegraphics[width=50mm]{figure2a.eps} { \ \ }
\includegraphics[width=50mm]{figure2b.eps} { \ \ }
\includegraphics[width=50mm]{figure2c.eps}
\caption{The fields $\phi$ and $\xi$ (left), the Hubble parameter
$H$ (center) and the state parameter $w_{DE}$ (right) at $C=1$ and
$t_1=0$.} \label{Hpl}
\end{figure}
If $C=-2A$, then $\dot H(t)=0$ at two points:
\begin{equation}
\tilde{t}_{m_3^{\vphantom{27}}}=t_0-\frac{1}{\omega}\ln(2A)
\qquad\mbox{and} \qquad
\tilde{t}_{m_4^{\vphantom{27}}}=t_0-\frac{1}{\omega}\ln(4A).
\end{equation}
At these points $\ddot H=\pm16A^2/m_p^2\neq 0$; hence, the behavior of the Hubble
parameter is close to that of $H(t)$ in Figures \ref{H_th_ch} and
\ref{Hpl}.
Let us consider the case $C<-2A$. All four extremum points
(\ref{tmax}) are real. It means that at $C<-2A$ we obtain a
qualitatively new behavior of the Hubble parameter.
If $t_1=0$, then, as has been noted above, the Hubble parameter
is an odd function. The derivative of the Hubble parameter at the zero
point is positive; hence, $H(t)$ has a maximum at some point
$t_{m_1}>0$, a minimum at $t_{m_2}>t_{m_1}$ and is a monotonically
increasing function at $t>t_{m_2}$. Note that $w_{DE}^{\vphantom
{27}}<-1$ at $t>t_{m_2}$. Thus we have found exact solutions
that correspond to a nonmonotonic function $H(t)$ with phantom
large-time behaviour (see \Fref{Hphantom}).
\begin{figure}[h]
\centering
\includegraphics[width=50mm]{figure3a.eps} { \ \ }
\includegraphics[width=50mm]{figure3b.eps} { \ \ }
\includegraphics[width=50mm]{figure3c.eps}
\caption{The fields $\phi$ and $\xi$ (left), the Hubble parameter
$H$ (center) and the state parameter $w_{DE}$ (right) at $C=-5$
and $t_1=0$.} \label{Hphantom}
\end{figure}
Using the superpotential method we have obtained that the model
with the potential
\begin{equation}
\tilde{V}=\omega^2\left(\frac{1}{8}\left(1-\phi^2+\xi^2\right)^2
-\frac{1}{2A^2}\phi^2\xi^2+
\frac{3\phi^2}{4m_p^2}\left(\frac{A}{2}\left(1-\frac{\phi^2}{3A^2}\right)+
\frac{\xi^2}{2A}\right)^2\right) \label{V_Sergey}
\end{equation}
has a two-parameter set of exact solutions. Note that the obtained
solutions have one and the same asymptotic conditions, whereas the
behaviour of the state parameter $w_{DE}^{\vphantom {27}}$ turns
out to be different. So, we can conclude that at large times both
quintessence ($w_{DE}^{\vphantom {27}}>-1$) and phantom
($w_{DE}^{\vphantom {27}}<-1$) behavior of $w_{DE}^{\vphantom
{27}}$ can be obtained from the SFT inspired effective
model with one and the same polynomial potential.
\section{Two-field model with a polynomial potential and $w_{DE}$
crossing the cosmological constant barrier infinitely often}
Within the Cubic Superstring Field Theory, I.Ya.~Aref'eva and
A.S.~Koshelev obtained that the late-time rolling of the non-local
tachyon leads to cosmic acceleration with a periodic crossing of
the cosmological constant barrier~\cite{AK}. In the large-time
approximation, when the open string tachyon is
$\phi=1-\tilde{\delta\phi}$ with $|\tilde{\delta\phi}|\ll 1$, the
following Hubble parameter has been obtained:
\begin{equation}
\label{HAK}
H=H_0+C_H^{\vphantom {27}}e^{-2rt}\sin(2\nu(t-t_0)),
\end{equation}
where $H_0$, $C_H^{\vphantom {27}}$, $r$, $\nu$ and $t_0$ are real
constants.
In~\cite{AK} the authors consider a non-local model and the
corresponding Friedmann equations. In this paper we construct a
two-field local model with the Hubble parameter (\ref{HAK}) in
the case $\nu=r$. For simplicity we put $C_H^{\vphantom
{27}}=1/(2m_p^2)$ and construct solutions which do not depend on
$m_p^2$. In this case
\begin{equation}
\label{HAK2}
\dot H=\frac{r}{m_p^2}e^{-2rt}
\Bigl(2\sin(rt)^2-(\sin(rt)-\cos(rt))^2\Bigr).
\end{equation}
Using (\ref{eom2}) we can define the following explicit form of
solutions:
\begin{equation}
\dot{\tilde{\delta\phi}}={}- 2\sqrt{r}e^{-rt}\sin(rt),
\qquad \dot\xi=\sqrt{2r}e^{-rt}\Bigl(\sin(rt)-\cos(rt)\Bigr).
\end{equation}
It is easy to check that if
\begin{equation}
\tilde{\delta\phi}= \frac{1}{\sqrt{r}}e^{-rt}\Big(\cos(rt)+\sin(rt)\Big),
\qquad \xi={}-\frac{\sqrt{2}}{\sqrt{r}}e^{-rt}\sin(rt),
\end{equation}
then
\begin{equation}
\label{equphixiAK}
\dot{\tilde{\delta\phi}}= \sqrt{2}r\xi, \qquad \dot\xi={}-\sqrt{2}r\tilde{\delta\phi}+2r\xi.
\end{equation}
Let us construct the superpotential:
\begin{equation}
\frac{\partial W}{\partial \tilde{\delta\phi}}
=\frac1{2m_p^2}\sqrt{2}r\xi, \qquad
\frac{\partial W}{\partial \xi}
=\frac1{2m_p^2}\left(\sqrt{2}r\tilde{\delta\phi}-2r\xi\right),
\end{equation}
so
\begin{equation}
W=\frac1{2m_p^2}\left(\sqrt{2}r\xi\tilde{\delta\phi}-r\xi^2\right)+H_0.
\end{equation}
The potential is (we put $H_0=0$)
\begin{equation}
\label{VAK}
V=-r^2\left(\xi^2+\tilde{\delta\phi}^2-2\sqrt{2}\xi\tilde{\delta\phi}\right)+\frac{3r^2}{4m_p^2}
\left(\sqrt{2}\xi\tilde{\delta\phi}-\xi^2\right)^2.
\end{equation}
Thus we obtain explicit solutions and a fourth degree
polynomial potential, which corresponds to the Hubble parameter
from the SFT inspired model with higher derivatives.
Note that the standard method for constructing models with scalar
fields for a given behaviour of the Hubble parameter is the
one that uses $V(\phi,\xi)$ as a function of
time~\cite{Andrianov}. If we know the Hubble parameter $H(t)$,
then, using (\ref{Vt}), we obtain $V(t)$ and after that we can
attempt to find the functions $\phi(t)$ and $\xi(t)$ and the
potential $V(\phi,\xi)$. Such a method is very effective if at least
one of the derivatives, either $V_\phi$ or $V_\xi$, is a function
$F(V)$ whose form does not depend on $\phi$ and $\xi$. For
example, if
\begin{equation}
\label{Vexp}
V(\phi,\xi)=V_1(\phi)e^{\alpha\xi},
\end{equation}
where $\alpha$ is a constant, then
\begin{equation}
\frac{\partial V(\phi(t),\xi(t))}{\partial\xi}=\alpha V_1(\phi(t))
e^{\alpha\xi(t)}=\alpha V(t)
\end{equation}
and (\ref{eom4}) is a linear differential equation in $\xi$
\begin{equation}
\label{eom4Andr}
\ddot{\xi}+3H(t)\dot{\xi}+\alpha\left(3H^2(t)+\dot{H}(t)\right)=0.
\end{equation}
This equation allows one to find $\xi(t)$ if the Hubble parameter
$H(t)$ is known~\cite{Andrianov}. The superpotential method is
not so effective for seeking a potential of the form (\ref{Vexp}) for
a given Hubble parameter $H(t)$. On the other hand, if the
required form of the potential is polynomial, the superpotential
method is no less effective and maybe even easier to use than
the above-mentioned method.
\section{Conclusions}
In this paper we have investigated the dynamics of two-component
DE models with one phantom field and one usual field. The main
motivation for us is a model of the Universe as a slowly decaying
D3-brane, whose dynamics is described by a tachyon
field~\cite{Arefeva}. To take into account the back reaction of
gravity we add a scalar field with a usual kinetic term.
We construct a cosmological model with an SFT inspired polynomial
potential $V(\phi,\xi)$ and find a two-parameter set of exact
solutions. This set can be separated into two subsets such that one
subset corresponds to the quintessence large-time behaviour and
the other subset corresponds to the phantom one. Note that both
subsets have solutions which satisfy one and the same asymptotic
conditions and the additional condition $\phi(0)=0$.
We also construct a two-field model with a fourth degree
polynomial potential, which corresponds to the Hubble parameter
obtained in the SFT framework~\cite{AK}. In this model the state
parameter $w_{DE}$ crosses the cosmological constant barrier
infinitely often.
In this paper we have actively used the superpotential method and shown
that there are new ways to use this method in the case of two
fields. We can not only construct a potential for given
solutions, but also find new solutions. In particular, the
superpotential method allows one to generalize a one-parameter set of
solutions up to a two-parameter set. The superpotential method
allows one to separate the initial system of equations of
motion~(\ref{eom1})--(\ref{eom4}) into two parts. One part is
the equation for the superpotential~(\ref{deWolfe_potential}), which in
the general case is not integrable, but for many polynomial potentials
has special solutions. Substituting these solutions into the
second part
(system~(\ref{deWolfe_method1})--(\ref{deWolfe_method2})) we
obtain a system of ordinary differential equations, which is
usually integrable at least in quadratures. Note that systems
of the type (\ref{deWolfe_method1})--(\ref{deWolfe_method2}) are
actively investigated both in mechanics and in supersymmetric
theories with BPS states. So, the superpotential method allows one to
single out from the system of Friedmann equations a subsystem
that can be integrable, even in the case of a nonintegrable
initial system. On the other hand, this method allows such
a fine tuning of the parameters of the gravitational models under
consideration, for example, a choice of the coefficients of the potential,
that explicit solutions exist.
For solutions with $H(0)=0$ we obtain that $\dot H(0)>0$. Similar
solutions are known as bounce ones (see, for
example,~\cite{Cai07}). Note that bounce solutions have been
obtained in SFT inspired
higher-derivative models~\cite{Biswas,AJV}.
\section*{Acknowledgments}
The author is grateful to A.A.~Andrianov, I.Ya.~Aref'eva and
A.Yu.~Kamenshchik for useful discussions. This research is
supported in part by RFBR grant 05-01-00758 and by Russian
Fede\-ration President's grant NSh--8122.2006.2.
\section{Introduction}
\label{sec:intro}
Low-rank tensor completion is a well-studied problem and has various applications in the fields of recommendation systems \cite{Symeonidis2008}, link prediction \cite{Ermis2015}, and compressed sensing \cite{Cichocki2015}, to name a few. The majority of previous works focus on solving the problem in a static setting \cite{Filipovic2015,Guo2017,Kasai2016a}. However, most real-world data is dynamic; for example, in an online movie recommendation system the number of users and movies increases with time. It is prohibitively expensive to use static algorithms on dynamic data. Therefore, there has been an increasing interest in developing algorithms for dynamic low-rank tensor completion \cite{Kasai2016,Mardani2015,Song2017}.
In many real-world scenarios, besides the tensor data, additional side information is also available, e.g., in the form of matrices. In dynamic scenarios, the side information grows with time as well; for instance, movie-genre information in movie recommendation. There has been a considerable amount of work on incorporating side information into tensor completion \cite{Narita2011,Ge2016}. However, previous works on incorporating side information deal with the static setting. In this paper, we propose a dynamic low-rank tensor completion model that incorporates side information growing with time.
Most of the current dynamic tensor completion algorithms work in the streaming scenario, i.e., the case where the tensor grows in only one mode, which is usually the time mode. In this case, the side information is a static matrix. The multi-aspect streaming scenario \cite{Fanaee-T2015,Song2017}, on the other hand, is a more general framework, where the tensor grows in all of its modes. In this setting, the side information matrices also grow. \reffig{fig:illustartion} illustrates the difference between the streaming and multi-aspect streaming scenarios with side information.
Besides side information, incorporating nonnegative constraints into tensor decomposition is desirable in an unsupervised setting. Nonnegativity is essential for discovering interpretable clusters \cite{Hyvoenen2008, Murphy2012}. Nonnegative tensor learning has been explored for applications in computer vision \cite{Shashua2005,Kim2007} and unsupervised induction of relation schemas \cite{Nimishakavi2016}, to name a few.
Several algorithms for online Nonnegative Matrix Factorization (NMF) exist in the literature \cite{Lefevre2011, Zhao2017}, but algorithms for nonnegative online tensor decomposition with side information have not been explored, to the best of our knowledge. We fill this gap as well by showing how nonnegative constraints can be enforced on the decomposition learned by our proposed framework SIITA{}.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\noindent\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=1\textwidth]{streaming.pdf}\\
{\small(a) Streaming tensor sequence with side information.}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=1\textwidth]{MAST.pdf}\\
{\small (b) Multi-aspect streaming tensor sequence with side information.}
\end{minipage}
\end{tabular}
\caption{Illustration of streaming and multi-aspect streaming sequences with side information. The blue block represents the tensor at the current time step and the green block represents the side information. The blocks in grey represent the data from previous time steps.
For easy understanding, we show side information along only one mode.}
\label{fig:illustartion}
\end{figure}
In this paper, we work with the more general multi-aspect streaming scenario and make the following contributions:
\begin{itemize}
\item Formally define the problem of multi-aspect streaming tensor completion with side information.
\item Propose a Tucker based framework Side Information infused Incremental Tensor Analysis (SIITA{}) for the problem of multi-aspect streaming tensor completion with side information. We employ a stochastic gradient descent (SGD) based algorithm for solving the optimization problem.
\item Incorporate nonnegative constraints with SIITA{} for discovering the underlying clusters in unsupervised setting.
\item Demonstrate the effectiveness of SIITA{} using extensive experimental analysis on multiple real-world datasets in all the settings.
\end{itemize}
The organization of the paper is as follows. In Section \ref{sec:prelim}, we introduce the definition of a multi-aspect streaming tensor sequence with side information, and we discuss our proposed framework SIITA{} in Section \ref{sec:mast_si}. We also discuss how nonnegative constraints can be incorporated into SIITA{} in Section \ref{sec:mast_si}. The experiments are presented in Section \ref{sec:exp}, where SIITA{} performs effectively in various settings. All our code is implemented in Matlab and can be found at \texttt{\url{https://madhavcsa.github.io/}}.
\section{Related Work}
\label{sec:rel}
\begin{table*}[bt]
\centering
\small
\caption{ Summary of different tensor streaming algorithms.\label{tbl:bsline_table}}
\begin{tabular}{p{3.5cm} p{2cm} p{2cm} p{2cm} p{2cm} p{3.2cm}}
\toprule
Property & TeCPSGD\cite{Mardani2015} &OLSTEC \cite{Kasai2016} & MAST \cite{Song2017} & AirCP \cite{Ge2016} & SIITA{} (this paper) \\
\midrule
Streaming & \multicolumn{1}{c}{\checkmark}& \multicolumn{1}{c}{\checkmark} &\multicolumn{1}{c}{\checkmark} & &\multicolumn{1}{c}{\checkmark} \\
Multi-Aspect Streaming & & & \multicolumn{1}{c}{\checkmark} & &\multicolumn{1}{c}{\checkmark} \\
Side Information & & & & \multicolumn{1}{c}{\checkmark} & \multicolumn{1}{c}{\checkmark} \\
Sparse Solution & & & & &\multicolumn{1}{c}{\checkmark} \\
\bottomrule
\end{tabular}
\end{table*}
{\bf Dynamic Tensor Completion :} \cite{Sun2006,Sun2008} introduce the concept of dynamic tensor analysis by proposing multiple Higher order SVD based algorithms, namely Dynamic Tensor Analysis (DTA), Streaming Tensor Analysis (STA) and Window-based Tensor Analysis (WTA) for the streaming scenario.
\cite{Nion2009} propose two adaptive online algorithms for CP decomposition of $3^{rd}$-order tensors. \cite{Yu2015} propose an accelerated online algorithm for Tucker factorization in the streaming scenario, while an accelerated online algorithm for CP decomposition is developed in \cite{Zhou2016}.
A significant amount of research has been carried out on dynamic tensor decomposition, but the problem of dynamic tensor completion is relatively less explored. The work of \cite{Mardani2015} can be considered pioneering in dynamic tensor completion; they propose a streaming tensor completion algorithm based on CP decomposition.
Recent work by \cite{Kasai2016} proposes an accelerated second-order Stochastic Gradient Descent (SGD) algorithm for streaming tensor completion based on CP decomposition.
\cite{Fanaee-T2015} introduces the problem of multi-aspect streaming tensor analysis by proposing a histogram based algorithm. Recent work by \cite{Song2017} proposes a more
general framework for multi-aspect streaming tensor completion.
{\bf Tensor Completion with Auxiliary Information :} \cite{Acar2011} propose a Coupled Matrix Tensor Factorization (CMTF) approach for incorporating additional side information; similar ideas are also explored in
\cite{Beutel2014} for factorization on Hadoop and in \cite{Ermis2015a} for link prediction in heterogeneous data. \cite{Narita2011} propose within-mode and cross-mode regularization methods for incorporating similarity side information matrices into the factorization. Based on similar ideas, \cite{Ge2016} propose AirCP, a CP-based tensor completion algorithm.
\cite{Welling2001} propose nonnegative tensor decomposition by incorporating nonnegative constraints into CP decomposition. Nonnegative CP decomposition is explored for applications in computer vision in \cite{Shashua2005}. Algorithms for nonnegative Tucker decomposition are proposed in \cite{Kim2007}, and for sparse nonnegative Tucker decomposition in \cite{Morup2008}. However, to the best of our knowledge, nonnegative tensor decomposition algorithms do not exist for dynamic settings, a gap we fill in this paper.
An inductive framework for matrix completion with side information has been proposed in \cite{jain2013,Natarajan2014,Si2016}, but to the best of our knowledge it has not been explored for tensor completion. In this paper, we propose an online inductive framework for multi-aspect streaming tensor completion.
\reftbl{tbl:bsline_table} provides details about the differences between our proposed SIITA{} and various baseline tensor completion algorithms.
\section{Preliminaries}
\label{sec:prelim}
An $N^{th}$-order or $N$-mode tensor is an $N$-way array. We use boldface calligraphic letters to represent tensors (e.g., $\ten{X}$), boldface uppercase to represent matrices (e.g., $\mat{U}$), and boldface lowercase to represent vectors (e.g., $\mat{v}$). $\ten{X}[i_1, \cdots, i_N]$ represents the entry of $\ten{X}$ indexed by $[i_1, \cdots, i_N]$.
~\\
{\bf Definition 1 (Coupled Tensor and Matrix) \cite{Song2017}}: A matrix and a tensor are called coupled if they share a mode. For example, a ${user} \times {movie} \times { time}$ tensor and a ${movie} \times {genre}$ matrix are coupled along the {movie} mode.
~\\
{\bf Definition 2 (Tensor Sequence) \cite{Song2017}}:
A sequence of $N^{th}$-order tensors $\ten{X}^{(1)}, \ldots , \ten{X}^{(t)}, \dots$ is called a tensor sequence denoted as $\{\ten{X}^{(t)}\}$, where each $\ten{X}^{(t)} \in \mathbb{R}^{I_1^t \times I_2^t \times \ldots \times I_N^t}$ at time instance $t$.
~\\
{\bf Definition 3 (Multi-aspect streaming Tensor Sequence) \label{def:mast-seq}\cite{Song2017}}:
A tensor sequence of $N^{th}$-order tensors $\{\ten{X}^{(t)}\}$ is called a multi-aspect streaming tensor sequence if for any $t \in \mathbb{Z}^{+}$, $\ten{X}^{(t-1)} \in \mathbb{R}^{I_1^{t-1} \times I_2^{t-1} \times \ldots \times I_N^{t-1}}$ is the sub-tensor of $\ten{X}^{(t)} \in \mathbb{R}^{I_1^t \times I_2^t \times \ldots \times I_N^t}$, i.e.,
\[
\ten{X}^{(t-1)} \subseteq \ten{X}^{(t)},~\mathrm{where}~I_{i}^{t-1} \le I_{i}^{t},~\forall 1 \le i \le N.
\]
Here, $t$ increases with time, and $\ten{X}^{(t)}$ is the snapshot tensor of this sequence at time $t$.
~\\
{\bf Definition 4 (Multi-aspect streaming Tensor Sequence with Side Information) }: Given a time instance $t$, let $\mat{A}_{i}^{(t)} \in \mathbb{R}^{I_{i}^{t} \times M_i}$ be a side information (SI) matrix corresponding to the $i^{th}$ mode of $\ten{X}^{(t)}$ (i.e., rows of $\mat{A}_{i}^{(t)}$ are coupled along the $i^{th}$ mode of $\ten{X}^{(t)}$). While the number of rows in the SI matrices along a particular mode $i$ may increase over time, the number of columns remains the same, i.e., $M_i$ does not depend on time. In particular, we have,
\begin{align*}
\mat{A}_{i}^{(t)} &=
\begin{bmatrix}
\mat{A}_{i}^{(t-1)} \\
\Delta_{i}^{(t)}
\end{bmatrix},~\mathrm{where}~\Delta_{i}^{(t)} \in \mathbb{R}^{[I_i^{(t)} - I_{i}^{(t-1)}] \times M_i}.
\end{align*}
Putting side information matrices of all the modes together, we get the side information set $\set{A}^{(t)}$,
\[
\set{A}^{(t)} = \{\mat{A}_{1}^{(t)}, \ldots, \mat{A}_{N}^{(t)}\}.
\]
Given an $N^{th}$-order multi-aspect streaming tensor sequence $\{\ten{X}^{(t)}\}$, we define a multi-aspect streaming tensor sequence with side information as $\{(\ten{X}^{(t)}, \set{A}^{(t)})\}$.
We note that side information may not be available along every mode. In such cases, an identity matrix of appropriate size may be used as $\mat{A}_{i}^{(t)}$, i.e., $\mat{A}_{i}^{(t)} = \mat{I}^{I_{i}^{t} \times I_{i}^{t}}$, where $M_i = I_{i}^{t}$.
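To make the construction of $\set{A}^{(t)}$ concrete, the following sketch assembles the side information set for one snapshot tensor, falling back to an identity matrix for modes without side information. It is a minimal NumPy-style illustration rather than our released Matlab implementation, and the function name \texttt{build\_si\_set} as well as the example shapes are hypothetical.
\begin{verbatim}
import numpy as np

def build_si_set(tensor_dims, si_matrices):
    # tensor_dims : (I_1, ..., I_N) of the snapshot X^(t)
    # si_matrices : dict {mode index: (I_i x M_i) feature matrix} for the
    #               modes that actually have side information
    si_set = []
    for i, I_i in enumerate(tensor_dims):
        if i in si_matrices:
            A_i = si_matrices[i]
            assert A_i.shape[0] == I_i
        else:
            A_i = np.eye(I_i)          # no side information: M_i = I_i
        si_set.append(A_i)
    return si_set

# Example: a 20 x 20 x 2 (user x business x month) snapshot with side
# information only along the business mode (56 city features).
A_business = np.random.rand(20, 56)
si_set = build_si_set((20, 20, 2), {1: A_business})
print([A.shape for A in si_set])       # [(20, 20), (20, 56), (2, 2)]
\end{verbatim}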
The problem of multi-aspect streaming tensor completion with side information is formally defined as follows:
\begin{center}
\framebox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{
{\bf Problem Definition}: Given a multi-aspect streaming tensor sequence with side information $\{(\ten{X}^{(t)}, \set{A}^{(t)})\}$, the goal is to predict the missing values in $\ten{X}^{(t)}$ by utilizing only entries in the relative complement $\ten{X}^{(t)} \setminus \ten{X}^{(t-1)}$ and the available side information $\set{A}^{(t)}$.
}}
\end{center}
\section{Proposed Framework SIITA{}}
\label{sec:mast_si}
In this section, we discuss the proposed framework SIITA{} for the problem of multi-aspect streaming tensor completion with side information.
Let $\{(\ten{X}^{(t)}, \mathcal{A}^{(t)}) \}$ be an $N^{th}$-order multi-aspect streaming tensor sequence with side information. We assume that, at every time step, the entries $\ten{X}^{(t)}[i_1, i_2, \cdots, i_N]$ are observed only for indices $[i_1, i_2, \cdots, i_N] \in \Omega$, where $\Omega$ is a subset of the complete set of indices.
Let the sparsity operator $\mathcal{P}_{\Omega} $ be defined as:
\begin{equation*}
\mathcal{P}_{\Omega}(\ten{X})[i_1, i_2, \cdots, i_N] =
\begin{cases}
\ten{X}[i_1, \cdots, i_N], & \text{if}\ [i_1, \cdots, i_N] \in \Omega\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
Tucker tensor decomposition \cite{Kolda2009} is a form of higher-order PCA for tensors. It decomposes an $N^{th}$-order tensor $\ten{X}$ into a core tensor multiplied by a matrix along each mode as follows:
\begin{equation*}
\ten{X} \approx \ten{G} \times_1 \mat{U}_1 \times_2 \mat{U}_2 \times_3 \cdots \times_N \mat{U}_N,
\end{equation*}
where, $\mat{U}_i \in \mathbb{R}^{I_i \times r_i}, i=1:N$ are the factor matrices and can be thought of as principal components in each mode. The tensor $\ten{G} \in \mathbb{R}^{r_1 \times r_2 \times \cdots r_N}$ is called the \emph{core tensor}, which shows the interaction between different components. $(r_1, r_2, \cdots, r_N)$ is the (multilinear) rank of the tensor. The $i$-mode matrix product of a tensor $\ten{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots I_N}$ with a matrix $\mat{P} \in \mathbb{R}^{r \times I_i}$ is denoted by $\ten{X}\times_i \mat{P}$, more details can be found in \cite{Kolda2009}. {The standard approach of incorporating side information while learning factor matrices in Tucker decomposition is by using an additive term as a regularizer \cite{Narita2011}. However, in an online setting the additive side information term poses challenges as the side information matrices are also dynamic.} Therefore, we propose the following fixed-rank {\it inductive framework} for recovering missing values in $\ten{X}^{(t)}$, at every time step $t$:
\begin{equation}
\label{eqn:ind_tucker}
\underset{\mat{U}_i \in \mathbb{R}^{M_i \times r_i}, i = 1:N}{\min_{\ten{G} \in \mathbb{R}^{r_1 \times \ldots \times r_N}}} F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}, \{\mat{U}_i\}_{i=1:N}),
\end{equation}
where
\begin{multline}
\label{eqn:F_U_G}
F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}, \{\mat{U}_i\}_{i=1:N}) =
\norm{\mathcal{P}_{\Omega}(\ten{X}^{(t)}) -
\mathcal{P}_{\Omega}(\ten{G} \times_1 \mat{A}_1^{(t)}\mat{U}_1 \times_2 \ldots \times_N \mat{A}_N^{(t)}\mat{U}_N)}_F^2 \\
+ \lambda_g \norm{\ten{G}}_F^2 + \sum_{i=1}^{N}\lambda_i \norm{\mat{U}_i}_F^2.
\end{multline}
$\norm{\cdot}_F$ is the Frobenius norm, $\lambda_g > 0$ and $\lambda_i > 0, i=1:N$ are the regularization weights. Conceptually, the inductive framework models the ratings of the tensor as a weighted scalar product of the side information matrices. Note that (\ref{eqn:ind_tucker}) is a generalization of the inductive matrix completion framework \cite{jain2013,Natarajan2014,Si2016}, which has been effective in many applications.
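For concreteness, the following NumPy-style sketch (illustrative only; the helper names are ours and not part of our released Matlab code) evaluates the inductive reconstruction $\ten{G} \times_1 \mat{A}_1^{(t)}\mat{U}_1 \times_2 \ldots \times_N \mat{A}_N^{(t)}\mat{U}_N$ and the objective $F$ above, with the sparsity operator $\mathcal{P}_{\Omega}$ represented by a binary mask over the observed indices.
\begin{verbatim}
import numpy as np

def mode_n_product(T, P, n):
    # Mode-n product T x_n P, where P has shape (J, I_n).
    out = np.tensordot(P, T, axes=(1, n))  # new mode of size J ends up first
    return np.moveaxis(out, 0, n)

def reconstruct(G, A_list, U_list):
    # G x_1 (A_1 U_1) x_2 ... x_N (A_N U_N)
    X_hat = G
    for i, (A, U) in enumerate(zip(A_list, U_list)):
        X_hat = mode_n_product(X_hat, A @ U, i)
    return X_hat

def objective(X, mask, G, A_list, U_list, lam_g, lams):
    # Masked squared error plus Frobenius regularizers;
    # mask is the 0/1 indicator of the observed index set Omega.
    R = mask * (X - reconstruct(G, A_list, U_list))
    val = np.sum(R ** 2) + lam_g * np.sum(G ** 2)
    val += sum(lam * np.sum(U ** 2) for lam, U in zip(lams, U_list))
    return val
\end{verbatim}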
The inductive tensor framework has two-fold benefits over the typical approach of incorporating side information as an additive term. The use of $\mat{A}_i \mat{U}_i$ terms in the factorization reduces the dimensionality of variables from $\mat{U}_i \in \mathbb{R}^{I_i \times r_i}$ to $\mat{U}_i \in \mathbb{R}^{M_i \times r_i}$ and typically $M_i \ll I_i$. As a result, computational time required for computing the gradients and updating the variables decreases remarkably. Similar to \cite{Kim2007}, we define
\begin{equation*}
\mat{U}^{(\backslash i)} = \big[ \mat{A}_{i-1}^{(t)} \mat{U}_{i-1} \otimes \ldots \otimes \mat{A}_{1}^{(t)} \mat{U}_{1} \otimes
\mat{A}_{N}^{(t)} \mat{U}_{N} \otimes \ldots \otimes \mat{A}_{i+1}^{(t)} \mat{U}_{i+1} \big ] \nonumber ,
\end{equation*}
which collects the Kronecker products of the mode matrices $\mat{A}_{j}^{(t)}\mat{U}_{j}$, for all $j \neq i$, in a backward cyclic manner.
The gradients of \eqref{eqn:ind_tucker} with respect to $\mat{U}_i$ for $i = 1:N$ and $\ten{G}$ can be computed as follows:
\begin{equation}\label{eqn:gradu}
\begin{array}{lll}
\displaystyle \frac{\partial F}{\partial \mat{U}_i} = -(\mat{A}_i^{(t)})^\top \ten{R}_{(i)}^{(t)} \mat{U}^{(\backslash i)} \ten{G}_{(i)}^\top + 2\lambda_i \mat{U}_i \\
\\
\displaystyle \frac{\partial F}{\partial \ten{G}} = - \ten{R}^{(t)} \times_1 (\mat{A}_1^{(t)}\mat{U}_1)^{\top} \times_2 \ldots \times_N (\mat{A}_N^{(t)}\mat{U}_N)^{\top}+2\lambda_g \ten{G},
\end{array}
\end{equation}
where the residual tensor $\ten{R}^{(t)}$ is given by
\begin{equation*}
\label{eqn:res}
\ten{R}^{(t)} = \ten{X}^{(t)} - \ten{G} \times_1 \mat{A}_1^{(t)} \mat{U}_1 \times_2 \ldots \times_N \mat{A}_N^{(t)} \mat{U}_N ,
\end{equation*}
and $\ten{R}_{(i)}^{(t)}$ and $\ten{G}_{(i)}$ denote the mode-$i$ matricizations of $\ten{R}^{(t)}$ and $\ten{G}$, respectively.
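The gradients above can be computed without forming the Kronecker product $\mat{U}^{(\backslash i)}$ explicitly, by contracting the residual with $(\mat{A}_j^{(t)}\mat{U}_j)^\top$ along every mode $j \neq i$ and then matricizing. The sketch below continues the listing above (again an illustration rather than our released code); it applies the mask $\mathcal{P}_{\Omega}$ to the residual, as implied by the objective, and absorbs the constant factor of the data term into the step size.
\begin{verbatim}
def unfold(T, n):
    # Mode-n matricization; any fixed column ordering works here, as long
    # as the same function is applied to both tensors being multiplied.
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def gradients(X, mask, G, A_list, U_list, lam_g, lams):
    # Gradients of the objective above w.r.t. every U_i and w.r.t. G.
    B_list = [A @ U for A, U in zip(A_list, U_list)]     # B_i = A_i U_i
    R = mask * (X - reconstruct(G, A_list, U_list))      # masked residual
    grads_U = []
    for i, (A, U) in enumerate(zip(A_list, U_list)):
        T = R
        for j, B in enumerate(B_list):
            if j != i:
                T = mode_n_product(T, B.T, j)   # contract every mode but i
        gU = -A.T @ (unfold(T, i) @ unfold(G, i).T) + 2 * lams[i] * U
        grads_U.append(gU)
    gG = R
    for j, B in enumerate(B_list):
        gG = mode_n_product(gG, B.T, j)
    gG = -gG + 2 * lam_g * G
    return grads_U, gG
\end{verbatim}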
By updating the variables using the gradients given in (\ref{eqn:gradu}), we can recover the missing entries in $\ten{X}^{(t)}$ at every time step $t$; however, that is equivalent to performing a static tensor completion at every time step. Therefore, we need an incremental scheme for updating the variables. Let $\mat{U}_i^{(t)}$ and $\ten{G}^{(t)}$ represent the variables at time step $t$, then
\begin{equation}
\begin{array}{ll}
F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N}) =\\
\qquad F(\ten{X}^{(t-1)}, \mathcal{A}^{(t-1)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N})\ \ + \\
\qquad F(\ten{X}^{(\Delta t)}, \mathcal{A}^{(\Delta t)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N}),
\end{array}
\end{equation}
since $\ten{X}^{(t-1)}$ is recovered at time step $t-1$, the problem is equivalent to using only
\[
F^{(\Delta t)} = F(\ten{X}^{(\Delta t)}, \mathcal{A}^{(\Delta t)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N}),
\]
for updating the variables at time step $t$.
We propose to use the following approach to update the variables at every time step $t$, i.e.,
\begin{equation}\label{eqn:update_u}
\begin{array}{lll}
\mat{U}_i^{(t)} = \mat{U}_i^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t-1)}}, i = 1:N \\
\ten{G}^{(t)} = \ten{G}^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t-1)}},
\end{array}
\end{equation}
where $\gamma$ is the step size for the gradients. $\ten{R}^{(\Delta t)}$, needed for computing the gradients of $F^{(\Delta t)}$, is given by
\begin{equation}\label{eqn:r_delta}
\ten{R}^{(\Delta t)} = \ten{X}^{(\Delta t)} - \ten{G}^{(t-1)} \times_1 \mat{A}_1^{(\Delta t)}\mat{U}_1^{(t-1)} \times_2 \ldots \times_N \mat{A}_N^{(\Delta t)}\mat{U}_N^{(t-1)}.
\end{equation}
\begin{algorithm}[t]
\small
\caption{Proposed SIITA{} Algorithm}\label{alg:simast}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Return}
\Input{$ \{\ten{X}^{(t)}, \mathcal{A}^{(t)}\}, \lambda_g, \lambda_i, i = 1:N, (r_1, \ldots, r_N)$, step size $\gamma$, inner iterations $K$}
Randomly initialize $\mat{U}_i^{(0)} \in \mathbb{R}^{M_i \times r_i}, i = 1:N$ and $\ten{G}^{(0)} \in \mathbb{R}^{r_1 \times \ldots \times r_N}$ ;\\
\For{t = 1, 2, \ldots}
{
$\mat{U}_i^{(t)_{0}} \coloneqq \mat{U}_i^{(t-1)}, i = 1:N$; \\
$\ten{G}^{(t)_0} \coloneqq \ten{G}^{(t-1)}$;\\
\For{k = 1:K}
{
Compute $\ten{R}^{(\Delta t)}$ from \refeqn{eqn:r_delta} using $\mat{U}_i^{(t)_{k-1}}, i = 1:N$ and $\ten{G}^{(t)_{k-1}}$ ;\\
Compute $\frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t)_{k-1}}}$ for $i = 1:N$ from \refeqn{eqn:gradu}; \\
Update $\mat{U}_i^{(t)_k}$ using $\frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t)_{k-1}}}$ and $\mat{U}_i^{(t)_{k-1}}$ in \refeqn{eqn:update_u} ; \\
Compute $\frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t)_{k-1}}} $ from \refeqn{eqn:gradu}; \\
Update $\ten{G}^{(t)_k}$ using $\ten{G}^{(t)_{k-1}}$ and $\frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t)_{k-1}}} $ in \refeqn{eqn:update_u}; \\
}
$ \mat{U}_i^{(t)} \coloneqq \mat{U}_i^{(t)_{K}}$;\\
$ \ten{G}^{(t)} \coloneqq \ten{G}^{(t)_K}$;\\
}
\Output{$\mat{U}_i^{(t)}, i = 1:N, \ten{G}^{(t)}$.}
\end{algorithm}
\refalg{alg:simast} summarizes the procedure described above. The computational cost of implementing \refalg{alg:simast} depends on the update of the variables (\ref{eqn:update_u}) and the computations in (\ref{eqn:r_delta}). The cost of computing $\ten{R}^{(\Delta t)}$ is $O( \sum_i I_i M_i r_i + |\Omega|r_1 \ldots r_N)$. The cost of performing the updates (\ref{eqn:update_u}) is $O(|\Omega| r_1 \ldots r_N + \sum_i M_i r_i)$. Overall, at every time step, the computational cost of \refalg{alg:simast} is $O(K( \sum_i I_i M_i r_i + |\Omega|r_1 \ldots r_N))$.
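As a usage illustration of \refalg{alg:simast}, the sketch below performs the $K$ inner iterations of a single time step. For simplicity it represents $\ten{X}^{(\Delta t)}$ by the current snapshot together with a mask that selects only the newly observed entries; the helper \texttt{gradients} is from the earlier listing and the variable names are ours.
\begin{verbatim}
def siita_step(X_t, mask_new, A_t, G, U_list, lam_g, lams, gamma, K=1):
    # One time step of the algorithm.  X_t: current snapshot tensor;
    # mask_new: 0/1 indicator of entries observed at this time step only;
    # A_t: current side-information matrices; (G, U_list): state from t-1.
    for _ in range(K):
        grads_U, gG = gradients(X_t, mask_new, G, A_t, U_list, lam_g, lams)
        U_list = [U - gamma * gU for U, gU in zip(U_list, grads_U)]
        G = G - gamma * gG
    return G, U_list
\end{verbatim}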
\subsection*{Extension to the nonnegative case: NN-SIITA{}}
\label{sec:nn_siita}
We now discuss how nonnegative constraints can be incorporated into the decomposition learned by SIITA{}. Nonnegative constraints make the factors of the decomposition interpretable.
We denote SIITA{} with nonnegative constraints by NN-SIITA{}. At every time step $t$ in the multi-aspect streaming setting, we seek to learn the following decomposition:
\begin{equation}
\label{eqn:nn_tucker}
\underset{\mat{U}_i \in \mathbb{R}_{+}^{M_i \times r_i}, i = 1:N}{\min_{\ten{G} \in \mathbb{R}_{+}^{r_1 \times \ldots \times r_N}}} F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}, \{\mat{U}_i\}_{i=1:N}),
\end{equation}
where $F(\cdot)$ is as given in \eqref{eqn:F_U_G}.
We employ a projected gradient descent based algorithm for solving the optimization problem in \eqref{eqn:nn_tucker}. We follow the same incremental update scheme discussed in \refalg{alg:simast}; however, we use the projection operator defined below for updating the variables. For NN-SIITA{}, \eqref{eqn:update_u} is replaced with
\begin{equation*}\label{eqn:update_u_nn}
\begin{array}{lll}
\mat{U}_i^{(t)} = \Pi_{+}[\mat{U}_i^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t-1)}}], i = 1:N \\
\ten{G}^{(t)} = \Pi_{+}[\ten{G}^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t-1)}}],
\end{array}
\end{equation*}
where $\Pi_{+}$ is the element-wise projection operator defined as
\begin{equation*}
\Pi_{+}[x_i] =
\begin{cases}
x_i, & \text{if}\ x_i > 0\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
The projection operator maps a point back to the feasible region, ensuring that the factor matrices and the core tensor remain nonnegative across iterations.
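In code, the only change relative to the time-step sketch given earlier is the element-wise projection applied after every gradient step; a minimal illustrative version (reusing the earlier helpers and \texttt{numpy}) is shown below.
\begin{verbatim}
def project_nonneg(M):
    # Element-wise projection onto the nonnegative orthant.
    return np.maximum(M, 0.0)

def nn_siita_step(X_t, mask_new, A_t, G, U_list, lam_g, lams, gamma, K=1):
    # Projected-gradient variant of siita_step, i.e. NN-SIITA.
    for _ in range(K):
        grads_U, gG = gradients(X_t, mask_new, G, A_t, U_list, lam_g, lams)
        U_list = [project_nonneg(U - gamma * gU)
                  for U, gU in zip(U_list, grads_U)]
        G = project_nonneg(G - gamma * gG)
    return G, U_list
\end{verbatim}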
\section{Experiments}
\label{sec:exp}
We evaluate SIITA{} against other state-of-the-art baselines in two dynamic settings, viz., (1) the multi-aspect streaming setting (\refsec{sec:mast}), and (2) the traditional streaming setting (\refsec{sec:olstec}). We then evaluate the effectiveness of SIITA{} in the non-streaming batch setting (\refsec{sec:static}). We analyze the effect of different types of side information in \refsec{sec:ablation}. Finally, we evaluate the performance of NN-SIITA{} in the unsupervised setting in \refsec{sec:nnsiita_exp}.
\textbf{Datasets}: Datasets used in the experiments are summarized in \reftbl{tbl:mast_data}. {\bf MovieLens 100K} \cite{Harper2015} is a standard movie recommendation dataset.
\textbf{YELP} is a downsampled version of the YELP(Full) dataset \cite{Jeon2016}. The YELP(Full) review dataset consists of a 70K (user) $\times$ 15K (business) $\times$ 108 (year-month) tensor, and a side information matrix of size 15K (business) $\times$ 68 (city). We select a subset of this dataset for comparisons, as the considered baseline algorithms cannot scale to the full dataset. We note that SIITA{}, our proposed method, does not have such scalability concerns. In \refsec{sec:ablation}, we show that SIITA{} scales to datasets of much larger sizes.
In order to create YELP out of YELP(Full), we select the 1000 most frequent users and the 1000 most frequent businesses and create the corresponding tensor and side information matrix.
After the sampling, we obtain a tensor of size 1000 (user) $ \times$ 992 (business) $\times$ 93 (year-month) and a side information matrix of dimensions 992 (business) $ \times $ 56 (city).
\begin{table}[t]
\centering
\small
\caption{\label{tbl:mast_data}Summary of datasets used in the paper. The starting size and increment size given in the table are for Multi-Aspect Streaming setting. For Streaming setting, the tensor grows in the third dimension, one slice at every time step.}
\begin{tabular}{ccc}
\toprule
& MovieLens 100K & YELP \\
\midrule
Modes & user $\times$ movie $\times$ week & user $\times$ business $\times$ year-month \\
\midrule
Tensor Size & 943$\times$1682$\times$31& 1000$\times$992$\times$93 \\
\midrule
Starting size & 19$\times$34$\times$2& 20$\times$20$\times$2\\
\midrule
Increment step &19, 34, 1 & 20, 20, 2\\
\midrule
Sideinfo matrix & 1682 (movie) $\times$ 19 (genre) & 992 (business) $\times$ 56 (city) \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Multi-Aspect Streaming Setting}
\label{sec:mast}
\begin{table}[t]
\centering
\small
\caption{\label{tbl:rmse_mast}Test RMSE (lower is better) averaged across all the time steps in the multi-aspect streaming tensor sequence setting (Definition 4) for MAST and SIITA{}. SIITA{}, the proposed method, outperforms MAST for all the datasets. \refsec{sec:mast} provides more details.}
\begin{tabular}{ccccc}
\toprule
Dataset & Missing\% & Rank & MAST & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & 1.60& {\bf 1.23}\\
& & 5 & 1.53 & 1.29\\
& & 10 & 1.48 &2.49\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & 1.74 & {\bf 1.28}\\
& & 5 & 1.75 & 1.29\\
& & 10 & 1.64 & 2.55\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 2.03 & {\bf 1.59}\\
& & 5 & 1.98 &1.61\\
& & 10 & 2.02 &2.96\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.90& {\bf 1.43} \\
& & 5 & 1.92 & 1.54\\
& & 10 & 1.93 &4.03\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.94 & {\bf 1.51} \\
& & 5 & 1.94& 1.67\\
& & 10 & 1.96 & 4.04\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 &1.97 & 1.71 \\
& & 5 & 1.97 & {\bf 1.61}\\
& & 10 & 1.97 & 3.49\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{TestRMSE_ML_MAST_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{TestRMSE_YELP_MAST_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:mast_rmse}Evolution of test RMSE of MAST and SIITA{} with each time step. For both the datasets, SIITA{} attains a stable performance after a few time steps, while the performance of MAST degrades with every time step. Refer to \refsec{sec:mast} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{RunTime_ML_MAST_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{RunTime_YELP_MAST_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:mast_runtime}Runtime comparison between MAST and SIITA{} at every time step.
SIITA{} is significantly faster than MAST. Refer to \refsec{sec:mast} for more details.}
\end{figure}
We first analyze the model in the multi-aspect streaming setting, for which we consider MAST \cite{Song2017} as a state-of-the-art baseline. \\
{\bf MAST \cite{Song2017}}: MAST is a dynamic low-rank tensor completion algorithm, which enforces nuclear norm regularization on the decomposition matrices of CP. A tensor-based Alternating Direction Method of Multipliers is used for solving the optimization problem.
We experiment with the MovieLens 100K and YELP datasets. Since the third mode is time in both datasets, i.e., (week) in MovieLens 100K and (year-month) in YELP, one way to simulate the multi-aspect streaming sequence (Definition 3) is by considering
every slice of the third mode as one time step in the sequence, and letting the tensor grow along the other two modes with every time step, similar to the ladder structure given in \cite[Section ~3.3]{Song2017}. Note that this is different from the traditional streaming setting, where the tensor only grows in the time mode while the other two modes remain fixed.
In contrast, in the multi-aspect setting here, there can be new users joining the system within the same month but on different days, or different movies getting released on different days in the same week, etc. Therefore, in our simulations, we treat the third mode like any other mode and generate a more general multi-aspect streaming tensor sequence; the details are given in \reftbl{tbl:mast_data}. The parameters for MAST are set based on the guidelines provided in \cite[~Section 4.3]{Song2017}.
We compute the root mean square error on test data (test RMSE; lower is better) at every time step and report the test RMSE averaged across all the time steps in \reftbl{tbl:rmse_mast}.
We perform experiments on multiple train-test splits for each dataset. We vary the test percentage, denoted by {\it Missing\%} in \reftbl{tbl:rmse_mast}, and the rank of decomposition, denoted by {\it Rank} for both the datasets. For every (Missing\%, Rank) combination, we run both models on ten random train-test splits and report the average.
For SIITA{}, Rank = $r$ in \reftbl{tbl:rmse_mast} represents the Tucker-rank $(r, r, r)$.
In \reftbl{tbl:rmse_mast}, the proposed SIITA{} achieves better results than MAST. \reffig{fig:mast_rmse} shows the plots for test RMSE at every time step. Since SIITA{} handles the sparsity in the data effectively, it is significantly faster than MAST, as can be seen from \reffig{fig:mast_runtime}. Overall, we find that SIITA{}, the proposed method, is more effective and faster compared to MAST in the multi-aspect streaming setting.
\subsection{Streaming Setting}
\label{sec:olstec}
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{TestRMSE_ML_Streaming_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.254]{TestRMSE_YELP_Streaming_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:olstec_rmse} Evolution of Test RMSE of TeCPSGD, OLSTEC and SIITA{} with each time step. In both datasets, SIITA{} performs significantly better than the baseline algorithms in the pure streaming setting.
Refer to \refsec{sec:olstec} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{Runtime_ML_Streaming_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.254]{RunTime_YELP_Streaming_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:olstec_runtime}Runtime comparison between TeCPSGD, OLSTEC and SIITA{}. SIITA{} is able to exploit sparsity in the data and is much faster. Refer to \refsec{sec:olstec} for more details.}
\end{figure}
\begin{table}[t]
\centering
\small
\caption{Test RMSE averaged across all the time steps in the streaming setting for TeCPSGD, OLSTEC, a state-of-the-art streaming tensor completion algorithm, and SIITA. SIITA{} outperforms the baseline algorithms significantly. See \refsec{sec:olstec} for more details. \label{tbl:rmse_streaming}}
\begin{tabular}{cccccc}
\toprule
Dataset & Missing\% & Rank & TeCPSGD & OLSTEC & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & 3.39 & 5.46 &{\bf 1.53}\\
& & 5 & 3.35 & 4.65& 1.54\\
& & 10 & 3.19 & 4.96& 1.71\\
\cline{2-6}
&\multirow{3}{5pt}{50\%} & 3 & 3.55 & 8.39& {\bf 1.63}\\
& & 5 & 3.40 & 6.73& 1.64\\
& & 10 & 3.23 &3.66 &1.73\\
\cline{2-6}
&\multirow{3}{5pt}{80\%} & 3 & 3.78 &3.82 & 1.79\\
& & 5 & 3.77 & 3.80& {\bf 1.75}\\
& & 10 & 3.84 & 4.34& 2.47\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 4.55 & 4.04 & {\bf 1.45}\\
& & 5 & 4.79 & 4.04& 1.59\\
& & 10 & 5.17 & 4.03& 2.85\\
\cline{2-6}
& \multirow{3}{5pt}{50\%} & 3 & 4.67 & 4.03& {\bf 1.55}\\
& & 5 & 5.03 & 4.03& 1.67\\
& & 10 & 5.25 & 4.03 & 2.69\\
\cline{2-6}
& \multirow{3}{5pt}{80\%} & 3 & 4.99 & 4.02& {\bf 1.73}\\
& & 5 & 5.17 & 4.02& 1.78\\
& & 10 & 5.31 & 4.01& 2.62\\
\bottomrule
\end{tabular}
\end{table}
In this section, we simulate the pure streaming setting by letting the tensor grow only in the third mode at every time step. The number of time steps for each dataset in this setting is the dimension of the third mode, i.e., 31 for MovieLens 100K and 93 for YELP.
We compare the performance of SIITA{} with TeCPSGD and OLSTEC algorithms in the streaming setting.\\
{\bf TeCPSGD \cite{Mardani2015}:} TeCPSGD is an online Stochastic Gradient Descent based algorithm for recovering missing data in streaming tensors. This algorithm is based on PARAFAC decomposition. TeCPSGD is the first proper tensor completion algorithm in the dynamic setting.\\
{\bf OLSTEC \cite{Kasai2016}:} OLSTEC is an online tensor tracking algorithm for partially observed data streams corrupted by noise. OLSTEC is a second order stochastic gradient descent algorithm based on CP decomposition exploiting recursive least squares. OLSTEC is the state-of-the-art for streaming tensor completion.
We report test RMSE, averaged across all time steps, for both the MovieLens 100K and YELP datasets. Similar to the multi-aspect streaming setting, we run all the algorithms for multiple train-test splits. For each split, we run all the algorithms with different ranks. For every (Missing\%, Rank) combination, we run all the algorithms on ten random train-test splits and report the average. SIITA{} significantly outperforms all the baselines in this setting, as shown in \reftbl{tbl:rmse_streaming}. \reffig{fig:olstec_rmse} shows the average test RMSE of every algorithm at every time step. From \reffig{fig:olstec_runtime} it can be seen that SIITA{} takes much less time compared to the other algorithms. The spikes in the runtime plots suggest that the corresponding slices are relatively less sparse.
\subsection{Batch Setting}
\label{sec:static}
\begin{table}[t]
\centering
\small
\caption{Mean Test RMSE across multiple train-test splits in the Batch setting. SIITA{} achieves lower test RMSE on both the datasets compared to AirCP, a state-of-the-art algorithm for this setting. Refer to \refsec{sec:static} for details. \label{tbl:rmse_static}}
\begin{tabular}{ccccc}
\toprule
Dataset & Missing\% & Rank & AirCP & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & 3.351 & {\bf1.534}\\
& & 5 & 3.687 & 1.678\\
& & 10 & 3.797 &2.791\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & 3.303 & {\bf 1.580}\\
& & 5 & 3.711& 1.585\\
& & 10 & 3.894& 2.449\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 3.883 & {\bf 1.554}\\
& & 5 & 3.997 & 1.654\\
& & 10 & 3.791 & 3.979\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.094 & {\bf1.052} \\
& & 5 & 1.086 & 1.056\\
& & 10 & 1.077 & 1.181\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.096 & 1.097 \\
& & 5 & 1.095 & {\bf1.059}\\
& & 10 & 1.719 & 1.599\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 & 1.219 & 1.199 \\
& & 5 & {\bf 1.118} & 1.156\\
& & 10 & 2.210 & 2.153\\
\bottomrule
\end{tabular}
\end{table}
Even though our primary focus is on proposing an algorithm for the multi-aspect streaming setting, SIITA{} can be run as a tensor completion algorithm with side information in the batch (i.e., non-streaming) setting. To run in the batch setting, we set $K=1$ in \refalg{alg:simast} and run for multiple passes over the data. In this setting, AirCP \cite{Ge2016} is the current state-of-the-art algorithm which is also capable of handling side information. We consider AirCP as the baseline in this section. The main focus of this setting is to demonstrate that SIITA{} incorporates the side information effectively.
~\\
{\bf AirCP \cite{Ge2016}}: AirCP is a CP based tensor completion algorithm proposed for recovering the spatio-temporal dynamics of online memes. This algorithm incorporates auxiliary information from memes, locations and times. An alternating direction method of multipliers (ADMM) based algorithm is employed for solving the optimization. AirCP expects the side information matrices to be similarity matrices and takes as input the Laplacians of the similarity matrices. However, in the datasets we experiment with, the side information is available as feature matrices. Therefore, we consider the covariance matrices $\mat{A}_i\mat{A}_i^\top$ as similarity matrices.
We run both algorithms until convergence and report test RMSE. For each dataset, we experiment with different levels of test set sizes,
and for each such level, we run our experiments on 10 random splits. We report the mean test RMSE per train-test percentage split. We run our experiments with multiple ranks of factorization. Results are shown in \reftbl{tbl:rmse_static}, where we observe that SIITA{} achieves better results. Note that the rank for SIITA{} is the Tucker rank, i.e., rank = 3 corresponds to a factorization rank of (3, 3, 3) for SIITA{}.
\textbf{Remark:} Since all the baselines considered for the various settings are CP based, we compare only for CP tensor ranks. From Tables \ref{tbl:rmse_mast}, \ref{tbl:rmse_streaming} and \ref{tbl:rmse_static} it can be seen that the performance suffers for rank = 10. However, when we run SIITA{} with a rank of (10, 10, 2), we achieve a lower test RMSE.
\subsection{Analyzing Merits of Side Information}
\label{sec:ablation}
\begin{table}[t]
\centering
\small
\caption{\label{tbl:mast_no_si} Test RMSE averaged across multiple train-test splits in the Multi-Aspect Streaming setting, analyzing the merits of side information. See \refsec{sec:ablation} for more details.}
\begin{tabular}{ccccc}
\toprule
Dataset & Missing\% & Rank & SIITA{} (w/o SI) & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & {\bf 1.19}& 1.23\\
& & 5 & {\bf 1.19} & 1.29\\
& & 10 & 2.69 &2.49\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & {\bf 1.25} & 1.28\\
& & 5 & {\bf 1.25} & 1.29\\
& & 10 & 3.28 & 2.55\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 1.45 & 1.59\\
& & 5 & {\bf 1.42} &1.61\\
& & 10 & 2.11 &2.96\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.44& {\bf 1.43} \\
& & 5 & 1.48 & 1.54\\
& & 10 & 3.90 &4.03\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.57 & {\bf 1.51} \\
& & 5 & 1.62& 1.67\\
& & 10 & 5.48 & 4.04\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 &1.75 & 1.71 \\
& & 5 & 1.67 & {\bf 1.61}\\
& & 10 & 5.28 & 3.49\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\small
\caption{\label{tbl:streaming_no_si} Test RMSE averaged across multiple train-test splits in the streaming setting, analyzing the merits of side information. See \refsec{sec:ablation} for more details.}
\begin{tabular}{ccccc}
\toprule
Dataset & Missing\% & Rank & SIITA{} (w/o SI) & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & {\bf 1.46} & 1.53\\
& & 5 & 1.53 & 1.54\\
& & 10 & 1.55 &1.71\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & 1.58 & 1.63\\
& & 5 & 1.67 & 1.64\\
& & 10 & {\bf 1.56} & 1.73\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 1.76 & 1.79\\
& & 5 & {\bf 1.74} &1.75\\
& & 10 & 2.31 &2.47\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.46 & {\bf 1.45} \\
& & 5 & 1.62 & 1.59\\
& & 10 & 2.82 &2.85\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.57 & {\bf 1.55} \\
& & 5 & 1.69 & 1.67\\
& & 10 & 2.54 & 2.67\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 &1.76 & {\bf 1.73} \\
& & 5 & 1.80 & 1.78\\
& & 10 & 2.25 & 2.62\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{TestRMSE_ML_MAST_NO_SI_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{TestRMSE_YELP_MAST_NO_SI_80_Missing}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:rmse_no_si_mast}Evolution of test RMSE with every time step in the multi-aspect streaming setting for SIITA{} and SIITA{} (w/o SI). See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{RunTime_ML_MAST_NO_SI_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{RunTime_YELP_MAST_NO_SI}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:runtime_no_si_mast}Run Time comparison between SIITA{} and SIITA{} (w/o SI) in the multi-aspect streaming setting. See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{TestRMSE_ML_NO_SI_Streaming_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{TestRMSE_YELP_Streaming_No_SI_80_Missing}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:rmse_no_si_streaming}Evolution of test RMSE with every time step in the streaming setting for SIITA{} and SIITA{}(w/o SI). See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{RunTime_ML_NO_SI_Streaming_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{RunTime_YELP_Streaming_NO_SI}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:runtime_no_si_streaming}Run Time comparison between SIITA{} and SIITA{} (w/o SI) in the Streaming setting. See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.24]{TestRMSE_ML1M_MAST}\\
{\small (a) Test RMSE at every time step}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{RunTime_ML1M_MAST}\\
{\small (b) Run Time at every time step}
\end{minipage}
\end{tabular}
\caption{\label{fig:ML1M_rmse_mast}Investigating the merits of side information for MovieLens 1M dataset in the multi-aspect streaming setting. Side information along the user mode is the most useful for tensor completion. See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=0.99\textwidth]{ML_1M_Ablation_rmse}\\
{\small (a) Evolution of Test RMSE against epochs.}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=0.99\textwidth]{ML_1M_Ablation_Runtime}\\
{\small (b) Time elapsed with every epoch.}
\end{minipage}
\end{tabular}
\caption{\label{fig:ablation}Investigating the merits of side information for MovieLens 1M dataset in the batch setting. Side information along the user mode is the most useful for tensor completion. See \refsec{sec:ablation} for more details.}
\end{figure}
Our goal in this paper is to propose a flexible framework using which side information may be easily incorporated during incremental tensor completion, especially in the multi-aspect streaming setting. Our proposed method, SIITA{}, is motivated by this need. In order to evaluate merits of different types of side information on SIITA{}, we report several experiments where performances of SIITA{} with and without various types of side information are compared.
\textbf{Single Side Information}: In the first experiment, we compare SIITA{} with and without side information (by setting side information to identity; see \refsec{sec:prelim}). We run the experiments in both multi-aspect streaming and streaming settings.
\reftbl{tbl:mast_no_si} reports the mean test RMSE of SIITA{} and SIITA{} (w/o SI), which stands for running SIITA{} without side information, for both datasets in the multi-aspect streaming setting. For MovieLens 100K, SIITA{} achieves better performance without side information, whereas for YELP, SIITA{} performs better with side information.
\reffig{fig:rmse_no_si_mast} shows the evolution of test RMSE at every time step. \reffig{fig:runtime_no_si_mast} shows the runtime of SIITA{} when run with and without side information.
SIITA{} runs faster in the presence of side information. \reftbl{tbl:streaming_no_si} reports the mean test RMSE for both the datasets in the streaming setting. Similar to the multi-aspect streaming setting, SIITA{} achieves better performance without side information for MovieLens 100K dataset and with side information for YELP dataset.
\reffig{fig:rmse_no_si_streaming} shows the test RMSE of SIITA{} against time steps, with and without side information. \reffig{fig:runtime_no_si_streaming} shows the runtime at every time step.
\textbf{Multi Side Information}: In all the datasets and experiments considered so far, side information along only one mode is available to SIITA{}. In the next experiment, we consider the setting where side information along multiple modes is available. For this experiment, we consider the {\bf MovieLens 1M} \cite{Harper2015} dataset, a standard dataset of 1 million movie ratings. This dataset consists of a 6040 (user) $\times$ 3952 (movie) $\times$ 149 (week) tensor, along with two side information matrices: a 6040 (user) $\times$ 21 (occupation) matrix, and a 3952 (movie) $\times$ 18 (genre) matrix.
Note that among all the methods considered in the paper, SIITA{} is the only method which scales to the size of the MovieLens 1M dataset.
We create four variants of the dataset: the first with the tensor and all the side information matrices, denoted {MovieLens 1M}; the second with the tensor and only the side information along the movie mode, denoted {MovieLens 1M (movie mode)};
similarly, {MovieLens 1M (user mode)} with only user-mode side information; and finally {MovieLens 1M (no si)} with only the tensor and no side information.
We run SIITA{} in multi-aspect streaming and batch modes for all the four variants. Test RMSE at every time step in the multi-aspect streaming setting is shown in \reffig{fig:ML1M_rmse_mast}(a).
Evolution of Test RMSE (lower is better) against epochs are shown in \reffig{fig:ablation}(a) in batch mode. From Figures \ref{fig:ML1M_rmse_mast}(a) and \ref{fig:ablation}(a), it is evident that the variant {MovieLens 1M (user mode)} achieves best overall performance, implying that the side information along the user mode is more useful for tensor completion in this dataset. However, {MovieLens 1M (movie mode)} achieves poorer performance than other variants implying that movie-mode side information is not useful for tensor completion in this case. This is also the only side information mode available to SIITA{} during the MovieLens 100K experiments in Tables \ref{tbl:mast_no_si} and \ref{tbl:streaming_no_si}. This sub-optimal side information may be a reason for SIITA{}'s diminished performance when using side information for MovieLens100K dataset. From the runtime comparisons in Figures \ref{fig:ablation} (b) and \ref{fig:ML1M_rmse_mast}(b), we observe that {MovieLens 1M} (where both types of side information are available) takes the least time, while the variant {MovieLens 1M (no si)} takes the most time to run. This is a benefit we derive from the inductive framework, where in the presence of useful side information, SIITA{} not only helps in achieving better performance but also runs faster.
\subsection{Unsupervised Setting}
\label{sec:nnsiita_exp}
\begin{figure}
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{Avg_Purity_ML_MAST}\\
{\small (a) MovieLens 100K}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{Avg_Purity_YELP_MAST}\\
{\small (b) YELP}
\end{minipage}
\end{tabular}
\caption{\label{fig:nn_siita} Average Purity of clusters learned by NN-SIITA{} and NN-SIITA{} (w/o SI) at every time step in the unsupervised setting. For both datasets, side information helps in learning purer clusters. See \refsec{sec:nnsiita_exp} for more details.}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{Vary_W_ML_MAST}\\
{\small (a) MovieLens 100K}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.25]{Vary_W_YELP_MAST}\\
{\small (b) YELP}
\end{minipage}
\end{tabular}
\caption{\label{fig:nn_siita_w} Evolution of mean average purity with $w$ for NN-SIITA{} and NN-SIITA{} (w/o SI) for both MovieLens 100K and YELP datasets. See \refsec{sec:nnsiita_exp} for more details.}
\end{figure}
\begin{table*}[tb]
\tiny
\centering
\caption{\label{tbl:nnsita_clusters} Example clusters learned by NN-SIITA{} for MovieLens 100K and YELP datasets. The first column is an example of a pure cluster and the second column is an example of noisy cluster. See \refsec{sec:nnsiita_exp} for more details.}
\begin{tabular}{ccccc}
\toprule
& \multicolumn{2}{c}{Cluster (Action, Adventure, Sci-Fi)} & \multicolumn{2}{c}{Cluster (Noisy)} \\
\cline{2-5}
\multirow{6}{30pt}{MovieLens100K} & \multicolumn{1}{c}{Movie} & \multicolumn{1}{c}{Genres} & \multicolumn{1}{c}{Movie} & \multicolumn{1}{c}{Genres} \\
\cline{2-5}
& The Empire Strikes Back (1980) & Action, Adventure, Sci-Fi, Drama, Romance & Toy Story (1995)& Animation, Children's, Comedy\\
& Heavy Metal (1981) & Action, Adventure, Sci-Fi, Animation, Horror & From Dusk Till Dawn (1996)& Action, Comedy, Crime, Horror, Thriller\\
& Star Wars (1977) & Action, Adventure, Sci-Fi, Romance, War & Mighty Aphrodite (1995)& Comedy\\
& Return of the Jedi (1983) & Action, Adventure, Sci-Fi, Romance, War & Apollo 13 (1995)& Action, Drama, Thriller\\
& Men in Black (1997) & Action, Adventure, Sci-Fi, Comedy & Crimson Tide (1995)& Drama, Thriller, War \\
\bottomrule
& \multicolumn{2}{c}{Cluster (Phoenix)} & \multicolumn{2}{c}{Cluster (Noisy)} \\
\cline{2-5}
\multirow{6}{30pt}{YELP} & \multicolumn{1}{c}{Business} & \multicolumn{1}{c}{Location} & \multicolumn{1}{c}{Business} & \multicolumn{1}{c}{Location} \\
\cline{2-5}
& Hana Japanese Eatery & \multicolumn{1}{c}{Phoenix} & The Wigman& \multicolumn{1}{c}{Litchfield Park }\\
& Herberger Theater Center & \multicolumn{1}{c}{Phoenix} & Hitching Post 2 & \multicolumn{1}{c}{Gold Canyon }\\
& Scramble A Breakfast Joint & \multicolumn{1}{c}{Phoenix}& Freddys Frozen Custard \& Steakburgers& \multicolumn{1}{c}{Glendale}\\
& The Arrogant Butcher & \multicolumn{1}{c}{Phoenix}& Costco& \multicolumn{1}{c}{Avondale}\\
& FEZ & \multicolumn{1}{c}{Phoenix}& Hana Japanese Eatery& \multicolumn{1}{c}{Phoenix}\\
\bottomrule
\end{tabular}
\end{table*}
In this section, we consider an unsupervised setting with the aim of discovering underlying clusters of items, like movies in the MovieLens 100K dataset and businesses in the YELP dataset, from a sequence of sparse tensors. It is desirable to mine clusters such that similar items are grouped together. Nonnegative constraints are essential for mining interpretable clusters \cite{Hyvoenen2008, Murphy2012}. For this set of experiments, we consider the nonnegative version of SIITA{}, denoted by NN-SIITA{}. We investigate whether side information helps in discovering more coherent clusters of items in both datasets.
We run our experiments in the multi-aspect streaming setting. At every time step, we compute the {\it Purity} of the clusters and report average-Purity. Purity of a cluster is defined as the percentage of the cluster that is coherent. For example, in MovieLens 100K, a cluster of movies is 100\% pure if all the movies belong to the same genre and 50\% pure if only half of the cluster belongs to the same genre. Formally, suppose clusters of items along mode-$i$ are desired, and let $r_i$ be the rank of factorization along mode-$i$. Every column of the matrix $\mat{A}_i \mat{U}_i$ is considered a distribution over the items, and the top-$w$ items of the distribution represent a cluster. For the $p$-th cluster, i.e., the cluster representing column $p$ of the matrix $\mat{A}_i \mat{U}_i$, let $w_p$ be the number of items among the top-$w$ that belong to the same category. Purity and average-Purity are defined as follows:
\begin{align*}
\text{Purity}(p) = w_p/w, \\
\text{average-Purity} = \frac{1}{r_i} \sum\limits_{p=1}^{r_i} \text{Purity}(p).
\end{align*}
Note that Purity is computed per cluster, while average-Purity is computed for a set of clusters. Higher average-Purity indicates a better clustering.
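A small self-contained sketch of this metric is given below (our own illustration; \texttt{average\_purity} and its arguments are hypothetical names). Each item may carry multiple categories, e.g., genres, and $w_p$ is the count of the most common category among the top-$w$ items of a column of $\mat{A}_i\mat{U}_i$.
\begin{verbatim}
import numpy as np

def average_purity(AU, item_labels, w=5):
    # AU          : (num_items x r_i) nonnegative matrix A_i U_i
    # item_labels : list of sets of categories per item (e.g. genres)
    # w           : number of top items that form each cluster
    r_i = AU.shape[1]
    total = 0.0
    for p in range(r_i):
        top = np.argsort(-AU[:, p])[:w]          # top-w items of column p
        cats = set().union(*(item_labels[i] for i in top))
        w_p = max(sum(c in item_labels[i] for i in top) for c in cats)
        total += w_p / w                          # Purity(p)
    return total / r_i                            # average-Purity
\end{verbatim}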
We report average-Purity at every time step for both the datasets. We run NN-SIITA{} with and without side information. \reffig{fig:nn_siita} shows average-Purity at every time step for MovieLens 100K and YELP datasets. It is clear from \reffig{fig:nn_siita} that for both the datasets side information helps in discovering better clusters. We compute the Purity for MovieLens 100K dataset based on the genre information of the movies and for the YELP dataset we compute Purity based on the geographic locations of the businesses. \reftbl{tbl:nnsita_clusters} shows some example clusters learned by NN-SIITA{}. For MovieLens 100K dataset, each movie can belong to multiple genres. For computing the Purity, we consider the most common genre for all the movies in a cluster. Results shown in \reffig{fig:nn_siita} are for $w = 5$. However, we also vary $w$ between 5 and 25 and report the \emph{mean} average-Purity, which is obtained by computing the mean across all the time steps in the multi-aspect streaming setting. As can be seen from \reffig{fig:nn_siita_w}, having side information helps in learning better clusters for all the values of $w$. For MovieLens 100K, the results reported are with a factorization rank of $(3,7,3)$ and for YELP, the rank of factorization is $(5,7,3)$. Since this is an unsupervised setting, note that we use the entire data for factorization, i.e., there is no train-test split.
\section{Conclusion}
\label{sec:conclusion}
We propose an inductive framework for incorporating side information for tensor completion in standard and multi-aspect streaming settings. The proposed framework can also be used in the batch setting. Given a completely new dataset with side information along multiple modes, SIITA{} can be used to analyze the merits of different side information for tensor completion. Besides performing better, SIITA{} is also significantly faster than state-of-the-art algorithms. We also propose NN-SIITA{} for handling nonnegative constraints and show how it can be used for mining interpretable clusters. Our experiments confirm the effectiveness of SIITA{} in many instances. In the future, we plan to extend our proposed framework to handle problem instances with missing side information \cite{Kishan2017}.
\subsection*{ Acknowledgement } This work is supported in part by the Ministry of Human Resource Development (MHRD), Government of India.
\bibliographystyle{authordate1}
\section{Conclusion}
\label{sec:conclusion}
We propose an inductive framework for incorporating side information for tensor completion in multi-aspect streaming and streaming settings. The proposed framework can also be used for tensor completion with side information in batch setting. Given a completely new dataset with side information along multiple modes, SIITA{} can be used to analyze the merits of different side information for tensor completion. Besides performing better, SIITA{} is also significantly faster than state-of-the-art algorithms. We also propose NN-SIITA{} for incorporating nonnegative constraints and demonstrate how it can be used for mining interpretable clusters.
In many instances, the side information matrices are themselves incomplete \cite{Kishan2017}. In future, we plan to extend our proposed framework to recover missing data in the side information matrices besides completing the tensor.
\section{Experiments}
\label{sec:exp}
We evaluate SIITA{} against other state-of-the-art baselines in two dynamic settings viz., (1) multi-aspect streaming setting (\refsec{sec:mast}), and (2) traditional streaming setting (\refsec{sec:olstec}). We then evaluate effectiveness of SIITA{} in the non-streaming batch setting (\refsec{sec:static}). We analyze the effect of different types of side information in \refsec{sec:ablation}. Finally, we evaluate the performance of NN-SIITA{} in the unsupervised setting in \refsec{sec:nnsiita_exp}.
\textbf{Datasets}: Datasets used in the experiments are summarized in \reftbl{tbl:mast_data}. {\bf MovieLens 100K} \cite{Harper2015} is a standard movie recommendation dataset.
\textbf{YELP} is a downsampled version of the YELP(Full) dataset \cite{Jeon2016}. The YELP(Full) review dataset consists of 70K (user) $\times$ 15K (business) $\times$ 108 (year-month) tensor, and a side information matrix of size 15K (business) $\times$ 68 (city). We select a subset of this dataset as the various baselines algorithms compared are unable to handle datasets of this size. We note that SIITA{}, our proposed method, doesn't have such scalability concerns. In fact, as we show later in \refsec{sec:ablation}, SIITA{} is able to process datasets of much larger sizes.
In order to create YELP out of YELP(Full), we select the top frequent 1000 users and top 1000 frequent businesses and create the corresponding tensor and side information matrix.
After the sampling, we obtain a tensor of dimensions 1000 (user) $ \times$ 992 (business) $\times$ 93 (year-month) and a side information matrix of dimensions 992 (business) $ \times $ 56 (city). \\
\begin{table}[t]
\centering
\small
\caption{\label{tbl:mast_data}Summary of datasets used in the paper. The starting size and increment size given in the table are for Multi-Aspect Streaming setting. For Streaming setting, the tensor grows in the third dimension, one slice at every time step.}
\begin{tabular}{p{2cm}p{2cm}p{2cm}}
\toprule
& MovieLens 100K & YELP \\
\midrule
Modes & user $\times$ movie $\times$ week & user $\times$ business $\times$ year-month \\
\midrule
Tensor Size & 943$\times$1682$\times$31& 1000$\times$992$\times$93 \\
\midrule
Starting size & 19$\times$34$\times$2& 20$\times$20$\times$2\\
\midrule
Increment step &19, 34, 1 & 20, 20, 2\\
\midrule
Sideinfo matrix & 1682 (movie) $\times$ 19 (genre) & 992 (business) $\times$ 56 (city) \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Multi-Aspect Streaming Setting}
\label{sec:mast}
\begin{table}[t]
\centering
\small
\caption{\label{tbl:rmse_mast}Test RMSE (lower is better) averaged across all the time steps in the multi-aspect streaming tensor sequence setting (Definition 4) for MAST and SIITA{}. SIITA{}, the proposed method, outperforms MAST for all the datasets. \refsec{sec:mast} provides more details.}
\begin{tabular}{p{1.5cm}p{1cm}lll}
\toprule
Dataset & Missing\% & Rank & MAST & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & 1.60& {\bf 1.23}\\
& & 5 & 1.53 & 1.29\\
& & 10 & 1.48 &2.49\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & 1.74 & {\bf 1.28}\\
& & 5 & 1.75 & 1.29\\
& & 10 & 1.64 & 2.55\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 2.03 & {\bf 1.59}\\
& & 5 & 1.98 &1.61\\
& & 10 & 2.02 &2.96\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.90& {\bf 1.43} \\
& & 5 & 1.92 & 1.54\\
& & 10 & 1.93 &4.03\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.94 & {\bf 1.51} \\
& & 5 & 1.94& 1.67\\
& & 10 & 1.96 & 4.04\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 &1.97 & 1.71 \\
& & 5 & 1.97 & {\bf 1.61}\\
& & 10 & 1.97 & 3.49\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.13]{plots_CIKM_eps/TestRMSE_ML_MAST_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.13]{plots_CIKM_eps/TestRMSE_YELP_MAST_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:mast_rmse}Evolution of test RMSE of MAST and SIITA{} with each time step. For both the datasets, SIITA{} attains a stable performance after a few time steps, while the performance of MAST degrades with every time step. Refer to \refsec{sec:mast} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.13]{plots_CIKM_eps/RunTime_ML_MAST_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.13]{plots_CIKM_eps/RunTime_YELP_MAST_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:mast_runtime}Runtime comparison between MAST and SIITA{} at every time step.
SIITA{} is significantly faster than MAST. Refer to \refsec{sec:mast} for more details.}
\end{figure}
We start with experimental analysis of the model in the multi-aspect streaming setting, for which we consider MAST \cite{Song2017} as it is the state-of-the-art baseline.. \\
{\bf MAST \cite{Song2017}}: MAST is a dynamic low-rank tensor completion algorithm, which enforces nuclear norm regularization on the decomposition matrices of CP. A tensor-based Alternating Direction Method of Multipliers is used for solving the optimization problem.
We experiment with the MovieLens 100K and YELP datasets. Since the third mode is time in both datasets, i.e., (week) in MovieLens 100K and (year-month) in YELP, one way to simulate the multi-aspect streaming sequence (Definition 3) is by considering
every slice in the third mode as one time step in the sequence, and letting the tensor grow along the other two modes with every time step, similar to the ladder structure given in \cite[Section~3.3]{Song2017}. Note that this is different from the traditional streaming setting, where the tensor only grows in the time mode while the other two modes remain fixed.
In contrast, in the multi-aspect setting considered here, new users can join the system within the same month but on different days, or different movies can be released on different days of the same week. Therefore, in our simulations we treat the third mode
as a regular mode and generate a more general multi-aspect streaming tensor sequence; the sizes of the starting tensor and the increase in size at every time step are given in \reftbl{tbl:mast_data}. Parameters for MAST are set based on the guidelines provided in
\cite[Section~4.3]{Song2017}.
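As an illustration of how such a growing sequence can be generated, the following Python/NumPy sketch (not the authors' Matlab code) yields a nested sequence of sub-tensors by enlarging the index range along every mode at each time step; the starting sizes and increments shown are hypothetical placeholders, the actual values being those of \reftbl{tbl:mast_data}.
\begin{verbatim}
import numpy as np

def multi_aspect_stream(X, start, steps, num_steps):
    """Yield a nested sequence X^(1) <= X^(2) <= ... of sub-tensors of X
    by growing the index range along every mode at each time step."""
    sizes = np.array(start)
    full = np.array(X.shape)
    for t in range(num_steps):
        cur = np.minimum(sizes, full)   # never exceed the full tensor
        yield X[tuple(slice(0, s) for s in cur)]
        sizes = sizes + np.array(steps)

# hypothetical sizes for a users x movies x weeks tensor
X = np.random.rand(943, 1682, 31)
for X_t in multi_aspect_stream(X, start=(200, 400, 5),
                               steps=(30, 50, 1), num_steps=5):
    print(X_t.shape)
\end{verbatim}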
We compute the root mean square error on test data (test RMSE; lower is better) at every time step and report the test RMSE averaged across all the time steps in \reftbl{tbl:rmse_mast}.
We perform experiments on multiple train-test splits for each dataset. We vary the test percentage, denoted by {\it Missing\%} in \reftbl{tbl:rmse_mast}, and the rank of decomposition, denoted by {\it Rank} for both the datasets. For every (Missing\%, Rank) combination, we run both models on ten random train-test splits and report the average.
For SIITA{}, Rank = $r$ in \reftbl{tbl:rmse_mast} represents the Tucker-rank $(r, r, r)$.
As can be seen from \reftbl{tbl:rmse_mast}, the proposed SIITA{} achieves better results than MAST. \reffig{fig:mast_rmse} shows the plots for test RMSE at every time step. Since SIITA{} handles the sparsity in the data effectively, it is significantly faster than MAST, as can be seen from \reffig{fig:mast_runtime}. Overall, we find that SIITA{}, the proposed method, is more effective and faster than MAST in the multi-aspect streaming setting.
\subsection{Streaming Setting}
\label{sec:olstec}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/TestRMSE_ML_Streaming_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.154]{plots_CIKM_eps/TestRMSE_YELP_Streaming_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:olstec_rmse} Evolution of Test RMSE of TeCPSGD, OLSTEC and SIITA{} with each time step. In both datasets, SIITA{} performs significantly better than the baseline algorithms in the pure streaming setting.
Refer to \refsec{sec:olstec} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/Runtime_ML_Streaming_20_Missing}\\
{\small (a) MovieLens 100K \\ (20\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.154]{plots_CIKM_eps/RunTime_YELP_Streaming_20_Missing}\\
{\small (b) YELP \\ (20\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:olstec_runtime}Runtime comparison between TeCPSGD, OLSTEC and SIITA{}. SIITA{} is able to exploit sparsity in the data and is much faster. Refer to \refsec{sec:olstec} for more details.}
\end{figure}
\begin{table}[t]
\centering
\small
\caption{Test RMSE averaged across all the time steps in the streaming setting for TeCPSGD, OLSTEC, a state-of-the-art streaming tensor completion algorithm, and SIITA. SIITA{} outperforms the baseline algorithms significantly. See \refsec{sec:olstec} for more details. \label{tbl:rmse_streaming}}
\begin{tabular}{p{1.3cm}p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}p{1.0cm}}
\toprule
Dataset & Missing\% & Rank & TeCPSGD & OLSTEC & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & 3.39 & 5.46 &{\bf 1.53}\\
& & 5 & 3.35 & 4.65& 1.54\\
& & 10 & 3.19 & 4.96& 1.71\\
\cline{2-6}
&\multirow{3}{5pt}{50\%} & 3 & 3.55 & 8.39& {\bf 1.63}\\
& & 5 & 3.40 & 6.73& 1.64\\
& & 10 & 3.23 &3.66 &1.73\\
\cline{2-6}
&\multirow{3}{5pt}{80\%} & 3 & 3.78 &3.82 & 1.79\\
& & 5 & 3.77 & 3.80& {\bf 1.75}\\
& & 10 & 3.84 & 4.34& 2.47\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 4.55 & 4.04 & {\bf 1.45}\\
& & 5 & 4.79 & 4.04& 1.59\\
& & 10 & 5.17 & 4.03& 2.85\\
\cline{2-6}
& \multirow{3}{5pt}{50\%} & 3 & 4.67 & 4.03& {\bf 1.55}\\
& & 5 & 5.03 & 4.03& 1.67\\
& & 10 & 5.25 & 4.03 & 2.69\\
\cline{2-6}
& \multirow{3}{5pt}{80\%} & 3 & 4.99 & 4.02& {\bf 1.73}\\
& & 5 & 5.17 & 4.02& 1.78\\
& & 10 & 5.31 & 4.01& 2.62\\
\bottomrule
\end{tabular}
\end{table}
In this section, we simulate the pure streaming setting by letting the tensor grow only in the third mode at every time step. The number of time steps for each dataset in this setting is the dimension of the third mode, i.e., 31 for MovieLens 100K and 93 for YELP.
We compare the performance of SIITA{} with TeCPSGD and OLSTEC algorithms in the streaming setting.\\
~\\
{\bf TeCPSGD \cite{Mardani2015}:} TeCPSGD is an online Stochastic Gradient Descent based algorithm for recovering missing data in streaming tensors. This algorithm is based on PARAFAC decomposition. TeCPSGD is the first proper tensor completion algorithm in the dynamic setting.\\
~\\
{\bf OLSTEC \cite{Kasai2016}:} OLSTEC is an online tensor tracking algorithm for partially observed data streams corrupted by noise. OLSTEC is a second order stochastic gradient descent algorithm based on CP decomposition exploiting recursive least squares. OLSTEC is the state-of-the-art for streaming tensor completion.
We report test RMSE, averaged across all time steps, for both MovieLens 100K and YELP datasets. Similar to the multi-aspect streaming setting, we run all the algorithms for multiple train-test splits and for each split we run all the algorithms with different ranks. For every (Missing\%, Rank) combination, we run all the algorithms on ten random train-test splits and report the average.
SIITA{} significantly outperforms all the baselines in this setting, as can be seen from \reftbl{tbl:rmse_streaming}. Figure \ref{fig:olstec_rmse} shows the average test RMSE of every algorithm at every time step. From \reffig{fig:olstec_runtime} it can be seen that SIITA{} takes much less time compared to other algorithms. The spikes in the plots suggest that the particular slices are relatively less sparse.
\subsection{Batch Setting}
\label{sec:static}
\begin{table}[t]
\centering
\small
\caption{Mean Test RMSE across multiple train-test splits in the Batch setting. SIITA{} achieves lower test RMSE on both the datasets compared to AirCP, a state-of-the-art algorithm for this setting. Refer to \refsec{sec:static} for details. \label{tbl:rmse_static}}
\begin{tabular}{p{1.2cm}p{1cm}lll}
\toprule
Dataset & Missing\% & Rank & AirCP & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & 3.351 & {\bf1.534}\\
& & 5 & 3.687 & 1.678\\
& & 10 & 3.797 &2.791\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & 3.303 & {\bf 1.580}\\
& & 5 & 3.711& 1.585\\
& & 10 & 3.894& 2.449\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 3.883 & {\bf 1.554}\\
& & 5 & 3.997 & 1.654\\
& & 10 & 3.791 & 3.979\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.094 & {\bf1.052} \\
& & 5 & 1.086 & 1.056\\
& & 10 & 1.077 & 1.181\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.096 & 1.097 \\
& & 5 & 1.095 & {\bf1.059}\\
& & 10 & 1.719 & 1.599\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 & 1.219 & 1.199 \\
& & 5 & {\bf 1.118} & 1.156\\
& & 10 & 2.210 & 2.153\\
\bottomrule
\end{tabular}
\end{table}
Even though our primary focus is on proposing an algorithm for the multi-aspect streaming setting, SIITA{} can be run as a tensor completion algorithm with side information in the batch (i.e., non-streaming) setting as well. To run in batch mode, we set $K=1$ in \refalg{alg:simast} and run for multiple passes over the data. In this setting, AirCP \cite{Ge2016} is the current state-of-the-art algorithm, which is also capable of handling side information. We consider AirCP as the baseline in this section.
The main focus of this setting is to demonstrate that SIITA{} does a good job in incorporating the side information.
~\\
{\bf AirCP \cite{Ge2016}}: AirCP is a CP based tensor completion algorithm proposed for recovering the spatio-temporal dynamics of online memes. This algorithm incorporates auxiliary information from memes, locations and times. An alternating direction method of multipliers (ADMM) based algorithm is employed for solving the optimization problem.
AirCP expects the side information matrices to be similarity matrices and takes as input the Laplacian of the similarity matrices. However, in the datasets we experiment with, the side information is available as feature matrices. Therefore, we consider the covariance matrices $\mat{A}_i\mat{A}_i^\top$ as similarity matrices.
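For concreteness, this conversion can be sketched as follows (illustrative NumPy; the unnormalized graph Laplacian $\mat{L}=\mat{D}-\mat{S}$ with degree matrix $\mat{D}$ is assumed, and the feature dimensions are hypothetical).
\begin{verbatim}
import numpy as np

def laplacian_from_features(A):
    """Laplacian of the covariance similarity matrix S = A A^T."""
    S = A @ A.T                    # covariance used as similarity
    D = np.diag(S.sum(axis=1))     # degree matrix
    return D - S                   # unnormalized graph Laplacian

A_movie = np.random.rand(1682, 19)   # hypothetical movie x genre features
L_movie = laplacian_from_features(A_movie)
\end{verbatim}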
We run both algorithms till convergence and report test RMSE. For each dataset, we experiment with different levels of test set sizes,
and for each such level, we run our experiments on 10 random splits. We report the mean test RMSE per train-test percentage split. We run our experiments with multiple ranks of factorization. Results are summarized in
\reftbl{tbl:rmse_static}. From this table, we observe that SIITA{} achieves better results. Note that the rank reported for SIITA{} is the Tucker rank, i.e., rank = 3 corresponds to a factorization rank of (3, 3, 3) for SIITA{}.
\textbf{Remark:} Since all the baselines considered for various settings are CP based, we only compare for CP tensor rank. From Tables \ref{tbl:rmse_mast}, \ref{tbl:rmse_streaming} and \ref{tbl:rmse_static} it can be seen that the performance suffers for rank = 10. However, when we run SIITA{} with a rank = (10, 10, 2) we achieve lower test RMSE.
\subsection{Analyzing Merits of Side Information}
\label{sec:ablation}
\begin{table}[t]
\centering
\small
\caption{\label{tbl:mast_no_si} Test RMSE averaged across multiple train-test splits in the Multi-Aspect Streaming setting, analyzing the merits of side information. See \refsec{sec:ablation} for more details.}
\begin{tabular}{p{1.5cm}p{1cm}lll}
\toprule
Dataset & Missing\% & Rank & SIITA{} (w/o SI) & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & {\bf 1.19}& 1.23\\
& & 5 & {\bf 1.19} & 1.29\\
& & 10 & 2.69 &2.49\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & {\bf 1.25} & 1.28\\
& & 5 & {\bf 1.25} & 1.29\\
& & 10 & 3.28 & 2.55\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 1.45 & 1.59\\
& & 5 & {\bf 1.42} &1.61\\
& & 10 & 2.11 &2.96\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.44& {\bf 1.43} \\
& & 5 & 1.48 & 1.54\\
& & 10 & 3.90 &4.03\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.57 & {\bf 1.51} \\
& & 5 & 1.62& 1.67\\
& & 10 & 5.48 & 4.04\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 &1.75 & 1.71 \\
& & 5 & 1.67 & {\bf 1.61}\\
& & 10 & 5.28 & 3.49\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\small
\caption{\label{tbl:streaming_no_si} Test RMSE averaged across multiple train-test splits in the streaming setting, analyzing the merits of side information. See \refsec{sec:ablation} for more details.}
\begin{tabular}{p{1.5cm}p{1cm}lll}
\toprule
Dataset & Missing\% & Rank & SIITA{} (w/o SI) & SIITA{} \\
\midrule
\multirow{9}{35pt}{MovieLens 100K} &\multirow{3}{5pt}{20\%} & 3 & {\bf 1.46} & 1.53\\
& & 5 & 1.53 & 1.54\\
& & 10 & 1.55 &1.71\\
\cline{2-5}
&\multirow{3}{5pt}{50\%} & 3 & 1.58 & 1.63\\
& & 5 & 1.67 & 1.64\\
& & 10 & {\bf 1.56} & 1.73\\
\cline{2-5}
&\multirow{3}{5pt}{80\%} & 3 & 1.76 & 1.79\\
& & 5 & {\bf 1.74} &1.75\\
& & 10 & 2.31 &2.47\\
\midrule
\multirow{9}{30pt}{YELP} & \multirow{3}{5pt}{20\%} & 3 & 1.46 & {\bf 1.45} \\
& & 5 & 1.62 & 1.59\\
& & 10 & 2.82 &2.85\\
\cline{2-5}
& \multirow{3}{5pt}{50\%} & 3 & 1.57 & {\bf 1.55} \\
& & 5 & 1.69 & 1.67\\
& & 10 & 2.54 & 2.67\\
\cline{2-5}
& \multirow{3}{5pt}{80\%} & 3 &1.76 & {\bf 1.73} \\
& & 5 & 1.80 & 1.78\\
& & 10 & 2.25 & 2.62\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/TestRMSE_ML_MAST_NO_SI_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/TestRMSE_YELP_MAST_NO_SI_80_Missing}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:rmse_no_si_mast}Evolution of test RMSE with every time step in the multi-aspect streaming setting for SIITA{} and SIITA{} (w/o SI). See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/RunTime_ML_MAST_NO_SI_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/RunTime_YELP_MAST_NO_SI}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:runtime_no_si_mast}Run Time comparison between SIITA{} and SIITA{} (w/o SI) in the multi-aspect streaming setting. See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/TestRMSE_ML_NO_SI_Streaming_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/TestRMSE_YELP_Streaming_No_SI_80_Missing}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:rmse_no_si_streaming}Evolution of test RMSE with every time step in the streaming setting for SIITA{} and SIITA{}(w/o SI). See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/RunTime_ML_NO_SI_Streaming_80_Missing}\\
{\small (a) MovieLens 100K \\ (80\% Missing)}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/RunTime_YELP_Streaming_NO_SI}\\
{\small (b) YELP \\ (80\% Missing)}
\end{minipage}
\end{tabular}
\caption{\label{fig:runtime_no_si_streaming}Run Time comparison between SIITA{} and SIITA{} (w/o SI) in the Streaming setting. See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.14]{plots_CIKM_eps/TestRMSE_ML1M_MAST}\\
{\small (a) Test RMSE at every time step}
\end{minipage}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/RunTime_ML1M_MAST}\\
{\small (b) Run Time at every time step}
\end{minipage}
\end{tabular}
\caption{\label{fig:ML1M_rmse_mast}Investigating the merits of side information for MovieLens 1M dataset in the multi-aspect streaming setting. Side information along the user mode is the most useful for tensor completion. See \refsec{sec:ablation} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=0.99\textwidth]{plots/ML_1M_Ablation_rmse}\\
{\small (a) Evolution of Test RMSE against epochs.}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=0.99\textwidth]{plots/ML_1M_Ablation_Runtime}\\
{\small (b) Time elapsed with every epoch.}
\end{minipage}
\end{tabular}
\caption{\label{fig:ablation}Investigating the merits of side information for MovieLens 1M dataset in the batch setting. Side information along the user mode is the most useful for tensor completion. See \refsec{sec:ablation} for more details.}
\end{figure}
Our goal in this paper is to propose a flexible framework using which side information may be easily incorporated during incremental tensor completion, especially in the multi-aspect streaming setting. Our proposed method, SIITA{}, is motivated by this need. In order to evaluate merits of different types of side information on SIITA{}, in this section we report several experiments where performances of SIITA{} with and without various types of side information are compared.
\textbf{Single Side Information}: In the first experiment, we compare SIITA{} with and without side information (by setting side information to identity; see last paragraph of \refsec{sec:prelim}). We run the experiments in both multi-aspect streaming and streaming settings.
\reftbl{tbl:mast_no_si} reports the mean test RMSE of SIITA{} and SIITA{} (w/o SI), which stands for running SIITA{} without side information, for both datasets in multi-aspect streaming setting. For MovieLens 100K, SIITA{} achieves better performance without side information. Whereas for YELP, SIITA{} performs better with side information.
\reffig{fig:rmse_no_si_mast} shows the evolution of test RMSE at every time step for both datasets. \reffig{fig:runtime_no_si_mast} shows the runtime of SIITA{} when run with and without side information.
SIITA{} runs faster in the presence of side information. \reftbl{tbl:streaming_no_si} reports the mean test RMSE for both the datasets in the streaming setting. Similar to the multi-aspect streaming setting, SIITA{} achieves better performance without side information for MovieLens 100K dataset and with side information for YELP dataset.
\reffig{fig:rmse_no_si_streaming} shows the test RMSE of SIITA{} at every time step when run with side information and without side information. \reffig{fig:runtime_no_si_streaming} shows the runtime at every time step.
\textbf{Multi Side Information}: In all the datasets and experiments considered so far, side information along only one mode is available to SIITA{}. In the next experiment, we consider the setting where side information along multiple modes is available. For this experiment, we consider the {\bf MovieLens 1M} \cite{Harper2015} dataset, a standard dataset of 1 million movie ratings. This dataset consists of a 6040 (user) $\times$ 3952 (movie) $\times$ 149 (week) tensor, along with two side information matrices: a 6040 (user) $\times$ 21 (occupation) matrix, and a 3952 (movie) $\times$ 18 (genre) matrix. As this dataset consists of side information along multiple modes, it gives us an opportunity to perform this study conclusively.
Note that among all the methods considered in the paper, SIITA{} is the only method that scales to the size of the MovieLens 1M dataset.
We create four variants of the dataset: the first with the tensor and all the side information matrices, denoted by {MovieLens 1M}; the second with the tensor and only the side information along the movie mode, denoted by {MovieLens 1M (movie mode)};
similarly, {MovieLens 1M (user mode)} with only user-mode side information; and finally {MovieLens 1M (no si)} with only the tensor and no side information.
We run SIITA{} in multi-aspect streaming and batch modes for all the four variants. Test RMSE at every time step in the multi-aspect streaming setting is shown in \reffig{fig:ML1M_rmse_mast}(a).
Evolution of test RMSE (lower is better) against epochs is shown in \reffig{fig:ablation}(a) for the batch mode. From Figures \ref{fig:ML1M_rmse_mast}(a) and \ref{fig:ablation}(a), it is evident that the variant {MovieLens 1M (user mode)} achieves the best overall performance, implying that the side information along the user mode is more useful for tensor completion in this dataset. However, {MovieLens 1M (movie mode)} achieves poorer performance than the other variants, implying that movie-mode side information is not useful for tensor completion in this case. This is also the only side information mode available to SIITA{} during the MovieLens 100K experiments in Tables \ref{tbl:mast_no_si} and \ref{tbl:streaming_no_si}. This sub-optimal side information may be a reason for SIITA{}'s diminished performance when using side information on the MovieLens 100K dataset. From the runtime comparisons in Figures \ref{fig:ablation}(b) and \ref{fig:ML1M_rmse_mast}(b), we observe that {MovieLens 1M} (where both types of side information are available) takes the least time, while the variant {MovieLens 1M (no si)} takes the most time to run. This is a benefit we derive from the inductive framework: in the presence of useful side information, SIITA{} not only achieves better performance but also runs faster.
\subsection{Unsupervised Setting}
\label{sec:nnsiita_exp}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/Avg_Purity_ML_MAST}\\
{\small (a) MovieLens 100K}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/Avg_Purity_YELP_MAST}\\
{\small (b) YELP}
\end{minipage}
\end{tabular}
\caption{\label{fig:nn_siita} Average Purity of clusters learned by NN-SIITA{} and NN-SIITA{} (w/o SI) at every time step in the unsupervised setting. For both datasets, side information helps in learning purer clusters. See \refsec{sec:nnsiita_exp} for more details.}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\noindent \begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/Vary_W_ML_MAST.eps}\\
{\small (a) MovieLens 100K}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[scale=0.15]{plots_CIKM_eps/Vary_W_YELP_MAST.eps}\\
{\small (b) YELP}
\end{minipage}
\end{tabular}
\caption{\label{fig:nn_siita_w} Evolution of mean average purity with $w$ for NN-SIITA{} and NN-SIITA{} (w/o SI) for both MovieLens 100K and YELP datasets. See \refsec{sec:nnsiita_exp} for more details.}
\end{figure}
\begin{table*}[tb]
\scriptsize
\centering
\caption{\label{tbl:nnsita_clusters} Example clusters learned by NN-SIITA{} for MovieLens 100K and YELP datasets. The first column is an example of a pure cluster and the second column is an example of noisy cluster. See \refsec{sec:nnsiita_exp} for more details.}
\begin{tabular}{p{1.5cm}p{2.8cm}p{4cm}p{3cm}p{3.6cm}}
\toprule
& \multicolumn{2}{c}{Cluster (Action, Adventure, Sci-Fi)} & \multicolumn{2}{c}{Cluster (Noisy)} \\
\cline{2-5}
\multirow{6}{30pt}{MovieLens100K} & \multicolumn{1}{c}{Movie} & \multicolumn{1}{c}{Genres} & \multicolumn{1}{c}{Movie} & \multicolumn{1}{c}{Genres} \\
\cline{2-5}
& The Empire Strikes Back (1980) & Action, Adventure, Sci-Fi, Drama, Romance & Toy Story (1995)& Animation, Children's, Comedy\\
& Heavy Metal (1981) & Action, Adventure, Sci-Fi, Animation, Horror & From Dusk Till Dawn (1996)& Action, Comedy, Crime, Horror, Thriller\\
& Star Wars (1977) & Action, Adventure, Sci-Fi, Romance, War & Mighty Aphrodite (1995)& Comedy\\
& Return of the Jedi (1983) & Action, Adventure, Sci-Fi, Romance, War & Apollo 13 (1995)& Action, Drama, Thriller\\
& Men in Black (1997) & Action, Adventure, Sci-Fi, Comedy & Crimson Tide (1995)& Drama, Thriller, War \\
\bottomrule
& \multicolumn{2}{c}{Cluster (Phoenix)} & \multicolumn{2}{c}{Cluster (Noisy)} \\
\cline{2-5}
\multirow{6}{30pt}{YELP} & \multicolumn{1}{c}{Business} & \multicolumn{1}{c}{Location} & \multicolumn{1}{c}{Business} & \multicolumn{1}{c}{Location} \\
\cline{2-5}
& Hana Japanese Eatery & \multicolumn{1}{c}{Phoenix} & The Wigman& \multicolumn{1}{c}{Litchfield Park }\\
& Herberger Theater Center & \multicolumn{1}{c}{Phoenix} & Hitching Post 2 & \multicolumn{1}{c}{Gold Canyon }\\
& Scramble A Breakfast Joint & \multicolumn{1}{c}{Phoenix}& Freddys Frozen Custard \& Steakburgers& \multicolumn{1}{c}{Glendale}\\
& The Arrogant Butcher & \multicolumn{1}{c}{Phoenix}& Costco& \multicolumn{1}{c}{Avondale}\\
& FEZ & \multicolumn{1}{c}{Phoenix}& Hana Japanese Eatery& \multicolumn{1}{c}{Phoenix}\\
\bottomrule
\end{tabular}
\end{table*}
In this section, we consider an unsupervised setting with the aim of discovering underlying clusters of items, such as movies in the MovieLens 100K dataset and businesses in the YELP dataset, from a sequence of sparse tensors.
It is desirable to mine clusters such that similar items are grouped together. Nonnegative constraints are essential for mining interpretable clusters as noted by \cite{Hyvoenen2008, Murphy2012}. Therefore, for this set of experiments we consider
the nonnegative version of SIITA{} denoted by NN-SIITA{}. We investigate whether side information helps in discovering more coherent clusters of items in both datasets.
We run our experiments in the multi-aspect streaming setting. At every time step, we compute the {\it Purity} of clusters and report the average Purity.
Purity of a cluster is defined as the percentage of the cluster that is coherent. For example, in MovieLens 100K, a cluster of movies is 100\% pure if all the movies belong to the same genre and 50\% pure if only half of the cluster belongs to the same genre. Formally, suppose clusters of items along mode-$i$ are desired, and let $r_i$ be the rank of factorization along mode-$i$.
Every column of the matrix $\mat{A}_i \mat{U}_i$ is considered a distribution over the items, and the top-$w$ items of the distribution represent a cluster. For the $p$-th cluster, i.e., the cluster representing column $p$ of the matrix $\mat{A}_i \mat{U}_i$, let $w_p$ be the number of items among the top-$w$ items that belong to the same category. Purity and average Purity are then defined as follows:
\begin{equation*}
\text{Purity}(p) = w_p/w,
\end{equation*}
\begin{equation*}
\text{average-Purity} = \frac{1}{r_i} \sum\limits_{p=1}^{r_i} \text{Purity}(p).
\end{equation*}
Note that Purity is computed per cluster, while average Purity is computed for a set of clusters. Higher average Purity indicates a better clustering.
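A minimal sketch of this computation is given below (illustrative Python; as an assumption consistent with the description later in this section, the category of a cluster is taken to be the most common category among its top-$w$ items).
\begin{verbatim}
import numpy as np
from collections import Counter

def average_purity(AU, item_labels, w=5):
    """AU: nonnegative (items x r_i) matrix A_i U_i, one cluster per column.
    item_labels: item index -> category (e.g., most common genre)."""
    purities = []
    for p in range(AU.shape[1]):
        top_w = np.argsort(-AU[:, p])[:w]            # top-w items of cluster p
        counts = Counter(item_labels[i] for i in top_w)
        w_p = counts.most_common(1)[0][1]            # items in dominant category
        purities.append(w_p / w)
    return float(np.mean(purities))
\end{verbatim}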
We report average Purity at every time step for both the datasets. We run NN-SIITA{} with and without side information. \reffig{fig:nn_siita} shows average Purity at every time step for MovieLens 100K and YELP datasets. It is clear from \reffig{fig:nn_siita} that for both the datasets side information helps in discovering better clusters.
We compute the Purity for MovieLens 100K dataset based on the genre information of the movies and for the YELP dataset we compute Purity based on the geographic locations of the businesses. \reftbl{tbl:nnsita_clusters} shows some example clusters learned by NN-SIITA{}. For MovieLens 100K dataset, each movie can belong to multiple genres. For computing the Purity, we consider the most common genre for all the movies in a cluster. Results shown in \reffig{fig:nn_siita} are for $w = 5$. However, we also vary $w$ between 5 and 25 and report the \emph{mean} average-Purity, which is obtained by computing the mean across all the time steps in the multi-aspect streaming setting. As can be seen from \reffig{fig:nn_siita_w}, having side information helps in learning better clusters for all the values of $w$.
For MovieLens 100K, the results reported are with a factorization rank of $(3,7,3)$ and for YELP, the rank of factorization is $(5,7,3)$. Since this is an unsupervised setting, note that we use the entire data for factorization, i.e., there is no train-test split.
\section{Introduction}
\label{sec:intro}
Low-rank tensor completion is a well-studied problem with various applications in recommendation systems \cite{Symeonidis2008}, link prediction \cite{Ermis2015}, and compressed sensing \cite{Cichocki2015}, to name a few. The majority of previous works focus on solving the problem in a static setting \cite{Filipovic2015,Guo2017,Kasai2016a}. However, most real-world data is dynamic; for example, in an online movie recommendation system the numbers of users and movies increase with time. It is prohibitively expensive to use static algorithms for dynamic data. Therefore, there has been an increasing interest in developing algorithms for dynamic low-rank tensor completion \cite{Kasai2016,Mardani2015,Song2017}.
In many real-world scenarios, besides the tensor data, additional side information is also available, e.g., in the form of matrices. In dynamic scenarios, the side information grows with time as well; for instance, movie-genre information in movie recommendation. There has been a considerable amount of work on incorporating side information into tensor completion \cite{Narita2011,Ge2016}. However, the previous works on incorporating side information deal with the static setting. In this paper, we propose a dynamic low-rank tensor completion model that incorporates side information growing with time.
Most of the current dynamic tensor completion algorithms work in the streaming scenario, i.e., the case where the tensor grows only in one mode, which is usually the time mode. In this case, the side information is a static matrix. Multi-aspect streaming scenario \cite{Fanaee-T2015,Song2017}, on the other hand, is a more general framework, where the tensor grows in all the modes of the tensor. In this setting, the side information matrices also grow. \reffig{fig:illustartion} illustrates the difference between streaming and multi-aspect streaming scenarios with side information.
Besides side information, incorporating nonnegative constraints into tensor decomposition is desirable in an unsupervised setting. Nonnegativity is essential for discovering interpretable clusters \cite{Hyvoenen2008, Murphy2012}. Nonnegative tensor learning is explored for applications in computer vision \cite{Shashua2005,Kim2007}, unsupervised induction of relation schemas \cite{Nimishakavi2016}, to name a few.
Several algorithms for online Nonnegative Matrix Factorization (NMF) exist in the literature \cite{Lefevre2011, Guan2012, Zhao2017}, but algorithms for nonnegative online tensor decomposition with side information are not explored to the best of our knowledge. We also fill this gap by showing how nonnegative constraints can be enforced on the decomposition learned by our proposed framework SIITA{}.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\noindent\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=1\textwidth]{figs/streaming.pdf}\\
{\small(a) Streaming tensor sequence with side information.}
\end{minipage}
\begin{minipage}[b]{0.5\hsize}
\centering
\includegraphics[width=1\textwidth]{figs/MAST.pdf}\\
{\small (b) Multi-aspect streaming tensor sequence with side information.}
\end{minipage}
\end{tabular}
\caption{Illustration of streaming and multi-aspect streaming sequences with side information. The blue block represents the tensor at the current time step and the green block represents the side information. The blocks in grey represent the data at previous time steps.
For easy understanding, we show side information along only one mode.}
\label{fig:illustartion}
\end{figure}
In this paper, we work with the more general multi-aspect streaming scenario and make the following contributions:
\begin{itemize}
\item Formally define the problem of multi-aspect streaming tensor completion with side information.
\item Propose a Tucker based framework Side Information infused Incremental Tensor Analysis (SIITA{}) for the problem of multi-aspect streaming tensor completion with side information. We employ a stochastic gradient descent (SGD) based algorithm for solving the optimization problem.
\item Incorporate nonnegative constraints with SIITA{} for discovering the underlying clusters in unsupervised setting.
\item Demonstrate the effectiveness of SIITA{} using extensive experimental analysis on multiple real-world datasets in all the settings.
\end{itemize}
The organization of the paper is as follows. In Section \ref{sec:prelim}, we introduce the definition of a multi-aspect streaming tensor sequence with side information, and we discuss our proposed framework SIITA{} in Section \ref{sec:mast_si}. We also discuss how nonnegative constraints can be incorporated into SIITA{} in Section \ref{sec:mast_si}. The experiments are shown in Section \ref{sec:exp}, where SIITA{} performs effectively in various settings. All our code is implemented in Matlab and can be found at the following anonymous link: \url{https://bit.ly/2IXx2PM}.
\section{Preliminaries}
\label{sec:prelim}
An $N^{th}$-order or $N$-mode tensor is an $N$-way array. We use boldface calligraphic letters to represent tensors (e.g., $\ten{X}$), boldface uppercase to represent matrices (e.g., $\mat{U}$), and boldface lowercase to represent vectors (e.g., $\mat{v}$). $\ten{X}[i_1, \cdots, i_N]$ represents the entry of $\ten{X}$ indexed by $[i_1, \cdots, i_N]$.
~\\
{\bf Definition 1 (Coupled Tensor and Matrix) \cite{Song2017}}: A matrix and a tensor are called coupled if they share a mode. For example, a ${user} \times {movie} \times { time}$ tensor and a ${movie} \times {genre}$ matrix are coupled along the {movie} mode.
~\\
{\bf Definition 2 (Tensor Sequence) \cite{Song2017}}:
A sequence of $N^{th}$-order tensors $\ten{X}^{(1)}, \ldots , \ten{X}^{(t)}, \dots$ is called a tensor sequence denoted as $\{\ten{X}^{(t)}\}$, where each $\ten{X}^{(t)} \in \mathbb{R}^{I_1^t \times I_2^t \times \ldots \times I_N^t}$ at time instance $t$.
~\\
{\bf Definition 3 (Multi-aspect streaming Tensor Sequence) \label{def:mast-seq}\cite{Song2017}}:
A tensor sequence of $N^{th}$-order tensors $\{\ten{X}^{(t)}\}$ is called a multi-aspect streaming tensor sequence if for any $t \in \mathbb{Z}^{+}$, $\ten{X}^{(t-1)} \in \mathbb{R}^{I_1^{t-1} \times I_2^{t-1} \times \ldots \times I_N^{t-1}}$ is the sub-tensor of $\ten{X}^{(t)} \in \mathbb{R}^{I_1^t \times I_2^t \times \ldots \times I_N^t}$, i.e.,
\[
\ten{X}^{(t-1)} \subseteq \ten{X}^{(t)},~\mathrm{where}~I_{i}^{t-1} \le I_{i}^{t},~\forall 1 \le i \le N.
\]
Here, $t$ increases with time, and $\ten{X}^{(t)}$ is the snapshot tensor of this sequence at time $t$.
~\\
{\bf Definition 4 (Multi-aspect streaming Tensor Sequence with Side Information) }: Given a time instance $t$, let $\mat{A}_{i}^{(t)} \in \mathbb{R}^{I_{i}^{t} \times M_i}$ be a side information (SI) matrix corresponding to the $i^{th}$ mode of $\ten{X}^{(t)}$ (i.e., rows of $\mat{A}_{i}^{(t)}$ are coupled along the $i^{th}$ mode of $\ten{X}^{(t)}$). While the number of rows in the SI matrices along a particular mode $i$ may increase over time, the number of columns remains the same, i.e., $M_i$ does not depend on time. In particular, we have,
\begin{align*}
\mat{A}_{i}^{(t)} &=
\begin{bmatrix}
\mat{A}_{i}^{(t-1)} \\
\Delta_{i}^{(t)}
\end{bmatrix},~\mathrm{where}~\Delta_{i}^{(t)} \in \mathbb{R}^{[I_i^{(t)} - I_{i}^{(t-1)}] \times M_i}.
\end{align*}
Putting side information matrices of all the modes together, we get the side information set $\set{A}^{(t)}$,
\[
\set{A}^{(t)} = \{\mat{A}_{1}^{(t)}, \ldots, \mat{A}_{N}^{(t)}\}.
\]
Given an $N^{th}$-order multi-aspect streaming tensor sequence $\{\ten{X}^{(t)}\}$, we define a multi-aspect streaming tensor sequence with side information as $\{(\ten{X}^{(t)}, \set{A}^{(t)})\}$.
We note that side information may not be available for all modes. In such cases, an identity matrix of appropriate size may be used as $\mat{A}_{i}^{(t)}$, i.e., $\mat{A}_{i}^{(t)} = \mat{I}^{I_{i}^{t} \times I_{i}^{t}}$, where $M_i = I_{i}^{t}$.
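In an implementation, the growth of the side information matrices in Definition 4 amounts to row-stacking the newly arrived block, and the fallback above is a literal identity matrix; a minimal NumPy sketch:
\begin{verbatim}
import numpy as np

def grow_side_info(A_prev, Delta_t):
    """A_i^(t) = [A_i^(t-1); Delta_i^(t)]: rows for newly arrived entities."""
    return np.vstack([A_prev, Delta_t])

def identity_side_info(I_t):
    """No side information along mode i: A_i^(t) = I, so M_i = I_i^t."""
    return np.eye(I_t)
\end{verbatim}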
The problem of multi-aspect streaming tensor completion with side information is formally defined as follows:
\begin{center}
\framebox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{
{\bf Problem Definition}: Given a multi-aspect streaming tensor sequence with side information $\{(\ten{X}^{(t)}, \set{A}^{(t)})\}$, the goal is to predict the missing values in $\ten{X}^{(t)}$ by utilizing only entries in the relative complement $\ten{X}^{(t)} \setminus \ten{X}^{(t-1)}$ and the available side information $\set{A}^{(t)}$.
}}
\end{center}
\section{Proposed Framework SIITA{}}
\label{sec:mast_si}
In this section, we discuss the proposed framework SIITA{} for the problem of multi-aspect streaming tensor completion with side information.
Let $\{(\ten{X}^{(t)}, \mathcal{A}^{(t)}) \}$ be an $N^{th}$-order multi-aspect streaming tensor sequence with side information. We assume that, at every time step, $\ten{X}^{(t)}[i_1, i_2, \cdots, i_N]$ is observed only for indices $[i_1, i_2, \cdots, i_N] \in \Omega$, where $\Omega$ is a subset of the complete set of indices.
Let the sparsity operator $\mathcal{P}_{\Omega} $ be defined as:
\begin{equation*}
\mathcal{P}_{\Omega}(\ten{X})[i_1, i_2, \cdots, i_N] =
\begin{cases}
\ten{X}[i_1, \cdots, i_N], & \text{if}\ [i_1, \cdots, i_N] \in \Omega\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
Tucker tensor decomposition \cite{Kolda2009} is a form of higher-order PCA for tensors. It decomposes an $N^{th}$-order tensor $\ten{X}$ into a core tensor multiplied by a matrix along each mode as follows:
\begin{equation*}
\ten{X} \approx \ten{G} \times_1 \mat{U}_1 \times_2 \mat{U}_2 \times_3 \cdots \times_N \mat{U}_N,
\end{equation*}
where, $\mat{U}_i \in \mathbb{R}^{I_i \times r_i}, i=1:N$ are the factor matrices and can be thought of as principal components in each mode. The tensor $\ten{G} \in \mathbb{R}^{r_1 \times r_2 \times \cdots r_N}$ is called the \emph{core tensor}, which shows the interaction between different components. $(r_1, r_2, \cdots, r_N)$ is the (multilinear) rank of the tensor. The $i$-mode matrix product of a tensor $\ten{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots I_N}$ with a matrix $\mat{P} \in \mathbb{R}^{r \times I_i}$ is denoted by $\ten{X}\times_i \mat{P}$, more details can be found in \cite{Kolda2009}. {The standard approach of incorporating side information while learning factor matrices in Tucker decomposition is by using an additive term as a regularizer \cite{Narita2011}. However, in an online setting the additive side information term poses challenges as the side information matrices are also dynamic.} Therefore, we propose the following fixed-rank {\it inductive framework} for recovering missing values in $\ten{X}^{(t)}$, at every time step $t$:
\begin{equation}
\label{eqn:ind_tucker}
\underset{\mat{U}_i \in \mathbb{R}^{M_i \times r_i}, i = 1:N}{\min_{\ten{G} \in \mathbb{R}^{r_1 \times \ldots \times r_N}}} F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}, \{\mat{U}_i\}_{i=1:N}),
\end{equation}
where
\begin{multline}
\label{eqn:F_U_G}
F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}, \{\mat{U}_i\}_{i=1:N}) = \\
\norm{\mathcal{P}_{\Omega}(\ten{X}^{(t)}) -
\mathcal{P}_{\Omega}(\ten{G} \times_1 \mat{A}_1^{(t)}\mat{U}_1 \times_2 \ldots \times_N \mat{A}_N^{(t)}\mat{U}_N)}_F^2 \\
+ \lambda_g \norm{\ten{G}}_F^2 + \sum_{i=1}^{N}\lambda_i \norm{\mat{U}_i}_F^2.
\end{multline}
$\norm{\cdot}_F$ is the Frobenius norm, $\lambda_g > 0$ and $\lambda_i > 0, i=1:N$ are the regularization weights. Conceptually, the inductive framework models the ratings of the tensor as a weighted scalar product of the side information matrices. Note that (\ref{eqn:ind_tucker}) is a generalization of the inductive matrix completion framework \cite{jain2013,Natarajan2014,Si2016}, which has been effective in many applications.
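To make the notation concrete, the following NumPy sketch evaluates the mode-$n$ product, the inductive reconstruction $\ten{G} \times_1 \mat{A}_1\mat{U}_1 \times_2 \ldots \times_N \mat{A}_N\mat{U}_N$, and the objective \eqref{eqn:F_U_G}; it is an illustrative sketch (not the authors' Matlab implementation) that uses dense arrays and a boolean mask for $\Omega$ for readability.
\begin{verbatim}
import numpy as np

def mode_n_product(T, M, n):
    """Mode-n product T x_n M (requires M.shape[1] == T.shape[n])."""
    Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)   # mode-n unfolding
    out = M @ Tn
    new_shape = (M.shape[0],) + tuple(np.delete(T.shape, n))
    return np.moveaxis(out.reshape(new_shape), 0, n)

def reconstruct(G, A_list, U_list):
    """Compute G x_1 (A_1 U_1) x_2 ... x_N (A_N U_N)."""
    X_hat = G
    for i, (A, U) in enumerate(zip(A_list, U_list)):
        X_hat = mode_n_product(X_hat, A @ U, i)
    return X_hat

def objective(X, mask, G, A_list, U_list, lam_g, lams):
    """Objective F: masked squared error plus Frobenius regularizers."""
    R = mask * (X - reconstruct(G, A_list, U_list))
    val = np.sum(R ** 2) + lam_g * np.sum(G ** 2)
    val += sum(l * np.sum(U ** 2) for l, U in zip(lams, U_list))
    return val
\end{verbatim}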
The inductive tensor framework has two-fold benefits over the typical approach of incorporating side information as an additive term. The use of the $\mat{A}_i \mat{U}_i$ terms in the factorization reduces the dimensionality of the variables from $\mat{U}_i \in \mathbb{R}^{I_i \times r_i}$ to $\mat{U}_i \in \mathbb{R}^{M_i \times r_i}$, and typically $M_i \ll I_i$. As a result, the computational time required for computing the gradients and updating the variables decreases remarkably. Similarly to \cite{Kim2007}, we define
\begin{multline*}
\mat{U}^{(\backslash i)} = \big[ \mat{A}_{i-1}^{(t)} \mat{U}_{i-1} \otimes \ldots \otimes \mat{A}_{1}^{(t)} \mat{U}_{1} \otimes \ldots \otimes \\ \nonumber
\mat{A}_{N}^{(t)} \mat{U}_{N} \otimes \ldots \otimes \mat{A}_{i+1}^{(t)} \mat{U}_{i+1} \big ] \nonumber ,
\end{multline*}
which collects Kronecker products of mode matrices except for $\mat{A}_i\mat{U}_i$ in a backward cyclic manner.
The gradients of \eqref{eqn:ind_tucker} with respect to $\mat{U}_i$ for $i = 1:N$ and $\ten{G}$ can be computed as follows:
\begin{equation}\label{eqn:gradu}
\begin{array}{lll}
\displaystyle \frac{\partial F}{\partial \mat{U}_i} = -(\mat{A}_i^{(t)})^\top \ten{R}_{(i)}^{(t)} \mat{U}^{(\backslash i)} \ten{G}_{(i)}^\top + 2\lambda_i \mat{U}_i \\
\\
\displaystyle \frac{\partial F}{\partial \ten{G}} = - \ten{R}^{(t)} \times_1 (\mat{A}_1^{(t)}\mat{U}_1)^{\top} \times_2 \ldots \times_N (\mat{A}_N^{(t)}\mat{U}_N)^{\top} \\
\qquad \qquad+ \ 2\lambda_g \ten{G},
\end{array}
\end{equation}
where
\begin{equation*}
\label{eqn:res}
\ten{R}^{(t)} = \ten{X}^{(t)} - \ten{G} \times_1 \mat{A}_1^{(t)} \mat{U}_1 \times_2 \ldots \times_N \mat{A}_N^{(t)} \mat{U}_N .
\end{equation*}
By updating the variables using the gradients given in (\ref{eqn:gradu}), we can recover the missing entries in $\ten{X}^{(t)}$ at every time step $t$; however, that is equivalent to performing a static tensor completion at every time step. Therefore, we need an incremental scheme for updating the variables. Let $\mat{U}_i^{(t)}$ and $\ten{G}^{(t)}$ represent the variables at time step $t$; then
\begin{equation}
\begin{array}{ll}
F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N}) =\\
\qquad F(\ten{X}^{(t-1)}, \mathcal{A}^{(t-1)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N})\ \ + \\
\qquad F(\ten{X}^{(\Delta t)}, \mathcal{A}^{(\Delta t)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N}),
\end{array}
\end{equation}
since $\ten{X}^{(t-1)}$ is already recovered at time step $t-1$, the problem is equivalent to using only $F^{(\Delta t)} = F(\ten{X}^{(\Delta t)}, \mathcal{A}^{(\Delta t)}, \ten{G}^{(t-1)}, \{\mat{U}_i^{(t-1)}\}_{i=1:N})$ for updating the variables at time step $t$.
We propose to use the following approach to update the variables at every time step $t$, i.e.,
\begin{equation}\label{eqn:update_u}
\begin{array}{lll}
\mat{U}_i^{(t)} = \mat{U}_i^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t-1)}}, i = 1:N \\
\ten{G}^{(t)} = \ten{G}^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t-1)}},
\end{array}
\end{equation}
where $\gamma$ is the step size for the gradients. $\ten{R}^{(\Delta t)}$, needed for computing the gradients of $F^{(\Delta t)}$, is given by
\begin{equation}\label{eqn:r_delta}
\begin{array}{lll}
\ten{R}^{(\Delta t)} = \ten{X}^{(\Delta t)} - \ten{G}^{(t-1)} \times_1 \mat{A}_1^{(\Delta t)}\mat{U}_1^{(t-1)} \times_2 \ldots \\
\qquad \qquad \times_N \mat{A}_N^{(\Delta t)}\mat{U}_N^{(t-1)}.
\end{array}
\end{equation}
\begin{algorithm}[t]
\small
\caption{Proposed SIITA{} Algorithm}\label{alg:simast}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Return}
\Input{$ \{\ten{X}^{(t)}, \mathcal{A}^{(t)}\}, \lambda_g, \lambda_i, i = 1:N, (r_1, \ldots, r_N), \gamma, K $}
Randomly initialize $\mat{U}_i^{(0)} \in \mathbb{R}^{M_i \times r_i}, i = 1:N$ and $\ten{G}^{(0)} \in \mathbb{R}^{r_1 \times \ldots \times r_N}$ ;\\
\For{t = 1, 2, \ldots}
{
$\mat{U}_i^{(t)_{0}} \coloneqq \mat{U}_i^{(t-1)}, i = 1:N$; \\
$\ten{G}^{(t)_0} \coloneqq \ten{G}^{(t-1)}$;\\
\For{k = 1:K}
{
Compute $\ten{R}^{(\Delta t)}$ from \refeqn{eqn:r_delta} using $\mat{U}_i^{(t)_{k-1}}, i = 1:N$ and $\ten{G}^{(t)_{k-1}}$ ;\\
Compute $\frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t)_{k-1}}}$ for $i = 1:N$ from \refeqn{eqn:gradu}; \\
Update $\mat{U}_i^{(t)_k}$ using $\frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t)_{k-1}}}$ and $\mat{U}_i^{(t)_{k-1}}$ in \refeqn{eqn:update_u} ; \\
Compute $\frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t)_{k-1}}} $ from \refeqn{eqn:gradu}; \\
Update $\ten{G}^{(t)_k}$ using $\ten{G}^{(t)_{k-1}}$ and $\frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t)_{k-1}}} $ in \refeqn{eqn:update_u}; \\
}
$ \mat{U}_i^{(t)} \coloneqq \mat{U}_i^{(t)_{K}}$;\\
$ \ten{G}^{(t)} \coloneqq \ten{G}^{(t)_K}$;\\
}
\Output{$\mat{U}_i^{(t)}, i = 1:N, \ten{G}^{(t)}$.}
\end{algorithm}
\refalg{alg:simast} summarizes the procedure described above. The computational cost of implementing \refalg{alg:simast} depends on the update of the variables (\ref{eqn:update_u}) and the computations in (\ref{eqn:r_delta}). The cost of computing $\ten{R}^{(\Delta t)}$ is $O( \sum_i I_i M_i r_i + |\Omega|r_1 \ldots r_N)$. The cost of performing the updates (\ref{eqn:update_u}) is $O(|\Omega| r_1 \ldots r_N + \sum_i M_i r_i)$. Overall, at every time step, the computational cost of \refalg{alg:simast} is $O(K( \sum_i I_i M_i r_i + |\Omega|r_1 \ldots r_N))$.
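For illustration, the inner loop of \refalg{alg:simast} can be sketched as follows in NumPy, reusing \texttt{mode\_n\_product} and \texttt{reconstruct} from the earlier sketch; dense arrays are used for readability (the actual implementation exploits the sparsity of $\Omega$), and the explicit factors of $2$ coming from the squared Frobenius norm can be absorbed into the step size $\gamma$.
\begin{verbatim}
import numpy as np

def siita_step(X_new, mask_new, A_new, G, U_list, lams, lam_g, gamma, K=1):
    """One incremental SIITA update on the newly arrived block X^(dt).
    Reuses mode_n_product and reconstruct from the previous sketch."""
    N = len(U_list)
    for _ in range(K):
        B = [A @ U for A, U in zip(A_new, U_list)]               # B_i = A_i U_i
        R = mask_new * (X_new - reconstruct(G, A_new, U_list))   # residual R^(dt)
        U_next = []
        for i in range(N):
            W = G                                # G x_j B_j over all j != i
            for j in range(N):
                if j != i:
                    W = mode_n_product(W, B[j], j)
            Ri = np.moveaxis(R, i, 0).reshape(R.shape[i], -1)    # mode-i unfoldings
            Wi = np.moveaxis(W, i, 0).reshape(W.shape[i], -1)
            grad_U = -2 * A_new[i].T @ (Ri @ Wi.T) + 2 * lams[i] * U_list[i]
            U_next.append(U_list[i] - gamma * grad_U)
        grad_G = R                               # R x_1 B_1^T x_2 ... x_N B_N^T
        for i in range(N):
            grad_G = mode_n_product(grad_G, B[i].T, i)
        G = G - gamma * (-2 * grad_G + 2 * lam_g * G)
        U_list = U_next
    return G, U_list
\end{verbatim}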
\subsection*{Extension to the nonnegative case: NN-SIITA{}}
\label{sec:nn_siita}
We now discuss how nonnegative constraints can be incorporated into the decomposition learned by SIITA{}. Nonnegative constraints allow the factors of the tensor to be interpretable.
We denote SIITA{} with nonnegative constraints by NN-SIITA{}. At every time step $t$ in the multi-aspect streaming setting, we seek to learn the following decomposition:
\begin{equation}
\label{eqn:nn_tucker}
\underset{\mat{U}_i \in \mathbb{R}_{+}^{M_i \times r_i}, i = 1:N}{\min_{\ten{G} \in \mathbb{R}_{+}^{r_1 \times \ldots \times r_N}}} F(\ten{X}^{(t)}, \mathcal{A}^{(t)}, \ten{G}, \{\mat{U}_i\}_{i=1:N}),
\end{equation}
where $F(\cdot)$ is as given in \eqref{eqn:F_U_G}.
We employ a projected gradient descent based algorithm for solving the optimization problem in \eqref{eqn:nn_tucker}. We follow the same incremental update scheme discussed in \refalg{alg:simast}; however, we use a projection operator, defined below, for updating the variables. For NN-SIITA{}, \eqref{eqn:update_u} is replaced with
\begin{equation*}\label{eqn:update_u_nn}
\begin{array}{lll}
\mat{U}_i^{(t)} = \Pi_{+}[\mat{U}_i^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \mat{U}_i^{(t-1)}}], i = 1:N \\
\ten{G}^{(t)} = \Pi_{+}[\ten{G}^{(t-1)} - \gamma \frac{\partial F^{(\Delta t)}}{\partial \ten{G}^{(t-1)}}],
\end{array}
\end{equation*}
where $\Pi_{+}$ is the element-wise projection operator defined as
\begin{equation*}
\Pi_{+}[x_i] =
\begin{cases}
x_i, & \text{if}\ x_i > 0\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
The projection operator maps a point back to the feasible region, ensuring that the factor matrices and the core tensor remain nonnegative across iterations.
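In code, the projection is just an element-wise clipping of the updated variables at zero; a minimal NumPy sketch:
\begin{verbatim}
import numpy as np

def project_nonnegative(T):
    """Element-wise projection onto the nonnegative orthant (Pi_+)."""
    return np.maximum(T, 0.0)

# inside the NN-SIITA update step:
# U_list[i] = project_nonnegative(U_list[i] - gamma * grad_U)
# G         = project_nonnegative(G - gamma * grad_G)
\end{verbatim}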
\section{Related Work}
\label{sec:rel}
\begin{table*}[bt]
\centering
\small
\caption{ Summary of different tensor streaming algorithms.\label{tbl:bsline_table}}
\begin{tabular}{p{3cm} p{1.5cm} p{1.5cm}p{1.5cm}p{1.5cm} p{2cm} p{2cm} }
\toprule
Property & TeCPSGD\cite{Mardani2015} &OLSTEC \cite{Kasai2016} & MAST \cite{Song2017} & AirCP \cite{Ge2016} & SIITA{} (this paper) \\
\midrule
Streaming & \checkmark& \checkmark & \checkmark & & \checkmark \\
Multi-Aspect Streaming & & & \checkmark & & \checkmark \\
Side Information & & & & \checkmark & \checkmark \\
Sparse Solution & & & & & \checkmark \\
\bottomrule
\end{tabular}
\end{table*}
{\bf Dynamic Tensor Completion :} \citeauthor{Sun2008} \cite{Sun2006,Sun2008} introduce the concept of dynamic tensor analysis by proposing multiple Higher order SVD based algorithms, namely Dynamic Tensor Analysis (DTA), Streaming Tensor Analysis (STA) and Window-based Tensor Analysis (WTA) for the streaming scenario.
\citeauthor{Nion2009} \cite{Nion2009} propose two adaptive online algorithms for CP decomposition of $3$-order tensors. \citeauthor{Yu2015} \cite{Yu2015} propose an accelerated online algorithm for tucker factorization in streaming scenario, while an accelerated online algorithm for CP decomposition is developed in \cite{Zhou2016}.
A significant amount of research has been carried out on dynamic tensor decomposition, but the problem of dynamic tensor completion is relatively less explored. The work by \citeauthor{Mardani2015} \cite{Mardani2015} can be considered pioneering work in dynamic tensor completion. They propose a streaming tensor completion algorithm based on CP decomposition.
Recent work by \citeauthor{Kasai2016} \cite{Kasai2016} is an accelerated second order Stochastic Gradient Descent (SGD) algorithm for streaming tensor completion based on CP decomposition.
\citeauthor{Fanaee-T2015} \cite{Fanaee-T2015} introduces the problem of multi-aspect streaming tensor analysis by proposing a histogram based algorithm. Recent work by \citeauthor{Song2017} \cite{Song2017} is a more
general framework for multi-aspect streaming tensor completion.
{\bf Tensor Completion with Auxiliary Information :} \citeauthor{Acar2011} \cite{Acar2011} propose a Coupled Matrix Tensor Factorization (CMTF) approach for incorporating additional side information, similar ideas are also explored in
\cite{Beutel2014} for factorization on hadoop and in \cite{Ermis2015a} for link prediction in heterogeneous data. \citeauthor{Narita2011} \cite{Narita2011} propose with-in mode and cross-mode regularization methods for incorporating similarity side information matrices into factorization. Based on similar ideas, \citeauthor{Ge2016} \cite{Ge2016} propose AirCP, a CP-based tensor completion algorithm.
\citeauthor{Welling2001} \cite{Welling2001} propose nonnegative tensor decomposition by incorporating nonnegative constraints into CP decomposition. Nonnegative CP decomposition is explored for applications in computer vision in \cite{Shashua2005}. Algorithms for nonnegative Tucker decomposition are proposed in \cite{Kim2007} and for sparse nonnegative Tucker decomposition in \cite{Morup2008}. However, to the best of our knowledge, nonnegative tensor decomposition algorithms do not exist for dynamic settings, a gap we fill in this paper.
An inductive framework for matrix completion with side information is proposed in \cite{jain2013,Natarajan2014,Si2016}, but it has not been explored for tensor completion to the best of our knowledge. In this paper, we propose an online inductive framework for multi-aspect streaming tensor completion.
\reftbl{tbl:bsline_table} provides details about the differences between our proposed SIITA{} and various baseline tensor completion algorithms.
The general purpose of this paper is to study non-compact quantum spaces in the C*-algebraic framework.
Usually quantum spaces arising in Quantum Group Theory are given by generators and relations.
The (*-)algebra obtained in this way can then be viewed as the coordinate ring of polynomial functions on the quantum space.
For compact quantum spaces, there is a general procedure to assign a unital C*-algebra to the quantum space:
one considers the universal C*-norm defined as the supremum of the operator norms of all bounded *-representation
of the coordinate ring and
takes the closure with respect to this norm. Here, having a compact quantum space is essentially
synonymous to the existence of the universal C*-norm.
The non-compact situation is characterized by the fact that the *-algebra admits unbounded *-representations and that
the universal C*-norm might not exist. For this setting, S.~L.~Woro\-nowicz developed a theory
of C*-algebras generated by unbounded elements \cite{Wo91,Wo}. However, this method is not constructive: the unbounded operators
and the C*-algebra have to be given at the beginning, and one only proves that the unbounded operators actually generate
the C*-algebra.
Since we are more interested in having an explicit C*-algebra at hand than proving technical details, we prefer to construct
a non-commutative C*-algebra
by analogy to the classical C*-al\-ge\-bra of continuous functions vanishing at infinity
on the corresponding locally compact
space. The analogy to the classical case involves concrete Hilbert space representations
of the coordinate ring
and can therefore be done only by a case to case study.
In the present paper, we will do it for a 2-dimensional quantum complex plane;
the 1-dimensional version has already been treated in \cite{CW,So}.
Similar constructions of function algebras on non-compact quantum spaces can be found, for instance,
in \cite{BS, KW, OW, SSV, SSV00}
but none of these papers touches the C*-algebra framework.
Let us briefly outline our construction. First we classify all well-behaved Hilbert space representations
of the coordinate ring $\CCq$. It is important to have knowledge about all possible
representations because it turns out that different representations correspond to different domains
of the quantum complex plane. The next step is to realize these representations on a function space
($\mathcal{L}_2$-space) such that modulus of each generator
(the non-negative self adjoint part in its polar decomposition)
acts as a multiplication operator. Furthermore, the measures are chosen in such a way that
the partial isometries from the polar decompositions are given on the same footing: they act
as multiplicative $q$-shifts on functions. In this manner we obtain very simple commutation relations between
the multiplication operators and the partial isometries.
Then we consider an auxiliary *-algebra of bounded operators generated by continuous functions of
the moduli of the generators (represented by multiplication operators)
and powers of the partial isometries and their adjoints.
For the interpretation as continuous functions on the 2-dimensional quantum complex plane
vanishing at infinity, we require that the continuous functions belong to $C_0([0,\infty){\hspace{-1pt}}\times{\hspace{-1pt}} [0,\infty))$
and that these functions, when evaluated at 0, do not depend on the phases
(the partial isometries from the polar decompositions).
Moreover, in order not to ``miss any points'', we consider some sort of universal representation,
where the involved measures have the largest possible support.
Finally, the C*-algebra
of continuous functions vanishing at infinity is defined by
taking the C*-closure of the auxiliary algebra in the operator norm.
An advantage of our approach is that it allows a geometric interpretation of the different representations.
As usual, a nontrivial 1-dimensional representation corresponds to a classical point,
in our case to the origin of $\mathbb{C}^2_q$. Setting one generator to zero, we get a copy of $\mathbb{C}_q$ inserted into
the quantum space $\mathbb{C}^2_q$. Last but not least, there is a family of faithful representations
that describe a 2-dimensional quantum complex plane, where the copy $\mathbb{C}_q$ from the previous representation
is shrunk to a point. Therefore, restricting oneself (as quite customary)
to the family of faithful representations
will not yield the whole 2-dimensional quantum complex plane.
\section{Preliminaries}
Throughout this paper, $q$ stands for a real number in the interval $(0,1)$.
The coordinate ring $\CCq$ of polynomial functions on the 2-dimensional quantum complex plane
is the *-algebra over $\mathbb{C}$
generated by $z_1$ and $z_2$ satisfying the (overcomplete) relations \cite{KS}
\begin{align}
z_2 z_1 &= q z_1 z_2\,, &
z_1^* z_2^* &= q z_2^* z_1^* \,,\label{R1}\\
z_2 z_1^* &= q z_1^* z_2 \,, &
z_1 z_2^* &= q z_2^* z_1\,, \label{R2}\\
z_2 z_2^* &= q^2 z_2^* z_2\,, &
z_1 z_1^* &= q^2 z_1^* z_1 - (1-q^2) z_2^* z_2\,. \label{R3}
\end{align}
By a slight abuse of notation, we will use in the following sections the same letter to denote
a generator of the coordinate ring $\CCq$ and its representation as a Hilbert space operator.
We adopt the convention that
$\mathbb{N}= \{ 1,2, \ldots \}$ and $\mathbb{N}_0= \{0,1,2, \ldots\}$.
Given an at most countable index set $I$ and a Hilbert space $\hH_0$, consider the orthogonal sum
$\hH=\mathop{\oplus}_{i\in I}\hH_0$. We write $\eta_i$ for the vector in $\hH$ which has the
element $\eta\in\hH_0$ as its $i$-th component and zero otherwise. \label{Hn}
It is understood that $\eta_i=0$ whenever $i\notin I$.
For a subset $A\subset [0,\infty)$, the indicator function $\chi_{A}: [0,\infty)\rightarrow \mathbb{C}$
is defined by
\[\label{chi}
\chi_A{\hspace{-1pt}}(t) := \left\{
\begin{array}{l l}
1, & \quad t\in A\,,\\
0, & \quad t\notin A\,.
\end{array}\right.
\]
For a subset $S$ of a *-algebra, the symbol $\text{*-}\mathrm{alg}(S)$
stands for the *-subalgebra generated by the elements of $S$.
\section{Hilbert space representations of the 2-dimensional quantum complex plane}
In this section, we give a complete description of ``good'' *-representations of $\CCq$.
Here ``good'' means that,
in order to avoid pathological cases, we impose in Definition \ref{wb}
some natural regularity conditions on the unbounded operators.
These representations will be called \emph{well-behaved}, see \cite{S}.
To motivate the regularity conditions, we start with formal algebraic manipulations.
These algebraic relations, together with the regularity conditions of Definition~\ref{wb},
will allow us to classify in Theorem \ref{reps}
all well-behaved representations of $\CCq$.
Let $z_1$ and $z_2$ be densely defined closed operators on a Hilbert space $\hH$ satisfying
the relations \eqref{R1}--\eqref{R3} on a common dense domain.
Set $Q:= z_2^*z_2$. From the relations \eqref{R1}--\eqref{R3} within the algebra $\CCq$, we get
\[ \label{zQ}
z_1 Q= Q z_1, \quad z_1^* Q= Q z_1^*, \quad z_2Q=q^2 Q z_2, \quad z_2^*Q=q^{-2} Q z_2^*\,,
\]
and for any polynomial $p$ in one variable, \eqref{zQ} yields
\[ \label{pQ}
z_1 p(Q)= p(Q) z_1, \quad z_1^* p(Q)= p(Q) z_1^*, \quad z_2 p(Q)=p(q^2 Q) z_2, \quad z_2^*p(Q)=p(q^{-2} Q) z_2^*.
\]
Let us assume that \eqref{pQ} holds for all
bounded Borel measurable functions on $\mathrm{spec}(Q)$,
where $p(Q)=\int p(\lambda) \,\mathrm{d} E(\lambda)$ is defined by the spectral theorem with the unique
projection-valued measure $E$ of $Q$.
Then $\ker(Q)=E(\{0\})\hH$ and $\ker(Q)^\perp=E((0,\infty))\hH$ are invariant under the actions of the generators
of $\CCq$.
On $\ker(Q)$, we have $Q= z_2^*z_2=0$, thus $z_2=z_2^*=0$, and \eqref{R3} becomes
\[ \label{R30}
z_1 z_1^* = q^2 z_1^* z_1. \tag{\ref{R3}'}
\]
On $\ker(Q)^\perp$, the operator $\sqrt{Q}^{{\hspace{1pt}} -1}= \int \frac{1}{\sqrt{\lambda}} \,\mathrm{d} E(\lambda)$
is well-defined. Consider for the moment the abstract element
\[
w:= \sqrt{Q}^{-1} z_1= z_1 \sqrt{Q}^{-1} . \label{w}
\]
Inserting \eqref{w} into
the second relation of \eqref{R3} yields formally
\[ \label{wwQ}
ww^* -q^2 w^*w = -(1-q^2) z_2^* z_2 Q^{-1} = -(1-q^2).
\]
Note that, by \eqref{zQ}, we have $z_1^*z_1\, z_2^*z_2 = z_2^*z_2\, z_1^*z_1$. This relation together with
\eqref{wwQ}, \eqref{pQ}, \eqref{R30}, and
the first equation in \eqref{R3} motivate the following definition of
well-behaved *-representations of $\CCq$.
\begin{defn} \label{wb}
A well-behaved *-representation of $\CCq$ is given by densely defined closed operators $z_1$ and $z_2$
satisfying \eqref{R1}--\eqref{R3} on a common dense domain such that
\begin{enumerate}[(i)]
\item \label{0}
The self-adjoint operators $z_1^*z_1$ and $z_2^*z_2$ strongly commute.
\item \label{i}
$z_2$ is a $q$-normal operator, i.e., it satisfies the operator equation
$$
z_2 z_2^* = q^2 z_2^* z_2.
$$
\item \label{ii}
For all
bounded Borel measurable functions $f$ on $\mathrm{spec}(Q)$, the operator relations
$$
f(Q) z_1\subset z_1 f(Q), \quad f(Q) z_1^*\subset z_1^* f(Q), \quad
f(Q) z_2\subset z_2 f(q^2Q), \quad f(Q) z_2^*\subset z_2^* f(q^2Q)
$$
hold.
\item \label{iii}
On $\ker(Q)$, $z_1$ is a $q$-normal operator, i.e.,
$$
z_1 z_1^* = q^2 z_1^* z_1.
$$
\item On $\ker(Q)^\perp$,
$z_1$ commutes with $\sqrt{Q}^{-1}$ and setting
$w:= \sqrt{Q}^{-1} z_1= z_1 \sqrt{Q}^{-1} $ defines a densely defined closed operator
fulfilling the operator equation
\[ \label{ww}
ww^* = q^2 w^*w -(1-q^2) .
\]
\end{enumerate}
Here, the equality of operators on both sides of the equations includes the equality of their domains.
\end{defn}
The well-behaved representations of
$q$-normal operators have been studied in \cite{CSS} and \cite{CW}.
By \cite[Corollary 2.2]{CW},
any $q$-normal operator $\zeta$ on a Hilbert space ${\mathcal{G}}$
admits the following representation:
\[ \label{zeta}
{\mathcal{G}}= \mathrm{ker}(\zeta) \oplus ( \oplus_{n\in\mathbb{Z}} \,{\mathcal{G}}_0), \qquad
\zeta =0 \ \ \text{on} \ \ \mathrm{ker}(\zeta), \qquad
\zeta\, g_n = q^n Z g_{n-1}\ \ \text{on} \ \ \oplus_{n\in\mathbb{Z}} {\mathcal{G}}_0,
\]
where $Z$ denotes a self-adjoint operator on ${\mathcal{G}}_0$ such that
$\mathrm{spec}(Z)\subset [q,1]$ and $q$ is not an eigenvalue of $Z$.
Furthermore, the representations of operators $w$ satisfying \eqref{ww}
on a Hilbert space ${\mathcal{G}}$ have been classified in \cite[Lemma 2.3]{KW}.
It follows from this lemma that ${\mathcal{G}}$ can be written as a direct sum
${\mathcal{G}}= \oplus_{m\in\mathbb{N}}\, {\mathcal{G}}_0$, and the actions of $w$ and $w^*$ are determined by
\[ \label{wg}
w\, g_m = \sqrt{q^{-2m}-1}\, g_{m+1} , \quad w^* g_m = \sqrt{q^{-2(m-1)}-1}\, g_{m-1} , \quad g\in{\mathcal{G}}_0,\ \ m\in\mathbb{N}.
\]
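As a quick cross-check of \eqref{wg} against \eqref{ww}, note that for all $g\in{\mathcal{G}}_0$ and $m\in\mathbb{N}$,
\begin{align*}
ww^*{\hspace{1pt}} g_m = (q^{-2(m-1)}-1){\hspace{1pt}} g_m, \qquad
q^2 w^*w{\hspace{1pt}} g_m -(1-q^2){\hspace{1pt}} g_m = \big(q^2(q^{-2m}-1)-(1-q^2)\big){\hspace{1pt}} g_m = (q^{-2(m-1)}-1){\hspace{1pt}} g_m,
\end{align*}
so both sides of \eqref{ww} agree; in particular, $w^* g_1=0$.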
Equations \eqref{zeta} and \eqref{wg} are all we need for the classification of the well-behaved repre\-sen\-ta\-tions of $\CCq$.
\begin{thm} \label{reps}
Any well-behaved Hilbert space representation of $\CCq$ is unitarily equivalent to a representation
given by the following formulas:
Let $\hH_{0}$, $\hH_{00}$ and $\mathcal{N}$ be Hilbert spaces, and let
$A$ and $B$ be self-adjoint operators on $\hH_{0}$ and $\hH_{00}$, respectively,
such that their spectrum belongs to $[q,1]$ and $q$ is not an eigenvalue.
Then the Hilbert space $\hH$ of the representation decomposes into the direct sum
$$
\hH= \mathcal{N}\oplus (\oplus_{k\in\mathbb{Z}} \hH_0)\oplus (\oplus_{n\in\mathbb{Z}} \oplus_{m\in \mathbb{N}} \hH_{00}),
$$
and the actions of $z_1$ and $z_2$ are determined by
\begin{align}
&z_1=z_2=0 \ \ \text{on} \ \ \mathcal{N}, \label{N} \\
&z_1\,h_k = q^k A {\hspace{1pt}} h_{k-1}, \ \ z_2 =0 \ \ \text{on} \ \ \oplus_{k\in\mathbb{Z}} \hH_0, \label{K}\\
&z_1\,h_{n,m} = \sqrt{q^{-2m}-1}{\hspace{1pt}} q^n B{\hspace{1pt}} h_{n,m+1}, \ \ z_2\, h_{n,m} = q^n B{\hspace{1pt}} h_{n-1,m} \label{H}
\ \ \text{on} \ \ \oplus_{n\in\mathbb{Z}} \oplus_{m\in \mathbb{N}} \hH_{00}.
\end{align}
A common dense domain is obtained by considering the subspace of those elements of $\hH$ which
have at most a finite number
of non-zero components in the direct sum.
Only the representation \eqref{H} is faithful.
A representation is irreducible if and only if one of the Hilbert spaces $\hH_{0}$, $\hH_{00}$ and $\mathcal{N}$
is isomorphic to $\mathbb{C}$ and the others are zero.
\end{thm}
\begin{proof}
As the sets $\{0\}$ and $(0,\infty)$ are invariant under multiplication with powers of $q$,
it follows from Definition \ref{wb}\eqref{ii} that ${\mathcal{K}} := E(\{0\})\hH$ and
${\mathcal{G}} := E((0,\infty))\hH$ are invariant under the actions of $z_1$ and $z_2$.
Clearly, $\hH= {\mathcal{K}}\oplus {\mathcal{G}}$.
Since ${\mathcal{K}}=\ker(Q) =\ker(z_2^*z_2)$, we have $z_2=0$ on ${\mathcal{K}}$.
By Definition \ref{wb}\eqref{iii}, the restriction of $z_1$ to ${\mathcal{K}}$ is a $q$-normal operator,
therefore its representation is given by \eqref{zeta}.
Setting $\mathcal{N}:=\ker(z_1)$, $\hH_0:= {\mathcal{G}}_0$ and $A:=Z$, we obtain \eqref{N} and \eqref{K} from \eqref{zeta}.
By Definition \ref{wb}\eqref{i} and the definition of ${\mathcal{G}}$, $z_2$ is a $q$-normal operator on
${\mathcal{G}}$ with $\mathrm{ker}(z_2) = \{ 0\}$. Therefore $z_2$ acts on ${\mathcal{G}}=\oplus_{n\in\mathbb{Z}} {\hspace{1pt}} {\mathcal{G}}_0$
by the formulas on the right-hand side of \eqref{zeta}.
Note that
\[ \label{Qg}
Q {\hspace{1pt}} g_n =q^{2n} Z^2 g_n \quad \text{on} \quad {\mathcal{G}}_n:=\{ g_n : g\in {\mathcal{G}}_0\},
\]
with $\mathrm{spec}(q^{2n} Z^2)\subset [q^{2n+2}, q^{2n}]$ and $q^{2n+2}$
is not an eigenvalue.
Considering the disjoint union $(0,\infty)=\cup_{n\in\mathbb{Z}} \,(q^{2n+2}, q^{2n}]$,
one readily sees that ${\mathcal{G}}_n=E((q^{2n+2}, q^{2n}]){\hspace{1pt}} {\mathcal{G}}$.
From Definition \ref{wb}\eqref{ii}, it follows that
$$
E((q^{2n+2}, q^{2n}]) z_1\subset z_1 E((q^{2n+2}, q^{2n}]), \ \;
E((0,\infty){\hspace{-1pt}} \setminus{\hspace{-1pt}} (q^{2n+2}, q^{2n}]) z_1\subset
z_1E((0,\infty){\hspace{-1pt}}\setminus {\hspace{-1pt}} (q^{2n+2}, q^{2n}]),
$$
and the same holds for $z_1$ replaced by $z_1^*$.
Since $ \sqrt{Q}^{-1} $ trivially commutes with $E((q^{2(n+1)}, q^{2n}])$,
we conclude that $w:= \sqrt{Q}^{-1} z_1$ and $w^*$ leave ${\mathcal{G}}_n$ invariant.
On ${\mathcal{G}}_n$, $w$ still satisfies \eqref{ww}, thus its representation is given by \eqref{wg}.
Therefore we can write ${\mathcal{G}}_n = \oplus_{m\in\mathbb{N}}{\hspace{1pt}} \hH_{n0}$ and
\[ \label{whn}
w{\hspace{1pt}} h_{n,m} = \sqrt{q^{-2m}-1}{\hspace{1pt}} h_{n,m+1},
\]
where $h_{n,m}$ belongs to the $m$-th position in the direct sum $\oplus_{m\in\mathbb{N}} \hH_{n0}$.
But ${\mathcal{G}}_n$ is just a copy of ${\mathcal{G}}_0$,
so $\hH_{n0} = \hH_{00}$ for all $n\in\mathbb{Z}$. Equation \eqref{whn} yields
\[
w^*w{\hspace{1pt}} h_{n,m} = (q^{-2m}-1){\hspace{1pt}} h_{n,m} ,
\]
hence $\hH_{nm} := \{ h_{n,m} : h\in \hH_{00}\}$
is the eigenspace for the eigenvalue $q^{-2m}-1$
of the restriction of $w^*w$ to ${\mathcal{G}}_n$.
Definition \ref{wb}\eqref{0} implies that $w^*w$ and $Q$ strongly commute.
Therefore the restrictions of $w^*w$ and $Q$ to ${\mathcal{G}}_n=E((q^{2n+2}, q^{2n}]){\mathcal{G}}$ also
strongly commute. As a consequence, the self-ad\-joint operator $Z$ from \eqref{Qg} leaves
the eigenspaces $\hH_{nm}$ invariant. Denote the restriction of $Z$ to $\hH_{00}$ by $B$.
Since $\hH_{nm}$ is an identical copy of $\hH_{00}$ in the $m$-th position of the direct sum $\oplus_{m\in\mathbb{N}} \hH_{n0}$, we get
\[ \label{ZA}
Z {\hspace{1pt}} h_{n,m} = B{\hspace{1pt}} h_{n,m} \quad \text{for all} \ \ h_{n,m} \in\hH_{nm}.
\]
Moreover, $B$ inherits the spectral properties from $Z$ as required in Theorem \ref{reps}.
Finally, \eqref{zeta} and \eqref{ZA} give
\[ \label{z2}
z_2 {\hspace{1pt}} h_{n,m} = q^n Z {\hspace{1pt}} h_{n-1,m} = q^n B {\hspace{1pt}} h_{n-1,m}, \quad \text{for all} \ \ h_{n,m} \in\hH_{nm},
\]
and from \eqref{Qg} and \eqref{whn}, we get
$$
z_1 {\hspace{1pt}} h_{n,m} = \sqrt{Q}{\hspace{1pt}} w {\hspace{1pt}} h_{n,m}
=\sqrt{q^{-2m}-1} \sqrt{q^{2n} Z^2}{\hspace{1pt}} h_{n,m+1} = \sqrt{q^{-2m}-1}{\hspace{1pt}} q^n B h_{n,m+1}
$$
for all $h_{n,m} \in\hH_{nm}$. This proves \eqref{H}.
That the representation \eqref{H} is faithful follows from the fact that the representations of
$z_2$ and $w$ are faithful, see \cite{CSS} and \cite{KW}, respectively.
The statement about irreducible representations is obvious since writing any of
the Hilbert spaces $\hH_{0}$, $\hH_{00}$ and $\mathcal{N}$ as an orthogonal sum of two non-zero
subspaces will result in an orthogonal sum of non-trivial representations.
\end{proof}
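As an illustration of Theorem \ref{reps}, the second relation in \eqref{R3} can be verified directly on the faithful representation \eqref{H}: from \eqref{H} and the corresponding adjoint actions one finds, for all $h_{n,m}\in\hH_{nm}$,
\begin{align*}
z_1 z_1^*{\hspace{1pt}} h_{n,m} = (q^{-2(m-1)}-1){\hspace{1pt}} q^{2n} B^2 h_{n,m}, \qquad
z_1^* z_1{\hspace{1pt}} h_{n,m} = (q^{-2m}-1){\hspace{1pt}} q^{2n} B^2 h_{n,m}, \qquad
z_2^* z_2{\hspace{1pt}} h_{n,m} = q^{2n} B^2 h_{n,m},
\end{align*}
and therefore $z_1 z_1^{*} = q^2 z_1^{*} z_1 -(1-q^2){\hspace{1pt}} z_2^{*} z_2$ on $\oplus_{n\in\mathbb{Z}} \oplus_{m\in \mathbb{N}} \hH_{00}$.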
\section{Hilbert space representations on function spaces}
Note that the decomposition of $\hH$ in Theorem \ref{reps} is determined by the spectral properties of the self-adjoint
operators $Q$ and $w^*w$.
As is well known \cite[Theorem VII.3]{RS}, each self-adjoint operator $T$ on a separable Hilbert space is
unitarily equivalent to a direct sum of multiplication operators on $\mathcal{L}_2(\mathrm{spec}(T), \mu)$.
We will use this fact to realize the representations of Theorem \ref{reps}
on $\mathcal{L}_2$-spaces which will be the basis for studying function algebras on
the 2-dimensional quantum complex plane in the next section.
The direct sum of Hilbert spaces $\oplus_{n\in\mathbb{Z}} \oplus_{m\in \mathbb{N}} \hH_{00}$ in Theorem \ref{reps}
is isomorphic to the tensor product $ \ell_2(\N) \otimes (\oplus_{n\in\mathbb{Z}} \hH_{00}) $.
Let $\zeta$ be a $q$-normal operator acting on $\oplus_{n\in\mathbb{Z}} \hH_{00}$ by the formulas on the right hand side of \eqref{zeta},
and let $\omega$ act on $\ell_2(\N)$ by the formulas in \eqref{wg} with ${\mathcal{G}}_0=\mathbb{C}$.
Then $z_2$ from \eqref{z2} and $w$ from \eqref{whn} can be written as $z_2= {\mathrm{id}} \otimes \zeta$ and $w=\omega\otimes{\mathrm{id}}$,
respectively.
It has been shown in \cite[Theorem 1]{CSS} that any $q$-normal operator $\zeta$ is unitarily
equivalent to a direct sum of operators of the following form:
There exists a $q$-invariant Borel measure
$\mu$ on $[0,\infty)$
such that $\zeta$ and $\zeta^*$ act
on $\hH= \mathcal{L}_2([0,\infty), \mu)$ by
\[ \label{L2rep}
\zeta {\hspace{1pt}} f(t)=q{\hspace{1pt}} t f(qt),\ \ \zeta^*f(t)=tf(q^{-1}t),\ \ f{\hspace{-1pt}} \in{\hspace{-1pt}}\mathrm{dom}(\zeta){\hspace{-1pt}}:={\hspace{-1pt}}
\{ h{\hspace{-1pt}} \in{\hspace{-1pt}} \mathcal{L}_2([0,\infty), \mu): \mbox{$\int$}{\hspace{1pt}} t^2{\hspace{1pt}} |h(t)|^2 \mathrm{d}\mu(t){\hspace{-1pt}}<{\hspace{-1pt}} \infty\}.
\]
Here, the $q$-invariance of the measure means that
$\mu(qS) = \mu(S)$ for all Borel subsets $S$ of $[0,\infty)$.
Note that $\ker(\zeta)=\{0\}$ if and only if $\mu(\{0\}) =0$.
Therefore, in order to obtain a representation of the form \eqref{H},
we have to assume that $\mu(\{0\}) =0$.
To turn $\ell_2(\N)$ into an $\mathcal{L}_2$-space, we consider the operator
$y:=\sqrt{\omega^*\omega +1}$ on $\ell_2(\N)$. Denoting by $\{ e_n : n\in\mathbb{N}\}$ the standard basis of $\ell_2(\N)$, we have
\[ \label{y}
y{\hspace{1pt}} e_n = q^{-n}{\hspace{1pt}} e_n,\quad n\in\mathbb{N}.
\]
Since the set of eigenvalues of $y$ is discrete, $y$ can be realized as a multiplication operator
on $\mathcal{L}_2(\mathrm{spec}(y),\sigma) \cong \ell_2(\N)$
by choosing the counting measure $\sigma(\{q^{-n}\}) = 1$
on $\mathrm{spec}(y)$. Extending $\sigma$ to a Borel measure on $[0,\infty)$ by setting $\sigma([0,\infty)\setminus \mathrm{spec}(y)) :=0$,
we get
$y{\hspace{1pt}} g(s) = s{\hspace{1pt}} g(s)$.
The set
$$
\{ e_n:=\chi_{\{q^{-n}\}}(s): n\in\mathbb{N}\}
$$
is an orthonormal basis of $\mathcal{L}_2(\mathrm{spec}(y),\sigma)$,
where $\chi_{\{q^{-n}\}}$ denotes the indicator function \eqref{chi}.
Note that
$$
\chi_{\{q^{-n}\}}{\hspace{-1pt}}(qs)= \chi_{\{q^{-(n+1)}\}}{\hspace{-1pt}}(s)=e_{n+1}\ \ \text{and} \ \
\sqrt{(q{\hspace{1pt}} s)^{2} -1}{\hspace{1pt}} \chi_{\{q^{-(n+1)}\}}{\hspace{-1pt}}(s) = \sqrt{ q^{-2n} -1}{\hspace{1pt}} \chi_{\{q^{-(n+1)}\}}{\hspace{-1pt}}(s),
$$
where we used $f(t){\hspace{1pt}} \chi_{\{t_0\}}{\hspace{-1pt}}(t) =f(t_0){\hspace{1pt}} \chi_{\{t_0\}}{\hspace{-1pt}}(t)$ in the second equation. Hence
\[ \label{wy}
\omega {\hspace{1pt}} g(s) = \sqrt{(q{\hspace{1pt}} s)^{2} -1}\, g(q{\hspace{1pt}} s) \quad \text{and} \quad
\omega^* g(s) = \sqrt{s^{2} -1}\, g(q^{-1}{\hspace{1pt}} s)
\]
for $g\in\mathrm{dom}(\omega)=\mathrm{dom}(\omega^*)
:=\{ h\in \mathcal{L}_2([0,\infty),\sigma) {\hspace{1pt}}:{\hspace{1pt}} \mbox{$\int$}{\hspace{1pt}} s^{2}|h(s)|^2{\hspace{1pt}} \mathrm{d}\sigma(s) < \infty\}$.
In particular,
\[
\omega^*e_1= \sqrt{s^{2} -1}\, \chi_{\{q^{-1}\}}{\hspace{-1pt}}(q^{-1}{\hspace{1pt}} s) = \sqrt{1^2 -1}\, \chi_{\{q^{-1}\}}{\hspace{-1pt}}(q^{-1}{\hspace{1pt}} s) =0,
\]
as required. Also, although $||\chi_{\{1\}}{\hspace{-1pt}}(s)||=0$ and $\chi_{\{1\}}{\hspace{-1pt}}(qs)= \chi_{\{q^{-1}\}}{\hspace{-1pt}}(s) =e_1$,
we have
$$
\sqrt{(q{\hspace{1pt}} s)^{2} -1}\, \chi_{\{1\}}{\hspace{-1pt}}(qs) =\sqrt{(q{\hspace{1pt}} q^{-1})^{2} -1}\, \chi_{\{1\}}{\hspace{-1pt}}(qs) =0,
$$
so that \eqref{wy} remains consistent.
Now,
under the isomorphism
$\mathcal{L}_2( [0,\infty),\sigma)\otimes \mathcal{L}_2([0,\infty),\mu) \cong
\mathcal{L}_2( [0,\infty)\! \times\! [0,\infty), \sigma\otimes\mu )$,
we obtain from \eqref{L2rep} and \eqref{wy} the following representation of
$z_1 = \sqrt{Q}{\hspace{1pt}} w = \omega\otimes \sqrt{\zeta^*\zeta}$ and $z_2 = {\mathrm{id}}\otimes \zeta$,
\[ \label{zL2}
z_1{\hspace{1pt}} h(s,t) = \sqrt{(qs)^{2}-1}{\hspace{1pt}} t{\hspace{1pt}} h(q s,t), \qquad z_2{\hspace{1pt}} g(s,t)=q {\hspace{1pt}} t{\hspace{1pt}} g(s, q{\hspace{1pt}} t),
\]
where $h\in \mathrm{dom}(\omega) \otimes_{\mathrm{alg}} \mathrm{dom}(\zeta) $ and
$g\in \mathcal{L}_2( [0,\infty),\sigma)\otimes_{\mathrm{alg}}\mathrm{dom}(\zeta)$.
To sum up, we have shown that the representations from \eqref{H}
are unitarily equivalent to a direct sum of representations
of the type described in \eqref{zL2}.
Recall that $\mathcal{N}\oplus (\oplus_{k\in\mathbb{Z}} \hH_0)$ in Theorem \ref{reps}
corresponds to the kernel of the $q$-normal operator $z_2$,
and that a $q$-normal operator in the representation \eqref{L2rep}
has a trivial kernel if and only if $\mu(\{0\})=0$.
Since $\mu$ from the last paragraph was assumed to satisfy $\mu(\{0\})=0$,
we will now add a point measure $\delta_0$ centred at $0$ to it.
By unitary equivalence, we may assume that $\delta_0(\{0\})=1$.
Then the representation of $z_1$ on $\oplus_{k\in\mathbb{Z}} \hH_0$ is again
unitarily equivalent to a direct sum of representations of the type \eqref{L2rep}.
To realize these representations on our $\mathcal{L}_2$-space, we choose a
$q$-invariant measure on $[0,\infty)$, say $\nu$, assume again $\nu(\{0\})=0$,
take the product measure $\nu \otimes \delta_0$,
and add it to $\sigma\otimes \mu$.
Then
$$
\mathcal{L}_2( [0,\infty)\! \times\! [0,\infty), \sigma{\hspace{-1pt}} \otimes{\hspace{-1pt}}\mu {\hspace{-1pt}}+{\hspace{-1pt}} \nu {\hspace{-1pt}} \otimes{\hspace{-1pt}} \delta_0 )
\cong \mathcal{L}_2( [0,\infty)\! \times\! [0,\infty), \sigma {\hspace{-1pt}} \otimes{\hspace{-1pt}}\mu )\oplus \mathcal{L}_2( [0,\infty)\! \times\! [0,\infty), \nu{\hspace{-1pt}} \otimes{\hspace{-1pt}}\delta_0 ),
$$
and on $\mathcal{L}_2( [0,\infty)\! \times\! [0,\infty),\nu{\hspace{-1pt}} \otimes{\hspace{-1pt}} \delta_0 )$, we have the representation
\[ \label{zL20}
z_1{\hspace{1pt}} h(s,t) =q{\hspace{1pt}} s{\hspace{1pt}} h(q s, t) =q{\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t){\hspace{1pt}} s{\hspace{1pt}} h(q{\hspace{1pt}} s,t), \quad z_2{\hspace{1pt}} g(s,t)=q{\hspace{1pt}} t{\hspace{1pt}} g(s,q{\hspace{1pt}} t) =0
\]
for all $h$
such that $\int s^2 |h(0,s)|^2 {\hspace{1pt}} \mathrm{d}\nu(s)<\infty$
and for all $g$.
Here, for functions depending on the second variable $t$, we used the fact that $\mathrm{supp}(\delta_0)=\{0\}$.
Again, by \cite[Theorem 1]{CSS} and the same argumentation as above,
the representations from \eqref{K} are unitarily equivalent to
a direct sum of representations of the type described in \eqref{zL20}.
Finally, to obtain a non-trivial component $\mathcal{N}= \ker{z_1}\cap \ker{z_2}$, we add to
$\sigma{\hspace{-1pt}} \otimes{\hspace{-1pt}}\mu {\hspace{-1pt}}+{\hspace{-1pt}} \nu{\hspace{-1pt}} \otimes{\hspace{-1pt}}\delta_0 $
the point measure $\epsilon {\hspace{1pt}} \delta_0\otimes \delta_0$, where $\epsilon=0$ or $\epsilon=1$
depending on whether $\mathcal{N}=\{0\}$ or $\mathcal{N}\neq \{0\}$. Summarizing, we have proven the following theorem:
\begin{thm} \label{teo2}
The Hilbert space representations of $\CCq$ from Theorem \ref{reps} are
unitarily equivalent to a direct sum of representations of the following type:
Let $\mu$ and $\nu$ be $q$-invariant Borel measures on $[0,\infty)$
such that $\mu(\{0\})=\nu(\{0\})=0$.
Denote by
$\delta_0$ the Dirac measure centred at $0$, and
define a
Borel measure $\sigma$ on
$[0,\infty)$ by setting $\sigma(\{q^{-n}\}): = 1$ for all $n\in \mathbb{N}$ and
$\sigma\big([0,\infty) \setminus \{q^{-n}:n\in\mathbb{N}\}\big):=0$.
For $\epsilon \in\{0,1\}$,
consider the Hilbert space
\[ \label{HL2}
\hH:= \mathcal{L}_2\big( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}} \sigma{\hspace{-1pt}} \otimes{\hspace{-1pt}}\mu{\hspace{-1pt}}+{\hspace{-1pt}}\nu{\hspace{-1pt}} \otimes{\hspace{-1pt}} \delta_0 + \epsilon{\hspace{1pt}} \delta_0\otimes \delta_0 \big),
\]
and set
$$
\mathrm{dom}(z_1):= \{h \in \hH \,:\, s{\hspace{1pt}} t{\hspace{1pt}} h \in\hH \ \text{and} \ s{\hspace{1pt}} \chi_{\{0\}}(t){\hspace{1pt}} h \in\hH\} , \quad
\mathrm{dom}(z_2):= \{g \in \hH \,:\, t{\hspace{1pt}} g\in\hH\} \,,
$$
where $\chi_{\{0\}}$ denotes the indicator function from \eqref{chi}.
For $h\in \mathrm{dom}(z_1)$ and $g\in\mathrm{dom}(z_2)$, the ac\-tions of the generators of $\CCq$
are given by
\begin{align} \label{multrep}
z_1{\hspace{1pt}} h(s,t) &= \sqrt{(qs)^{2}-1}{\hspace{1pt}} t{\hspace{1pt}} h(q s,t) + q{\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t){\hspace{1pt}} s{\hspace{1pt}} h(q s,t), &
z_2{\hspace{1pt}} g(s,t)&= q{\hspace{1pt}} t{\hspace{1pt}} g(s,q{\hspace{1pt}} t) , \\
z_1^*{\hspace{1pt}} h(s,t) &= \sqrt{s^{2}-1}{\hspace{1pt}} t{\hspace{1pt}} h(q^{-1} s,t) + \chi_{\{0\}}{\hspace{-1pt}}(t){\hspace{1pt}} s{\hspace{1pt}} h(q^{-1} s,t), &
z_2^*{\hspace{1pt}} g(s,t)&= t{\hspace{1pt}} g(s,q^{-1} t). \label{multrep*}
\end{align}
\end{thm}
Note that
\begin{align} \nonumber
\hH&=
\mathcal{L}_2\big( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}} \epsilon{\hspace{1pt}} \delta_0\otimes \delta_0 \big)\oplus
\mathcal{L}_2\big( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}}\nu {\hspace{-1pt}} \otimes{\hspace{-1pt}} \delta_0\big) \oplus
\mathcal{L}_2\big( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}} \sigma {\hspace{-1pt}} \otimes{\hspace{-1pt}}\mu\big)\\
&=
\mathcal{L}_2\big( \{0\}\!\times\! \{0\}{\hspace{1pt}} , {\hspace{1pt}} \epsilon{\hspace{1pt}} \delta_0\otimes \delta_0 \big)\oplus
\mathcal{L}_2\big( [0,\infty)\! \times\!\{0\} {\hspace{1pt}} , {\hspace{1pt}} \nu {\hspace{-1pt}} \otimes{\hspace{-1pt}}\delta_0 \big) \oplus
\mathcal{L}_2\big( \{q^{-n}:n{\hspace{-1pt}}\in{\hspace{-1pt}}\mathbb{N}\} \! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}} \sigma {\hspace{-1pt}} \otimes{\hspace{-1pt}}\mu\big) \label{ortoH}
\end{align}
and that the restriction of the representation \eqref{multrep} to one of the orthogonal components
cor\-res\-ponds to one of the representations from \eqref{N}--\eqref{H}. Of course, we could have formulated
Theorem \ref{teo2} for each of the orthogonal subspaces separately.
The reason why we prefer to work with a single Hilbert space on the domain
$[0,\infty)\! \times\! [0,\infty)$
will become clear in the next section.
\section{C*-algebra of continuous functions vanishing at infinity}
The aim of this section is to define a C*-algebra which can be viewed as the algebra of continuous functions
on the 2-dimensional quantum complex plane
vanishing at infinity. The definition will be motivated by a similar construction for the
1-dimensional quantum complex plane \cite{CW}.
As a point of departure, we first look for an auxiliary *-algebra, where the commutation relations are
considerably simpler.
For the convenience of the reader, we recall the construction of the C*-algebra $C_0(\mathbb{C}_q)$
of continuous functions vanishing at infinity on the 1-dimensional quantum complex plane \cite{CW}.
Given a representation of the type \eqref{L2rep}, consider the following *-subalgebra of $\mathrm{B}( \mathcal{L}_2([0,\infty), \mu))${\hspace{1pt}}:
\[ \label{Cq}
\text{*-}\mathrm{alg}\{C_0(\mathrm{spec}(|\zeta|)),U\} := \Big\{ \sum_{\text{finite}} f_k(|\zeta|) {\hspace{1pt}} U^k\,:\,
k{\hspace{-1pt}} \in{\hspace{-1pt}} \mathbb{Z},\ f_k{\hspace{-1pt}}\in{\hspace{-1pt}} C_0( \mathrm{spec}(|\zeta|)),\ f_k(0){\hspace{-1pt}}={\hspace{-1pt}} 0 \ \text{if} \ k{\hspace{-1pt}}\neq{\hspace{-1pt}} 0\Big\},
\]
where $\mu(\{0\})=0$ and $U$ denotes the unitary operator from the polar decomposition $\zeta =U{\hspace{1pt}} |\zeta|$.
For all bounded continuous functions $f$ on $\mathrm{spec}(|\zeta|)$, the operators $f(|\zeta|)$ and $U$ satisfy the commutation relation
$$
U{\hspace{1pt}} f(|\zeta|) = f(q{\hspace{1pt}} |\zeta|) {\hspace{1pt}} U.
$$
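In the representation \eqref{L2rep} with $\mu(\{0\})=0$, this relation is verified immediately: there $|\zeta|$ acts as multiplication by $t$ and the phase acts as $(U h)(t)=h(qt)$, which is unitary because $\mu$ is $q$-invariant, so that
$$
\big(U f(|\zeta|){\hspace{1pt}} h\big)(t) = f(qt){\hspace{1pt}} h(qt) = \big(f(q{\hspace{1pt}} |\zeta|){\hspace{1pt}} U h\big)(t).
$$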
In \cite{CW},
a representation of the type \eqref{L2rep} of a $q$-normal operator $Z=U{\hspace{1pt}} |Z|$ was said to be
\emph{universal} if $\mathrm{spec}(|Z|)=[0,\infty)$, or equivalently if $\mathrm{supp}(\mu)=[0,\infty)$.
It has the universal property that
\[
\text{*-}\mathrm{alg}\{C_0(\mathrm{spec}(|Z|)),U\} \ni \sum_{\text{finite}} f_k(|Z|) {\hspace{1pt}} U^k \
\longmapsto \ \sum_{\text{finite}} f_k(|\zeta|) {\hspace{1pt}} U^k \in \text{*-}\mathrm{alg}\{C_0(\mathrm{spec}(|\zeta|)),U\}
\]
always yields a well-defined surjective *-homomorphism.
Although the exact definition in \cite{CW} is slightly abstract, \cite[Theorem 3.3]{CW} states that
$C_0(\mathbb{C}_q)$ is isomor\-phic to the norm closure of $\text{*-}\mathrm{alg}\{C_0(\mathrm{spec}(|Z|)),U\}$
in $\mathrm{B}( \mathcal{L}_2([0,\infty), \mu))$.
Motivated by the previous description, we call a representation from Theorem \ref{teo2}
\emph{universal} if $\epsilon=1$ and
$\mathrm{supp}(\mu)=\mathrm{supp}(\nu)=[0,\infty)$. Such $q$-invariant measures can be obtained, for instance,
by taking the Lebesgue measure $\lambda$ on $(q,1]$ and setting
$$
\mu(M) = \sum_{k\in\mathbb{Z}} \lambda ( q^{-k}(M\cap (q^{k+1},q^k])) .
$$
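This measure is indeed $q$-invariant: for any Borel set $M\subset [0,\infty)$,
$$
\mu(q M) = \sum_{k\in\mathbb{Z}} \lambda\big( q^{-k}(qM\cap (q^{k+1},q^k])\big)
= \sum_{k\in\mathbb{Z}} \lambda\big( q^{-(k-1)}(M\cap (q^{k},q^{k-1}])\big) = \mu(M),
$$
where we used $qM\cap (q^{k+1},q^k] = q\big(M\cap (q^{k},q^{k-1}]\big)$ and relabelled the summation index; clearly $\mu(\{0\})=0$ and $\mathrm{supp}(\mu)=[0,\infty)$.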
Given a universal representation, consider the polar decompositions
$z_1{\hspace{-1pt}}={\hspace{-1pt}} U{\hspace{1pt}} |z_1|$ and $z_2{\hspace{-1pt}} ={\hspace{-1pt}} V{\hspace{1pt}} |z_2|$.
For all $h\in \mathrm{dom}(|z_1|)=\mathrm{dom}(z_1)$ and $g\in \mathrm{dom}(|z_2|)=\mathrm{dom}(z_2)$, \eqref{multrep}
and \eqref{multrep*} imply
\[ \label{mod}
|z_1|{\hspace{1pt}} h(s,t) = \big(\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) \sqrt{s^{2}-1}{\hspace{1pt}} t + s {\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t)\big) h(s,t), \qquad
|z_2|{\hspace{1pt}} g(s,t)= {\hspace{1pt}} t{\hspace{1pt}} g(s,t).
\]
Since
\begin{align*}
&\mathrm{ran}(|z_1|)=\ker(|z_1|)^\perp= \mathrm{ran}\big(\chi_{(0,\infty)}{\hspace{-1pt}}(t)\,\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) + \chi_{\{0\}}{\hspace{-1pt}}(t)\, \chi_{(0,\infty)}(s)\big), \\
&\mathrm{ran}(|z_2|)=\ker(|z_2|)^\perp= \mathrm{ran}(\chi_{(0,\infty)}{\hspace{-1pt}}(t)),
\end{align*}
it follows from \eqref{multrep} that
\begin{align} \label{U}
U h(s, t) &=\big( \chi_{(0,\infty)}{\hspace{-1pt}}(t)\,\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(qs) + \chi_{\{0\}}{\hspace{-1pt}}(t)\, \chi_{(0,\infty)}(s)\big)h(qs, t) , \\
V h(s, t) &= \chi_{(0,\infty)}{\hspace{-1pt}}(t) h(s, qt), \label{V}
\end{align}
for all $h\in \hH$,
where we used $\chi_{(0,\infty)}{\hspace{-1pt}}(q{\hspace{1pt}} r)=\chi_{(0,\infty)}{\hspace{-1pt}}(r)$.
Their adjoints act on $\hH$ by
\begin{align} \label{U*}
U^* h(s, t) &=\big( \chi_{(0,\infty)}{\hspace{-1pt}}(t)\,\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) + \chi_{\{0\}}{\hspace{-1pt}}(t)\, \chi_{(0,\infty)}(s)\big)h(q^{-1} s, t) , \\
V^* h(s, t) &= \chi_{(0,\infty)}{\hspace{-1pt}}(t) h(s, q^{-1}t) . \label{V*}
\end{align}
From \eqref{U}--\eqref{V*}, we get
\begin{align} \label{UU*}
U^* U &=\chi_{(0,\infty)}{\hspace{-1pt}}(t)\,\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) + \chi_{\{0\}}{\hspace{-1pt}}(t)\, \chi_{(0,\infty)}(s),\\
U U ^* &=\chi_{(0,\infty)}{\hspace{-1pt}}(t)\,\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(qs) + \chi_{\{0\}}{\hspace{-1pt}}(t)\, \chi_{(0,\infty)}(s), \label{U*U} \\
V^* V &= \chi_{(0,\infty)}{\hspace{-1pt}}(t) =V V^*. \label{VV*}
\end{align}
Using again $\chi_{(0,\infty)}{\hspace{-1pt}}(q^{\pm 1}t)=\chi_{(0,\infty)}{\hspace{-1pt}}(t) $ and $\chi_{\{0\}}{\hspace{-1pt}}(q^{\pm 1} t)=\chi_{\{0\}}{\hspace{-1pt}}(t)$,
one easily sees that
\[
UV=VU, \quad UV^*=V^* U, \quad U^*V=VU^*,\quad U^*V^*=V^*U^*.
\]
Considering Borel measurable functions $f$ on $[0,\infty)\! \times\! [0,\infty)$ as multiplication operators by
$$
f {\hspace{1pt}} h(s,t):= f(s,t)\, h(s,t),
$$
we obtain from \eqref{U}--\eqref{V*} the following simple commutation relations:
\begin{align} \label{fUU}
U{\hspace{1pt}} f(s,t)&= f(qs, t){\hspace{1pt}} U, & U^*{\hspace{1pt}} f(s,t) &= f(q^{-1}s, t){\hspace{1pt}} U^*, \\
V{\hspace{1pt}} f(s,t)&= f(s, qt){\hspace{1pt}} V, & V^*{\hspace{1pt}} f(s,t) &=f(s, q^{-1} t) {\hspace{1pt}} V . \label{fVV}
\end{align}
In fact, the reason for choosing $|\zeta|$ from \eqref{zeta} and $y$ from \eqref{y} as multiplication operators was
to obtain such simple commutation relations between functions and the phases from the polar decom\-positions of
the generators of $\CCq$. As a consequence,
\[ \label{fun}
\mathrm{Fun}(\mathbb{C}_q^2) := \Big\{ \sum_{\text{finite}} f_{nm}(s,t){\hspace{1pt}} U^{\#n}V^{\#m}\,:\, n,m\in\mathbb{Z},\ f_{nm}\in \mathcal{L}_{\infty}([0,\infty)\times [0,\infty))\Big\}
\]
is a *-subalgebra of $\mathrm{B}(\hH)$, where
\[\label{hash}
U^{\# n} := \left\{
\begin{array}{l l}
U^n, & \quad n\geq 0\,,\\
U^{*n}, & \quad n< 0\,,
\end{array}\right.
\qquad
V^{\# n} := \left\{
\begin{array}{l l}
V^n, & \quad n\geq 0\,,\\
V^{*n}, & \quad n< 0\,,
\end{array}\right.
\qquad
n\in\mathbb{Z}.
\]
Moreover, by \eqref{mod} and the previous commutation relations, there exists for all $k,l,m,n\in\mathbb{N}_{0}$
a Borel measurable function
$p_{klmn}$ on $[0,\infty)\times [0,\infty)$
such that
\[ \label{p}
z_1^k z_1^{*l} z_2^m z_2^{*n} = p_{klmn}(s,t){\hspace{1pt}} U^{\# k-l}V^{\# m-n} \,.
\]
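For instance, for $k=l=0$ and $m=n=1$, the formulas \eqref{mod}, \eqref{fVV} and \eqref{VV*} give
$$
z_2 z_2^* = V{\hspace{1pt}} t^2{\hspace{1pt}} V^* = (q{\hspace{1pt}} t)^2{\hspace{1pt}} V V^* = q^2 t^2\, \chi_{(0,\infty)}{\hspace{-1pt}}(t) = q^2{\hspace{1pt}} t^2,
$$
so $p_{0011}(s,t)=q^2 t^2$ and no phase factors occur, in accordance with the relation $z_2 z_2^* = q^2 z_2^* z_2$.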
Equations \eqref{fun} and \eqref{p} are the motivation for the construction of the C*-algebra of
continuous functions on $\mathbb{C}^2_q$ vanishing at infinity.
Before treating the quantum case, let us briefly review the classical C*-algebra $C_0(\mathbb{C}^2)$.
In analogy to the polar decomposition of the generators, write
$z_1= \mathrm{e}^{\mathrm{i} \varphi} |z_1|$ and $z_2= \mathrm{e}^{\mathrm{i} \theta} |z_2|$. Let $n,m\in\mathbb{Z}$.
Given a function $f_{nm}\in C_0([0,\infty)\! \times\! [0,\infty))$, the assignment
$$
\mathbb{C}^2 \ni (\mathrm{e}^{\mathrm{i} \varphi} |z_1| ,\mathrm{e}^{\mathrm{i} \theta} |z_2| )\
\longmapsto\ f_{nm}(|z_1|, |z_2|){\hspace{1pt}} \mathrm{e}^{\mathrm{i} \varphi n}{\hspace{1pt}} \mathrm{e}^{\mathrm{i} \theta m} \in\mathbb{C}
$$
defines a function in $C_0(\mathbb{C}^2)$ if and only if
\begin{enumerate}[(a)]
\item
$f_{nm}(0, |z_2|){\hspace{1pt}} \mathrm{e}^{\mathrm{i} \varphi n}{\hspace{1pt}} \mathrm{e}^{\mathrm{i} \theta m}$
does not depend on $\varphi $
\ \,$\Longrightarrow$ \ $f_{nm}(0, |z_2|)=0$ for $n\neq 0$,
\item
$f_{nm}(|z_1|,0){\hspace{1pt}} \mathrm{e}^{\mathrm{i} \varphi n}{\hspace{1pt}} \mathrm{e}^{\mathrm{i} \theta m}$
does not depend on $\theta $
\ \;$\Longrightarrow$ \ \,$f_{nm}(|z_1|, 0)=0$ for $m\neq 0$.
\end{enumerate}
Moreover, the following *-subalgebra of $C_0(\mathbb{C}^2)$,
\begin{align} \label{C0C2}
\mathcal{C}_0(\mathbb{C}^2):= \Big\{ \sum_{\text{finite}} f_{nm}(|z_1|,|z_2|) {\hspace{1pt}} \mathrm{e}^{\mathrm{i} \varphi n}{\hspace{1pt}} \mathrm{e}^{\mathrm{i} \theta m}&\,:\, \
f_{nm} \in C_0([0,\infty)\! \times\! [0,\infty)), \ \,n,m\in\mathbb{Z}, \\[-8pt]
&f_{nm}(0,|z_2|)=0 \ \;\text{if} \ \;n\neq 0, \ \ f_{nm}(|z_1|,0)=0 \ \;\text{if} \ \;m\neq 0 \, \Big\}, \nonumber
\end{align}
separates the points of $\mathbb{C}^2$. By the Stone--Weierstra{\ss} theorem, its norm closure yields $C_0(\mathbb{C}^2)$.
To pass from the classical to the quantum case, we start with a universal representation from Theorem \ref{teo2}.
For all bounded continuous functions $g$ on $[0,\infty)$, the operators $g(|z_1|), \, g(|z_2|) \in\mathrm{B}(\hH)$ are well-defined
by the spectral theorem, and \eqref{mod} shows that
$$
g(|z_1|)\, h(s,t) = g\big({\hspace{1pt}} \chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s)\sqrt{s^{2}-1}{\hspace{1pt}} t + s {\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t){\hspace{1pt}} \big)\,h(s,t),\qquad
g(|z_2|){\hspace{1pt}} h(s,t) = g(t) {\hspace{1pt}} h(s,t) .
$$
By the universality of the representation, we have $\|g(|z_i|)\| = \|g\|_\infty$, $i=1,2$.
In particular, the norm does not depend on the chosen measures of a universal representation.
Similarly, for $f\in C_0([0,\infty)\! \times\! [0,\infty))$, the formula
\[ \label{fzz}
f(|z_1|,|z_2|)\, h(s,t) := f\big({\hspace{1pt}} \chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s)\sqrt{s^{2}-1}{\hspace{1pt}} t + s {\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t) {\hspace{1pt}},t{\hspace{1pt}}\big){\hspace{1pt}} h(s,t),\qquad h\in\hH,
\]
yields a well-defined operator in $\mathrm{B}(\hH)$ with $\|f(|z_1|,|z_2|)\| = \|f\|_\infty$.
These observations lead to the following definition of $C_0(\mathbb{C}_q^2)$.
\begin{defn}
Given a universal representation of $\CCq$ from Theorem \ref{teo2},
let $z_1 = U{\hspace{1pt}} |z_1|$ and $z_2 = V{\hspace{1pt}} |z_2|$ be the polar decompositions of
$z_1$ and $z_2$, respectively.
The C*-algebra $C_0(\mathbb{C}_q^2)$ of continuous functions
on the 2-dimensional quantum complex plane
vanishing at infinity is defined as the norm closure of
\begin{align} \label{defC0}
\mathcal{C}_0(\mathbb{C}^2_q):=\text{*-}\mathrm{alg} \Big\{ &\sum_{\text{finite}}
f_{nm}\big(\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) \sqrt{s^{2}-1}{\hspace{1pt}} t + s {\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t)\, ,{\hspace{1pt}} t{\hspace{1pt}} \big) \, U^{\# n}{\hspace{1pt}} V^{\# m}\,:\, \ n,m\in\mathbb{Z}, \\[-4pt]
& \ f_{nm} \in C_0([0,\infty)\! \times\! [0,\infty)), \ \; f_{nm}(0,t)=0 \ \,\text{if} \ \,n\neq 0, \ \; f_{nm}(s,0)=0 \ \,\text{if} \ \,m\neq 0 \Big\}
\nonumber
\end{align}
in $\mathrm{B}(\hH)$.
\end{defn}
Apart from the non-commutativity in \eqref{fUU} and \eqref{fVV}, the main difference to the classical case
is the unusual expression in the first argument of the function $f_{nm}$. However,
if we look at the representation on the orthogonal components of \eqref{ortoH} separately,
our formulas have a natural geometric interpretation.
First note that the function $h_{00}(s,t):=\chi_{\{0\}}{\hspace{-1pt}}(s) \chi_{\{0\}}{\hspace{-1pt}}(t) \in\hH$ generates the 1-dimensional invariant subspace
$\mathcal{L}_2( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}} \epsilon{\hspace{1pt}} \delta_0\otimes \delta_0 )$, and the representation of $\mathcal{C}_0(\mathbb{C}^2_q)$
on it reads
\[ \label{ev0}
\sum_{\text{finite}}
f_{nm}(\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) \sqrt{s^{2}-1}{\hspace{1pt}} t + s {\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t)\, ,{\hspace{1pt}} t) \, U^{\# n}{\hspace{1pt}} V^{\# m} \, h_{00}(s,t) = f_{00}(0,0) {\hspace{1pt}} h_{00}(s,t)
\]
since $f_{nm}(0,0)= 0$ if $n\neq 0$ or $m\neq 0$.
Obviously, \eqref{ev0} corresponds to evaluating functions on the 2-dimensional complex plane at $(0,0)$.
This 1-dimensional representation describes the only classical point $(0,0)$ of $\mathbb{C}_q^2$.
Next, on $\mathcal{L}_2( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}}\nu {\hspace{-1pt}} \otimes{\hspace{-1pt}} \delta_0)$, we have $z_2=0$ and can
thus write
\[ \label{C0C}
\sum_{n,m} f_{nm}(\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) \sqrt{s^{2}-1}{\hspace{1pt}} t + s {\hspace{1pt}} \chi_{\{0\}}{\hspace{-1pt}}(t)\, ,{\hspace{1pt}} t) \, U^{\# n}{\hspace{1pt}} V^{\# m}
= \sum_{n} f_{n0}(s ,0) \, U^{\# n}.
\]
Recalling that $z_1$ acts on $\mathcal{L}_2( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}}\nu {\hspace{-1pt}} \otimes{\hspace{-1pt}} \delta_0)$ as a $q$-normal operator
and comparing \eqref{C0C} with \eqref{Cq} shows that the restriction of
$\mathcal{C}_0(\mathbb{C}^2_q)$ to $\mathcal{L}_2( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}}\nu {\hspace{-1pt}} \otimes{\hspace{-1pt}} \delta_0)$ generates $C_0(\mathbb{C}_q)$.
This representation corresponds to an inclusion $\mathbb{C}_q\times\{0\} \subset \mathbb{C}^2_q$.
Finally, on $\mathcal{L}_2\big( [0,\infty)\! \times\! [0,\infty){\hspace{1pt}} , {\hspace{1pt}} \sigma {\hspace{-1pt}} \otimes{\hspace{-1pt}}\mu\big)$,
we have $t= |z_2|>0$ and $\chi_{[q^{-1},\infty)}{\hspace{-1pt}}(s) \sqrt{s^{2}-1} = |\omega |$, see \eqref{wy}.
Thus the representation of the functions from \eqref{defC0} can be written
$$
\sum_{\text{finite}}f_{nm}(|\omega|\, t \, ,{\hspace{1pt}} t) \, U^{\# n}{\hspace{1pt}} V^{\# m}.
$$
Classically we get, for all $|\omega|\geq 0$,
$$
\sum_{\text{finite}}f_{nm}(|\omega|\, t \, ,{\hspace{1pt}} t) {\hspace{1pt}} \mathrm{e}^{\mathrm{i} \varphi n}{\hspace{1pt}} \mathrm{e}^{\mathrm{i} \theta m} \Big|_{t=0}
= \sum_{\text{finite}}f_{nm}(0,0) {\hspace{1pt}} \mathrm{e}^{\mathrm{i} \varphi n}{\hspace{1pt}} \mathrm{e}^{\mathrm{i} \theta m} = f_{00}(0,0).
$$
Therefore these functions separate only the points of $\mathbb{C}^2\setminus \mathbb{C} \times\{0\} $ and the whole
subspace $\mathbb{C} \times\{0\} $ gets identified with the single point $(0,0)$.
Geometrically, this corresponds to a 2-dimensional complex plane, where $\mathbb{C} \times\{0\} $ is shrunk
to one point.
Arguing backwards, we can say that the representation from \eqref{zL2} corresponds to a
2-dimen\-sional quantum complex plane, where $\mathbb{C}_q \times\{0\} $ is shrunk to a point, and that
$\mathbb{C}_q \times\{0\} $ gets glued into this space by the representation \eqref{zL20}.
Moreover, the origin of the 2-dimensional quantum complex plane is the only classical point described
by the 1-dimensional representation \eqref{ev0}.
\section{Review of Anisotropic Viscosity}
In this section we give a more general review of the anisotropic Hall viscosity, summarizing the setup of Ref.~\cite{rao2019hall}. Without any rotational symmetry and in the absence of time reversal symmetry, the Hall viscosity tensor is generically expressed in terms of six coefficients, \begin{align} \label{eq:hallviscgeneral}
\visc{(\eta^\mathrm{H})}{\mu}{\nu}{\lambda}{\rho}
&\equiv\frac{1}{2}\left(\visc{\eta}{\mu}{\nu}{\lambda}{\rho}-\visc{\eta}{\lambda}{\rho}{\mu}{\nu}\right) \nonumber\\
&=\eta^\mathrm{H}\visc{(\sigma^z\wedge\sigma^x)}{\mu}{\nu}{\lambda}{\rho}+\gamma\visc{(\sigma^z\wedge\epsilon)}{\mu}{\nu}{\lambda}{\rho} \nonumber \\
& +\Theta\visc{(\sigma^x\wedge\epsilon)}{\mu}{\nu}{\lambda}{\rho}\nonumber +\bar{\eta}^\mathrm{H}\visc{(\delta\wedge\epsilon)}{\mu}{\nu}{\lambda}{\rho}+\bar{\gamma}\visc{(\delta\wedge\sigma^x)}{\mu}{\nu}{\lambda}{\rho}\nonumber \\ &+\bar{\Theta}\visc{(\sigma^z \wedge \delta)}{\mu}{\nu}{\lambda}{\rho},
\end{align}
Now when we look at the viscous forces produced in the bulk by this Hall viscosity tensor, we see that the barred and unbarred coefficients contribute to the same component of the bulk viscous force. In particular, the viscous force density is controlled by the rank-two ``Hall tensor''
\begin{align}
f^{\mathrm{H},\eta}_\mathrm{\nu}&=\sum_{\substack{\mu\nu'\rho'\\ \rho\lambda}}\frac{1}{2}\left(\epsilon^{\nu'\rho'}\visc{(\eta^\mathrm{H})}{\mu}{\nu'}{\lambda}{\rho'}\right)\partial_\mu\partial_\lambda(\epsilon_{\nu\rho}v^\rho) \\
&\equiv\sum_{\mu\lambda\rho}\eta_\mathrm{H}^{\mu\lambda}\partial_\mu\partial_\lambda(\epsilon_{\nu\rho}v^\rho)\nonumber.\\
\text{with} \;
\eta_\mathrm{H}^{\mu\nu}&=\frac{1}{4}\sum_{\lambda\rho}\epsilon^{\lambda\rho}\left(\visc{\eta}{\mu}{\lambda}{\nu}{\rho}+\visc{\eta}{\nu}{\lambda}{\mu}{\rho}\right)\\
&= (\eta^\mathrm{H}+\bar{\eta}^\mathrm{H})\delta^{\mu\nu}+(\gamma+\bar{\gamma})\sigma_z^{\mu\nu}+(\Theta+\bar{\Theta})\sigma_x^{\mu\nu} \nonumber.
\end{align}
The coefficient $\eta^\mathrm{H}$ is the usual isotropic Hall viscosity \cite{avron1987adiabatic}, the coefficient $\bar{\eta}^\mathrm{H}$ breaks angular momentum conservation and can appear in active (or anisotropic) systems, and the rest of the coefficients are explicitly anisotropic and appear when a system has less than threefold rotation symmetry.
\subsection{Non-dissipative contact terms}
As mentioned in the main text, the difference $\eta^\mathrm{H}_\mathrm{diff}\equiv \eta^\mathrm{H}-\bar{\eta}^\mathrm{H}$ between the isotropic Hall viscosities does not enter into the bulk force; it can be shifted by adding a divergenceless ``contact'' term \cite{rao2019hall} $\delta \tau^i_j = C_0\partial^*_i v_j$ to the bulk stress tensor. Through the lens of the viscosity tensor, the individual coefficients get shifted as
\begin{equation}
\begin{aligned}
\eta^H \rightarrow \eta^H + C_0/2\\
\bar{\eta}^H \rightarrow \bar{\eta}^H - C_0/2,
\end{aligned}
\end{equation}
We note here that the contact term admits a more general expression,
\begin{equation}
\delta\tau^\mu_{\hphantom{\mu}\nu}=\sum_{\lambda\rho}\epsilon^{\mu\lambda}C_{\nu\rho}\partial_\lambda v^\rho,\label{eq:divfreecontact}
\end{equation}
where the coefficient $C_{\nu\rho}$ now takes the more general form of a symmetric rank-two tensor,
\begin{equation}
\label{eq:contacttermtensor}
C_{\nu\rho}=C_0\delta_{\nu\rho}+C_x \sigma^x_{\nu\rho}+C_z\sigma^z_{\nu\rho},
\end{equation}
In addition to the described effect of $C_0$, this shifts the difference between all barred and unbarred viscosities, with the individual coefficients shifting as
\begin{eqnarray}
\gamma&\rightarrow \gamma+C_z/2 \;\;\;\;\;\;\bar{\gamma}&\rightarrow \bar{\gamma}-C_z/2 \\
\Theta&\rightarrow \Theta+C_x/2 \;\;\;\;\;\;\bar{\Theta}&\rightarrow \bar{\Theta}-C_x/2
\end{eqnarray}
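For constant coefficients $C_{\nu\rho}$, one can check directly that the contact term \eqref{eq:divfreecontact} never contributes to the bulk force: since $\partial_\mu\partial_\lambda$ is symmetric while $\epsilon^{\mu\lambda}$ is antisymmetric,
\begin{equation}
\partial_\mu{\hspace{1pt}} \delta\tau^\mu_{\hphantom{\mu}\nu}=\sum_{\mu\lambda\rho}\epsilon^{\mu\lambda}C_{\nu\rho}{\hspace{1pt}}\partial_\mu\partial_\lambda v^\rho = 0,
\end{equation}
so its entire effect resides at the boundary.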
We can continue viewing the contact terms as viscosities by looking at the boundary force provided by the contact term $C_0$, for example:
\begin{equation}
{\bf f}^{(C_0 ,\, \text{bdry})} = C_0 \left[ \left(\partial_{\bf s} v_{\bf n} + \frac{v_{\bf s}}{R} \right) \hat{\bf n} + \left(\partial_{\bf s} v_{\bf s} - \frac{v_{\bf n}}{R} \right) \hat{\bf s} \right].
\end{equation}
In this viewpoint, the contact term is a proxy for modified stress boundary conditions, with the above expression dictating the stress at the boundary.
\subsection{Dissipative viscosities \& contact term}
With higher than twofold rotational symmetry\footnote{Technically for fourfold symmetric systems there are two independent shear viscosities--this detail will not be relevant for our analysis} the dissipative viscosity tensor for a fluid can be parametrized as
\begin{align} \label{eq:dissipativevisc}
\visc{(\eta^\mathrm{D})}{\mu}{\nu}{\lambda}{\rho}
&\equiv\frac{1}{2}\left(\visc{\eta}{\mu}{\nu}{\lambda}{\rho}+\visc{\eta}{\lambda}{\rho}{\mu}{\nu}\right) \nonumber\\
&=\eta^\mathrm{sh}\visc{(\sigma^x\odot\sigma^x+\sigma^z\odot\sigma^z)}{\mu}{\nu}{\lambda}{\rho}+\eta^\mathrm{R}\visc{(\epsilon\odot\epsilon)}{\mu}{\nu}{\lambda}{\rho} \nonumber \\
& +\eta^\mathrm{RC}\visc{(\delta\odot\epsilon)}{\mu}{\nu}{\lambda}{\rho}+ \zeta\visc{(\delta\odot\delta)}{\mu}{\nu}{\lambda}{\rho}\nonumber,
\end{align}
The familiar bulk viscosity $\zeta$ and shear viscosity $\eta^\mathrm{sh}$ provide frictional forces in response to dynamic dilatations and volume-preserving shears, respectively. The rotational or vortex viscosity $\eta^\mathrm{R}$ breaks angular momentum conservation (analogous to $\bar{\eta}^\mathrm{H}$) and provides local resistive torques in response to vorticity. Lastly, $\eta^\mathrm{RC}$ is another dissipative viscosity that breaks angular momentum conservation. For an incompressible fluid, $\eta^{\mathrm{RC}}$ and $\bar{\eta}^H$ provide the same stress both in the bulk and on the boundary, and so in our analysis we can set $\eta^\mathrm{RC}=0$ without loss of generality\footnote{We note there are other constraints on this coefficient $\eta^\mathrm{RC}$ considered in Ref.~\cite{monteiro2021hamiltonian}}.
In addition to the non-dissipative contact terms, there is another contact term that plays a similar role except for dissipative viscosities, and amounts to considering an antisymmetric piece of the tensor $C_{\mu\nu}$ in Eq.~\eqref{eq:contacttermtensor}. Explicitly this contact term is
\begin{equation}
\delta\tau^\mu_{\hphantom{\mu}\nu}= C_{\mathrm{dis}}
\sum_{\lambda\rho}\epsilon^{\mu\lambda}\epsilon_{\nu\rho}\partial_\lambda v^\rho,\label{eq:divfreecontactdiss}
\end{equation}
Similar to the non-dissipative case, the bulk dissipative forces only depend on the linear combination $\eta^\mathrm{dis}_\mathrm{tot}=\eta^\mathrm{R}+\eta^\mathrm{sh}$. When added, this contact term shifts three viscosities:
\begin{equation}
\begin{aligned}
\label{eq:Dshifts}
\eta^\mathrm{sh} &\rightarrow \eta^\mathrm{sh} - C_\mathrm{dis}/2\\
\eta^\mathrm{R} &\rightarrow \eta^\mathrm{R} + C_{\mathrm{dis}}/2\\
\zeta &\rightarrow \zeta + C_{\mathrm{dis}}/2
\end{aligned}
\end{equation}
For the case of an incompressible fluid with $\zeta=0$, the contact term shifts the difference $\eta^\mathrm{dis}_\mathrm{diff}\equiv\eta^\mathrm{R} - \eta^\mathrm{sh}$, which is the case considered in the main text. We also note that it appears from the above that the contact term can generate a nonzero bulk viscosity for an incompressible fluid with $\zeta=0$. In practice, however, this is unobservable as the dynamic constraint $\nabla \cdot {\bf v} = 0$ for an incompressible fluid prevents the bulk viscosity from contributing to the stress tensor.
For the threefold or higher rotationally symmetric case we consider in the main text, the dissipative viscous force on the boundary is
\begin{equation}
\begin{aligned}
\label{eq:dissbf}
{\bf f}^\mathrm{dis}&= \left[\left(\eta^\mathrm{dis}_\mathrm{tot} + \eta^\mathrm{dis}_\mathrm{diff}\right) \partial_{\bf n} v_{\bf{n}} \right] {\bf \hat{n}} \\
&+ \left[\eta^\mathrm{dis}_\mathrm{tot} \omega + (\eta^\mathrm{dis}_\mathrm{tot}+\eta^\mathrm{dis}_\mathrm{diff}) \left(\partial_{\bf n} v_{\bf s} - \frac{v_{\bf s}}{R}\right) \right]{\bf \hat{s}}
\end{aligned}
\end{equation}
Just as in the non-dissipative case, the boundary force depends not only on the bulk hydrodynamic observable $\eta^\mathrm{dis}_\mathrm{tot}$, but also on the difference $\eta^\mathrm{dis}_\mathrm{diff}$.
\subsection{Stress boundary conditions}
We detail the modified version of the no-stress boundary condition, relevant for the free-surface fluid problem we consider later on:
\begin{equation}
\hat{n}_\mu \tau^{\mu}_{\hphantom{\mu}\nu} = 0
\end{equation}
For a fluid with pressure $p$, we have the following conditions for the normal and tangential forces on the boundary:
\begin{equation}
\begin{aligned}
\label{eq:nostressbcs}
\hat{n}_\mu\hat{n}^\nu \tau^{\mu}_{\hphantom{\mu}\nu} =-p + \left(\eta^\mathrm{H}_\mathrm{tot} + \eta^\mathrm{H}_\mathrm{diff}\right) \left( \partial_{\bf s} v_{\bf n} + \frac{v_{\bf s}}{R} \right) + \eta^\mathrm{H}_\mathrm{tot} \omega + \left(\eta^\mathrm{dis}_\mathrm{tot} + \eta^\mathrm{dis}_\mathrm{diff}\right) \partial_{\bf n} v_{\bf{n}} &=0\\
\hat{n}_\mu\hat{s}^\nu \tau^{\mu}_{\hphantom{\mu}\nu} =\left(\eta^\mathrm{H}_\mathrm{tot} + \eta^\mathrm{H}_\mathrm{diff}\right) \left( \partial_{\bf s} v_{\bf s} - \frac{v_{\bf n}}{R} \right) + \eta^\mathrm{dis}_\mathrm{tot} \omega + (\eta^\mathrm{dis}_\mathrm{tot}+\eta^\mathrm{dis}_\mathrm{diff}) \left(\partial_{\bf n} v_{\bf s} - \frac{v_{\bf s}}{R}\right) &=0
\end{aligned}
\end{equation}
\section{Modified Lamb surface waves: anisotropic viscosity} \label{sec:two}
In this section we provide a more detailed derivation of the results of the main text for (incompressible) surface wave flow for a fluid with anisotropic odd viscosity in a half plane geometry, parameterized by $y = h(x,t)$ (see Figure.~\ref{fig:halfplane}). In particular, we would like to see how the dispersion $\Xi(k)$ of the surface waves is modified by the presence of our anisotropic odd viscosities, and how this is impacted by the dissipative and non-dissipative contact terms $C_0$ and $C_\mathrm{dis}$. We follow the strategy outlined in Ref.~\cite{abanov2018odd}, paying particular attention to the redundancies between the viscosity coefficients. We choose to frame the velocity field in terms of potentials $\phi$ (velocity potential) and $\psi$ (stream function) such that $\psi$ is the only source of vorticity:
\begin{equation}
\label{eq:veltwopotssup}
v_i = \partial_i \phi + \epsilon_{ik} \partial_k \psi
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=.5\columnwidth]{halfplane.pdf}
\caption{Half plane geometry where the height of the half plane is a surface wave with wavenumber $k$ and frequency $\Xi$. Considering an anisotropic viscous fluid, we try to find the dispersion relation $\Xi(k)$.}
\label{fig:halfplane}
\end{figure}
For the incompressible flow we consider, the velocity potential $\phi$ is harmonic
\begin{equation}
\label{eq:incompwave}
\boldsymbol{\nabla} \cdot {\bf v}= \nabla^2 \phi = 0.
\end{equation}
Similarly, the Laplacian of the stream function gives the vorticity
\begin{equation}
\boldsymbol{\nabla}\times\mathbf{v} = -\nabla^2\psi = \omega
\end{equation}
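Both relations follow directly from the decomposition \eqref{eq:veltwopotssup}. Writing the two-dimensional curl as $\boldsymbol{\nabla}\times\mathbf{v}=\epsilon_{ij}\partial_i v_j$ with $\epsilon_{xy}=+1$, we have
\begin{equation}
\partial_i v_i = \nabla^2\phi + \epsilon_{ik}\partial_i\partial_k\psi = \nabla^2\phi, \qquad
\epsilon_{ij}\partial_i v_j = \epsilon_{ij}\partial_i\partial_j\phi + \epsilon_{ij}\epsilon_{jk}\partial_i\partial_k\psi = -\nabla^2\psi,
\end{equation}
since the antisymmetric contractions of $\partial_i\partial_j$ vanish and $\epsilon_{ij}\epsilon_{jk}=-\delta_{ik}$.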
In the bulk of the half plane, our viscous fluid must satisfy the momentum continuity equation--which serves as the bulk equation of motion:
\begin{equation}
D_t (\rho v_\mu) = \partial_t (\rho v_\mu) + \rho v_\nu \partial_\nu v_\mu = - \partial_\nu \tau_{\nu\mu} -\rho g\hat{y}_\mu
\end{equation}
Here we have used the classical constitutive relation $\mathbf{g}_{\mathrm{mom}}=\rho\mathbf{v}$ to express the momentum density of the fluid in terms of the density $\rho$. We consider the Eulerian perspective of fluid flow and write the continuity equation in terms of a fluid derivative $D_t = \partial_t + v_i \partial_i$ \footnote{Alternatively, as in our previous work Ref.~\cite{rao2019hall}, we could have taken the Lagrangian perspective and grouped the non-linear convective term with the stress tensor in a momentum flux tensor \cite{lamb1924hydrodynamics,landau1986theory} }. As we are considering linearized surface waves for an incompressible fluid, we can set $\rho=1$ for convenience and neglect the higher-order convective term in the continuity equation to obtain the linearized equation of motion
\begin{equation}\label{eq:lineareom}
\partial_t {\bf v} = -\boldsymbol{\nabla} p + \eta^\mathrm{H}_{\mathrm{tot}} \boldsymbol{\nabla} \omega + \eta^{\mathrm{dis}}_{\mathrm{tot}}\;\boldsymbol{\nabla}^2 {\bf v} - g \hat{\bf y}
\end{equation}
As expected, the viscosities enter the equation of motion in terms of the sums $\eta^H_{\mathrm{tot}} = \eta^H +\bar{\eta}^H$ and $\eta^{\mathrm{dis}}_{\mathrm{tot}} = \eta^\mathrm{R} + \eta^\mathrm{sh}$. Here we notice that in the bulk non-dissipative viscosities can be thought of as a modification to the pressure of the fluid, in particular we can define the ``modified pressure'' \cite{abanov2018odd,ganeshan2017odd}
\begin{equation}
\tilde{p} = p - \eta^H_{\mathrm{tot}}\omega.
\end{equation}
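Indeed, since $\eta^\mathrm{H}_{\mathrm{tot}}$ is a constant, the non-dissipative term in Eq.~\eqref{eq:lineareom} is a total gradient and combines with the pressure gradient as
\begin{equation}
-\boldsymbol{\nabla} p + \eta^\mathrm{H}_{\mathrm{tot}}\, \boldsymbol{\nabla}\omega = -\boldsymbol{\nabla}\left(p - \eta^\mathrm{H}_{\mathrm{tot}}\,\omega\right) = -\boldsymbol{\nabla}\tilde{p}.
\end{equation}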
This is a manifestation of the ``triviality'' of the Hall viscosity in the bulk, since we can view it as a modification to the pressure of the fluid \cite{Wiegmann_boundary,Wiegmann_vortex}. We now see the equation of motion simplifies to
\begin{equation}
\partial_t {\bf v} = -\boldsymbol{\nabla} \tilde{p} + \eta^{\mathrm{dis}}_{\mathrm{tot}}\;\boldsymbol{\nabla}^2 {\bf v} - g \hat{\bf y}
\end{equation}
If we take the curl of the equation above, we find that the vorticity $\omega$ satisfies
\begin{equation}
\label{eq:vorticity}
\partial_t \omega = \eta^{\mathrm{dis}}_{\mathrm{tot}}\; {\boldsymbol{\nabla}}^2 \omega
\end{equation}
The bulk equation of motion must be supplemented by boundary conditions, and for the problem at hand we are physically motivated \cite{lamb1924hydrodynamics} to choose a no-stress boundary condition at the surface of the half plane and a kinematic boundary condition on the velocity vector. These are, denoting the boundary as $Y = h(x,t)$:
\begin{equation}
\begin{aligned}
\label{eq:bcs}
\hat{n}_\mu \tau_{\mu\nu}\biggr|_Y &= 0\\
v_y\bigg|_Y &=\partial_t Y \;\;
\end{aligned}
\end{equation}
These are sometimes referred to as the dynamic (stress condition) and kinematic (velocity condition) boundary conditions, respectively \cite{abanov2018odd}. We now have the equations of motion that are to be satisfied for our surface wave flow, and proceed by assuming a wave solution for the velocity potentials $\phi$ and $\psi$, where
\begin{equation}
\begin{aligned}
\label{eq:waveansatz}
\phi = \left(-iA\frac{k}{|k|} e^{|k| y} + Be^{-|k| y}\right) e^{ikx - i\Xi t}\\
\psi = (C e^{my} + De^{-my}) e^{ikx - i\Xi t}
\end{aligned}
\end{equation}
We enforce that the velocity be zero as $y \rightarrow -\infty$, meaning we need to set $B = 0$ and $D=0$ [$\mathrm{Re}(m)\geq 0$ by construction]. The incompressibility condition Eq.~\eqref{eq:incompwave} dictates that the wave-number $k$ parameterizes both the $x$ and $y$ dependence of the potential $\phi$, whereas $\psi$ requires two parameters $m$ and $k$. To begin to apply the boundary conditions in terms of the wave ansatz solutions in Eq.~\eqref{eq:waveansatz}, we explicitly write down the components of velocity according to Eq.~\eqref{eq:veltwopotssup},
\begin{equation}
\begin{aligned}
\label{eq:velocityexpr}
v_x &= (A |k| e^{|k| y} + Cm e^{m y}) e^{ikx - i\Xi t},\\
v_y & = -ik (A e^{|k| y} + C e^{my}) e^{ikx - i\Xi t}.
\end{aligned}
\end{equation}
The physical velocity is determined by taking the real part of this expression. We see that the velocity potential $\phi$ appears through the coefficient $A$ and $\psi$ through $C$ -- consequently, the amplitude $C$ must be proportional to the vorticity. The kinematic boundary condition tells us $\partial_t h = v_y(x,h,t)$ and thus the explicit behavior of the surface. This gives us the following relations for the height $h(x,t)$, the vorticity $\omega$ and the pressure $\tilde{p}$ in terms of the velocity potentials
\begin{equation}
\begin{aligned}
\label{eq:hvp}
h(x,t) &= \frac{k}{\Xi} (A + C) e^{ikx - i\Xi t},\\
\omega &= e^{ikx - i\Xi t} (k^2-m^2) C e^{my},\\
\tilde{p} &= \Xi \frac{k}{|k|} A e^{|k| y} e^{ikx - i\Xi t} - gy.
\end{aligned}
\end{equation}
The first expression comes from integrating the kinematic boundary condition with respect to time, and keeping only terms to lowest order in the wave amplitudes. The second expression comes from substituting our ansatz for the velocity into the definition of the vorticity. Finally, the third equation comes from writing Eq.~\eqref{eq:lineareom} in terms of $\phi$ and $\psi$, and making use of Eqs.~\eqref{eq:vorticity} and \eqref{eq:incompwave}.
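Explicitly, linearizing the kinematic condition about $y = 0$ and keeping only first-order terms in the amplitudes gives
\begin{equation}
\partial_t h = v_y\big|_{y=0} = -ik\,(A+C)\, e^{ikx - i\Xi t}
\quad\Longrightarrow\quad
h(x,t) = \frac{k}{\Xi}\,(A+C)\, e^{ikx - i\Xi t}
\end{equation}
upon integrating in time, while substituting $\psi$ from Eq.~\eqref{eq:waveansatz} into $\omega=-\nabla^2\psi$ yields $\omega = (k^2 - m^2)\, C e^{my} e^{ikx - i\Xi t}$.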
We have thus reduced the problem to finding relations for $m$, $k$, $\Xi$ and the amplitudes $A$ and $C$, and now move to apply the bulk equations of motion and no-stress boundary conditions. Our goal is to find $\Xi$ as a function of $k$, and thus to determine how the dispersion is affected by the viscosities and contact terms in our setup.
We first proceed by analyzing the bulk vorticity equation Eq.~\eqref{eq:vorticity},
\begin{equation}
\partial_t \omega = \eta^\mathrm{dis}_\mathrm{tot} \nabla^2 \omega
\end{equation}
If we substitute our wave ansatz Eq.~\eqref{eq:hvp}, this leads to the relation
\begin{equation}
m^2 = k^2 - i\Xi/(\eta^\mathrm{sh} + \eta^R)
\end{equation}
between dispersion $\Xi$ and the parameters $m$ and $k$. The other two unknowns of the problem are the amplitude coefficients $A$ and $C$. The no stress boundary conditions in Eq.~\eqref{eq:bcs} should now supply us with enough information to estimate the dispersion and amplitudes for these surface waves. \\
In the bulk, we have the same setup as Lamb \cite{lamb1924hydrodynamics}, with the modified pressure $\tilde{p}$ playing the role of the pressure. On the boundary, the Hall viscosity has a contribution separate from the pressure, and the resulting stress boundary conditions differ from Lamb's setup \cite{abanov2018odd,soni2019odd}. Moreover, our situation diverges further from previous works as our additional anisotropic Hall and dissipative viscosities ($\bar{\eta}^\mathrm{H}$ and $\eta^\mathrm{R}$) differentiate themselves from their usual counterparts ($\eta^\mathrm{H}$ and $\eta^\mathrm{sh}$) at the boundary. \\
We now unpack the no-stress conditions $f^{\text{bdry}}_\nu = \hat{n}^\mu \tau_{\mu\nu} = 0$. In our linearized picture, the normal vector to the surface is $\hat{\mathbf{n}} \approx (0,1) = \hat{\bf y}$. Above linear order the normal vector depends on the function $h(x,t)$ and is non-constant. The statement that there is no stress at the boundary gives us two constraints-- first in the $y$ direction we have
\begin{equation}
\begin{aligned}
\label{eq:dbc1}
f^{\text{bdry}}_y &= 0\\
\hookrightarrow p &= 2\eta^\mathrm{sh} \partial_y v_y - \eta^H (\partial_y v_x + \partial_x v_y) + \bar{\eta}^H \omega \end{aligned}
\end{equation}
Using the explicit expressions for pressure, vorticity and velocity in Eqs.~\eqref{eq:velocityexpr} and \eqref{eq:hvp}, this condition becomes:
\begin{equation}
\begin{aligned}
\label{eq:condition1}
A \left\lbrace \Xi^2 + 2\eta^{H} \Xi |k| k + 2i \Xi k^2 \eta^\mathrm{sh} - g |k| \right\rbrace + C \left \lbrace 2i |k|\Xi \eta^\mathrm{sh} m + 2 k|k| \Xi \eta^H
- g|k| \right\rbrace = 0
\end{aligned}
\end{equation}
Surprisingly, the anisotropic viscosities $\bar{\eta}^H$ and $\eta^R$ have cancelled out leaving a normal boundary condition identical to the cases considered in previous works \cite{abanov2018odd,soni2019odd}. Setting the tangential component of the boundary force to zero yields
\begin{equation}
\begin{aligned}
\label{eq:dbc2}
f^{\text{bdry}}_x &= 0\\
\hookrightarrow 0 &= \eta^\mathrm{sh}(\partial_x v_y + \partial_y v_x) + \eta^H(\partial_y v_y - \partial_x v_x) - \eta^\mathrm{R} \omega
\end{aligned}
\end{equation}
The anisotropic viscosities also do not enter this condition, which simplifies to:
\begin{equation}
\label{eq:cond2}
2A \left[\eta^\mathrm{sh} ik^2 + \eta^H k|k|\right] + C\left[2\eta^Hkm + 2ik^2 \eta^\mathrm{sh} + \Xi \right]=0
\end{equation}
We can combine the two conditions to form one overall {\it consistency condition} which relates $k$, the dispersion $\Xi$ and the viscosities. Since we are viewing $\Xi$ as a function of $\mathbf{k}$, and since the physical solutions are only determined by the real part of Eq.~\eqref{eq:velocityexpr}, we can restrict to $k>0$ without loss of generality; the $k<0$ solutions are obtained by complex conjugating our resultant expressions. Dividing Eq.~\eqref{eq:condition1} by \eqref{eq:cond2} gives, for $k>0$,
\begin{equation}
\label{eq:constraint}
\frac{gk - \Xi^2 - 2\Xi k^2(\eta^H + i \eta^\mathrm{sh})}{2k^2 (\eta^H + i \eta^\mathrm{sh})} = \frac{gk - 2 \Xi k (\eta^H k + i \eta^\mathrm{sh} m)}{\Xi + 2k(\eta^H m+ i \eta^\mathrm{sh} k)}.
\end{equation}
We will use this equation to compute the dispersion $\Xi(k)$ in different limits, and examine how it is affected by the anisotropic viscosity and contact terms\footnote{We consider the case $k>0, \eta^H>0$ as the other cases are symmetric, see Ref.~\cite{ganeshan2017odd}}.
Reorganizing Eq.~\eqref{eq:constraint}, and discarding a trivial solution with $m=k$ and $\Xi = 0$, we find a polynomial equation for $m$. Introducing dimensionless quantities,
\begin{equation}
\begin{aligned}
\beta^2 = \frac{(\eta^\mathrm{sh} + \eta^R) k^2}{\sqrt{gk}}, \; \alpha = \frac{\eta^{H}}{\eta^\mathrm{sh} + \eta^R}, \\ \kappa = \frac{m}{k} \;\; \text{and} \;\; \gamma = \frac{\eta^\mathrm{sh}}{\eta^\mathrm{sh}+\eta^R}
\end{aligned}
\end{equation}
we can now cast the consistency condition as
\begin{widetext}
\begin{equation}
\begin{aligned}
\frac{\left[\kappa + 1 - 2i\alpha \right]}{\beta^4} + (\kappa-1)^2(\kappa+1)^3 - 4 (\kappa^2 -1) (\alpha^2 + \gamma^2) +4 \gamma (\kappa-1)(\kappa+1)^2 - 2 i \alpha (\kappa - 1)(\kappa + 1 )^3= 0
\end{aligned}
\end{equation}
\end{widetext}
\subsection{Gravity dominated waves}\label{sec:lambdispersiongamma}
We first consider the case where gravity dominates, so $\beta \ll 1$. Rescaling our coordinates to $x = \beta \kappa$, we find the constraint equation to be
\begin{equation}
\begin{aligned}
x+\beta-2i\alpha\beta + (x-\beta)^2(x + \beta)^3 - 4 \beta^3(x^2 - \beta^2)(\alpha^2 + \gamma^2) + 4\gamma(x - \beta)(x+\beta)^2 \beta^3 - 2i\alpha\beta(x-\beta)(x+\beta)^3 = 0
\end{aligned}
\end{equation}
\noindent \textbf{Zero viscosity solution.} The zero viscosity limit $\alpha = \beta = \gamma = 0$ gives the classical dispersion relation for gravity waves \cite{lamb1924hydrodynamics,ganeshan2017odd}
\begin{equation}
\Xi = \pm \sqrt{gk}
\end{equation}
\begin{comment}
\textbf{Dissipative corrections.} If we now turn on corrections order by order in $\beta$, the constraint equation becomes
\begin{equation}
1 + x^4 + (4\gamma - 2)\beta^2 x^2 = 4\beta^3 \gamma^2 x + \mathcal{O}(\beta^4)
\end{equation}
We present the above equation to note that if $\gamma=1$, we recoup Eq.~(85) in Ref.~\cite{ganeshan2017odd}. Moving forward we simply work to order $\beta^3$, so we can neglect the term on the right hand side, and we are left with the following solutions for $x$:
\begin{equation}
x_{\pm} = e^{\pm i\pi/4} + e^{\mp i\pi/4} \left(1/2-\gamma \right) \beta^2
\end{equation}
These solutions correspond to dispersion relations:
\begin{equation}
\label{eq:lambdispersiongamma}
\Xi_\pm = \mp \sqrt{gk} - 2 i\gamma \eta^\mathrm{sh} k^2 + \mathcal{O}[(\eta^\mathrm{sh})^{3/2}]
\end{equation}
The first order correction to the Lamb dispersion is now dependent on $\gamma$. The dissipative redundancy tells us that by adding a contact term we can freely change $\gamma$ and therefore the viscous correction to the Lamb dispersion.
\begin{equation}
\begin{aligned}
\eta^\mathrm{sh} \rightarrow \eta^\mathrm{sh} + C_{dis}/2\\
\eta^R \rightarrow \eta^R - C_{dis}/2
\end{aligned}
\end{equation}
Making this contact term explicit in Eq.~\eqref{eq:lambdispersiongamma}, we have that
\begin{equation}
\Delta \Xi = -2ik^2\left[\frac{(\eta^\mathrm{sh})^2 + \eta^\mathrm{sh} C_{dis} + C_{dis}^2/4}{\eta^\mathrm{sh} + \eta^R}\right]
\end{equation}
\end{comment}
\noindent \textbf{Viscous corrections.} We now keep terms up to second order in $\beta$, representing small dissipative viscous corrections, and turn on a small non-dissipative correction $\alpha$. We keep the order of limits in analogy with Ref.~\cite{abanov2018odd}: the dissipative viscosities are small compared to gravity, i.e. $\beta \ll 1$. The solution to the resulting constraint equation is given by \footnote{The linear term $B\beta$ in the dispersion vanishes ($B=0$)}
\begin{equation}
\begin{aligned}
x_{\pm} &= A_{\pm} \beta^2 + C_{\pm} \\
C_{\pm} &= e^{\mp i\pi/4}\\
A_+ &= \frac{e^{i\pi/4}}{2} \left[2\gamma - 2i\alpha -1\right], \; \; A_- = \frac{e^{i\pi/4}}{2} \left[2\gamma - 2i\alpha -i\right]
\end{aligned}
\end{equation}
The frequency in this case is given by:
\begin{equation}
\begin{aligned}
\label{eq:disp}
\Xi_{\pm} &= \pm \sqrt{gk}-2(i\gamma + \alpha)\eta^\mathrm{dis}_\mathrm{tot} k^2\\
&= \pm \sqrt{gk}- 2i\eta^\mathrm{sh}k^2 - 2\eta^\mathrm{H}k^2
\end{aligned}
\end{equation}
Despite the additional anisotropic viscosities in our picture, this result matches exactly the case where $\eta^R = \bar{\eta}^\mathrm{H} = 0$ considered in Ref.~\cite{ganeshan2017odd}. However, we can now interpret this dispersion in terms of the totals of and differences between the viscosities:
\begin{equation}
\begin{aligned}
\Xi_{\pm} &= \pm \sqrt{gk}- i\left(\eta^\mathrm{dis}_\mathrm{tot} + \eta^\mathrm{dis}_\mathrm{diff}\right) k^2 - \left(\eta^\mathrm{H}_\mathrm{tot} + \eta^\mathrm{H}_\mathrm{diff}\right) k^2
\end{aligned}
\end{equation}
This dispersion is sensitive to both dissipative and non-dissipative contact terms, as the differences between odd viscosities and dissipative viscosities enter. To access the $k<0$ regime, we let $k\rightarrow |k|, \alpha \rightarrow - \alpha$ in Eq.~\eqref{eq:disp} and find analogous solutions.
\subsection{Pure (odd) viscosity waves: $g=0$}
We now consider the case where $g=0$ and the dynamics of our surface waves are dominated by viscosity. We also suppose that the odd viscosity plays the dominant role, $\eta^\mathrm{H} \gg \eta^\mathrm{sh}, \eta^\mathrm{R}$ \footnote{We work with individual viscosities initially and then convert to bulk and boundary components}. In this case, the constraint equation becomes
\begin{equation}
\frac{- \Xi^2 - 2\Xi k^2(\eta^H + i \eta^\mathrm{sh})}{2k^2 (\eta^H + i \eta^\mathrm{sh})} = \frac{- 2 \Xi k (\eta^H k + i \eta^\mathrm{sh} m)}{\Xi + 2k(\eta^H m+ i \eta^\mathrm{sh} k)}
\end{equation}
Discarding the trivial $\Xi = 0$ solution, this becomes:
\begin{equation}
\Xi^2 + 2\Xi k^2(\eta^\mathrm{H} + i \eta^\mathrm{sh}) + 2\Xi k (\eta^\mathrm{H} m + i \eta^\mathrm{sh} k) + 4k^3 (\eta^\mathrm{H}+i\eta^\mathrm{sh})(\eta^\mathrm{H}-i\eta^\mathrm{sh})(m-k) = 0
\end{equation}
If we utilize the relation $m^2 = k^2 - i\Xi/\eta^{\mathrm{dis}}_\mathrm{tot} \rightarrow \Xi = i(m-k)(m+k)\eta^\mathrm{dis}_\mathrm{tot}$, and discard terms above first order in the dissipative viscosities, we find
\begin{equation}
2i \eta^\mathrm{dis}_\mathrm{tot} (m+k)^2 + 4k^2 \eta^\mathrm{H} = 0
\end{equation}
This leads to the following dispersion (keeping only the solution with $\mathrm{Re}(m)>0$ that decays into the bulk)
\begin{equation}
\Xi = -2 \eta^\mathrm{H} k^2 - 2i k^2 \sqrt{|\eta^\mathrm{H}| \eta^\mathrm{dis}_\mathrm{tot}}
\end{equation}
The dispersion above describes chiral waves moving in a direction set by the odd viscosity. Importantly, it is only the \textit{component} $\eta^\mathrm{H}$, rather than the full odd viscosity $\eta^\mathrm{H}_\mathrm{tot}$, that sets the direction. This means that the direction of these chiral waves cannot be determined from bulk data alone, or equivalently that the expression above is sensitive to the non-dissipative contact term \footnote{Because of the strong constraints on the size of the dissipative viscosities, we cannot make any clear judgement about the dependence on the dissipative contact term here}.
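A direct transcription of this dispersion (a sketch with our own variable names, not code from this work) makes the chirality explicit: the sign of the propagating (real) part tracks the sign of $\eta^\mathrm{H}$.
\begin{verbatim}
import numpy as np

def chiral_wave_dispersion(k, eta_H, eta_sh, eta_R):
    eta_dis = eta_sh + eta_R
    return -2 * eta_H * k**2 - 2j * k**2 * np.sqrt(abs(eta_H) * eta_dis)

# Flipping the sign of the boundary component eta^H reverses the propagation direction.
assert chiral_wave_dispersion(1.0, +0.5, 0.01, 0.01).real < 0
assert chiral_wave_dispersion(1.0, -0.5, 0.01, 0.01).real > 0
\end{verbatim}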
\section{Active fluids \& angular momentum conservation}
In many classical chiral active fluids \cite{soni2019odd}, time-reversal symmetry is broken by a local rotation rate $\Omega$ of the fluid particles. In this case, for an isotropic and incompressible chiral active fluid, the stress takes on a modified form due to angular momentum conservation
\begin{equation}\label{eq:rotatingstress}
\tau_{\mu\nu} = -p \delta_{\mu\nu} + \eta^\mathrm{sh}\left(\partial_\mu v_\nu + \partial_\nu v_\mu \right) + \eta^\mathrm{H} (\partial^*_\mu v_\nu + \partial_\mu v^*_\nu) + \eta^\mathrm{R} \epsilon_{\mu\nu} (\omega - 2\Omega) + \bar{\eta}^\mathrm{H} \delta_{\mu\nu} (\omega - 2\Omega)
\end{equation}
We have effectively added two $\Omega$-dependent terms to our stress tensor. This corresponds to measuring vorticity of the fluid in a locally rotating frame with frequency $\Omega$. We treat $\Omega$ as a fixed (constant) parameter of our setup, as in the physical situation of a colloidal chiral mixture \cite{soni2019odd}, and thus the modifications to the stress tensor do not enter the bulk equations of motion. On the boundary, however, the $\Omega$-dependent terms provide a steady-state boundary force
\begin{equation}
\label{eq:constbdryfrce}
f^\mathrm{bdry}_\nu = -2 \left(\eta^\mathrm{R} \hat{s}_\nu \Omega +\bar{\eta}^\mathrm{H} \hat{n}_\nu \Omega\right).
\end{equation}
The local rotation rate $\Omega$ causes an additional torque at the boundary due to $\eta^\mathrm{R}$ and an additional pressure contribution due to $\bar{\eta}^\mathrm{H}$. In what follows, we consider how this alternate form of time-reversal symmetry breaking could affect the viscous surface waves in Sec.~\ref{sec:two}. We also allow for a longitudinal friction from a substrate $f^\mathrm{fric}_j = -\mu v_j$ to be consistent with the experimental setup of Ref.~\cite{soni2019odd}. This term only enters the bulk equations of motion, and stabilizes a steady-state fluid velocity in the absence of external torques. We will analyze surface waves for this fluid both with and without gravity. To do so, we first begin by deriving the bulk equations of motion.
\subsection{Equations of motion}
The linearized continuity equation for momentum, again setting the density $\rho =1$ for convenience, is now given by
\begin{equation}\label{eq:omegabulkeoms}
\partial_t {\bf v} = - \nabla \tilde{p} + \eta^{\mathrm{dis}}_\mathrm{tot} \nabla^2 {\bf v} - g{\bf \hat{y}} - \mu {\bf v},
\end{equation}
where $\mu$ parametrizes the friction between the fluid and the substrate. Following the experimental considerations of Ref.~\cite{soni2019odd}, we have neglected the nonlinear term in the equations of motion. Taking the curl of Eq.~\eqref{eq:omegabulkeoms} leads to the vorticity equation
\begin{equation}
\label{eq:vort2}
\partial_t \omega = \eta^\mathrm{dis}_\mathrm{tot} \nabla^2 \omega - \mu \omega.
\end{equation}
\subsection{Steady state flow}
The modifications we have made now allow for a steady-state vorticity (zeroth order in the amplitude of surface waves), whereas in the previous setup of Sec.~II, with $\Omega = 0$ and $\mu = 0$, we necessarily had $\omega = 0$ at zeroth order. We can look to solve the vorticity equation in the steady state, where Eq.~\eqref{eq:vort2} becomes
\begin{equation}
(\eta^\mathrm{dis}_\mathrm{tot}\nabla^2 - \mu) \, \omega = 0
\end{equation}
Again in the half plane geometry, $y\leq0$, it can be verified that
\begin{equation}
\label{eq:steadystate}
\omega_{s} = \frac{\eta^\mathrm{R}}{\eta^\mathrm{R} + \eta^\mathrm{sh}} (2\Omega) e^{y/\delta}
\end{equation}
satisfies the vorticity equation, where $\delta = ((\eta^\mathrm{R}+\eta^\mathrm{sh})/\mu)^{1/2}$ is the hydrodynamic length that appears in Ref.~\cite{soni2019odd}. In choosing the multiplicative constant, we have anticipated the boundary conditions of Sec.~\ref{sec:omegabcs} below. The steady state vorticity corresponds to a flow profile in the $x$ direction (if there were a $y$ component, the flow would blow up as $x\rightarrow \infty$):
\begin{equation}
v_x = -\frac{\eta^\mathrm{R}}{\eta^\mathrm{R} + \eta^\mathrm{sh}} (2\Omega) \delta e^{y/\delta}
\end{equation}
We refer to the zeroth order velocity at the boundary as $v_x^{(0)} \equiv v_x(y=0)$.
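For concreteness, the steady-state profile can be written out directly (an illustrative sketch with our own parameter names, not code from this work):
\begin{verbatim}
import numpy as np

def steady_state_profile(y, Omega, eta_sh, eta_R, mu):
    delta = np.sqrt((eta_R + eta_sh) / mu)  # hydrodynamic length
    omega_s = (eta_R / (eta_R + eta_sh)) * 2 * Omega * np.exp(y / delta)
    v_x = -(eta_R / (eta_R + eta_sh)) * 2 * Omega * delta * np.exp(y / delta)
    return omega_s, v_x

# Example: the zeroth order boundary velocity v_x^(0) = v_x(y=0).
_, vx0 = steady_state_profile(0.0, Omega=1.0, eta_sh=0.1, eta_R=0.5, mu=1.0)
\end{verbatim}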
\subsection{Modification to surface wave boundary conditions}\label{sec:omegabcs}
We now consider the generalization of our earlier linearized surface wave boundary conditions to account for the presence of a steady state, zeroth order fluid velocity. In terms of our no-stress boundary condition we have, by expanding the normal vector and the stress tensor to first order,
\begin{equation}\label{eq:bcexpansion}
n_\mu \tau_{\mu\nu} = (n^{(0)}_{\mu}+ \epsilon n^{(1)}_{\mu}) (\tau^{(0)}_{\mu\nu} + \epsilon \tau^{(1)}_{\mu\nu}) = 0.
\end{equation}
Here $\epsilon\hat{\mathbf{n}}^{(1)}$ is the first order variation of the surface normal vector (taking into account the variations in the fluid height), and $\epsilon\tau_{\mu\nu}^{(1)}$ is the first order variation of the stress tensor (taking into account the linearized fluid velocity). We consider our surface wave setup, where we treat the height $y = h(x,t)$ as a small perturbation around $y=0$. This means that the normal vector can be written as
\begin{equation}\label{eq:normal}
{\bf \hat{n}} = {\bf \hat{n}}^{(0)} + \epsilon\, {\bf \hat{n}}^{(1)} \approx {\bf \hat{y}} - (\partial_x h)\, {\bf \hat{x}}
\end{equation}
Collecting the zeroth order terms in Eq.~\eqref{eq:bcexpansion}, we have $n_\mu^{(0)} \tau^{(0)}_{\mu\nu} = 0$ and hence
\begin{equation}
\begin{aligned}
\label{eq:zeroth}
2\eta^\mathrm{R}\Omega - (\eta^\mathrm{sh} + \eta^\mathrm{R})\omega_s = 0\\
p_0 = \eta^\mathrm{H}_\mathrm{tot} \omega_s - 2\bar{\eta}^\mathrm{H} \Omega
\end{aligned}
\end{equation}
The first equation is satisfied by our expression Eq.~\eqref{eq:steadystate} for the zeroth order vorticity. The second tells us that with the steady state vorticity Eq.~\eqref{eq:steadystate} we are able to set the steady state pressure outside of the half plane to $p_0 = \left(\frac{\eta^\mathrm{H} \eta^\mathrm{R}}{\eta^\mathrm{dis}_\mathrm{tot}} -\frac{\bar{\eta}^\mathrm{H} \eta^\mathrm{sh}}{\eta^\mathrm{dis}_\mathrm{tot}}\right) (2\Omega)$.
At first order, we have that $\epsilon n_\mu^{(1)}\tau^{(0)}_{\mu\nu} + \epsilon n_\mu^{(0)} \tau^{(1)}_{\mu\nu} = 0$. Inserting Eqs.~\eqref{eq:normal} and \eqref{eq:rotatingstress}, this gives
\begin{equation}
\begin{aligned}
p_1 &= 2\eta^\mathrm{sh} \partial_y v_y - \eta^H (\partial_y v_x + \partial_x v_y) + \bar{\eta}^H \omega_1 + (\partial_x h)\left[\eta^\mathrm{dis}_\mathrm{diff} \omega_s + 2\Omega \eta^\mathrm{R} \right] - h \partial_y(p_0-\eta^\mathrm{H}_\mathrm{tot} \omega_s)\\
0 &= \eta^\mathrm{sh}(\partial_x v_y + \partial_y v_x) + \eta^H(\partial_y v_y - \partial_x v_x) - \eta^\mathrm{R} \omega_1 + (\partial_x h) \left[\eta^\mathrm{H}_\mathrm{diff} \omega_s + p_0 + 2 \bar{\eta}^\mathrm{H} \Omega\right] - h \eta^\mathrm{dis}_\mathrm{tot} \partial_y \omega_s
\end{aligned}
\end{equation}
We can apply the zeroth order boundary conditions to find
\begin{equation}
\begin{aligned}
\label{eq:dispfriction}
p_1 &= 2\eta^\mathrm{sh} \partial_y v_y - \eta^H (\partial_y v_x + \partial_x v_y) + \bar{\eta}^H \omega_1 +2 (\partial_x h)\eta^\mathrm{sh} \omega_s \\
0 &= \eta^\mathrm{sh}(\partial_x v_y + \partial_y v_x) + \eta^H(\partial_y v_y - \partial_x v_x) - \eta^\mathrm{R} \omega_1 + 2 (\partial_x h)\eta^\mathrm{H} \omega_s - h \eta^\mathrm{dis}_\mathrm{tot} \partial_y \omega_s
\end{aligned}
\end{equation}
where we have used the fact that, by the zeroth-order boundary conditions, $p_0-\eta^\mathrm{H}_\mathrm{tot}\omega_s$ is constant along the boundary.
The kinematic boundary condition in this case, where we have a zeroth order velocity, is given by
\begin{equation}
\label{eq:kbcfric}
\frac{dh}{dt}=\partial_t h + v_x^{(0)} \partial_x h = v_y(y=0,x,t)
\end{equation}
\subsection{Surface waves with $\Omega$}
We now continue on to consider surface waves with the time-reversal symmetry breaking coming from an internal rotation rate $\Omega$. The bulk vorticity equation is still
\begin{equation}
\partial_t \omega = \eta^\mathrm{dis}_\mathrm{tot} \nabla^2 \omega - \mu \omega
\end{equation}
We can write the overall vorticity as a sum of the steady state contribution, which we just considered, and a contribution first-order in the amplitude of surface waves
\begin{equation}
\omega = \omega_s + \omega_1(x,y,t)
\end{equation}
To consider the first-order contribution to the vorticity, we again introduce velocity potentials that parameterize our surface wave Eq.~\eqref{eq:waveansatz}. The ansatz for the first order vorticity is then equivalent to Eq.~\eqref{eq:hvp} and is given by
\begin{equation}
\omega_1 = e^{ikx - i\Xi t} (k^2-m^2) C e^{my}
\end{equation}
This satisfies the bulk equation of motion to linear order in the perturbative parameter
\begin{equation}
\partial_t \omega_1 = (\eta^\mathrm{dis}_\mathrm{tot} \nabla^2 - \mu) \omega_1
\end{equation}
This leads to the modified condition
\begin{equation}
\begin{aligned}
\label{eq:conditionfric}
\Xi = i\eta^\mathrm{dis}_\mathrm{tot}(m^2 - k^2) - i\mu
\end{aligned}
\end{equation}
Our proposed form for the first order velocities and vorticities in Eq.~\eqref{eq:hvp} still holds. The bulk equations of motion mandate that the modified pressure now takes the form
\begin{equation}\label{eq:fricp}
\tilde{p}=p_1-\eta^\mathrm{H}_\mathrm{tot}\omega_1 - \mu \phi
\end{equation}
which differs from Eq.~\eqref{eq:hvp} by the addition of $-\mu\phi$, where $\phi$ is the velocity potential of Eq.~\eqref{eq:waveansatz}. Additionally, the modified kinematic boundary condition Eq.~\eqref{eq:kbcfric} implies that the height $h(x,t)$ now takes the form
\begin{equation}
h(x,t) = \frac{v_y(y=0,x,t)}{-i\Xi(k) + ik v_x^{(0)}}\label{eq:frich}
\end{equation}
Now revisiting the first order boundary conditions Eq.~\eqref{eq:dispfriction}, we can substitute in our ansatz Eqs.~\eqref{eq:waveansatz}, \eqref{eq:fricp}, and \eqref{eq:frich} for the velocities, modified pressure, and height, respectively. The normal boundary condition in terms of surface wave parameters becomes
\begin{equation}
\begin{aligned}
\label{eq:const1omega}
A\left[\Xi(k v_x^{(0)}- \Xi) + gk + i\mu (k v_x^{(0)}-\Xi) + 2k^2 (k v_x^{(0)} - \Xi) + 2k^2 (k v_x^{(0)} - \Xi)(\eta^\mathrm{H}+i\eta^\mathrm{sh} + 2i\eta^\mathrm{sh}) \omega_s k^2 \right] \\
+ C \left[gk + 2k(\eta^\mathrm{H} k + i\eta^\mathrm{sh} m) (k v_x^{(0)}- \Xi) + 2i \eta^\mathrm{sh} \omega_s k^2 \right] = 0
\end{aligned}
\end{equation}
The tangential boundary condition becomes
\begin{equation}
\begin{aligned}
\label{eq:const2omega}
A\left[2(kv_x^{(0)} - \Xi)k^2(\eta^\mathrm{H} + i\eta^\mathrm{sh}) + 2k^2\eta^\mathrm{H} \omega_s + \eta^\mathrm{dis}_\mathrm{tot} k \partial_y \omega_s \right] \\+ C\left[(\Xi - i\mu)(kv_x^{(0)}-\Xi) + 2(kv_x^{(0)}-\Xi)k(\eta^\mathrm{H} m + i \eta^\mathrm{sh} k) + 2k^2\eta^\mathrm{H} \omega_s + \eta^\mathrm{dis}_\mathrm{tot} k\partial_y \omega_s\right]=0
\end{aligned}
\end{equation}
\begin{comment}
Keeping terms only to first order in the perturbed height, we arrive at the consistency condition\bbnote{Is this eqn correct now?}
\begin{equation}\label{eq:omegaconsistency}
\begin{aligned}
\frac{\Xi(k v_x^{(0)}- \Xi) + gk + i\mu (k v_x^{(0)}-\Xi) + 2k^2 (k v_x^{(0)} - \Xi) + 2k^2 (k v_x^{(0)} - \Xi)(\eta^\mathrm{H}+i\eta^\mathrm{sh} + 2i\eta^\mathrm{sh}) \omega_s k^2}{2(kv_x^{(0)} - \Xi)k^2(\eta^\mathrm{H} + i\eta^\mathrm{sh}) + 2k^2\eta^\mathrm{H} \omega_s + \eta^\mathrm{dis}_\mathrm{tot} k \partial_y \omega_s} \\= \frac{gk + 2k(\eta^\mathrm{H} k + i\eta^\mathrm{sh} m) (k v_x^{(0)}- \Xi) + 2i \eta^\mathrm{sh} \omega_s k^2}{(\Xi - i\mu)(kv_x^{(0)}-\Xi) + 2(kv_x^{(0)}-\Xi)k(\eta^\mathrm{H} m + i \eta^\mathrm{sh} k) + 2k^2\eta^\mathrm{H} \omega_s + \eta^\mathrm{dis}_\mathrm{tot} k\partial_y \omega_s}
\end{aligned}
\end{equation}
\end{comment}
Eqs.~\eqref{eq:const1omega} and \eqref{eq:const2omega} above represent our consistency conditions for the wave setup with $\Omega$ and $\mu$. To solve the consistency conditions, we can combine Eqs.~\eqref{eq:const1omega} and \eqref{eq:const2omega} with Eq.~\eqref{eq:conditionfric} to find three nontrivial solutions for $m(k)$ that can have $\mathrm{Re}(m)>0$. Due to the complicated nature of the consistency condition, to make progress we will focus analytically on three cases. First, we will consider surface waves in the limit of long wavelength $k\delta\ll 1$ and zero gravity. Second, we will keep $k\delta\ll 1$ and introduce gravity as a small perturbation $g\delta\ll\eta_\mathrm{tot}^\mathrm{dis}\Omega$. Third, we will consider the large gravity limit.
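Schematically, the modes can also be found numerically by viewing Eqs.~\eqref{eq:const1omega} and \eqref{eq:const2omega} as a homogeneous linear system for $(A,C)$, eliminating $\Xi$ through Eq.~\eqref{eq:conditionfric}, and locating the zeros of the resulting determinant in the complex $m$ plane. The sketch below (not code from this work) illustrates the procedure; the callables \texttt{aA}, \texttt{aC}, \texttt{bA}, \texttt{bC} are placeholders for the bracketed coefficients of the two boundary conditions and must be supplied by the user.
\begin{verbatim}
import numpy as np
from scipy.optimize import root

def xi_of_m(m, k, eta_dis, mu):
    return 1j * eta_dis * (m**2 - k**2) - 1j * mu  # Eq. (conditionfric)

def det_M(m, k, eta_dis, mu, aA, aC, bA, bC):
    # aA, aC: coefficients of A and C in the normal condition, Eq. (const1omega);
    # bA, bC: coefficients of A and C in the tangential condition, Eq. (const2omega).
    Xi = xi_of_m(m, k, eta_dis, mu)
    return aA(k, m, Xi) * bC(k, m, Xi) - aC(k, m, Xi) * bA(k, m, Xi)

def find_mode(k, m_guess, eta_dis, mu, coeffs):
    # Root-find det M(m) = 0 treating Re(m) and Im(m) as two real unknowns.
    f = lambda v: [det_M(v[0] + 1j * v[1], k, eta_dis, mu, *coeffs).real,
                   det_M(v[0] + 1j * v[1], k, eta_dis, mu, *coeffs).imag]
    sol = root(f, [m_guess.real, m_guess.imag])
    m = sol.x[0] + 1j * sol.x[1]
    return m, xi_of_m(m, k, eta_dis, mu)  # keep only solutions with Re(m) > 0
\end{verbatim}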
\subsubsection{$g=0$}
We first consider the case without gravity, which was the setup in Ref.~\cite{soni2019odd}. In this case, in the long wavelength limit $k\delta \ll 1$, there are two modes which always decay into the bulk. The first is, to third order,
\begin{equation}
\begin{aligned}
\Xi_{1,g=0} =2(i\eta^\mathrm{H} - \eta^\mathrm{sh}) \frac{2\Omega \delta \eta^\mathrm{R}}{\mu \eta^\mathrm{dis}_\mathrm{tot}} k^3 + \mathcal{O}[(k\delta)^{5/2}]
\end{aligned}
\end{equation}
This mode matches exactly that found in the corresponding long wavelength limit in Ref.~\cite{soni2019odd}, despite the presence of the additional Hall viscosity $\bar{\eta}^\mathrm{H}$ \footnote{Our rotational viscosity has an opposite sign by definition and we define viscous force as the divergence on the first index of the stress tensor rather than the second, leading to additional sign differences, however our results are substantively compatible}. It leads directly to the stability condition $\text{sign}(\eta^\mathrm{H} \eta^\mathrm{R} \Omega)<0$ in order for perturbations to decay in time.
Additionally, there is always an overdamped excitation with dispersion given by
\begin{equation}
\Xi_{2,g=0} = -i\mu - \frac{2\eta^\mathrm{R} \Omega}{\eta^\mathrm{dis}_\mathrm{tot}} k\delta + e^{i\pi/4} (\eta^\mathrm{H} + i\eta^\mathrm{sh}) \sqrt{\frac{2\Omega \eta^\mathrm{R}}{\mu (\eta^\mathrm{dis}_\mathrm{tot})^{3/2}}} k^{3/2} +\mathcal{O}[(k\delta)^2]
\end{equation}
This solution is effectively dominated by damping due to the friction term in the limit $k\delta \ll 1$. We will see below, however, that for nonzero $g$ this mode is essential to recovering the second branch of our Lamb wave solutions Eq.~\eqref{eq:disp}.
Finally, there is a third nontrivial solution that can decay into the bulk. It corresponds to the solution
\begin{equation}
m_{3,g=0}(k)=\frac{k\eta_\mathrm{diff}^\mathrm{dis}}{\eta_\mathrm{tot}^\mathrm{dis}},
\end{equation}
which decays into the bulk whenever $\eta^\mathrm{R}\le \eta^\mathrm{sh}$. The dispersion relation is
\begin{equation}
\Xi_{3,g=0}(k)=-i\mu-4i\frac{\eta^\mathrm{R}\eta^\mathrm{sh}k^2}{\eta_\mathrm{tot}^\mathrm{dis}}+\mathcal{O}[(k\delta)^3]
\end{equation}
This mode is overdamped and almost completely stationary at small $k\delta$. We will see below that this mode is always unphysical for $g$ large enough (or equivalently, for $\mu$ small enough).
\subsubsection{Small gravity case}
We now consider the case where gravity is small, again in the long wavelength limit $k\delta \ll 1$. For the two main physical modes, we find that the effect of gravity is, to lowest order, to introduce a linear-in-$k$ correction to the damping rate, given by
\begin{equation}
\begin{aligned}
\Xi_{1g}(k) &= \Xi_{1,g=0}(k) - \frac{ig k \delta}{\sqrt{\eta^\mathrm{dis}_\mathrm{tot} \mu}} + ... \\
\Xi_{2g}(k) &=\Xi_{2,g=0}(k) + \frac{ig k \delta}{\sqrt{\eta^\mathrm{dis}_\mathrm{tot} \mu}} + ...
\end{aligned}
\end{equation}
The effect of gravity is more drastic on the $\Xi_3$ mode. First, we find that to linear order in $g$, $m_3(k)$ is given by
\begin{equation}
m_3(g) = \frac{k}{\eta^\mathrm{dis}_\mathrm{tot}}\left({\eta_\mathrm{diff}^\mathrm{dis}} + \frac{\eta^\mathrm{H}\delta g}{\eta^\mathrm{R}\Omega}\right)
\end{equation}
Stability of the fluid requires the second term to be strictly negative. This implies that the $\Xi_3$ mode will become unphysical even for small $g$, provided $\eta^\mathrm{H}$ and $1/\eta^\mathrm{R}$ are large enough. As such, we will neglect the $\Xi_3$ mode in what follows.
\subsubsection{Gravity $g\neq 0$ case}\label{sec:gneq0}
To examine the surface waves for general $g$ and $k$, let us first return to the consistency conditions Eqs.~\eqref{eq:const1omega} and \eqref{eq:const2omega}. Note that for $\omega_s,\mu\rightarrow 0$, this reproduces exactly the consistency equation we obtained for gravity-dominated Lamb waves in Eq.~\eqref{eq:constraint}. We thus expect that when $g\delta \gg \eta^\mathrm{dis}_\mathrm{tot}\Omega$, we should recover the two branches of the modified Lamb wave dispersion Eq.~\eqref{eq:disp}. We examine the two modes $\Xi_{1g}(k)$ and $\Xi_{2g}(k)$ in the limit of large $g\delta/\eta^\mathrm{dis}_\mathrm{tot}\Omega$. We expect that $\Xi_{1g} \sim -\sqrt{gk}$ and $\Xi_{2g} \sim \sqrt{gk}$ as $\Omega \rightarrow 0$. To see how this occurs, we show in Fig.~\ref{fig:dispersion} the real and imaginary parts of $\Xi_{1,2}$ for generic values $\eta^\mathrm{sh}=0.1,\eta^\mathrm{R}=0.5,\eta^\mathrm{H}=0.3, \mu=1,\omega_s=-1$ with $g=10$. We see in Fig.~\ref{fig:dispersion}(a) that for $\mathrm{Re}(\Xi)$ there is a crossover from nearly stationary behavior at small $k$ to a dispersion consistent with $\mathrm{Re}(\Xi)\sim\pm\sqrt{gk}$ at larger $k$. In Fig.~\ref{fig:dispersion}(b) we see that the damping rate $\mathrm{Im}(\Xi)$ for the two modes depends linearly on $k$ for small $k$, and the two rates are approximately equal at larger $k$, varying as $\mathcal{O}(k^2)$. Expanding $\Xi_{1g}$ and $\Xi_{2g}$ to lowest order in $k\delta$ captures the behavior of the dissipation at small $k$, yielding
\begin{equation}
\begin{aligned}
\Xi_{1g}(k) &= -\frac{igk}{\mu} + ... \\
\Xi_{2g}(k) &= -i\mu + \frac{igk}{\mu} - \frac{2\eta^\mathrm{R} \Omega}{\eta^\mathrm{dis}_\mathrm{tot}} k\delta + ...
\end{aligned}
\end{equation}
\begin{figure}[ht]
\subfloat[]{
\includegraphics[width=0.4\textwidth]{dispg10.pdf}
}
\quad
\subfloat[]{
\includegraphics[width=0.4\textwidth]{dampg10.pdf}
}
\caption{Dispersion (a) and Damping (b) for the modes $\Xi_{g1}$ and $\Xi_{g2}$ with $\eta^\mathrm{sh}=0.1,\eta^\mathrm{R}=0.5,\eta^\mathrm{H}=0.3, \mu=1,\omega_s=-1$ and $g=10$. There is a crossover from friction-dominated behavior at $k\delta\lesssim 0.025$ to Lamb wave-like behavior at $k\delta \gtrsim 0.025$.}\label{fig:dispersion}
\end{figure}
Next, we can analyze the dispersion asymptotically for large $g$. First, note that when both the dissipative and Hall viscosities are zero, the flow is pure potential flow (as in the case $\Omega=0$). In this limit, we find the viscosity-free dispersion relation
\begin{equation}
\Xi_{0} = -\frac{i\mu}{2}\pm\frac{1}{2}\sqrt{4gk-\mu^2},\label{eq:xi0friction}
\end{equation}
This describes propagating damped waves for $k$ greater than the threshold wavevector $k_*=\mu^2/(4g)$, and overdamped stationary waves for $k<k_*$. In analogy with Sec.~\ref{sec:lambdispersiongamma}, we can compute the dispersion perturbatively for small $\beta=\sqrt{\eta_\mathrm{tot}^\mathrm{dis}k^2}/(gk)^{1/4}$, which corresponds to a large-$g$ expansion. In full analogy with our modified Lamb waves of Sec.~\ref{sec:lambdispersiongamma}, we find
\begin{equation}
\Xi_{g\rightarrow\infty} = \pm\sqrt{gk} -\frac{i\mu}{2} - 2k^2(\eta^\mathrm{H}+i\eta^\mathrm{sh}) -\frac{1}{2}k\delta\omega_s.
\end{equation}
The first two terms correspond to the first two terms in the Taylor expansion of $\Xi_0$ in Eq.~\eqref{eq:xi0friction} for large $g$. The third term is identical to the modification to the Lamb wave dispersion found in Sec.~\ref{sec:lambdispersiongamma}. Finally, the last term gives the correction to the dispersion due to the nonzero angular velocity $\Omega$ of the fluid particles. This matches with our observations in Fig.~\ref{fig:dispersion}.
\begin{figure}
\centering
\includegraphics[scale=.5]{damping_g_01_12_100_over_ten.pdf}
\caption{Corresponding damping $\Im(\Xi)(k)$ for surface waves with gravity with time reversal breaking from a local rotation rate $\Omega$ to accompany Figure 2 in the main text. The red plot has $g=10$, the blue plot has $g=1$ and the orange has $g=1.2$. The other parameters are fixed at $\eta^\mathrm{sh}=0.1,\eta^\mathrm{R}=0.5,\eta^\mathrm{H}=0.3$ and $\mu=1$.}
\label{fig:dissipation}
\end{figure}
Lastly, in Fig.~\ref{fig:dissipation} we show the imaginary part of $\Xi_{1,2g}$ for the three different values of $g$ discussed in the main text. We see that for small $k$, the damping rate for $\Xi_{1g}$ always goes to zero, while the damping rate for $\Xi_{2g}$ always goes to $\mu$.
\label{sec:introduction}
In the field of human computer interaction, an important metric is the delay between human action and computer reaction.
Quoting Jakob Nielsen \cite{nielsen1994usability,nielsenusabilitysite1993}:
\begin{itemize}
\item 0.1 seconds is about the limit for having the user feel that the system is reacting instantaneously [...]
\item 1.0 second is about the limit for the user's flow of thought to stay uninterrupted [...]
\item 10 seconds is about the limit for keeping the user's attention [...]
\end{itemize}
I bring this up because the underlying motivation driving this paper is that I want to tinker with huge stabilizer circuits, and I want to stand in the 0.1s bucket while I do it.
For example, consider the single level 15-to-1 surface code T state factory shown in figure 4 of \cite{gidney2019catalyzeddistillation}.
It corresponds to a circuit with millions of gates spanning 15 thousand qubits.
What would it take to simulate {\em that} in 0.1 seconds, with no built-in preconceptions about the surface code, while a user makes iterative changes that break and restore the functionality of the circuit?
There are several obstacles that make this difficult.
\begin{figure}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{assets/bench-surface-1000.png}
}
\caption{
Delay until one thousandth sample on $d \times d \times d$ surface code memory circuits \cite{horsman2012latticesurgery}.
The intention of this benchmark is to highlight simulators that can perform some sort of compilation or analysis that lowers the cost of taking multiple samples.
Stim does particularly well on this benchmark, with timings almost identical to its time-to-first-sample.
This is because it uses its first sample as a reference for batches of hundreds or thousands of Pauli frame simulations that are updated in parallel using SIMD instructions.
The per-sample performance of Qiskit's simulator also improves when collecting more than one sample.
}
\label{fig:bench-surface-1000}
\end{figure}
First, if a circuit has a million gates, and your computer's clock speed is 2 gigahertz, then the simulator has a budget of 200 clock cycles per gate.
That's a tight budget.
Second, describing an arbitrary $n$-qubit stabilizer state takes at least $\frac{1}{2} n^2$ bits \cite{gross2006hudson,karanjai2018contextuality,howmanystabilizers2021}.
This means that, in the worst case, a general stabilizer simulator that works in an online fashion will need megabytes of information to track the state of a 15000 qubit system.
Storing this amount of information is not a problem.
The problem is repeatedly touching that information without blowing the budget on cache misses.
Third, typically, simulating a Clifford gate has complexity $\Omega(n)$ where $n$ is the number of qubits in the circuit.
Given the aforementioned cycle budget, the constant factor hiding behind that $\Omega$ has to be roughly 0.01 cycles.
Fourth, the surface code is chock full of measurements and, basically, historically, the cost of measurements in stabilizer simulators started at $\Theta(n^3)$ \cite{gottesman1997stabilizerformalism}, improved to $\Theta(n^2)$~\cite{aaronson2004chp}, and then stayed there.
Aaronson and Gottesman's~\cite{aaronson2004chp} CHP simulator?
Measurement takes $\Theta(n^2)$ time in the worst case.
Anders and Briegel's~\cite{anders2006fastgraphsim} graph state simulator?
Measurement takes $\Theta(n^2)$ time in the worst case.
Bravyi et al's~\cite{bravyi2019simulation} CH form simulator?
Measurement takes $\Theta(n^2)$ time in the worst case.
We can't afford $\Theta(n^2)$ time.
Fifth, the circuit I've been talking about isn't even technically a stabilizer circuit.
It contains magic state injections.
This obstacle has actually been the subject of a lot of recent work \cite{bravyi2019simulation,bu2019efficient,huang2020feynman,huang2019approximate}.
I'll be completely ignoring it in this paper.
Sixth, when simulating an error correcting circuit, it's common to sample the circuit millions or even trillions of times.
For example, going to large code distances can allow a lone half distance error mechanism to shine through the background haze of full distance errors.
But logical errors become rare at large code distances, and so many samples have to be taken to see them.
Based on discussions I've had with co-workers who have done these sorts of large scale simulations \cite{conversationmikeasutin}, the main bottleneck is not decoding the errors but rather (surprisingly) generating the samples.
So it's not just the latency until the first sample that matters; the sustained bulk sampling rate is also important.
With all those obstacles listed I should note that, of course, by using hardcoded knowledge about magic state distillation and the surface code, it's possible to make a specialized simulator that could solve my example case much faster than a general stabilizer circuit simulator.
But my underlying motivation is one of exploration, and I would strongly prefer to create a tool whose performance and correctness doesn't require understanding the behavior of places I haven't been yet.
So, in this paper, I present my failed attempt to reach my ridiculous goal.
The result is a software tool I've named "Stim".
Although Stim can't simulate a 15 thousand qubit surface code circuit with millions of gates in under a tenth of a second, it can at least get that job done in roughly ten seconds.
Then, with that initial sample acquired, it can begin producing batches of hundreds of samples in a tenth of the time.
Stim makes three key improvements over previous stabilizer simulators.
These improvements address obstacle 4 (the measurement obstacle), obstacle 3 (the constant factor obstacle), and obstacle 6 (the mega sampling obstacle).
The measurement obstacle is addressed by making an algorithmic improvement that decreases the time complexity of {\em deterministic} measurements from quadratic to linear.
This is beneficial in contexts, such as the surface code, where a circuit has many measurements but those measurements are almost entirely deterministic.
The constant factor obstacle is addressed by using SIMD (Same Instruction Multiple Data) instructions in tight loops iterating over contiguous memory.
Using SIMD is not a novel concept (e.g. Aaronson et al's CHP simulator \cite{aaronson2004chp} packs the bits from Pauli terms into bytes, which allows 8 to be operated on by each instruction) but it's technically challenging to implement well and Stim does it at larger register sizes than previous work.
Specifically, Stim uses 256 bit wide Advanced Vector Extensions (AVX) \cite{wiki:Advanced_Vector_Extensions} and also I spent a lot of effort minimizing the number of operations in key loops.
For example, a core operation for most gates is multiplying together strings of Pauli terms, which requires accumulating scalar factors from individual Pauli multiplications.
I found a way to do this using only 11 bitwise operations per group of Paulis (see \fig{pauli_mult_code}).
Modern Intel CPUs can apply 3 bitwise AVX operations per clock cycle \cite{intel-intrinsics-and}, and 256 is much larger than 11, suggesting that a budget of 0.01 cycles per qubit during a Clifford gate isn't quite as ridiculous as it seems.
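To make the bookkeeping concrete, here is a straightforward (and deliberately unoptimized) sketch of multiplying two Pauli strings, written in the xz encoding described in \sec{concepts}, while accumulating the power of $i$ picked up term by term. It uses the standard per-term phase function rather than Stim's 11-operation formulation, and the function name is mine.
\begin{verbatim}
import numpy as np

def pauli_string_multiply(x1, z1, x2, z2):
    """Multiply xz-encoded Pauli strings; returns (power of i mod 4, x3, z3)."""
    x1, z1, x2, z2 = (np.asarray(a, dtype=bool) for a in (x1, z1, x2, z2))
    g = np.zeros(len(x1), dtype=np.int64)  # per-term contribution in {-1, 0, +1}
    g += np.where(x1 & z1, z2.astype(int) - x2.astype(int), 0)             # Y terms
    g += np.where(x1 & ~z1, z2.astype(int) * (2 * x2.astype(int) - 1), 0)  # X terms
    g += np.where(~x1 & z1, x2.astype(int) * (1 - 2 * z2.astype(int)), 0)  # Z terms
    return int(g.sum()) % 4, x1 ^ x2, z1 ^ z2

# Example: (X1 Y2) * (Z1 Z2) = (XZ) x (YZ) = (-iY) x (iX) = Y1 X2 with no net phase.
phase, x3, z3 = pauli_string_multiply([1, 1], [0, 1], [0, 0], [1, 1])
assert phase == 0 and list(x3) == [True, True] and list(z3) == [True, False]
\end{verbatim}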
The mega sampling obstacle is addressed by including a Pauli frame simulator that uses SIMD instructions to operate on the state of hundreds of simulations in parallel.
This amortizes the cost of cache misses and branch mispredictions and other performance killers.
This is a simple enough idea, but it gives Stim an enormous advantage over previous work when bulk sampling (see \fig{bench-surface-1000}).
This paper is divided into sections.
In \sec{concepts}, I explain key background concepts that are needed in order to understand the simulation strategies being described.
Then in \sec{framesim} and \sec{tableausim} I describe the Pauli frame simulator and stabilizer tableau simulators implemented by Stim.
\sec{engineering} describes several of the software engineering details that went into Stim, such as why I decided to use a dense representation instead of a sparse representation and how I went about testing that operations were implemented correctly.
Stim's performance is compared to other stabilizer simulators in \sec{compare}.
Finally, \sec{conclusion} wraps everything up.
There are also a few short usage examples in \app{example}.
\section{Key Concepts}
\label{sec:concepts}
\subsection{Pauli Products}
There are four Pauli gates: the identity $I = \begin{bmatrix} 1&0\\0&1\end{bmatrix}$, the bit flip $X = \begin{bmatrix} 0&1\\1&0\end{bmatrix}$, the phase flip $Z = \begin{bmatrix} 1&0\\0&-1\end{bmatrix}$, and the other one $Y = \begin{bmatrix} 0&-i\\i&0\end{bmatrix}$.
Sometimes the identity gate is not considered to be a Pauli gate, but in this paper it is.
A Pauli product is a set of Pauli gates applied to qubits.
For example, $X_1 \cdot Y_2 \cdot Z_3$ is a Pauli product that applies $X$ to qubit 1, $Y$ to qubit 2, $Z$ to qubit 3, and $I$ to everything else.
The set of all Pauli products form a group under matrix multiplication.
This group is called the Pauli group.
Paulis can be encoded in a variety of ways.
One encoding, which I'll be calling the ``xz encoding", represents a Pauli by decomposing it into an $x$ bit representing the presence of an $X$ gate, a $z$ bit representing the presence of a $Z$ gate, and an ignored global phase:
$$\texttt{encode\_xz}(P) = (x, z) = (P \notin \{I, Z\}, P \notin \{I, X\}) = \texttt{encode\_xz}(X^x Z^z)$$
Meaning:
$$\texttt{encode\_xz}(I) = (0, 0)$$
$$\texttt{encode\_xz}(X) = (1, 0)$$
$$\texttt{encode\_xz}(Y) = (1, 1)$$
$$\texttt{encode\_xz}(Z) = (0, 1)$$
In the xz encoding (and most others), two Paulis can be multiplied up to global phase by xoring their encoded representations:
$$\texttt{encode\_xz}(P_1 P_2) = \texttt{encode\_xz}(P_1) \oplus \texttt{encode\_xz}(P_2) = (x_1 \oplus x_2, z_1 \oplus z_2)$$
In Stim, Pauli products are represented by the \path{stim::PauliString} class.
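As a tiny illustration (my own sketch, not Stim's internals), the encoding and phase-free multiplication look like this:
\begin{verbatim}
PAULIS = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}  # encode_xz
NAMES = {xz: name for name, xz in PAULIS.items()}

def multiply_up_to_phase(p1, p2):
    (x1, z1), (x2, z2) = PAULIS[p1], PAULIS[p2]
    return NAMES[(x1 ^ x2, z1 ^ z2)]

assert multiply_up_to_phase("X", "Z") == "Y"  # XZ = -iY, i.e. Y up to global phase
assert multiply_up_to_phase("Y", "Y") == "I"
\end{verbatim}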
\subsection{Stabilizer Generators}
When a quantum state is in the +1 eigenstate of an operation, that operation is said to be a {\em stabilizer} of the state.
For example: $+Z$ is a stabilizer of the computational basis state $|0\rangle$, $+Z$ is {\em not} a stabilizer of the computational basis state $|1\rangle$, $+X$ is a stabilizer of the superposed state $|0\rangle + |1\rangle$, and $-YY$ is a stabilizer of the entangled state $|00\rangle + |11\rangle$.
Stabilizer operations that are Pauli products are of particular interest, as they form the basis of the {\em stabilizer formalism} \cite{gottesman1997stabilizerformalism} (easily one of the most important and foundational ideas in quantum error correction).
In this paper, whenever I say "stabilizer" I always mean "Pauli product stabilizer".
The stabilizers of a state form a group under multiplication.
Let $A$ and $B$ be stabilizers of a state $|\psi\rangle$, meaning $A |\psi\rangle = |\psi\rangle = B |\psi\rangle$.
$AB$ is also a stabilizer of $|\psi\rangle$, because $A B|\psi\rangle = A |\psi\rangle = |\psi\rangle$.
The stabilizers of a state commute.
Note that $A B|\psi\rangle = B A|\psi\rangle$, so $0 = (AB - BA) |\psi\rangle = [A, B] |\psi\rangle$.
The commutator of $A$ and $B$ must project $|\psi\rangle$ to the zero vector.
The commutators of Pauli products are always scalar factors of Pauli products, where only the scalar factor can affect the length of $|\psi\rangle$, so it must be the case that the scalar factor is zero, meaning $[A, B] = 0$ (i.e. $A$ and $B$ commute).
If you start with the set of all stabilizers of a state, and iteratively remove stabilizers that can be formed by multiplying together other stabilizers, you are left with a small set of stabilizers referred to as {\em generators} of the state's stabilizer group.
For example, $|000\rangle$ has seven stabilizers (not counting the vacuous identity stabilizer) and these stabilizers can be generated by $ZII$, $IZI$, and $IIZ$.
An $n$ qubit state will have at most $n$ stabilizer generators.
An $n$ qubit state with exactly $n$ stabilizer generators is a {\em stabilizer state}.
\subsection{Stabilizer Tableaus}
A {\em Clifford operation} is a unitary quantum operation that conjugates Pauli products into Pauli products.
$C$ is Clifford if, for all Pauli products $P$, it is the case that $C^\dagger P C$ is also a Pauli product.
In fact, a Clifford operation can be uniquely identified (up to global phase) by how it conjugates Pauli products.
A stabilizer tableau is a representation of a Clifford operation that simply directly stores how the Clifford operation conjugates each generator of the Pauli group.
In this paper, I use the generators $X_q$ and $Z_q$ for each qubit $q$ that the operation touches.
For example, here is a stabilizer tableau for the Controlled Y gate $C_Y$:
$$
\text{tableau}(C_Y) = \begin{array}{r|cc|cc}
& X_1 & Z_1 & X_2 & Z_2 \\
\hline
\pm & + & + & + & + \\
1 & X & Z & Z & Z \\
2 & Y & & X & Z \\
\end{array}
$$
Each column describes how $C_Y$ conjugates one of the four generators of the two qubit Pauli group.
The column labelled $X_1$ states that $C_Y$ conjugates $X_1$ into $+X_1 Y_2$, i.e. that $C_Y^{-1} X_1 C_Y = X_1 Y_2$.
Any qubit not mentioned by a tableau is unaffected by that tableau.
For example, if $X_q$ does not appear in the tableau then it is understood that the tableau's operation conjugates $X_q$ into $X_q$.
In order for a stabilizer tableau to be valid, i.e. to represent a Clifford operation, it must preserve commutativity and anticommutativity.
The Pauli products in its columns must commute or anticommute in the same way that its generators do.
The column for $X_a$ must commute with the column for $X_b$ and the column for $Z_b$, but must anticommute with the column for $Z_a$.
Also, a valid tableau doesn't have missing columns.
If there is a row for qubit $q$, there must be columns for the generators $X_q$ and $Z_q$.
Storing an $n$-qubit stabilizer tableau uses $4n^2 + O(n)$ bits.
There are $2n$ generator outputs to store, each output has $n$ Pauli terms, and each Pauli term is xz-encoded into two bits.
In Stim, stabilizer tableaus are represented by the class \path{stim::Tableau}.
Storing the stabilizer tableau is Stim's dominant space cost.
All other space costs are linear in the number of qubits.
\subsubsection{Conjugating a Pauli Product by a Tableau}
To conjugate a Pauli product observable by the Clifford operation represented by a stabilizer tableau, start by decomposing the Pauli product into the generators listed in the tableau.
Then use the fact that conjugation distributes over matrix multiplication to conjugate each of the generators, and re-assemble the resulting Pauli product.
For example, suppose we want to apply the Controlled-Y tableau to the Pauli product $X_1 Y_2$.
We start by decomposing $X_1 Y_2$ into its generators $i (X_1) (X_2) (Z_2$).
We then look up each generator in the table, replacing the generator with the result of the lookup.
This produces the result $i (X_1 Y_2) (Z_1 X_1) (Z_1 Z_2)$.
We multiply these Pauli products together to get the result $X_1$.
We conclude that $C_Y^{-1} X_1 Y_2 C_Y = X_1$.
When a Pauli product involves qubits that do not appear in the stabilizer tableau, and we want to conjugate the Pauli product inplace, those additional qubits do not need to be touched.
If the tableau covers $m$ qubits, and the Pauli product has $c$ qubits in common with the tableau, then each of the $O(c)$ table lookups has cost $O(m)$ (including multiplying together terms at the end).
Therefore, the complexity of inplace conjugation of an $n$ qubit Pauli product by an $m$ qubit stabilizer tableau with $c$ qubits in common is $O(mc)$ with no dependence on $n$ (this assumes that it is not necessary to determine which qubits are in common).
Note that $O(mc) \subseteq O(m^2)$.
In Stim, the method \path{stim::Tableau::apply_within} performs inplace conjugation of a Pauli product by a tableau.
Stim also supports out of place conjugation via the \path{stim::Tableau::operator()} operator.
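To make the procedure concrete, here is a deliberately slow, dictionary-based sketch of the lookup-and-multiply algorithm described above (my own toy code, not Stim's implementation). A Pauli product is represented as a pair (power of $i$, map from qubit to Pauli letter), and a tableau maps each generator to its stored output.
\begin{verbatim}
MUL = {  # single-qubit products: (P1, P2) -> (power of i, P1*P2)
    ('I','I'):(0,'I'), ('I','X'):(0,'X'), ('I','Y'):(0,'Y'), ('I','Z'):(0,'Z'),
    ('X','I'):(0,'X'), ('X','X'):(0,'I'), ('X','Y'):(1,'Z'), ('X','Z'):(3,'Y'),
    ('Y','I'):(0,'Y'), ('Y','X'):(3,'Z'), ('Y','Y'):(0,'I'), ('Y','Z'):(1,'X'),
    ('Z','I'):(0,'Z'), ('Z','X'):(1,'Y'), ('Z','Y'):(3,'X'), ('Z','Z'):(0,'I'),
}

def multiply(p1, p2):
    w, terms = p1[0] + p2[0], dict(p1[1])
    for q, g in p2[1].items():
        dw, prod = MUL[(terms.get(q, 'I'), g)]
        w += dw
        if prod == 'I':
            terms.pop(q, None)
        else:
            terms[q] = prod
    return w % 4, terms

def conjugate(tableau, pauli_product):
    # Decompose into generators (Y = i*X*Z), replace each generator by its image,
    # and multiply the images back together.  Unlisted qubits are left untouched.
    result = (pauli_product[0], {})
    for q, g in sorted(pauli_product[1].items()):
        if g in ('X', 'Y'):
            result = multiply(result, tableau.get(('X', q), (0, {q: 'X'})))
        if g in ('Z', 'Y'):
            result = multiply(result, tableau.get(('Z', q), (0, {q: 'Z'})))
        if g == 'Y':
            result = ((result[0] + 1) % 4, result[1])
    return result

# The worked example above: conjugating X1*Y2 by the Controlled-Y tableau yields +X1.
CY = {('X', 1): (0, {1: 'X', 2: 'Y'}), ('Z', 1): (0, {1: 'Z'}),
      ('X', 2): (0, {1: 'Z', 2: 'X'}), ('Z', 2): (0, {1: 'Z', 2: 'Z'})}
assert conjugate(CY, (0, {1: 'X', 2: 'Y'})) == (0, {1: 'X'})
\end{verbatim}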
\subsubsection{Composing Tableaus}
Given an $N$-qubit stabilizer tableau $A$ and an $m$-qubit stabilizer tableau $B$ where $m \leq N$, we may want to append or prepend $B$ into $A$.
Inplace appending $B$ into $A$, i.e. performing the mutation $A \rightarrow B \circ A$, is done by inplace conjugating each of $A$'s columns with $B$.
Since there are $N$ columns, and inplace conjugation by an $m$ qubit tableau takes $O(m^2)$ time, inplace appending takes $O(N m^2)$ time.
Inplace prepending $B$ into $A$ is done by computing the result of conjugating each of $B$'s generators' outputs by $A$.
For each generator $g_q$ in $B$, we compute $g_q^\prime = A^{-1} B^{-1} g_q B A$.
After computing all of the $g_q^\prime$ values, we write $g_q^\prime$ into $A$ under $g_q$ for each $g_q$ from $B$.
There are $m$ generators to conjugate by $B$ and then by $A$.
The cost of conjugating by $B$ is $O(m)$, since the generator has 1 qubit in common with $B$.
The cost of then conjugating by $A$ is $O(Nm)$, since the output from $B$ has at most $m$ qubits in common with $A$.
Therefore the complexity of inplace prepending is the same as inplace appending: $O(N m^2)$.
In Stim, inplace composition is implemented by the \path{stim::Tableau::inplace_scatter_append} method and the \path{stim::Tableau::inplace_scatter_prepend} method.
There are also many specialized optimized methods for inplace composition of common gates, such as \path{stim::Tableau::prepend_SQRT_Z}.
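Continuing the toy representation from the previous sketch (again, my own code rather than Stim's), the two composition procedures described above are each a few lines:
\begin{verbatim}
def append(A, B):
    # Append B into A (the mutation A -> B o A described above):
    # conjugate each of A's stored columns by B.
    return {generator: conjugate(B, column) for generator, column in A.items()}

def prepend(A, B):
    # Prepend B into A: for each generator g_q appearing in B, overwrite A's
    # column for g_q with the result of conjugating B's output for g_q by A.
    out = dict(A)
    for generator, column in B.items():
        out[generator] = conjugate(A, column)
    return out
\end{verbatim}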
\subsubsection{Inverting Tableaus}
The inverse $T^{-1}$ of a stabilizer tableau $T$ is the unique tableau that satisfies $T \circ T^{-1} = T^{-1} \circ T = I$, where $I$ is the identity tableau.
Although Clifford circuits can contain highly nonlocal effects such as entanglement, inverting the Pauli terms in a stabilizer tableau is almost entirely a local process.
For example, consider the following partially specified stabilizer tableau:
$$
T = \begin{array}{r|cc|cc|cc|cc}
& X_1 & Z_1 & X_2 & Z_2 & X_3 & Z_3 & X_4 & Z_4 \\
\hline
\pm & ? & ? & ? & ? & ? & ? & ? & ? \\
1 & ? & ? & ? & ? & ? & ? & ? & ? \\
2 & ? & ? & ? & ? & ? & ? & ? & ? \\
3 & ? & ? & ? & ? & ? & ? & ? & ? \\
4 & ? & ? & ? & ? & X & & ? & ? \\
\end{array}
$$
This tableau specifies four bits of information.
It states that conjugating $X_3$ by $T$ will produce a Pauli product that has an $X$ term on qubit 4, and that conjugating $Z_3$ by $T$ produces a Pauli product with no term on qubit 4.
Nothing else is specified.
What can we determine about the inverse tableau, using only these four bits of information?
Well, we know that the generator $X_4$ commutes with $T^{-1} X_3 T$.
Therefore $T X_4 T^{-1}$ must also commute with $T T^{-1} X_3 T T^{-1} = X_3$, since conjugation preserves commutation.
This means that the term on qubit 3 of $T X_4 T^{-1}$ must be either $X$ or $I$.
Furthermore, because $X_4$ commutes with $T^{-1} Z_3 T$, we know $T X_4 T^{-1}$ must also commute with $Z_3$.
This leaves only one possibility for the term on qubit 3 resulting from conjugating $X_4$ by the inverse of $T$: the identity gate.
We can similarly solve for the term on qubit 3 of $T Z_3 T^{-1}$.
Given four bits of information about the tableau, we have derived four bits of information about the inverse tableau:
$$
T^{-1} = \begin{array}{r|cc|cc|cc|cc}
& X_1 & Z_1 & X_2 & Z_2 & X_3 & Z_3 & X_4 & Z_4 \\
\hline
\pm & ? & ? & ? & ? & ? & ? & ? & ? \\
1 & ? & ? & ? & ? & ? & ? & ? & ? \\
2 & ? & ? & ? & ? & ? & ? & ? & ? \\
3 & ? & ? & ? & ? & ? & ? & & Z \\
4 & ? & ? & ? & ? & ? & ? & ? & ? \\
\end{array}
$$
The terms on qubit $b$ that result from applying $T$ to $X_a$ and $Z_a$ always completely determine the terms on qubit $a$ from applying $T^{-1}$ to $X_b$ and $Z_b$.
To compute the Pauli terms of the inverse tableau, all that is needed is to transpose the input and output indices and then apply a few local tweaks that correspond to solving the commutation constraints.
With the Pauli terms computed, we can move on to computing the signs.
Let $S$ be the tableau with the same Pauli terms as $T^{-1}$, but with all signs positive.
If the sign in the column for the generator $g_q$ is negative, then round-tripping $g_q$ through $S$ and then $T$ will return $-g_q$ instead of $+g_q$.
That is to say, the sign of the $g_q$ column in $T^{-1}$ is equal to $T Sg_q S^{-1} T^{-1} g_q$.
To compute the signs, we evaluate this expression for each generator $g_q$.
Interestingly, computing the signs of the inverse tableau is more expensive than computing the Pauli terms.
It takes $O(n^2)$ time to compute the Pauli terms of the inverse of an $n$ qubit tableau, and $O(n^3)$ time to compute the signs.
One possible way to avoid this cost is to associate a pair of signs with every row of the tableau, such that the column signs of the inverse tableau are equal to the row signs of the tableau.
When appending and prepending operations into a tableau, the row signs can be updated along with the column signs at no additional cost (asymptotically speaking).
However, for the use case in this paper, only the column signs are needed so I won't bother with including the row signs.
In Stim, the inverse of a tableau can be computed using the \path{stim::Tableau::inverse} method.
\subsection{Pauli Frames}
A Pauli frame stores, for each qubit, whether or not that qubit has been bit flipped and/or phase flipped relative to some reference state.
A Pauli frame can be moved through a Clifford circuit in the same way that Pauli product observables are moved through the circuit: by conjugating the frame by the Clifford operations that the frame is passing through.
In effect, a Pauli frame is just a Pauli product where the global phase is ignored.
\subsubsection{Tracking Noise}
When simulating a noisy stabilizer circuit, where the noise is composed of probabilistic Pauli operations, it is not necessary to directly apply the noise to the simulated qubits.
Instead, the noise can be accumulated into a Pauli frame stored alongside the simulation \cite{knill2005quantum}.
As the simulation progresses, applying gates to the qubits, the Pauli frame is kept synchronized with the simulation by conjugating the frame's contents by the same gates.
When the simulation reports the measurement result from a qubit $q$, and the Pauli frame is storing an $X$ or $Y$ update for $q$, the measurement result is intercepted and inverted before being forwarded along.
The forwarded measurements are equivalent to samples from the noisy circuit.
In other words, by using a Pauli frame, you can augment any stabilizer simulator into a noisy stabilizer simulator.
This works even without access to the internal implementation details of that simulator.
\subsubsection{Tracking Corrections}
Pauli frames can also track corrective Pauli operations, so that those operations don't have to be applied to the underlying qubits.
For example, this can be used to augment a black box non-adaptive stabilizer simulator into a simulator that supports adaptive Pauli operations that depend on previous measurement results.
This even works if the simulator is replaced by a physical quantum computer \cite{ware2017experimental}.
The ability to track corrections from the outside using a Pauli frame is a key technique in quantum error correction \cite{knill2005quantum}.
It is the fundamental reason why just-in-time decoding of errors isn't needed for stabilizer codes like the surface code \cite{fowler2012surfacecodereview}.
Any corrections that are needed can be backdated into the Pauli frame at the appropriate time, and propagated forward through the circuit to the current time by toggling recorded measurements as dictated by the Pauli frame.
\section{Pauli Frame Simulation}
\label{sec:framesim}
A Pauli frame simulator works by propagating a Pauli frame through a circuit \cite{rall2019simulation}.
Clifford operations conjugate the frame, noise processes multiply Paulis into the frame, and collapsing operations randomize parts of the frame.
The benefit of Pauli frame simulation is that the frame only takes $O(n)$ bits to store and $O(1)$ time to update per gate.
The downside of Pauli frame simulation is that it doesn't tell you measurement results; it tells you whether or not measurements {\em were flipped}.
To convert this information into actual measurement results, you need a noiseless reference sample to diff against.
In a Pauli frame simulation, noise processes must be Pauli channels (i.e. equivalent to sampling a Pauli product from a probability distribution and applying the sampled product to the system's state).
For example, dephasing and depolarization are Pauli channels but relaxation to the ground state and leaking outside the computational basis aren't.
A Pauli frame simulator simulates a Pauli channel noise process by sampling a Pauli product from the Pauli product distribution corresponding to the noise process and multiplying the sampled Pauli product into the Pauli frame.
Paulis can also be multiplied into the Pauli frame by collapsing operations (initializations, resets, and measurements).
These operations introduce new stabilizers into the system.
To force later measurements to have random results if they measure observables that anticommute with a newly introduced stabilizer, the stabilizer is multiplied into the Pauli frame with 50\% probability.
In other words, after each initialization and reset and measurement, a \path{Z_ERROR(0.5)} is applied to the target qubit.
Conveniently, inserting these random Z operations is equivalent to replacing the noiseless reference sample that is being used with another independent noiseless sample from the circuit.
The random Z operations make all reference samples interchangeable and reusable.
Putting all of the pieces together, a Pauli frame simulation works as follows.
(Note: this summary will describe actions as if they were applied to one isolated Pauli frame.
Stim uses SIMD operations to apply each action to multiple frames simultaneously.)
\begin{enumerate}
\item A reference sample is collected from the target circuit, with all noise processes disabled, using some other method.
The reference sample can be used and reused as many times as needed.
\item A Pauli frame is initialized with a randomly chosen $I$ or $Z$ term on each qubit (due to initialization being a collapsing operation).
\item The Pauli frame is advanced through the circuit.
\begin{itemize}
\item When the frame crosses a Clifford operation $C$, the frame undergoes the update $F \rightarrow C^{-1} F C$.
\item When the frame crosses a reset, the qubit's term in the frame is set to the identity operation.
Then a $Z$ on the target qubit is multiplied into the frame with 50\% probability, because resets are collapsing operations.
\item When the frame crosses a Pauli error channel, a Pauli product is sampled from the error channel (usually the no-error identity operation is by far the most likely) and multiplied into the frame.
\item When the frame crosses a measurement, the result of that measurement (for the simulation run that the frame represents) is reported as $r_M \oplus x_q$ (where $r_M$ is the reference measurement result and $x_q$ is true if the Pauli frame has an $X$ or $Y$ term on the target qubit).
Then a $Z$ on the target qubit is multiplied into the frame with 50\% probability, because measurements are collapsing operations.
\end{itemize}
\item If more samples of the circuit are needed, go to step 2. Otherwise stop.
\end{enumerate}
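As a toy illustration of steps 2 through 4 (my own single-shot sketch with made-up names, not Stim's batched SIMD code), the frame updates for a few gates look like this:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

class PauliFrame:
    def __init__(self, num_qubits):
        self.x = np.zeros(num_qubits, dtype=bool)
        # Step 2: initialization collapses, so start with a random Z term on each qubit.
        self.z = rng.integers(0, 2, num_qubits).astype(bool)

    def h(self, q):  # Clifford gates conjugate the frame: H swaps X and Z.
        self.x[q], self.z[q] = self.z[q], self.x[q]

    def cnot(self, c, t):  # X spreads control -> target, Z spreads target -> control.
        self.x[t] ^= self.x[c]
        self.z[c] ^= self.z[t]

    def x_error(self, q, p):  # Pauli noise is multiplied into the frame when sampled.
        self.x[q] ^= bool(rng.random() < p)

    def measure(self, q, reference_bit):
        # Report reference XOR (frame has X or Y on q), then collapse with a random Z.
        result = reference_bit ^ bool(self.x[q])
        self.z[q] ^= bool(rng.integers(0, 2))
        return result

# Demo: Bell pair with a guaranteed X error before the CNOT.
# The error flips both measurements together, so they stay perfectly correlated.
frame = PauliFrame(2)
frame.h(0)
frame.x_error(0, 1.0)
frame.cnot(0, 1)
reference = (False, False)  # one valid noiseless sample of this circuit
assert frame.measure(0, reference[0]) == frame.measure(1, reference[1])
\end{verbatim}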
\section{Stabilizer Tableau Simulation}
\label{sec:tableausim}
Consider the first measurement in a stabilizer circuit.
(If there are multiple measurements tied for first, pick one of them arbitrarily.)
This measurement is measuring the observable $Z_q$, where $q$ is the qubit targeted by the measurement.
Between this measurement and the $|0\rangle$ states at start of the circuit there are a variety of Clifford gates forming a compound Clifford operation $C$.
By conjugating the current-time observable $Z_q$ by the inverse Clifford operation $C^{-1}$, we get some observable from the start of time that is equivalent to $Z_q$ at the current time.
Measuring $C Z_q C^{-1}$ at the start of time is equivalent to measuring $Z_q$ at the current time.
Suppose that $C Z_q C^{-1}$ is equal to $-Z_a Z_b$.
At the start of time we know that qubits $a$ and $b$ are in the $|0\rangle$ state, i.e. in the +1 eigenstate of the $Z$ observable.
Therefore we can replace $Z_a$ with $+1$, and we can do the same for $Z_b$.
If we were to measure $-Z_a Z_b$ at the start of time, we would get a measurement result of $-1$.
And since this observable is equivalent to $Z_q$ at the current time, we can conclude that measuring $Z_q$ now should also return a deterministic measurement result of $-1$.
Suppose alternatively that $C Z_q C^{-1}$ was equal to $-X_a X_b$.
This start-of-time observable anticommutes with the initial state, so the measurement result will be random.
But, in order to fully resolve the measurement, we need to simplify the observable we are dealing with.
The key insight to make is that, because all qubits are initialized into the $|0\rangle$ state, a controlled operation inserted at the start of time will have no effect on the state (because the control is not satisfied).
However, inserting a controlled operation at the start of time can change the start-of-time observables.
For example, if we insert a CNOT operation controlled by qubit $a$ and targeting qubit $b$ at the start of time, we will change $C$ so that $C Z_q C^{-1}$ is equal to $-X_a$ instead of $-X_a X_b$.
We like this, because it reduces the number of terms that are present.
Measuring $Z_q$ at the current time is now equivalent to measuring $-X_a$ at the beginning of time.
The latter measurement is clearly 50/50 random and forces qubit $a$ into either the $|+\rangle$ or $|-\rangle$ state (at the start of time).
We emulate the measurement collapse by inserting a Hadamard gate at the start of time (to prepare a $|+\rangle$) and then randomly inserting an $X$ gate or not before the Hadamard (to choose between $|+\rangle$ and $|-\rangle$).
After this change to the circuit, the $Z_q$ observable at the current time is equivalent to either $Z_a$ or $-Z_a$ at the start of time, and we have reduced the random measurement case to the deterministic measurement case.
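For reference, the two conjugation facts used in this example are standard stabilizer identities (both CNOT and $H$ are their own inverses):
$$
\text{CNOT}_{a \rightarrow b} \, (-X_a X_b) \, \text{CNOT}_{a \rightarrow b} = -X_a,
\qquad
H_a \, (-X_a) \, H_a = -Z_a.
$$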
Each measurement in the circuit, working from the start of time to the end of time, can be resolved in the same way that we resolved these two example cases.
Measurements whose equivalent-start-of-time-observable contains only $Z$ terms are deterministic, and have a result equal to the sign of their equivalent-start-of-time-observable.
After reporting this result, the offending measurement gate can be deleted from the circuit.
Measurements whose equivalent-start-of-time-observable contains an $X$ or $Y$ term trigger an elimination procedure which simplifies their observable into an observable with a single $X$ term.
This can be done by introducing no-effect controlled operations and no-effect S gates at the start of time.
Once a single term remains, a Hadamard gate is inserted at the start of time to emulate the $X$ observable collapsing and an $X$ gate is randomly inserted at the start of time to emulate the randomness of the result.
This forces the measurement under consideration to become deterministic, so it can be handled by the deterministic case.
\subsection{The Asymptotic Benefits of Backwards Thinking}
At this point I should note that, in previous work \cite{aaronson2004chp}, this whole process was explained in reverse.
Instead of being framed in terms of how single qubit observables at the current time mapped to compound observables at the start of time, previous work was framed in terms of single qubit stabilizers and ``destabilizers'' at the start of time being mapped to compound Pauli products at the current time.
For example, measurements were classified as deterministic or random based on whether or not they anticommuted with any of the current-time stabilizers.
This is equivalent to checking whether the $Z$ observable mapped backward anticommutes with the initial state for the same reason that inverting a Clifford tableau is nearly a transposition.
The downside of thinking in terms of mapping forwards, instead of mapping backwards, is that the measurement result is not immediately available when a measurement is deterministic.
When mapping backwards, the measurement result is the sign of the equivalent observable at the start of time.
This allows a deterministic measurement to be classified as deterministic and resolved in worst case time $O(n)$, where $n$ is the number of qubits.
When mapping forwards, it is necessary to find a combination of stabilizers which multiply together to form the desired measurement's observable.
That process has a worst case time of $O(n^2)$, instead of $O(n)$.
\subsection{Tracking the Tableau}
To perform a stabilizer tableau simulation efficiently, we need a data structure where we can efficiently append Clifford operations (as we progress further into the circuit), efficiently prepend operations (for the elimination process at the beginning of time), and efficiently determine the start-of-time observable of a measurement.
In other words, we need a stabilizer tableau.
Here are the steps used to perform a stabilizer tableau simulation:
\begin{enumerate}
\item Initialize an identity stabilizer tableau $T$.
To aid the intuition, imagine it being positioned at the start of the circuit being simulated, just after the qubits have been initialized into the $|0\rangle$ state.
\item Begin folding circuit operations into $T$, working from earliest to latest.
\begin{itemize}
\item To fold a Clifford gate $C$ into $T$, perform an inplace prepend of $C^{-1}$ into $T$ then delete $C$ from the circuit.
(We prepend the inverse of $C$, instead of appending $C$, because $T$ is tracking the inverse of the circuit processed so far.)
\item To fold a measurement gate on qubit $q$ into $T$, first check whether or not the $Z_q$ column in $T$ contains any $X$ or $Y$ terms.
If it does, the measurement is random and must be resolved.
\begin{itemize}
\item If the measurement is random, arbitrarily pick one of the $X$ or $Y$ terms to be the ``pivot''.
For each other $X$ or $Y$ term, perform an inplace append into $T$ of a CNOT operation controlled by the pivot targeting that term.
(It is not necessary to remove the leftover $Z$ terms.)
To collapse the target qubit, append into $T$ a single qubit operation targeting the pivot, changing its term to a $Z$ (from either an $X$ or $Y$).
Finally, to randomize the measurement result, flip a coin to decide whether or not to inplace append an $X$ operation on the pivot into $T$.
\end{itemize}
The measurement is now deterministic.
Report the sign of the $Z_q$ column as the result, and delete the measurement gate from the circuit.
\item To fold a reset gate into $T$, perform a measurement on the qubit without reporting the result.
The qubit is now either in the $|0\rangle$ state or the $|1\rangle$ state, determined by the sign of the $Z_q$ column of $T$.
To force the qubit into the $|0\rangle$ state, overwrite the $Z_q$ column's sign with $+1$.
\end{itemize}
\end{enumerate}
The asymptotic complexity of this simulation is $O(ng + nd + n^2r)$ where $n$ is the number of qubits, $g$ is the number of gates, $d$ is the number of measurements that have a deterministic result given previous measurements, and $r$ is the number of measurements that have random results.
(Note: $d$ and $r$ also include contributions from operations such as resets, which implicitly perform a measurement as part of their implementation.)
This is an improvement over the complexity $O(ng + n^2d + n^2r)$ from previous work.
An example of a context where this improvement provides an asymptotic advantage is simulating a distance $d$ surface code circuit for $d$ rounds.
Typically, such a circuit will have $\Theta(d^2)$ qubits, $\Theta(d^2)$ deterministic measurements in each round, and $\Theta(d^2)$ random measurements in the first and last round.
Simulating this circuit by tracking the forward tableau has a worst case simulation cost of $\Theta(d^7)$ whereas tracking the inverse tableau results in a worst case cost of $\Theta(d^6)$.
(Interestingly, the $\Theta(d^6)$ cost comes entirely from the random measurements in the first and last rounds.
All of the intermediate rounds combined have a cost of $\Theta(d^5)$.)
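Spelling the arithmetic out (writing $m_{\text{det}}$ for the deterministic measurement count to avoid clashing with the code distance $d$, and assuming $\Theta(d^2)$ gates per round so that $n = \Theta(d^2)$, $g = \Theta(d^3)$, $m_{\text{det}} = \Theta(d^3)$, and $r = \Theta(d^2)$):
$$
\underbrace{ng}_{\Theta(d^5)} + \underbrace{n\,m_{\text{det}}}_{\Theta(d^5)} + \underbrace{n^2 r}_{\Theta(d^6)} = \Theta(d^6)
\qquad \text{versus} \qquad
\underbrace{ng}_{\Theta(d^5)} + \underbrace{n^2 m_{\text{det}}}_{\Theta(d^7)} + \underbrace{n^2 r}_{\Theta(d^6)} = \Theta(d^7).
$$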
\section{Software Engineering}
\label{sec:engineering}
\subsection{Data layout and vectorization}
There are many factors that affect software speed, such as the choice of programming language, the underlying algorithm, and company culture.
However, when attempting to write code that performs near the limits of what your computer is capable of, there are some factors that {\em have} to be considered.
Data layout is one of those factors.
In a good data layout, most data accesses are sequential.
They are performed while iterating across contiguous memory.
This makes the memory accesses predictable, so that various caching mechanisms can hide latency by prefetching data before it is needed.
In a bad data layout, data accesses are disorganized and difficult to predict, resulting in cache misses.
This can easily slow down your code by an order of magnitude (that's what happened in some quick tests I ran) if not more \cite{norviglatency}.
When initially imagining Stim, I guessed that it was going to spend most of its time doing three key types of operations: (1) applying unitary operations and deterministic measurements to the stabilizer tableau by interacting its columns, (2) resolving random measurements by performing Gaussian elimination over the rows of the stabilizer tableau, and (3) tweaking Pauli frames as they propagated through the operations in a circuit.
Notice that there are both row-wise and column-wise operations being applied to the stabilizer tableau.
This is bad, because it means we have to pick one to be fast and one to be slow.
Alternatively, we can spend effort transposing the table data to switch from one being fast to the other being fast.
In the circuits I care about (error correcting circuits) random measurements tend to come in large bursts.
With that in mind, I decided to go with the switching strategy.
Usually the stabilizer tableau is stored in column major order, so that unitary operations and deterministic measurements are efficient.
When (hopefully large batches of) random measurements need to be processed, the tableau is temporarily transposed into row major order.
(An example of a circuit that penalizes this strategy is a surface code circuit with a gauge covering a single missing data qubit \cite{nagayama2017surfacegauge}.)
I did experiment with an alternative data layout where 256x256 bit blocks from the table were contiguous in memory.
This significantly reduces the cost of switching from column-wise to row-wise operations (because only local transposes of the 256x256 blocks are needed).
However, it also forces key operations like Pauli string multiplication to skip through memory instead of iterating through contiguous memory.
Profiling showed that the costs outweighed the benefits.
For the Pauli frame simulator, I was initially worried that a good data layout didn't exist.
The problem is that the gates being performed by the circuit are not predictable, and each only affects a tiny number of bits in the Pauli frame.
This is a worst case scenario when it comes to cache misses.
Even for circuits with well organized operations, like surface code circuits operating on local 2d grids, some of the operations would have to be running ``against the grain'' of memory.
I spent quite a lot of time trying to think of workarounds for this problem.
Eventually, I realized that I shouldn't be trying to vectorize within a Pauli frame simulation.
Instead, the correct thing to do is to vectorize {\em across multiple} Pauli frame simulations.
The data layout I decided on was to have 1024 frames packed together into a two dimensional table of bits, with the major axis corresponding to the qubits in the frame and the minor axis (i.e. the contiguous-in-memory axis) corresponding to the different frames being tracked.
Gates can then be applied to many frames using single instructions.
For example, an S gate targeting qubit $q$ is performed by xoring $x_q$ into $z_q$, where $x_q$ and $z_q$ are the bits of the xz-encoded Pauli for qubit $q$ in the Pauli frame.
Instead of individually operating on each $x_q$ and $z_q$ from each frame, there is a 256 bit word containing 256 separate $x_q$ bits from 256 frames, and a 256 bit word containing 256 separate $z_q$ bits from 256 frames, and the S gate is applied to those 256 frames by one \path{_mm256_xor_si256} instruction computing the 256 new $z_q$ values.
The story is similar for all other gates (e.g. see \fig{pauli_frame_code} for the CNOT gate's code).
With this approach each gate may still touch memory in an unpredictable way, but the cost of the cache miss will be amortized over a thousand parallel frame updates.
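As a concrete (simplified) example in the same style as \fig{pauli_frame_code}, an S gate applied to a batch of frames might look like the following sketch; the function name and calling convention are illustrative rather than Stim's actual code.
\begin{cpp}
#include <immintrin.h>

// Apply an S gate on one qubit to a batch of 256n Pauli frames.
// x and z point at the 256-bit words holding that qubit's X and Z error bits,
// one bit per tracked frame.
void apply_S_to_batched_pauli_frames(int n, __m256i *x, __m256i *z) {
    for (int k = 0; k < n; k++) {
        // z_q ^= x_q for 256 frames with a single vector xor.
        z[k] = _mm256_xor_si256(z[k], x[k]);
    }
}
\end{cpp}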
\begin{figure}
\centering
\begin{cpp}
int inplace_pauli_string_multiplication(
int n, __m256i *x1, __m256i *z1, __m256i *x2, __m256i *z2) {
// The 1s and 2s bits of 256 two-bit counters.
__m256i c1{};
__m256i c2{};
// Iterate over data in 256 bit chunks.
for (int k = 0; k < n; k++) {
__m256i old_x1 = x1[k];
__m256i old_z1 = z1[k];
// Update the left hand side Paulis.
x1[k] ^= x2[k];
z1[k] ^= z2[k];
// Accumulate anti-commutation counts.
__m256i x1z2 = old_x1 & z2[k];
__m256i anti_commutes = (x2[k] & old_z1) ^ x1z2;
c2 ^= (c1 ^ x1[k] ^ z1[k] ^ x1z2) & anti_commutes;
c1 ^= anti_commutes;
}
// Determine final anti-commutation phase tally.
return (popcount(c1) + 2 * popcount(c2)) & 3;
}
\end{cpp}
\caption{
Vectorized Pauli string multiplication.
Multiplies one fixed length xz-encoded Pauli string into another while computing the base-$i$ logarithm of the resulting scalar phase.
The loop body contains 4 SIMD loads, 4 SIMD stores, and 11 bitwise SIMD operations (after trivial compiler optimizations).
I believe the number of bitwise SIMD operations is optimal, and would be very interested to hear if anyone can achieve the same effect with fewer.
}
\label{fig:pauli_mult_code}
\end{figure}
\begin{figure}
\centering
\begin{cpp}
void apply_CNOT_to_batched_pauli_frames(
int n, __m256i *x1, __m256i *z1, __m256i *x2, __m256i *z2) {
// Iterate over tracked frames in chunks of 256.
for (int k = 0; k < n; k++) {
z1[k] ^= z2[k];
x2[k] ^= x1[k];
}
}
\end{cpp}
\caption{
Vectorized code to apply a CNOT operation to a batch of $256n$ Pauli frames.
Note that the code is looping over multiple Pauli frames, not over multiple qubits.
The argument \protect\path{x1} points at data storing whether or not, for each Pauli frame, there is currently an X error on the control of the CNOT (the other arguments are similar, but for the Z errors and/or the target qubit).
}
\label{fig:pauli_frame_code}
\end{figure}
Another data layout decision I had to make as part of implementing Stim was whether or not to interleave the x and z bits of xz-encoded Pauli products.
Initially, I decided that for locality reasons each 256 bit word should contain the 128 x bits and 128 z bits from 128 Paulis.
This is a bad idea.
It violates a core rule of thumb for writing fast vectorized code: don't mix different data together.
For example, consider simulating an $S$ gate (again).
As part of simulating the $S$ gate, the x bits of a Pauli string have to be xored pairwise into the z bits.
If the x and z bits are in separate words, you can process the xor part of the $S$ gate for 256 Paulis using one AVX bitwise xor instruction.
If the x and z bits are in the same word, the xor will have to be accompanied by some shifting and masking and these multiple instructions will have only processed 128 Paulis instead of 256.
This realization flipped my decision from ``yes interleave'' to ``definitely do not interleave''.
(I also found that alternating between 256 bit x words and 256 bit z words, like \path{XZXZXZXZ}, was worse than doing all of one then all of the other, like \path{XXXXZZZZ}.
That being said, in this case the difference was small enough that I wouldn't be confident of it reproducing.)
\subsection{Sparse vs Dense}
\label{sec:sparsevdense}
Stim uses a dense representation for its stabilizer tableau.
Every bit is stored, even if almost all of the bits are zero.
In some contexts, like simulating the surface code, the stabilizer tableau representation can be made sparse and so a dense representation is wasteful.
Given how much I've been talking about the surface code in this paper, shouldn't I want to use a sparse representation instead of a dense one?
Well, maybe.
There are three issues that complicate the situation.
First, in order for a simulator to use a sparse representation, it has to {\em find} and {\em maintain} that representation.
To do this well, the simulator has to quickly solve non-local problems like ``where are the logical qubits hiding in this circuit?''.
In the general case, these problems look very similar to problems with non-linear scaling (like Gaussian elimination).
Second, sparse representations hide huge constant factors compared to dense representations.
For example, Stim has a \path{--detector_hypergraph} mode where it groups error mechanisms present in a circuit based on which annotated detectors and logical observables those errors flip.
This error analysis does work very similar to what the tableau simulator does, but using a sparse representation.
Despite that, the error analysis is slower on most of the circuits I care about.
The problem is that a key bottleneck operation is computing the symmetric difference of small sets of integers, and my best efforts at writing code to do this quickly produced a result that takes on the order of ten nanoseconds to process sets with ten items.
By contrast, using a dense bit packed representation where the symmetric difference is computed using 256 bit wide xor operations, ten nanoseconds is enough time to process ten thousand items.
In other words, if you care more about speed than space, you may find that a sparse representation is worse until the sparsity of your data exceeds 99.9\%.
Third, there is elegance in a piece of software that is {\em unconditionally} fast.
It's bad enough that Stim's performance depends on the density of random measurements in a circuit.
I would prefer not to involve even more complicated properties like the existence of a stabilizer basis with low weight generators.
Using a sparse representation would be more appropriate for circuits with hundreds of thousands or millions of qubits, where even storing the stabilizer tableau becomes problematic.
Sparse representations are also more compelling in contexts where the hard or finicky parts of finding the sparse basis are embedded into the representation of the problem (e.g. explicitly annotating the locations of the logical qubits in the circuit).
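To illustrate the dense-versus-sparse tradeoff described above, compare the inner loops of the two approaches.
This is a toy sketch, not code from Stim, and the function names are invented for the comparison.
\begin{cpp}
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iterator>
#include <vector>

// Sparse: symmetric difference of two sorted index sets. Correct, but branchy
// and dominated by per-element overhead when the sets are small.
std::vector<uint32_t> sparse_xor(const std::vector<uint32_t> &a,
                                 const std::vector<uint32_t> &b) {
    std::vector<uint32_t> result;
    std::set_symmetric_difference(a.begin(), a.end(), b.begin(), b.end(),
                                  std::back_inserter(result));
    return result;
}

// Dense: the same operation on bit packed data is a straight xor over words,
// which the compiler can turn into wide SIMD instructions.
void dense_xor(uint64_t *a, const uint64_t *b, size_t n_words) {
    for (size_t k = 0; k < n_words; k++) {
        a[k] ^= b[k];
    }
}
\end{cpp}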
\subsection{GPU Experiment Failure}
I did a small test to check if using a GPU would be beneficial for stabilizer tableau simulation or Pauli frame simulation.
All of the expensive operations in a stabilizer simulation are embarrassingly parallel bitwise operations.
They are very similar to xoring two arrays together.
So I wrote a WebGL2 fragment shader to xor together two large arrays, and tested how long it took.
I found that (surprisingly) the shader was roughly as fast at xoring as my CPU.
I expected the performance to be much better.
In hindsight, I suspect this is because the arithmetic intensity of xoring two arrays together is {\em extremely} low, and the operation doesn't use floating point arithmetic.
That is to say, bitwise xoring doesn't hit on the comparative strengths of GPUs over CPUs.
This quick xor test suggests a GPU would perform poorly (or at least not significantly better than a CPU) at stabilizer simulation.
Since programming GPUs is harder, and was unlikely to give much benefit, I dropped the idea.
(I caution the reader that perhaps I'm simply not familiar enough with GPU programming to have produced a fast implementation.
For example, perhaps a compute shader would be more effective or perhaps I should have used a standalone GPU instead of an embedded GPU.
There's certainly no fundamental obstacle to a specialized piece of hardware running huge bitwise operations significantly faster than a CPU.)
\subsection{Threading}
I tried adding threading to various parts of Stim.
For example, when given a batch of CNOT operations to do, I tried partitioning them into two groups and running the two groups on separate threads.
Surprisingly, this was less than ten percent faster.
I didn't look deeply into why there was so little benefit; I simply decided based on the measurement that the complexity of threading batches of operations wasn't worth the gain.
One place where I found that it was worth adding threading was when transposing stabilizer tableaus.
The stabilizer tableau's data is divided into four independent pieces (x/z bits of X/Z observables), so it's trivial to process each one on a separate thread without worries of contention.
Overall I found that at large sizes this was about twice as fast as not threading the tableau transpose.
\subsection{Entropy}
\label{sec:randomness}
When running noisy Pauli frame simulations, Stim can consume gigabytes of entropy per second.
Stim gets this entropy by using \path{std::mt19937_64}, the 64 bit Mersenne twister PRNG included in C++'s standard library.
The PRNG is seeded using \path{std::random_device}, the questionable \cite{cpprandomtroubles2020} source of external entropy built into C++'s standard library.
Stim is often simulating noise that occurs with low probability (e.g. $p < 1\%$).
When deciding whether or not each instance of a noise process has occurred, Stim applies an optimization that reduces waste in the conversion from max-entropy bits to low-entropy bits.
Instead of sampling from
\path{std::bernoulli_distribution(p)} $n$ times, Stim samples the gap between errors using \path{std::geometric_distribution(p)} $np$ times (on average).
This reduces the expected cost of sampling $n$ potential errors from $\Theta(n)$ to $\Theta(np + 1)$.
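A minimal sketch of the gap-sampling idea (not Stim's implementation; it assumes $0 < p < 1$ and a caller-provided generator):
\begin{cpp}
#include <cstddef>
#include <random>
#include <vector>

// Return the indices, out of n independent trials, at which an error with
// probability p occurred. Expected work is O(np + 1) instead of O(n).
std::vector<size_t> sample_error_indices(size_t n, double p, std::mt19937_64 &rng) {
    std::vector<size_t> hits;
    std::geometric_distribution<size_t> gap(p);  // failures before the next success
    for (size_t k = gap(rng); k < n; k += gap(rng) + 1) {
        hits.push_back(k);
    }
    return hits;
}
\end{cpp}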
Sometimes noise occurs with intermediate probability (e.g. $10\% < p < 40\%$).
To decide whether or not this type of noise occurs, Stim uses a hybrid sampling strategy.
The desired probability $p$ is decomposed into a truncated probability $p_{\texttt{trunc}} = \lfloor 256p \rfloor / 256$ and a refinement probability $p_{\texttt{refine}} = \frac{p - p_{\texttt{trunc}}}{1 - p_{\texttt{trunc}}}$.
The truncated probability is a multiple of $2^{-8}$, so bits that are true with this probability can be sampled quickly and exactly by generating a uniformly random byte, dividing it by 256, and comparing it to the truncated probability.
Bits that are true with the refinement probability can also be sampled quickly, because the refinement probability is small.
The bitwise OR of a truncated probability bit and a refinement probability bit is a bit that is true with the desired intermediate probability.
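Here is a sketch of that decomposition for a single bit; Stim applies the same idea in bulk, and the function and variable names below are just for illustration.
\begin{cpp}
#include <cstdint>
#include <random>

// Sample a bit that is true with probability p (0 <= p < 1) by combining a
// cheap byte comparison (multiples of 1/256) with a rare refinement bit.
// P(true) = p_trunc + (1 - p_trunc) * p_refine = p.
bool sample_hybrid_bit(double p, std::mt19937_64 &rng) {
    uint32_t threshold = uint32_t(p * 256);           // floor(256 p)
    double p_trunc = threshold / 256.0;
    double p_refine = (p - p_trunc) / (1 - p_trunc);
    bool truncated_bit = (rng() & 0xFF) < threshold;  // true with probability p_trunc
    bool refinement_bit = std::bernoulli_distribution(p_refine)(rng);
    return truncated_bit || refinement_bit;
}
\end{cpp}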
These optimizations, and others, allow Stim to sample from any Bernoulli distribution at gigahertz rates.
\subsection{Sampling Detection Events}
A ``detector'' is a specified set of measurement locations in a circuit, with the promise that xoring those locations' measurement results together will produce a deterministic value (under noiseless conditions).
When a detector's measurements xor together into a value opposite to the expected one, that is a ``detection event'' indicating the presence of errors.
In addition to its measurement result sampling mode \path{--sample=#}, Stim supports a detection event sampling mode \path{--detect=#}.
Note that, in principle, detection events can be sampled with better asymptotic complexity than measurements.
For example, consider that, in the surface code, an X error on a data qubit will flip every single future measurement of a $Z$ stabilizer involving that qubit.
By contrast, when sampling detection events, this same error flips two nearby detectors and that's it.
By sampling detection events instead of measurements, errors with unbounded non-local effects can become errors with bounded local effects.
Another benefit of sampling detection events is that, since the only data that is being reported is which detectors were flipped, otherwise-distinct errors can be fused together \cite{chao2020optimization}.
Ultimately, by using a sparse representation for the set of flipped detectors, and sampling low-probability errors the way I described in \sec{randomness}, sampling the detection events in a circuit with $O(n)$ gates and detectors can be done in $O(npd + 1)$ operations (where $p$ is the average error probability and $d$ is the average number of detectors flipped by an error).
Contrast with the $O(n)$ work needed to produce measurement samples using a Pauli frame simulation.
Currently, Stim doesn't implement the asymptotically efficient detector sampling method.
Stim produces detector samples by sampling measurement results and then combining them.
This is because, for the noise levels around 0.1\% that are most interesting to me, the larger constant factors inherent in using a sparse representation outweigh the asymptotic gains.
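For reference, the combining step is just a parity check over each detector's recorded measurement locations; a minimal sketch (illustrative, not Stim's actual code):
\begin{cpp}
#include <cstddef>
#include <vector>

// Compute whether a detector fired: xor the measurement results at the
// detector's recorded locations and compare to the expected noiseless parity.
bool detection_event(const std::vector<bool> &measurement_results,
                     const std::vector<size_t> &detector_measurement_indices,
                     bool expected_parity) {
    bool parity = false;
    for (size_t index : detector_measurement_indices) {
        parity ^= measurement_results[index];
    }
    return parity != expected_parity;
}
\end{cpp}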
\subsection{Testing}
Stim contains a lot of code that is very easy to get wrong.
There are hundreds of opportunities for sign errors, transposition errors, substitution errors, and omissions that would go undetected without testing.
For example, embarrassingly, version 1.0 had a transposition error where the effects of the \path{X_ERROR} and \path{Z_ERROR} noise channels were swapped when in \path{--repl} mode.
To verify the behavior of the operations in Stim, they are redundantly specified and the redundancies are checked for consistency.
Each Clifford operation is specified in at least three different ways.
As a unitary matrix:
$$
\texttt{unitary}(\sqrt{Y}) = \frac{1}{2}\begin{bmatrix}
1 + i & -1 - i \\
1 + i & 1 + i
\end{bmatrix}
$$
As a stabilizer tableau:
$$
\texttt{tableau}(\sqrt{Y}) =
\begin{array}{r|cc}
& X_1 & Z_1 \\
\hline
\pm & - & + \\
1 & Z & X \\
\end{array}
$$
And as hand-optimized simulator code calling vectorized subroutines:
\begin{cpp}
void Tableau::prepend_SQRT_Y(size_t target) {
// sqrt(Y) conjugation: X -> -Z, Z -> +X.
zs[target].sign ^= 1;
xs[target].swap_with(zs[target]);
}
\end{cpp}
The unitary matrix and stabilizer tableau are validated against each other by using a vector state simulator and the state channel duality.
For each column in the tableau, the state vector simulator is initialized to contain a number of EPR pairs equal to the number of qubits that the operation under test acts on.
Let S be the subsystem made up of the first qubit from each EPR pair.
The state vector simulator applies the column's input observable to S as a Pauli product (including sign), applies the unitary matrix of the operation under test, and applies the column's output observable to S as a Pauli product (including sign).
It then verifies that the resulting state is equal (including global phase) to the state produced by applying just the unitary matrix of the operation under test to S.
This proves that the unitary matrix conjugates the input Pauli product into the output Pauli product, as specified by the tableau.
For some operations, further checks are needed before I feel confident their unitary matrix and tableau are correct.
For example, consider the $\sqrt{Y}$ operation.
The $Y$ operation has two square roots (up to global phase).
It is an arbitrary convention which of the two roots is the principal root.
What if I mixed them up?
Then I would enter the tableau for $\sqrt{Y}^\dagger$ instead of $\sqrt{Y}$, and also the unitary matrix for $\sqrt{Y}^\dagger$ instead of $\sqrt{Y}$.
Comparing the two will not catch the problem.
In these cases, I also included consistency tests which verify some disambiguating circuit identity.
For example, I want the choice of principal root to be consistent across axes, and I follow the usual convention that $\sqrt{Z} = \texttt{diag}(1, i)$.
So I can disambiguate $\sqrt{Y}$ by verifying that $\sqrt{Y} = H_{YZ} \cdot \sqrt{Z} \cdot H_{YZ} \neq \sqrt{Y}^\dagger$ where $H_{YZ} = (Y + Z) / \sqrt{2}$.
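For concreteness, that disambiguating identity can be checked by multiplying out the matrices (with $H_{YZ} = (Y + Z)/\sqrt{2}$ as above):
$$
H_{YZ} \cdot \sqrt{Z} \cdot H_{YZ}
= \frac{1}{2}
\begin{bmatrix} 1 & -i \\ i & -1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix}
\begin{bmatrix} 1 & -i \\ i & -1 \end{bmatrix}
= \frac{1}{2}
\begin{bmatrix} 1 + i & -1 - i \\ 1 + i & 1 + i \end{bmatrix},
$$
which matches $\texttt{unitary}(\sqrt{Y})$ given above rather than its adjoint.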
Once an operation's tableau is established as being correct, it can be used to verify the hand optimized code.
The test code does this by generating several large random tableaus \cite{bravyi2020randomtableau}, and confirming that composing the operation's tableau into the random tableau using a general method gives the same result as applying the hand optimized code.
Because of the local and discrete nature of tableau operations, this sort of fuzzing is extremely effective at catching bugs.
One important risk factor in testing is the risk of simply forgetting to test an operation in the first place.
I'd normally use code coverage tools to verify that the tests are exercising everything, but I couldn't get any C++ code coverage tool that I tried to work.
Instead, the test code takes advantage of the fact that every supported gate appears as data in a list.
This listed data is used for crucial functionality such as parsing, so I can expect it to be complete.
With this listed data, one test can verify all of the defined gates.
For example:
\begin{cpp}
TEST(gate_data, tableau_data_vs_unitary_data) {
for (const auto &gate : GATE_DATA.gates()) {
if (gate.flags & GATE_IS_UNITARY) {
EXPECT_TRUE(tableau_agrees_with_unitary(
gate.tableau(), gate.unitary())) << gate.name;
}
}
}
\end{cpp}
Noise channels are harder to test than Cliffords, because the behavior of noise channels isn't deterministic.
I tested noise channels by creating tests that sampled circuits applying the noise channel to simple states.
Then, statistical checks verified the consistency of the samples against independent definitions of noise channels from Cirq \cite{quantum_ai_team_and_collaborators_2020_4062499}.
The \path{X_ERROR}/\path{Z_ERROR} transposition that I mentioned slipped through these tests because originally they only exercised the bulk sampling API (which was a natural choice due to needing many samples), not the single-sample interactive API.
Tests are run with compiler optimizations enabled, but also run without compilation optimizations with address and memory sanitization checks enabled.
A continuous integration system verifies that the tests pass before allowing commits to be merged into the main branch.
\subsection{Profiling and Optimizing}
To determine the performance of individual components of Stim, I wrote a small benchmarking framework that would repeatedly run some task and compare the time it took to a reference time.
For example, here is code benchmarking the multiplication of Pauli strings with ten thousand terms:
\begin{cpp}
BENCHMARK(PauliString_multiplication_10K) {
size_t n = 10 * 1000;
PauliString p1(n);
PauliString p2(n);
benchmark_go([&]() {
p1.ref().inplace_right_mul_returning_log_i_scalar(p2);
}).goal_nanos(90).show_rate("Paulis", n);
}
\end{cpp}
Note that an important assumption here is that the compiler is inlining the \path{benchmark_go} method and the lambda.
I found that making \path{benchmark_go} a template with the lambda's type as a parameter caused more consistent inlining, but I can make no guarantees.
The framework runs whatever is in the \path{benchmark_go} body for 0.5 seconds, counting how many iterations were finished.
It then produces output comparing the inferred time to a reference time.
Example benchmarking output can be seen in \fig{stim-benchmark-output}.
This benchmarking methodology would not work well for a team of people (e.g. because different computers require different reference times), but was sufficient for my purposes.
When I wanted to improve the performance of a benchmark, or fix a regression, I would build Stim using debug flags \texttt{-g -fno-omit-frame-pointer} and run it under Linux's \path{perf} tool to identify functions and lines of code taking disproportionate amounts of time.
For example, the reason that I optimized the Bernoulli distribution sampling process to use less entropy per error is because profiling showed that generating entropy for errors was a significant bottleneck.
\begin{figure}
\centering
\resizebox{0.99\linewidth}{!}{
\includegraphics{assets/bench-methods.png}
}
\caption{
Screenshot of results from \texttt{stim\_benchmark}.
Each line is the result of a small benchmark defined in Stim's source code.
The ASCII bar indicators on the left show deviations from the reference time in a visual way that can easily be scanned over.
The location of the asterisk, relative to the center of the ASCII bar, indicates how many decibels (factors of $1.26$) slower or faster the run was (compared to the reference time).
In the screenshot, a few benchmarks are running slow.
That could indicate some sort of performance regression, or could just be noise.
}
\label{fig:stim-benchmark-output}
\end{figure}
\subsection{Portability}
One of the major downsides of using AVX instructions is that they are not portable.
They won't work on older CPUs, and they won't work on architectures besides x64 (e.g. most phones).
Stim tries to work around this by hiding all AVX instructions behind an opaque type \path{simd_word}.
Stim includes one implementation of \path{simd_word} that uses 256-bit-wide AVX instructions, another that uses older 128-bit-wide SSE instructions, and another that uses no non-standard instructions.
The presence of defines like \path{__AVX2__} then determine which \path{simd_word} implementation is used.
Because the size of a \path{simd_word} is not fixed, code is written in a fashion that avoids depending on this size.
For example, the class \path{simd_bits}, which represents a bit packed array that supports vectorized instructions, will help the caller with this by automatically padding its size up to a multiple of \path{simd_word}'s size.
\path{simd_bits} also exposes methods which handle the boilerplate parts of iterating over the words.
\path{simd_bit_table} plays a similar role for two dimensional data.
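A minimal sketch of the compile-time dispatch idea is shown below; the alias name and fallback choices are placeholders, and Stim's actual \path{simd_word} wrapper is richer than a bare type alias.
\begin{cpp}
#include <cstddef>
#include <cstdint>
#if defined(__AVX2__)
#include <immintrin.h>
using bitword = __m256i;       // 256-bit words via AVX2
#elif defined(__SSE2__)
#include <emmintrin.h>
using bitword = __m128i;       // 128-bit words via SSE2
#else
using bitword = uint64_t;      // portable fallback with no special instructions
#endif

// Code elsewhere is written against sizeof(bitword) rather than a fixed width.
constexpr size_t bits_per_word = sizeof(bitword) * 8;
\end{cpp}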
\subsection{Input Format}
Users need some way of describing quantum circuits to Stim.
I considered using an established format for this, such as OpenQASM.
I dropped the idea because OpenQASM has a lot of semantics that would be time consuming to implement (such as functions, named variables, and file includes).
Also OpenQASM is currently undergoing a breaking version change \cite{openqasmbreakingchange}.
Consequently, I decided to make yet another input format.
Stim's input format is nearly minimal.
It takes a series of lines, and each line can specify a gate to apply to some targets.
For example, the line ``\texttt{H 0}'' says to apply a Hadamard to qubit 0 and the line ``\texttt{CNOT 0 1}'' says to apply a Controlled-NOT operation controlled by qubit 0 targeting qubit 1.
That's essentially all that's necessary to know in order to use the format, besides the names of supported gates.
There are of course several exceptions to this minimalist ideal.
Blank lines are permitted.
Lines can be suffixed with a comment prefixed by a hash ``\texttt{\# like this}''.
Gates can be broadcast over multiple targets like ``\texttt{H 0 1 2}''.
And, most shamefully, there is a ``\texttt{REPEAT N \{...\}}'' macro which repeats a block of instructions a given number of times (drastically reducing some file sizes).
\subsection{Output Format}
By default, Stim outputs a series of `0' and `1' characters, one for each measurement result.
The final measurement result is followed by a newline character `\textbackslash n'.
When multiple shots are taken, the measurement results (and newline) from the first shot are output, then the results (and newline) from the second shot, and so forth.
Stim also supports a binary output format, triggered by providing the command line argument \path{--out_format=b8}.
This format packs 8 measurement results into each byte of output, ordered from least significant bit to most significant bit.
The last byte output for a shot is padded to completion with 0s.
There is no separator between the bytes from separate shots.
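For illustration, packing one shot's measurement results into the b8 format can be done like this (a sketch, not Stim's code):
\begin{cpp}
#include <cstddef>
#include <cstdint>
#include <vector>

// Pack measurement results into b8: 8 results per byte, least significant bit
// first, with the final byte zero-padded.
std::vector<uint8_t> pack_b8(const std::vector<bool> &shot) {
    std::vector<uint8_t> packed((shot.size() + 7) / 8, 0);
    for (size_t k = 0; k < shot.size(); k++) {
        packed[k / 8] |= uint8_t(shot[k]) << (k % 8);
    }
    return packed;
}
\end{cpp}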
In addition to these ``dense'' formats, Stim supports sparse output formats like \path{--out_format=hits} and \path{--out_format=r8} that focus on the indices of non-zero bits.
These output formats are useful when sampling detectors, because non-zero bits are rarer.
\section{Comparison}
\label{sec:compare}
I compared Stim's performance to Aaronson and Gottesman's \path{chp} simulator \cite{aaronson2004chp}, IBM's Qiskit simulator (\texttt{qiskit method=`stabilizer'}) \cite{Qiskit}, Google's Cirq simulator (\path{cirq.CliffordSimulator}) \cite{quantum_ai_team_and_collaborators_2020_4062499}, and finally Anders and Briegel's \path{graphsim} \cite{anders2006fastgraphsim} simulator.
I performed the comparison benchmarks on my work laptop: a ThinkPad running gLinux (a Google internal modified variant of Debian) with an Intel Core i7-8650U CPU @ 1.90GHz and 16GB of RAM.
I compiled \path{stim} and \path{chp} with \texttt{g++ v10.2.1} using optimization level \path{-O3}.
I generated circuit files understood by each simulator (samples of these files are included in the ancillary files of this paper), and then iterated through the files of each benchmark timing how long each simulator took.
Output from the simulators was discarded, not recorded.
I didn't do anything special to prepare the computer for benchmarking except ensure it was plugged in with a full battery with no other user programs running.
I didn't disable dynamic frequency scaling or auto-updating mechanisms (the simulators differ in performance by large enough factors that I don't think this is a problem).
Timing was done by calling \path{time.monotonic()} or \path{std::chrono::steady_clock::now()} before and after the key simulation methods and computing the difference (I edited source code to do this when necessary).
Timing data doesn't include program startup or circuit parsing, except for ``\texttt{stim with startup+parsing}'' where I used \texttt{date +\%s\%N} before and after invoking Stim in Bash.
Because the various simulators I benchmarked don't consume a common format or support a common gate set, it was often necessary to decompose one gate into several for one of the simulators.
For example, \path{chp} doesn't have a Z gate so two S gates were used instead when needed.
Additionally, because \path{graphsim} is an API rather than an end-to-end tool, I had to write a bit of glue code to drive that API using data read from a file.
For simplicity, I wrote this glue code to use \path{chp}'s file format which has very few gates.
After running the benchmarks I noticed that there were outlier times at the smallest sizes.
I think this is because the last problem size in each benchmark was large, and so simulating it could for example consume a lot of memory and evict pages that would then fault during the next run (which would be the smallest case in the next benchmark).
To correct this, I discarded the suspicious data for the three smallest cases from each benchmark and re-ran them on their own.
I chose five benchmarking tasks: bulk sampling a surface code circuit (see \fig{bench-surface-1000}), sampling a surface code circuit (see \fig{bench-surface}), sampling a Bacon-Shor code circuit (see \fig{bench-bacon}), sampling a randomly generated circuit (see \fig{bench-random}), and sampling multi-level S state distillation (see \fig{bench-distill}).
See the figure captions for commentary on each benchmark task.
\begin{figure}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{assets/bench-surface.png}
}
\caption{
Delay until first sample on $d \times d \times d$ unrotated surface code memory circuits \cite{horsman2012latticesurgery}.
The intention of this benchmark is to highlight simulators that can take advantage of redundant structure in the surface code, such as the abundance of local stabilizers and deterministic measurements.
Based on the apparent scaling, it appears that none of the simulators are making much use of the structure of the surface code.
Stim scales comparatively well on this benchmark because Stim performs deterministic measurements in worst case linear time instead of quadratic time.
}
\label{fig:bench-surface}
\end{figure}
\begin{figure}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{assets/bench-bacon.png}
}
\caption{
Delay until first sample on $d \times d \times d$ Bacon-Shor code memory circuits \cite{bacon2006operator}.
The intention of this benchmark is to contrast with the surface code benchmark, in that the Bacon-Shor code also has a lot of structure but doesn't have lots of deterministic measurements.
Note how Stim's apparent scaling on this benchmark looks more like the other simulators than it did in the surface code benchmark.
}
\label{fig:bench-bacon}
\end{figure}
\begin{figure}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{assets/bench-random.png}
}
\caption{
Delay until first sample on random circuits.
The intention of this benchmark is to highlight simulators with good performance in circuits that lack much exploitable structure.
Each circuit is made up of $n$ qubits operated on for $n$ layers.
Each layer randomly applies an $H$, $S$, or $I$ gate to each qubit, then samples a random pairing of the qubits and applies a CNOT to each pair, then samples 5\% of the qubits to measure in a random basis.
At the end of the circuit, every qubit is measured in a random basis.
One effect I don't understand here is why Stim appears to be scaling differently from chp.
}
\label{fig:bench-random}
\end{figure}
\begin{figure}
\centering
\resizebox{\linewidth}{!}{
\includegraphics{assets/bench-distill.png}
}
\caption{
Delay until first sample on nested 7-to-1 S state distillation circuits \cite{fowler2012surfacecodereview}.
The intention of this benchmark is to highlight simulators that do well on circuits made up of nearly independent pieces.
For example, with 5 levels of distillation, there are initially 2401 independent pieces (each with 15 qubits) performing identical copies of the 7-to-1 distillation circuit \href{https://algassert.com/quirk\#circuit=\%7B\%22cols\%22\%3A\%5B\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C\%22X\%22\%2C1\%2C\%22X\%22\%2C\%22X\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C\%22X\%22\%5D\%2C\%5B\%22\%E2\%80\%A2\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%5D\%2C\%5B1\%2C\%22\%5E\%3DA7\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22inputA7\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%2C\%22H\%22\%5D\%2C\%5B1\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%2C\%22Measure\%22\%5D\%2C\%5B1\%2C\%22inputA7\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%5E\%3DA7\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C\%22X\%22\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%2C1\%2C\%22X\%22\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B\%22Z\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22X\%22\%2C\%22X\%22\%2C1\%2C1\%2C1\%2C1\%2C\%22\%E2\%80\%A2\%22\%5D\%2C\%5B1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22Chance3\%22\%5D\%2C\%5B\%22Bloch\%22\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C1\%2C\%22\%7C0\%E2\%9F\%A9\%E2\%9F\%A80\%7C\%22\%2C\%22\%7C0\%E2\%9F\%A9\%E2\%9F\%A80\%7C\%22\%2C\%22\%7C0\%E2\%9F\%A9\%E2\%9F\%A80\%7C\%22\%5D\%5D\%2C\%22init\%22\%3A\%5B\%22-\%22\%2C\%22i\%22\%2C\%22i\%22\%2C\%22i\%22\%2C\%22i\%22\%2C\%22i\%22\%2C\%22i\%22\%2C\%22i\%22\%2C\%22\%2B\%22\%2C\%22\%2B\%22\%2C\%22\%2B\%22\%5D\%7D}{(view in Quirk)}.
Then the output qubits from each of these pieces are put into groups of seven and handed along to the next stage, which has 343 independent pieces.
This process continues until there is a single output qubit remaining.
graphsim does particularly well on this benchmark, due to the connected components of the graph state perfectly matching the independent pieces of the circuit.
}
\label{fig:bench-distill}
\end{figure}
Overall, I would say that the benchmarks show that at larger sizes Stim outperforms everything else by one or more orders of magnitude, and that when collecting thousands of samples Stim outperforms everything else by many orders of magnitude.
A notable exception is the multi-level distillation task, where \path{graphsim} shines due to its ability to simulate piecewise circuits in a piecewise fashion.
Actually, in most of the benchmarks, there is circuit structure present that should allow a simulator that notices and uses that structure to outperform Stim.
This would be an interesting avenue for future work.
\section{Conclusion}
\label{sec:conclusion}
In this paper I presented Stim, a fast stabilizer circuit simulator.
Stim was my attempt at simulating surface code circuits with tens of thousands of qubits and millions of operations in a tenth of a second.
Although that goal wasn't reached, Stim is regardless a useful piece of software.
I still think it's possible to simulate stabilizer circuits with tens of thousands of qubits and millions of gates in a tenth of a second, at least in special cases.
For example, when a person analyzes a surface code circuit, they notice the repetitiveness of the low level details of the circuit and zoom out to focus on topological details like lattice surgeries and braids.
If the stabilizer simulator could do something similar, decent speedups should be possible.
That is to say, one avenue of attack is to improve the performance of finding and maintaining sparse representations.
Another avenue of attack is to re-examine the assumptions going into the problem being solved.
The motivating use case I described at the start of the paper was a user making small iterative changes to a large circuit.
In that context the typical simulation is not a brand new circuit, but rather a slight variation of a circuit from just a moment ago.
This suggests that work could be cached and reused.
Alternatively, perhaps as part of creating the circuit, the user has provided information that can be used by the simulator.
For example, the user may have specified the circuit in terms of repeated pieces or may have noted that a measurement is supposed to relate to previous measurements in a specific way.
The simulator could verify this information (instead of having to find it) and then exploit it.
Ultimately, I hope the experimental and theoretical sides of the quantum community find Stim to be a useful tool.
When you can do something ten or a hundred times faster, new and interesting use cases become possible.
For example, with a fast enough simulator, you could consider brute forcing (or machine learning) the internal minutia of an error correction circuit to minimize undesirable error propagation properties.
I certainly intend to take advantage of the speed in my own work.
\section{Acknowledgements}
I thank Michaels Broughton and Newman for putting up with excessive jokes about {\em brute speed}.
I thank Michael Broughton for advice and discussions on profiling and optimizing code.
I thank N. Cody Jones for useful discussions about simulation algorithms, and in particular for pointing out that randomizing the Z frame after resets and measurements was sufficient for sampling from the space of noiseless circuit outputs.
I thank Hartmut Neven for creating an environment where this work was possible in the first place.
\bibliographystyle{plainnat}
\section{Introduction}
Massive imaging and spectroscopic surveys have played an essential role in improving our understanding
of the statistical properties of a wide variety of celestial objects, such as solar system bodies, stars,
galaxies, and active galactic nuclei (AGN). Surveys are also crucial for modern, high precision cosmology,
and there are a number of ongoing and upcoming surveys that address the nature of dark matter and dark energy.
The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; \cite{aihara18b}) is among the most ambitious of the ongoing surveys,
with its aim to cover 1400 deg$^2$ under excellent seeing conditions in multiple filters down to unprecedented depths.
HSC is a wide-field (1.7 degree diameter) optical imager \citep{miyazaki18} installed at the prime focus of the 8.2m Subaru Telescope
operated by National Astronomical Observatory of Japan (NAOJ). The combination of the field of view and telescope
aperture makes it the most efficient survey instrument to date.
A large imaging survey with this instrument, the HSC-SSP survey, has been awarded 300 nights on the Subaru Telescope; the survey started in March 2014.
The survey consists of 3 layers: Wide, Deep, and UltraDeep. The Wide layer covers 1400 deg$^2$ in 5 broad-band filters
($grizy$) down to about 26th magnitude. The Deep layer has 4 separate fields (XMM-LSS, COSMOS, ELAIS-N1, DEEP2-F3)
roughly equally spaced in Right Ascension.
These 4 fields total about 26 deg$^2$. In addition to the broad-bands, we also observe in 3 narrow-band filters
(NB387, NB816, NB921) in the Deep layer to target emission line objects.
The UltraDeep layer has 2 fields: COSMOS and the Subaru/XMM-Newton Deep Survey (SXDS).
Thanks to long integration times (10-20 hours) in both broad and narrow-bands ($grizy$ and NB816, NB921, NB1010),
we reach $\sim$28th magnitude over 4 deg$^2$.
For further details, the reader is referred to the survey design paper \citep{aihara18b}.
As of this writing, the survey has used more than 2/3 of the allocated time and has been obtaining excellent data throughout.
Our early science results are summarized in a special issue of the Publications of the Astronomical Society of Japan
(January 2018), which includes exciting results on solar system bodies, stars, galaxies, AGN, and cosmology.
The first public data release (PDR1) from HSC-SSP was made in February 2017, including data taken through
November 2015 from the first 61.5 nights of observations \citep{aihara18a}. Subsequent incremental releases added to
the scientific value of PDR1. The first incremental release happened in June 2017, which
included photometric redshifts for the Wide layer \citep{tanaka18} and deep COSMOS data
from a joint data set taken by the HSC team and astronomers from the University of Hawaii \citep{tanaka17}. The second
incremental release was in November 2017, including an emission line object catalog \citep{hayashi18},
weak-lensing simulation data \citep{mandelbaum18}, and multi-band SXDS catalog
\citep{mehta18}.
The current paper presents a new major data release from the HSC-SSP, the second public data release (PDR2).
PDR2 is a major update in terms of both area and depth. The data quality is also improved thanks to several
important updates made to the processing pipeline. Thus, PDR2 is a superset of PDR1 in all aspects. In addition
to the latest survey data, we release the carefully calibrated galaxy shape measurements from \citet{mandelbaum18}
needed for weak-lensing analyses. All the data products are described in detail in the following sections.
The paper is structured as follows. We first give a brief summary of PDR2 in Section \ref{sec:the_release}.
Section \ref{sec:hardware_updates} summarizes recent hardware updates,
followed by a description of improvements to the processing pipeline in Section \ref{sec:pipeline_updates}.
Section \ref{sec:data} describes the data processing as well as a summary of our data products.
Section \ref{sec:data_quality_and_known_issues} presents our data quality assurance tests and
a list of known issues in the release. A short overview of the data access tools is given
in Section \ref{sec:data_access} and we give an update on our collaborating surveys in Section
\ref{sec:status_of_collaborating_surveys}. We conclude in Section \ref{sec:summary}
with a plan for future data releases.
We use the same terminology as in the PDR1 paper to
refer to the data and its processing; see Section 3.1 of \citet{aihara18a} for details.
\section{Overview of the Release}
\label{sec:the_release}
\subsection{The release and changes from PDR1}
This release includes data taken from March 2014 through January 2018 from 174 nights of observing time,
including nights lost to weather. This is a significant increase from the previous release,
which included 61.5 allocated nights. Fig.~\ref{fig:sky_coverage} shows the survey footprint in PDR2.
Some of the disjoint fields in PDR1 are now connected to each other as the survey has progressed.
Each of the separate Wide layer fields is now given a number; thus they are named W01-W07 as summarized
in Table \ref{tab:field_names}. Note that the field numbers will change in the next major data release (PDR3)
due to further progress in the survey.
Table \ref{tab:exptime} presents useful global statistics of the data, such
as the exposure time and limiting magnitudes for each filter and survey layer.
Note that the Deep+UltraDeep area is larger in the table than that mentioned in the previous section
because the table includes regions covered in a single exposure (i.e., the area increase is due to dithering).
Major changes since PDR1 include:\\
\begin{itemize}
\item The Wide area which has been observed to the nominal survey depth in all the filters (full-color full-depth
area in what follows) has increased from about 100 square degrees to 300 square degrees.
\item The Wide layer data in PDR1 included only the full-color full-depth area, but this release
includes partially observed area as well, i.e., regions not covered in all 5 filters or which have not reached the full depth.
\item The Deep and UltraDeep fields in COSMOS and SXDS overlap each other and they are jointly processed.
All the coadded images, multi-band catalogs, and database tables are based on the joint data.
\item Due to changes in the processing pipeline, the database table schemas have been revised significantly,
and the table columns have different names, although the correspondence should be obvious in most cases.
Thus, SQL scripts for PDR1 do not work for PDR2.
\item The $r$ and $i$-band filters are replaced with new and more uniform filters called $r2$ and $i2$, respectively
(Section \ref{sec:new_filters}). The release includes data taken with both old and new filters. They are coadded together.
\item A new sky subtraction algorithm has been implemented that preserves the wings of large objects much
better than in PDR1 (Section \ref{sec:global_sky_subtraction}).
\item The object detection algorithm has also been improved and the catalog now includes significantly fainter sources
(Section \ref{sec:dynamic_object_detection}).
\item A new algorithm to remove artifacts from coadds has been introduced. It works very efficiently in the Wide layer
(Section \ref{sec:artifact_rejection}).
\item The $y$-band images were significantly affected by scattered light. This scattered light is now subtracted in
the processing, resulting in much cleaner $y$-band images (Section \ref{sec:scattered_light_in_the_yband}).
\item A fix to PSFEx, which models the shape of the point-spread function (PSF), has been made, allowing us to
handle data with very good seeing (Section \ref{sec:psfex_fix}).
All the good seeing data are used in this release.
\item Lossless compression has been applied to all the pipeline processed images.
Not all image browsers and I/O interfaces will be able to read these images; the user should use recent
versions of \code{ds9} and other tools (Section \ref{sec:lossless_image_compression}).
\item Color terms to translate Pan-STARRS1 photometry, which we calibrate our photometry against, have been updated.
Also, we now exclude late-type stars from photometric calibrations in order to avoid effects of metallicity variations of stars
(Section \ref{sec:colorterms}).
\end{itemize}
There are, however, known issues in the data, and users are referred to the issue list
in Section \ref{sec:known_issues} before exploring the data for science.
The list is kept up-to-date at the data release website\footnote{\url{https://hsc-release.mtk.nao.ac.jp/}}.
\begin{figure*}
\begin{center}
\includegraphics[width=18cm]{survey_area_0h_lowres.eps}\vspace{1cm}
\includegraphics[width=18cm]{survey_area_12h_lowres.eps}\vspace{1cm}
\includegraphics[width=18cm]{survey_area_north_lowres.eps}
\end{center}
\caption{
The area covered in this release shown in equatorial coordinates.
The blue and green areas show the Wide and Deep+UltraDeep layers, respectively.
For the Wide layer, the darker color means that the area is observed in more filters (up to 5 filters).
The red boxes indicate the approximate boundaries of the three disjoint regions that will make up the final Wide survey.
The Galactic extinction map from \citet{schlegel98} is shown in the background.
}
\label{fig:sky_coverage}
\end{figure*}
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{cccc}
\hline
Layer & Field Name & Database Schema & Database Field Identifier\\
\hline
UltraDeep & SXDS & \code{dud} & \code{sxds}\\
UltraDeep & COSMOS & \code{dud} & \code{cosmos}\\
Deep & XMM-LSS & \code{dud} & \code{sxds}\\
Deep & E(xtended)-COSMOS & \code{dud}& \code{cosmos}\\
Deep & ELAIS-N1 & \code{dud} & \code{elias_n1}\\
Deep & DEEP2-3 & \code{dud} & \code{deep2_3}\\
Wide & WIDE01H & \code{wide} & \code{w01}\\
Wide & XMM-LSS & \code{wide} & \code{w02}\\
Wide & GAMA09H & \code{wide} & \code{w03}\\
Wide & WIDE12H & \code{wide} & \code{w04}\\
Wide & GAMA15H & \code{wide} & \code{w04}\\
Wide & VVDS & \code{wide} & \code{w05}\\
Wide & HECTOMAP & \code{wide} & \code{w06}\\
--- & AEGIS & \code{wide} & \code{w07}\\
\hline
\end{tabular}
\end{center}
\caption{
List of the observed fields. The field names in the Wide layer are left-over from PDR1.
AEGIS is observed as a photometric redshift calibration field at
the Wide depth. The WIDE12H and GAMA15H fields are now connected and they are combined into
a single field (\code{w04}). The database field identifier should be used to query for
a given field. See the online schema browser for details. Note that \code{dud} means Deep/UltraDeep.
}
\label{tab:field_names}
\end{table*}
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{l|ccccccccc}
\hline\hline
{\bf Wide} & $g$ & $r$ & $i$ & $z$ & $y$ & & & & \\
exposure (min) & $10^{+2}_{-5}$ & $10^{+2}_{-5}$ & $16^{+6}_{-6}$ & $20^{+3}_{-10}$ & $16^{+6}_{-6}$ & & & & \\
seeing (arcsec) & $0.77^{+0.09}_{-0.08}$ & $0.76^{+0.15}_{-0.11}$ & $0.58^{+0.05}_{-0.05}$ & $0.68^{+0.08}_{-0.07}$ & $0.68^{+0.12}_{-0.09}$ & & & & \\
depth (mag) & $26.6^{+0.2}_{-0.3}$ & $26.2^{+0.2}_{-0.3}$ & $26.2^{+0.2}_{-0.4}$ & $25.3^{+0.2}_{-0.3}$ & $24.5^{+0.2}_{-0.3}$ & & & & \\
saturation (mag) & $17.6^{+0.5}_{-0.3}$ & $17.4^{+0.7}_{-0.4}$ & $18.0^{+0.2}_{-0.3}$ & $17.5^{+0.5}_{-0.5}$ & $17.3^{+0.7}_{-0.6}$ & & & & \\
area (deg$^2$) & 942 & 1022 & 796 & 905 & 924 & & & & \\
\hline
{\bf Deep+UltraDeep} & $g$ & $r$ & $i$ & $z$ & $y$ & $NB387$ & $NB816$ & $NB921$ & $NB1010$ \\
exposure (min) & $49^{+24}_{-17}$ & $45^{+24}_{-17}$ & $65^{+46}_{-37}$ & $130^{+46}_{-51}$ & $88^{+23}_{-42}$ & $68^{+13}_{-13}$ & $120^{+30}_{-30}$ & $112^{+56}_{-14}$ & ---\\
seeing (arcsec) & $0.81^{+0.05}_{-0.13}$ & $0.74^{+0.03}_{-0.05}$ & $0.62^{+0.07}_{-0.07}$ & $0.63^{+0.04}_{-0.03}$ & $0.71^{+0.06}_{-0.06}$ & $0.80^{+0.11}_{-0.08}$ & $0.69^{+0.11}_{-0.12}$ & $0.66^{+0.04}_{-0.07}$ & ---\\
depth (mag) & $27.3^{+0.4}_{-0.3}$ & $26.9^{+0.2}_{-0.3}$ & $26.7^{+0.3}_{-0.5}$ & $26.3^{+0.2}_{-0.4}$ & $25.3^{+0.2}_{-0.5}$ & $25.1^{+0.2}_{-0.2}$ & $26.1^{+0.2}_{-0.3}$ & $25.9^{+0.2}_{-0.3}$ & ---\\
saturation (mag) & $18.1^{+0.4}_{-0.3}$ & $18.2^{+0.5}_{-0.3}$ & $18.7^{+0.1}_{-0.2}$ & $17.7^{+0.3}_{-0.3}$ & $17.3^{+0.2}_{-0.2}$ & $14.7^{+0.1}_{-0.3}$ & $17.0^{+0.5}_{-0.4}$ & $17.0^{+0.3}_{-0.3}$ & ---\\
area (deg$^2$) & 35 & 35 & 35 & 36 & 36 & 22 & 26 & 28 & ---\\
\hline \hline
{\bf Wide} & & & & & & & & & \\
target exposure (min) & 10 & 10 & 20 & 20 & 20 & & & & \\
target depth (mag) & 26.8 & 26.4 & 26.2 & 25.4 & 24.7 & & & & \\
\hline
{\bf Deep} & & & & & & & & & \\
target exposure (min) & 84 & 84 & 126 & 210 & 126 & 84 & 168 & 252 & \\
target depth (mag) & 27.8 & 27.4 & 27.1 & 26.6 & 25.6 & 24.8 & 26.1 & 25.9 & \\
\hline
{\bf UltraDeep} & & & & & & & & & \\
target exposure (min) & 420 & 420 & 840 & 1134 & 1134 & & 630 & 840 & 1050\\
target depth (mag) & 28.4 & 28.0 & 27.7 & 27.1 & 26.6 & & 26.8 & 26.5 & 25.1\\
\hline \hline
\end{tabular}
\end{center}
\caption{
Approximate exposure time, seeing, $5\sigma$ depth for point sources,
and saturation magnitudes (also for point sources) for each filter and survey layer,
averaged over the entire survey area included in this release.
The numbers in the top half of the table are the median and the quartiles of the distribution, except for area, which
shows the total area covered in at least 1 exposure.
The target exposure times and expected depths (i.e., survey goals) are also shown for reference in
the bottom half of the table.
The numbers for the Wide layer shown in the top are close to the full-depth values, while
those for the Deep+UltraDeep are closer to the Deep depth due to the spatial averaging
(Deep is wider than UltraDeep).
Quality assurance (QA) plots showing the depth as a function of position for each field and for each filter
are available at the data release site.
Note that the expected depths are for point sources and are in reasonable agreement
with the measured depths. The $5\sigma$ limiting mags within 2 arcsec diameter
apertures, which may be more relevant for extended sources, are shallower by 0.3~mags than
the point source limits.
Note that NB1010 is not included in this release.
Note as well that there is significant spatial variation of all the values listed here over the survey area.
}
\label{tab:exptime}
\end{table*}
\subsection{Survey Progress}
\label{sec:survey_progress}
The progress of the Wide survey is summarized in Fig.~\ref{fig:survey_progress}.
This is a good measure of the overall survey progress because two-thirds of
the total observing time is for the Wide survey. The survey speed has remained
essentially the same since PDR1 (61.5 nights). The $r$-band is close to
the expected speed, but the other filters are behind schedule. The completion
rate at the end of January 2018
is 88, 97, 67, 81, and 80\% in the $g$, $r$, $i$, $z$, and $y$ band, respectively.
The $i$-band is the slowest due to the stringent seeing constraint ($\lesssim0.75$ arcsec)
as this is the band in which the weak-lensing analysis is done.
Overall, the survey is progressing at roughly 80\% of the expected speed. The reason
for the 20\% shortfall is a combination of an optimistic assumption for the overhead between exposures (30 seconds
in practice, as opposed to the 20 seconds originally assumed), 30-second calibration exposures that
were not included in the original plan, weather losses, and so on.
Nevertheless, the data quality is excellent; Fig.~\ref{fig:seeing_distrib}
shows the distribution of seeing in each visit for each filter; the median $i$-band
seeing is about 0.6 arcsec. This is superior to other on-going ground-based imaging surveys and is one of
the strengths of the HSC-SSP survey.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{survey_progress.eps}
\end{center}
\caption{
Allocated number of nights and number of visits acquired for the Wide layer.
The top panel shows the cumulative number of visits for the Wide layer obtained as
a function of the number of observing nights. The dashed lines indicate the average
numbers of visits required to complete the survey in 300 nights in the $gr$ (bottom line; 4 visits per pointing) and
$izy$ filters (top line; 6 visits per pointing), respectively. The bottom panel shows the cumulative number of
visits as a function of time. The meanings of the lines are the same as the top panel.
}
\label{fig:survey_progress}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{seeing_distrib.eps}
\end{center}
\caption{
Seeing distribution of individual visits for each filter.
The numbers and arrows show the median of the distribution. The vertical dashed lines indicate
the seeing threshold (1.3 arcsec) below which visits are used in the processing.
The plot includes only visits with sky transparency greater than 0.3 (Section \ref{sec:data_screening}).
Note that seeing shown is as measured and is not corrected for airmass.
}
\label{fig:seeing_distrib}
\end{figure}
\subsection{Previous Internal Releases}
\label{sec:previuos_internal_relesaes}
Our public data releases are based on internal data releases made about 1 year prior to the release.
PDR1 is based on the S15B internal data release, and this PDR2 is based on the S18A data
release made to the HSC collaboration in August 2018. Table \ref{tab:data_releases} summarizes the
internal releases we have had since PDR1, which have been used in our science papers.
The S16A release was made after S15B and some of our papers in the PASJ special issue published
in January 2018 were based on this release. S16A was processed with the same pipeline
as in S15B and the data quality remained the same, only the area and depth were increased.
The S17A release incorporated a major pipeline update: the HSC
code branch was merged with the LSST main development stream \citep{juric17}.
The biggest change visible to users was the change in the table schema and the names of various measurement outputs.
However, the correspondence is obvious in most cases.
Finally, the S18A release, on which this PDR2 is based, was made in August 2018 and included
a number of improvements in the data processing algorithms, which we describe in detail in Section \ref{sec:pipeline_updates}.
\begin{table*}[htbp]
\begin{center}
\begin{tabular}{l|ccccrrc}
\hline
Release & Date & Layer & N & \multicolumn{1}{c}{Area} & \multicolumn{1}{c}{Files} & \multicolumn{1}{c}{N} & hscPipe \\
& & & filter & \multicolumn{1}{c}{(deg$^2$)} & \multicolumn{1}{c}{(TBytes)} & \multicolumn{1}{c}{object} & version \\
\hline \hline
Public Data Release 2 & 2019-05-31 & Deep+UltraDeep & 8 & 31 & 88.8 & 20,451,226 & 6.7 \\% 32,818,438 (all)
(=S18A) & & Wide & 5 & 1114 (305) & 332.4 & 436,333,410 & 6.7 \\% 712,126,710 (all)
\hline \hline
S17A & 2017-09-28 & Deep+UltraDeep & 8 & 31 & 68.0 & 17,506,715 & 5.4 \\% 28,409,805 (all)
& & Wide & 5 & 1026 (225) & 209.9 & 348,033,013 & 5.4 \\% 566,657,807 (all)
\hline \hline
S16A & 2016-08-04 & UltraDeep & 7 & 4 & 7.5 & 3,208,918 & 4.0.5 \\% 5,089,002 (all)
& & Deep & 7 & 28 & 8.0 & 16,269,129 & 4.0.5 \\% 26,405,915 (all)
& & Wide & 5 & 456 (178) & 245.0 & 183,391,488 & 4.0.5 \\% 293,520,281 (all)
\hline \hline
Public Data Release 1 & 2017-02-28 & UltraDeep & 7 & 4 & 8.6 & 3,225,285 & 4.0.1 \\% 5,073,357 (all)
(=part of S15B) & & Deep & 7 & 26 & 16.6 & 15,959,257 & 4.0.1 \\% 25,918,070 (all)
& & Wide & 5 & 108 (100) & 57.1 & 52,658,163 & 4.0.1 \\% 84,017,247 (all)
\hline
\hline
\end{tabular}
\end{center}
\caption{
Summary of this public release and previous internal data releases.
The 5th column gives the survey area covered at least in one filter and one exposure in square degrees.
The full-color full-depth area in the Wide survey is shown in parentheses.
The Deep and UltraDeep data have been jointly processed since S17A.
The 7th column shows the number of primary objects.
}
\label{tab:data_releases}
\end{table*}
\subsection{Calibrated Shape Measurements from PDR1}
\label{sec:calibrated_shape_measurements}
Detailed galaxy shape measurements for weak-lensing analyses were withheld from PDR1. At this time,
we make these withheld measurements publicly available. The shape catalog
we release here is from \citet{mandelbaum18} and is based on the S16A internal data release (see Section \ref{sec:previuos_internal_relesaes}),
which is larger than PDR1.
As described in detail in \citet{mandelbaum18}, the catalog covers 136.9~deg$^2$ split into
six separate fields. Galaxy shapes are measured in the $i$-band, in which the mean seeing is $0.58$\arcsec.
Our PSF model meets the accuracy required for
weak lensing science; the fractional PSF size residual is approximately $0.003$ (requirement: $0.004$) and
the PSF model shape correlation function is $\rho_1<3\times 10^{-7}$ (requirement: $4\times 10^{-7}$) at
0.5$^\circ$ scales. Various null tests are statistically consistent with zero, except for star-galaxy
shape correlations, which reveal additive systematics on $>1^\circ$ scales.
Our first weak-lensing cosmology results presented in \citet{hikage19} are based on this shape catalog.
A number of quality assurance cuts have already been applied (see \cite{mandelbaum18} for details), and
the catalog is ready for science. It has been loaded to the database under the PDR1 schema,
and the whole set of the S16A data is also included there.
The shape measurements made in PDR2 are withheld for now because they are not fully validated yet. They will be released in the future.
Similarly, deblended images (\code{heavyFootprint}\footnote{
The processing pipeline attempts to deblend overlapping sources and the result of this process is deblended images,
which are called \code{heavyFootprint}.
}) are also withheld and will be the subject of a future release.
\section{Hardware Updates}
\label{sec:hardware_updates}
\subsection{New Filters}
\label{sec:new_filters}
The $r$ and $i$ band filters were among the first set of filters manufactured for HSC.
Their filter curves turned out to depend on radius: the cutoff wavelength on the short wavelength side
of the filter transmission curves changes with radial distance from
the filter centers. The night sky spectrum is very structured with many strong emission lines.
As the filter bandpass changes with radius, these lines fall in and out of the filter, resulting
in radial structure in the sky background.
This also means that the photometry of detected objects varies across the field of view. In order to
achieve better photometric accuracy and better background behavior, we manufactured
new filters: $r2$ and $i2$.
These filters were installed on June 24th, 2016 and February 2nd, 2016, respectively.
These new filters have much weaker radial trends; their detailed properties are summarized in \citet{kawanomoto18}.
\subsection{Scattered Light in $y$-band}
\label{sec:encoder_laser_shield}
One of the known issues in PDR1 is that $y$-band images show a pair of arcs due to
scattered light, which were not subtracted very well in the sky subtraction process.
Each of the arcs is about 4-5 arcmin in thickness and crosses the entire field of view.
After some engineering observations, the light source and its path were identified.
The instrument rotator has 8 encoders, each of which has an LED to read a barcode.
Light from the LEDs reflected off the surface of the lens barrel of the Wide-Field Corrector,
thus reaching the detector surface. The shutter body is wider than the lens barrel, but
is not sufficiently wide to block the oblique incident light.
The scattered light caused a pair of arc-like structures that moved as the instrument rotated.
The exact wavelength of the LED light is unclear, but observations have shown that
the scattered light was seen only in the $y$-band and a few narrow-bands around $0.9$--$1.0\,\mu$m.
In order to eliminate the scattered light, covering screens were installed at
the edge of the shutter body on November 13, 2017 to obstruct the light path.
Data taken after that date do not exhibit any sign of scattered light.
However, all data taken before that date were affected, and we have developed software to subtract
the scattered light (Section \ref{sec:scattered_light_in_the_yband}).
\section{Pipeline Updates}
\label{sec:pipeline_updates}
PDR1 was processed with \code{hscPipe v4} as described in detail in \citet{bosch18}.
As mentioned earlier, there have been major pipeline updates since then, and PDR2 is processed
with \code{hscPipe v6}. This section describes the new features of \code{v6}.
\subsection{Global Sky Subtraction}
\label{sec:global_sky_subtraction}
In previous versions of the pipeline,
background subtraction was performed on each CCD individually.
We used an empirical background model consisting of ``superpixels''
typically of size $256\times 256$ pixels ($43''\times43''$).
A robust measure of the background was obtained for each superpixel by taking
a clipped mean and ignoring \code{DETECTED} pixels, and the superpixels were fit
with a 6th order two-dimensional Chebyshev polynomial.
The superpixels were then interpolated at the regular pixel positions using Akima spline,
and the resultant background image was subtracted from the CCD image.
While simple to implement,
this algorithm has two important drawbacks:
\begin{itemize}
\item The superpixel scale is necessarily limited in size to less than the size of the CCD,
which means that bright extended objects
(e.g., nearby galaxies or bright stars)
can easily bias the sky model.
\item Because CCDs are treated individually,
there can be discontinuities in the sky model between neighboring CCDs.
\end{itemize}
In order to address these deficiencies,
we developed a new algorithm to perform background subtraction over the entire field-of-view.
The new algorithm incorporates two elements.
The first element is an empirical background model extending over the entire focal plane.
This uses the superpixel technique we used before,
but extends it so that the model can be constructed over the entire focal plane.
Because this model operates across CCD boundaries,
discontinuities at CCD edges are reduced.
Experiments indicate that an appropriate superpixel scale for HSC is $1024\times 1024$ pixels
($\sim2'.8\times2'.8$).
Significantly larger scales leave sky subtraction residuals that vary from exposure to exposure.
The second element is a ``sky frame'',
which is the mean response of the instrument to the sky for a particular filter.
It is constructed from a clipped-mean of the superpixels with objects masked out from many
observations (typically several tens) that have large dithers, so that the same objects do not land on
the same pixels. This allows subtraction of static features that have a smaller scale
than the empirical background model. We use superpixels of $256\times 256$ pixels,
which is sufficient to model the ``rings'' in the $r$ and $i$-bands, which are due
to variations in the filter transmission curves as a function of radius from the center
(Section \ref{sec:new_filters}; the rings are essentially gone in the new $r2$ and $i2$ filters).
Fig.~\ref{fig:skyFrames} shows the sky frames. It is interesting that each filter has
its own characteristic spatial structure. The systematic offsets between the CCDs seen in blue filters
($g$ in particular) are likely due to variations in the CCD responses, while large-scale patterns
are due to variations in the filter response.
When subtracting the sky from a science exposure,
we first measure and subtract the large-scale empirical background model (first element),
and then fit and subtract a scaled sky frame (second element).
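To make the two-step procedure concrete, the following minimal sketch (Python/NumPy, not the actual \code{hscPipe} implementation) applies it to a single image: a clipped-mean superpixel background is removed first, and a static sky frame is then fitted and subtracted. For simplicity, the sky frame is taken here to be an already-expanded full-resolution image, the superpixel grid is expanded with a crude nearest-neighbor fill instead of a spline, and all names are illustrative.
\begin{verbatim}
import numpy as np

def superpixel_background(image, detected, box=1024, nsigma=3.0, niter=3):
    # Clipped-mean background on a grid of superpixels, ignoring DETECTED pixels.
    ny, nx = image.shape
    nby, nbx = -(-ny // box), -(-nx // box)
    model = np.zeros((nby, nbx))
    for j in range(nby):
        for i in range(nbx):
            vals = image[j*box:(j+1)*box, i*box:(i+1)*box]
            vals = vals[~detected[j*box:(j+1)*box, i*box:(i+1)*box]]
            for _ in range(niter):
                mu, sig = vals.mean(), vals.std()
                if sig == 0:
                    break
                vals = vals[np.abs(vals - mu) < nsigma*sig]
            model[j, i] = vals.mean()
    # crude nearest-neighbor expansion back to pixels (a spline in the real pipeline)
    return np.repeat(np.repeat(model, box, axis=0), box, axis=1)[:ny, :nx]

def subtract_sky(image, detected, sky_frame):
    # Step 1: large-scale empirical background model.
    residual = image - superpixel_background(image, detected)
    # Step 2: fit and subtract a scaled static sky frame.
    good = ~detected
    scale = np.sum(residual[good]*sky_frame[good]) / np.sum(sky_frame[good]**2)
    return residual - scale*sky_frame
\end{verbatim}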
Fig.~\ref{fig:skySubtraction} compares the old and new algorithms.
The old algorithm (left) tends to subtract extended halos of bright objects as indicated by
the dark halo around the large object at the center. This has indeed been problematic for
studying nearby galaxies. The new algorithm (right) preserves the extended
wings much better, demonstrating the improved performance of the sky subtraction.
This improvement is particularly important for large extended sources such as nearby galaxies.
However, masks around bright stars to indicate regions that suffer from false detections and poor photometry
were not revised accordingly
as we discuss in detail in Section \ref{sec:bright_star_masks}. They will be fixed in
a future incremental release, which will be made by September 1st, 2019.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{skyFrames.eps}
\end{center}
\caption{
HSC sky frames showing the full focal plane in the $g$, $r$, $r2$, $i$, $i2$, $z$ bands from top-left to bottom-right.
The rings in the $r$ and $i$ bands are due to radial variations in the filter curves.
The $g$ and $r2$ bands show some CCD-dependent features, likely due to CCD sensitivity variations in the blue.
The other filters not shown here (i.e., $y$ and the narrow-band filters) do not exhibit any significant spatial structure.
}
\label{fig:skyFrames}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{skysb1.eps}
\includegraphics[width=4cm]{skysb2.eps}
\end{center}
\caption{
\textbf{Left:} coadd image of a nearby galaxy in the $i$-band from PDR1.
\textbf{Right:} same image but constructed using the new sky subtraction algorithm.
The images are stretched to the same level for a fair comparison.
}
\label{fig:skySubtraction}
\end{figure}
\subsection{Dynamic Object Detection}
\label{sec:dynamic_object_detection}
In previous versions of the pipeline,
the detection threshold was set statically
as a particular multiple of the noise ($5\sigma$, to be specific).
On coadds, especially coadds with many exposures,
we found that sources that were obviously present in the image were not detected.
We attribute this to an incorrect noise model:
the pipeline tracks an estimate of the variance of each image,
but that estimate can be wrong after convolution operations
since they move a fraction of the variance into covariance,
which is not tracked by the pipeline.
In order to deal with this,
we now set the detection threshold dynamically.
We measure the PSF fluxes for a sample of points chosen to be on empty sky,
avoiding object footprints. If the variance is perfectly correct,
the standard deviation of the PSF fluxes should agree with the uncertainty
expected from the variance over the effective area of the PSF.
The variance image is not perfect and the ratio between the standard deviation of
these PSF fluxes and the mean of quoted errors provides a correction factor to
the detection threshold.
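A minimal sketch of this correction (Python/NumPy; the empty-sky PSF fluxes and their quoted errors are assumed to be available already, and the names are illustrative) is:
\begin{verbatim}
import numpy as np

def detection_threshold(sky_psf_fluxes, sky_psf_flux_errs, nominal_sigma=5.0):
    # PSF fluxes measured at empty-sky positions (object footprints avoided).
    # If the variance plane were exact, the measured scatter of these fluxes
    # would equal the mean quoted error and the correction factor would be 1.
    factor = np.std(sky_psf_fluxes) / np.mean(sky_psf_flux_errs)
    return nominal_sigma * factor
\end{verbatim}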
Figure~\ref{fig:dynamicDetection} shows an example field
with and without this feature.
There are many faint sources that are missed by the previous detection algorithm
but are detected in the revised algorithm, demonstrating the improvement in
the object detection. This improvement is particularly important for the UltraDeep
layer, in which we are interested in very faint, distant galaxies.
The detection threshold is still effectively $5\sigma$ and we are not detecting
many fake sources.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{dynamicDetection.eps}
\end{center}
\caption{
$i$-band image of a small piece of the UltraDeep-COSMOS field ($0'.9\times0'.4$),
showing the improvement in the detection depth
using the new dynamic detection feature.
The green circles indicate sources detected with the old detection algorithm,
while the red circles indicate sources newly detected with the dynamic detection algorithm.
The faintest objects in this image are about $27.5-28.0$ mag.
}
\label{fig:dynamicDetection}
\end{figure}
\subsection{Artifact Rejection}
\label{sec:artifact_rejection}
We have updated the algorithm that identifies and clips transient artifacts before coaddition.
The new algorithm uses the time-series of PSF-matched warped\footnote{
When we coadd individual visits, we resample individual CCD images to a common pixel grid
using 3rd-order Lanczos interpolation \citep{bosch18}. We refer to this procedure as warping
in what follows.
}
images to identify transient artifacts, such as optical ghosts, satellite trails, and cosmic rays.
Most of the cosmic rays are identified and interpolated in the CCD processing, but some
are left unidentified and the new algorithm finds them.
The new algorithm takes both direct and PSF-matched warps as input and writes
direct coadds as output. Direct warps have been resampled to a common pixel grid.
PSF-matched warps, after being resampled to the pixel grid, have additionally been PSF-homogenized to
a Double Gaussian PSF model with a FWHM of 7.7 pixels (1.3 arcsec), which corresponds to the seeing cut applied
in the data screening (Section \ref{sec:data_screening}); all visits therefore have better original seeing than this.
The new algorithm then performs the following steps.
The PSF-matched warps are stacked into a 2-sigma-clipped mean coadd which serves as a naive, artifact-free
model of the static sky.
To find artifacts, this PSF-matched sigma-clipped coadd is subtracted from each PSF-matched warp to
produce a ``difference warp.'' Source detection is then run on each difference warp to find both
positive and negative sources. This step generates a set of regions (i.e., \texttt{Footprints}) for each visit
where the pixels deviate by more than 5$\sigma$ from the naive static-sky model.
Some of these detections are non-astrophysical artifacts that should be clipped, such as
optical ghosts and satellite trails, but others are not.
The latter include astrophysical sources such as variable stars and quasars, as well as
image-subtraction imperfections.
These variable sources and subtraction imperfections can be separated from
the real transients using the number of epochs in which they appear.
Variable sources and subtraction imperfections appear in most epochs because if an object is hard to
subtract cleanly in one epoch, then it is hard to subtract in most.
This feature not only allows us to filter false positives but also allows us to define a ``transient'' as
a source that appears in fewer than a configurable percentage of visits.
As a side effect, a fraction of astrophysical transients such as supernovae and
asteroids may be labeled as transient depending on observing cadence.
This temporal threshold between ``transient'' and ``static'' is parameterized as a piecewise linear
function of the number of visits $N$.
For $N$ of 1 or 2, the threshold is 0; there is not enough information in one or two epochs to identify outliers.
For $N$ of 3 or 4, the threshold is 1, and for $N=5$, up to 2 epochs can be clipped.
For $N>5$, the threshold is $2+0.03N$ to accommodate coadds of up to hundreds of epochs.
For each artifact in each warp difference, if more than 50\% of the footprint appears in fewer visits
than this threshold, it is labeled transient. Otherwise, it is labeled persistent and not clipped.
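The temporal threshold can be written compactly as follows (a Python sketch of the piecewise-linear rule quoted above; the function name is illustrative):
\begin{verbatim}
def temporal_threshold(n_visits):
    # A footprint that appears in fewer visits than this value (over more than
    # 50% of its area) is labeled "transient" and clipped from the coadd.
    if n_visits <= 2:
        return 0            # too few epochs to identify outliers
    if n_visits <= 4:
        return 1
    if n_visits == 5:
        return 2
    return 2 + 0.03 * n_visits   # accommodates coadds of hundreds of epochs
\end{verbatim}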
For more detail about the algorithm and performance compared with the clipping algorithm, see \citet{alsayyad18}.
As tested on a few tracts of PDR1 data, the new algorithm performs better in
both false positives and false negatives.
A few failure modes are known.
If an artifact persists in the coadd, it is because of one of the following reasons.
\begin{enumerate}
\item The number of epochs at its position is two or less.
This can be confirmed by downloading the images giving the number of epochs contributing to each pixel of the coadd,
which have filenames of the form \texttt{[patch]\_nImage.fits}.
\item In under-dithered regions, optical ghosts and chip defects overlap and appear at the same
position in most of the exposures. The algorithm thus interprets them as part of the persistent sky,
rather than transients.
\item The artifact is compact compared to an overlapping static source.
These are not clipped to protect against over-clipping around stars that are susceptible to false
positives in the image-differencing. If the number of pixels in the footprint of a static source is
greater than that of the artifact, the artifact is not clipped.
This scenario occurs, for example, when a satellite trail passes through a very bright star or galaxy.
\end{enumerate}
The new artifact rejection algorithm thus works well in the Wide layer, in which dithers between
visits are large. It is less efficient in Deep/UltraDeep layers due to smaller dithers as we
discuss in Section \ref{sec:remaining_artifacts}.
\subsection{Scattered Light in the $y$-band}
\label{sec:scattered_light_in_the_yband}
\begin{figure}
\begin{center}
\includegraphics[width=6.72cm,height=6.29cm]{ystraylight-20-40.eps}
\end{center}
\caption{
Scattered light (the two vertical arcs like eyelids) found in $y$-band images.
The image shows the whole focal plane.
}
\label{fig:Scattered light}
\end{figure}
Thanks to the hardware fix described in Section \ref{sec:encoder_laser_shield},
scattered light from the rotator encoders is no longer seen.
However, all the $y$-band data taken before the fix suffer from it and we have developed software
to remove the scattered light. The spatial pattern of the scattered light
changes with rotator angle in a complicated way, making analytical modeling
of it difficult. We instead chose an empirical approach. We obtained a sequence
of exposures by moving the rotator from $-$180 to $+$180 degrees with a step of
0.5 degrees with the shutter open and the dome closed under dark conditions.
We use these data to simulate the scattered light pattern in a given science exposure.
We first split each CCD into different read-out channels and treated each channel as
a three-dimensional array with two spatial dimensions and one periodic dimension
for the rotation angle. We then applied the discrete wavelet transformation
(Cohen-Daubechies-Feauveau wavelet 9/7) to each of them and took level-6 approximation
coefficients along the two spatial dimensions for compression and denoising.
We achieved a compression ratio of $(2^6)^2 = 4096$, which means that
the total data volume of these exposures was reduced from 2.3TiB to 600MiB, which
is small enough to be distributed as part of the pipeline.
The scattered light subtraction procedure is as follows.
First, we compute the rotator angles at the start and end of an exposure.
Second, we load the compressed dark exposures
and interpolate them along the rotation dimension with periodic cubic splines.
We analytically integrate the cubic splines from the start angle to the end angle,
assuming that the rotator angle changes at a constant rate during an exposure.
This results in an expected illumination pattern on the CCDs that is yet to be
decompressed in the spatial dimension. Finally, we spatially decompress
the illumination pattern, scale it to the exposure time of the image being processed,
and subtract. This is done before the sky subtraction in the CCD processing.
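The interpolation and integration over the rotator angle can be sketched as follows (Python/SciPy; a single wavelet coefficient with a made-up angular dependence stands in for the real compressed dark data, and the names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

angles = np.arange(-180.0, 180.0, 0.5)     # rotator angles of the dark exposures
coeff = np.cos(np.deg2rad(angles))         # stand-in for one wavelet coefficient

# Periodic cubic spline along the rotation dimension (y[0] must equal y[-1]).
spline = CubicSpline(np.append(angles, 180.0), np.append(coeff, coeff[0]),
                     bc_type='periodic')

def mean_coeff_over_exposure(start_angle, end_angle):
    # Assume the rotator angle changes at a constant rate during the exposure,
    # so the expected illumination is the spline averaged between the two angles.
    if np.isclose(start_angle, end_angle):
        return float(spline(start_angle))
    return float(spline.integrate(start_angle, end_angle)) / (end_angle - start_angle)
\end{verbatim}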
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{without-ystraysub.eps}\\
\includegraphics[width=6cm]{with-ystraysub.eps}
\end{center}
\caption{
{\bf Top}: $y$-band Coadd image without the scattered light subtraction.
{\bf Bottom}: Coadd image with the scattered light subtraction. This is only to demonstrate
the scattered light subtraction and no careful processing has been applied (e.g., there is
residual sky around the field edge, but that is not important).
}
\label{fig:Effect of the scattered light subtraction}
\end{figure}
In Figure~\ref{fig:Effect of the scattered light subtraction},
we show two $y$-band images with (bottom) and without (top) the scattered light subtraction.
This is a sample coadd image made from exposures taken at two different position angles on the sky, differing by 90 deg.
There is a hash-like pattern in the top image, which is due to the scattered light
remaining after the background subtraction. The pattern is clearly gone
in the bottom image once the scattered light subtraction is performed.
The $y$-band coadd images in this release are much cleaner than those in PDR1.
We note, however, that this correction was erroneously applied to data taken {\it after}
the hardware fix; see Section \ref{sec:over_subtracted_scattered_light_in_the_yband}.
\subsection{Effective Transmission Curve}
\label{sec:effective_transmission_curve}
The image data products, both single-epoch and coadds, now contain data
structures that report an estimate of the photometric transmission as a
function of both wavelength (within a band) and position on the image.
The single-epoch transmission curves are formed by multiplying separate
spatially-constant transmission curves for the detectors, optics, and fiducial
atmosphere with bandpass filter transmission curves that vary radially over
the focal plane. This is particularly important for the original $r$ and $i$
filters, which had strong radial dependence as discussed in Kawanomoto et al.
(2018; see also discussion in Section \ref{sec:photometry_in_i_and_i2}).
The coadd transmission curves are computed at each point on the coadd by
averaging the per-epoch transmission curves with the same weights used to
build the coadd at that point. This process is like the PSF-model coaddition approach
described in \citet{bosch18}. This naturally reproduces the true discontinuous
spatial structure of the effective coadd transmission curve, at the expense of
a complex internal data structure.
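Conceptually, the coadd transmission at a given position is simply a weighted average of the per-epoch curves; a minimal sketch (Python/NumPy, with illustrative names) is:
\begin{verbatim}
import numpy as np

def coadd_transmission(per_epoch_curves, coadd_weights):
    # per_epoch_curves: (n_epochs, n_wavelengths) transmission curves at this position
    # coadd_weights:    (n_epochs,) weights used to build the coadd at this position
    return np.average(np.asarray(per_epoch_curves, dtype=float),
                      axis=0, weights=np.asarray(coadd_weights, dtype=float))
\end{verbatim}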
Unfortunately, the transmission curve information is not yet utilized when
applying calibrations to the pipeline's own measurements, as this requires
knowledge of the sub-band SEDs for objects, and the tools to infer this robustly
have not yet been developed. However, the transmission information is available in
our data products and can be exploited for scientific applications.
The easiest way for users to extract the transmission is to use tools in \code{hscPipe}
(or a compatible version of the LSST stack). A sample script is available
at the data release site (see the FAQ page).
\subsection{PSFEx Fix}
\label{sec:psfex_fix}
PSFEx \citep{2013ascl.soft01001B} is a widely used package for estimating an image's
point spread function (PSF). We have repackaged it to be usable from \code{python},
and separated the choice of candidate PSF stars from the actual PSF estimation
(Section 4.3 in \cite{bosch18}). We used a pixellated (``delta function'') basis
when running PSFEx; although the individual basis functions are strongly undersampled,
fully-sampled models can still be shifted by sub-pixel offsets using sinc interpolation.
As mentioned in \citet{bosch18},
we discovered that the Lanczos kernels employed by PSFEx caused serious problems for
images with the very best seeing.
We use a determinant radius derived from the 2nd-order moment as a measure of
the size and define a fractional size residual as
\begin{equation}
\frac{r_{det,model} - r_{det,obs}}{r_{det,obs}},
\end{equation}
\noindent
where $r_{det,obs}$ and $r_{det,model}$ are the determinant radius of observed stars and
that of the model, respectively. Fig.~\ref{fig:psfex_fix} shows the fractional size
residual as a function of seeing. As the red points show, the fractional
size error increases up to 0.4\% with a sharp discontinuity at a FWHM of
around 0.5 arcsec. We recall that the pixel scale of HSC is 0.168 arcsec per pixel.
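For reference, the determinant radius and the fractional size residual can be computed from the second-order moments as in the following sketch (Python; illustrative names):
\begin{verbatim}
def determinant_radius(ixx, iyy, ixy):
    # |Q|^(1/4) from the second-order moment matrix Q = [[Ixx, Ixy], [Ixy, Iyy]]
    return (ixx * iyy - ixy**2) ** 0.25

def fractional_size_residual(model_moments, star_moments):
    # (r_det,model - r_det,obs) / r_det,obs for a single star
    r_model = determinant_radius(*model_moments)
    r_obs = determinant_radius(*star_moments)
    return (r_model - r_obs) / r_obs
\end{verbatim}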
Rather than solving the problem of determining suitable
interpolation functions, we decided to resample by interpreting the models as constant
over the sub-pixels, rather than as a continuous function sampled at the pixel centers.
The blue points in Fig.~\ref{fig:psfex_fix} show the improvement from this approach.
The fractional size residual is significantly reduced and is good enough to allow us to
process all of the HSC SSP data, even those taken under the best conditions.
We can now model the PSF reasonably accurately in individual visits, but we have discovered that
the image coaddition, which comes after the individual CCD processing, introduces
systematic errors in the PSF model. As we discuss in detail in Section~\ref{sec:shape_measurements},
the PSF model on the coadds is larger than the observed PSF by about 0.4\%.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{new_interpolation.ps}
\end{center}
\caption{
Difference in the fractional size of the PSF between the observed stars and the PSF model estimated using PSFEx,
as a function of seeing.
The PSF is estimated from a number of stars in individual visits, and we then calculate the FWHM using
an adaptive-Gaussian weighting scheme (Section 4.9.4 in \cite{bosch18}). The red points
use the original version of PSFEx, and the width of the PSF models differs significantly from
the widths of the actual stars. The blue points show the results of using the modifications
described in Section~\ref{sec:psfex_fix}.
}
\label{fig:psfex_fix}
\end{figure}
\subsection{Lossless Image Compression}
\label{sec:lossless_image_compression}
The total data volume of the processed HSC data has been growing rapidly
as we collect more data. In order to save disk space,
images from the pipeline are now written using FITS tiled image compression.
The compression scheme chosen is the lossless \code{GZIP_2} algorithm
in \code{cfitsio} \citep{pence09},
which is applied to the image, mask and variance planes.
The images can be uncompressed using the \code{funpack} facility from \code{cfitsio},
or read using \code{hscPipe}.
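For example, recent versions of \code{astropy} also read FITS tiled image compression transparently, so a compressed image can typically be opened as in the sketch below (the file name and the order of the image, mask, and variance extensions are assumptions here; check the headers of the actual files):
\begin{verbatim}
from astropy.io import fits

path = "calexp-HSC-I-9813-4,4.fits"   # hypothetical compressed coadd file

with fits.open(path) as hdul:
    image = hdul[1].data      # tile-compressed HDUs are decompressed on access
    mask = hdul[2].data
    variance = hdul[3].data
\end{verbatim}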
\subsection{Revised color terms and Restricted Color Range for Photometric Calibration}
\label{sec:colorterms}
Our photometric calibration is based on the Pan-STARRS1 (PS1) DR1 data \citep{schlafly12,tonry12,magnier13,chambers16};
we apply color terms
to translate the PS1 magnitudes into the HSC magnitudes and perform the zero-point
calibration. In \code{hscPipe v4} used in PDR1, we used the Bruzual-Persson-Gunn-Stryker atlas
to derive the color terms. This atlas is an extension of the original \citet{gunn83} atlas into both the UV and near-infrared.
For PDR2, the color terms have been updated using the newer atlas of \citet{pickles98}.
The response functions of the HSC filters used to derive the color terms have also
been updated; the old color terms were computed using the filter transmission at
the center of each filter, but we now use the filter transmission weighted by the surface area.
This operation averages the radial variation of the transmission in the $r$ and $i$-bands
and better represents the system.
There is also a change in the way we select stars for the photometric zero-point calibration.
We used to use all the stars detected in each CCD, but late-type stars tend to have a large
intrinsic color scatter primarily due to variations in metallicity. In addition to the scatter,
there is also a systematic color offset depending on where we observe due to the stellar
population gradient across the Milky Way Galaxy. In order to reduce such effects,
we apply color cuts to exclude late-type stars from the calibration. To be specific,
we impose
\begin{equation}
g-r>0\ {\rm and}\ r-i<0.5.
\end{equation}
\noindent
The $g-r$ cut is not very important as such blue stars are quite rare. The $r-i$ cut
eliminates stars later than K6V, which show significant color variation with metallicity.
These cuts do reduce the number of stars available for calibration.
The reduction is dependent on the sky position, but if we look at
the COSMOS field for instance, about 40\% of the bright stars suitable for calibration
pass these color cuts and there are about 20 stars in each CCD for the zero-point calibration,
which is more than adequate for our purpose.
Comparisons with a previous internal data release do not seem to suggest a major improvement
in the zero-point uniformity, but we should in theory be more robust against metallicity variation
across the Milky Way Galaxy.
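The color selection is trivial to reproduce; a minimal sketch (Python/NumPy, illustrative function name) is:
\begin{verbatim}
import numpy as np

def calibration_star_mask(g, r, i):
    # Keep stars with g-r > 0 and r-i < 0.5; the latter removes stars later
    # than ~K6V, whose colors vary significantly with metallicity.
    g, r, i = (np.asarray(x, dtype=float) for x in (g, r, i))
    return (g - r > 0.0) & (r - i < 0.5)
\end{verbatim}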
\subsection{Additional Mask Planes for Coadds}
\label{sec:additional-mask-planes-for-coadds}
As described in \citet{bosch18}, the \code{hscPipe}'s approach to PSF modeling
on coadds yields some objects that do not have a well-defined PSF, because objects
fall on a boundary such that different exposures contribute to different parts of the image.
The pipeline now includes more image-level mask planes and corresponding
catalog-level flag fields to indicate when this happens:
\begin{itemize}
\item The \texttt{INEXACT\_PSF} image mask bit is set on any pixel for
which the PSF is ill-defined. The corresponding catalog flags are
\texttt{base\_PixelFlags\_flag\_inexact\_psf}\footnote{
The pipeline outputs are stored in the database with slightly different
names. The correspondence is obvious. In this case, the flag is
named \code{\{filter\}\_pixelflags\_flag\_inexact\_psf}, where \code{filter}
should be the filter name such as $g$,$r$,$i$,$z$,$y$.
}
and \texttt{base\_PixelFlags\_flag\_inexact\_psfCenter}, where the former is
set for any object whose above-threshold detection region contains such a
pixel, and the latter is set only when a pixel is near the center of the
object. Whenever \texttt{INEXACT\_PSF} is set, at least one of the
following descriptive flags is always also set to explain why it is set.
\item The \texttt{SENSOR\_EDGE} image mask bit is set when the pixel lies
near the edge of at least one input image. The corresponding catalog flags
are \texttt{base\_PixelFlags\_flag\_sensor\_edge[Center]}.
\item The \texttt{REJECTED} image mask bit is set on a coadd pixel when one
or more contributing input pixels were masked during single-epoch
processing, and could not be interpolated. The majority of pixels with
this mask landed on a bad amplifier or other known sensor defect. The
corresponding catalog flags are
\texttt{base\_PixelFlags\_flag\_rejected[Center]}.
\item The \texttt{CLIPPED} image mask bit is set on a coadd pixel when one
or more contributing input pixels were identified as belonging to
artifacts via the image-differencing algorithm mentioned at the beginning of this section.
The corresponding catalog flags are
\texttt{base\_PixelFlags\_flag\_clipped[Center]}.
\end{itemize}
For many science cases, the PSF model inaccuracies reported by these flags are
actually negligible, as the PSFs of input observations are frequently quite
similar, and hence changes in which images contribute to the coadd do not
appreciably affect the PSF. We encourage science users to test their
analysis both with and without filtering on these flags to determine whether
this effect is important for those cases.
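As an example of such a test, the sketch below (Python/pandas) drops objects whose footprints contain pixels with an ill-defined PSF, using the database-style flag name described in the footnote above; the catalog fragment and the other column names are hypothetical.
\begin{verbatim}
import pandas as pd

# hypothetical catalog fragment with database-style column names
cat = pd.DataFrame({
    "object_id": [1, 2, 3],
    "i_cmodel_mag": [22.1, 23.4, 21.7],
    "i_pixelflags_flag_inexact_psf": [False, True, False],
})

# strict selection: require a well-defined PSF over the whole detection region;
# a looser cut could use the corresponding *center* flag instead
clean = cat[~cat["i_pixelflags_flag_inexact_psf"]]
\end{verbatim}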
\section{Data}
\label{sec:data}
We mostly followed the same data processing procedure as in PDR1, but we briefly describe
how we screened the data and processed them with an emphasis on the differences from PDR1.
\subsection{Data Screening}
\label{sec:data_screening}
We applied mostly the same conditions to screen the raw data for the full
processing as in the last release (see Section 3.2 of \cite{aihara18b}).
The main conditions were (1) sky brightness $\leq 45000$ ADUs,
(2) seeing $\le 1.3$ arcsec, and (3) sky transparency $\geq 0.3$.
The $y$ and NB921 filters are affected by sky fringing, which needs to be subtracted off.
To generate fringe patterns in these filters, we used a slightly relaxed condition:
seeing below $1.5$ arcsec.
Our new sky subtraction algorithm (Section \ref{sec:global_sky_subtraction}) uses
data for the sky frame selected using the same relaxed condition.
For both fringe and sky frames, we generated
a master frame for each filter for each observing run. If the number of visits available
in a run is insufficient ($<50$), we combined data from a few nearby observing runs.
We performed careful visual inspections of the coadd images in
addition to the automated screening of individual visits and reprocessed
several tracts with problematic visits removed. For example, we removed 5 tracts in the $r$-band
that were accidentally traversed by laser light for adaptive optics from another telescope.
\subsection{Data Processing}
\label{sec:data_processing}
The processing flow remained largely the same as in the last data release, but
some small changes were made to incorporate the new features.
We first generated calibration images for the bias and dark subtraction, flat-fielding,
fringe subtraction, and global sky subtraction. Each raw CCD image was
processed by applying these calibration data. The removal of the $y$-band scattered
light described in Section~\ref{sec:scattered_light_in_the_yband} was also performed here (prior to the sky subtraction).
The astrometric and photometric calibrations were
carried out against the Pan-STARRS1 DR1 catalog. In this CCD processing,
the same configuration parameters were used in all the filters, except for NB387,
which is the least sensitive filter and had many fewer stars available for calibration.
We lowered the star selection threshold to include fainter stars so that the processing
did not fail.
The next step, global sky subtraction (Section~\ref{sec:global_sky_subtraction}), was
a big change from PDR1. This stage was implemented as a separate process and produced
sky images that were subtracted from the calibrated CCD images.
After the individual CCD calibrations, we performed a multi-visit calibration of
astrometry and photometry to refine the CCD calibrations. Then, the CCD images
were warped onto common coordinate grids and combined to generate deep coadds.
This was done for each band separately, but we combined the $i$ and $i2$,
and $r$ and $r2$-bands as mentioned earlier. Objects were detected on the coadds,
and detections from multiple filters were merged to a single detection catalog.
The final stage, multi-band measurements, performed object deblending and
various photometric measurements. The resultant multi-band catalog is the one
most useful for science. PSF-matched aperture photometry was mistakenly excluded
from this last step and was instead run as an afterburner process. It has been merged with
the other measurements in the database.
\subsection{Image and Catalog Data}
\label{sec:image_and_catalog_data}
The pipeline generates calibrated CCD images, images warped to patches (warps), and coadds as well as the associated catalog files.
They are all available from our website. Once again, users should be
aware that the lossless compression has been applied to the image files
(see Section \ref{sec:lossless_image_compression}). The directory structure of
the pipeline outputs is similar to PDR1, but the exact locations of some of
the files are different. An important change for the users to notice is that the final coadds are now under
the \code{deepcoadd-results/} directory (PDR1 had them under \code{deepcoadd/}).
There are several new files, which we describe at the data release site.
Galaxy shape measurements based on the \citet{hirata03} algorithm were withheld from PDR1.
As some of the flat files contained the HSM shape measurements, these files were also withheld.
It was originally meant to be a short-term solution, but PDR1 ended up withholding these files
for about 2 years. A major side effect was that other useful information in those files was
inaccessible to users (e.g., single-epoch source catalogs that were not loaded to the database).
In this release, we again choose to withhold the shape measurements as well as the deblended images,
but we make all flat files available, where we exclude only the shapes and deblended images from the files.
We remind the reader again that the shape measurements from PDR1 are now available
as part of this PDR2 (see Section \ref{sec:calibrated_shape_measurements}). All
the flat files withheld from PDR1 are available, too. Some of the most useful files may
include {\tt meas} files, which include deblended images, and {\tt CALSRC} catalogs,
which are single-epoch source catalogs. See the PDR1 site for the list of flat files.
\subsection{Value-added Products}
\label{sec:value_added_products}
In addition to the main data set described above, we include a few value-added data
products in this release.
\begin{itemize}
\item {\bf COSMOS Wide-depth stacks:}
There are many visits in UD-COSMOS observed under a wide range of seeing conditions.
We have stacked a subsample of the UD-COSMOS visits to the nominal exposure times of the Wide survey
for 3 different sets of seeing conditions. In PDR1, we made coadds with 0.5, 0.7 and 1.0 arcsec
seeing, but in PDR2, we instead coadd UD-COSMOS visits with target seeing at
25\%, 50\%, and 75\% of the seeing distributions in each filter in the Wide layer.
The seeing for each filter for each stack is summarized in Table \ref{tab:wide_depth_seeing}.
The target seeing is chosen based on the S17A internal data, and the seeing in Table \ref{tab:wide_depth_seeing} is not
fully consistent with the median and quartile seeing summarized in Table \ref{tab:exptime}.
Nonetheless, these Wide-depth stacks will be useful for characterizing the data quality variation in the Wide layer.
The multiband photometry for each stack is of course available.
\item {\bf Public spectroscopic redshifts:}
We have updated the list of public spectroscopic redshifts from the literature. The list includes
redshifts from zCOSMOS DR3 \citep{lilly09}, UDSz \citep{bradshaw13,mclure13},
3D-HST \citep{skelton14,momcheva16}, FMOS-COSMOS \citep{silverman15,kashino19},
VVDS \citep{lefevre13}, VIPERS PDR1 \citep{garilli14}, SDSS DR12 \citep{alam15}, the SDSS IV QSO catalog \citep{paris18},
GAMA DR2 \citep{liske15}, WiggleZ DR1 \citep{drinkwater10},
DEEP2 DR4 \citep{davis03,newman13}, DEEP3 \citep{cooper11,cooper12}, and
PRIMUS DR1 \citep{coil11,cool13}.
As one-to-one correspondence between the spectroscopic objects and photometric objects is not always
obvious, we match objects within 1 arcsec and all matched objects are stored in the database
(a minimal cross-matching sketch is given after this list).
In most cases, the most likely match will be the object with the smallest matching distance.
There is also
a homogenized spectroscopic confidence flag for each object to make it easy for users to make a clean
redshift catalog (recall that each spectroscopic survey has its own flagging scheme). See the online documentation
for the definition. We emphasize that users should acknowledge the original data source(s) when using this table.
\item {\bf Random points:}
We draw random points with a density of 100 points per square arcmin for each coadd image for each filter.
These random points can be used for, e.g., clustering analysis, identifying problematic areas, computing
the survey area and the fraction of masked areas, etc. The random points are available in the database
and the data release site describes how to use this table.
Note that the random points are affected by the issues with masks around bright stars (Section \ref{sec:bright_star_masks}).
We plan to update the random point catalog together with the revised masks in a future release.
\end{itemize}
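The positional matching mentioned in the spectroscopic-redshift item above can be reproduced with standard tools; a minimal sketch (Python/astropy, with made-up coordinates) is:
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

# made-up coordinates for HSC photometric objects and spectroscopic objects
phot = SkyCoord(ra=[150.10, 150.20]*u.deg, dec=[2.20, 2.30]*u.deg)
spec = SkyCoord(ra=[150.1001, 150.50]*u.deg, dec=[2.2001, 2.60]*u.deg)

# nearest photometric counterpart of each spectroscopic object,
# accepted only if it lies within 1 arcsec
idx, sep, _ = spec.match_to_catalog_sky(phot)
matched = sep < 1.0*u.arcsec
\end{verbatim}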
We have also computed photometric redshifts for a large number of objects. They are not included
in the current release but will be released in a future incremental release. Other data products
may also be released and we will make announcements on the data release website. Registered users
will be notified.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{cccccc}
\hline
Stack & $g$ & $r$ & $i$ & $z$ & $y$\\
\hline
Best & 0.63 & 0.61 & 0.52 & 0.70 & 0.59\\
Median & 0.74 & 0.79 & 0.57 & 0.75 & 0.73\\
Worst & 0.87 & 0.89 & 0.67 & 0.83 & 0.94\\
\hline
\end{tabular}
\end{center}
\caption{
Seeing in arcsec for each filter and for each of the COSMOS Wide-depth stacks.
}
\label{tab:wide_depth_seeing}
\end{table}
\section{Data Quality and Known Issues}
\label{sec:data_quality_and_known_issues}
We now demonstrate the quality of the data in this release. There have been a number of
pipeline changes since PDR1 as described above and we believe that the overall quality is significantly improved.
We have performed an extensive set of quality assurance tests in our validation campaign.
In what follows, we present some of the most important tests with one or two key figures for each test.
A full set of tests and figures can be found at the data release site.
\subsection{Photometry: Internal Consistency}
\label{sec:internal_consistency}
We first present the photometric quality of our data. We begin with internal consistency checks.
In Fig.~\ref{fig:kronpsf_diff}, we compare the difference between Kron magnitudes and PSF magnitudes
for bright ($i<21.5$) stars in the Wide XMM-LSS field in the $g$-band. The PSF photometry is
based on the model PSF constructed by coadding PSFs from individual visits \citep{bosch18}, while the Kron photometry
is a moment-based adaptive-aperture measurement on the coadd. Thus, the consistency between them is a good measure
of the internal consistency. As shown in the figure, we achieve $\sigma\sim0.01$ mag across
the field. This level of scatter is typical of most fields in most filters, except for the $y$-band, which shows
a larger scatter ($\sigma\sim0.015$), due at least in part to shallower depths.
Overall, this test demonstrates good internal accuracy of our photometry. We also perform the same
analysis comparing the CModel and PSF photometry. CModel asymptotically approaches PSF for
point sources and we indeed achieve an excellent performance of $\sigma\lesssim0.002$ mag (plot not shown).
These trends are largely filter-independent, although they do depend on the seeing size as expected.
All the plots for each field and filter are available online.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{W02_HSC-G-psfKron.eps}
\end{center}
\caption{
Scatter between Kron and PSF magnitudes in the $g$-band for bright ($i<21.5$) stars in the Wide XMM-LSS field.
The scatter is measured separately in each patch. The large squares show the tract borders.
}
\label{fig:kronpsf_diff}
\end{figure}
\subsection{Photometry: External Consistency}
\label{sec:external_consistency}
Next, we make comparisons between HSC and external data sets. Fig.~\ref{fig:psfmagdiff_ps1}
compares the $r$-band photometry between HSC and PS1 for point sources with $r<20$.
This magnitude cut is brighter than that applied in the previous figure because we now compare
with the shallower PS1 data.
We apply a color term to translate the PS1 system into the HSC system for
a fair comparison. As we use the PS1 photometry to calibrate the HSC zero-points, the comparison
here is not entirely external but it is still useful.
The left panel shows that the difference between HSC and PS1 is close to zero, as expected.
Most filters and fields do not show a significant difference, but there is a small offset in
the $i$-band in the Deep+UltraDeep fields and also in some small regions in the Wide fields.
This is due to a calibration issue discussed in Section \ref{sec:photometry_in_i_and_i2}.
The right panel shows the scatter of the magnitude difference. The scatter is at a level of
$\sim0.01$, indicating a good photometric consistency. A similar level of consistency is seen
in most of the other filters and in other fields. The $y$-band generally shows a slightly larger scatter ($\sim0.02$ mag)
possibly due to the varying water vapor absorption in the atmosphere. The comparison is much worse
in NB387 ($\sim0.2$ mag), but this is due to the intrinsic color scatter of stars;
the color terms to extrapolate from PS1 photometry to NB387 are sensitive to stellar metallicity variations,
which the PS1 photometry cannot fully capture.
It should be noted that there are small regions where the scatter is significantly larger in the broad-bands
(see Section \ref{sec:inconsistent_fluxes}).
Fig.~\ref{fig:color_comp} shows color differences between HSC and PS1. Here we choose the Wide-GAMA15H field
in the $g-r$ color as an example. The other colors and fields can be found online. As expected from the magnitude differences
discussed above, we observe no significant color offset and no spatial structure in the $g-r$ color.
The color scatter is small, $\sim1.5\%$, which is reassuring.
\begin{figure*}
\begin{center}
\includegraphics[width=8.5cm]{DUD_ELAIS_HSC-R-ps1-calPsf_delta.eps}
\includegraphics[width=8cm]{DUD_ELAIS_HSC-R-ps1-calPsf.eps}
\end{center}
\caption{
{\bf Left:} $r$-band PSF magnitude difference between HSC and PS1 for point sources in the Deep ELAIS-N1 field.
The difference is measured in each patch separately. The large squares show the tract borders.
{\bf Right:}
Same as left panel, but the color scale shows the scatter.
}
\label{fig:psfmagdiff_ps1}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{color_comp_ps1_24_0.eps}
\end{center}
\caption{
$g-r$ color difference between HSC and PS1 in the Wide-GAMA15H field.
}
\label{fig:color_comp}
\end{figure*}
Another useful external check is to compare the observed location of the stellar sequence on
a two-color diagram with the location expected from a stellar spectral library. We use
the \citet{pickles98} library as above and compute the synthetic magnitudes for each filter
using the HSC total system response functions. We fit 2nd order polynomials to the linear
part of the synthetic stellar sequence, avoiding late-type stars. We then estimate the offset
between the observed stars and the fitted curve. This is done for each patch separately.
If our calibration is good, we expect that
the offset is small (but not necessarily zero, because the stellar library likely has small but non-zero systematics)
and uniform over the entire survey area. The top panel of
Fig.~\ref{fig:stellar_sequence} shows a sample plot for the GAMA09H field using the $gri$ photometry.
The offset is mostly within $\sim0.02$ mag, indicating a good calibration, although there is
weak spatial structure.
The structure at R.A.=130--140 deg is likely due to the $i$ vs. $i2$ issue discussed in Section \ref{sec:photometry_in_i_and_i2}.
We have applied corrections for the Galactic extinction, but not all the stars are completely
behind the dust curtain and the correction may introduce a spatial structure.
This may actually explain the feature around R.A.=150 deg, as we do not observe a significant
difference between HSC and PS1 magnitudes there (we do not correct for the extinction in
the HSC vs. PS1 comparisons).
We further measure the scatter around the observed stellar sequence
as it is another good indicator of the photometric accuracy. The observed scatter of the stellar
sequence plotted in the bottom panel is $\sim0.02$ mag and is fairly uniform across the field.
Note that the scatter arises from the combination of the three filters, suggesting that the calibration error
per filter is roughly $0.02/\sqrt{3}$.
Overall, these tests suggest that our photometric calibration is accurate to about 1\%, which should be
sufficient to enable a wide variety of scientific explorations of the data.
The current accuracy is reasonably good, but we expect to achieve better accuracy in the future with
the effective transmission curve discussed in Section \ref{sec:effective_transmission_curve}.
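A simplified version of this stellar-locus test (Python/NumPy; details such as the exact fitting range are omitted, and the names are illustrative) is:
\begin{verbatim}
import numpy as np

def stellar_locus_offset(g, r, i, synth_gr, synth_ri):
    # Fit a 2nd-order polynomial to the synthetic stellar sequence (late-type
    # stars already excluded), then measure the offset of the observed stars.
    coeffs = np.polyfit(synth_gr, synth_ri, deg=2)
    model_ri = np.polyval(coeffs, np.asarray(g) - np.asarray(r))
    delta = (np.asarray(r) - np.asarray(i)) - model_ri
    return np.median(delta), np.std(delta)   # per-patch offset and scatter
\end{verbatim}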
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{offsets_22_0.eps}\\\vspace{1cm}
\includegraphics[width=12cm]{scatter_22_0.eps}
\end{center}
\caption{
{\bf Top:} Color offset of the stellar sequence with respect to the expected sequence
from the \citet{pickles98} stellar library. The color bars on the right show the level
of offset. The dark gray regions are either the area where we do not have a sufficient
number of stars (mostly field edges) or the area that is not covered in all the required
filters (e.g., $gri$ in this case). In order to enhance the spatial (non-)uniformity,
we subtract the median offset over the entire field.
{\bf Bottom:} As in the top panel but for the color scatter of the stellar sequence.
The small squares represent patches and the large ones represent tracts.
The median scatter over the field is indicated in the plot.
}
\label{fig:stellar_sequence}
\end{figure*}
\subsection{Astrometry}
\label{sec:astrometry}
The astrometric catalog from Gaia is an obvious choice of external source to
evaluate the astrometric accuracy of the HSC data. As in the previous section,
we use bright point sources to estimate the astrometric errors relative to Gaia DR1
\citep{gaia16a,gaia16b}\footnote{
At the time when we processed the PDR2 data, the Gaia DR2 was not yet available.
We will use the DR2 (or newer) in our future release.
}.
As an example,
we show the Hectomap field in Fig.~\ref{fig:astrometry}, where we show the mean offset in
position in each patch. There seems to be a large-scale
trend that the offset becomes larger at larger Right Ascension. Such large-scale features are
also observed in other fields. In addition, there are a few small regions, typically
a chunk of several patches, where the offset
is larger than the others. Such small-scale features are seen in the other fields as well.
We have not yet understood where these features come from, but they might be due
to the PS1 catalog, because the spatial pattern does not follow the tract borders
(we apply a tract-wide astrometric calibration in the processing).
They could also be due to proper motions, which we have ignored in our astrometric
calibration process.
In any case, even the large offsets are below 0.1 arcsec and most science
cases are unlikely to be significantly affected by these astrometric errors,
but users who require high precision object positions should be warned.
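For reference, a minimal sketch of the per-patch statistic is given below; it assumes matched HSC and Gaia positions (in arcsec) for a single patch and implements a simple iterative $3\sigma$-clipped mean, mirroring the robust mean plotted in Fig.~\ref{fig:astrometry}. The correction of the R.A. difference for cos(Dec.) is omitted for brevity.
\begin{verbatim}
# A minimal sketch of a per-patch astrometric offset, assuming arrays of
# matched HSC-Gaia position differences (in arcsec) for one patch.
import numpy as np

def clipped_mean(delta, nsigma=3.0, niter=5):
    # Iteratively sigma-clipped mean, robust against outliers.
    d = np.asarray(delta, dtype=float)
    for _ in range(niter):
        mu, sigma = d.mean(), d.std()
        if sigma == 0.0:
            break
        keep = np.abs(d - mu) < nsigma * sigma
        if keep.all():
            break
        d = d[keep]
    return d.mean()

# d_ra  = clipped_mean(ra_hsc  - ra_gaia)   # per-patch R.A. offset
# d_dec = clipped_mean(dec_hsc - dec_gaia)  # per-patch Dec. offset
\end{verbatim}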
\begin{figure*}
\begin{center}
\includegraphics[width=16cm]{W06_HSC-I-gaia-dRaStars.eps}\\\vspace{0.5cm}
\includegraphics[width=16cm]{W06_HSC-I-gaia-dDecStars.eps}
\end{center}
\caption{
Astrometric differences between HSC and Gaia in the $i$-band. Mean offset per patch in
Right Ascension and Declination are plotted in the top and bottom panels, respectively.
The mean here is computed as a $3\sigma$-clipped mean, and is thus robust against outliers.
}
\label{fig:astrometry}
\end{figure*}
\subsection{PSF Model}
\label{sec:shape_measurements}
A key ingredient for precise shape measurements is a good PSF model.
As described in \citet{bosch18}, our PSF model at a given position on a coadd is
constructed by coadding the PSF model from individual visits. One easy test of the
PSF model accuracy is to compare the size of the observed PSF in the coadds with that of the model
PSF at the same position.
Fig.~\ref{fig:size_residual} makes this comparison. We use the fractional size residual defined
in Eq. 2 and use only bright ($i<22$) stars.
The residual is small but nonzero,
about $4\times10^{-3}$, meaning that the model PSF is slightly larger
than the observed PSF. This is unlikely to affect the object detection and photometry
at a significant level, but it does not pass our stringent requirement for cosmic shear analysis
(see discussion in \cite{mandelbaum18}).
Investigations are underway to fully understand this residual.
We have found that at least part of it is from
image warping: we warp individual CCD images with the third-order Lanczos kernel
when we generate coadds, but the size residual decreases if we increase the order to fifth order.
However, this does not seem to fully solve the problem and further work is needed here.
When we release the shape measurements from PDR2, we hope to release an updated version of
the PSF model that passes the cosmic shear requirement.
We note that galaxy shapes may be better measured in individual CCD
images than on coadds as it avoids the need for pixel resampling
during warping. However, the current residuals in the PSF models
obtained on the coadds are not sufficiently large to motivate the
substantial effort of developing, validating and using such an
approach. Preliminary evidence suggests that modest adjustments to our
current approach of measurements on the coadds would likely be
successful in reducing the PSF model residuals below our science
requirements for current as well as future releases (Armstrong et al.
in prep.).
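The sketch below illustrates a size-residual statistic for bright stars. The exact definition used in this paper is its Eq.~2 (see also \cite{mandelbaum18}); the trace-based size from adaptive moments used here is an assumption for illustration, with the sign convention chosen so that a positive residual means the model PSF is larger than the observed PSF.
\begin{verbatim}
# A minimal sketch of a fractional PSF size residual for bright stars.
# The adaptive-moment size T = Ixx + Iyy is an assumed definition; the exact
# statistic used in the paper is its Eq. 2.
import numpy as np

def fractional_size_residual(ixx_star, iyy_star, ixx_psf, iyy_psf):
    t_star = ixx_star + iyy_star   # observed star size
    t_psf = ixx_psf + iyy_psf      # model PSF size at the star position
    # Positive values mean the model PSF is larger than the observed PSF.
    return (t_psf - t_star) / t_star

# resid = fractional_size_residual(star_ixx, star_iyy, psf_ixx, psf_iyy)
# print(np.median(resid))  # compare with the ~4e-3 level quoted above
\end{verbatim}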
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{size_residual.eps}
\end{center}
\caption{
Distribution of the fractional size residual in the $i$-band for bright stars ($i<22$) in the Wide-VVDS field.
The vertical dashed line shows the zero residual. The peak is positive, meaning that the width of
the modeled PSF is biased slightly large.
}
\label{fig:size_residual}
\end{figure}
\subsection{Survey Depth}
\label{sec:survey_depth}
Another useful quantity to characterize imaging data is the depth. There are multiple ways to
define the depth, but we adopt a simple definition here: the $5\sigma$ limiting magnitude
for point sources. We evaluate the limiting magnitudes using stars with $S/N$ between
4.9 and 5.1 for each patch. We use the $S/N$ quoted by the pipeline, which is likely a slightly
optimistic estimate due to the ignored pixel-to-pixel covariance as discussed earlier.
Fig.~\ref{fig:depth} shows a map of the $i$-band depth in
the Deep/UltraDeep COSMOS field. The Deep and UltraDeep data were jointly processed as mentioned
earlier, giving rise to spatial structure in the figure. In the central pointing,
we reach $i\sim28$; this is the deepest optical image of the field in existence.
Fig.~\ref{fig:cosmos} is a nice illustration of the depth we reach; there are so many
objects in this small cutout that there is almost no empty space between the objects.
The depths in the other fields are fairly uniform, although there is also some spatial
structure due to a combination of the dithering pattern and seeing variations. See the QA page
of the data release site for more plots.
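A minimal sketch of this depth estimator is given below; it assumes per-patch arrays of point-source magnitudes and the pipeline S/N values, and taking the median magnitude of stars with S/N between 4.9 and 5.1 is an illustrative choice rather than the exact pipeline statistic.
\begin{verbatim}
# A minimal sketch of the per-patch 5-sigma limiting magnitude estimate,
# assuming arrays of point-source magnitudes and pipeline S/N values.
import numpy as np

def limiting_magnitude(mag, snr, lo=4.9, hi=5.1):
    sel = (snr > lo) & (snr < hi)
    if not np.any(sel):
        return np.nan   # not enough stars near S/N = 5 in this patch
    return np.median(mag[sel])

# depth_i = limiting_magnitude(psf_mag_i, psf_snr_i)
\end{verbatim}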
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fig_dud_cosmos_depth_iband.eps}
\end{center}
\caption{
$5\sigma$ limiting depth in the $i$-band for point sources in D/UD-COSMOS evaluated separately
in each patch.
}
\label{fig:depth}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=16cm]{cosmos.eps}
\end{center}
\caption{
$gri$ color-composite of a small chunk ($3'.5\times2'.0$) of the COSMOS field centered at
R.A.=$\rm10^h00^m20^s.0$, Dec.=$\rm+02^\circ11'55''.0$. North is up. This image is
colored following the algorithm of \citet{lupton04}.
}
\label{fig:cosmos}
\end{figure*}
\subsection{Known Issues}
\label{sec:known_issues}
Thanks to the updated processing pipeline, the overall data quality is improved since PDR1.
However, there are some persistent problems and also new problems. This section
summarizes the problems known to date. We will not repeat the problems that persist from PDR1 here;
bright galaxy shredding and underestimated flux uncertainties
in convolved measurements are discussed in Sections 5.8.3 and 5.8.11 of \citet{aihara18a}, respectively.
Optical ghosts due to bright stars (Section 5.8.8 of \citet{aihara18a}) are significantly reduced, but
they are not completely gone and will be briefly discussed here.
The issue of deblending failures in crowded areas (Section 5.8.10 of \cite{aihara18a}) also persists,
but there is a pipeline change related to it and we will discuss it here as well.
The list of known issues will evolve with time; we will keep the list up to date at the data release site.
\subsubsection{Remaining Artifacts on Coadds}
\label{sec:remaining_artifacts}
The artifact rejection algorithm mentioned in Section \ref{sec:artifact_rejection} is
very effective, particularly in the Wide survey in which the dither is large
(approximately 1/3 of field of view). However, it is less
effective in UltraDeep: the dithers are smaller (several arcmin) and optical ghosts
stay roughly at the same position on the sky, making the artifact rejection based
on the image differencing difficult. Fig.~\ref{fig:ghosts} shows an example case in UD-COSMOS.
There are a few satellite trails remaining there as well, but they are due to the enhanced background from
a bright star nearby (this is one of the failure modes discussed in Section \ref{sec:artifact_rejection}).
Work is in progress to predict the locations of optical ghosts using
the optical model of the instrument as well as to identify satellite trails using
the Hough transform. We expect that artifacts will be further reduced in future data releases.
\begin{figure}
\includegraphics[width=0.49\textwidth]{ghosts.eps}
\caption{
Remaining optical ghosts and satellite trails in UD-COSMOS. The image is in $gri$ and is approximately $23'\times19'$.
}
\label{fig:ghosts}
\end{figure}
\subsubsection{Bright Star Masks}
\label{sec:bright_star_masks}
As described in Section \ref{sec:global_sky_subtraction}, we have changed the sky
subtraction algorithm to preserve wings of bright objects. Because of this,
masks around bright stars that indicated regions where photometry was unreliable (bright star masks; \cite{coupon18})
became too small in most cases. In addition,
the masks used at the time of the processing were based on Gaia DR1 \citep{gaia16a,gaia16b},
and some faint stars were missing due to striping in the Gaia coverage.
There were also some bright stars that were simply missing from our catalog.
Fig.~\ref{fig:brightobject} shows an example.
All this has been largely fixed using Gaia DR2 \citep{gaia18} with a more conservative mask size.
The mask size for individual stars is determined by building an HSC source density map
for sources (with $grizy<24$) around every star in Gaia-DR2 brighter than $G=18$. We measure the source
density profile in expanding radial annuli around the star to compute the median density
profiles as a function of star brightness, and then calculate the radius at which
the profile reaches $3\sigma$ above the background source density. This is chosen as
a compromise between mask size and the number of false positive peak detections.
This is performed separately for each broad-band.
In the old masks, about 12\% of the objects are masked, while the fraction increases to 20\%
in the new masks.
Note that the masks are applied to all stars brighter than $G=18$.
These new masks are being validated as of this writing.
It seems they are still not perfect but are much better than the previous version.
We plan to release the new masks no later than September 1st 2019.
We note that all the measurements are performed even for objects inside the masks.
They just have a flag bit on; {\tt \{filter\}\_pixelflags\_bright\_object[center] = 'True'}
at the database.
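The sketch below illustrates how such a mask radius could be derived from the radial source-density profile around a single bright star; the binning, the background estimate from the outer annuli, and the returned radius convention are illustrative assumptions rather than the actual mask-building code.
\begin{verbatim}
# A minimal sketch of deriving a mask radius from the source density profile
# around one bright star, given separations (arcsec) of nearby grizy<24
# sources; binning and background choices are illustrative assumptions.
import numpy as np

def mask_radius(sep_arcsec, r_max=600.0, dr=5.0, nsigma=3.0):
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(sep_arcsec, bins=edges)
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)   # annulus areas
    density = counts / area
    # Background level and scatter estimated from the outer half of the profile.
    outer = density[len(density) // 2:]
    bkg, sig = np.median(outer), np.std(outer)
    idx = np.nonzero(density > bkg + nsigma * sig)[0]
    # Outer edge of the last annulus still significantly above background.
    return edges[idx[-1] + 1] if idx.size else 0.0

# radius = mask_radius(separations_around_star)
\end{verbatim}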
\begin{figure*}
\begin{center}
\includegraphics[width=8cm]{brightobject.eps}
\includegraphics[width=8cm]{brightobject2.eps}
\end{center}
\caption{
Sample patch image (tract=9010, patch=4,1, $i$-band) with the bright star masks overlaid.
The left and right panels are for old and new masks. Note the missing mask on
the bright star at the center ($V\sim8.7$~mag.) in the old mask.
Also, the mask size is in general larger in the new mask.
}
\label{fig:brightobject}
\end{figure*}
\subsubsection{Deblending Failure in Crowded Areas}
\label{sec:deblending_failure}
Many of the issues in PDR1 have been mitigated, but not all, and the issue of poor
photometry in crowded areas such as clusters of galaxies due to object blending still persists.
It is now a bigger problem especially in the UD-COSMOS, which goes deep enough that crowding becomes an issue
and makes the deblending even more difficult.
Some objects in UltraDeep-COSMOS have much brighter CModel magnitudes compared to PDR1, but that is likely due to deblending failures.
While an improved deblender algorithm is being developed \citep{melchior18}, the workaround for now is the same as PDR1;
PSF-matched aperture photometry on the undeblended
image (i.e., image prior to deblending) to give meaningful object colors. The largest target seeing in PDR1 was 1.1 arcsec
and the PSF-matched photometry was not available when the original seeing was worse than that.
The largest target seeing is now increased to 1.3 arcsec,
which is the upper limit on the seeing constraint imposed in the data screening (Section \ref{sec:data_screening}).
All the coadds thus have better seeing and the PSF-matched photometry is
always available. Several different aperture sizes are used in the measurement,
but a small aperture (e.g., 1.5 arcsec) is recommended to avoid blending with
nearby sources.
Note that the aperture corrections \citep{bosch18} assuming unresolved point sources
have been applied to those aperture fluxes, allowing users to obtain meaningful colors.
For extended sources, they do not give total magnitudes.
\subsubsection{Photometry in $i$ and $i2$ Combined Area}
\label{sec:photometry_in_i_and_i2}
After calibrating individual CCDs, we perform a tract-wide photometric and astrometric
calibration using multiple visits of the same field (see Section 3.2 of \cite{bosch18}).
As mentioned earlier, we combine $i$ and $i2$ data in this process.
If a tract has only the $i$- or only the $i2$-band, it is calibrated to that band.
However, if a tract has a mixture of $i$ and $i2$-band visits, a processing error caused
all the visits to be calibrated to the $i$-band (which will be fixed in the next release).
The nature of this problem is somewhat complex because we apply the color term for the $i$-band
to the $i2$-band data and do the multi-visit calibration. The difference between the $i$ and
$i2$ color terms is small, $\lesssim1$\% for most objects, except for red objects. We apply
the color cut to avoid using late-type stars for calibration (Section \ref{sec:colorterms}),
but the photometric zero-point can be off by up to 1.5\%. The net effect is that there is
a small offset in the zero-point between the $i2$-only region and the $i$+$i2$ combined region.
If $i$-band data dominate over $i2$, the effect is fairly small, but in regions where $i2$-band
data dominate, a clear zero-point offset is seen.
Fig.~\ref{fig:vvds_color_offset} showing the offset of the stellar sequence in $riz$
relative to the \citet{pickles98} library illustrates this problem.
The sharp tract borders at R.A.=330 deg and Dec.=4 deg are due to a combination of
(1) difference between the $i$ and $i2$-bands and (2) the zero-point offset mentioned above.
Fig.~\ref{fig:vvds_stellar_sequence} compares the $riz$ stellar sequence in the $i$-band only,
$i2$-band only, and $i$+$i2$ combined regions.
First, the left panel compares $i$ and $i2$-band only regions. The zero-points
are calibrated correctly in both bands (the stellar sequence agrees at the blue end), but
the difference in the filter transmission introduces a color difference for red stars.
In the right panel, the $i$+$i2$ combined region is actually dominated by the $i2$-band,
but there are just a few $i$-band visits that overlap with this tract and the whole tract is
calibrated to $i$. The shape of the stellar sequence is similar to the $i2$-band only region as expected,
but because the $i$-band color term is applied to the $i2$-band data, there is a small
zero-point offset ($\Delta i=0.015$ mag). These two effects discussed here cause
the sharp boundaries observed in Fig.~\ref{fig:vvds_color_offset}.
The $r$ and $r2$-bands have the same issue, although the effect is less severe due to
the smaller bandpass difference.
There is a database table that indicates which filter a tract is calibrated to (\code{tract_colorterm}).
There is also a database column that shows the relative fraction of $i$ and $i2$-band data that contributes
to the coadd of each object (\code{filterfraction}). Users should refer to these tables and treat the filters
separately for applications that require accurate photometry or for objects with exotic colors.
For further analysis, the effective transmission curves discussed in Section
\ref{sec:effective_transmission_curve} will be very useful.
We may be able to provide a color term to translate $i2$ into $i$ (and vice versa) for each object to
put them on a common photometric system in a future release.
\begin{figure*}
\begin{center}
\includegraphics[width=16cm]{vvds_color_offset.eps}
\end{center}
\caption{
Same as Fig.~\ref{fig:stellar_sequence} but for $gri$ in the Wide-VVDS field.
The bottom-right part is calibrated to the $i$-band, while
the rest of the area is calibrated to the $i2$-band.
The green tracts are the $i2$-only, $i$+$i2$ combined, and $i$-only regions from left to right,
and the stellar sequences in those regions are compared in Fig.~\ref{fig:vvds_stellar_sequence}.
}
\label{fig:vvds_color_offset}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{vvds_stellar_sequence.eps}
\end{center}
\caption{
$r-i$ plotted against $i-z$. Only bright ($i<22$) stars are shown here.
The black, blue and red points are from
$i2$-only, $i$-only, and $i$+$i2$ combined regions indicated in Fig.~\ref{fig:vvds_color_offset}.
The black points are common between the two panels for comparison purposes.
}
\label{fig:vvds_stellar_sequence}
\end{figure}
\subsubsection{Over-subtracted Scattered Light in the $y$-band}
\label{sec:over_subtracted_scattered_light_in_the_yband}
Due to the way the $y$-band scattered light subtraction algorithm is implemented
in the pipeline (Section \ref{sec:scattered_light_in_the_yband}), we mistakenly applied the subtraction to
all the $y$-band data taken after
the hardware fix described in Section \ref{sec:encoder_laser_shield}, in which no scattered light is observed.
This results in
an oversubtracted sky with the same spatial pattern as the scattered light, as
shown in Fig.~\ref{fig:y_oversubtracted}. Only a small fraction of the $y$-band
data suffered from this error and the affected regions are not full-color regions (i.e.,
the regions are not yet observed in all the filters). We thus do not expect it to be
a major issue, but users looking only at the $y$-band data should be aware of this problem.
A list of affected tracts is available at the data release site and this problem will be
fixed in a future data release.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{y_oversubtracted.eps}
\end{center}
\caption{
$y$-band image around R.A.=157deg, approximately $9^\circ\times5^\circ$.
The image is heavily stretched to enhance the over-subtraction feature.
}
\label{fig:y_oversubtracted}
\end{figure}
\subsubsection{Inconsistent PSF fluxes between HSC and PS1}
\label{sec:inconsistent_fluxes}
As discussed in Section \ref{sec:external_consistency}, our photometric accuracy is
good to $\sim1$\%. However, there are regions where we observe a large scatter in
the PSF photometry between HSC and PS1. Fig.~\ref{fig:gpsf_magdiff_vvds} shows an example,
where we see extended regions with systematic errors larger than 0.05 magnitudes.
We observe a similar scatter map when we compare with SDSS (plot not shown here) and
thus it is likely an issue with the HSC photometry. The stellar sequence scatter seems
to follow the same spatial pattern, but with a smaller amplitude.
It turns out that this is due to artifact rejection described in Section \ref{sec:artifact_rejection}.
There are 5 visits around the bad photometry area and one of them has 1.1 arcsec seeing,
while the rest have $\sim0.6$ arcsec seeing. A fraction of bright stars are clipped by the artifact
rejection algorithm in the bad seeing visit and that makes the coadd PSF incorrect;
the image itself is
based on the 4 good-seeing visits, while the PSF model is constructed using all 5 visits
(the coadd PSF does not know which visit is clipped).
This inconsistency introduced the photometry offset. We have designed the artifact rejection
algorithm so that we do not clip real sources. We are tracking down why a small fraction of
bright stars are actually clipped. We will give updates at the website when we have more to report.
\begin{figure*}
\begin{center}
\includegraphics[width=16cm]{W05_HSC-G-ps1-calPsf.eps}
\end{center}
\caption{
Same as Fig.~\ref{fig:psfmagdiff_ps1} but for the VVDS field in the $g$-band, illustrating
extended regions with systematic errors between HSC and PS1 PSF photometry.
}
\label{fig:gpsf_magdiff_vvds}
\end{figure*}
\subsubsection{Zero fluxes without flags in CModel}
\label{sec:zero_fluxes_without_flags_in_cmodel}
A small fraction of all the objects ($\sim1$ \%) have zero CModel fluxes with
uncertainty NaN. This is likely a measurement failure, but the measurement flags and pixel flags
are set to \code{false} and users cannot screen them with the flags. This issue is being
tracked down. Users should filter out objects with CModel fluxes exactly zero
or uncertainty NaN in order not to be affected by the issue.
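A minimal sketch of this screening is shown below; the flux and error column names are hypothetical (the actual names are listed in the schema browser), and the same cut can of course be applied directly in an SQL query.
\begin{verbatim}
# A minimal sketch of the recommended workaround: drop objects whose CModel
# flux is exactly zero or whose flux uncertainty is NaN. Column names are
# hypothetical; see the schema browser for the actual ones.
import numpy as np
import pandas as pd

cat = pd.read_csv("cmodel_catalog.csv")   # hypothetical catalog dump
good = (cat["i_cmodel_flux"] != 0.0) & np.isfinite(cat["i_cmodel_fluxerr"])
cat_clean = cat[good]   # removes the ~1% of objects affected by this issue
\end{verbatim}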
\subsubsection{Possible background residual}
\label{sec:possible_sky_residual}
Because we subtract the sky background on a relatively large scale, there may be a low-level sky
residual on a small scale. A preliminary investigation seems to show a filter-dependent sky
residual at a $\sim29$ mag$/$arcsec$^2$ level. Most sources are not affected by this level of
sky residual, but users interested in extended, low-surface brightness galaxies may want to be careful.
There is a set of useful objects in
each patch called {\it sky objects}. The pipeline picks 100 random points in a patch outside of
object footprints and makes the blank-sky measurements, just like the measurements for real objects.
These sky objects are useful for measuring background fluctuation and residual.
The sky objects are stored in the database just like real objects and they have \texttt{merge\_peak\_sky = True}.
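As a sketch, the blank-sky measurements can be selected with the \texttt{merge\_peak\_sky} flag and used to estimate the local background residual; the flux column name below is a hypothetical placeholder.
\begin{verbatim}
# A minimal sketch of estimating the background residual from sky objects.
# merge_peak_sky is the flag named in the text; the flux column name is a
# hypothetical placeholder.
import numpy as np
import pandas as pd

cat = pd.read_csv("patch_catalog.csv")   # hypothetical catalog dump
sky = cat[cat["merge_peak_sky"]]         # blank-sky measurements
flux = sky["i_aperture15_flux"].to_numpy()
print("median residual:", np.median(flux), " scatter:", np.std(flux))
\end{verbatim}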
\section{Data Access}
\label{sec:data_access}
The data can be retrieved from the data release site
where all the quality assurance plots as well as the list of known issues are summarized.
As in PDR1, the release website provides only the processed data. The raw data can be
retrieved from SMOKA\footnote{\url{https://smoka.nao.ac.jp/}}.
All the pipeline outputs are available as flat files.
There are a few online tools linked from the data release website to help users access the data they need,
such as a file search tool and an image cutout tool.
An online PSF retrieval tool allows users to retrieve the coadd PSF images at an arbitrary position on the sky.
The catalog products have been loaded to the database and users can use either
the online SQL editor or command-line tool to submit SQL queries and download the results.
The schema browser should be referred to for details of the database tables.
The online image browser, hscMap, offers a user-friendly environment to browse the massive
images. It has many useful features (e.g., user can upload a catalog and mark objects)
and the online manual describes them. Any questions and issues regarding the data access
should be sent to the helpdesk.
\section{Status of Collaborating Surveys}
\label{sec:status_of_collaborating_surveys}
The HSC-SSP survey has a number of collaborating surveys in other wavebands.
Here we give a brief update on two of them; a $u$-band follow-up imaging survey and
a near-IR follow-up survey. Both target the Deep/UltraDeep fields, where multiwavelength data enable a wide array of
galaxy evolution science.
The CFHT Large Area U-band Deep Survey (CLAUDS; Sawicki et al., MNRAS submitted) has used the MegaCam imager
on the Canada-France-Hawaii 3.6m telescope to
obtain very deep U-band images that overlap the HSC-SSP Deep/UltraDeep layers. The observations are now complete.
The new images, together with
pre-existing archival MegaCam data in some of the fields, have been processed, resampled, and stacked to match
the HSC-SSP tract/patch grid, astrometric solution, and pixel scale. Multiband ($U+grizy$) photometry is carried out
using SExtractor \citep{bertin96}
and an adaptation of \code{hscPipe} that can handle these CFHT $U$-band images.
The CLAUDS data cover 18.60~deg$^2$ with median seeing of FWHM=0.92" and to a median depth of $U = 27.1$ AB
(5$\sigma$ in 2" apertures); selected areas in the COSMOS and SXDS fields that total 1.36~deg$^2$ reach a median depth of
$U=27.7$ AB (5$\sigma$ in 2" apertures). Altogether, the CLAUDS images represent the equivalent of 113 classical-mode
CFHT nights and are the deepest $U$-band data ever taken over this combination of depth and area. The combined
CLAUDS and HSC-SSP datasets enable many science investigations by significantly enhancing photometric redshift
performance and allowing the selection of $z\sim 3$ Lyman Break Galaxies and quasars. Several science projects are
already underway with this combined dataset, and the CLAUDS team anticipates releasing these deep $U$ images and data
products (including CLAUDS+HSC-SSP $U+grizy$ catalogs based on HSC-SSP PDR2 data) to the public in 2020.
Turning to near-infrared data, the Deep and UltraDeep fields overlap with some of the major
near-infrared imaging surveys such as the Deep eXtragalactic Survey of the UKIRT Infrared
Deep Sky Survey (UKIDSS/DXS; \cite{kim11}), Ultra Deep Survey with the VISTA Telescope
(UltraVISTA; \cite{mccracken12}), VISTA Deep Extragalactic Observations Survey (VIDEO;
\cite{jarvis13}). These surveys, however, do not fully cover the Deep fields,
and the Deep UKIRT Near-infrared Steward Survey (DUNES$^2$; Egami et al., in
preparation) is filling in the missing areas.
DUNES$^2$ has made excellent progress; the data acquisition is essentially complete
with a total observing time of about 270 hours on UKIRT.
DUNES$^2$ is similar to UKIDSS/DXS in terms of depth, and covers the four flanking fields of E-COSMOS
($J$\,$\sim$\,23.6, $H$\,$\sim$\,23.2, $K$\,$\sim$\,23.2 mag at $5\sigma$ within 2 arcsec aperture; 3.0 deg$^2$ in total) and
DEEP2-3 field ($J$\,$\sim$\,23.3, $K$\,$\sim$\,23.1 mag; 4.5 deg$^2$). DUNES$^2$ also obtained $H$-band data for the central
0.9\arcmin$\times$1.7\arcmin\ region of ELAIS-N1 ($H\sim23.2$ mag; 1.5 deg$^2$), which was missing from
the UKIDSS/DXS survey. The data are being processed, and our plan is to use the HSC photometry pipeline to
produce fully band-merged source catalogs covering from the $U$ to $K$ bands, following the methodology developed
for the CLAUDS $U$-band data. We anticipate that such catalog products as well as images
will be made available publicly at the time of the HSC-SSP final data release (DR3).
We also note that a further follow-up near-IR imaging survey, DeepCos, designed to bring the majority (5.6 deg$^2$) of
the DUNES$^2$ footprint to the depth of VIDEO ($J=24.5$ and $K=23.5$), is currently on-going at UKIRT
(Lin et al. in prep.).
\section{Summary and Future Data Releases}
\label{sec:summary}
The data from 174 nights of HSC-SSP are now publicly available. The data are of high quality
and should enable a wide range of scientific explorations. However, there are known issues
with the data and we advise users to review the issue list (Section \ref{sec:known_issues}) before using the data.
We also ask users to acknowledge HSC-SSP; the sample acknowledgment text is given at the data release site.
In addition, the HSC technical papers given in Table \ref{tab:references} should be referred to where appropriate.
The pipeline is developed as part of LSST and therefore the appropriate LSST papers should also be referenced:
\citet{ivezic19}, and \citet{juric17}.
We have calibrated our data against the public Pan-STARRS data. We would like to encourage
users to reference Pan-STARRS as well:
\citet{chambers16}, \citet{schlafly12}, \citet{tonry12}, and \citet{magnier13}.
Looking towards the future,
our baseline plan is to make the next major data release (PDR3) in two years, but our observations
suffered from bad weather in winter 2017-2018 as well as from earthquakes due to the increased
volcanic activity at Kilauea in summer 2018. There was an additional hiatus due to a telescope problem
in September-October 2018. The survey has been significantly delayed due to these problems and
this may affect our data release plan. We will give updates on our website in due course.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{cccc}
\hline
Subject & Paper \\
\hline
Survey design & \citet{aihara18b}\\
Public Data Release 1 & \citet{aihara18a}\\
Public Data Release 2 & this paper\\
Camera system & \citet{miyazaki18}\\
Camera dewar & \citet{komiyama18}\\
Filters & \citet{kawanomoto18}\\
Processing pipeline & \citet{bosch18}\\
Onsite reduction system & \citet{furusawa18}\\
SynPipe & \citet{huang18}\\
Bright object masks & \citet{coupon18}\\
Photometric redshifts & \citet{tanaka18}\\
Lensing shape catalog & \citet{mandelbaum18}\\
\hline
\end{tabular}
\end{center}
\caption{
List of HSC technical papers.
}
\label{tab:references}
\end{table}
\section*{Acknowledgments}
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan,
and Princeton University. The HSC instrumentation and software were developed by the National
Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of
the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK),
the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University.
Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS),
Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA,
and Princeton University.
This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST
Project for making their code available as free software at http://dm.lsst.org.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
This paper is based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan. Data analysis was in part carried out with the cooperation of Center for Computational Astrophysics, National Astronomical Observatory of Japan.
We thank the anonymous referee for a thoughtful report, which helped improve the paper.
This work is also based on zCOSMOS observations carried out using the Very Large Telescope at the ESO Paranal Observatory under Programme ID: LP175.A-0839, on observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555, on data from the VIMOS VLT Deep Survey, obtained from the VVDS database operated by Cesam, Laboratoire d'Astrophysique de Marseille, France, on data from the VIMOS Public Extragalactic Redshift Survey (VIPERS). VIPERS has been performed using the ESO Very Large Telescope, under the "Large Programme" 182.A-0886. The participating institutions and funding agencies are listed at http://vipers.inaf.it. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/. Funding for the DEEP2 Galaxy Redshift Survey has been provided by NSF grants AST-95-09298, AST-0071048, AST-0507428, and AST-0507483 as well as NASA LTSA grant NNG04GC89G. Funding for PRIMUS is provided by NSF (AST-0607701, AST-0908246, AST-0908442, AST-0908354) and NASA (Spitzer-1356708, 08-ADP08-0019, NNX09AC95G). Funding for the DEEP3 Galaxy Redshift Survey has been
provided by NSF grants AST-0808133, AST-0807630, and AST-0806732.
This work is in part supported by MEXT Grant-in-Aid for Scientific Research on Innovative
Areas (No.~15H05887, 15H05892, 15H05893).
\bibliographystyle{apj}
\section{Introduction}
Neural networks (NNs) have achieved state-of-the-art results
in a wide variety of supervised learning tasks,
such as image recognition~\citep{krizhevsky2012imagenet,Simonyan14c,russakovsky2015imagenet},
speech recognition~\citep{seide2011conversational,hinton2012deep,dahl2012context}
and machine translation~\citep{bahdanau2014neural,cholearning}.
However, NNs have a major drawback in that output uncertainty is not well estimated.
NNs give point estimates of outputs at test inputs.
Estimating the uncertainty of the output is important in various situations.
First, the uncertainty can be used for rejecting the results.
In real-world applications such as medical diagnosis,
we should avoid automatic decision making with the difficult examples,
and ask human experts or conduct other examinations to achieve high reliability.
Second, the uncertainty can be used to calculate risk.
In some domains, it is important to be able to estimate the probability of
critical issues occurring,
for example with self-driving cars or nuclear power plant systems.
Third, the uncertainty can be used for the inputs of other machine learning tasks.
For example, the uncertainty of speech recognition results helps improve
machine translation performance in automatic speech translation systems~\citep{ney1999speech}.
The uncertainty would also be helpful for active learning~\citep{krause2007nonmyopic} and reinforcement learning~\citep{blundell2015weight}.
We propose a simple method that makes it possible for NNs to estimate output uncertainty.
With the proposed method, NNs are used
for the mean functions of Gaussian processes
(GPs)~\citep{rasmussen2006gaussian}.
GPs are used as prior distributions over smooth nonlinear functions,
and the uncertainty of the output can be estimated with Bayesian inference.
GPs perform well in various regression and classification tasks~\citep{williams1996gaussian,barber1997gaussian,naish2007generalized,nickisch2008approximations}.
Combining NNs and GPs gives us another advantage.
GPs exploit local generalization,
where generalization is achieved by
local interpolation between neighbors~\citep{bengio2013representation}.
Therefore, GPs can adjust target functions rapidly in the presence of training data, but fail to generalize in regions where there are no training data.
On the other hand,
NNs have good generalization capability for unseen input configurations
by learning multiple levels of distributed representations,
but require a huge number of training data.
Since GPs and NNs achieve generalization in different ways,
the proposed method can improve generalization performance
by adopting both of their advantages.
Zero mean functions are usually used
since GPs with zero mean functions and some specific kernels can approximate an arbitrary continuous function
given enough training data~\citep{micchelli2006universal}.
However, GPs with zero mean functions predict zero outputs far from training samples.
Figure~\ref{fig:GPillustration}(a) shows the predictions of GPs with zero mean functions and RBF kernels.
When trained with two samples, the prediction values
are close to the true values if there are training samples,
but far from the true values if there are none.
On the other hand, when GPs with appropriate nonzero mean functions are used as in Figure~\ref{fig:GPillustration}(b),
the prediction approximates the true values
even when there are no training samples.
Figure~\ref{fig:GPillustration} shows that GPs rapidly adjust the prediction
when there are training data
regardless of the mean function values.
The proposed method gives NNs more flexibility via GPs.
In general, the risk of overfitting increases as the model flexibility increases.
However, since the proposed method is based on Bayesian inference,
where nonlinear functions with GP priors are integrated out,
the proposed method can help alleviate overfitting.
To retain the high generalization capability of NNs with the proposed method,
large training data are required.
The computational complexity of the exact inference of GPs is
cubic in the number of training samples, which is prohibitive for large data.
We present a scalable stochastic inference procedure for the proposed method,
where sparse GPs are inferred by stochastic variational inference~\citep{hensman2013gaussian},
and NN parameters and kernel parameters are estimated by stochastic gradient descent methods, simultaneously.
By using stochastic optimization, the parameters are updated efficiently without analyzing all the data at each iteration,
where a noisy estimate of the gradient of the objective function is used.
The inference algorithm also enables us to handle massive data even when they cannot be stored in a memory.
\begin{figure}[t]
\centering
{\tabcolsep=-0.5em
\begin{tabular}{cccc}
no training & trained w/ two samples &
no training & trained w/ two samples \\
\includegraphics[width=11em]{zeromeanGP0.png}&
\includegraphics[width=11em]{zeromeanGP1.png}&
\includegraphics[width=11em]{NNmeanGP0.png}&
\includegraphics[width=11em]{NNmeanGP1.png}\\
\multicolumn{2}{c}{(a) GP with zero mean functions} &
\multicolumn{2}{c}{(b) GP with nonzero mean functions}\\
\end{tabular}}
\caption{True values (red), mean function values (green) and prediction values (blue) provided by GPs with zero mean functions (a) and GPs with nonzero mean functions (b). The blue area is the 95\% confidence interval of the prediction, and the red points indicate the training samples.}
\label{fig:GPillustration}
\end{figure}
\begin{comment}
The remainder of this paper is organized as follows.
In Section~\ref{sec:related}, we outline related work.
In Section~\ref{sec:method}, we propose
a simple approach for combining NNs and GPs,
and describe its scalable inference procedure.
In Section~\ref{sec:experiment}, we demonstrate the effectiveness
of the proposed method by using two real-world spatio-temporal data sets
in terms of uncertainty estimate and point estimate performance.
Finally, we present concluding remarks and a discussion of future work in
Section~\ref{sec:conclusion}.
\end{comment}
\section{Related work}
\label{sec:related}
Bayesian NNs are the most common way of introducing uncertainty into NNs,
where distributions over the NN parameters are inferred.
A number of Bayesian NN methods have been proposed including
Laplace approximation~\citep{mackay1992practical},
Hamiltonian Monte Carlo~\citep{neal1995bayesian},
variational inference~\citep{hinton1993keeping,graves2011practical,blundell2015weight,Louizos2016,sun2017learning},
expectation propagation~\citep{jylanki2014expectation},
stochastic backpropagation~\citep{hernandez2015probabilistic},
and
dropout~\citep{kingma2015variational,gal2016dropout} methods.
Our proposed method gives the output uncertainty of
NNs with a different approach,
where we conduct point estimation for the NN parameters,
but the NNs are combined with GPs.
Therefore, the proposed method incorporates the high generalization performance of NNs and the high flexibility of GPs,
and can handle large-scale data by using scalable NN stochastic optimization and GP stochastic variational inference.
Although zero mean functions are often used for GPs,
nonzero mean functions, such as polynomial functions~\citep{blight1975bayesian}, have also been used.
When the mean functions are linear in the parameters, the parameters can be integrated out,
which leads to another GP~\citep{o1978curve}.
However, scalable inference algorithms for GPs with flexible nonlinear mean functions like NNs have not been proposed.
NNs and GPs are closely related. NNs with a hidden layer converge to GPs in the limit of an infinite number of hidden units~\citep{neal1995bayesian}.
A number of methods combining NNs and GPs have been proposed.
Deep GPs~\citep{damianou2013deep} use GPs for each layer in NNs, where local generalization is exploited since their inputs are kernel values.
GP regression networks~\citep{wilson2012gaussian}
combine the structural properties of NNs
with the nonparametric flexibility of GPs
for accommodating input dependent signal and noise correlations.
Manifold GPs~\citep{calandra2016manifold} and deep NN based GPs~\citep{huang2015scalable} use NNs for transforming the input features of GPs.
Deep kernel learning~\citep{wilson2016deep} uses NNs to learn kernels for GPs.
The proposed method is different from these methods since it incorporates
the outputs of NNs into GPs.
\section{Proposed method}
\label{sec:method}
Suppose that we have a set of input and output pairs,
${\cal D}=(\mathbf{x}_{n},y_{n})_{n=1}^{N}$,
where $\mathbf{x}_{n}\in\mathbb{R}^{D}$ is the $n$th input,
and $y_{n}\in\mathbb{R}$ is its output.
Output $y_{n}$ is assumed to be generated by a nonlinear function
$f(\mathbf{x}_{n})$ with Gaussian noise.
Let $\mathbf{f}=(f_{n})_{n=1}^{N}$ be the vector of function values on
the observed inputs, $f_{n}=f(\mathbf{x}_{n})$.
Then, the probability of the output
$\mathbf{y}=(y_{n})_{n=1}^{N}$
is given by
\begin{align}
p(\mathbf{y}|\mathbf{f})=\prod_{n=1}^{N}{\cal N}(y_{n}|f_{n},\beta^{-1}),
\end{align}
where $\beta$ is the observation precision parameter.
For the nonlinear function, we assume a GP model,
\begin{align}
f(\mathbf{x}) \sim {\cal GP}(g(\mathbf{x};\bm{\phi}),k(\mathbf{x},\mathbf{x}';\bm{\theta})),
\label{eq:nngp}
\end{align}
where $g(\mathbf{x};\bm{\phi})$ is the mean function
with parameters $\bm{\phi}$,
and $k(\mathbf{x},\mathbf{x}';\bm{\theta})$ is the kernel function
with kernel parameters $\bm{\theta}$.
We use a NN for the mean function, and
we call (\ref{eq:nngp}) NeuGaP,
which is a simple and new model that fills
a gap between the GP and NN literatures.
By integrating out the nonlinear function $\vec{f}$,
the likelihood is given by
\begin{align}
p(\vec{y}|\vec{X},\bm{\phi},\bm{\theta})=
{\cal N}(\vec{y}|\vec{g},\vec{K})=
(2\pi)^{-\frac{N}{2}}|\vec{K}|^{-\frac{1}{2}}
\exp\left(-\frac{1}{2}(\vec{y}-\vec{g})^{\top}\vec{K}^{-1}(\vec{y}-\vec{g})
\right),
\label{eq:gplikelihood}
\end{align}
where $\vec{X}=(\vec{x}_{n})_{n=1}^{N}$,
$\vec{K}$ is the $N\times N$ covariance matrix defined by the kernel function
$k(\vec{x},\vec{x}';\bm{\theta})$,
and
$\vec{g}=(g(\vec{x}_{n}))_{n=1}^{N}$
is the vector of the output values of the NN on the observed inputs.
The parameters in GPs are usually estimated
by maximizing the marginal likelihood (\ref{eq:gplikelihood}).
However, the exact inference is infeasible for large data since
the computational complexity is $O(N^{3})$
due to the inversion of the covariance matrix.
To reduce the computational complexity
while keeping the desirable properties of GPs,
we employ sparse GPs~\citep{snelson2006sparse,quinonero2005unifying,hensman2013gaussian}.
With a sparse GP,
inducing inputs $\mathbf{Z}=(\mathbf{z}_{m})_{m=1}^{M}$,
$\vec{z}_{m}\in\mathbb{R}^{D}$,
and their outputs $\mathbf{u}=(u_{m})_{m=1}^{M}$,
$u_{m}\in\mathbb{R}$, are introduced.
The basic idea behind sparse inducing point methods is that when the number of inducing points $M \ll N$,
the computation can be reduced to $O(M^{2}N)$.
The inducing outputs $\vec{u}$ are assumed to be
generated by the nonlinear function of NeuGaP (\ref{eq:nngp})
taking the inducing inputs $\vec{Z}$ as inputs.
By marginalizing out the nonlinear function,
the probability of the inducing outputs is given by
\begin{align}
p(\vec{u})={\cal N}(\vec{u}|\vec{g}_{M},\vec{K}_{MM}),
\end{align}
where $\vec{g}_{M}=(g(\mathbf{z}_{m}))_{m=1}^{M}$ is the vector
of the NN output values on the inducing inputs,
$\vec{K}_{MM}$ is the $M\times M$ covariance matrix evaluated
between all the inducing inputs,
$\vec{K}_{MM}(m,m')=k(\mathbf{z}_{m},\mathbf{z}_{m'})$.
The output values at the observed inputs $\mathbf{f}$ are assumed to be
conditionally independent of each other given the inducing outputs $\mathbf{u}$,
then we have
\begin{align}
p(\vec{f}|\vec{u})=
\prod_{n=1}^{N}p(f_{n}|\vec{u})
=\prod_{n=1}^{N}{\cal N}(f_{n}|\mu_{n},\tilde{k}_{n}),
\label{eq:fu}
\end{align}
where
\begin{align}
\mu_{n}=g(\vec{x}_{n})+\vec{k}_{Mn}^{\top}\vec{K}_{MM}^{-1}(\vec{u}-\vec{g}_{M}), \quad
\tilde{k}_{n} = k(\vec{x}_{n},\vec{x}_{n})-\vec{k}_{Mn}^{\top}\vec{K}_{MM}^{-1}\vec{k}_{Mn}.
\end{align}
Here, $\vec{k}_{Mn}$ is the $M$-dimensional column vector of the covariance function
evaluated between observed and inducing inputs,
$\vec{k}_{Mn}(m)=k(\mathbf{x}_{n},\mathbf{z}_{m})$.
Equation (\ref{eq:fu}) is obtained in the same way as
the derivation of the predictive mean and variance of test data points
in standard GPs.
The lower bound of the log marginal likelihood of
the sparse GP to be maximized is
\begin{align}
\log p(\vec{y}) &= \log \int p(\vec{y}|\vec{u})p(\vec{u})d\vec{u}
\geq
\int q(\vec{u})
\log p
(\vec{y}|\vec{u})\frac{p(\vec{u})}{q(\vec{u})}
d\vec{u},
\label{eq:py}
\end{align}
where $q(\vec{u})={\cal N}(\vec{m},\vec{S})$
is the variational distribution of the inducing points,
and Jensen's inequality is applied~\citep{titsias2009variational}.
The log likelihood of the observed output $\vec{y}$
given the inducing points $\mathbf{u}$ is as follows,
\begin{align}
\log p(\mathbf{y}|\vec{u}) &=\log \int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\mathbf{u})d\mathbf{f}
\geq \int p(\mathbf{f}|\mathbf{u}) \log p(\mathbf{y}|\mathbf{f}) d\mathbf{f}
= \sum_{n=1}^{N} \log {\cal N}(y_{n}|\mu_{n},\beta^{-1})-\frac{1}{2}\beta \tilde{k}_{n},
\label{eq:pyu}
\end{align}
where Jensen's inequality is applied, and
the lower bound of $\log p(\vec{y}|\vec{u})$ is decomposed into
terms for each training sample.
By using (\ref{eq:py}) and (\ref{eq:pyu}),
the lower bound of $\log p(\vec{y})$ is given by
\begin{align}
\log p(\vec{y}) &\geq
\sum_{n=1}^{N} \left(\log {\cal N}(y_{n}|\tilde{\mu}_{n},\beta^{-1})-\frac{1}{2}\beta\tilde{k}_{n}
-\frac{1}{2}{\rm tr}(\vec{S}\bm{\Lambda}_{n})\right)
-{\rm KL}(q(\vec{u})||p(\vec{u}))\equiv L,
\label{eq:lowerbound}
\end{align}
where
\begin{align}
\tilde{\mu}_{n}=g(\vec{x}_{n})+\vec{k}_{Mn}^{\top}\vec{K}_{MM}^{-1}(\vec{m}-\vec{g}_{M}), \quad
\bm{\Lambda}_{n}=\beta\vec{K}_{MM}^{-1}\vec{k}_{Mn}\vec{k}_{Mn}^{\top}\vec{K}_{MM}^{-1},
\end{align}
and ${\rm KL}(q(\vec{u})||p(\vec{u}))$ is the KL divergence
between two Gaussians, which is calculated by
\begin{align}
{\rm KL}(q(\vec{u})||p(\vec{u}))
=\frac{1}{2}\left(
\log\frac{|\vec{K}_{MM}|}{|\vec{S}|}
-M+{\rm tr}(\vec{K}_{MM}^{-1}\vec{S})+(\vec{m}-\vec{g}_{M})^{\top}\vec{K}_{MM}^{-1}(\vec{m}-\vec{g}_{M})
\right).
\end{align}
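For concreteness, a NumPy sketch of this bound evaluated on a minibatch is given below; the kernel matrices, NN mean values, and variational parameters are passed in as plain arrays, and the $N/B$ rescaling of the data term for minibatches is an implementation assumption rather than something spelled out above.
\begin{verbatim}
# A minimal sketch of the variational lower bound L for a minibatch of size B.
# y, g, k_nn: shape (B,); k_Mn: shape (M, B); g_M, m: shape (M,); S, K_MM:
# shape (M, M). The N/B rescaling of the data term is an assumption.
import numpy as np

def lower_bound(y, g, k_nn, k_Mn, K_MM, g_M, m, S, beta, scale=1.0):
    K_inv = np.linalg.inv(K_MM)
    A = K_inv @ k_Mn                                  # columns a_n = K_MM^{-1} k_Mn
    mu = g + A.T @ (m - g_M)                          # \tilde{mu}_n
    k_tilde = k_nn - np.sum(k_Mn * A, axis=0)         # \tilde{k}_n
    log_lik = -0.5 * np.log(2 * np.pi / beta) - 0.5 * beta * (y - mu) ** 2
    trace = 0.5 * beta * np.sum(A * (S @ A), axis=0)  # 0.5 tr(S Lambda_n)
    data_term = scale * np.sum(log_lik - 0.5 * beta * k_tilde - trace)
    diff = m - g_M
    _, logdet_K = np.linalg.slogdet(K_MM)
    _, logdet_S = np.linalg.slogdet(S)
    kl = 0.5 * (logdet_K - logdet_S - len(m)
                + np.trace(K_inv @ S) + diff @ K_inv @ diff)
    return data_term - kl
\end{verbatim}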
The NN parameters $\bm{\phi}$ and kernel parameters $\bm{\theta}$
are updated efficiently by maximizing the lower bound (\ref{eq:lowerbound})
using stochastic gradient descent methods.
The parameters in the variational distribution, $\vec{m}$ and $\vec{S}$,
are updated efficiently by using stochastic variational inference~\citep{hoffman2013stochastic}.
We alternately iterate the stochastic gradient descent and stochastic variational inference for each minibatch of training data.
With stochastic variational inference,
the parameters of variational distributions are updated based on the natural gradients~\citep{amari1998natural},
which are computed by multiplying the gradients by the inverse of the Fisher information matrix.
The natural gradients provide faster convergence than standard gradients
by taking account of the information geometry of the parameters.
In the exponential family, the natural gradients with respect to natural parameters
correspond to the gradients with respect to expectation parameters~\citep{hensman2012fast}.
The natural parameters of Gaussian ${\cal N}(\vec{m},\vec{S})$ are
$\bm{\lambda}_{1}=\vec{S}^{-1}\vec{m}$ and
$\bm{\lambda}_{2}=-\frac{1}{2}\vec{S}^{-1}$.
Its expectation parameters are
$\bm{\eta}_{1}=\vec{m}$
and
$\bm{\eta}_{2}=\vec{m}\vec{m}^{\top}+\vec{S}$.
We take a step in the natural gradient direction by employing
$\bm{\lambda}^{(t+1)}=\bm{\lambda}^{(t)}+\ell_{t} \frac{\partial L}{\partial \bm{\eta}}$,
where $\frac{\partial L}{\partial \bm{\eta}}=G(\bm{\lambda})^{-1}\frac{\partial L}{\partial \bm{\lambda}}$ is the natural gradient of the objective function with respect to the natural parameter,
$G(\bm{\lambda})$ is the Fisher information, and $\ell_{t}$ is the step length at iteration $t$.
The update rules for the proposed model are given by
\begin{align}
\bm{\lambda}_{1}^{(t+1)}
=
\ell_{t} \Bigl(\beta\vec{K}_{MM}^{-1}\vec{k}_{Mn}(y_{n}-g_{n}+\vec{k}_{Mn}^{\top}\vec{K}_{MM}^{-1}\vec{g}_{M})\Bigr)
+(1-\ell_{t})\bm{\lambda}_{1}^{(t)},
\label{eq:svi1}
\end{align}
\begin{align}
\bm{\lambda}_{2}^{(t+1)}=
\ell_{t} \left( -\frac{1}{2}(
\bm{\Lambda}_{n}
+\vec{K}_{MM}^{-1})\right)
+ (1-\ell_{t}) \bm{\lambda}_{2}^{(t)}.
\label{eq:svi2}
\end{align}
We can use minibatches instead of a single training sample
to update the natural parameters.
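A NumPy sketch of one such update for a single training pair $(\vec{x}_{n},y_{n})$ is shown below; the kernel quantities and NN mean values are assumed to be precomputed and passed in as arrays.
\begin{verbatim}
# A minimal sketch of one stochastic variational update of q(u), implementing
# the natural-parameter update rules above for a single training pair.
# lam1 = S^{-1} m (shape (M,)), lam2 = -0.5 S^{-1} (shape (M, M)).
import numpy as np

def svi_step(lam1, lam2, k_Mn, K_MM_inv, g_n, g_M, y_n, beta, lr):
    a = K_MM_inv @ k_Mn
    nat1 = beta * a * (y_n - g_n + k_Mn @ K_MM_inv @ g_M)
    nat2 = -0.5 * (beta * np.outer(a, a) + K_MM_inv)
    lam1 = lr * nat1 + (1.0 - lr) * lam1
    lam2 = lr * nat2 + (1.0 - lr) * lam2
    # Recover the variational distribution q(u) = N(m, S) from the natural
    # parameters for use in the lower bound and in prediction.
    S = np.linalg.inv(-2.0 * lam2)
    m = S @ lam1
    return lam1, lam2, m, S
\end{verbatim}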
The output distribution
given test input $\vec{x}^{*}$ is calculated by
\begin{align}
p(y^{*}|\vec{x}^{*},{\cal D})&\approx
\int\int p(y^{*}|f^{*})p(f^{*}|\vec{u})q(\vec{u})df^{*}d\vec{u}\nonumber\\
&={\cal N}(y^{*}|g(\vec{x}^{*})+\vec{k}_{M*}^{\top}\vec{K}_{MM}^{-1}(\vec{m}-\vec{g}_{M}),\beta^{-1}+\tilde{k}_{*}+\vec{k}_{M*}^{\top}\vec{K}_{MM}^{-1}\vec{S}\vec{K}_{MM}^{-1}\vec{k}_{M*}),
\end{align}
where $\vec{k}_{M*}$ is the covariance function column vector evaluated
between the inducing inputs and test input $\vec{x}^{*}$,
and $\tilde{k}_{*} = k(\vec{x}_{*},\vec{x}_{*})-\vec{k}_{M*}^{\top}\vec{K}_{MM}^{-1}\vec{k}_{M*}$.
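The corresponding NumPy sketch of the predictive mean and variance at a single test input follows directly; the NN mean value at the test point and all kernel quantities are assumed to be precomputed arrays.
\begin{verbatim}
# A minimal sketch of the predictive distribution at one test input x*.
# k_Mstar: shape (M,); K_MM_inv, S: shape (M, M); m, g_M: shape (M,).
import numpy as np

def predict(k_Mstar, k_starstar, K_MM_inv, m, S, g_star, g_M, beta):
    a = K_MM_inv @ k_Mstar
    mean = g_star + a @ (m - g_M)
    k_tilde = k_starstar - k_Mstar @ a
    var = 1.0 / beta + k_tilde + a @ S @ a
    return mean, var
\end{verbatim}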
\section{Experiments}
\label{sec:experiment}
\paragraph{Data}
We evaluated our proposed method by using two real-world spatio-temporal
data sets.
The first data set is the Comprehensive Climate (CC) data set
\footnote{\url{http://www-bcf.usc.edu/~liu32/data/NA-1990-20002-Monthly.csv}},
which consists of monthly climate reports for North America~\citep{bahadori2014fast,lozano2009spatial}.
We used 19 variables for 1990, such as month, latitude, longitude,
carbon dioxide and temperature,
which were interpolated on a $2.5\times2.5$ degree grid with 125 locations.
The second data set is the U.S. Historical Climatology Network (USHCN) data set
\footnote{\url{https://www.ncdc.noaa.gov/oa/climate/research/ushcn/}},
which consists of monthly climate reports at 1218 locations in U.S. for 1990.
We used the following seven variables:
month, latitude, longitude, elevation, precipitation,
minimum temperature, and maximum temperature.
The task was to estimate the distribution of a variable
given the values of the other variables as inputs;
there were 19 tasks in CC data, and seven tasks in USHCN data.
We evaluated the performance in terms of test log likelihoods.
We also used mean squared errors to evaluate point estimate performance.
We randomly selected some locations as test data.
The remaining data points were randomly split
into 90\% training data and 10\% validation data.
With CC data, we used 20\%, 50\% and 80\% of locations as test data,
and their training data sizes were 1081, 657 and 271, respectively.
With USHCN data, we used 50\%, 90\% and 95\% of locations as test data,
and their training data sizes were 6597, 1358 and 609, respectively.
\paragraph{Comparing Methods}
We compared the proposed method with GPs and NNs.
The GPs were sparse GPs inferred by stochastic variational inference.
The GPs correspond to the proposed method with a zero mean function.
With the proposed method and GPs,
we used the following RBF kernels for the kernel function,
$k(\vec{x},\vec{x}')=\alpha \exp \left(-\frac{\gamma}{2}\parallel\vec{x}-\vec{x}'\parallel^{2}\right)$,
and 100 inducing points.
We set the step size at epoch $t$ as $\ell_{t}=(t+1)^{-0.9}$
for the stochastic variational inference.
With the NNs, we used three-layer feed-forward NNs with five hidden units,
and we optimized the NN parameters $\bm{\phi}$
and precision parameter $\beta$
by maximizing the following likelihood,
$\sum_{n=1}^{N}\log {\cal N}(y_{n}|g(\vec{x}_{n};\bm{\phi}),\beta^{-1})$,
by using Adam~\citep{kingma2014adam}.
The proposed method used NNs with the same structure
for the mean function,
where the NN parameters were first optimized by maximizing the likelihood,
and then variational, kernel and NN parameters were estimated
by maximizing the variational lower bound (\ref{eq:lowerbound}) using the stochastic variational inference and Adam.
The locations of the inducing inputs were initialized by $k$-means results.
For all the methods, we set the minibatch size at 64,
and used early stopping based on the likelihood on a validation set.
\paragraph{Results}
Tables~\ref{tab:likelihood_ccds} and \ref{tab:likelihood_ushcn}
show the test log likelihoods with different missing value rates
with CC and USHCN data, respectively.
The proposed method achieved the highest average likelihoods
with both data sets.
The NN performed poorly when
many values were missing (Table~\ref{tab:likelihood_ccds}, 80\% missing).
On the other hand, since a GP is a nonparametric Bayesian method,
where the effective model complexity is automatically adjusted
depending on the number of training samples,
the GPs performed better than the NNs
with the many missing value data.
When the number of missing values was small
(Table~\ref{tab:likelihood_ccds}, 20\% missing),
the NN performed better than the GP.
The proposed method achieved the best performance
with different missing value rates by combining the advantages of NNs and GPs.
Tables~\ref{tab:mean_squared_error_ccds}
and \ref{tab:mean_squared_error_ushcn}
show the test mean squared errors with different missing value rates.
The proposed method achieved the lowest average errors with both data sets.
This result indicates that combining NNs and GPs also helps
to improve the generalization performance.
Table~\ref{tab:time} shows the computational time in seconds.
Figure~\ref{fig:predict_co2}
shows the prediction with its confidence interval obtained
by the proposed method, GP and NN.
The NN gives fixed confidence intervals at all test points,
and some true values are located outside the confidence intervals.
On the other hand, the proposed method flexibly changes confidence intervals depending on the test points.
The confidence intervals with the GP differ across different test points as with the proposed method.
However, they are wider than those of the proposed method,
since the mean functions are fixed at zero.
\begin{table}[t]
\centering
\caption{Test log likelihoods provided by the proposed method, GP, and NN with CC data. The bottom row shows the values averaged over all variables. Values in a bold typeface are statistically better (at the 5\% level) than those in normal typeface as indicated by a paired t-test.}
\label{tab:likelihood_ccds}
{\tabcolsep=0.4em
\begin{tabular}{|l|rrr||rrr||rrr|}
\hline
Missing & \multicolumn{3}{|c||}{20\%} & \multicolumn{3}{c||}{50\%} & \multicolumn{3}{c|}{80\%} \\
\hline
Method & Proposed & GP & NN & Proposed & GP & NN & Proposed & GP & NN \\
\hline
\rotatebox{0}{ MON } & $\mathbf{1.75}$ & $0.37$ & $1.13$ & $\mathbf{1.69}$ & $0.20$ & $1.09$ & $\mathbf{-0.58}$ & $\mathbf{-0.03}$ & $\mathbf{-0.64}$ \\
\rotatebox{0}{ LAT } & $\mathbf{1.85}$ & $0.64$ & $1.22$ & $\mathbf{1.88}$ & $0.63$ & $1.18$ & $\mathbf{0.88}$ & $0.40$ & $0.11$ \\
\rotatebox{0}{ LON } & $\mathbf{-0.53}$ & $\mathbf{-0.57}$ & $-0.70$ & $\mathbf{-0.66}$ & $\mathbf{-0.65}$ & $-0.77$ & $\mathbf{-0.82}$ & $\mathbf{-0.84}$ & $-1.02$ \\
\rotatebox{0}{ CO2 } & $\mathbf{1.47}$ & $0.66$ & $1.01$ & $\mathbf{1.35}$ & $0.45$ & $0.88$ & $\mathbf{0.70}$ & $0.29$ & $-0.14$ \\
\rotatebox{0}{ CH4 } & $\mathbf{1.37}$ & $0.87$ & $0.76$ & $\mathbf{1.09}$ & $0.81$ & $0.65$ & $\mathbf{0.70}$ & $0.51$ & $0.23$ \\
\rotatebox{0}{ CO } & $\mathbf{1.28}$ & $0.30$ & $0.82$ & $\mathbf{1.32}$ & $0.35$ & $0.85$ & $\mathbf{0.50}$ & $0.11$ & $-0.01$ \\
\rotatebox{0}{ H2 } & $\mathbf{1.08}$ & $0.42$ & $0.64$ & $\mathbf{0.92}$ & $0.41$ & $0.41$ & $\mathbf{0.46}$ & $\mathbf{0.26}$ & $-0.36$ \\
\rotatebox{0}{ WET } & $\mathbf{-0.59}$ & $-0.68$ & $-0.63$ & $\mathbf{-0.60}$ & $-0.69$ & $-0.66$ & $\mathbf{-0.86}$ & $\mathbf{-0.76}$ & $\mathbf{-0.96}$ \\
\rotatebox{0}{ CLD } & $\mathbf{-0.30}$ & $-0.39$ & $-0.39$ & $\mathbf{-0.33}$ & $-0.38$ & $-0.52$ & $\mathbf{-0.56}$ & $\mathbf{-0.51}$ & $-0.71$ \\
\rotatebox{0}{ VAP } & $\mathbf{0.53}$ & $0.38$ & $0.39$ & $\mathbf{0.43}$ & $0.32$ & $0.13$ & $\mathbf{0.08}$ & $\mathbf{0.16}$ & $-0.31$ \\
\rotatebox{0}{ PRE } & $\mathbf{-0.74}$ & $-0.90$ & $\mathbf{-0.76}$ & $\mathbf{-0.78}$ & $-0.84$ & $-0.82$ & $\mathbf{-0.94}$ & $\mathbf{-0.91}$ & $\mathbf{-0.99}$ \\
\rotatebox{0}{ FRS } & $\mathbf{0.55}$ & $0.44$ & $0.45$ & $\mathbf{0.49}$ & $\mathbf{0.43}$ & $0.36$ & $\mathbf{0.38}$ & $\mathbf{0.33}$ & $\mathbf{0.37}$ \\
\rotatebox{0}{ DTR } & $\mathbf{2.70}$ & $1.84$ & $2.53$ & $2.26$ & $1.82$ & $\mathbf{2.60}$ & $\mathbf{1.78}$ & $1.61$ & $\mathbf{1.33}$ \\
\rotatebox{0}{ TMN } & $\mathbf{3.98}$ & $3.03$ & $2.68$ & $\mathbf{3.41}$ & $3.02$ & $2.19$ & $\mathbf{2.77}$ & $\mathbf{2.85}$ & $2.00$ \\
\rotatebox{0}{ TMP } & $\mathbf{3.76}$ & $3.09$ & $3.16$ & $\mathbf{3.63}$ & $3.05$ & $2.99$ & $2.74$ & $\mathbf{2.92}$ & $2.10$ \\
\rotatebox{0}{ TMX } & $\mathbf{3.81}$ & $3.06$ & $3.13$ & $\mathbf{3.76}$ & $3.06$ & $3.22$ & $\mathbf{2.86}$ & $\mathbf{2.88}$ & $2.33$ \\
\rotatebox{0}{ GLO } & $\mathbf{0.85}$ & $0.78$ & $0.78$ & $\mathbf{0.76}$ & $\mathbf{0.71}$ & $0.67$ & $\mathbf{0.51}$ & $\mathbf{0.56}$ & $\mathbf{0.36}$ \\
\rotatebox{0}{ ETR } & $2.98$ & $2.52$ & $\mathbf{3.37}$ & $2.95$ & $2.48$ & $\mathbf{3.26}$ & $\mathbf{2.54}$ & $2.24$ & $\mathbf{2.45}$ \\
\rotatebox{0}{ ETRN } & $\mathbf{3.12}$ & $2.42$ & $\mathbf{3.08}$ & $2.78$ & $2.34$ & $\mathbf{3.08}$ & $2.53$ & $2.10$ & $\mathbf{2.77}$ \\
\hline
\rotatebox{0}{ Average } & $\mathbf{1.52}$ & $0.96$ & $1.19$ & $\mathbf{1.39}$ & $0.92$ & $1.09$ & $\mathbf{0.82}$ & $\mathbf{0.75}$ & $0.47$ \\
\hline
\end{tabular}}
\end{table}
\begin{table}[t]
\centering
\caption{Test log likelihoods provided by the proposed method, GP, and NN with USHCN data.}
\label{tab:likelihood_ushcn}
{\tabcolsep=0.4em
\begin{tabular}{|l|rrr||rrr||rrr|}
\hline
Missing & \multicolumn{3}{|c||}{50\%} & \multicolumn{3}{c||}{90\%} & \multicolumn{3}{c|}{95\%} \\
\hline
Method & Proposed & GP & NN & Proposed & GP & NN & Proposed & GP & NN \\
\hline
\rotatebox{0}{ MON } & $\mathbf{-1.33}$ & $-1.39$ & $\mathbf{-1.33}$ & $\mathbf{-1.35}$ & $-1.38$ & $-1.36$ & $\mathbf{-1.38}$ & $\mathbf{-1.39}$ & $-1.40$ \\
\rotatebox{0}{ LAT } & $\mathbf{-0.73}$ & $-1.04$ & $\mathbf{-0.73}$ & $\mathbf{-0.73}$ & $-0.94$ & $-0.76$ & $\mathbf{-0.75}$ & $-0.96$ & $-0.78$ \\
\rotatebox{0}{ LON } & $-1.12$ & $-1.25$ & $\mathbf{-1.10}$ & $\mathbf{-1.14}$ & $-1.19$ & $\mathbf{-1.13}$ & $\mathbf{-1.16}$ & $-1.23$ & $\mathbf{-1.16}$ \\
\rotatebox{0}{ ELE } & $\mathbf{-1.16}$ & $-1.27$ & $\mathbf{-1.14}$ & $\mathbf{-1.16}$ & $-1.25$ & $-1.19$ & $\mathbf{-1.21}$ & $\mathbf{-1.28}$ & $-1.27$ \\
\rotatebox{0}{ PRE } & $\mathbf{1.31}$ & $0.93$ & $1.23$ & $\mathbf{1.27}$ & $0.97$ & $1.13$ & $\mathbf{1.19}$ & $0.89$ & $0.91$ \\
\rotatebox{0}{ TMIN } & $\mathbf{1.04}$ & $0.90$ & $0.83$ & $\mathbf{0.98}$ & $0.89$ & $0.74$ & $\mathbf{0.86}$ & $0.81$ & $0.68$ \\
\rotatebox{0}{ TMAX } & $\mathbf{0.75}$ & $0.26$ & $0.71$ & $\mathbf{0.71}$ & $0.29$ & $0.60$ & $\mathbf{0.55}$ & $0.25$ & $0.45$ \\
\hline
\rotatebox{0}{ Average } & $\mathbf{-0.18}$ & $-0.41$ & $-0.22$ & $\mathbf{-0.20}$ & $-0.37$ & $-0.28$ & $\mathbf{-0.27}$ & $-0.41$ & $-0.37$ \\
\hline
\end{tabular}}
\end{table}
\begin{table}[t]
\centering
\caption{Test mean squared error provided by the proposed method, GP, and NN with CC data.}
\label{tab:mean_squared_error_ccds}
{\tabcolsep=0.4em
\begin{tabular}{|l|rrr||rrr||rrr|}
\hline
Missing & \multicolumn{3}{|c||}{20\%} & \multicolumn{3}{c||}{50\%} & \multicolumn{3}{c|}{80\%} \\
\hline
Method & Proposed & GP & NN & Proposed & GP & NN & Proposed & GP & NN \\
\hline
\rotatebox{0}{ MON } & $\mathbf{0.002}$ & $0.027$ & $0.006$ & $\mathbf{0.003}$ & $0.029$ & $0.006$ & $\mathbf{0.012}$ & $0.049$ & $0.020$ \\
\rotatebox{0}{ LAT } & $\mathbf{0.002}$ & $0.013$ & $0.008$ & $\mathbf{0.002}$ & $0.015$ & $0.007$ & $\mathbf{0.012}$ & $0.026$ & $0.036$ \\
\rotatebox{0}{ LON } & $\mathbf{0.157}$ & $\mathbf{0.163}$ & $0.230$ & $\mathbf{0.203}$ & $\mathbf{0.204}$ & $0.258$ & $\mathbf{0.305}$ & $\mathbf{0.320}$ & $0.396$ \\
\rotatebox{0}{ CO2 } & $\mathbf{0.003}$ & $0.012$ & $0.008$ & $\mathbf{0.004}$ & $0.018$ & $0.010$ & $\mathbf{0.014}$ & $0.031$ & $0.025$ \\
\rotatebox{0}{ CH4 } & $\mathbf{0.004}$ & $0.010$ & $0.013$ & $\mathbf{0.007}$ & $0.012$ & $0.015$ & $\mathbf{0.016}$ & $0.024$ & $0.026$ \\
\rotatebox{0}{ CO } & $\mathbf{0.005}$ & $0.027$ & $0.012$ & $\mathbf{0.004}$ & $0.027$ & $0.011$ & $\mathbf{0.028}$ & $0.048$ & $\mathbf{0.050}$ \\
\rotatebox{0}{ H2 } & $\mathbf{0.007}$ & $0.026$ & $0.017$ & $\mathbf{0.011}$ & $0.031$ & $0.023$ & $\mathbf{0.030}$ & $0.049$ & $0.054$ \\
\rotatebox{0}{ WET } & $\mathbf{0.189}$ & $0.235$ & $\mathbf{0.197}$ & $\mathbf{0.196}$ & $0.234$ & $0.209$ & $\mathbf{0.274}$ & $\mathbf{0.256}$ & $0.283$ \\
\rotatebox{0}{ CLD } & $\mathbf{0.105}$ & $0.125$ & $0.115$ & $\mathbf{0.120}$ & $0.130$ & $0.137$ & $\mathbf{0.153}$ & $\mathbf{0.151}$ & $0.165$ \\
\rotatebox{0}{ VAP } & $\mathbf{0.020}$ & $0.027$ & $0.024$ & $\mathbf{0.026}$ & $0.032$ & $0.030$ & $\mathbf{0.040}$ & $\mathbf{0.038}$ & $0.046$ \\
\rotatebox{0}{ PRE } & $\mathbf{0.253}$ & $0.327$ & $\mathbf{0.257}$ & $\mathbf{0.270}$ & $0.300$ & $0.280$ & $\mathbf{0.345}$ & $\mathbf{0.331}$ & $\mathbf{0.377}$ \\
\rotatebox{0}{ FRS } & $\mathbf{0.018}$ & $0.025$ & $0.022$ & $\mathbf{0.021}$ & $0.026$ & $0.024$ & $\mathbf{0.027}$ & $0.032$ & $\mathbf{0.028}$ \\
\rotatebox{0}{ DTR } & $\mathbf{0.000}$ & $0.002$ & $\mathbf{0.000}$ & $0.001$ & $0.002$ & $\mathbf{0.000}$ & $\mathbf{0.002}$ & $\mathbf{0.003}$ & $\mathbf{0.002}$ \\
\rotatebox{0}{ TMN } & $\mathbf{0.000}$ & $0.000$ & $0.000$ & $\mathbf{0.000}$ & $\mathbf{0.000}$ & $0.001$ & $\mathbf{0.000}$ & $\mathbf{0.000}$ & $0.001$ \\
\rotatebox{0}{ TMP } & $\mathbf{0.000}$ & $0.000$ & $0.000$ & $\mathbf{0.000}$ & $0.000$ & $0.000$ & $0.000$ & $\mathbf{0.000}$ & $0.001$ \\
\rotatebox{0}{ TMX } & $\mathbf{0.000}$ & $0.000$ & $0.000$ & $\mathbf{0.000}$ & $0.000$ & $0.000$ & $\mathbf{0.000}$ & $\mathbf{0.000}$ & $0.001$ \\
\rotatebox{0}{ GLO } & $\mathbf{0.011}$ & $0.013$ & $0.012$ & $\mathbf{0.013}$ & $0.015$ & $0.014$ & $\mathbf{0.019}$ & $\mathbf{0.019}$ & $\mathbf{0.020}$ \\
\rotatebox{0}{ ETR } & $0.000$ & $0.000$ & $\mathbf{0.000}$ & $0.000$ & $0.000$ & $\mathbf{0.000}$ & $\mathbf{0.000}$ & $0.001$ & $\mathbf{0.000}$ \\
\rotatebox{0}{ ETRN } & $\mathbf{0.000}$ & $0.001$ & $\mathbf{0.000}$ & $\mathbf{0.000}$ & $0.001$ & $\mathbf{0.000}$ & $\mathbf{0.000}$ & $0.001$ & $\mathbf{0.000}$ \\
\hline
\rotatebox{0}{ Average } & $\mathbf{0.041}$ & $0.054$ & $0.048$ & $\mathbf{0.046}$ & $0.057$ & $0.054$ & $\mathbf{0.067}$ & $0.073$ & $0.081$ \\
\hline
\end{tabular}}
\end{table}
\begin{table}[t]
\centering
\caption{Test mean squared error provided by the proposed method, GP, and NN with USHCN data.}
\label{tab:mean_squared_error_ushcn}
{\tabcolsep=0.4em
\begin{tabular}{|l|rrr||rrr||rrr|}
\hline
Missing & \multicolumn{3}{|c||}{50\%} & \multicolumn{3}{c||}{90\%} & \multicolumn{3}{c|}{95\%} \\
\hline
Method & Proposed & GP & NN & Proposed & GP & NN & Proposed & GP & NN \\
\hline
\rotatebox{0}{ MON } & $\mathbf{0.834}$ & $0.936$ & $\mathbf{0.838}$ & $\mathbf{0.864}$ & $0.924$ & $0.880$ & $\mathbf{0.925}$ & $\mathbf{0.935}$ & $\mathbf{0.938}$ \\
\rotatebox{0}{ LAT } & $\mathbf{0.251}$ & $0.457$ & $\mathbf{0.253}$ & $\mathbf{0.252}$ & $0.383$ & $0.265$ & $\mathbf{0.262}$ & $0.389$ & $0.273$ \\
\rotatebox{0}{ LON } & $0.547$ & $0.717$ & $\mathbf{0.524}$ & $0.573$ & $0.648$ & $\mathbf{0.564}$ & $\mathbf{0.603}$ & $0.682$ & $\mathbf{0.588}$ \\
\rotatebox{0}{ ELE } & $\mathbf{0.581}$ & $0.726$ & $\mathbf{0.577}$ & $\mathbf{0.586}$ & $0.682$ & $0.617$ & $\mathbf{0.655}$ & $\mathbf{0.808}$ & $0.702$ \\
\rotatebox{0}{ PRE } & $\mathbf{0.004}$ & $0.012$ & $0.005$ & $\mathbf{0.005}$ & $0.012$ & $0.006$ & $\mathbf{0.006}$ & $0.013$ & $0.007$ \\
\rotatebox{0}{ TMIN } & $\mathbf{0.008}$ & $0.012$ & $0.011$ & $\mathbf{0.009}$ & $0.012$ & $0.013$ & $\mathbf{0.012}$ & $0.014$ & $0.014$ \\
\rotatebox{0}{ TMAX } & $\mathbf{0.013}$ & $0.042$ & $0.014$ & $\mathbf{0.015}$ & $0.041$ & $0.017$ & $\mathbf{0.022}$ & $0.044$ & $\mathbf{0.022}$ \\
\hline
\rotatebox{0}{ Average } & $\mathbf{0.320}$ & $0.415$ & $\mathbf{0.317}$ & $\mathbf{0.329}$ & $0.386$ & $0.338$ & $\mathbf{0.355}$ & $0.412$ & $0.364$ \\
\hline
\end{tabular}}
\end{table}
\begin{table}[t]
\centering
\caption{Computational time for inference in seconds.}
\label{tab:time}
\begin{tabular}{|l|rrr|rrr|}
\hline
Data & \multicolumn{3}{|c|}{CC} & \multicolumn{3}{|c|}{USHCN} \\
\hline
Missing & 20\% & 50\% & 80\% & 50\% & 90\% & 95\% \\
\hline
Proposed & $1409$ & $1020$ & $559$ & $5471$ & $1940$ & $1374$ \\
GP & $1163$ & $873$ & $463$ & $5074$ & $1854$ & $1388$ \\
NN & $334$ & $250$ & $109$ & $1746$ & $404$ & $226$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
{\tabcolsep=-0.5em
\begin{tabular}{ccc}
\includegraphics[width=14.5em]{nayeartimevar02_3_0_0.png}&
\includegraphics[width=14.5em]{nayeartimevar02_gp3_0_0.png}&
\includegraphics[width=14.5em]{nayeartimevar02_nn3_0_0.png}\\
(a) Proposed & (b) GP & (c) NN \\
\end{tabular}}
\caption{Prediction of held-out CO2 values using training data with 80\% missing values at a test location with CC data. The horizontal axis is month, the vertical axis is CO2, the blue bar is the 95\% confidence interval of the prediction, and the red point is the true value.}
\label{fig:predict_co2}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a simple method for combining neural networks
and Gaussian processes.
With the proposed method, neural networks are used as the mean function of Gaussian processes.
We presented a scalable learning procedure based on stochastic gradient descent and stochastic variational inference.
With experiments using two real-world spatio-temporal data sets,
we demonstrated that the proposed method achieved better uncertainty estimation and generalization performance than neural networks and Gaussian processes.
There are several avenues that can be pursued as future work.
In our experiments, we used feed-forward neural networks.
We would like to use other types of neural networks, such as convolutional and recurrent neural networks.
Moreover, we plan to analyze the sensitivity with respect to
the structure of the neural networks, the number of inducing points and the choice of kernels.
Finally, the neural network mean function itself
could be inferred using Bayesian methods.
\bibliographystyle{abbrvnat}
\begin{small}
{\bf Background:}
Horowitz {\it et al.} \cite{PRC.63.025501} proposed a direct measurement
for neutron skin $r_{\rm skin}$.
The measurement consists of parity-violating weak scattering and
elastic electron scattering.
The neutron radius $r_n$ is determined from the former experiment, whereas
the proton radius $r_p$ is from the latter.
The direct measurement was applied for $^{208}$Pb and $^{48}$Ca.
As for $^{208}$Pb, the PREX collaboration presented
\begin{equation}
r_{\rm skin}^{208}({\rm PREX2}) = 0.283\pm 0.071\,{\rm fm},
\end{equation}
combining the original Lead Radius EXperiment (PREX)
result with the updated PREX2 result~\cite{Adhikari:2021phr,PRL.108.112502,PRC.85.032501}.
As for $^{48}$Ca, the CREX group presented~\cite{CREX:2022kgg}
\begin{eqnarray}
r_{\rm skin}^{48}({\rm CREX})&=&0.121 \pm 0.026\ {\rm (exp)} \pm 0.024\ {\rm (model)}
\notag \\
&=&0.071 \sim 0.171~{\rm fm}.
\label{CREX-value}
\end{eqnarray}
The $r_{\rm skin}^{208}({\rm PREX2}) $ and the $r_{\rm skin}^{48}({\rm CREX})$ are
most reliable at the present
stage, and provide crucial tests for the equation of state (EoS) of nuclear matter
\cite{PRC.102.051303,AJ.891.148,AP.411.167992,EPJA.56.63,JPG.46.093003}
as well as nuclear structure.
Reed {\it et al.} \cite{arXiv.2101.03193}
reported a value of the slope parameter of the EoS
and examined the impact of such a stiff symmetry energy
on some critical neutron-star observables.
The $r_{\rm skin}^{208}({\rm PREX2}) $ value
is considerably larger than the other experimental
values that are model-dependent~\cite{PRL.87.082501,PRC.82.044611,PRL.107.062502,PRL.112.242502}.
Meanwhile, the nonlocal dispersive-optical-model
(DOM) analysis of ${}^{208}{\rm Pb}$ yields
$r_{\rm skin}^{\rm DOM} =0.25 \pm 0.05$ fm \cite{PRC.101.044303}.
The value is consistent with $r_{\rm skin}^{208}({\rm PREX2})$.
Using the chiral (Kyushu) $g$-matrix folding model, we
determine $r_{\rm skin}^{208}({\rm exp})=0.278 \pm 0.035$~fm from reaction cross section
$\sigma_{\rm R}$ in $30 \leq E_{\rm lab} \leq 100$~MeV~\cite{Tagami:2020bee}.
In addition, for $^{4}$He+ $^{208}$Pb scattering, we determine
$r_{\rm skin}^{208}({\rm exp})=0.416\pm 0.146$~fm
from measured $\sigma_R$ in $E_{\rm lab} = 30 \sim 50$ MeV~\cite{Matsuzaki:2021hdm}.
These values are consistent with $r_{\rm skin}^{208}({\rm PREX2})$.
For $^{12}$C scattering on $^{9}$Be, $^{12}$C, $^{27}$Al targets,
we tested the reliability of the Kyushu $g$-matrix folding model and found that
the folding model is reliable in $30 \lesssim E_{\rm lab} \lesssim 100$~MeV and $250 \lesssim E_{\rm lab} \lesssim 400$~MeV~\cite{PRC.101.014620}. Furthermore, we mentioned that the difference between
the $t$-matrix and the $g$-matrix is small in $E_{\rm lab} \gtrsim 400$~MeV. Since the cutoff of the chiral nucleon-nucleon (NN) interaction is 550~MeV, the chiral NN $t$-matrix is useful in
$400 \lesssim E_{\rm lab} \lesssim 500$~MeV. For $E_{\rm lab} \gtrsim 400$~MeV,
the most famous $t$-matrix is the Love-Franey (LF) $t$-matrix~\cite{LF}.
As for $^{208}$Pb, it is possible to determine a reliable neutron radius $r_n({\rm PREX2})=5.727 \pm 0.071$~fm and
a matter radius $r_m({\rm PREX2})=5.617 \pm 0.044$~fm from $r_p({\rm exp})=5.444$~fm~\cite{PRC.90.067304}
of electron scattering and $r_{\rm skin}^{208}({\rm PREX2})$.
The $r_p$ calculated with D1S-Gogny-HFB (D1S-GHFB) with the angular momentum projection (AMP) agrees with $r_p({\rm exp})$.
The neutron density calculated with D1S-GHFB+AMP is scaled so as to $r_n^{\rm scaling}=5.727$ fm.
In Ref.~\cite{WAKASA2021104749}, we showed that
the LF $t$-matrix folding model with the scaled neutron density and the D1S-GHFB+AMP proton
one reproduces
the data $\sigma_R({\rm exp})$~\cite{Dietrich:2002swm,Nakano:2021dau}
at $E_{\rm lab} = 534.1, 549, 806$~MeV within total error bars. Nevertheless, we did not determine
$r_{\rm skin}^{208}$ from the data at $E_{\rm lab} = 534.1, 549, 806$~MeV.
As for $^{48}$Ca, an indirect measurement is made with the high-resolution $E1$ polarizability experiment (E1{\rm pE})~\cite{Birkhan:2016qkr}.
The skin value $r_{\rm skin}^{48}(E1{\rm pE}) =0.14 \sim 0.20$~fm
is consistent with $r_{\rm skin}^{48}({\rm CREX})$.
Using $^{4}$He+ $^{40}$Ca scattering in $E_{\rm lab} = 30 \sim 50$ MeV,
we determine matter radius $r_{\rm m}^{\rm 40}({\rm exp})$
from measured $\sigma_R$~\cite{Matsuzaki:2021hdm},
whereas Zenihiro {\it et al.} deduce neutron radii $r_{\rm n}^{48,40}({\rm exp})$ from the angular
distributions of the cross sections and analyzing powers of polarized proton elastic scattering at
$E_{\rm lab} = 295$~MeV~\cite{Zenihiro:2018rmz}.
The $r_{\rm skin}^{48}({\rm exp})=0.168^{+0.025}_{-0.028}$~fm determined by Zenihiro {\it et al.}
is consistent with $r_{\rm skin}^{48}({\rm CREX})$.
{\bf Aim:}
The first aim is to determine $r_{\rm skin}^{\rm 208}({\rm exp})$
from the data~\cite{Dietrich:2002swm,Nakano:2021dau} on $\sigma_{\rm R}$
of p+ $^{208}$Pb scattering at $E_{\rm lab} = 534.1, 549, 806$~MeV by
using the LF $t$-matrix folding model.
The second aim is to determine $r_{\rm skin}^{\rm 48}({\rm exp})$ with
the result $r_{\rm m}^{\rm 40}({\rm exp})=3.361 \pm 0.075$~fm~\cite{Matsuzaki:2021hdm}
of $^{4}$He+$^{40}$Ca scattering in $E_{\rm lab} = 30 \sim 50$ MeV and
the difference $\Delta \equiv r_{\rm m}^{48}({\rm exp})- r_{\rm m}^{40}({\rm exp})$,
since there is no data on $\sigma_{\rm R}$ for $^{4}$He+$^{48}$Ca scattering.
The derivation of $\Delta$ is shown below.
{\bf Method for determining $r_{\rm skin}^{\rm 48}({\rm exp})$:}
Zenihiro {\it et al.} determine neutron radii $r_{\rm n}^{40}({\rm exp})=3.375^{+0.022}_{-0.023}$~fm and
$r_{\rm n}^{48}({\rm exp})=3.555^{+0.025}_{-0.028}$~fm from the angular distributions of the cross sections and the analyzing powers of proton elastic scattering~\cite{Zenihiro:2018rmz}.
We can obtain the proton radii for $^{40, 48}$Ca with the isotope shift method based on the electron scattering~\cite{ADNDT.99.69}, i.e., $r_{\rm p}^{40}({\rm exp})=3.378$~fm and $r_{\rm p}^{48}({\rm exp})=3.385$~fm.
Using these values, we can obtain
$r_{\rm m}^{40}({\rm exp})=3.377^{+0.022}_{-0.023}$~fm,
$r_{\rm m}^{48}({\rm exp})=3.485^{+0.025}_{-0.028}$~fm.
From the central values of $r_{\rm m}^{40}({\rm exp})$ and $r_{\rm m}^{48}({\rm exp})$, we obtain
the difference $\Delta \equiv r_{\rm m}^{48}({\rm exp})- r_{\rm m}^{40}({\rm exp})$=0.109~fm.
In Ref.~\cite{Matsuzaki:2021hdm}, meanwhile,
we determined $r_{\rm m}^{\rm 40}({\rm exp})=3.361 \pm 0.075$~fm
from measured $\sigma_{\rm R}$ of $^{4}$He+ $^{40}$Ca scattering in $E_{\rm lab} =30 \sim 50$~MeV.
We can then obtain $r_{\rm m}^{48}({\rm exp})=3.470 \pm 0.075$~fm from
$r_{\rm m}^{\rm 40}({\rm exp})=3.361 \pm 0.075$~fm and $\Delta$.
The $r_{\rm m}^{48}({\rm exp})=3.470 \pm 0.075$~fm and
$r_{\rm p}^{48}({\rm exp})=3.385$~fm lead to $r_{\rm skin}^{\rm 48}({\rm exp})=0.144 \pm 0.075$~fm.
{\bf Method for determining $r_{\rm skin}^{\rm 208}({\rm exp})$:}
We use the folding model based on the Love-Franey (LF) $t$-matrix~\cite{LF} to determine
$r_{\rm skin}^{\rm 208}({\rm exp})$ from data $\sigma_{\rm R}({\rm exp})$~\cite{Dietrich:2002swm,Nakano:2021dau}
at $E_{\rm lab} = 534.1, 549, 806$ MeV.
We have already applied the LF $t$-matrix folding model for p+$^{4,6,8}$He scattering at 700~MeV to determine
matter radii $r_m({\rm exp})$ from the high-accuracy data~\cite{Neumaier:2002eay}.
The results are $r_{m}({\rm exp})=2.48(3), 2.53(2)$~fm and $r_{\rm skin}=0.78(3), 0.82(2)$~fm
for $^{6,8}$He~\cite{WAKASA2022105329}.
Now we show the formulation of the LF $t$-matrix folding model below.
For proton-nucleus scattering, the potential $U({\bfi R})$
between an incident proton (p) and a target (${\rm T}$) has the direct and exchange parts,
$U^{\rm DR}$ and $U^{\rm EX}$, as
\begin{subequations}
\begin{eqnarray}
U^{\rm DR}({\bfi R}) & = &
\sum_{\mu,\nu}\int \rho^{\nu}_{\rm T}({\bfi r}_{\rm T})
t^{\rm DR}_{\mu\nu}(s;\rho_{\mu\nu}) d
{\bfi r}_{\rm T}\ ,\label{eq:UD} \\
U^{\rm EX}({\bfi R}) & = &
\sum_{\mu,\nu}
\int \rho^{\nu}_{\rm T}({\bfi r}_{\rm T},{\bfi r}_{\rm T}+{\bfi s}) \nonumber \\
& &
\times t^{\rm EX}_{\mu\nu}(s;\rho_{\mu\nu}) \exp{[-i{\bfi K}({\bfi R}) \cdot {\bfi s}/M]}
d {\bfi r}_{\rm T}\,~~
\label{eq:UEX}
\end{eqnarray}
\end{subequations}
where ${\bfi R}$ is the relative coordinate between p and T,
${\bfi s}=-{\bfi r}_{\rm T}+{\bfi R}$, and ${\bfi r}_{\rm T}$ is
the coordinate of the interacting nucleon from T.
Each of $\mu$ and $\nu$ denotes the $z$-component of isospin.
The non-local $U^{\rm EX}$ has been localized in Eq.~\eqref{eq:UEX}
with the local semi-classical approximation~\cite{Brieva-Rook-1,Brieva-Rook-2,Brieva-Rook-3}
where ${\bfi K}({\bfi R})$ is the local momentum between p and T,
and $M= A/(1 +A)$ for the mass number $A$ of T;
see Ref.~\cite{Minomo:2009ds} for the validity of the localization.
The direct and exchange parts, $t^{\rm DR}_{\mu\nu}$ and
$t^{\rm EX}_{\mu\nu}$, of the $t$ matrix are described by
\begin{align}
t_{\mu\nu}^{\rm DR}(s)
&=
\displaystyle{\frac{1}{4} \sum_S} \hat{S}^2 t_{\mu\nu}^{S1}
(s) \hspace*{0.1cm} \hspace*{0.1cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = \pm 1,
\\
t_{\mu\nu}^{\rm DR}(s)
&=
\displaystyle{\frac{1}{8} \sum_{S,T}}
\hat{S}^2 t_{\mu\nu}^{ST}(s)
\hspace*{0.1cm} \hspace*{0.1cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = 0,
\\
t_{\mu\nu}^{\rm EX}(s)
&=
\displaystyle{\frac{1}{4} \sum_S} (-1)^{S+1}
\hat{S}^2 t_{\mu\nu}^{S1} (s)
\hspace*{0.1cm} \hspace*{0.1cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = \pm 1,
\\
t_{\mu\nu}^{\rm EX}(s)
&=
\displaystyle{\frac{1}{8} \sum_{S,T}} (-1)^{S+T}
\hat{S}^2 t_{\mu\nu}^{ST}(s)
\hspace*{0.1cm} \hspace*{0.1cm}
{\rm for} \hspace*{0.1cm} \mu+\nu = 0
,
\end{align}
where $\hat{S} = {\sqrt {2S+1}}$ and $t_{\mu\nu}^{ST}$ are
the spin-isospin components of the $t$-matrix interaction.
As proton and neutron densities, $\rho^{\nu=-1/2}_{\rm T}$ and $\rho^{\nu=1/2}_{\rm T}$,
we use D1S-GHFB+AMP; see Ref.~\cite{Tagami:2019svt} for the formulation.
As a way of taking the center-of-mass correction to the densities into account,
we adopt the method of Ref.~\cite{Sumi:2012fr}.
We scale the D1S-GHFB+AMP neutron density so that
the radius $r_n({\rm scaling})$ of the scaled density can reproduce $\sigma_{\rm R}({\rm exp})$,
since the $r_p$ calculated with the D1S-GHFB+AMP density agrees with
$r_p^{\rm exp}=5.444$~fm~\cite{PRC.90.067304} of electron scattering.
The same procedure is taken for the D1M-GHFB+AMP neutron density,
where D1M~\cite{Goriely:2009zz,Robledo:2018cdj} is an improved version of D1S and
the proton radius calculated with D1M-GHFB+AMP agrees with $r_p^{\rm exp}=5.444$~fm.
Our scaling procedure is explained below.
The scaled density $\rho_{\rm scaling}({\bfi r})$ is determined
from the original (D1S-GHFB+AMP or D1M-GHFB+AMP) one $\rho({\bfi r})$ as
\begin{eqnarray}
\rho_{\rm scaling}({\bfi r}) \equiv \frac{1}{\alpha^3}\rho({\bfi r}/\alpha), ~{\bfi r}_{\rm scaling} \equiv {\bfi r}/\alpha
\label{eq:scaling}
\end{eqnarray}
with a scaling factor
\begin{eqnarray}
\alpha=\sqrt{ \frac{\langle {\bfi r}^2 \rangle_{\rm scaling}}{\langle {\bfi r}^2 \rangle}} .
\end{eqnarray}
In Eq.~\eqref{eq:scaling}, we have replaced ${\bfi r}$ by ${\bfi r}/\alpha$ in the original density.
As a result, the ${\bfi r}$ dependence
of $\rho_{\rm scaling}({\bfi r})$ is different from that of $\rho({\bfi r})$.
We have multiplied the original density by $\alpha^{-3}$
in order to normalize the scaled density.
Here, $\sqrt{\langle {\bfi r}^2 \rangle_{\rm scaling}}$ denotes the root-mean-square
radius of $\rho_{\rm scaling}({\bfi r})$.
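For completeness, one can verify directly that the scaling~\eqref{eq:scaling} rescales the mean-square radius by $\alpha^{2}$. Substituting ${\bfi r}'={\bfi r}/\alpha$, so that $d{\bfi r}=\alpha^{3}\,d{\bfi r}'$,
\begin{eqnarray}
\int r^{2}\,\rho_{\rm scaling}({\bfi r})\,d{\bfi r}
=\frac{1}{\alpha^{3}}\int r^{2}\,\rho({\bfi r}/\alpha)\,d{\bfi r}
=\alpha^{2}\int r'^{2}\,\rho({\bfi r}')\,d{\bfi r}' ,
\nonumber
\end{eqnarray}
while the normalization $\int \rho_{\rm scaling}({\bfi r})\,d{\bfi r}=\int \rho({\bfi r})\,d{\bfi r}$ is preserved. Hence $\langle {\bfi r}^2 \rangle_{\rm scaling}=\alpha^{2}\langle {\bfi r}^2 \rangle$, consistent with the definition of the scaling factor $\alpha$.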
{\bf Results for $r_{\rm skin}^{\rm 208}({\rm exp})$:}
Figure~\ref{Fig-RXsec-p+Pb} shows $\sigma_R $ as a function of $E_{\rm lab}$.
The results of D1S-GHFB+AMP and D1M-GHFB+AMP
are near the lower bound of data~\cite{Dietrich:2002swm,Nakano:2021dau}
in $500 \leq E_{\rm lab} \leq 900$~MeV.
The result of D1S-GHFB+AMP is better than that of
D1M-GHFB+AMP.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.5\textwidth,clip]{208Pb-D1M.eps}
\caption{
$E_{\rm lab}$ dependence of reaction cross sections $\sigma_{\rm R}$
for $p$+$^{208}$Pb scattering.
Open circles stand for the results of the LF $t$-matrix folding model with the D1S-GHFB+AMP densities, whereas open triangles correspond to that with the D1M-GHFB+AMP densities.
The data are taken from Refs.~\cite{Dietrich:2002swm,Nakano:2021dau}.
}
\label{Fig-RXsec-p+Pb}
\end{center}
\end{figure}
Now we scale the D1S-GHFB+AMP neutron density so that
the result of the LF $t$ matrix folding model agrees with the data~\cite{Dietrich:2002swm,Nakano:2021dau}.
In the present case, the neutron scaling factor is $\alpha=1.017$.
Since the resulting $r_n({\rm exp})$ depends on $E_{\rm lab}$, we take
the weighted mean and its total error for $E_{\rm lab} = 534.1, 549, 806$ MeV.
Neutron and matter radii thus obtained are $r_n({\rm exp})=5.768 \pm 0.047 $~fm and
$r_m({\rm exp})=5.643 \pm 0.047 $~fm, leading to
$r_{\rm skin}^{\rm 208}({\rm exp})=0.324 \pm 0.047 $~fm.
The same procedure is taken for D1M-GHFB+AMP.
This leads to $r_{\rm skin}^{\rm 208}({\rm exp})=0.333 \pm 0.047 $~fm, where
the neutron scaling factor is $\alpha=1.038$. The theoretical error
is evaluated with the difference between the central values of D1S-GHFB+AMP and
D1M-GHFB+AMP. The value is 0.009~fm.
The result of D1S-GHFB+AMP yields better agreement with the data than that of
D1M-GHFB+AMP. We then obtain
$r_{\rm skin}^{208}({\rm exp})=0.324 \pm (0.047)_{\rm exp} \pm (0.009)_{\rm th}~{\rm fm}$.
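For reference, the weighted mean mentioned above is taken here to be of the standard inverse-variance form (an assumption on the weighting, stated for clarity): writing $r_{n,j} \pm \sigma_{j}$ for the neutron radius extracted at each of the three energies,
\begin{eqnarray}
\bar{r}_{n}=\frac{\sum_{j} r_{n,j}/\sigma_{j}^{2}}{\sum_{j} 1/\sigma_{j}^{2}},
\qquad
\sigma_{\bar{r}_{n}}=\Bigl(\sum_{j} 1/\sigma_{j}^{2}\Bigr)^{-1/2},
\nonumber
\end{eqnarray}
where the individual $\sigma_{j}$ follow from propagating the experimental errors on $\sigma_{\rm R}$ at each energy.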
{\bf Discussions:}
Finally, the uncertainties of our results are listed.
\begin{description}
\item[~~~~~~~~~1.~Ambiguity of the original densities]\mbox{}\\
As for proton and neutron densities for
$^{48}$Ca, we used D1S and D1M in Ref.~\cite{TAGAMI2022105155}.
Our result is $r_{\rm skin}^{48}({\rm exp})=0.158 \pm (0.023)_{\rm exp} \pm (0.012)_{\rm th}~{\rm fm}$; the theoretical error $(0.012)_{\rm th}~{\rm fm}$ is evaluated with D1S and D1M.
The same procedure is taken for $^{208}$Pb. Our result is
$r_{\rm skin}^{208}({\rm exp})=0.324 \pm (0.047)_{\rm exp} \pm (0.009)_{\rm th}~{\rm fm}$.
\item[~~~~~~~~~~2.~Experimental ambiguity]\mbox{}\\
Our present result $r_{\rm skin}^{48}=0.144 \pm 0.075$~fm based on $\Delta$
is consistent with $r_{\rm skin}^{48}({\rm exp})=0.158 \pm (0.023)_{\rm exp} \pm (0.012)_{\rm th}~{\rm fm}$
of Ref.~\cite{TAGAMI2022105155}.
The central values are different from each other. The difference comes from the data used.
\end{description}
{\bf Conclusion:}
Our final values are
$r_{\rm skin}^{208}({\rm exp})=0.324 \pm (0.047)_{\rm exp} \pm (0.009)_{\rm th}~{\rm fm}$ and $r_{\rm skin}^{48}=0.144 \pm 0.075$~fm.
Our values are consistent with $r_{\rm skin}^{208}({\rm PREX2}) $ and
$r_{\rm skin}^{48}({\rm CREX}) $, respectively.
These values are tabulated in Table \ref{skin values}.
\begin{table}[htb]
\begin{center}
\caption
{Results for $r_{\rm skin}^{208}({\rm exp})$ and $r_{\rm skin}^{48}({\rm exp}) $.
The values are shown in units of fm.
}
\begin{tabular}{cc}
\hline\hline
& $r_{\rm skin}^{208}({\rm exp})$ or $r_{\rm skin}^{48}({\rm exp})$\\
\hline
PREX2 & $0.283\pm 0.071$ \\
TW ($^{208}$Pb)& $0.324 \pm (0.047)_{\rm exp} \pm (0.009)_{\rm th} $ \\
CREX & $0.121 \pm 0.026{\rm (exp)} \pm 0.024{\rm (model)}$ \\
TW ($^{48}$Ca) & $0.144 \pm 0.075 $ \\
\hline
\end{tabular}
\label{skin values}
\end{center}
\end{table}
\begin{acknowledgments}
We would like to thank Dr. Toyokawa for his contribution.
\end{acknowledgments}
\label{sec:intro}
In the last decade, the use of renewable energy sources is soaring and
is creating new challenges in the field of microgrid control. These important
structural changes of the power grid call for novel approaches that
must appropriately take into account the stochastic nature of the energy
produced by renewables.
To this end, optimization-based control techniques are increasingly used.
However, they typically employ centralized approaches that require the
collection of the problem data at each node, which may lead to
a single point of failure. Distributed optimization approaches are a
promising alternative that allows for the solution of optimization
problems with spatially distributed data while preserving the locality
of the data and even resilience of the network in case of
failures~\cite{molzahn2017survey,nedic2018distributed,notarstefano2019distributed}.
We first review optimal control techniques, then we
recall approaches based on mixed-integer programming and finally
move to distributed approaches.
Optimal control techniques allow for shaping input trajectories
that take into account energy consumption/production costs and
user comfort. In recent times, they are increasingly achieved with
moving horizon techniques as Model Predictive Control (MPC)
as it flexibly allows one to tackle several challenges,
see e.g.~\cite{ouammi2015coordinated,cominesi2017two,le2017plug}.
Stochastic optimization-based approaches are also being developed.
In~\cite{zakariazadeh2014smart},
a stochastic optimization method for energy and reserve scheduling
with renewable energy sources and demand-side participation is considered.
The work~\cite{nguyen2015stochastic} studies a stochastic unit commitment
and economic dispatch problem with renewables and incorporating the
battery operating cost.
Another prominent approach is Mixed-Integer Linear
Programming (MILP), which is gathering significant attention
due to its ability to model logical statements that often occur
within microgrids.
In~\cite{kriett2012optimal}, a MILP optimal control approach for residential microgrids
is proposed.
In~\cite{marzband2014experimental} a mixed-integer nonlinear programming
formulation is considered with experimental validation for islanded-mode microgrids.
In~\cite{shirazi2017cost}, a MILP is formulated to achieve optimal load shifting in
microgrids.
The MPC and the MILP approaches have been combined in~\cite{parisio2014model},
which proposes a receding horizon implementation of the MILP approach on an
experimental testbed. A stochastic version of this work is considered in~\cite{parisio2016stochastic},
which further takes into account renewable energy sources and
aims at an environmental/economical operation of microgrids.
While these works take into account more and more aspects
of microgrids, they are all based on centralized optimization techniques
that require one of the nodes to be chosen as master, thus introducing
scalability and privacy issues.
As energy networks are intrinsically distributed, there is often the need to
devise distributed approaches %
that exploit the graph structure.
The work~\cite{molzahn2017survey} reviews distributed
methods for optimal power flow problems, while~\cite{dorfler2019distributed}
surveys distributed control approaches for autonomous
power grids.
In~\cite{bolognani2013distributed}, a distributed approach to optimal
reactive power compensation is proposed.
In~\cite{causevic2018energy,belluschi2020distributed}, the authors
propose distributed algorithms for optimal energy building management,
while~\cite{cavraro2020distributed} investigates a distributed feedback
control law to minimize power generation cost in prosumer-based
networks.
However, none of the mentioned works formulates a comprehensive
stochastic scheduling problem involving the demand-side in a distributed way.
Novel distributed methods relying on MILPs can take advantage of the
latest progress of distributed optimization methods.
MILPs are nonconvex and NP-hard, therefore large-scale instances
can be solved within acceptable time windows only suboptimally.
On this regard, the recent works~\cite{falsone2018distributed,camisa2021distributed}
propose distributed algorithms to compute feasible solutions of MILPs over networks.
The contributions of this paper are as follows. We consider a distributed stochastic microgrid control problem
consisting of several interconnected power units, namely generators, renewable
energy sources, storages and loads. We begin by recalling the microgrid model.
We then show that the optimal control problem
can be recast as a distributed MILP.
We then apply a two-stage stochastic programming approach
to the distributed MILP and show that also this problem can be
cast as a distributed MILP. This new problem is then tackled using
an approach inspired by recent approaches proposed in the literature, %
which are suitably modified to deal with the stochastic scenario.
The proposed algorithm provides a feasible solution to the two-stage
stochastic problem at each iteration, while preserving sensitive data at each
node. As the algorithm progresses, the cost of the provided
solution improves and the expected violation of the power balance
constraint decreases.
For the asymptotic solution provided by the algorithm, we formally
prove an upper bound on the violation of the power balance
constraint.
We then apply the developed approach to a simulation scenario with
a large number of devices.
We perform realistic simulations by using open-source historical data,
taken from the EU platform Open Power System Data~\cite{opsd2020timeseries},
on energy generation/consumption in South Italy. We train a
Generative Adversarial Network (GAN) based on these data and use
it to generate sample energy generation/consumption profiles.
The generated data is used to perform a Monte Carlo numerical
experiment on the Italian HPC CINECA infrastructure
to show the efficacy of the distributed algorithm.
The paper is organized as follows. In Section~\ref{sec:microgrid_deterministic}, we
describe the mixed-integer microgrid model and the stochastic
optimal control problem.
In Section~\ref{sec:distributed_stochastic}, we reformulate the problem as a
distributed MILP and apply the two-stage stochastic programming approach.
In Section~\ref{sec:algorithm}, we describe the proposed distributed algorithm
and provide theoretical results on the worst-case constraint violation,
while in Section~\ref{sec:simulations} we discuss Monte Carlo numerical simulations
on a practical scenario with a large number of devices and realistic synthesized data.
\section{Stochastic Mixed-Integer Microgrid Control with Renewables}
\label{sec:microgrid_deterministic}
\begin{table}[t]\centering
\caption{List of the main symbols and their definitions}
\label{tb:symbols}
{\rowcolors{2}{gray!20}{white}
\renewcommand{\arraystretch}{1.15}
\begin{tabular}{ll}
\rowcolor{gray!30}
\hline
\multicolumn{2}{c}{Basic definitions}
\\
\hline
$N\in\mathbb{N}$
& Number of units in the system
\\
$\mathbb{I} = \until{N}$
& Set of units
\\
$\varepsilon>0$
& Very small number (e.g. machine precision)
\\
\rowcolor{gray!30}
\hline
\multicolumn{2}{c}{Storages (indexed by $i \in \agents_\textsc{stor}$)}
\\
\hline
$x_i(k)$
& State of charge at time $k$
\\
$u_i(k)$
& Exchanged power ($\ge 0$ if charging) at time $k$
\\
$\delta_i(k)$
& Charging (1) / discharging (0) state
\\
$z_i(k)$
& Auxiliary optimization variable
\\
$\eta_i^c$, $\eta_i^d$
& Charging and discharging efficiencies
\\
$x^\textsc{min}_i$, $x^\textsc{max}_i$
& Minimum and maximum storage level
\\
$x_i^\textsc{pl}$
& Physiological loss of energy
\\
$C_i$
& Maximum output power
\\
$\zeta_i$
& Operation and maintenance cost coefficient
\\
\rowcolor{gray!30}
\hline
\multicolumn{2}{c}{Generators (indexed by $i \in \agents_\textsc{gen}$)}
\\
\hline
$u_i(k)$
& Generated power ($\ge 0$) at time $k$
\\
$\delta_i(k)$
& On (1) / off (0) state (``on'' iff $u_i(k) > 0$)
\\
$\nu_i(k)$
& Epigraph variable for quadratic generation cost
\\
$\theta_i^\textsc{u}(k)$, $\theta_i^\textsc{d}(k)$
& Epigraph variables for startup/shutdown costs
\\
$T_i^\textsc{up}$, $T_i^\textsc{down}$
& Minimum up/down time
\\
$u_i^\textsc{min}$, $u_i^\textsc{max}$
& Min. and max. power that can be generated
\\
$r_i^\textsc{max}$
& Maximum ramp-up/ramp-down
\\
$\kappa_i^\textsc{u}(k)$, $\kappa_i^\textsc{d}(k)$
& Startup and shutdown costs
\\
$\zeta_i$
& Operation and maintenance cost coefficient
\\
\rowcolor{gray!30}
\hline
\multicolumn{2}{c}{Renewable energy sources (indexed by $i \in \agents_\textsc{ren}$)}
\\
\hline
$P_i(k)$
& Generated power at time $k$
\\
\rowcolor{gray!30}
\hline
\multicolumn{2}{c}{Controllable loads (indexed by $i \in \agents_\textsc{cl}$)}
\\
\hline
$\beta_i(k)$
& Curtailment factor ($\in [\beta_i^\textsc{min}, \beta_i^\textsc{max}]$) at time $k$
\\
$D_i(k)$
& Consumption forecast at time $k$
\\
$\beta_i^\textsc{min}$, $\beta_i^\textsc{max}$
& Minimum and maximum allowed curtailment
\\
\rowcolor{gray!30}
\hline
\multicolumn{2}{c}{Connection to the main grid (indexed by $i \in \agents_\textsc{grid}$)}
\\
\hline
$u_i(k)$
& Imported power from the grid at time $k$
\\
$\delta_i(k)$
& Importing (1) or exporting (0) mode at time $k$
\\
$\phi_i(k)$
& Total expenditure for imported power at time $k$
\\
$\phi_i^\textsc{p}(k)$, $\phi_i^\textsc{s}(k)$
& Price for power purchase and sell at time $k$
\\
$P_i^\textsc{max}$
& Maximum exchangeable power
\\
\rowcolor{gray!30}
\hline
\multicolumn{2}{c}{Two-stage stochastic problem}
\\
\hline
$q_+, q_-$
& Costs associated to positive and negative recourse
\end{tabular}}
\end{table}
Let us begin by introducing the mixed-integer microgrid model.
For ease of exposition, we consider a fairly general model
inspired by the one in~\cite{parisio2016stochastic}
without taking into account some specific aspects
(see also Remark~\ref{rem:chp}).
This allows us to better highlight the main features of
the proposed approach while keeping the discussion not
too technical.
A microgrid consists of $N$ units, partitioned as follows.
Storages are collected in $\agents_\textsc{stor}$, generators in $\agents_\textsc{gen}$,
renewable energy sources in $\agents_\textsc{ren}$,
critical loads in $\agents_\textsc{lo}$, controllable loads in $\agents_\textsc{cl}$
and one connection with the utility grid in $\agents_\textsc{grid}$.
The whole set of units is
\begin{align*}
\mathbb{I} = \until{N} = \agents_\textsc{stor} \cup \agents_\textsc{gen} \cup \agents_\textsc{ren} \cup \agents_\textsc{lo} \cup \agents_\textsc{cl} \cup \agents_\textsc{grid}.
\end{align*}
Throughout the document, we interchangeably refer to the
units also as \emph{agents}.
In the next subsections we describe each type of units separately, while in
Section~\ref{sec:opt_control_pb} we will introduce the optimal control problem.
In the following, we denote the optimal control prediction horizon as $K \in \natural$, with time indices $k = 0, \ldots, K-1$.
\subsection{Storages}
For storage units $i \in \agents_\textsc{stor}$, let $x_i(k) \in {\mathbb{R}}$ be the stored energy level at time
$k$ and let $u_i(k) \in {\mathbb{R}}$ denote the power exchanged with the storage unit at time $k$
(positive for charging, negative for discharging).
The dynamics at each time $k$ amounts to $x_i(k+1) = x_i(k) + \eta_i u_i(k) - x_i^\textsc{pl}$,
where $\eta_i$ denotes the (dis)charging efficiency and $x_i^\textsc{pl}$ is a
physiological loss of energy. It is assumed that $\eta_i = \eta_i^c$ if
$u_i(k) \ge 0$ (charging mode), whereas $\eta_i = 1/\eta_i^d$ if $u_i(k) < 0$
(discharging mode), with $0 < \eta_i^c, \eta_i^d < 1$.
Thus, the dynamics is piece-wise linear. To deal with this,
we utilize mixed-integer
inequalities~\cite{bemporad1999control}. Let us
introduce additional variables $\delta_i(k) \in \{0,1\}$ and
$z_i(k) \triangleq \delta_i(k) u_i(k) \in {\mathbb{R}}$ for all $k$.
Each $\delta_i(k)$ is one if and only if $u_i(k) \ge 0$
(i.e. the storage unit at time $k$ is in the charging state).
After following the manipulations proposed in~\cite{parisio2014model},
we obtain the following model for the $i$-th storage unit,
\begin{subequations}
\begin{align}
&x_i(k+1) = x_i(k) + (\eta_i^c - \tfrac{1}{\eta_i^d}) z_i(k) + \tfrac{1}{\eta_i^d} u_i(k) - x_i^\textsc{pl},
\label{eq:storage_1}
\\
&E_i^1 \delta_i(k) + E_i^2 z_i(k) \le E_i^3 u_i(k) + E_i^4,
\label{eq:storage_2}
\\
&x_i^\textsc{min} \le x_i(k) \le x_i^\textsc{max},
\label{eq:storage_3}
\end{align}
for all time instants $k$, and %
\begin{align}
x_i(0) = x_{i,0},
\label{eq:storage_4}
\end{align}
\label{eq:storage}%
\end{subequations}
where~\eqref{eq:storage_1} is the dynamics,~\eqref{eq:storage_2} are
mixed-integer inequalities expressing the logical constraints,~\eqref{eq:storage_3}
are box constraints on the
state of charge (with $0 < x_i^\textsc{min} < x_i^\textsc{max}$),
and~\eqref{eq:storage_4} imposes the initial condition ($x_{i,0} \in {\mathbb{R}}$
is the initial state of charge of storage $i$). The matrices in~\eqref{eq:storage_2} are
\begin{align*}
E_i^1 \!=\!\! \begin{bmatrix}C_i \\ -(C_i \!+\! \varepsilon) \\ C_i \\ C_i \\ -C_i \\ -C_i\end{bmatrix}\!, \:\:
E_i^2 \!=\!\! \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \\ 1 \\ -1\end{bmatrix}\!, \:\:
E_i^3 \!=\!\! \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \\ 0 \\ 0\end{bmatrix}\!, \:\:
E_i^4 \!=\!\! \begin{bmatrix} C_i \\ -\varepsilon \\ C_i \\ C_i \\ 0 \\ 0\end{bmatrix}\!,
\end{align*}
where $C_i > 0$ is the limit output power and
$\varepsilon > 0$ is a very small number (typically machine precision).
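As a sanity check of the mixed-integer inequalities~\eqref{eq:storage_2}, the following minimal sketch (the value of $C_i$ and the test powers are illustrative) checks on representative charging and discharging powers that the only feasible completions are $\delta_i(k)=1$, $z_i(k)=u_i(k)$ when $u_i(k)\ge 0$, and $\delta_i(k)=0$, $z_i(k)=0$ when $u_i(k)<0$.
\begin{verbatim}
import numpy as np

C, eps = 100.0, 1e-9   # illustrative power limit and small epsilon
E1 = np.array([C, -(C + eps), C, C, -C, -C])
E2 = np.array([0.0, 0.0, 1.0, -1.0, 1.0, -1.0])
E3 = np.array([1.0, -1.0, 1.0, -1.0, 0.0, 0.0])
E4 = np.array([C, -eps, C, C, 0.0, 0.0])

def feasible(delta, z, u, tol=1e-12):
    # check E1*delta + E2*z <= E3*u + E4 componentwise
    return bool(np.all(E1 * delta + E2 * z <= E3 * u + E4 + tol))

u = 30.0    # charging: only delta = 1 with z = u is feasible
assert feasible(1, u, u) and not feasible(0, 0.0, u)
u = -20.0   # discharging: only delta = 0 with z = 0 is feasible
assert feasible(0, 0.0, u) and not feasible(1, u, u)
\end{verbatim}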
To each storage $i$ it is associated an operation and maintenance cost,
which is equal to
\begin{align}
J_i = \sum_{k=0}^{K-1} \zeta_i |u_i(k)|
= \sum_{k=0}^{K-1} \zeta_i (2 z_i(k) - u_i(k)),
\end{align}
where $\zeta_i > 0$ is the operation and maintenance cost per exchanged unit of power
and $2 z_i(k) - u_i(k) = |u_i(k)|$ is the absolute value of the power exchanged
with the storage (indeed, $z_i(k) = u_i(k)$ when charging and $z_i(k) = 0$ when discharging).
\subsection{Generators}
\begin{subequations}
\label{eq:generator}
For generators $i \in \agents_\textsc{gen}$, let $u_i(k) \in {\mathbb{R}}, u_i(k) \ge 0$ denote the
generated power at time $k$. Since generators can be either on or off, as
done for the storages we let $\delta_i(k) \in \{0,1\}$ be an
auxiliary variable that is equal to $1$ if and only if $u_i(k) > 0$.
As in the case of storages, we must consider constraints on the operating
conditions of generators. Namely, if a generator is turned on/off, there is
a minimum amount of time for which the unit must be kept on/off.
This logical constraint is modeled by the inequalities
\begin{align}
&\delta_i(k) - \delta_i(k-1) \le \delta_i(\tau),
\nonumber
\\
& \hspace{2cm}
\tau = k+1, \ldots, \min(k+T_i^\textsc{up} - 1, K-1),
\label{eq:generator_1}
\end{align}
\begin{align}
&\delta_i(k-1) - \delta_i(k) \le \delta_i(\tau),
\nonumber
\\
& \hspace{2cm}
\tau = k+1, \ldots, \min(k+T_i^\textsc{down} - 1, K-1),
\label{eq:generator_2}
\end{align}
for all time instants $k$, %
where $T_i^\textsc{up}$ and
$T_i^\textsc{down}$ are the minimum up and down time of
generator $i$.
The power flow limit and the ramp-up/ramp-down limits are modeled
respectively by
\begin{align}
u_i^\textsc{min} \delta_i(k) &\le u_i(k) \le u_i^\textsc{max} \delta_i(k),
\label{eq:generator_3}
\\
-r_i^\textsc{max} \delta_i(k) &\le u_i(k) - u_i(k-1) \le r_i^\textsc{max} \delta_i(k),
\label{eq:generator_4}
\end{align}
for all times $k$, %
where $u_i^\textsc{max} \ge u_i^\textsc{min} \ge 0$ denote
the maximum and minimum power that can be generated by generator $i$
and $r_i^\textsc{max} \ge 0$ denotes the maximum ramp-up/ramp-down.
The cost associated to generator units is composed of three parts,
which are \emph{(i)} a (quadratic) generation cost,
\emph{(ii)} a start-up/shut-down cost and
\emph{(iii)} an operation and maintenance cost.
To model the generation cost, %
we
consider a piece-wise linearized version
$\max\limits_{\ell} \big( S_i^\ell u_i(k) + s_i^\ell \big)$
for all $k$ with appropriately defined $S_i^\ell, s_i^\ell \in {\mathbb{R}}$.
The startup $\theta_i^\textsc{u}$ and shutdown cost $\theta_i^\textsc{d}$ at each time
$k \in \fromto{0}{K-1}$ are equal to
\begin{align*}
\theta_i^\textsc{u}(k) &= \max\Big\{ 0, \:\: \kappa_i^\textsc{u}(k)[\delta_i(k) - \delta_i(k-1)] \Big\},
\\
\theta_i^\textsc{d}(k) &= \max\Big\{ 0, \:\: \kappa_i^\textsc{d}(k)[\delta_i(k-1) - \delta_i(k)] \Big\},
\end{align*}
where $\kappa_i^\textsc{u}(k), \kappa_i^\textsc{d}(k) > 0$
are the start-up and shut-down costs at time $k$.
The operation and maintenance cost is equal to $\zeta_i \delta_i(k)$,
where $\zeta_i > 0$ is a cost coefficient (we assume that there is no cost
when the generator is turned off).
Thus, the expression for the cost of each generator $i$ is
\begin{align*}
J_i =
\!\sum_{k=0}^{K-1} \!\Big[
\max\limits_{\ell} \big( S_i^\ell u_i(k) + s_i^\ell \big) + \zeta_i \delta_i(k)
+ \theta_i^\textsc{u}(k) + \theta_i^\textsc{d}(k)
\Big].
\end{align*}
Note that the cost function has internal maximizations and,
as such, is nonlinear.
However, since the cost is to be minimized, it can be recast as a linear function
by introducing so-called epigraph variables (see e.g.~\cite{boyd2004convex})
as follows.
As regards the generation cost, we replace it
with epigraph variables $\nu_i(k) \in {\mathbb{R}}$ and impose the constraints
\begin{align}
\nu_i(k) \ge S_i^\ell u_i(k) + s_i^\ell, \hspace{1cm} \forall \: \ell,
\label{eq:generator_5}
\end{align}
for all times $k$. %
Similarly, we can treat $\theta_i^\textsc{u}, \theta_i^\textsc{d} \in {\mathbb{R}}$
as epigraph variables and write the constraints
\begin{align}
\theta_i^\textsc{u}(k) &\ge \kappa_i^\textsc{u}(k)[\delta_i(k) - \delta_i(k-1)],
\label{eq:generator_6}
\\
\theta_i^\textsc{d}(k) &\ge \kappa_i^\textsc{d}(k)[\delta_i(k-1) - \delta_i(k)],
\label{eq:generator_7}
\\
\theta_i^\textsc{u}(k) &\ge 0,
\label{eq:generator_8}
\\
\theta_i^\textsc{d}(k) &\ge 0,
\label{eq:generator_9}
\end{align}\end{subequations}
for all $k$. We therefore obtain the following expression
for the cost function of generator $i$,
\begin{align}
J_i = \sum_{k=0}^{K-1} \big[ \nu_i(k) + \theta_i^\textsc{u}(k) + \theta_i^\textsc{d}(k) + \zeta_i \delta_i(k) \big].
\end{align}
\subsection{Renewable Energy Sources}
We consider two types of renewables, namely wind generators
and solar generators.
Rather than using a physical or dynamical model for these generators,
we use a predictor to generate realistic power production scenarios.
Indeed, thanks to the huge amount of historical datasets freely available
on the internet, neural network-based predictors have excellent
accuracy. More details are in Section~\ref{sec:gan}.
We will employ this technique also to generate power demand predictions.%
These units only contribute to the power balance
constraint~\eqref{eq:power_balance} through their
generated power at each time slot $k$, denoted as $P_i(k) \ge 0$,
and do not have associated cost or constraints.
Note that $P_i(k)$ are unknown beforehand and must be modeled as
stochastic variables having a certain probability distribution. We
discuss this aspect more in details in Section~\ref{sec:distributed_stochastic}.
\subsection{Loads}
We consider two types of loads, namely critical loads and controllable loads.
For critical loads $i \in \agents_\textsc{lo}$, we will denote by $D_i(k)$ the consumption
forecast at time $k$ and we assume it is given. There are no
optimization variables (and thus cost functions) associated with this kind
of units, however their consumption must be considered in the power balance
(cf. Section~\ref{sec:opt_control_pb}). %
For controllable loads $i \in \agents_\textsc{cl}$, let $D_i(k)$ be the consumption
forecast at time $k$, which is assumed to be given.
In case the microgrid has energy shortages, the consumption
of controllable loads can be curtailed to meet power balance
constraints. This is quantified with a curtailment factor
$\beta_i(k) \in [\beta_i^\textsc{min},\beta_i^\textsc{max}]$,
where $0 \le \beta_i^\textsc{min} \le \beta_i^\textsc{max} \le 1$ are
the bounds on the allowed curtailment.
The actual power consumption at time $k$
is thus $(1 - \beta_i(k)) D_i(k)$, i.e. if $\beta_i(k) = 0$ there is
no curtailment. The curtailment factor is an optimization variable
and can be freely chosen, thus in principle it can be $\beta_i(k) >0$
for some $k$ (even if there are no energy shortages) if this results in
a cost improvement. The following constraint must be imposed, %
\begin{align}
\beta_i^\textsc{min} \le \beta_i(k) \le \beta_i^\textsc{max},
\label{eq:load}
\end{align}
for all times $k$.
We assume the microgrid incurs in a cost that is proportional to the
total curtailed power, thus the cost function associated to controllable load $i$ is
\begin{align}
J_i = \sum_{k=0}^{K-1} \varphi_i D_i(k) \beta_i(k),
\end{align}
where $\varphi_i > 0$ is a penalty weight.
\subsection{Connection to the Utility Grid}
For the connection with the utility grid $i \in \agents_\textsc{grid}$, let $u_i(k) \in {\mathbb{R}}$
denote the imported (exported) power level from (to) the
utility grid. We use the convention that imported power at time $k$
is non-negative $u_i(k) \ge 0$.
As before, since the power purchase price is different from the power sell price,
we consider auxiliary optimization variables $\delta_i(k) \in \{0,1\}$ and $\phi_i(k) \in {\mathbb{R}}$ for all $k$.
The variable $\delta_i(k)$ models the logical statement $\delta_i(k) = 1$ if and only
if $u_i(k) \ge 0$ (i.e. power is imported from the utility grid).
The variable $\phi_i(k)$ represents the total expenditure (retribution)
for imported (exported) energy.
Denoting by $\phi_i^\textsc{p}(k), \phi_i^\textsc{s}(k) \ge 0$ the price for power
purchase and sell, it holds $\phi_i(k) = \phi_i^\textsc{p}(k) u_i(k)$ if
$\delta_i(k) = 1$ and $\phi_i(k) = \phi_i^\textsc{s}(k) u_i(k)$ if $\delta_i(k) = 0$.
By denoting by $P_i^\textsc{max} \ge 0$ the maximum exchangeable power,
the corresponding mixed-integer inequalities are (cf.~\cite{parisio2014model}),
\begin{align}
E_i^1 \delta_i(k) + E_i^2 \phi_i(k) \le E_i^3(k) u_i(k) + E_i^4,
\label{eq:grid}
\end{align}
for all $k$,
where the matrices are defined as
\begin{align*}%
E_i^1 \!=\!\!\! \begin{bmatrix} P_i^\textsc{max} \\ \!-\!P_i^\textsc{max} \!\!-\! \varepsilon\!\! \\ M_i \\ M_i \\ -M_i \\ -M_i \end{bmatrix}\!\!,
E_i^2 \!=\!\!\! \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \\ 1 \\ -1 \end{bmatrix}\!\!,
E_i^3(k) \!=\!\!\! \begin{bmatrix} 1 \\ -1 \\ \phi^\textsc{p}(k)\! \\ \!-\phi^\textsc{p}(k)\!\! \\ \phi^\textsc{s}(k)\! \\ \!-\phi^\textsc{s}(k)\!\! \end{bmatrix}\!\!,
E_i^4 \!=\!\!\! \begin{bmatrix} \!P_i^\textsc{max}\!\! \\ -\varepsilon \\ M_i \\ M_i \\ 0 \\ 0\end{bmatrix}\!\!,
\end{align*}
with $M_i = P_i^\textsc{max} \cdot \max\limits_k(\phi^\textsc{p}(k), \phi^\textsc{s}(k))$.
Clearly, the cost associated with this unit is
\begin{align}
J_i = \sum_{k=0}^{K-1} \phi_i(k).
\end{align}
\subsection{Power Balance Constraint and Optimal Control Problem}
\label{sec:opt_control_pb}
Electrical balance must be met at each time $k$, i.e.,
\begin{align}
&u_{\agents_\textsc{grid}}(k)
=
\sum_{i \in \agents_\textsc{stor}}\! u_i(k) -\! \sum_{i \in \agents_\textsc{gen}} u_i(k) +\! \sum_{i \in \agents_\textsc{cl}} (1 - \beta_i(k)) D_i(k)
\nonumber
\\
&\hspace{1.6cm}
+ \sum_{i \in \agents_\textsc{lo}} D_i(k) - \sum_{i \in \agents_\textsc{ren}}\! P_i(k).
\label{eq:power_balance}
\end{align}
Recall that the length of the prediction horizon is $K \in \natural$.
The optimal control problem, which is a MILP, can be posed as
\begin{align}
\min_{u} \: & \sum_{k=0}^{K-1} \!\bigg[ \phi_\textsc{grid}(k) \!
+\!\!\! \sum_{i \in \agents_\textsc{gen}}\! (\zeta_i \delta_i(k) \!+\! \nu_i(k) \!+\! \theta_i^\textsc{u}(k) \!+\! \theta_i^\textsc{d}(k))
\nonumber
\\
& \hspace{0.3cm}
+\sum_{i \in \agents_\textsc{cl}} \varphi_i D_i(k) \beta_i(k) +\!\!\sum_{i \in \agents_\textsc{stor}} \!\!\big(\zeta_i (2 z_i(k) - u_i(k)) \big)\!\bigg]
\nonumber
\\
\textnormal{subj. to} \: & \: \text{storage constraints } \eqref{eq:storage}
\label{eq:microgrid_mpc_problem}
\\
& \: \text{generator constraints } \eqref{eq:generator}
\nonumber
\\
& \: \text{constraints } \eqref{eq:load}, \eqref{eq:grid}, \eqref{eq:power_balance}.
\nonumber
\end{align}
Note that problem~\eqref{eq:microgrid_mpc_problem} is a stochastic
optimization problem. Indeed, the equality constraint~\eqref{eq:power_balance}
is stochastic since it depends on $P_i(k)$. Next we show how to handle this
level of complexity.
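To make the structure of problem~\eqref{eq:microgrid_mpc_problem} concrete before introducing the stochastic treatment, the following minimal sketch solves a drastically simplified, deterministic instance with a single on/off generator and power purchased from the utility grid, where the renewable generation has been folded into a nominal net demand profile. All parameter values are illustrative, and the availability of the PuLP package with its bundled CBC solver is assumed.
\begin{verbatim}
import pulp

K = 3
D_net = [40.0, 55.0, 30.0]            # net demand = load - renewable forecast
u_min, u_max = 10.0, 50.0             # generator limits when turned on
c_gen, c_fix, c_grid = 0.5, 2.0, 0.9  # generation, fixed-on and purchase costs

prob = pulp.LpProblem("toy_microgrid", pulp.LpMinimize)
u = [pulp.LpVariable(f"u_gen_{k}", lowBound=0.0, upBound=u_max) for k in range(K)]
d = [pulp.LpVariable(f"delta_{k}", cat="Binary") for k in range(K)]
g = [pulp.LpVariable(f"u_grid_{k}", lowBound=0.0) for k in range(K)]

prob += pulp.lpSum(c_gen * u[k] + c_fix * d[k] + c_grid * g[k] for k in range(K))
for k in range(K):
    prob += u[k] >= u_min * d[k]       # minimum power when on
    prob += u[k] <= u_max * d[k]       # zero power when off
    prob += u[k] + g[k] == D_net[k]    # power balance at time k

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.value() for v in u], [v.value() for v in d], pulp.value(prob.objective))
\end{verbatim}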
\begin{remark}
\label{rem:chp}
Note that the microgrid model can also be extended to additionally
consider thermal loads and Combined Heat and Power (CHP) units,
which would additionally require
a thermal balance constraint. The architecture proposed in the
following can be easily adapted to deal with this scenario by making
only minor changes. However, in order not to complicate too much
the notation, we prefer not to introduce this further level of complexity,
which nevertheless can be handled by the proposed framework.
\oprocend
\end{remark}
\section{Distributed Constraint-coupled Stochastic Optimization}
\label{sec:distributed_stochastic}
To handle the stochastic quantities $P_i(k)$ we follow the ideas
of~\cite{parisio2016stochastic} and utilize a two-stage stochastic
optimization approach.
As we are interested in a distributed algorithm, instead of
applying the two-stage stochastic approach directly to problem~\eqref{eq:microgrid_mpc_problem},
we rather apply it to a distributed reformulation of problem~\eqref{eq:microgrid_mpc_problem}.
In this section, we first introduce the distributed reformulation of the
problem and then formalize the two-stage stochastic optimization approach.
\subsection{Constraint-coupled Reformulation}
\label{sec:constraint_coupled_formulation}
The optimal control problem~\eqref{eq:microgrid_mpc_problem} can be
reformulated in such a way that the distributed structure of the problem
becomes more evident. Formally, problem~\eqref{eq:microgrid_mpc_problem}
is equivalent to the stochastic \emph{constraint-coupled} MILP,
\begin{align}
\begin{split}
\min_{x_1,\ldots,x_N} \: & \: \sum_{i =1}^N c_i^\top x_i
\\
\textnormal{subj. to} \:
& \: \sum_{i=1}^N A_i x_i = b
\\
& \: x_i \in \Xi_i, \hspace{1cm} i \in \until{N},
\end{split}
\label{eq:MILP}
\end{align}
where, for all $i \in \until{N}$, the decision vector $x_i$ has $n_i = p_i + q_i$
components (thus $c_i \in {\mathbb{R}}^{n_i}$) with $p_i, q_i \in \natural$
and the local constraint set is of the form
\begin{align*}
\Xi_i = P_i \cap ({\mathbb{Z}}^{p_i} \times {\mathbb{R}}^{q_i}),
\end{align*}
for some nonempty compact polyhedron $P_i \subset {\mathbb{R}}^{p_i + q_i}$.
Moreover, the matrices $A_i \in {\mathbb{R}}^{K \times n_i}$
and the vector $b \in {\mathbb{R}}^K$ model coupling constraints among the variables.
The term ``constraint-coupled'' that we associate to problem~\eqref{eq:MILP}
is due to the fact that the constraints $\sum_{i=1}^N A_i x_i = b$ create
a link among all the variables $x_1, \ldots, x_N$, which otherwise could be
optimized independently from each other.
To achieve the mentioned reformulation, we now specify the quantities $x_i$,
$c_i$, $\Xi$, $A_i$ for each type of device, and the right-hand side vector $b$.
\emph{Storages}.
We assume that each storage unit $i \in \agents_\textsc{stor}$ is responsible for the
optimization vector $x_i$ consisting of the stack of $x_i(k), u_i(k), z_i(k) \in {\mathbb{R}}$
and $\delta_i(k) \in \{0, 1\}$ for all $k \in \fromto{0}{K-1}$ plus the variable
$x_i(K) \in {\mathbb{R}}$. The constraints in $\Xi_i$ are given by~\eqref{eq:storage},
while the cost function is
$c_i^\top x_i = \sum_{k=0}^{K-1} \zeta_i (2 z_i(k) - u_i(k))$.
\emph{Generators}.
Each generator $i \in \agents_\textsc{gen}$ is responsible for the optimization vector
$x_i$ consisting of the stack of
$u_i(k), \nu_i(k), \theta_i^\textsc{u}(k), \theta_i^\textsc{d}(k) \in {\mathbb{R}}$
and $\delta_i(k) \in \{0, 1\}$ for all $k \in \fromto{0}{K-1}$.
The constraints in $\Xi_i$ are given by~\eqref{eq:generator_1}--\eqref{eq:generator_9},
while the cost function is
$c_i^\top x_i = \sum_{k=0}^{K-1} \big( \zeta_i \delta_i(k) + \nu_i(k) +
\theta_i^\textsc{u}(k) + \theta_i^\textsc{d}(k) \big)$.
\emph{Critical loads}.
For the critical loads $i \in \agents_\textsc{lo}$ there are no variables to optimize,
but they must be taken into account in the coupling constraints.
\emph{Controllable loads}.
For each controllable load $i \in \agents_\textsc{cl}$ the optimization vector
$x_i$ consists of the stack of $\beta_i(k) \in {\mathbb{R}}$,
for all $k \in \fromto{0}{K-1}$, with constraints given by~\eqref{eq:load}.
Note that, for this class of devices, the local
constraint set is not mixed-integer. The cost function is
$c_i^\top x_i = \sum_{k=0}^{K-1} \varphi_i D_i(k) \beta_i(k)$.
\emph{Connection to the utility grid}.
For this device $i \in \agents_\textsc{grid}$, the optimization vector $x_i$ consists
of the stack of $u_i(k), \phi_i(k) \in {\mathbb{R}}$ and $\delta_i(k) \in \{0, 1\}$
for all $k \in \fromto{0}{K-1}$.
The local constraints are~\eqref{eq:grid}, while the cost function is
$c_i^\top x_i = \sum_{k=0}^{K-1} \phi_i(k)$.
\emph{Coupling constraints}.
Finally, the coupling constraints are given by~\eqref{eq:power_balance},
which can be encoded in the form $\sum_{i=1}^N A_i x_i = b$ by
appropriately defining the matrices $A_i$ and the vector $b$.
\begin{subequations}
In particular, the matrices $A_i \in {\mathbb{R}}^{K \times n_i}$ are such that
\begin{align}
[A_i x_i]_k &= u_i(k), &&\forall i \in \agents_\textsc{stor},
\\
[A_i x_i]_k &= -u_i(k), &&\forall i \in \agents_\textsc{gen},
\\
[A_i x_i]_k &= -\beta_i(k) D_i(k), &&\forall i \in \agents_\textsc{cl},
\\
[A_i x_i]_k &= -u_i(k), &&\forall i \in \agents_\textsc{grid},
\end{align}
for all times $k$, while the right-hand side vector $b \in {\mathbb{R}}^K$ is equal to
\begin{align}
b = - \sum_{i \in \agents_\textsc{cl}} D_i - \sum_{i \in \agents_\textsc{lo}} D_i + \sum_{i \in \agents_\textsc{ren}} P_i,
\label{eq:h_def}
\end{align}%
\end{subequations}
where $D_i \in {\mathbb{R}}^K$ and $P_i \in {\mathbb{R}}^K$ denote the
stacks of $D_i(k)$ and $P_i(k)$ for all times $k$.
Note that the power generated by the renewables introduces a stochasticity
in the right-hand side vector $b$ appearing in problem~\eqref{eq:MILP}.
In the considered distributed context,
we assume that each agent $i$ does not have access to the entire problem data.
In particular, we assume it only knows the local cost vector $c_i$, the local
constraint set $\Xi$ and its own matrix $A_i$ of the coupling constraints.
The exchange of information among $N$ agents occurs according to a
graph-based
communication model. We use $\mathcal{G} = (V, \mathcal{E})$ to indicate the undirected,
connected graph describing the network, where $V = \until{N}$ is the set of vertices
and $\mathcal{E}$ is the set of edges.
If $(i,j)\in \mathcal{E}$, then agent $i$ can communicate with agent $j$ and vice versa.
We use $\mathcal{N}_i$ to indicate the set of neighbors of agent $i$ in $\mathcal{G}$, i.e.,
$\mathcal{N}_i = \{ j \in V \mid (i,j) \in \mathcal{E} \}$.
\subsection{Two-stage Stochastic Optimization Approach}
In its current form, problem~\eqref{eq:MILP} cannot be practically solved due
to the right-hand side vector $b$ being unknown. To deal with this,
the approach consists of considering a set of possible \emph{scenarios} that may arise
and then formulating and solving a so-called \emph{two-stage}
stochastic optimization problem, which we now introduce.
Intuitively, in this uncertain setting one has to ``a priori'' (i.e. without knowing the actual value of the random
vector $b$) choose a set of
control actions $u_i(k)$, such as generated/stored power or power curtailments,
in order to minimize a certain cost criterion in an expected sense. However, these
control actions will inevitably result in a violation of the power
balance constraint~\eqref{eq:power_balance}
``a posteriori'' (i.e. when the actual power production
of renewables, and hence value of the random vector $b$,
becomes known). To compensate for this infeasibility,
\emph{recourse} actions must be taken. These actions are associated to a
cost and will have an impact on the final performance achieved
by the whole control scheme.
In the jargon of two-stage stochastic optimization, the first-stage
optimization variables are those associated to the control actions
(i.e. $x_1, \ldots, x_N$ in problem~\eqref{eq:MILP}), while
the second-stage optimization variables (to be introduced shortly)
are those associated to recourses.
Formally, we denote by $\omega$ the random
vector collecting all the renewable energy generation profiles.
We assume a finite discrete probability distribution for
$\omega$ and we denote by $\pi_r$ the probability of each realization $\omega_r$,
i.e. $\pi_r = \mathbb{P} (\omega = \omega_r)$ for all $r \in \until{R}$.
To keep the notation consistent we denote the renewable energy profile
corresponding to $\omega_r$ as $P_{ir}(k)$.
We denote by $b_r$ the realization of $b$ associated to the scenario $\omega_r$.
With these definitions in place, the two-stage stochastic MILP can be
formulated as
\begin{align}
\begin{split}
\min_{\substack{x_1,\ldots,x_N \\ \eta_+, \eta_-}} \:
& \: \sum_{i =1}^N c_i^\top x_i + \sum_{k=0}^{K-1} \sum_{r=1}^R \pi_r (q_+ \eta_{kr+} + q_- \eta_{kr-})
\\
\textnormal{subj. to} \:
& \: -\eta_{r-} \le \sum_{i=1}^N A_i x_i - b_r \le \eta_{r+}, \hspace{0.5cm} r = \interv{R}
\\
& \: \eta_{+}, \eta_{-} \ge 0,
\\
& \: x_i \in \Xi, \hspace{1.5cm} i = \interv{N},
\end{split}
\label{eq:two_stage_MILP}
\end{align}
where $x_1, \ldots, x_N$ are the first-stage variables modeling the (a-priori) control actions
and $\eta_+, \eta_-$ are the second-stage variables modeling the (a-posteriori) recourse actions,
which are penalized in the cost through $q_+ \ge 0$ and $q_- \ge 0$, the unit costs related
to energy surplus and shortage, respectively.
In problem~\eqref{eq:two_stage_MILP}, we denoted by $\eta_{kr+}$ the variable associated with positive
recourse for scenario $r$ at time $k$.
We also use the symbol $\eta_{r+}$ to denote
the stack of $\eta_{kr+}$ for all $k$.
The stack of $\eta_{kr+}$ for all $k$ and $r$ is denoted
by $\eta_{+}$. A similar notation holds for $\eta_-$.
It can be seen that the additional term
in the cost is the expected value of the cost associated to recourse actions, i.e.
\begin{align*}
&\sum_{k=0}^{K-1} \!\sum_{r=1}^R \!\pi_r (q_+ \eta_{kr+} \!+\! q_- \eta_{kr-})
\!=\!\! \sum_{k=0}^{K-1} \!{\mathbb{E}} \!\bigg[ \!\Phi \bigg( \!\bigg[\!\sum_{i=1}^N A_{i} x_i - b \bigg]_k \bigg) \!\bigg]\!,
\end{align*}
where $\Phi(z) = q_+ z$ if $z \ge 0$ and $\Phi(z) = - q_- z$ if $z < 0$.
At first glance, it may seem that the two-stage problem~\eqref{eq:two_stage_MILP}
loses the constraint-coupled structure of the distributed optimization problem~\eqref{eq:MILP}.
However, with a bit of manipulation, it is still possible to arrive at a similar structure.
We begin by streamlining the notation. Define $\eta \in {\mathbb{R}}^{2KR}$, $\eta \ge 0$ as the
stack of $\eta_{+}$ and $\eta_{-}$, and the vector $d \in {\mathbb{R}}^{2KR}$
such that $d^\top \eta = \sum_{k=0}^{K-1} \sum_{r=1}^R \pi_r (q_+ \eta_{kr+} + q_- \eta_{kr-})$.
Moreover, define $H_i \in {\mathbb{R}}^{2KR \times n_i}$ and $h \in {\mathbb{R}}^{2KR}$ with
\begin{align*}
H_i &= \mathds{1} \otimes \begin{bmatrix} A_i^\top \!&\! -A_i^\top \end{bmatrix}^\top
\!= \begin{bmatrix}
A_i^\top \!&\!
-A_i^\top \!&\!
\cdots
\!&\!
A_i^\top \!&\!
-A_i^\top
\end{bmatrix}^\top\!,
\\
h &= \begin{bmatrix}
b_{1}^\top &
-b_{1}^\top &
\cdots
&
b_{R}^\top &
-b_{R}^\top
\end{bmatrix}^\top,
\end{align*}
where $\mathds{1} \in {\mathbb{R}}^R$ is the vector of ones and $\otimes$ denotes the Kronecker product.
Thus, problem~\eqref{eq:two_stage_MILP} is equivalent to
\begin{align}
\begin{split}
\min_{\substack{x_1,\ldots,x_N \\ \eta}} \:
& \: \sum_{i =1}^N c_i^\top x_i + d^\top \eta
\\
\textnormal{subj. to} \:
& \: \sum_{i=1}^N H_i x_i - h \le \eta
\\
& \: \eta \ge 0, \:\:
x_i \in \Xi, \hspace{1cm} i = \interv{N}.
\end{split}
\label{eq:two_stage_MILP_stream}
\end{align}
By defining $\eta_1, \ldots, \eta_N \in {\mathbb{R}}^{2RK}$
such that $\sum_{i=1}^N \eta_i = \eta$ and each $\eta_i \ge 0$,
we see that problem~\eqref{eq:two_stage_MILP_stream}
is finally equivalent to
\begin{align}
\begin{split}
\min_{\substack{x_1,\ldots,x_N \\ \eta_1, \ldots, \eta_N}} \:
& \: \sum_{i =1}^N (c_i^\top x_i + d^\top \eta_i)
\\
\textnormal{subj. to} \:
& \: \sum_{i=1}^N (H_i x_i - \eta_i) \le h,
\\
& \: \eta_i \ge 0, \:\: x_i \in \Xi, \hspace{1cm} i = \interv{N},
\end{split}
\label{eq:two_stage_MILP_distr}
\end{align}
in the sense that any solution of~\eqref{eq:two_stage_MILP_stream}
can be reconstructed from a solution of~\eqref{eq:two_stage_MILP_distr} by using
$\eta = \sum_{i=1}^N \eta_i$.
Note that problem~\eqref{eq:two_stage_MILP_distr} has an unbounded feasible
set (because of the variables $\eta_i$) but it always admits an optimal solution
due to the terms $d^\top \eta_i$ minimized in the cost
(recall that $d \ge 0$).
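For concreteness, the construction of $H_i$ and $h$ can be sketched in a few lines of Python/NumPy as follows; the scenario ordering inside $h$ follows the definition given above, while the function and argument names are illustrative.
\begin{verbatim}
import numpy as np

def build_Hi_h(A_i, b_scenarios):
    # A_i:         (K, n_i) coupling matrix of agent i
    # b_scenarios: list of R vectors b_1, ..., b_R, each of shape (K,)
    block = np.vstack([A_i, -A_i])          # [A_i; -A_i], shape (2K, n_i)
    R = len(b_scenarios)
    H_i = np.kron(np.ones((R, 1)), block)   # ones(R) (x) [A_i; -A_i], shape (2KR, n_i)
    h = np.concatenate([np.concatenate([b_r, -b_r]) for b_r in b_scenarios])
    return H_i, h
\end{verbatim}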
\section{Distributed Algorithm and Analysis}
\label{sec:algorithm}
We now propose a distributed algorithm to compute a
feasible solution to problem~\eqref{eq:two_stage_MILP_distr}
and provide the convergence results.
\subsection{Distributed Algorithm Description}
Let us begin by describing the proposed distributed algorithm to solve
problem~\eqref{eq:two_stage_MILP_distr}.
The basic idea behind the distributed algorithm is
to compute a mixed-integer solution starting from an optimal
solution of the convex relaxation of problem~\eqref{eq:two_stage_MILP_stream}
obtained by replacing each local constraint set $\Xi$ with its convex hull $\conv{\Xi}$,
\begin{align}
\begin{split}
\min_{\substack{z_1,\ldots,z_N \\ \eta_1, \ldots, \eta_N}} \:
& \: \sum_{i =1}^N (c_i^\top z_i + d^\top \eta_i)
\\
\textnormal{subj. to} \:
& \: \sum_{i=1}^N (H_i z_i - \eta_i) \le h,
\\
& \: \eta_i \ge 0, \:\: z_i \in \conv{\Xi}, \hspace{0.5cm} i = \interv{N},
\end{split}
\label{eq:two_stage_MILP_stream_approx}
\end{align}
where we denote by $z_i$ the continuous counterpart of the mixed-integer variable $x_i$.
To do so, each agent $i$ maintains an auxiliary variable
$y_i^t \in {\mathbb{R}}^{2RK}$, which represents a local \emph{allocation}
of the coupling constraints (cf. Appendix~\ref{sec:primal_decomp}).
At each iteration $t$, the vector $y_i^t$
is updated according to~\eqref{eq:alg_z_LP}--\eqref{eq:alg_y_update}.
After $T_f > 0$ iterations, the agent computes
a tentative mixed-integer solution based
on the last computed allocation estimate (cf.~\eqref{eq:alg_x_MILP}).
Algorithm~\ref{alg:sg_algorithm} summarizes the steps from the
perspective of agent $i$.
\begin{algorithm}[htbp]
\floatname{algorithm}{Algorithm}
\begin{algorithmic}[0]
\Statex \textbf{Initialization}:
set $T_f > 0$ and
$y_{i}^0$ such that $\sum_{i=1}^N y_i^0 = h$
\smallskip
\Statex \textbf{Repeat} for $t = 0, 1, \ldots, T_f-1$ %
\smallskip
\StatexIndent[0.75]
\textbf{Compute} $\mu_i^t$ as a Lagrange multiplier of %
%
\begin{align}
\label{eq:alg_z_LP}
\begin{split}
\min_{z_{i}, \eta_i } \hspace{1.2cm} &\: c_i^\top z_{i} + d^\top \eta_i
\\
\textnormal{subj. to} \hspace{0.3cm}
\: \mu_i : \: & \: H_i z_i \leq y_i^t + \eta_i
\\
& \: \eta_i \ge 0, \:\: z_i \in \conv{\Xi}
\end{split}
\end{align}
\StatexIndent[0.75]
\textbf{Receive} $\mu_{j}^t$ from $j\in\mathcal{N}_i$ and update %
%
\begin{align}
\label{eq:alg_y_update}
\begin{split}
y_{i}^{t+1} = y_{i}^t + \alpha^t \sum_{j \in \mathcal{N}_i} \big( \mu_{i}^t - \mu_{j}^t \big)
\end{split}
\end{align}
\Statex
\textbf{Return} $(x_i^{T_f}, \eta_i^{T_f})$ as optimal solution of
%
\begin{align}
\label{eq:alg_x_MILP}
\begin{split}
\min_{x_{i}, \eta_i } \: &\: c_i^\top x_{i} + d^\top \eta_i
\\
\textnormal{subj. to} \: & \: H_i x_i \leq y_i^{T_f} + \eta_i
\\
& \: \eta_i \ge 0, \:\: x_i \in \Xi
\end{split}
\end{align}
\end{algorithmic}
\caption{\sgalgname/}
\label{alg:sg_algorithm}
\end{algorithm}
Let us briefly comment on the algorithm structure.
As will become clear from the analysis,
the first two steps~\eqref{eq:alg_z_LP}--\eqref{eq:alg_y_update}
are used to compute an optimal solution of problem~\eqref{eq:two_stage_MILP_stream_approx},
while the last step~\eqref{eq:alg_x_MILP} reconstructs a mixed-integer solution.
Note that problem~\eqref{eq:alg_z_LP} is an LP and
problem~\eqref{eq:alg_x_MILP} is a MILP. From a computational
point of view, in order to compute a Lagrange multiplier of
problem~\eqref{eq:alg_z_LP} the agent can locally run either a dual
subgradient method or a dual cutting-plane method (cf.~\cite{camisa2021distributed}),
while an optimal solution to problem~\eqref{eq:alg_x_MILP} can be
found with any MILP solver.
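For concreteness, the two local steps~\eqref{eq:alg_z_LP}--\eqref{eq:alg_y_update} can be prototyped, e.g., with the \textsc{cvxpy} modeling package as in the sketch below. The sketch assumes that an inequality description of $\conv{\Xi}$ is available (the matrices \texttt{A\_poly}, \texttt{b\_poly}); obtaining such a description is, in general, the delicate part of the relaxation, and all names are illustrative.
\begin{verbatim}
import cvxpy as cp

def local_allocation_step(c_i, d, H_i, y_i, A_poly, b_poly):
    # Local LP step: solve the relaxed problem and return a Lagrange
    # multiplier of the allocation constraint H_i z_i <= y_i + eta_i.
    z = cp.Variable(H_i.shape[1])
    eta = cp.Variable(H_i.shape[0], nonneg=True)
    alloc = [H_i @ z <= y_i + eta]
    local = [A_poly @ z <= b_poly]   # inequality description of conv(Xi)
    cp.Problem(cp.Minimize(c_i @ z + d @ eta), alloc + local).solve()
    return alloc[0].dual_value       # multiplier mu_i

def allocation_update(y_i, mu_i, mu_neighbors, alpha_t):
    # Allocation update step: move y_i along the multiplier disagreements.
    return y_i + alpha_t * sum(mu_i - mu_j for mu_j in mu_neighbors)
\end{verbatim}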
In the next subsection, we will prove a bound on the
worst-case violation of the power balance constraints.
\begin{remark}
An important fact is that the computed mixed-integer solution
always satisfies the coupling constraint appearing in
problem~\eqref{eq:two_stage_MILP_stream}
with a possibly large $\eta_i$, i.e.
%
\begin{align*}
\sum_{i=1}^N (H_i x_i^{T_f} - \eta_i^{T_f})
\le
\sum_{i=1}^N y_i^{T_f}
=
h,
\end{align*}
%
where the inequality follows by construction and the equality
follows by the forthcoming Lemma~\ref{lemma:DPD_convergence_sg}.
%
Thus, the algorithm can be stopped at any iteration $T_f \ge 0$ and the
resulting solution will be feasible for the two-stage
MILP~\eqref{eq:two_stage_MILP_stream}.
The greater the number of iterations, the better the quality
of the computed solution and the lower the expected violation
of the original power balance constraint.
\oprocend
\end{remark}
\subsection{Theoretical Results}
In this subsection, we provide theoretical results on Algorithm~\ref{alg:sg_algorithm}.
In particular, we will prove a bound for the
worst-case violation of the asymptotically computed mixed-integer solution.
To begin with, we recall some preliminary lemmas (we remind that $K$
denotes the prediction horizon and $R$ is the total number of scenarios in
the stochastic problem).
\begin{lemma}[\cite{camisa2021distributed}]
\label{lemma:LP_integer_components_sg}
Let problem~\eqref{eq:two_stage_MILP_stream_approx} be feasible and let
$(\bar{z}_1, \ldots, \bar{z}_N, \bar{\eta}_1, \ldots, \bar{\eta}_N)$
be any vertex of its feasible set.
%
Then, there exists an index set $I_{\mathbb{Z}} \subseteq \until{N}$,
with cardinality $|I_{\mathbb{Z}}| \ge N-2RK$,
such that $\bar{z}_i \in \Xi$ for all $i \in I_{\mathbb{Z}}$.
\oprocend
\end{lemma}
The consequence of Lemma~\ref{lemma:LP_integer_components_sg} is that
at least $N-2RK$ blocks of the mixed-integer solution computed
asymptotically by Algorithm~\ref{alg:sg_algorithm} are equal to the corresponding
blocks of an optimal solution of~\eqref{eq:two_stage_MILP_stream_approx}.
Next we recall convergence of the steps~\eqref{eq:alg_z_LP}--\eqref{eq:alg_y_update}.
To this end, we denote as $(z_1^\textsc{lp}, \ldots, z_N^\textsc{lp}, \eta_1^\textsc{lp}, \ldots, \eta_N^\textsc{lp})$
an optimal solution of problem~\eqref{eq:two_stage_MILP_stream_approx}, together with
the allocation vector $(y_1^\textsc{lp}, \ldots, y_N^\textsc{lp})$ associated to the primal decomposition
master problem (cf. Appendix~\ref{sec:primal_decomp}),
which is a vector satisfying
\begin{subequations}
\begin{align}
H_i z_i^\textsc{lp} - \eta_i^\textsc{lp} &\le y_i^\textsc{lp}, \hspace{0.3cm} \text{ for all } i \in \until{N},
\\
\text{and} \hspace{0.3cm}
\sum_{i=1}^N y_i^\textsc{lp} &= h.
\end{align}\end{subequations}
The following assumption is made on the step-size sequence.
\begin{assumption}
\label{ass:step-size_sg}
The step-size sequence $\{ \alpha^t \}_{t\ge0}$, with each $\alpha^t \ge 0$,
satisfies $\sum_{t=0}^{\infty} \alpha^t = \infty$,
$\sum_{t=0}^{\infty} \big( \alpha^t \big)^2 < \infty$.
\oprocend
\end{assumption}
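A standard choice satisfying Assumption~\ref{ass:step-size_sg} is, for instance, the diminishing step size
\begin{align*}
\alpha^t = \frac{\alpha_0}{1+t}, \qquad \alpha_0 > 0,
\end{align*}
for which $\sum_{t=0}^{\infty} \alpha^t = \infty$ while $\sum_{t=0}^{\infty} \big(\alpha^t\big)^2 < \infty$.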
The following lemma summarizes the convergence
properties of the steps~\eqref{eq:alg_z_LP}--\eqref{eq:alg_y_update}.
\begin{lemma}[\cite{camisa2021distributed}]
\label{lemma:DPD_convergence_sg}
Let problem~\eqref{eq:two_stage_MILP_stream_approx} be feasible and let
Assumption~\ref{ass:step-size_sg} hold. Consider the allocation vector sequence
$\{ y_1^t,\ldots,y_N^t \}_{t\ge 0}$ generated by
steps~\eqref{eq:alg_z_LP}--\eqref{eq:alg_y_update}
of Algorithm~\ref{alg:sg_algorithm}
with the allocation vectors $y_i^0$ initialized such that
$\sum_{i=1}^N y_i^0 = h$.
%
Then,
%
\begin{enumerate}
\item $\sum_{i=1}^N y_i^t = h$ for all $t \ge 0$;
\item $\lim_{t \to \infty} \| y_i^t - y_i^\textsc{lp} \| = 0$ for all
$i \in \until{N}$.
\oprocend
\end{enumerate}
\end{lemma}
Because of Lemma~\ref{lemma:DPD_convergence_sg}, from now
on we concentrate on the asymptotic mixed-integer solution computed
by Algorithm~\ref{alg:sg_algorithm}. In particular, we denote by
$(\bx^\infty_i, \eta^\infty_i)$ the optimal solution of problem~\eqref{eq:alg_x_MILP}
with allocation equal to $y_i^\textsc{lp}$, i.e.
\begin{align}
\label{eq:alg_x_MILP_asymptotic}
\begin{split}
\min_{x_{i}, \eta_i } \: &\: c_i^\top x_{i} + d^\top \eta_i
\\
\textnormal{subj. to} \: & \: H_i x_i \leq y_i^\textsc{lp} + \eta_i
\\
& \: \eta_i \ge 0, \:\: x_i \in \Xi.
\end{split}
\end{align}
We also define the lower bound of resources $\ell_i \in {\mathbb{R}}^{2RK}$
\begin{align*}
\ell_i \triangleq \min_{x_i, \eta_i} \: & \: H_i x_i - \eta_i
\\
\textnormal{subj. to} \: & \: x_i \in \conv{\Xi}
\\
& \: 0 \le \eta_i \le M \mathds{1},
\end{align*}
where the $\min$ is intended component-wise and $M>0$ is a sufficiently large number.
Thus, it holds $\ell_i \le y_i$ for all
admissible allocations $y_i$, and in particular $\ell_i \le y_i^\textsc{lp}$.
In practice, since the constraints on $x_i$
and $\eta_i$ are decoupled, the vector $\ell_i$ can be computed by replacing
$x_i \in \conv{\Xi}$ with $x_i \in \Xi$.
In the next theorem we formalize the bound on the worst-case violation.
\begin{theorem}
\label{thm:worst_case_viol}
Let problem~\eqref{eq:two_stage_MILP_stream_approx} be feasible
and consider the asymptotic mixed-integer solution $(\bx^\infty_i, \eta^\infty_i)$
computed by each agent $i \in \until{N}$.
%
Then, the worst-case violation of the power balance constraint is
%
\begin{align*}
\sum_{i=1}^N H_i \bx^\infty_i - h
\le
\sum_{i \in I_{\mathbb{Z}}} \eta_i^\textsc{lp} + \sum_{i \notin I_{\mathbb{Z}}} \frac{c_i^\top (\bx^{\textsc{L}}_i - \bx^\infty_i) + d^\top \eta^{\textsc{L}}_i}{d^\textsc{min}} \mathds{1},
\end{align*}
%
where $d^\textsc{min} = \min_{j \in \until{2RK}} d_j$, $I_{\mathbb{Z}}$ denotes
the set of agents (satisfying $|I_{\mathbb{Z}}| \ge N-2RK$) for which $z_i^\textsc{lp} \in \Xi$
and $(\bx^{\textsc{L}}_i, \eta^{\textsc{L}}_i)$ is an optimal solution of problem~\eqref{eq:proof_worst_case_xil}.
\oprocend
\end{theorem}
The proof is provided in Appendix~\ref{sec:proof_theorem}.
Note that, since this bound is the sum of contributions of
the agents, it can be computed a posteriori in a distributed way
using a consensus scheme. To do so, the agents first need to detect
whether they belong to $I_{\mathbb{Z}}$ or not by computing the primal
solution $z_i^\textsc{lp}$ of~\eqref{eq:alg_z_LP} and by checking
whether it satisfies $z_i^\textsc{lp} \in \Xi$. Then, they run the
consensus scheme using as initial condition either $N \eta_i^\textsc{lp}$
(if $z_i^\textsc{lp} \in \Xi$) or $N\frac{c_i^\top (\bx^{\textsc{L}}_i - \bx^\infty_i) + d^\top \eta^{\textsc{L}}_i}{d^\textsc{min}} \mathds{1}$
(if $z_i^\textsc{lp} \notin \Xi$).
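As an illustration, the following minimal Python sketch implements the plain average-consensus iterations over the communication graph $\mathcal{G}$ that can be used for this purpose; the step size \texttt{eps} (which must be small enough with respect to the node degrees) and the number of iterations are illustrative choices.
\begin{verbatim}
import numpy as np

def average_consensus(initial_values, neighbors, eps, num_iters):
    # initial_values: list of N vectors; agent i starts from N times its own
    #                 contribution, so the consensus value equals their sum.
    # neighbors:      neighbors[i] is the list of neighbors of agent i.
    x = [np.array(v, dtype=float) for v in initial_values]
    for _ in range(num_iters):
        x = [x[i] + eps * sum(x[j] - x[i] for j in neighbors[i])
             for i in range(len(x))]
    return x  # each x[i] approaches the average of the initial values
\end{verbatim}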
\section{Numerical Experiments}
\label{sec:simulations}
In this section, we validate the proposed framework through large-scale
numerical computations. All the simulations are implemented with the \textsc{disropt}
package~\cite{farina2019disropt} and run on the Italian
HPC CINECA infrastructure. In order to make the simulations
realistic, we run Algorithm~\ref{alg:sg_algorithm} on a generated problem
with data synthesized using a deep Generative Adversarial Network
(GAN)~\cite{goodfellow2014generative}. In the next subsections, we
first provide details regarding the scenario generation for renewable
energy sources, then we show aggregate results on Monte Carlo simulations
and finally we show in more detail one specific simulation.
\subsection{Scenario Generation with Generative Adversarial Networks}
\label{sec:gan}
Recall that $b \in {\mathbb{R}}^{K}$ is a random variable that
depends on the total energy produced by the renewables~\eqref{eq:h_def}.
The variable $b$ has its own probability distribution and
$b_1, \ldots, b_R \in {\mathbb{R}}^K$ are
randomly drawn samples (cf.~\eqref{eq:two_stage_MILP}).
In order to generate such samples, we utilize a Generative Adversarial Network
trained with an open historical dataset from the EU. To train the neural network, we
used the data series provided by Open Power System Data \cite{opsd2020timeseries}.
In particular, we used the generation data of renewable energy sources
in South Italy. To guarantee a certain uniformity of the data, we narrowed the
dataset by concentrating only on summer months
and discarded days with missing information.
Each sample is a vector in ${\mathbb{R}}^{24}$ and contains information on
the power produced during a day with an hourly resolution.
As for the utilized neural networks, the generative networks have
a $10$-dimensional input with the following layers:
\begin{itemize}
\item a dense layer with $1536$ units, batch normalization and Leaky ReLU activation function;
\item a layer that reshapes the input to the shape $(6, 256)$;
\item a transposed convolution layer with $128$ output filters, kernel size equal to $5$, stride $1$,
batch normalization and Leaky ReLU activation function;
\item a transposed convolution layer with $64$ output filters, kernel size equal to $5$, stride $2$,
batch normalization and Leaky ReLU activation function;
\item a transposed convolution layer with $1$ output filter, kernel size equal to $5$, stride $2$
and $\tanh$ activation function.
\end{itemize}
The output of the generative network is a $24$-dimensional vector containing
the power produced by the renewable unit at each time slot of the day.
The discriminator networks have a $24$-dimensional input with the following
layers:
\begin{itemize}
\item a convolution layer with $64$ output filters, kernel size equal to $5$, stride $2$ and
Leaky ReLU activation function;
\item a Dropout layer with rate $0.3$;
\item a convolution layer with $128$ output filters, kernel size equal to $5$, stride $2$ and
Leaky ReLU activation function;
\item a Dropout layer with rate $0.3$;
\item a layer that flattens the input;
\item a dense layer with one output unit.
\end{itemize}
The output of the discriminator networks is a scalar that represents the
probability that the evaluated input is real rather than generated.
We used the neural networks to generate samples of solar energy and
wind energy. We used \textsc{Tensorflow} 2.4 to model the networks
and we performed the training with $10^4$ epochs using the ADAM
algorithm.
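For reference, a possible \textsc{Tensorflow}/Keras translation of the generator described above is reported below; the padding modes and the final flattening are implementation details that are not fixed by the layer list and should be read as assumptions (the discriminator can be written analogously).
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def make_generator():
    return tf.keras.Sequential([
        layers.Dense(1536, input_shape=(10,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((6, 256)),
        layers.Conv1DTranspose(128, kernel_size=5, strides=1, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv1DTranspose(64, kernel_size=5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv1DTranspose(1, kernel_size=5, strides=2, padding="same",
                               activation="tanh"),
        layers.Flatten(),  # 24-dimensional daily generation profile
    ])
\end{verbatim}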
In Figure~\ref{fig:gan}, we show example profiles of
solar and wind energy generated by the networks.
It can be noted that the generated trajectories of solar energy production have a
maximum at midday, while one of the trajectories has lower values than the
others and may be associated, for instance, with a cloudy day. In any case,
the power generated outside the 5am--8pm time window is close to zero,
consistent with real profiles.
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{gan_solar}
\hfill
\includegraphics[scale=1]{gan_wind}
\caption{
Five examples of power generation profiles generated by the GANs.
Left: solar energy, right: wind energy.
}
\label{fig:gan}
\end{figure}
\subsection{Monte Carlo Simulations}
To test the proposed framework, we performed $100$
Monte Carlo simulations in which we run
Algorithm~\ref{alg:sg_algorithm} on different realizations
of the energy generation scenarios (i.e., different realizations of $b$).
We considered a microgrid control problem with the following units: $20$ generators,
$20$ storages, $60$ controllable loads, $20$ critical loads, $40$ solar generators,
$15$ wind generators and the connection to the main grid.
For each instance of the problem, we extracted $R = 5$ scenarios and fixed a
$24$-hour prediction horizon and $1$-hour sampling time.
The initial conditions of storages and generators are generated randomly.
As regards the load profiles and the daily spot prices,
we utilized the data provided by~\cite{opsd2020timeseries}, which
are shown in Figure~\ref{fig:stochastic_daily_spot_prices}.
We then executed Algorithm~\ref{alg:sg_algorithm} for $500$ iterations with
a piece-wise constant step size that we initialize to $3.0$ and multiply by
$0.5$ every $100$ iterations.
The results of the simulations are shown in Figures~\ref{fig:montecarlo_cost}
and~\ref{fig:montecarlo_coupling}. In Figure~\ref{fig:montecarlo_cost},
we plot the cost of the mixed-integer solution computed by the algorithm
throughout its evolution (in particular the cost function of the two-stage
problem~\eqref{eq:two_stage_MILP}). The picture highlights how
the algorithm improves the cost at each iteration, i.e., the more iterations
are performed, the better the solution performance.
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{fig_cost}
%
\caption{
Evolution of the cost yielded by Algorithm~\ref{alg:sg_algorithm}. The solid
line is the mean value over the Monte Carlo trials, while the shaded area represents
one standard deviation.
}
\label{fig:montecarlo_cost}
\end{figure}
In Figure~\ref{fig:montecarlo_coupling}, we show the value of the coupling
constraints for the two-stage problem~\eqref{eq:two_stage_MILP}.
The red and green lines correspond to the maximum value of $\eta_+$
and $\eta_-$ (with changed sign) respectively, where the maximum
is taken with respect to the scenarios, to the components
of the constraint and to the Monte Carlo trials.
The blue line represents the average value of the
power balance constraint, while the shaded area corresponds to one
standard deviation over the Monte Carlo trials.
At each time step, the power balance constraint always lies
between the upper and lower lines, while the uncertainty range
shrinks as the algorithm progresses.
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{fig_coupling}
%
\caption{
Evolution of the coupling constraint value throughout the
evolution of Algorithm~\ref{alg:sg_algorithm}. The blue line represents
the average value of the power balance constraint (the shaded area
corresponds to one standard deviation). The upper and lower lines
are the maximum positive and negative two-stage violations
of the constraints.
}
\label{fig:montecarlo_coupling}
\end{figure}
\subsection{Results on a Single Instance}
\label{sec:simulation_stochastic}
To conclude this section, we show how Algorithm~\ref{alg:sg_algorithm}
behaves on a single instance of the Monte Carlo trials.
In Figure~\ref{fig:stochastic_consumed_curtailed_power}, we show the
total consumed power and the total curtailed power.
In Figure~\ref{fig:stochastic_storages}, we show the total power exchanged
with storage units (a positive value means that, overall, the storage units are
charging) and the global level of stored power.
The solution provided by the algorithm is such that storages
accumulate as much energy as they can during the peaks of power produced
by the renewables. This energy is then released during the subsequent hours
of the day.
In Figure~\ref{fig:stochastic_grid}, we show the total power exchanged
with the utility grid (a positive value means that power is purchased
from the grid).
Note that, during the peak of power produced by the renewables,
the microgrid exports energy to the main grid
in order to maximize the income.
In Figure~\ref{fig:stochastic_power_fraction}, we show where the
total available power comes from. In particular, we highlight the
fraction of power coming from generators, renewables and the utility grid.
In this simulation, the generators did not produce any energy.
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{stochastic_microgrid_spot_prices}
%
\caption{
Daily spot prices from Open Power System Data \cite{opsd2020timeseries}.
}
\label{fig:stochastic_daily_spot_prices}
\end{figure}
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{stochastic_microgrid_consumed_power}
\hfill
\includegraphics[scale=1]{stochastic_microgrid_curtailed_power}
\caption{
Total consumed power (critical and controllable loads) and curtailed power (for controllable loads only).
}
\label{fig:stochastic_consumed_curtailed_power}
\end{figure}
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{stochastic_microgrid_storage_exchange}
\hfill
\includegraphics[scale=1]{stochastic_microgrid_storage_level}
\caption{
Total average power exchanged by storage units (left) and level of total stored power (right).
}
\label{fig:stochastic_storages}
\end{figure}
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{stochastic_microgrid_grid}
%
\caption{
Total power exchanged with the utility grid.
}
\label{fig:stochastic_grid}
\end{figure}
\begin{figure}[htbp]\centering
\includegraphics[scale=1]{stochastic_microgrid_power_fractions}
%
\caption{
Fraction of consumed power coming from generators, renewables and utility grid at each time slot.
}
\label{fig:stochastic_power_fraction}
\end{figure}
\section{Conclusions}
In this paper, we considered a microgrid control problem
to be solved over a peer-to-peer network of agents.
Each agent represents a unit of the microgrid and must cooperate
with the other units in order to solve the problem
without a centralized coordinator. We used a challenging stochastic
mixed-integer microgrid model and proposed a distributed
algorithm to solve the problem, for which we provided theoretical
guarantees on the constraint violation. Numerical computations
on a synthesized problem using Generative Adversarial Networks
show the validity of the proposed approach.
\section{Extended empirical evaluation}
\label{app:experiments}
In our exploration of the MLI property, we performed many additional experiments. Generally, we found that turning common knobs of neural network training did not have a significant impact on whether networks satisfied the MLI property. For example, varying activation functions, loss functions, batch size, regularization, and different forms of initialization had no significant effect on the MLI property. In this section, we present a few of the more interesting additional experiments that we performed.
\subsection{Relationship between the MLI property and generalization}
\label{app:experiments_gen}
We are interested in whether the success/failure of the MLI property impacts the generalization ability of a neural network. To better understand this, we examined the test accuracy of the models we trained and studied its correlation with the MLI property on the MNIST and CIFAR-10 datasets. Figure~\ref{fig:mlp_gen} shows the relationship between the test accuracy and $\min \Delta$. Note that we considered fully connected networks for the MNIST dataset and VGG architectures for CIFAR-10. For the MNIST experiments, configurations that violated the MLI property had an average test accuracy of $96.94 (\pm 0.015)$ and those that satisfied the MLI property had an average test accuracy of $97.14 (\pm 0.018)$. Similarly, for the CIFAR-10 experiments, configurations that violated the MLI property had an average test accuracy of $75.83 (\pm 0.084)$ and those that satisfied the MLI property had an average test accuracy of $76.99 (\pm 0.082)$. Overall, we did not identify a clear pattern between the MLI property and the generalization ability of the network. At the very least, we observe that models violating the MLI property can achieve competitive test accuracy.
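For reference, the quantities used throughout this appendix can be computed with a short PyTorch-style sketch like the one below. The routine \texttt{min\_delta} assumes that the minimal $\Delta$ of Definition~\ref{def:delta_mono} is the largest increase of the loss above its running minimum along the path (the ``maximum bump height''); function names are illustrative.
\begin{verbatim}
import copy
import torch

def interpolation_losses(model, theta_0, theta_1, loss_fn, loader, num_alphas=50):
    # Training loss along theta(alpha) = (1 - alpha) * theta_0 + alpha * theta_1.
    losses = []
    for alpha in torch.linspace(0.0, 1.0, num_alphas):
        interp = copy.deepcopy(model)
        with torch.no_grad():
            for p, p0, p1 in zip(interp.parameters(), theta_0, theta_1):
                p.copy_((1.0 - alpha) * p0 + alpha * p1)
            total, count = 0.0, 0
            for x, y in loader:
                total += loss_fn(interp(x), y).item() * x.shape[0]
                count += x.shape[0]
        losses.append(total / count)
    return losses

def min_delta(losses):
    # Largest rise of the loss above its running minimum (0 if monotone).
    best, running_min = 0.0, float("inf")
    for l in losses:
        running_min = min(running_min, l)
        best = max(best, l - running_min)
    return best
\end{verbatim}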
\begin{figure}[!h]
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/mnist_opt/mlp_generalization.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/mnist_opt/cnn_generalization.pdf}
\end{minipage}
\caption{Relationship between test accuracy and non-monotonicity in MNIST (left) and CIFAR10 (right) datasets. \textbf{\textcolor{blue}{Blue}} points represent networks where the MLI property holds and \textbf{\textcolor{orange}{orange}} points are networks where the MLI property fails.}
\label{fig:mlp_gen}
\end{figure}
\subsection{Impact of large learning rate}
\label{app:experiments_lr}
In Table~\ref{tab:cifar_lr}, we show the proportion of ResNets trained on CIFAR-10 and CIFAR-100 that violated the MLI property. The experimental set-up matches that used to produce Tables~\ref{tab:cifar10_resnets} and~\ref{tab:cifar100_resnets}. There is a general trend towards higher learning rates encouraging non-monotonicity, though the correlation is weaker than for the MNIST/Fashion-MNIST classifiers.
\begin{table*}[]
\centering
\small
\begin{tabular}{|l|r|l l l l l l l |}\hline
& LR: & 0.0003 & 0.001 & 0.003 & 0.01 & 0.03 & 0.1 & 0.3\\ \hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{SGD}} & BN & 0.00 (3) & 0.25 (4) & 0.38 (13) & 0.12 (17) & 0.06 (17) & 0.24 (17) & 0.35 (17)\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& No BN & - & 0.00 (7) & 0.00 (14) & 0.24 (17) & 0.00 (17) & 0.25 (16) & 0.00 (15)\rule[-1.2ex]{0pt}{0pt}\\ \hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{Adam}} & BN & 0.38 (16) & 0.18 (17) & 0.35 (17) & 0.29 (17) & 0.67 (9) & 1.00 (6) & 1.00 (1)\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& No BN & 0.00 (13) & 0.00 (17) & 0.00 (17) & 0.62 (8) & 0.00 (5) & 0.00 (11) & 0.00 (6)\rule[-1.2ex]{0pt}{0pt}\\ \hline
\end{tabular}
\caption{Proportion of trained CIFAR-10 and CIFAR-100 classifiers (achieving better than 1.0/2.0 training loss respectively) that had non-monotonic interpolations from initialization to final solution. The total number of runs in each bin is displayed in parentheses next to the proportion. A dashed line indicates that no networks achieved the threshold loss.}
\label{tab:cifar_lr}
\end{table*}
We provide additional results for large learning rates in Figure~\ref{fig:mnist_delta_heatmaps}, where we evaluate the effect of changing network depth and width over varying learning rates. Full details are given in Appendix~\ref{app:mnist_additional}.
\subsection{Impact of optimization algorithm}
\label{app:experiments_opt}
\begin{figure*}[!t]
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_opt/mlp_sgd.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_opt/mlp_rms.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_opt/mlp_adam.pdf}
\end{minipage}
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_opt/mlp_kfac.pdf}
\end{minipage}
\vspace{-0.1cm}
\caption{Training loss over the linear interpolation connecting initial and final parameters. Each curve represents a network trained on MNIST \& Fashion-MNIST with different optimization algorithms. The MLI property generally holds for networks trained with SGD, but often fails for networks trained with RMSProp, Adam, and K-FAC.}
\label{fig:mlp_vary_opt}
\end{figure*}
In Figure~\ref{fig:mlp_vary_opt}, we show the training loss over the line connecting the initial and final parameters. We found that adaptive optimizers such as RMSProp and Adam consistently find final solutions that violate the MLI property. To better understand this feature, we compare the distance travelled for all optimization methods. In Figure~\ref{fig:mlp_opt} (left), we show the distance travelled in weight space when trained with SGD and Adam for the MNIST \& Fashion-MNIST classification tasks. When trained with Adam, the optimizer moved further away from the initialization --- confirming the results of~\citet{amari2020does}. On the other hand, models trained with SGD often travelled less. Moreover, non-monotonic configurations occurred more frequently for networks that travelled far from initialization, suggesting that the non-monotonicity of adaptive optimizers may arise because they encourage parameters to travel far from initialization.
We also investigated the relationship between non-monotonicity and Gauss length over varying optimizers. In Figure~\ref{fig:mlp_opt} (right), we show the Gauss length for networks trained using SGD and the Adam optimizer. When trained with Adam, on average the interpolation paths have a larger Gauss length and lead to more failures of the MLI property.
Table~\ref{tab:cifar100_resnets} contains our evaluation of the MLI property for ResNets trained with different architectures, optimizers, and initialization schemes (as in Table~\ref{tab:cifar10_resnets} for CIFAR-10 in the main paper). The general trends observed align with those observed on CIFAR-10 in the main paper.
\begin{table}[]
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{|c|c|c|c|c|c|}\hline
& & BN & BN-I & NBN-I & NBN-F\\\hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{SGD}} & \% Non-monotonic & 0.45 (20) & 0.00 (16) & 0.00 (18) & 0.28 (18)\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& $\min \Delta$ & 0.055 & 0 & 0 & 0.082\rule[-1.2ex]{0pt}{0pt}\\ \hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{Adam}} & \% Non-monotonic & 0.62 (16) & 0.00 (15) & 0.00 (15) & 0.00 (19)\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& $\min \Delta$ & 0.487 & 0 & 0 & 0\rule[-1.2ex]{0pt}{0pt}\\ \hline
\end{tabular}
\end{adjustbox}
\caption{Evaluation of the effect of batch normalization, initialization, and choice of optimizer for residual networks trained on CIFAR-100 (achieving a training loss below 2.0). A full explanation of the table is given in the main text, Section~\ref{sec:exp:how_persistent:bn}.}
\label{tab:cifar100_resnets}
\end{table}
\begin{figure}[!h]
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/mnist_opt/mono_vs_dis_adam.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figures/mnist_opt/mono_vs_gl_adam.pdf}
\end{minipage}
\caption{For each MNIST \& Fashion-MNIST classifier, we compute the minimum $\Delta$ such that the interpolated loss is $\Delta$-monotonic. We plot models trained with SGD and Adam in the top and bottom rows respectively. On the left, we compare the distance moved in the weight space. On the right, we compare the Gauss length of the interpolated network outputs. \textbf{\textcolor{blue}{Blue}} points represent networks where the MLI property holds and \textbf{\textcolor{orange}{orange}} points are networks where the MLI property fails.}
\label{fig:mlp_opt}
\end{figure}
\subsection{Optimizer Ablations}
\label{app:experiments_opt_abl}
To get a better understanding of the influence of different optimization algorithms and the effects of moving in weight and function space, we conduct detailed experiments where we switch the optimizer during training. We use an architecture with 2 hidden layers of 1024 units on MNIST. The architecture has tanh activations and no batch normalization. We report the mean and standard error across five random seeds.
In the (SGD $\rightarrow$ Adam) experiments, we train for the first $t \in \{2, 10, 50\}$ epochs with SGD, using learning rates in the set $\{0.001, 0.003, 0.01, 0.03, 0.1\}$, and finish the training with Adam (with LR $0.001$) for $200 - t$ epochs. While using just SGD leads to a monotonic interpolation, switching to Adam made all runs non-monotonic. These results are reported in Table~\ref{tab:switch_sgd_adam}. Similarly, in Table~\ref{tab:switch_adam_sgd}, we report results for (Adam $\rightarrow$ SGD), where we switch to SGD with a learning rate of 0.03.
As in \citet{amari2020does}, we again find that Adam leads to much greater distance moved in weight space. Perhaps more surprisingly, the distance moved in weight space and the average Gauss length are largely consistent across different choices of the learning rate for SGD and the epoch at which we switch to Adam. In Table~\ref{tab:switch_adam_sgd}, we see that switching from Adam to SGD reduces both the average Gauss length and the distance travelled.
\begin{table}
\centering
\begin{tabular}{ccccc}
\hline
SGD LR \textbackslash{} switch\_epoch & 2 & 10 & 50 & None \\
\hline
0.001 & 7.0822 ± 0.0898 & 7.3366 ± 0.0034 & 6.9005 ± 0.0861 & 0.5341 ± 0.0069 \\
0.003 & 7.2208 ± 0.1517 & 7.0211 ± 0.0816 & 6.8331 ± 0.016 & 0.5773 ± 0.0069 \\
0.01 & 7.1144 ± 0.0773 & 7.2666 ± 0.0943 & 7.2307 ± 0.1776 & 0.6939 ± 0.0096 \\
0.03 & 7.3067 ± 0.026 & 7.3285 ± 0.0703 & 6.9953 ± 0.1248 & 1.1351 ± 0.0522 \\
0.1 & 7.3783 ± 0.1223 & 7.4592 ± 0.0714 & 6.8579 ± 0.1251 & 2.9144 ± 0.0824 \\
\hline
Distance & 2 & 10 & 50 & None \\
\hline
0.001 & 345.209 ± 2.917 & 340.998 ± 1.385 & 303.907 ± 0.809 & 8.096 ± 0.012 \\
0.003 & 346.826 ± 0.962 & 339.54 ± 0.39 & 303.721 ± 0.314 & 9.369 ± 0.015 \\
0.01 & 349.413 ± 0.856 & 339.056 ± 0.502 & 302.552 ± 0.368 & 10.893 ± 0.021 \\
0.03 & 345.592 ± 0.828 & 339.428 ± 0.608 & 304.109 ± 1.031 & 14.177 ± 0.041 \\
0.1 & 349.016 ± 1.009 & 350.103 ± 0.475 & 326.38 ± 0.628 & 108.508 ± 2.497 \\
\hline
\end{tabular}
\caption{Average Gauss length (top) and distance traveled (bottom) for given SGD learning rate and switching to Adam with learning rate 1e-3 during training.}
\label{tab:switch_sgd_adam}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccccc}
\hline
Adam LR \textbackslash{} switch\_epoch & 2 & 10 & 50 & None \\
\hline
0.001 & 1.447 ± 0.036 & 2.349 ± 0.024 & 4.472 ± 0.061 & 7.135 ± 0.048 \\
0.003 & 3.766 ± 0.052 & 7.325 ± 0.071 & 9.365 ± 0.115 & 9.933 ± 0.097 \\
0.01 & 6.156 ± 0.266 & 7.391 ± 0.423 & 9.873 ± 0.427 & 10.313 ± 0.231 \\
\hline
Distance & 2 & 10 & 50 & None \\
\hline
0.001 & 22.257 ± 0.113 & 59.986 ± 0.089 & 174.958 ± 0.061 & 350.174 ± 0.704 \\
0.003 & 58.773 ± 0.565 & 196.794 ± 0.773 & 420.652 ± 1.321 & 659.236 ± 2.785 \\
0.01 & 185.892 ± 2.26 & 384.983 ± 13.863 & 726.063 ± 8.888 & 1220.432 ± 6.893 \\
\hline
\end{tabular}
\caption{Average Gauss length (top) and distance traveled (bottom) for given Adam learning rate and switching to SGD with learning rate 0.03 during training.}
\label{tab:switch_adam_sgd}
\end{table}
To investigate further why Adam is responsible for breaking monotonicity, we follow the ``grafting'' experiment described in \cite{agarwal2020disentangling}, where two optimizers are combined by using the step magnitude from the first and the step direction from the second. Results where we use the SGD step magnitude (which varies with LR) and the Adam direction are shown in Table~\ref{tab:grafting}. All the runs are monotonic, so the direction chosen by Adam is not the primary influence on the optimization trajectory. In contrast, when we use the SGD step direction and the Adam magnitude, we observe all runs to be non-monotonic and find the average distance traveled to be 381.65, suggesting that the magnitude of the updates is responsible for breaking monotonicity.
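A minimal sketch of a grafted update is given below; whether the step norms are taken per parameter tensor (as here) or globally is an assumption, and \texttt{sgd\_steps}/\texttt{adam\_steps} denote the (hypothetical) updates that each optimizer would apply at the current iterate.
\begin{verbatim}
import torch

def grafted_step(params, sgd_steps, adam_steps):
    # Magnitude of the SGD step, direction of the Adam step (per tensor).
    with torch.no_grad():
        for p, s_sgd, s_adam in zip(params, sgd_steps, adam_steps):
            direction = s_adam / (s_adam.norm() + 1e-12)
            p.add_(s_sgd.norm() * direction)
\end{verbatim}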
\begin{table}
\centering
\begin{tabular}{rll}
\hline
lr \textbackslash{} Optimizer & SGD & Adam \\
\hline
0.001 & 9.198 ± 0.016 & 8.096 ± 0.012 \\
0.003 & 10.932 ± 0.015 & 9.369 ± 0.015 \\
0.01 & 12.978 ± 0.013 & 10.893 ± 0.021 \\
0.03 & 16.619 ± 0.037 & 14.177 ± 0.041 \\
\hline
\end{tabular}
\caption{Average distance traveled where we use the SGD step magnitude and step direction given by SGD or Adam, respectively}
\label{tab:grafting}
\end{table}
\subsection{Additional weight distance experiments}
\label{app:experiments_weight_dis}
In this section, we investigate the relationship between normalized weight distance and non-monotonicity on image reconstruction (MNIST) and image classification (CIFAR-10 \& CIFAR-100) tasks.
\begin{figure*}[!ht]
\begin{minipage}{0.49\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/mnist_ae/dist_vs_delta_normalized.pdf}%
\end{minipage}\hfill%
\begin{minipage}{0.49\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/mnist_ae/gl_vs_delta.pdf}%
\end{minipage}
\caption{Weight distance (left) and Gauss length (right) against maximum non-monotonic bump height for image reconstruction task. For clarity, \textbf{\textcolor{blue}{Blue}} points represent networks where the MLI property holds and \textbf{\textcolor{orange}{orange}} points are networks where the MLI property fails.}
\label{fig:ae_metrics}
\end{figure*}
In Figure~\ref{fig:ae_metrics}, we show the correlation between the distance travelled in parameter space and the smallest $\Delta$ such that the loss interpolation is $\Delta$-monotonic (Definition~\ref{def:delta_mono}) for the image reconstruction task. As expected, a small distance moved (or Gauss length, respectively) leads to a monotonic interpolation. Beyond the strict limits of our analysis, we observed that larger weight distances are correlated with non-monotonicity.
In Figure~\ref{fig:cifar_wdist_delta}, we display the distance moved in weight space against the minimum $\Delta$ such that ResNets trained on CIFAR-10 and CIFAR-100 have $\Delta$-monotonic interpolations from initial to final parameters. In general, larger bumps occur at larger distances moved, as in our other experiments.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR10/dist_vs_delta.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR100/dist_vs_delta.pdf}
\end{minipage}
\caption{Distance moved in weight space against the minimum $\Delta$ such that ResNets trained on CIFAR-10 (left) and CIFAR-100 (right) have $\Delta$-monotonic interpolations from initialization to final parameters. We observe a general trend that larger distance in weight space corresponds to more significant non-monotonicities.}
\label{fig:cifar_wdist_delta}
\end{figure}
\subsection{Additional Gauss length experiments}
\label{app:experiments_gl}
\begin{wrapfigure}[12]{l}{0.5\textwidth}
\centering
\vspace{-0.5cm}
\includegraphics[width=\linewidth]{figures/mnist_ae/gl_vs_wdist_unnorm.pdf}
\vspace{-0.8cm}
\caption{Power law relationship between Gauss length and weight distance for autoencoders trained on MNIST ($R^2 = 0.705$).}
\label{fig:ae_power_law}
\end{wrapfigure}
In this section, we investigate the relationship between Gauss length and non-monotonicity on image reconstruction and image classification (CIFAR-10 \& CIFAR-100) tasks.
In Figure~\ref{fig:ae_metrics}, we show the correlation between the Gauss length and the smallest $\Delta$ such that the loss interpolation is $\Delta$-monotonic for the image reconstruction task. Similar to the weight distances, a small Gauss length leads to a monotonic interpolation. We also observe that larger Gauss lengths are correlated with the failure of the MLI property.
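For reference, a simple discretization of the Gauss length used in these plots can be computed as in the sketch below, which sums the angles between consecutive difference vectors of the (flattened) function-space path evaluated on a grid of interpolation coefficients; this is one standard discretization and may differ in minor details from the estimator used in the main text.
\begin{verbatim}
import numpy as np

def discrete_gauss_length(outputs):
    # outputs: array of shape (num_alphas, output_dim) containing the flattened
    # network predictions at evenly spaced interpolation coefficients.
    diffs = np.diff(outputs, axis=0)
    total = 0.0
    for v, w in zip(diffs[:-1], diffs[1:]):
        cos = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w) + 1e-12)
        total += np.arccos(np.clip(cos, -1.0, 1.0))
    return total
\end{verbatim}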
In Figure~\ref{fig:cifar_gl_delta}, we display the Gauss length of the function-space interpolation path against the minimum $\Delta$ such that ResNets trained on CIFAR-10 and CIFAR-100 have $\Delta$-monotonic interpolations from initial to final parameters. In general, larger bumps occur at larger Gauss lengths, as in our other experiments.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR10/gl_vs_delta.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR100/gl_vs_delta.pdf}
\end{minipage}
\caption{Gauss length of function-space interpolation path against the minimum $\Delta$ such that ResNets trained on CIFAR-10 (left) and CIFAR-100 (right) have $\Delta$-monotonic interpolations. At small Gauss lengths, the networks generally satisfy the MLI property while larger bumps in the interpolation path are achieved for interpolations with larger Gauss lengths.}
\label{fig:cifar_gl_delta}
\end{figure}
\subsection{Additional Gauss length vs weight distance}
\label{app:experiments_gl_wd}
In this section, we investigate the relationship between Gauss length and the distance travelled in weight space for the image reconstruction and image classification (CIFAR-10 \& CIFAR-100) tasks.
In Figure~\ref{fig:ae_power_law}, we plot the Gauss length of the interpolation path against the distance moved in weight space for autoencoders trained on MNIST. In this case, as in Figure~\ref{fig:mlp_power_law}, we observe a clear power-law relationship between the two.
In Figure~\ref{fig:cifar_power_law}, we plot the Gauss length of the interpolation path against the distance moved in weight space for ResNets trained on CIFAR-10 and CIFAR-100, over varying initialization schemes, optimizers, and the use of batchnorm (as in Tables~\ref{tab:cifar10_resnets}~and~\ref{tab:cifar100_resnets}). In this case, there is not a clear power law relationship but nonetheless a clear positive correlation remains between the Gauss length and the distance moved in weight space.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR10/gl_vs_wdist_unnorm.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR100/gl_vs_wdist_unnorm.pdf}
\end{minipage}
\caption{Gauss length of interpolation path against distance moved in weight space for ResNets trained on CIFAR-10 and CIFAR-100. While there is a positive correlation, the goodness of fit is lower for these networks than the MNIST classifiers and autoencoders ($R^2 = 0.399$ for CIFAR-10 and $R^2=0.402$ for CIFAR-100).}
\label{fig:cifar_power_law}
\end{figure}
\subsection{Impact of batch normalization}
In Figure~\ref{fig:mlp-bn}, we compare the distance travelled between models trained with and without batch normalization for classifiers trained on the MNIST \& Fashion-MNIST datasets. We plot the minimum $\Delta$ such that the interpolated loss is $\Delta$-monotonic against the distance moved in weight space. We observe that models trained with batch normalization had a higher variance of distance travelled in weight space compared to models trained without batch normalization. Hence, when batch normalization is used, there are more configurations that travelled further in weight space. Consistent with our prior analysis, configurations that travelled far in parameter space tend to break the MLI property more often. This hints that this behaviour of batch normalization can cause more frequent violations of the MLI property.
\begin{figure}
\vspace{-0.8cm}
\centering
\includegraphics[width=0.8\linewidth]{figures/mnist_opt/mono_vs_dis_bn.pdf}
\vspace{-0.5cm}
\caption{Monotonicity against distance moved in weight space for MNIST \& Fashion-MNIST classifiers. \textbf{\textcolor{blue}{Blue}} points represent networks where the MLI property holds and \textbf{\textcolor{orange}{orange}} points are networks where the MLI property fails.}
\label{fig:mlp-bn}
\end{figure}
\subsection{Additional loss landscape experiments}
\label{app:experiments_ll}
\paragraph{Loss landscape for network failing MLI.} In Section~\ref{sec:exp:what_landscape}, we showed loss landscape visualizations for networks that satisfied the MLI property. In Figure~\ref{fig:nonmono_loss_landscape}, we show the 2D projection of the loss landscape for a fully-connected FashionMNIST classifier that does not satisfy the MLI property. In this 2D slice, we observe a wide barrier in the loss landscape followed by a region of extremely flat curvature.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/loss_landscape_nonmono.pdf}
\vspace{-0.5cm}
\caption{Loss landscape projection of a FashionMNIST classifier that does not satisfy the MLI property. We observe a barrier in the loss followed by a region of extremely flat curvature.}
\label{fig:nonmono_loss_landscape}
\end{figure}
\begin{figure}
\vspace{-0.5cm}
\centering
\includegraphics[width=0.5\linewidth]{figures/CIFAR_search/CIFAR10/random_init_interp_mono_nonmono.png}
\vspace{-0.5cm}
\caption{Interpolating from random intializations to SGD solution, with original initialization-solution pair being non-monotonic. The initialization scheme used differs from the one used to train the network, and surprisingly leads to monotonic interpolations.}
\label{fig:cifar_rand_init_interp}
\end{figure}
\paragraph{MLI over permutation symmetries.} In addition to random initializations, we explored interpolations over the permutation symmetry group of initialization-solution pairs for fully-connected networks on MNIST. In Figure~\ref{fig:permuting_mli}, we utilized the fact that the hidden units of adjacent linear layers can be permuted without modifying the output function, and used this to randomly permute the initialization and the final solution. This leads to different paths through weight space while leaving the loss at the end-points of the interpolation unchanged. We observed that these permutations preserve the MLI property.
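A minimal sketch of this permutation operation for a pair of adjacent fully-connected layers is given below; it is a generic illustration of the symmetry, not our exact implementation.
\begin{verbatim}
import torch

def permute_hidden_units(W1, b1, W2, perm):
    # W1 (h x d), b1 (h,): first layer; W2 (o x h): following layer.
    # Applying the same permutation to the rows of W1, b1 and to the columns
    # of W2 leaves the network function unchanged (elementwise activations).
    return W1[perm, :], b1[perm], W2[:, perm]
\end{verbatim}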
\begin{figure}
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_interp_nway/permute_init.png}\\
\end{minipage}\hfill%
\begin{minipage}{0.48\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_interp_nway/permute_final.png}
\end{minipage}
\centering
\caption{Interpolation loss between initial points and final solution on training set. (Left) random permutations of the initialization are shown. (Right) Random permutations of the solution are shown. Mean loss shown with ($\pm 1$) standard deviation as filled region.}
\label{fig:permuting_mli}
\end{figure}
\paragraph{Interpolations with different initialization distributions.} In Figure~\ref{fig:rand_init_to_optimum}, we showed that the monotonic (or non-monotonic) interpolations persist across different random initializations for a given final solution. However, it is possible that the monotonicity of the interpolations can change if we modify the initialization scheme. We took the network from the bottom right plot of Figure~\ref{fig:rand_init_to_optimum} and chose our initializations according to the scheme described in \citet{goyal2017accurate} --- where the final batch norm layer in each residual block is initialized to be zero so that the network function is close to the identity function. The result is shown in Figure~\ref{fig:cifar_rand_init_interp}; in this case, the random initializations are linearly connected to the solution while random samples from the original initialization distribution are not.
\paragraph{Additional landscape visualizations.} In Figure~\ref{fig:cut_resnet_cifar10_2inits_2optima}, we show additional 2D projections of the loss landscapes for ResNet20v1 networks on CIFAR-10. This confirms results seen elsewhere: linear interpolation between unrelated initial points and optima yield monotonic decreases in training loss and monotonic increases in accuracy.
\begin{figure}[!htb]
\begin{minipage}{0.5\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/resnet_cifar10_2inits_1optimum_6312514.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/resnet_cifar10_1init_2optima_68641566.pdf}
\end{minipage}
\caption{\label{fig:cut_resnet_cifar10_2inits_2optima} Two-dimensional sections of the weight space for ResNet20v1 trained on CIFAR-10. (\textbf{Left}) The plane is defined by 2 initializations (circles) and an optimum (cross) reached from one of them. (\textbf{Right}) The plane is defined by an initialization (circle) and two optima (crosses). For both training loss ({\color{green} \textbf{green}}) and training accuracy ({\color{purple} \textbf{purple}}), interpolations between both minima and optima yield monotonic decreases/increases, respectively.}
\end{figure}
In Figure~\ref{fig:roberta_esperanto_landscape_interp}, we show the same loss landscape projections as displayed in Figure~\ref{fig:cut_roberta_esperanto}. Additionally, here we include the loss over the interpolated paths.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/LM/roberta_LM_esperanto_init1tobothoptima_landscape_and_sections_22943544.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/LM/roberta_LM_esperanto_landscape_and_sections_84399585.pdf}
\end{minipage}\hfill%
\caption{Loss landscapes with loss over linear interpolations between initial and final parameters for RoBERTa trained as language model on Esperanto. Linear interpolation between all pairs leads to monotonic reduction in the loss. (\textbf{Left}) The loss over the plane defined by the initial parameters, optimum found by SGD, and an unrelated optimum. (\textbf{Right}) The loss over the plane defined by the initial parameters, optimum found by SGD, and an unrelated initialization.}
\label{fig:roberta_esperanto_landscape_interp}
\end{figure}
\subsection{Additional MNIST results}\label{app:mnist_additional}
In Figure~\ref{fig:784_all}, we show the full set of network interpolations used to produce Table~\ref{tab:mnist_lr}. Overall, we observed a significant effect from introducing batch normalization across all other settings considered, but particularly when the learning rate is large.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/784_search/all_interp_mnist.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/784_search/all_interp_fmnist.pdf}
\end{minipage}
\caption{Linear interpolations for MNIST (top) and Fashion-MNIST (bottom). Different curves represent trained networks with varying activation function, learning rate, choice of optimizer, and batch normalization. All networks achieve at most 0.1 final training loss.}
\label{fig:784_all}
\end{figure}
\paragraph{Varying depth and hidden size.} We explored the effect of varying depth and hidden size on the MLI property. Overall, we did not observe any substantial correlation between these factors and the MLI property (especially when taking into account implicit effects on the critical learning rate).
In Figure~\ref{fig:mnist_delta_heatmaps}, we display heatmaps of $\min \Delta$ as a function of the learning rate, hidden size and depth of fully-connected neural networks trained on MNIST and Fashion-MNIST. Overall, we do not observe any significant effect from changing either the hidden size or the network depth --- the learning rate accounts for the dominant changes in the monotonicity of the interpolation. We trained each network with ReLU activations for 200 epochs with batch sizes of 512, using both Adam and SGD and with/without batch normalization. Only those models that achieved a training loss of at most 0.1 are displayed (cyan patches indicate that no model met this criterion for the corresponding configuration).
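For concreteness, the short sketch below shows one way to compute $\min \Delta$ from a discretized interpolation curve, under the reading that a curve is $\Delta$-monotonic when the loss never increases by more than $\Delta$ between any earlier and later point on the path; the function name is illustrative and this is not the exact routine used to produce the heatmaps.
\begin{verbatim}
import numpy as np

def min_delta(losses):
    # Smallest Delta such that the discretized curve is Delta-monotonic:
    # the largest rise of the loss above the running minimum (0 if monotonic).
    losses = np.asarray(losses, dtype=float)
    running_min = np.minimum.accumulate(losses)
    return float(np.max(losses - running_min))

# Example: a mostly decreasing curve with a small bump of height ~0.2.
alphas = np.linspace(0.0, 1.0, 50)
curve = np.exp(-3 * alphas)
curve[40:45] += 0.2
print(min_delta(curve))  # ~0.2
\end{verbatim}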
\begin{figure}
\centering
\resizebox{\linewidth}{!}{%
\includegraphics[height=5cm]{figures/784_search/delta_heatmap_lr_hsize.pdf}
\includegraphics[height=5cm]{figures/784_search/delta_heatmap_lr_depth.pdf}}
\caption{Heatmaps of the average $\min \Delta$ as a function of the learning rate, hidden size and network depth for fully-connected networks trained on MNIST and FashionMNIST. On the left, depth 3 networks with varying hidden sizes are compared. On the right, networks with hidden size 1024 are compared over varying depth. {\color{cyan} \textbf{Cyan}} patches indicate that no model with the given configuration achieved a training loss of at most 0.1.}
\label{fig:mnist_delta_heatmaps}
\end{figure}
\subsection{Problem Difficulty}
\label{app:problem-diff}
We revisited the conclusion of \citet{goodfellow2014qualitatively} that the MLI property holds due to the relative ease of optimization. We explored this question on three fronts. First, we used a fixed network size and varied the number of data points in the dataset. Second, we used a fixed dataset size and varied the number of hidden units in a network of fixed depth (Figure~\ref{fig:mnist_delta_heatmaps}). Third, we varied the amount of random label corruption in the training dataset.
\paragraph{Dataset size.} We trained fully-connected networks on the Fashion-MNIST dataset using SGD with a learning rate of 0.1. The networks had a single hidden layer with 1000 hidden units, and we varied the dataset size from 10 up to the full size of 60,000. Figure~\ref{fig:vary_dsize} shows the linear interpolations for networks trained on varying dataset sizes. We observed that even when the training dynamics are unstable and highly non-linear, the interpolation is still monotonically decreasing.
\begin{figure}
\begin{minipage}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/dset_varying/dsize=30.png}
\end{minipage}\hfill%
\begin{minipage}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/dset_varying/dsize=300.png}
\end{minipage}\hfill%
\begin{minipage}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/dset_varying/dsize=3000.png}
\end{minipage}
\caption{Linear interpolations (green) for neural networks trained on varying dataset sizes (30, 300, 3000 from left-to-right), with loss during training overlaid (blue). Even when the training dynamics are unstable and highly non-linear, the interpolation produces a smooth monotonic curve.}
\vspace{-0.4cm}
\label{fig:vary_dsize}
\end{figure}
\paragraph{MLI vs.~label corruption.} One hypothesis is that when the dataset is sufficiently simple, the learning problem is easy and SGD consistently finds solutions with the MLI property. To explore this hypothesis, we trained neural networks with label corruption. We trained a neural network with two hidden layers, each with 1024 units (more details can be found in Appendix~\ref{app:experiment-specific}). The labels were corrupted by uniformly sampling labels for some proportion of the data points, with the proportion of corrupted labels varied from 0\% to 100\% in 2.5\% intervals. At all levels of label corruption, the MLI property persisted. One possible explanation for this result follows from the fact that logit gradients cluster together by logit index --- even for inputs belonging to different true classes \citep{fort2019emergent}. This provides an explanation for gradient descent exploring a low-dimensional subspace relative to the parameter space. Therefore, corrupting the labels will not disrupt this clustering at initialization and, as we verified empirically, is unlikely to prevent the MLI property from holding.
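The snippet below sketches the corruption procedure described above, in which a fixed proportion of training points receive labels drawn uniformly at random; variable names are illustrative and this is a minimal stand-in for our training code.
\begin{verbatim}
import numpy as np

def corrupt_labels(labels, fraction, num_classes, seed=0):
    # Replace a `fraction` of the labels with uniformly sampled class labels.
    rng = np.random.default_rng(seed)
    labels = np.array(labels, copy=True)
    num_corrupt = int(round(fraction * len(labels)))
    idx = rng.choice(len(labels), size=num_corrupt, replace=False)
    labels[idx] = rng.integers(0, num_classes, size=num_corrupt)
    return labels

clean = np.random.randint(0, 10, size=1000)
noisy = corrupt_labels(clean, 0.25, num_classes=10)
# Uniform sampling may reproduce the true label, so the observed flip
# rate is roughly 0.25 * 9/10.
print((clean != noisy).mean())
\end{verbatim}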
\subsection{Learning Dynamics}
\label{appendix-dynamics}
\citet{lewkowycz2020large} observed a critical regime of large learning rates wherein gradient descent breaks out of high-curvature regions at initialization and explores regions of high loss before settling in a low-loss region with lower curvature. We might expect such trajectories to lead to initialization-solution pairs that do not satisfy the MLI property. On the one hand, in Figure~\ref{fig:vary_dsize}, we observed several runs where SGD overcomes large barriers and yet the MLI property holds. On the other hand, in Figure~\ref{fig:nonmono_loss_landscape} we observe a projection of the loss landscape which aligns with the qualitative description of the catapult phase: a barrier in the loss, with SGD settling in a region of much lower curvature. Overall, we consider our findings inconclusive on this front.
\subsection{MLI on held-out data}
In this work, we are primarily concerned with better understanding the interaction between the MLI property and the training loss. Therefore, all of the results that we have reported are based on statistics computed over the training set. However, the same observations generally hold when evaluating on held-out data (up to overfitting effects). This was confirmed by \citet{goodfellow2014qualitatively}, and in this section, we provide a short qualitative study verifying this for the settings that we have studied.
\paragraph{Image reconstruction.} In Figure~\ref{fig:mnist_holdout_compare} (left two plots), we compare the loss interpolations on the training set and test set for two trained autoencoders. In the first plot, the network satisfies the MLI property but in the second it does not. In both cases, the test loss interpolation closely follows the training loss.
\begin{figure}
\begin{minipage}{0.5\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/mnist_ae/holdout_compare.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/784_search/holdout_compare.pdf}
\end{minipage}
\caption{Comparing loss interpolations on the train and test set. The first and second plots show fully-connected autoencoders trained on MNIST that do and do not satisfy the MLI property, respectively. The third and fourth plots display fully-connected MNIST classifiers.}
\label{fig:mnist_holdout_compare}
\end{figure}
\paragraph{MNIST Classifiers.} The third and fourth plots in Figure~\ref{fig:mnist_holdout_compare} show the train and test loss interpolations for fully-connected MNIST classifiers. In this case, the test loss increases towards the end of the interpolation path while the training loss stays small. This happens because the network becomes over-confident in its predictions and pays a larger cost for misclassification on the test-set (even though the accuracy remains the same). This observed behaviour is one reason why we favour exploration of the training loss throughout our work. Despite this, we do still observe the test loss following the general shape of the training loss for most of the interpolation path.
\paragraph{CIFAR-10 \& CIFAR-100 Classifiers.} In Figure~\ref{fig:cifar_holdout_compare}, we compare the loss interpolations on the training set and test set for ResNets trained on CIFAR-10 and CIFAR-100. The first two plots show CIFAR-10 classifiers, with the third and fourth plots showing CIFAR-100 classifiers. The first and third plots show networks that satisfy the MLI property on the training loss, while the second and fourth show networks that fail to satisfy the MLI property on the training loss. As with the MNIST classifiers, we observe that the test loss has a tendency to increase towards the end of the interpolation path (while following the overall trend of the training loss).
\begin{figure}
\begin{minipage}{0.5\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR10/holdout_compare.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering%
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR100/holdout_compare.pdf}
\end{minipage}
\caption{Comparing loss interpolations on the train and test set. The first and second plots show ResNets trained on CIFAR-10 that do and do not satisfy the MLI property, respectively. The third and fourth plots display interpolation plots for ResNets trained on CIFAR-100.}
\label{fig:cifar_holdout_compare}
\end{figure}
\section{Additional Theoretical Analysis}\label{app:additional_theory}
In this section, we present additional theoretical analysis of the MLI property.
\subsection{Wide neural networks}
\label{app:wide_nets}
In this section, we prove that sufficiently wide fully-connected networks satisfy the MLI property. To do so, we lean on prior analysis from \citet{lee2019wide}. We assume that the fully-connected network has layer sizes $d \rightarrow m \rightarrow \ldots \rightarrow m \rightarrow k$, with $m \rightarrow \infty$. We also assume that our loss function is the mean-squared error,
\[\calL(\btheta) = \frac{1}{2n}\sum_{i=1}^n \Vert f_{\btheta}(\bx_i) - \by_i \Vert^2.\]
\paragraph{Assumptions.} We borrow the setting established by \citet{lee2019wide} that consists of four assumptions.
\begin{enumerate}
\item The widths of the hidden layers are identical (as stated above).
\item The neural tangent kernel, $\frac{1}{n}J(\btheta)^\top J(\btheta)$, is full-rank with finite singular values. I.e.,
\[0 < \lambda_{\min}\left(\frac{1}{n}J(\btheta)^\top J(\btheta)\right) \leq \lambda_{\max}\left(\frac{1}{n}J(\btheta)^\top J(\btheta)\right) < \infty.\]
Further, we define $\eta_{\textrm{critical}} := 2/(\lambda_{\min} + \lambda_{\max})$.
\item The training set $\{(\bx_i, \by_i)\}_{i=1}^n$ is contained in a compact set and contains no duplicate inputs.
\item The activation function, $\phi$, in the network satisfies the following,
\[ \vert \phi(0)\vert < \infty, \:\: \Vert \phi' \Vert_\infty < \infty, \:\: \sup_{\bx \neq \tilde{\bx}} \frac{\vert \phi'(\bx) - \phi'(\tilde{\bx})\vert}{\vert \bx - \tilde{\bx} \vert} < \infty.\]
\end{enumerate}
\paragraph{Background.} We utilize two results from \citet{lee2019wide}. The first of which bounds the Jacobian matrix in Frobenius norm about initialization.
\begin{lemma}\label{lemma:locally_lipschitz_jacobian}(Locally Lipschitz Jacobian [Lemma~1 \citep{lee2019wide}])
Assume conditions 1-4 above. There is a $K > 0$ such that for every $C > 0$, with high probability over random initialization,
\begin{align*}
\frac{1}{\sqrt{m}}\Vert J(\btheta) \Vert_F &\leq K, \\
\frac{1}{\sqrt{m}}\Vert J(\btheta) - J(\btheta')\Vert_F &\leq K\Vert \btheta - \btheta'\Vert_2,
\end{align*}
for all $\btheta$ and $\btheta'$ such that $\Vert \btheta - \btheta_0\Vert \leq Cm^{-1/2}$ and $\Vert \btheta' - \btheta_0\Vert \leq Cm^{-1/2}$.
\end{lemma}
In words, Lemma~\ref{lemma:locally_lipschitz_jacobian} guarantees that the Frobenius norm of the Jacobian is close to initialization as width grows and that it does not vary too quickly. The second of these two constraints also guarantees that the norm of the network Hessian is bounded (by considering $\btheta$ and $\btheta'$ arbitrarily close).
The second result that we borrow provides a high-probability guarantee that infinitely wide neural networks find solutions near to their initialization (the lazy training regime \cite{chizat2018lazy}).
\begin{lemma}\label{lemma:small_weight_change}(Lazy training [Theorem~G.1 \citep{lee2019wide}])
Assume conditions 1-4 above. For all $\delta > 0$ and $\eta_0 < \eta_{\textrm{critical}}$, there exists $M \in \bbN$, $R_0 > 0$, and $K > 1$ such that for every $m > M$, with probability at least $1-\delta$ over random initialization, gradient descent with learning rate $\eta = \eta_0 / m$ applied for $T$ steps satisfies,
\[\Vert \btheta_T - \btheta_0 \Vert_2 \leq \frac{3KR_0}{\lambda_{\min}}m^{-1/2}.\]
\end{lemma}
\paragraph{MLI for infinite width networks.}
From the above, we can prove that in the limit of infinite width, gradient descent with a suitably small learning rate finds a solution that is linearly connected to the initialization.
Intuitively, this result holds because the objective is locally convex in a region near a minimum. As the width of the network grows, the minimum found by gradient descent becomes arbitrarily close to the initialization, and thus the linear interpolation acts over a convex function.
For completeness, we first provide a simple proof that linear interpolations satisfy the MLI property in convex loss landscapes. The result itself follows from standard techniques presented in, for example, \citet{boyd2004convex}.
\begin{restatable}[Linearity and convexity gives MLI]{lemma}{linearconvexmli}\label{lemma:linear_convex_mli}
Let $\calL : \bbR^d \rightarrow \bbR$ be a convex, differentiable loss function. Further, let $\btheta^* \in \argmin \calL$. Then, for all $\btheta_0 \in \bbR^d$, $g(\alpha) := \calL(\btheta_0 + \alpha(\btheta^* - \btheta_0))$ is monotonically decreasing for $\alpha \in [0,1)$.
\end{restatable}
\begin{proof}
We have that $g(\alpha)$ is also a convex, differentiable function. Moreover, $g(1) = \calL(\btheta^*) \leq g(\alpha)$ for all $\alpha$, since $\btheta^*$ is a global minimizer. Therefore, using the first-order convexity condition on $g$,
\[g'(\alpha) \leq \frac{g(1) - g(\alpha)}{1 - \alpha} \leq 0.\]
\end{proof}
We now proceed with the main result of this section.
\begin{restatable}[Wide networks satisfy the MLI Property]{theorem}{infwidthmli}\label{thm:inf_width_mli}
Assume conditions 1-4 above. For all $\delta > 0$ and $\eta_0 < \eta_{\textrm{critical}}$, there exists $M \in \bbN$ such that for every $m > M$, with probability at least $1-\delta$ over random initialization, gradient descent with learning rate $\eta = \eta_0 / m$ satisfies,
\[\calL(\btheta_{\alpha_2}) - \calL(\btheta_{\alpha_1}) \leq 0,\]
for all $\alpha_2 > \alpha_1 \in [0,1)$.
\end{restatable}
\begin{proof}
For brevity, we write $\Delta\btheta = \btheta_T - \btheta_0$, with $\btheta_\alpha = \btheta_0 + \alpha \Delta\btheta$. Our approach is to linearize the loss in function-space and show that all remaining terms are quadratic in $\Delta\btheta$ and so are dominated by the linear terms for a sufficiently wide network.
We begin by considering the Taylor series of $\calL(\btheta_\alpha)$ about $\btheta_0$, using the Lagrange form of the remainder,
\begin{align}
\calL(\btheta_\alpha) &= \calL(\btheta_0) + \alpha \nabla_{\btheta}\calL(\btheta_0)^\top \Delta\btheta + \frac{1}{2}\alpha^2 \Delta\btheta^\top \nabla^2_{\btheta} \calL(\bx_i; \bar{\btheta}_\alpha) \Delta\btheta,\\
&= \calL(\btheta_0) + \frac{\alpha}{2n}\sum_{i=1}^n(f(\bx_i; \btheta_0) - \by_i)^\top J(\bx_i; \btheta_0) \Delta\btheta + \frac{1}{2}\alpha^2 \Delta\btheta^\top \nabla^2_{\btheta} \calL(\bx_i; \bar{\btheta}_\alpha) \Delta\btheta,
\end{align}
for some $\bar{\btheta}_\alpha$ on the line $[\btheta_0, \btheta_\alpha]$. Now, noting that the Hessian of $f$ with respect to $\btheta$ is a third-order tensor, we can utilize the integral form of the Taylor expansion to write,
\begin{equation}
\left(J(\bx_i; \btheta_0)\Delta\btheta\right)_j = f(\bx_i; \btheta_T)_j - f(\bx_i; \btheta_0)_j - \frac{1}{2}\Delta\btheta^\top\left(\int_0^1 \frac{\partial^2 f_j}{\partial \btheta^2}(\bx_i; \btheta_{\alpha'}) d\alpha'\right) \Delta\btheta,
\end{equation}
where the $j$ subscript notation indicates vector indexing. Collecting terms, we have
\begin{align*}
\calL(\btheta_\alpha) - \calL(\btheta_0) =& \frac{1}{2n}\sum_{i=1}^n \Bigl[\alpha(f(\bx_i; \btheta_0) - \by_i)^\top (f(\bx_i; \btheta_T) - f(\bx_i; \btheta_0)) + \frac{1}{2}\alpha^2 \Delta\btheta^\top \nabla^2_{\btheta} \calL(\bx_i; \bar{\btheta}_\alpha) \Delta\btheta\\
& -\frac{1}{2}\alpha\sum_{j=1}^k (f(\bx_i; \btheta_0) - \by_i)_j \Delta\btheta^\top\left(\int_0^1 \frac{\partial^2 f_j}{\partial \btheta^2}(\bx_i; \btheta_{\alpha'}) d\alpha' \right) \Delta\btheta \Bigr].
\end{align*}
Now, noting that $\calL(\btheta_{\alpha_2}) - \calL(\btheta_{\alpha_1}) = \left(\calL(\btheta_{\alpha_2}) - \calL(\btheta_0)\right) - \left(\calL(\btheta_{\alpha_1}) - \calL(\btheta_0)\right)$, we have
\begin{align*}
\calL(\btheta_{\alpha_2}) - \calL(\btheta_{\alpha_1}) =& \frac{1}{2n}\sum_{i=1}^n \Bigl[(\alpha_2 - \alpha_1)(f(\bx_i; \btheta_0) - \by_i)^\top (f(\bx_i; \btheta_T) - f(\bx_i; \btheta_0))\\
& + \frac{1}{2}\Delta\btheta^\top\left(\alpha_2^2 \nabla^2_{\btheta} \calL(\bx_i; \bar{\btheta}_{\alpha_2}) - \alpha^2_1 \nabla^2_{\btheta} \calL(\bx_i; \bar{\btheta}_{\alpha_1}) \right)\Delta\btheta\\
& -\frac{1}{2}(\alpha_2 - \alpha_1)\sum_{j=1}^k (f(\bx_i; \btheta_0) - \by_i)_j \Delta\btheta^\top\left(\int_0^1 \frac{\partial^2 f_j}{\partial \btheta^2}(\bx_i; \btheta_{\alpha'}) d\alpha' \right) \Delta\btheta \Bigr].
\end{align*}
The first term in the sum is negative as $\calL$ is convex in $f$ (and $\alpha_2 > \alpha_1$). It remains to show that the other terms behave asymptotically like $\Vert\Delta\btheta\Vert^2$. First, notice that we can decompose the Hessian of the loss as follows,
\begin{equation}
\nabla_{\btheta}^2\calL(\bx_i; \btheta) = J(\bx_i; \btheta)^\top J(\bx_i; \btheta) + \sum_{j=1}^k (f(\bx_i; \btheta) - \by_i)_j \frac{\partial^2 f_j}{\partial \btheta^2}(\bx_i; \btheta)
\end{equation}
Furthermore, by Lemma~\ref{lemma:small_weight_change}, there exists an $M' \in \bbN$ such that for all $m > M'$ we have $\Vert \Delta\btheta\Vert \leq O(m^{-1/2})$ with probability at least $1 - \delta$. Under this event, we can apply Lemma~\ref{lemma:locally_lipschitz_jacobian} to guarantee that the average Jacobian and Hessian norms are bounded about initialization:
\[\frac{1}{n}\sum_{i=1}^n \left\Vert J(\bx_i; \btheta) \right\Vert^2_F < \infty \:\:\:\textrm{ and }\:\:\: \frac{1}{n}\sum_{i=1}^n\sum_{j=1}^k\left\Vert \frac{\partial^2 f_j}{\partial \btheta^2}(\bx_i; \btheta) \right\Vert^2_F < \infty.\]
Therefore, there exists an $M \geq M'$, such that for all $m > M$ the negative first-order term dominates the second order terms. Under the $1-\delta$ probability event, this guarantees that the loss is monotonically decreasing along the linear interpolation.
\end{proof}
\subsection{A Noisy Quadratic Model}
\label{app:nqm}
The noisy quadratic model (NQM) \citep{schaul2013no, wu2018understanding, zhang2019algorithmic} serves as a useful guide for understanding the effects of stochasticity in asymptotic neural network training. Indeed, \citet{zhang2019algorithmic} demonstrate that the NQM makes predictions that are aligned with experimental results on deep neural networks. Using this model, we can provide an explanation for one possible cause of non-monotonicity: the interpolated loss attains its minimum slightly before $\alpha = 1$ and then turns upward near the end point. Intuitively, we can imagine a bowl-shaped loss surface where the final parameters lie on the opposite side of the optimum relative to the initialization. This non-monotonicity is likely to occur when training with smaller batch sizes and/or using larger (fixed) learning rates.
Let our loss function be as follows:
\def{\boldsymbol{\theta}}{{\boldsymbol{\theta}}}
\def{\textbf{c}}{{\textbf{c}}}
\begin{align}
\mathcal{L}({\boldsymbol{\theta}}) = \frac{1}{2} {\boldsymbol{\theta}}^{\top} \mathbf{K} {\boldsymbol{\theta}},
\end{align}
where ${\boldsymbol{\theta}} \in \mathbb{R}^d$ and $\mathbf{K} \in \mathbb{R}^{d \times d}$. The optimization algorithm receives stochastic gradients $\mathbf{K} {\boldsymbol{\theta}} + \mathbf{c}$, where $\mathbf{c} \sim \calN (\mathbf{0}, \mathbf{K})$. Consider the iterates $\{{\boldsymbol{\theta}}_{t}\}_{t=0}^{T}$ produced by gradient descent. With a sufficiently small learning rate, the expected iterate converges to the optimum, i.e., $\lim_{t \to \infty} \bbE[{\boldsymbol{\theta}}_t] = \mathbf{0}$.
Also consider interpolating between arbitrary $\btheta_1$ and $\btheta_2$. The loss along the interpolation direction is $\mathcal{L}(\btheta_1 + \alpha (\btheta_2 - \btheta_1))$. We compute the derivative with respect to $\alpha$:
\begin{align}
\frac{\partial \mathcal{L}}{\partial \alpha} (\btheta_1 + \alpha (\btheta_2 - \btheta_1))
&= \frac{\partial}{\partial \alpha} \left[\frac{1}{2}(\btheta_1 + \alpha (\btheta_2 - \btheta_1))^{\top} \mathbf{K} (\btheta_1 + \alpha (\btheta_2 - \btheta_1)) \right] \\
&= (\btheta_2 - \btheta_1)^{\top} \mathbf{K} (\btheta_1 + \alpha (\btheta_2 - \btheta_1))
\end{align}
Hence, the loss is monotonically decreasing if, for all $\alpha \in [0,1]$,
\begin{align}
(\btheta_2 - \btheta_1)^{\top} \mathbf{K} (\btheta_1 + \alpha (\btheta_2 - \btheta_1)) < 0
\end{align}
In the one-dimensional case, this condition says that the interpolation is non-monotonic when $\btheta_1$ and $\btheta_2$ lie on opposite sides of the minimum. More generally, note that because $\frac{\partial \mathcal{L}}{\partial \alpha}$ is linear in $\alpha$, the interpolation is monotonically decreasing if and only if both of the following conditions at the endpoints are satisfied:
\begin{align}
(\btheta_2 - \btheta_1)^{\top} \mathbf{K} \btheta_1 &< 0 \\
(\btheta_2 - \btheta_1)^{\top} \mathbf{K} \btheta_2 &< 0
\end{align}
These two conditions correspond to a negative derivative with respect to $\alpha$ at $\btheta_1$ and $\btheta_2$. Since we choose a learning rate so that the loss decreases in expectation (and hence the derivative is anti-aligned with $\btheta_2 - \btheta_1$ at initialization), it suffices to check just the second condition.
We simulate learning in this model to measure the effect of stochasticity under varying learning rates on the MLI property. As in \citet{zhang2019algorithmic}, we use $\btheta_1 := \btheta \sim \calN(\mathbf{0}, \mathbf{I})$ and $\mathbf{K} = \mathrm{diag}\{1, \frac{1}{2}, \frac{1}{3}, \dots, \frac{1}{d}\}$. As $t \to \infty$, the point $\btheta_2 := \btheta_T \sim \calN(\mathbf{0}, \eta \mathbf{K})$, where $\eta$ is the final learning rate and the randomness comes from the noise in the gradient.
Through empirical simulations, we verify that the distribution of $(\btheta_2 - \btheta_1)^{\top} \mathbf{K} \btheta_2$ is approximately symmetric about $0$, so the probability of a monotonic interpolation is roughly $\frac{1}{2}$. This is empirically verified in Figure~\ref{fig:nqm}. A smaller learning rate means that the distribution of $(\btheta_2 - \btheta_1)^{\top} \mathbf{K} \btheta_2$ has less variance. Because we discretize $\alpha$ when we check for MLI, $P\left((\btheta_2 - \btheta_1)^{\top} \mathbf{K} \btheta_2 < \epsilon\right)$ increases as the learning rate decreases, for some small $\epsilon$.
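A minimal simulation of this setting is sketched below, assuming the setup described above ($\mathbf{K} = \mathrm{diag}(1, 1/2, \ldots, 1/d)$, $\btheta_1 \sim \calN(\mathbf{0}, \mathbf{I})$, and stochastic gradients $\mathbf{K}\btheta + \mathbf{c}$ with $\mathbf{c} \sim \calN(\mathbf{0}, \mathbf{K})$). It estimates how often the endpoint condition $(\btheta_2 - \btheta_1)^\top\mathbf{K}\btheta_2 < 0$ holds across random seeds; the dimensions, step counts, and seeds are illustrative rather than the exact values used for Figure~\ref{fig:nqm}.
\begin{verbatim}
import numpy as np

def run_nqm(d=100, lr=1e-2, steps=5000, seed=0):
    # SGD on L(theta) = 0.5 * theta^T K theta with gradient noise c ~ N(0, K).
    rng = np.random.default_rng(seed)
    k = 1.0 / np.arange(1, d + 1)          # diagonal of K
    theta0 = rng.standard_normal(d)
    theta = theta0.copy()
    for _ in range(steps):
        noise = rng.standard_normal(d) * np.sqrt(k)
        theta -= lr * (k * theta + noise)  # stochastic gradient step
    return theta0, theta, k

# Monotonicity of the interpolated loss reduces to the sign of the
# endpoint derivative (theta_2 - theta_1)^T K theta_2.
stats = []
for seed in range(100):
    t0, tT, k = run_nqm(seed=seed)
    stats.append(np.dot((tT - t0) * k, tT))
stats = np.array(stats)
print("fraction monotonic:", np.mean(stats < 0))
\end{verbatim}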
\begin{figure}[t]
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/nqm/nqm_lr1e-2.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/nqm/nqm_lr1e-3.pdf}
\end{minipage}
\caption{For smaller learning rates, the standard deviation of the distribution goes down. Hence the probability $P\left((\btheta_2 - \btheta_1)^{\top} \mathbf{K} \btheta_2 < \epsilon\right)$ goes up for some small $\epsilon$ (indicating non-monotonicity from a turning point near the optimum that is hard to detect). We use an equal number of bins in both plots.}
\vspace{-0.4cm}
\label{fig:nqm}
\end{figure}
\section{Experiment Details}
\label{app:exp_details}
In this section, we provide full details of our experimental set-up. For all experiments, we discretize $\alpha$ in the interval $[0, 1]$ using 50 uniform steps to examine the MLI property. When training networks with SGD, we used a momentum coefficient of 0.9 and when training networks with the Adam optimizer, we used $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$. Unless specified otherwise, we used a batch size of 128.
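For reference, the sketch below illustrates how the loss along the linear interpolation $\btheta_\alpha = (1-\alpha)\btheta_0 + \alpha\btheta_T$ is evaluated at 50 uniform values of $\alpha$. The tiny model and random data are placeholders to keep the snippet self-contained; they are not the architectures or datasets used in our experiments.
\begin{verbatim}
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(256, 20), torch.randint(0, 10, (256,))
loss_fn = nn.CrossEntropyLoss()

theta_0 = copy.deepcopy(model.state_dict())            # initial parameters
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
for _ in range(200):                                    # stand-in for full training
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
theta_T = copy.deepcopy(model.state_dict())             # final parameters

def loss_at(alpha):
    # Evaluate the training loss at theta_0 + alpha * (theta_T - theta_0).
    interp = {k: (1 - alpha) * theta_0[k] + alpha * theta_T[k] for k in theta_0}
    model.load_state_dict(interp)
    with torch.no_grad():
        return loss_fn(model(x), y).item()

curve = [loss_at(a) for a in torch.linspace(0, 1, 50).tolist()]
print("monotonic:", all(b <= a + 1e-8 for a, b in zip(curve, curve[1:])))
\end{verbatim}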
\subsection{Image reconstruction experiments}
\label{app:exp_details_reconstruct}
In the image reconstruction experiments, we used deep autoencoders with the ReLU activation function. Our architecture consisted of $784 \rightarrow 512 \rightarrow H \rightarrow 512 \rightarrow 784$ units in each respective layer with $H \in \{1, 2, 5, 10, 25, 50, 100\}$. We trained the networks using either SGD with momentum or Adam. Each model was trained for 200 epochs using fixed learning rates in the set $\{0.3, 0.1, 0.03, 0.01, 0.003, 0.001, 0.0003, 0.0001\}$ and batch sizes of 512.
\subsection{Image classification experiments}
In the image classification experiments, we explored a large number of different architectures. We summarize all settings we explored below.
\paragraph{Multilayer Perceptron.} We train fully connected networks with varying widths and depths. For all experiments (except Figure~\ref{fig:mnist_delta_heatmaps}), widths were chosen from the set \{16, 128, 1024, 2048, 4096\} and depth was chosen from \{2, 4, 8\}. We trained each model using one of SGD, RMSProp, Adam, or KFAC, with fixed learning rates from the set \{3.0, 1.0, 0.3, 0.1, 0.03, 0.01, 0.003, 0.001, 0.0003, 0.0001\}. We experimented with 3 activation functions: tanh, sigmoid, and ReLU. We trained the networks for 200 epochs both with and without batch normalization.
\paragraph{Convolutional Neural Networks.} We trained Simple CNN, VGG16, VGG19, and ResNet-\{18, 20, 32, 44, 50, 56\} with and without batch normalization, on CIFAR-10 and CIFAR-100. The Simple CNN had two convolutional layers with a $5\times 5$ kernel followed by a single fully connected layer. We trained the networks with both SGD and the Adam optimizer. For all models, we used an initial learning rate in the set $\{0.3, 0.1, 0.03, 0.01, 0.003, 0.001, 0.0003, 0.0001\}$. For most models we fixed the learning rate throughout training but for the ResNets we used a waterfall learning rate decay (at 60, 90, and 120 epochs).
For the ResNet experiments without batch normalization, when using the Fixup \citep{zhang2019fixup} or block identity initialization \citep{goyal2017accurate} we replaced the batch normalization layers with scale and bias parameters taking the role of the standard batch norm affine transformation. The block identity initialization essentially consists of setting the final scale/bias parameters in each residual block to zero, so that the block computes only the skip connection (with possible down-sampling).
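As an illustration of the block identity initialization described above, the sketch below zeroes the final scale and bias parameters in each residual block so that, at initialization, the block reduces to its skip connection. The \texttt{ResidualBlock} class is a hypothetical stand-in, not our actual implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Hypothetical residual block with scale/bias replacing batch norm.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        out = self.conv2(torch.relu(self.conv1(x)))
        return x + self.scale * out + self.bias

def block_identity_init(model):
    # Zero the final scale/bias so each block initially computes the skip path.
    for m in model.modules():
        if isinstance(m, ResidualBlock):
            nn.init.zeros_(m.scale)
            nn.init.zeros_(m.bias)

net = nn.Sequential(ResidualBlock(16), ResidualBlock(16))
block_identity_init(net)
x = torch.randn(2, 16, 8, 8)
print(torch.allclose(net(x), x))  # True: blocks act as the identity at init
\end{verbatim}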
\subsection{Language modeling experiments}
We trained a RoBERTa transformer-based model~\citep{liu2019roberta} on the language modeling task on an Esperanto dataset with the Huggingface framework~\citep{wolf-etal-2020-transformers}, as described in their tutorial\footnote{\url{https://huggingface.co/blog/how-to-train}} and building on a notebook they published\footnote{\url{https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb}}. We trained the model from two distinct random initializations for 1 epoch (taking approximately 2 hours on a free Google Colab GPU).
\subsection{Experiment specifics}
\label{app:experiment-specific}
\paragraph{MNIST \& Fashion-MNIST batch norm comparison.} We describe the experimental set-up used to produce Figure~\ref{fig:784_logit_viz} and Figure~\ref{fig:784_all}. We trained fully-connected networks whose architecture consisted of $784 \rightarrow 1024 \rightarrow 1024 \rightarrow 10$ units in each layer. We explored ReLU, sigmoid, and tanh activation functions and trained the networks with and without batch norm layers, which, when used, were inserted after each linear layer (except the last). The networks were trained for 200 epochs using fixed learning rates in the set $\{3.0, 1.0, 0.3, 0.1, 0.03, 0.01, 0.003, 0.001\}$ and with either the Adam optimizer or SGD with momentum.
\paragraph{Problem difficulty experiments.} For the experiments evaluating problem difficulty (parameter complexity and label corruption), described in Appendix~\ref{app:problem-diff}, we trained fully-connected networks on the FashionMNIST dataset. In all cases, the networks used ReLU activations and were trained with batch sizes of at most 512 (depending on dataset size), and for 200 epochs. Learning rates were fixed throughout training. When varying the dataset size, we trained models on random subsets of FashionMNIST with sizes in the set $\{10, 30, 100, 300, 1000, 3000, 10000, 30000, 60000\}$. We evaluated networks trained with learning rates in the set $\{0.03, 0.1, 0.3, 1.0\}$. For the experiments with varying levels of label corruption, we trained fully-connected networks with 2 hidden-layers each of width 1024 and without batch normalization.
\subsection{Two-layer linear models and the MLI property}
\label{app:two_layer_linear}
In this section, we apply Theorem~\ref{thm:small_gauss_mse_mono} to two-layer linear models. In particular, we prove sufficient conditions on any two-layer linear model to satisfy the MLI property and then prove that under certain assumptions the MLI property holds almost surely.
Our focus is on two-layer linear models, of the form $f(\bx) = VW \bx$, for $W \in \bbR^{k \times d}$ and $V \in \bbR^{m \times k}$. We consider optimizing these models with respect to the mean squared error.
\[\calL(X, Y; V, W) = \frac{1}{2n}\Vert VW X - Y \Vert_2^2,\]
where $X \in \bbR^{d \times n}$ and $Y \in \bbR^{m \times n}$. Note that this model also captures the linear autoencoder, when we set $X = Y$ with $m=d$.
We consider learning in the student-teacher setting, where the labels $Y$ are provided by a two-layer linear model with $k$ hidden units. This allows the application of Theorem~\ref{thm:small_gauss_mse_mono}, as the interpolation trajectory can reach the minimum of the objective. However, outside of this realizable setting we can still apply Theorem~\ref{thm:small_gauss_mse_mono} to the surrogate objective with $Y$ replaced by the minimum achievable target $\hat{Y}$ --- this objective aligns with the original at the global minimum.
Now consider a linear interpolation over initial parameters $V_0$, $W_0$ and final parameters $V_T$, $W_T$, denoted,
\[\bz(\alpha) = \left(V_0 + \alpha(V_T - V_0)\right)\left( W_0 + \alpha(W_T - W_0)\right)X.\]
Going forwards, we write $D_1 = (V_T - V_0)$ and $D_2 = (W_T - W_0)$. We first observe that the tangent to this curve is a linear function of $\alpha$:
\begin{equation}
\bz'(\alpha) = \left(D_1 W_0 + V_0 D_2 + 2\alpha D_1 D_2 \right)X.
\end{equation}
The Gauss length of the interpolated trajectory is given by the length of the projection of the tangent vectors onto the projective space, in this case the sphere with antipodal points identified. Immediately, we note that this line projects onto the sphere as an arc, with end points given by the projection of $\bz'(0)$ and $\bz'(1)$. It follows that the Gauss length is less than $\pi/2$ exactly when the (vectorized) inner product of the two endpoints is positive (implying the angle between them is at most $\pi / 2$). Furthermore, the Gauss length of the interpolation path is at most $\pi$ for any initial-final parameter pair.
The two endpoints are given by:
\begin{equation}
\bz'(0) = \left(D_1 W_0 + V_0 D_2\right)X \:\:\:\:\textrm{ and }\:\:\:\: \bz'(1) = \left(D_1 W_T + V_T D_2\right)X
\end{equation}
Recall the Kronecker product identity $\vecop{AX} = (I \otimes A)\vecop{X}$, where $\vecop{\cdot}$ indicates column-major vectorization. Then we have,
\begin{align*}
\langle\bz'(0) , \bz'(1)\rangle &= \vecop{\left(D_1 W_0 + V_0 D_2\right)X}^\top \vecop{\left(D_1 W_T + V_T D_2\right)X}\\
&= \vecop{X}^\top \left(I \otimes (D_1 W_0 + V_0 D_2)^\top\right)\left(I \otimes (D_1 W_T + V_T D_2)\right) \vecop{X}\\
&= \vecop{X}^\top \left(I \otimes \left((D_1 W_0 + V_0 D_2)^\top(D_1 W_T + V_T D_2)\right) \right)\vecop{X}
\end{align*}
Now, noting that $I \otimes A$ has the same eigenvalues as $A$ (with increased multiplicity), we have $\langle \bz'(0), \bz'(1) \rangle > 0$ for all $X$ if and only if all eigenvalues of $(D_1 W_0 + V_0 D_2)^\top(D_1 W_T + V_T D_2)$ are positive.
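This condition is straightforward to check numerically for a given initial/final parameter pair. The sketch below does so for randomly drawn matrices; in practice $W_T$ and $V_T$ would come from training, and the dimensions here are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 20, 10, 15

# Initial and "final" parameters (random placeholders for a trained pair).
W0 = rng.standard_normal((k, d)) / np.sqrt(d)
V0 = rng.standard_normal((m, k)) / np.sqrt(k)
WT = rng.standard_normal((k, d)) / np.sqrt(d)
VT = rng.standard_normal((m, k)) / np.sqrt(k)
D1, D2 = VT - V0, WT - W0

# Gauss length of the interpolation is below pi/2 for every input X
# when all eigenvalues of this matrix are positive.
M = (D1 @ W0 + V0 @ D2).T @ (D1 @ WT + VT @ D2)
eigs = np.linalg.eigvals(M)
print("all eigenvalues positive:", bool(np.all(eigs.real > 0)))
\end{verbatim}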
\paragraph{Proving that the MLI property holds with probability 1.} Under the \emph{tabula rasa} assumptions from \citet{saxe2019mathematical} we can prove that the MLI property holds almost surely. The assumptions that underly this setting are as follows.
\begin{enumerate}
\item The inputs are whitened ($\frac{1}{n}XX^\top = I$).
\item Initialization is balanced ($V_0 = W_0^\top$).
\item The learning rate of gradient descent is sufficiently small relative to the largest singular value of the input-output correlation matrix $\frac{1}{n} YX^\top = USR^\top$.
\end{enumerate}
Under these assumptions, \citet{saxe2019mathematical} prove that,
\[W(t) = Q \sqrt{A(t)} U^\top \:\textrm{ and }\: V(t) = U \sqrt{A(t)} Q^{-1},\]
for some invertible matrix $Q \in \bbR^{k\times k}$.
Under these dynamics, we have,
\[D_1 = U (\sqrt{A(t)} - \sqrt{A(0)}) Q^{-1} \textrm{ and } D_2~=~Q~(\sqrt{A(t)}~-~\sqrt{A(0)})~U^\top.\] Thus,
\begin{align*}
(D_1 W_0 + V_0 D_2)^\top(D_1 &W_T + V_T D_2) \\&= 4U\left(\sqrt{A(t)} - \sqrt{A(0)}\right)\sqrt{A(0)}U^\top U \left(\sqrt{A(t)} - \sqrt{A(0)}\right)\sqrt{A(t)} U^\top\\
&= 4U \sqrt{A(0)A(t)}\left(\sqrt{A(t)} - \sqrt{A(0)}\right)^2 U^\top
\end{align*}
This matrix is positive definite, and thus has positive eigenvalues. Therefore, the found solution will satisfy the MLI property.
\section{Theoretical Gauss Length Analysis}
\label{app:gauss-length}
In this section, we provide an analysis of the MLI property via the Gauss Length. In particular, we prove Theorem~\ref{thm:small_gauss_mse_mono}, which states that if the logit interpolation of a network (from initialization to optimum) has small Gauss length then it must satisfy the MLI property. Using Theorem~\ref{thm:small_gauss_mse_mono}, we provide sufficient conditions for the MLI property to hold for two-layer linear models, and prove that, for a class of these models satisfying some standard assumptions, the MLI property holds almost surely.
Let's first recall the definition of the Gauss length.
\gausslength*
Explicitly, we have,
\[\langle \partial_\alpha \hat{\bv}(\alpha), \partial_\alpha \hat{\bv}(\alpha)\rangle = \frac{(\bv \cdot \bv) (\ba \cdot \ba) - (\ba \cdot \bv)^2}{(\bv \cdot \bv)^2} = \kappa(\alpha)^2 (\bv \cdot \bv),\]
where $\ba = \frac{\partial \bv}{\partial \alpha}$ and $\kappa$ denotes the curvature of $\bz$. Theorem~\ref{thm:small_gauss_mse_mono} is reproduced below for convenience.
\smallgaussmono*
To prove this result, we will require the following Lemma.
\begin{lemma}\label{lemma:wide_tangents}
Let $\bx^* \in \bbR^d$. Consider a smooth curve $\bz(t) \in \bbR^d$ for $t \in [0,1]$ with $\Vert \bz(0) - \bx^* \Vert > 0$ and $\bz(1) = \bx^*$. If there exists $b \in [0,1)$ with,
\[\Vert \bz(b) - \bx^* \Vert_2 > \Vert \bz(0) - \bx^* \Vert_2,\]
then there exists $t_1 \in [0,b)$ and $t_2 \in (b,1)$ such that $\langle \dot{\bz}(t_1), \dot{\bz}(t_2) \rangle \leq 0$.
\end{lemma}
\begin{proof}
We prove the contrapositive statement: if, for all $t_1 \in [0, b)$ and $t_2 \in (b, 1)$, we have $\langle \dot{\bz}(t_1), \dot{\bz}(t_2) \rangle > 0$, then $\Vert \bz(b) - \bx^* \Vert_2 \leq \Vert \bz(0) - \bx^* \Vert_2$.
By the fundamental theorem of calculus, we have,
\begin{align*}
0 < \int_{b}^{1} \int_{0}^{b} \langle \dot{\bz}(t_1), \dot{\bz}(t_2) \rangle dt_1 dt_2 &= \langle \bz(b) - \bz(0), \bx^* - \bz(b) \rangle\\
&= \langle \bx^* - \bz(0) + \bz(b) - \bx^*, \bx^* - \bz(b) \rangle\\
&= \langle \bx^* - \bz(0), \bx^* - \bz(b) \rangle - \Vert \bx^* - \bz(b) \Vert_2^2.
\end{align*}
Now, notice that as $\Vert \bx^* - \bz(b) \Vert_2^2 \geq 0$, we must have $\langle \bx^* - \bz(0), \bx^* - \bz(b) \rangle > 0$. Thus, by applying the Cauchy-Schwarz inequality,
\begin{align*}
\langle \bx^* - \bz(0), \bx^* - \bz(b) \rangle - \Vert \bx^* - \bz(b) \Vert_2^2 &\leq \Vert \bx^* - \bz(0) \Vert_2 \Vert \bx^* - \bz(b) \Vert_2 - \Vert \bx^* - \bz(b) \Vert_2^2,\\
&= \Vert \bx^* - \bz(b) \Vert_2 \left(\Vert \bx^* - \bz(0) \Vert_2 - \Vert \bx^* - \bz(b) \Vert_2\right).
\end{align*}
It follows immediately that we must have,
\[\Vert \bz(b) - \bx^* \Vert_2 \leq \Vert \bz(0) - \bx^* \Vert_2,\]
as required.
\end{proof}
With this result, we proceed with the proof of Theorem~\ref{thm:small_gauss_mse_mono}.
\begin{proof}
We prove this theorem by considering the contrapositive statement: if there exist $a < b \in (0,1)$ such that $f(\bz(a)) < f(\bz(b))$, then the Gauss length is at least $\pi / 2$.
Given such a pair $(a, b)$, we consider the restriction of $\bz$ to $[a, 1)$. Since the loss at $\bz(b)$ exceeds the loss at $\bz(a)$, we have $\Vert \bz(b) - \bx^* \Vert_2 > \Vert \bz(a) - \bx^* \Vert_2$, and so by Lemma~\ref{lemma:wide_tangents} there exist $t_1$ and $t_2$ such that $\langle \dot{\bz}(t_1), \dot{\bz}(t_2) \rangle \leq 0$.
Therefore, the normalized tangents also satisfy $\langle \hat{\bv}(t_1), \hat{\bv}(t_2) \rangle \leq 0$. Thus, the angle between the two normalized tangents (considered in the plane containing these two points and $\bx^*$) is at least $\pi / 2$. Therefore, the Gauss length of the curve on $(a, 1)$ must be at least $\pi / 2$ (with the minimum Gauss length path given by the shortest path on the projective plane connecting $\hat{\bv}(t_1)$ and $\hat{\bv}(t_2)$).
\end{proof}
Finally, we note here that the converse of Theorem~\ref{thm:small_gauss_mse_mono} does not hold. For example, one may define a curve that spirals towards the minimum: the loss along such a curve is monotonically decreasing, but the curve can have arbitrarily large Gauss length.
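In practice, the Gauss length of a discretized interpolation path can be estimated by summing the angles between successive normalized tangent vectors. The sketch below is one such estimator and is illustrative rather than the exact routine used for our figures (in particular, it works on the sphere and does not identify antipodal directions).
\begin{verbatim}
import numpy as np

def gauss_length(path):
    # Estimate the Gauss length of a discretized curve `path` (shape [T, d]):
    # the length of the curve traced by its normalized tangent vectors.
    tangents = np.diff(path, axis=0)
    norms = np.linalg.norm(tangents, axis=1, keepdims=True)
    unit = tangents / np.clip(norms, 1e-12, None)
    cos = np.clip(np.sum(unit[:-1] * unit[1:], axis=1), -1.0, 1.0)
    return float(np.sum(np.arccos(cos)))  # sum of turning angles

# Example: a straight line has Gauss length ~0; a half circle has ~pi.
t = np.linspace(0, 1, 200)[:, None]
line = np.hstack([t, 2 * t])
half_circle = np.hstack([np.cos(np.pi * t), np.sin(np.pi * t)])
print(gauss_length(line), gauss_length(half_circle))
\end{verbatim}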
\input{appendix/gauss_length_lae}
\section{Conclusion}
\label{sec:conclusion}
\citet{goodfellow2014qualitatively} first showed that linear interpolation between initial and final network parameters monotonically decreases the training loss. In this work, we provided the first evidence that this so-called Monotonic Linear Interpolation (MLI) is not a stable property of neural network training. In doing so, we provided a deeper theoretical understanding of the MLI property and of properties of the loss landscape in general. Our empirical investigation of the MLI property explored variations in datasets, architecture, optimization, and other training mechanisms. We identified several mechanisms that systematically produce trained networks violating the MLI property, and connected these mechanisms to our theoretical explanations of the MLI property. Additional results indicate that the MLI property is not unique to the initialization$\to$solution pair produced by training, but rather is a global property of the loss landscape connecting arbitrary initialization$\to$solution pairs. The empirical and theoretical analysis we presented highlights the intriguing properties of neural network loss landscapes.
\section{Exploring \& Explaining the MLI Property}
\label{section:experiments}
In this section, we present our empirical investigation of the following questions: 1) How persistent is the MLI property? 2) Why does the MLI property hold? 3) What does the MLI property tell us about the loss landscape of neural networks?
For all experiments, unless specified otherwise, we discretize $\alpha$ in the interval $[0, 1]$ using 50 uniform steps. Here we report statistics from the training set throughout but note that the same observations hold for held-out datasets. Many additional results can be found in Appendix~\ref{app:experiments}.
\paragraph{A note on batch normalization.} We experiment with networks that use batch normalization during training. These networks require additional care when interpolating network parameters as the running statistics will not align with the activation statistics during interpolation. Therefore, we opt to reset and \emph{warm up} the running statistics for each interpolated set of parameters. This warm-up consists of computing the activation statistics over an epoch of the training data, meaning that each interpolation curve requires an additional 50 epochs (the number of discretizations of $\alpha$) of data consumption to get accurate loss/accuracy estimates. Note that the learned affine transformation is interpolated as usual.
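A minimal sketch of this warm-up procedure is shown below: the running statistics are reset and then re-estimated by forward passes over the training data in training mode, without any parameter updates. The small model and random data loader are placeholders so the snippet is self-contained; they are not the networks used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16),
                      nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(16, 10))
loader = DataLoader(TensorDataset(torch.randn(512, 3, 32, 32),
                                  torch.randint(0, 10, (512,))), batch_size=128)

def warm_up_batchnorm(model, loader):
    # Reset batch norm running statistics and re-estimate them over one epoch.
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()
    model.train()              # batch norm updates running stats in train mode
    with torch.no_grad():      # statistics only, no parameter updates
        for x, _ in loader:
            model(x)
    model.eval()

warm_up_batchnorm(model, loader)
\end{verbatim}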
\paragraph{Experiment settings.} We summarize the main settings here with full details of our experimental procedure given in Appendix~\ref{app:exp_details}. We trained neural networks for reconstruction, classification, and language modeling. For the reconstruction tasks, we trained fully-connected deep autoencoders on MNIST~\citep{lecun2010mnist}. For the classification tasks, we trained networks on MNIST, Fashion-MNIST~\citep{xiao2017fashion}, CIFAR-10, and CIFAR-100~\citep{krizhevsky2009learning}. On these datasets, we explored fully-connected networks, convolutional networks, and residual architectures \citep{he2016deep}. In the above cases, we provide substantial exploration over varying architectures and optimization. We also provide a short study on the language modeling setting by training RoBERTa~\citep{liu2019roberta} on the Esperanto~\citep{conneau2019unsupervised} dataset. There, we verify the MLI property and visualize the loss landscape.
\input{sections/experiments/how}
\input{sections/experiments/why}
\input{sections/experiments/what}
\subsection{How persistent is the MLI property?}
\label{sec:exp:how_persistent}
We first investigate the persistence of the MLI property. \citet{goodfellow2014qualitatively} showed that the MLI property persists in classification and language modeling tasks (with LSTMs \citep{hochreiter1997long}) when trained with SGD. However, several modern advances in neural network training remain unaddressed and the limits of the MLI property have not been characterized. We provide a secondary investigation of the MLI property on reconstruction, classification, and language modelling tasks using modern architectures and methods.
In summary, we found that the MLI property is persistent over most standard neural network training knobs, including (but not limited to): layer width, depth, activation function, initialization method and regularization. However, there were three mechanisms through which we regularly observed the failure of the MLI property: the use of large learning rates, the use of adaptive optimizers such as Adam~\citep{kingma2014adam}, and the use of batch normalization~\citep{ioffe2015batch}. For the remainder of this section, we focus on the effect of these mechanisms but refer readers to Appendix~\ref{app:experiments} for a wider view of our study. We defer further analysis of explanations for the MLI property to Section~\ref{sec:exp:why_mli}.
\paragraph{Using large learning rates.} We found throughout that large learning rates were necessary to train networks that violated the MLI property. However, large learning rates alone were not always sufficient. In Table~\ref{tab:mnist_lr}, we show the proportion of networks with non-monotonic interpolations over varying learning rate (including only those models that achieved better than 0.1 training loss). Models trained with SGD using smaller learning rates always exhibited the MLI property. On the other hand, models trained with SGD with larger learning rates often violated the MLI property. For example, 71\% of the configurations with a learning rate of 1.0 were found to be non-monotonic. One hypothesis attributes this behaviour to the so-called catapult phase \citep{lewkowycz2020large, Jastrzebski2020The}, where large learning rates encourage the parameters to overcome a barrier in the loss landscape. Additional results on the effect of using larger learning rates can be found in Appendix~\ref{app:experiments_lr}.
\begin{table*}[]
\centering
\small
\begin{tabular}{|l|r|l l l l l l l l|}\hline
& LR: & 0.001 & 0.003 & 0.01 & 0.03 & 0.1 & 0.3 & 1.0 & 3.0\\ \hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{SGD}} & BN & 0.00 (20) & 0.00 (24) & 0.00 (24) & 0.00 (24) & 0.00 (24) & 0.17 (24) & 0.83 (24) & 1.00 (16)\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& No BN & 0.00 (4) & 0.00 (8) & 0.00 (12) & 0.00 (20) & 0.20 (20) & 0.00 (12) & 0.00 (4) & 0.00 (4)\rule[-1.2ex]{0pt}{0pt}\\ \hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{Adam}} & BN & 0.17 (24) & 0.68 (22) & 0.83 (24) & 1.00 (24) & 1.00 (16) & 1.00 (16) & 1.00 (4) & -\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& No BN & 0.00 (24) & 0.20 (20) & 0.00 (12) & 0.00 (4) & - & - & - & -\rule[-1.2ex]{0pt}{0pt}\\ \hline
\end{tabular}
\caption{Proportion of trained MNIST \& Fashion-MNIST classifiers (achieving better than 0.1 training loss) that had non-monotonic interpolations from initialization to final solution. The total number of runs with less than 0.1 training loss is displayed in parentheses next to the proportion. A dash indicates that no networks achieved better than 0.1 training loss.}
\label{tab:mnist_lr}
\end{table*}
\subsubsection{The effect of adaptive optimizers}
Prior work has only investigated the MLI property when training with SGD. To address this gap, we trained a wide variety of networks with adaptive optimizers (RMSProp~\citep{hinton2012neural}, Adam~\citep{kingma2014adam}, and K-FAC~\citep{martens2015optimizing}). Across all settings, we found that adaptive optimizers with large learning rates frequently led to models violating the MLI property.
\paragraph{MNIST autoencoders.} For image reconstruction, we evaluated the MLI property for deep fully-connected autoencoders trained on MNIST. We trained autoencoders with SGD and Adam, with varying learning rates and hidden layer sizes.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/mnist_ae/ae_mnist_adam_sgd.pdf}
\vspace{-0.6cm}
\caption{Training loss over linear interpolation of deep autoencoders trained on MNIST using SGD and Adam. Each interpolation line is for a training configuration with different hyperparameters (achieving better than 30 training loss).}
\label{fig:ae_interpolations}
\vspace{-0.5cm}
\end{figure}
In Figure~\ref{fig:ae_interpolations}, we show the training loss over the interpolated path for autoencoders with final loss (MSE) lower than 30. The majority of the networks trained with SGD retained the MLI property (with few failures at large learning rates). However, when trained with the Adam optimizer, a larger proportion of converged networks exhibited non-monotonic interpolations.
\paragraph{MNIST \& Fashion-MNIST classifiers.} On the MNIST and Fashion-MNIST datasets, we explored varying dataset size, network size (depth/width of hidden layers), activation function, choice of optimizer, optimization hyperparameters, initialization methods, and the use of batch normalization. In Table~\ref{tab:mnist_lr}, we compare two-layer networks trained with SGD and Adam. Models trained with SGD typically retained the MLI property but those trained with Adam frequently did not.
In Appendix~\ref{app:experiments_opt}, we show additional results for models trained with RMSProp and K-FAC~\citep{martens2015optimizing} (whose behaviour is qualitatively close to Adam) along with the interpolated loss curves.
\paragraph{CIFAR-10 \& CIFAR-100 classifiers.} On CIFAR-10 and CIFAR-100 datasets, we trained two-layer convolutional neural networks (SimpleCNN), LeNet~\citep{lecun1989backpropagation}, AlexNet~\citep{krizhevsky2012imagenet}, VGG16, VGG19~\citep{simonyan2014very}, and ResNets~\citep{he2016deep} with different choices of optimizer and learning rates. In Figure~\ref{fig:cnn_vary_opt}, we show a broad overview of the interpolation paths for different architectures and optimizers. Across all models, the average $\min \Delta$ was 0.016 and 0.626 for SGD and Adam respectively. Overall, Adam-trained models violated the MLI property $3.2$ times more often than SGD.
\subsubsection{The effect of batch normalization}\label{sec:exp:how_persistent:bn}
Batch normalization's invention and subsequent ubiquity postdate the initial investigation of the MLI property. Even now, the relationship between the MLI property and the use of batch normalization has not been investigated. We provide the first such study in this section. We found that the use of batch normalization greatly increased the rate at which trained networks failed to satisfy the MLI property.
\paragraph{MNIST \& Fashion-MNIST classifiers.} Table~\ref{tab:mnist_lr} shows the effect of batch normalization on the MLI property for fully connected classifiers trained on MNIST \& Fashion-MNIST. The networks trained with batch normalization failed to satisfy the MLI property more frequently than those without. This is more pronounced with large learning rates and with Adam.
\paragraph{CIFAR-10 \& CIFAR-100 classifiers.} Next, we trained ResNet models on CIFAR-10 \& CIFAR-100 classification tasks. We evaluated ResNet-\{20,32,44,56\} trained with Adam and SGD and with varying learning rates. We also varied the distribution over initial parameters and whether or not batch normalization was applied. The results for CIFAR-10 are displayed in Table~\ref{tab:cifar10_resnets} (CIFAR-100 results are similar, and are presented in Appendix~\ref{app:experiments}). The column headers, ``BN'' and ``NBN'', indicate batch normalization and no batch normalization respectively. The suffixes ``I'' and ``F'' indicate two alternative initialization schemes, block-identity initialization~\citep{goyal2017accurate} and Fixup initialization~\citep{zhang2019fixup}. For each configuration, we report the percentage of models violating the MLI property and the average minimum $\Delta$ such that the model is $\Delta$-monotonic (conditioning on $\Delta > 0$). Batch normalization led to significantly more networks with non-monotonic interpolations. We also observed that the initialization of the residual blocks plays an important role in shaping the loss landscape.
\begin{table}[]
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{|c|c|c|c|c|c|}\hline
& & BN & BN-I & NBN-I & NBN-F\\\hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{ SGD }} & \% (total) & 0.54 (26) & 0.00 (26) & 0.00 (23) & 0.11 (27)\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& $\min \Delta$ & 0.794 & 0.000 & 0.000 & 0.076 \rule[-1.2ex]{0pt}{0pt}\\\hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{Adam}} & \% (total) & 0.77 (22) & 0.27 (30) & 0.20 (20) & 0.04 (23)\rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\
& $\min \Delta$ & 0.351 & 0.054 & 0.033 & 0.332 \rule[-1.2ex]{0pt}{0pt}\\\hline
\end{tabular}
\end{adjustbox}
\caption{Evaluation of the effect of batch normalization, initialization, and choice of optimizer for residual networks trained on CIFAR-10 (achieving better than 1.0 training loss). We display the proportion of networks with non-monotonic interpolation and the average $\min \Delta$ such that the network is $\Delta$-monotonic over varying training settings. A full explanation of the table is given in the main text.}
\label{tab:cifar10_resnets}
\vspace{-0.3cm}
\end{table}
\subsection{What does MLI say about loss landscapes?}
\label{sec:exp:what_landscape}
Thus far, we focused on the monotonicity of paths connecting the initialization and final network solution. In this section, we ask: are (non-)monotonic interpolations unique to the initialization and final solution pair?
\begin{figure}
\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_interp_nway/init_nway_interp.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_interp_nway/init_opt_nway_interp.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/mnist_interp_nway/opt_nway_interp.pdf}
\end{minipage}\vspace{-0.2cm}
\caption{\footnotesize Linear interpolation for 10 FashionMNIST classifiers with less than 0.01 final loss. Left: Interpolating between all pairs of initializations. Middle: Interpolating from all initializations to all optima. Right: Interpolating between all pairs of optima.}
\label{fig:li_pairings}\vspace{-0.65cm}
\end{figure}
To this end, we evaluated linear interpolations between learned network parameters and unrelated random initializations (Figure~\ref{fig:rand_init_to_optimum}). For a fully-connected MNIST classifier and a ResNet-20 trained on CIFAR-10, we found that random initializations display the same interpolation behaviour as the original initialization-solution pair. This suggests that the MLI property is not tied to a particular pair of parameters but rather is a global property of the loss landscape. We also explored linear interpolations between pairs of initializations, initialization to optima pairs, and pairs of optima in Figure~\ref{fig:li_pairings}. No barriers were observed between the pairs of initializations or the initialization$\rightarrow$optimum pairs, but barriers are present between the optima. This highlights the rich structure present in the loss landscape of these models and aligns well with the qualitative predictions of~\citet{NEURIPS2019_48042b1d}.
Finally, we provide visualizations of the loss landscape via 2D projections of the parameter space. While low-dimensional projections of high-dimensional spaces are often misleading, in the case of linear interpolations, the entire path lies in the projected plane. Therefore, these visualizations give us valuable insight into connectivity in the loss landscape for multiple initialization $\to$ final solution paths.
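Concretely, given three sets of (flattened) parameters --- for example an initialization and two optima --- the plane used for such visualizations can be built by orthonormalizing the two difference vectors and evaluating the loss on a grid in the resulting coordinates. The sketch below outlines this construction with a toy quadratic loss; names and scales are illustrative, not the exact routine used for our plots.
\begin{verbatim}
import numpy as np

def plane_basis(p0, p1, p2):
    # Orthonormal basis (u, v) for the plane through p0, p1, and p2.
    u = p1 - p0
    u = u / np.linalg.norm(u)
    v = (p2 - p0) - np.dot(p2 - p0, u) * u   # Gram-Schmidt step
    v = v / np.linalg.norm(v)
    return u, v

def loss_grid(loss_fn, p0, u, v, radius, steps=25):
    # Evaluate loss_fn on a (steps x steps) grid in the plane p0 + a*u + b*v.
    coords = np.linspace(-radius, radius, steps)
    return np.array([[loss_fn(p0 + a * u + b * v) for a in coords]
                     for b in coords])

rng = np.random.default_rng(0)
quad = lambda p: 0.5 * np.dot(p, p)          # toy stand-in for the training loss
p0, p1, p2 = rng.standard_normal((3, 100))
u, v = plane_basis(p0, p1, p2)
grid = loss_grid(quad, p0, u, v, radius=np.linalg.norm(p1 - p0))
print(grid.shape)  # (25, 25)
\end{verbatim}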
In Figure~\ref{fig:cut_roberta_esperanto}, we show 2D projections of the loss landscape for RoBERTa~\citep{liu2019roberta} trained as a language model on Esperanto~\citep{conneau2019unsupervised} using the HuggingFace library~\citep{wolf-etal-2020-transformers}. We trained two models and plotted the initial points and optima for both. Both initial points are monotonically connected to both minima.
\begin{figure}[H]
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/LM/roberta_LM_esperanto_2inits_1optimum_37598351.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/LM/roberta_LM_esperanto_1init_2optima_80495789.pdf}
\end{minipage}\vspace{-0.5cm}
\caption{Two-dimensional sections of the weight space for RoBERTa trained as a language model on Esperanto. Left: the plane defined by two initializations and the optimum reached from one of them. Right: the plane defined by ``Init 1'' and two optima (with ``Init 2'' projected onto the plane).}\label{fig:cut_roberta_esperanto}
\end{figure}
\subsection{Why does MLI hold?}
\label{sec:exp:why_mli}
\begin{figure*}[!h]
\begin{minipage}{0.5\linewidth}
\centering
\ifarxiv
\includegraphics[width=\linewidth]{figures/mnist_opt/mono_vs_dis_lr2.pdf}
\else
\includegraphics[width=0.9\linewidth]{figures/mnist_opt/mono_vs_dis_lr2.pdf}
\fi
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\ifarxiv
\includegraphics[width=\linewidth]{figures/mnist_opt/mono_vs_gl_lr.pdf}
\else
\includegraphics[width=0.9\linewidth]{figures/mnist_opt/mono_vs_gl_lr.pdf}
\fi
\end{minipage}
\caption{For each MNIST \& Fashion-MNIST classifier, we compute the minimum $\Delta$ such that the interpolated loss is $\Delta$-monotonic. We plot models trained with a learning rate of 0.1 and 0.0001 in the top and bottom rows respectively. On the left, we compare the distance moved in the weight space. On the right, we compare the Gauss length of the interpolated network outputs. \textbf{\textcolor{blue}{Blue}} points represent networks where the MLI property holds and \textbf{\textcolor{orange}{orange}} points are networks where the MLI property fails.}
\label{fig:nm_gl_ds}
\end{figure*}
In Section~\ref{sec:mli_property}, we discussed the parameter- and function-space perspectives of the MLI property. In our experiments, we explore these two perspectives on reconstruction and classification tasks. We computed the average Gauss length of the logit interpolations and the weight distance travelled. In both cases, these measures are predictive of MLI in practice, even for values exceeding the limits of our theory.
In Appendix~\ref{app:experiments_gen}, we provide the full set of results for all settings we explored. Additionally, we provide an investigation of the relationship between the MLI property and generalization. In summary, we did not find a clear relationship between the success of the MLI property and the generalization ability of the neural network.
\subsubsection{Weight distance vs. monotonicity}
Throughout our experiments, we found that weight distance was negatively correlated with the monotonicity of the interpolated network. In Figure~\ref{fig:nm_gl_ds} (left), we show the relationship between the (normalized) distance travelled in weight space and the minimum $\Delta$ such that fully-connected classifiers are $\Delta$-monotonic.
First, we note that larger learning rates encourage greater movement in weight space --- a finding that also extends to batch normalization and the use of adaptive optimizers. Second, we observed that the networks that travelled short distances during optimization consistently satisfied the MLI property. Conversely, networks with larger distances travelled in weight space were more likely to exhibit non-monotonic loss interpolations. In Appendix~\ref{app:experiments_weight_dis}, we show similar results for the autoencoders, CIFAR-10 \& CIFAR-100 classifiers, and comparisons over batch normalization and adaptive optimizers.
\begin{figure*}[!h]
\begin{minipage}{0.5\linewidth}
\centering
\ifarxiv
\includegraphics[width=\linewidth]{figures/mnist_interp_nway/rand_init_compare.pdf}
\else
\includegraphics[width=0.9\linewidth]{figures/mnist_interp_nway/rand_init_compare.pdf}
\fi
\end{minipage}\hfill%
\begin{minipage}{0.5\linewidth}
\centering
\ifarxiv
\includegraphics[width=\linewidth]{figures/CIFAR_search/CIFAR10/rand_init_compare.pdf}
\else
\includegraphics[width=0.9\linewidth]{figures/CIFAR_search/CIFAR10/rand_init_compare.pdf}
\fi
\end{minipage}\vspace{-0.5cm}
\caption{Classifier interpolation loss on the training set between 15 different random initializations and an optimum. The top row shows interpolation towards a final solution that is monotonic with its original initialization. The bottom row shows this interpolation for a non-monotonic original pair. For the random initializations, the mean loss is shown with $\pm 1$ standard deviation as the filled region.}
\label{fig:rand_init_to_optimum}
\vspace{-0.3cm}
\end{figure*}
\ifarxiv
\begin{wrapfigure}[13]{R}{0.5\linewidth}
\centering
\vspace{-0.5cm}
\includegraphics[width=\linewidth]{figures/mnist_opt/gl_distance.pdf}\vspace{-0.8cm}
\caption{Power law relationship between Gauss length and weight distance travelled for MLP \& Fashion-MNIST experiments. ($R^2 = 0.616$)}
\label{fig:mlp_power_law}
\vspace{-1cm}
\end{wrapfigure}
\else
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{figures/mnist_opt/gl_distance.pdf}\vspace{-0.5cm}
\caption{Power law relationship between Gauss length and weight distance travelled for MLP \& Fashion-MNIST experiments. ($R^2 = 0.616$)}
\label{fig:mlp_power_law}
\vspace{-0.4cm}
\end{figure}
\fi
\subsubsection{Gauss length vs. monotonicity}
We also observed a negative correlation between the Gauss length of the logit interpolations and the minimum $\Delta$ such that the loss interpolation is $\Delta$-monotonic. In Figure~\ref{fig:nm_gl_ds} (right), we make this comparison for classifiers trained on MNIST \& Fashion-MNIST. As our analysis predicts, small Gauss lengths lead to monotonic interpolations. And beyond the strict limits of our theoretical analysis, we find that as the Gauss length increases, the non-monotonicity also increases.
We also observed that larger learning rates lead to much larger Gauss lengths. As with the weight distance, this finding extends to batch normalization and the use of adaptive optimizers too (see Appendix~\ref{app:experiments_gl}). In Appendix~\ref{app:experiments_opt_abl}, we conduct an ablation study to investigate the relationship between Gauss length and the choice of optimizer by changing the optimizer in the middle of training (SGD $\to$ Adam and Adam $\to$ SGD). Switching to Adam at any point during training leads to large Gauss length and weight distance without a significant spike in the training loss --- with little variation due to the time of the optimizer switch.
\subsubsection{Gauss length vs weight distance}
When the distance moved in weight space is small, we would expect a small Gauss length as a linearization of the network provides a good approximation. However, it is not obvious what relationship (if any) should be expected more generally. Surprisingly, we consistently observed a power-law relationship between the average Gauss length and the distance moved in weight space (Figure~\ref{fig:mlp_power_law}). We observed this relationship across all of the experimental settings that we explored. Full results are presented in Appendix~\ref{app:experiments_gl_wd}.
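For completeness, the power-law fit reported above can be reproduced by an ordinary least-squares fit in log-log space; the sketch below is a generic illustration and makes the (unverified) assumption that the reported $R^2$ refers to this log-log regression.
\begin{verbatim}
import numpy as np

def fit_power_law(weight_dist, gauss_length):
    """Fit gauss_length ~ c * weight_dist**k by linear regression in log-log space.

    Returns the exponent k, the prefactor c, and the R^2 of the log-log fit.
    """
    x, y = np.log(weight_dist), np.log(gauss_length)
    k, log_c = np.polyfit(x, y, 1)          # slope and intercept of the log-log fit
    y_hat = k * x + log_c
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return float(k), float(np.exp(log_c)), float(r2)
\end{verbatim}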
\section{Introduction}
\label{sec:intro}
A simple and lightweight method to probe neural network loss landscapes is to linearly interpolate between the parameters at initialization and the parameters found after training. More formally, consider a neural network with parameters $\btheta \in \bbR^d$ trained with respect to loss function $\calL \colon \bbR^d \rightarrow \bbR$ on a dataset $\mathcal{D}$. Let the neural network be initialized with some parameters $\btheta_0$. Then, using a gradient descent optimizer, the network converges to some final parameters $\btheta_T$. A linear path is then constructed between these two parameters denoted $\btheta_{\alpha}=(1-\alpha)\btheta_{0}+\alpha\btheta_T$. A surprising phenomenon, first observed by \citet{goodfellow2014qualitatively}, is that the function $\calL(\btheta_{\alpha})$ typically monotonically decreases on the interval $\alpha \in [0,1]$. We call this effect the \emph{Monotonic Linear Interpolation (MLI) property} of neural networks.
The MLI property is illustrated in Figure~\ref{fig:title_figure_landscape}. The interpolated path ($\btheta_\alpha$) exhibits the MLI property as the training loss monotonically decreases along this line. Even more surprising, linear interpolation between an unrelated random initialization and the same converged parameters also satisfies the MLI property.
\citet{goodfellow2014qualitatively} showed that the MLI property persists on various architectures, activation functions, and training objectives in neural network training. They conclude their study by stating that ``the reason for the success of SGD on a wide variety of tasks is now clear: these tasks are relatively easy to optimize.'' In our work, we observe that networks violating the MLI property can be produced systematically and are also trained without significant difficulty. Moreover, since the publication of their research, there have been significant developments both in terms of the neural network architectures that we train today \citep{he2016deep, vaswani2017attention,huang2017densely} and our theoretical understanding of them \citep{amari2020does, jacot2018neural, draxler2018essentially, frankle2018lottery, fort2019emergent}. Hence, with a wider lens that addresses these developments, we believe that further investigation of this phenomenon is likely to yield new insights into neural network optimization and their loss landscapes.
\begin{figure}[!t]
\center{\includegraphics[width=1.0\linewidth]
{figures/loss_landscape_compare.pdf}}
\vspace{-0.6cm}
\caption{Monotonic linear interpolation for a ResNet-20 trained on CIFAR-10 from initialization to an optimum (\textbf{\textcolor{red}{red}}) and from an unrelated initialization to the same optimum (\textbf{\textcolor{blue}{blue}}). On the left, we show a 2D slice of the loss landscape, defined by the two initializations and optimum, along with the optimization trajectory projected onto the plane (\textbf{\textcolor{orange}{orange}}). On the right, we show the interpolated loss curves, with training loss shown relative to the proportion of distance travelled to the optimum.}
\label{fig:title_figure_landscape}
\vspace{-0.6cm}
\end{figure}
We study three distinct questions surrounding the MLI property. 1) How persistent is the MLI property? 2) Why does the MLI property hold? 3) What does the MLI property tell us about the loss landscape of neural networks? To address these questions, we provide an expanded empirical and theoretical study of this phenomenon.
\begin{figure*}[!t]
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cnn_archi/cnn_simple.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cnn_archi/cnn_lenet.pdf}
\end{minipage}\hfill%
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cnn_archi/cnn_vgg19.pdf}
\end{minipage}
\begin{minipage}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/cnn_archi/cnn_resnet18.pdf}
\end{minipage}
\vspace{-0.1cm}
\caption{Training loss over the linear interpolation connecting initial and final parameters. Each curve represents a network trained on CIFAR-10 with different hyperparameter configurations (achieving at least 1.0 training loss). The MLI property holds for networks trained with SGD, but often fails for networks trained with Adam.}
\vspace{-0.4cm}
\label{fig:cnn_vary_opt}
\end{figure*}
To evaluate the persistence of the MLI property, we train neural networks with varying architectures, optimizers, datasets, initialization methods, and training mechanisms (e.g.~batch normalization~\citep{ioffe2015batch}). We find that the MLI property persists for the majority of these settings but can be consistently broken through mechanisms that encourage the weights to move far from initialization. As far as we know, ours is the first work to observe that MLI is not a stable property of the network architecture.
One hypothesis for the MLI property is that the networks are close to linear along the interpolation path. We formalize this notion using tools from differential geometry and provide sufficient conditions for neural networks trained under the MSE loss to satisfy the MLI property. In particular, we prove that if the length under the Gauss map (which we refer to as the \emph{Gauss length}) of the interpolation trajectory in function space is small, then the network is guaranteed to have the MLI property. While the converse does not hold in general, we show that this quantity is correlated with monotonicity in practice. We connect this explanation to our prior observation that large distances moved in weight space encourage non-monotonic interpolations through a surprising power-law relationship between the distance moved and the average Gauss length.
Finally, we investigate the loss landscape of the neural networks we trained by evaluating the MLI property over alternative linear paths. For example, we examine the interpolation path connecting different initializations and final parameters (as in Figure~\ref{fig:title_figure_landscape}). Surprisingly, when the MLI property holds for an initialization $\to$ final solution pair, the MLI property also holds for unrelated initializations to the same solution.
In summary, our primary contributions include:
\begin{itemize}
\item We prove a sufficient condition for neural networks minimizing MSE to satisfy the MLI property.
\item We show that the MLI property does not always hold and that we can systematically control for/against it.
\item We identify several common training mechanisms that provide this control and connect them to our novel theoretical results.
\end{itemize}
\section{The Monotonic Linear Interpolation Property}
\label{sec:mli_property}
The Monotonic Linear Interpolation (MLI) property states that when a network is randomly initialized and then trained to convergence, the linear path connecting the initialization and converged solution is monotonically decreasing in the training loss. Specifically, we say that a network has the MLI property if, for all $\alpha_1, \alpha_2 \in [0,1]$ with $\alpha_1 < \alpha_2$,
\begin{equation}
\calL(\btheta_{\alpha_1}) \geq \calL(\btheta_{\alpha_2}),\textrm{ where } \btheta_\alpha = \btheta_0 + \alpha (\btheta_T - \btheta_0).
\end{equation}
Here, $\btheta_0$ denotes the parameters at initialization and $\btheta_T$ denotes the parameters at convergence.
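For concreteness, the quantity $\calL(\btheta_\alpha)$ can be evaluated with a few lines of code. The sketch below is PyTorch-style illustrative code rather than our actual experimental pipeline; the 50-point grid and the single full-batch evaluation are arbitrary choices made for brevity.
\begin{verbatim}
import torch

def interpolate_losses(model, theta_0, theta_T, loss_fn, x, y, n_points=50):
    """Evaluate L(theta_alpha) along theta_alpha = theta_0 + alpha (theta_T - theta_0).

    theta_0, theta_T: flat parameter vectors with the same layout as
    torch.nn.utils.parameters_to_vector(model.parameters()).
    Returns one loss value per alpha on a uniform grid over [0, 1].
    """
    losses = []
    for alpha in torch.linspace(0.0, 1.0, n_points):
        theta_alpha = (1 - alpha) * theta_0 + alpha * theta_T
        torch.nn.utils.vector_to_parameters(theta_alpha, model.parameters())
        with torch.no_grad():
            losses.append(loss_fn(model(x), y).item())
    return losses
\end{verbatim}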
\subsection{$\Delta$-Monotonicity}
\citet{goodfellow2014qualitatively} found that the MLI property holds for a wide range of neural network architectures and learning problems. They provided primarily qualitative evidence of this fact by plotting $\calL(\btheta_\alpha)$ with discretizations of $[0,1]$ using varying resolutions. We instead propose a simple quantitative measure of non-monotonicity.
\begin{definition}{($\Delta$-monotonicity)}\label{def:delta_mono}
Consider a linear parameter interpolation parameterized by $\alpha, \btheta_0$, and $\btheta_T$ with corresponding loss function $\calL$. The path is $\Delta$-monotonic for $\Delta \geq 0$ if for all $\alpha_1, \alpha_2 \in [0,1]$ with $\alpha_1 < \alpha_2$, we have $\calL(\alpha_2) - \calL(\alpha_1) < \Delta$.
\end{definition}
Intuitively, the above definition states that any bump due to increasing loss over the interpolation path should have a height upper-bounded by $\Delta$. We are interested in the smallest $\Delta \geq 0$ for which this definition holds. Notably, this minimum $\Delta$ can be approximated well numerically by stepping along the interpolation path in fixed intervals to find $\alpha_1$ and $\alpha_2$ giving the largest positive gap $\calL(\alpha_2) - \calL(\alpha_1)$.
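The approximation described above amounts to a single pass over the discretized loss curve, as in the following illustrative sketch (assuming the losses have already been evaluated on an increasing grid of $\alpha$ values):
\begin{verbatim}
import numpy as np

def min_delta(losses):
    """Smallest Delta such that the discretized interpolation is Delta-monotonic.

    losses: L(theta_alpha) evaluated on an increasing grid of alpha values.
    The answer is the largest positive gap L(alpha_2) - L(alpha_1) over pairs
    alpha_1 < alpha_2, i.e. the height of the largest bump along the path.
    """
    losses = np.asarray(losses, dtype=float)
    running_min = np.minimum.accumulate(losses)   # lowest loss seen so far
    return float(max((losses - running_min).max(), 0.0))
\end{verbatim}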
\subsection{Weight-space perspective}
It is natural to attempt to reason about the MLI property in terms of the parameters of the neural network. Intuitively, the MLI property suggests that, during optimization, the parameters move into a nearby basin of low loss without encountering any high-loss barriers in their path.
We can formalize this intuition for ``Lazy Training'' \citep{chizat2018lazy}, where the weights find a minimum near their initial value. Consider the second-order Taylor series expansion about the converged minimum $\btheta^*$,
\begin{equation}
\calL(\btheta_0) \approx \calL(\btheta^*) + \frac{1}{2}(\btheta_0 - \btheta^*)^\top \nabla_{\btheta}^2 \calL(\btheta^*)(\btheta_0 - \btheta^*).
\end{equation}
If the difference between the initial and converged parameters, $\Vert \btheta_0 - \btheta^* \Vert$, is sufficiently small, then this quadratic approximation holds well throughout the linear interpolation. In this case, the linear interpolation yields a monotonic decrease in the loss (Lemma~\ref{lemma:linear_convex_mli}, Appendix~\ref{app:additional_theory}).
Experimentally, we investigate the connection between the distance moved in weight space and the monotonicity of the resulting interpolation. We find that networks that move further in weight space during training are significantly more likely to produce non-monotonic initialization$\rightarrow$optimum interpolations. Theoretically, we investigate the MLI property for wide neural networks where lazy training occurs provably \citep{lee2019wide}. In this setting, we prove that the MLI property holds with high probability for networks of sufficient width (Theorem~\ref{thm:inf_width_mli}, Appendix~\ref{app:additional_theory}).
\subsection{Function-space perspective}
We typically train neural networks with a convex loss function applied to the network's output. While the parameter space of neural networks is extremely high-dimensional and exhibits symmetries, the function space is generally simpler and easier to reason about \citep{jacot2018neural}. To that end, we let
\begin{equation}\label{eqn:logit_interpolation}
\bz(\alpha; \bx) = f(\bx; \btheta_\alpha) \in \bbR^k,~~\alpha \in [0,1]
\end{equation}
denote the \emph{logit interpolation} of a neural network $f$ evaluated on data point $\bx$ with parameters $\btheta_\alpha=\btheta_0 + \alpha(\btheta_T - \btheta_0)$.
One special case that guarantees the MLI property is that of linear functions, $f(\bx; \btheta) = \btheta^\top \bx$ (with $\calL(\btheta_0) > \calL(\btheta_T)$). In this case, the logit interpolations are also linear and, under a convex loss function, $f$ will satisfy the MLI property \citep{boyd2004convex}. In practice, we work with non-linear neural networks that have non-linear logit interpolations. However, we observed that the logit interpolations are often close to linear (in a sense that we formalize soon) and that this coincides with the MLI property (Figure~\ref{fig:784_logit_viz}). Therefore, we raise the question: Can we guarantee the MLI property for logit interpolations that are \emph{close} to linear?
\begin{figure}[!hpt]
\centering
\ifarxiv
\includegraphics[width=0.8\linewidth]{figures/784_search/2D_PCA_logit_paths.pdf}
\else
\includegraphics[width=\linewidth]{figures/784_search/2D_PCA_logit_paths.pdf}
\fi
\vspace{-0.5cm}
\caption{2D projections (computed with PCA) of logit interpolations for fully-connected networks trained on Fashion-MNIST. Both networks achieve near-perfect final training accuracy. However, the first one (left) interpolates monotonically while the second one (right) does not. The only difference between these two networks is that the second was trained using batch normalization while the first was not.}
\vspace{-0.5cm}
\label{fig:784_logit_viz}
\end{figure}
\paragraph{Measuring logit linearity.} There is no standard method to measure the linearity of a curve, but there are several tools from differential geometry that are applicable. In this work, we focus on the length under the Gauss map, which we refer to as the \emph{Gauss length}, a unit-free measure that is related to the curvature. In the case of curves, the Gauss length is computed by mapping the normalized tangent vectors of the curve onto the corresponding projective space (through the so-called Gauss map), and then measuring the length of the curve in this space. This is described formally in the following definition.
\begin{restatable}[Gauss length]{definition}{gausslength}\label{def:gausslength}
Given a curve $\bz: (0,1) \rightarrow \bbR^d$. Let $\hat{\bv}(\alpha) = \frac{\partial\bz}{\partial\alpha} / \Vert\frac{\partial\bz}{\partial\alpha}\Vert_2$ denote the normalized tangent vectors. The length under the Gauss map (Gauss length) is given by:
\[\int_{0}^{1} \sqrt{\langle \partial_\alpha \hat{\bv}(\alpha), \partial_\alpha \hat{\bv}(\alpha) \rangle} d\alpha,\]
where $\partial_\alpha \hat{\bv}(\alpha)$ denotes the pushforward of the Gauss map acting on the acceleration vector.
\end{restatable}
We refer readers to \citet{lee2006riemannian} or \citet{poole2016exponential} for a more thorough introduction to these concepts. Intuitively, the Gauss length measures how much the curve bends along its path, with a Gauss length of zero indicating a linear path. In Theorem~\ref{thm:small_gauss_mse_mono}, we prove that a sufficiently small Gauss length guarantees the MLI property for MSE loss.
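In practice, we estimate the Gauss length from a discrete set of points along the logit interpolation. The following sketch (illustrative, not taken from our experimental code) accumulates the angles between consecutive finite-difference tangent vectors, which approximates the integral in Definition~\ref{def:gausslength}:
\begin{verbatim}
import numpy as np

def discrete_gauss_length(z):
    """Approximate Gauss length of a sampled curve z(alpha).

    z: array of shape (n_steps, d), e.g. the logits f(x; theta_alpha)
    evaluated on a uniform grid of alpha values for a single input x.
    Tangents are approximated by finite differences and normalized; the
    Gauss length is the total turning angle between consecutive tangents.
    """
    v = np.diff(z, axis=0)
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
    cos = np.clip(np.sum(v[1:] * v[:-1], axis=1), -1.0, 1.0)
    return float(np.sum(np.arccos(cos)))
\end{verbatim}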
\begin{restatable}[Small Gauss length gives monotonicity]{theorem}{smallgaussmono}\label{thm:small_gauss_mse_mono}
Let $\calL(\bz) = \Vert \bz - \bz^* \Vert_2^2$ for $\bz^* \in \bbR^d$, and let $\bz: (0,1) \rightarrow \bbR^d$ be a smooth curve in $\bbR^d$ with $\bz(1) = \bz^*$ and $\calL(\bz(0)) > 0$. If the Gauss length of $\bz$ is less than $\pi / 2$, then $\calL \circ \bz(\alpha)$ is monotonically decreasing in $\alpha$.
\end{restatable}
See Appendix~\ref{app:gauss-length} for the proof. Informally, this theorem can be understood through a simple physical analogy. Imagine that you are standing on the inside surface of a uniform bowl and wish to increase your height before reaching the bottom. To do so, you must walk at an angle that is at least $\sfrac{\pi}{2}$ relative to the line connecting you to the bottom. Now, the smallest total rotation that guarantees your return to the bottom is at least $\sfrac{\pi}{2}$ radians.
Importantly, Theorem~\ref{thm:small_gauss_mse_mono} applies to arbitrary smooth curves including those produced in the function space of neural networks when we interpolate in the weight space ($\bz(\alpha; \bx)$ above). As an application of Theorem~\ref{thm:small_gauss_mse_mono}, in Appendix~\ref{app:two_layer_linear}, we give sufficient conditions for the MLI property to hold for two-layer linear models (whose loss landscape is non-convex with disconnected globally optimal manifolds \citep{pmlr-v97-kunin19a}). Furthermore, we prove that these sufficient conditions hold almost surely for models satisfying the \emph{tabula rasa} assumptions of \citet{saxe2019mathematical}.
One notable departure from the theory in our experiments is that we consider the average loss over the dataset. In this case, individual logit trajectories may be non-monotonic while the network satisfies the MLI property. Nonetheless, we find the average Gauss length to be a good indicator for the monotonicity of the network as a whole.
\section{Related Work}
\label{sec:related-work}
\paragraph{Monotonic linear interpolation.}~\citet{goodfellow2014qualitatively} were the first to observe that the MLI property persists on various architectures, activation functions, and training objectives in deep learning. In addition to their empirical evaluation, they provided a qualitative analysis of the MLI property in a toy model where they argued that the MLI property holds despite negative curvature about initialization and disconnected optima. Concurrent research \citep{frankle2020revisiting} extends the original work of \citet{goodfellow2014qualitatively} with evaluations on modern architectures trained with SGD.
In this work, we provide an expanded study of the MLI property. We first investigate the persistence of the MLI property on various tasks, including settings with modern architectures and techniques that were not invented at the time of the original investigation. Further, we show that despite the original work's claim, we can train networks that violate the MLI property without significant training difficulty. Our experiments yield new insights into neural networks' loss landscapes and uncover aspects of neural network training that correlate with the MLI property.
\vspace{-0.3cm}\paragraph{Linear connectivity.} This work is connected to empirical and theoretical advancements in understanding the loss landscape of neural networks. Much of this recent work has involved characterizing mode connectivity of neural networks. In general, linear paths between modes cross regions of high loss~\citep{goodfellow2014qualitatively}. However, \citet{garipov2018loss,draxler2018essentially} show that local minima found by stochastic gradient descent (SGD) can be connected via piecewise linear paths. \citet{frankle2019linear} further show that linearly connected solutions may be found if networks share the same initialization. \citet{fort2020deep} demonstrate the connection between linear connectivity and the advantage nonlinear networks enjoy over their linearized version. \citet{kuditipudi2019explaining} posit \textit{dropout stability} as one possible explanation for mode connectivity, with \citet{shevchenko2019landscape} extending these results to show that the loss landscape becomes increasingly connected and more dropout stable with increasing network depth. Finally, \citet{nguyen2019connected} shows that every sublevel set of an overparameterized network is connected, implying that all global minima are connected.
Note that the MLI property we study is distinct from mode connectivity, where paths are drawn between different final solutions instead of initialization $\to$ solution pairs. As far as we are aware, no prior work has explored connections between the MLI property and mode connectivity. This would make for exciting future work.
\paragraph{Loss landscape geometry.} Recent analysis argues that there exists a small subspace at initialization in which the network converges \citep{gur2018gradient, fort2019emergent, papyan2020traces}. \citet{li2018measuring} show that some of these spaces can be identified by learning in a random affine subspace of low dimension. \citet{fort2019goldilocks} show that the success of these random spaces is related to the \emph{Goldilocks zone} that depends on the Hessian at initialization. In a loose sense, the MLI can be considered a special case of these results, wherein a 1D space is sufficient for training to succeed. However, this is not the only mechanism in which neural network training can succeed --- the solutions that violate the MLI property can have good generalization capability and are found without difficulty.
It has long been argued that flatter minima lead to better generalization~\citep{hochreiter1997flat}, with some caveats~\citep{dinh2017sharp}. Recent work has shown that (full-batch) gradient descent with a large learning rate is able to find flatter minima by overcoming regions of initial high curvature~\citep{lewkowycz2020large}. Intuitively, gradient descent breaks out of one locally convex region of the space and into another, suggesting that a barrier in the loss landscape has been surpassed. In this paper, we show that training with larger learning rates can lead to failure of the MLI property; in doing so, we identify a high-loss barrier between the initial and converged parameters. Moreover, we show that these barriers do not appear when training with smaller learning rates.
\paragraph{Neural tangent kernel.} Recent research has shown that over-parameterized networks enjoy faster and, in some cases, more linear learning dynamics~\citep{lee2019wide,matthews2018gaussian}. The Neural Tangent Kernel (NTK)~\citep{jacot2018neural} describes the learning dynamics of neural networks in their function space. Existing work argues that the NTK is near-constant in the infinite width setting \citep{sun2019optimization}; however, recent work challenges this view in general~\citep{liu2020linearity}. \citet{fort2020deep} recently showed that the NTK evolves quickly early on during training but the rate of change decreases dramatically during training. In Appendix~\ref{app:wide_nets}, we draw connections between the NTK literature and the MLI property and show that sufficiently wide fully-connected networks exhibit the MLI property with high probability.
\paragraph{Optimization algorithms.} In this work, we investigate the role that optimization algorithms have on the MLI property (and thus the explored loss landscape more generally). \citet{amari2020does} recently showed that for linear regression, natural gradient descent~\citep{amari1998natural} travels further in parameter space, as measured by Euclidean distance, compared to gradient descent. We verify this claim empirically for larger networks trained with adaptive optimizers and observe that this co-occurs with non-monotonicity along the interpolating path $\btheta_\alpha$.
\section{Discussion and Conclusion}
In this paper we have extended a variational approach in order to
study the polaronic ground-state features of a one dimensional
$el-ph$ model with coupling to local and $nn$ lattice
displacements. Many physical quantities such as the ground state
energy and spectral weight, the average kinetic energy, the mean
number of phonons, and the electron-lattice correlation function
have been discussed making a comparison with the results obtained
with $SR$ and $LR$ interactions. It has been possible to ascertain
that most physical quantities are quantitatively equal to those
obtained for the $LR$ interaction as the $el-ph$ coupling in the
$ER$ case is large. A polaronic phase diagram based on the values
assumed by the spectral weight has been proposed. It has been
shown that the transition lines between the crossover and the
strong coupling regime continuously evolve toward that of the $LR$
case by increasing the coupling of the $ER$ system. The deviations
of the $ER$ case from the $LR$ case become evident only in quantities
depending on distances larger than the lattice parameter, such as
the electron-lattice correlation function. At nearest-neighbor
sites, for large values of the coupling, the $ER$ interaction is
able to reproduce the correlation function characteristic of the
$LR$ case, while, at intermediate values of the ratio
$\alpha_1/\alpha$, the lattice deformation shows an upturn as
function of the coupling constant $\lambda$.
Recently, a variational wave function \cite{perrossh} has been
proposed to study the polaron formation in Su-Schrieffer-Heeger
($SSH$) model where the electronic transfer integral depends on
the relative displacement between $nn$ sites. Unlike the original
$SSH$ model, the non-local electron-lattice coupling has been
assumed to be due to the interaction with optical phonon modes. It
has been shown that with this type of interaction the tendency
towards localization is hindered by the pathological sign change
of the effective next-nearest-neighbor hopping. Therefore it is
not possible to reach the strong coupling regime where most
properties obtained with the $ER$ density-type $el-ph$ coupling
bear strong resemblance with those in the $LR$ model. Only the
coupling with acoustic phonons is able to provide a solution with
localized behavior within the $SSH$ model. \cite{lamagna}
The variational approach for models with density-type $el-ph$
coupling can be generalized to high dimensions, where it can still
give a good description of ground state features. \cite{17,giulio}
However, in order to reproduce most physical quantities of the $LR$
case with the $ER$ interaction as the dimensionality increases, it is
important to include not only coupling terms
at $nn$ sites but also at next nearest neighbors. Actually it is
necessary that the expansion of the coupling to near sites gives
rise to an $el-ph$ interaction vertex similar to that obtained in
the $LR$ case. Under these conditions the variational method is
able to interpolate between the behavior of the $SR$ case and the
$LR$ one as the coupling of the interaction with close sites
increases.
\section*{Figure captions}
\begin {description}
\item{Fig.1}
The $el-ph$ matrix element $M_q$ (in units of $\alpha
\omega_0/{\sqrt L}$) for different ranges of the interaction as
function of the momentum q (in units of $\pi$).
\item{Fig.2}
The ground state energy $E_0$ in units of $\omega_0$ (a), the
ratio B/A at k=0 (b), the average kinetic energy $K$ in units of
the bare one (c) and the average phonon number $N$ (d) for
$t=\omega_0$ as a function of the coupling constant $\lambda$ for
different ranges of the $el-ph$ interaction: $SR$ (solid line),
$ER$ with $\alpha_1/\alpha=0.05$ (dash line), $ER$ with
$\alpha_1/\alpha=0.1$ (dot line), $ER$ with $\alpha_1/\alpha=0.2$
(dash-dot line), $ER$ with $\alpha_1/\alpha=0.3$ (dash-double dot
line), $LR$ (double dash-dot line).
\item{Fig.3}
(a) The ground state spectral weight at $\omega_0 /t =1$ as a
function of the coupling constant $\lambda$ for different ranges
of interaction: $SR$ (solid line), $ER$ with
$\alpha_1/\alpha=0.05$ (dash line), $ER$ with
$\alpha_1/\alpha=0.1$ (dot line), $ER$ with $\alpha_1/\alpha=0.2$
(dash-dot line), $ER$ with $\alpha_1/\alpha=0.3$ (dash-double dot
line), $LR$ (double dash-dot line).
(b) Polaron phase diagram for $SR$ (solid line), $ER$ with
$\alpha_1/\alpha=0.2$ (dash-dot line) , $ER$ with
$\alpha_1/\alpha=0.3$ (dash-double dot line), and $LR$ (double
dash-dot line) $el-ph$ interaction. The transition lines
correspond to model parameters such that the spectral weight
$Z=0.1$.
\item{Fig.4}
The electron-lattice correlation functions $S(R_l=0)$ (a) and
$S(R_l=\delta)$ (b) at $\omega_0 /t =1$ for different ranges of
the $el-ph$ interaction: $SR$ (solid line), $ER$ with
$\alpha_1/\alpha=0.05$ (dash line), $ER$ with
$\alpha_1/\alpha=0.1$ (dot line), $ER$ with $\alpha_1/\alpha=0.2$
(dash-dot line), $ER$ with $\alpha_1/\alpha=0.3$ (dash-double dot
line), $LR$ (double dash-dot line).
\end {description}
\section{Introduction}
Computed tomography (CT) is commonly used in orthopedic procedures.
Magnetic resonance imaging (MRI) is used along with CT to identify muscle structures and diagnose osteonecrosis due to its superior soft tissue contrast \cite{cvitanic2004mri}.
However, MRI has poor contrast for bone structures.
It would be helpful if a corresponding CT were available, as bone boundaries are more clearly seen and CT has standardized (i.e., Hounsfield) units.
Considering radiation exposure in CT, it is preferable if we can delineate boundaries of both muscle and bones in MRI.
Therefore, we aim at MR-to-CT synthesis.
Image synthesis has been extensively studied using patch-based learning \cite{torrado2016fast} as well as deep learning, specifically convolutional neural networks (CNN) \cite{zhao2017whole} and generative adversarial networks (GAN) \cite{kamnitsas2017unsupervised}. Conventional approaches required paired training data, i.e., registered images of the same patient from multiple modalities, which limited their application. A method recently proposed by Zhu et al. \cite{zhu2017unpaired}, called CycleGAN, utilizes unpaired training data by exploiting a cycle consistency loss. While CycleGAN has already been applied to MR-to-CT synthesis \cite{wolterink2017deep}, these previous medical applications targeted CT and MRI of the head, in which the scan protocol (i.e., field-of-view (FOV) and head orientation within the FOV) is relatively consistent, resulting in small variation between the two image distributions even without registration; thus a small training data set (20 to 30 volumes) allowed reasonable accuracy. In contrast, our target anatomy, the hip region, has larger variation in anatomy as well as in pose (i.e., joint angle change and deformation of muscles).
Applications of image synthesis include segmentation.
Some previous studies aimed at segmentation of musculoskeletal structures in MRI \cite{gilles2010musculoskeletal,ranzini2017joint}, but a limitation of these studies was the requirement for multiple sequences and devices.
Another challenge in segmentation of MRI is that there is no standard unit as in CT. Therefore, manually traced label data are necessary for training of each sequence and each imaging device. Thus, MR-to-CT synthesis realizes modality independent segmentation \cite{hamarneh2008simulation}.
In this study, we extend the CycleGAN approach by adding the gradient consistency (GC) loss to encourage edge alignment between images in the two domains and using an order-of-magnitude larger training data set (302 MR and 613 CT volumes) in order to overcome the larger variation and improve the accuracy at the boundaries.
We investigated dependency of image synthesis accuracy on 1) the number of training data and 2) incorporation of the GC loss. To demonstrate the applicability of our method, we also investigated a segmentation accuracy on synthesized images.
\section{Method}
\subsection{Materials}
The datasets we used in this study are MRI dataset consisting of 302 unlabeled volumes and CT dataset consisting of 613 unlabeled, and 20 labeled volumes which are associated with manual segmentation labels of 19 muscles around hip and thigh, pelvis, femur and sacrum bones. Patients with metallic artifact due to implant in the volume were excluded. As an evaluation dataset, we also used other three sets of paired MR and CT
volumes, and 10 MR volumes associated with manual segmentation labels of gluteus medius and minimus muscles, pelvis and femur bones, as a ground truth.
MR volumes were scanned in the coronal plane for diagnosis of osteonecrosis by a 1.0T MR imaging system.
The T1-weighted volumes were obtained by 3D spoiled gradient recalled echo sequence (SPGR)
with a repetition time (TR) of 7.9 ms, echo time (TE) of 3.08 ms, and flip angle of 30$^\circ$. The field of view was 320 mm, and the matrix size was 256$\times$256. The slab thickness was 76 mm, and the slice thickness was 2 mm without an inter-slice gap.
CT volumes were scanned in the axial plane for diagnosis of the patients subjected to total hip arthroplasty (THA) surgery. The field of view was 360$\times$360 mm and the matrix size was
512$\times$512. The slice thickness was 2.0 mm for the
region including pelvis and proximal femur, 6.0 mm
for the femoral shaft region, and 1.0 mm for the distal
femur region.
In this study, the CT volumes were cropped and resliced so that the FOV resembles that of MRI volumes, as shown in Figure \ref{fig:dataset}, and then resized to 256$\times$256.
\begin{figure}[!bt]
\centering
\includegraphics[width=0.95\textwidth]{figs/dataset_2-eps-converted-to.pdf}
\caption{Training datasets used in this study. MRI dataset consists of 302 unlabeled volumes and CT dataset consists of 613 unlabeled and 20 labeled volumes. N4ITK intensity inhomogeneity correction \cite{tustison2010n4itk} was applied to all MRI volumes. Two datasets have similar field-of-view, although these are not registered.
}
\label{fig:dataset}
\end{figure}
\subsection{Image synthesis using CycleGAN with gradient-consistency loss}
The underlying algorithm of the proposed MR-to-CT synthesis follows that of Zhu et al. \cite{zhu2017unpaired}, which allows translation of an image between the CT and MR domains without pairwise aligned CT and MR training images of the same patient. The workflow of the proposed method is shown in Figure \ref{fig:overview}.
The networks $G_{CT}$ and $G_{MR}$ are generators that translate real MR and CT images into synthesized CT and MR images, respectively. The networks $D_{CT}$ and $D_{MR}$ are discriminators that distinguish between real and synthesized images.
While discriminators try to distinguish synthesized images by maximizing adversarial losses $\mathcal{L} _{CT}$ and $\mathcal{L} _{MR}$, defined as
\begin{eqnarray}
\mathcal{L} _{CT} &=& \textstyle \sum_{x\in I_{CT}} \log D_{CT}(x) + \sum_{y\in I_{MR}} \log ( 1-D_{CT}(G_{CT}(y))), \\
\mathcal{L}_{MR} &=& \textstyle \sum_{y\in I_{MR}} \log D_{MR}(y) + \sum_{x\in I_{CT}} \log (1-D_{MR}(G_{MR}(x))),
\end{eqnarray}
generators try to synthesize images that are indistinguishable from the target domain by minimizing these losses. Here, $x$ and $y$ denote images from the domains $I_{CT}$ and $I_{MR}$, respectively.
However, networks with large capacity can map the same set of images from the source domain to any random permutation of images in the target domain. Thus, adversarial losses alone cannot guarantee that the learned generator translates an individual input to the desired corresponding output. Therefore, the loss function is regularized by cycle
consistency, defined as the difference between the real image and its reconstruction, i.e., the inverse mapping of the synthesized image \cite{zhu2017unpaired}. The cycle consistency loss $\mathcal{L}_{Cycle}$ is defined as
\begin{eqnarray}
\mathcal{L}_{Cycle} &=& \textstyle \sum_{x\in I_{CT}} |G_{CT}(G_{MR}(x)) - x| + \sum_{y\in I_{MR}} |G_{MR}(G_{CT}(y)) - y|
\end{eqnarray}
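As a sketch of how this term is computed in practice (illustrative PyTorch-style code, not our actual implementation), the per-batch cycle consistency loss can be written as follows; note that a per-pixel mean is used here, which differs from the sum in the equation above only by a constant factor.
\begin{verbatim}
import torch

def cycle_consistency_loss(x_ct, y_mr, G_CT, G_MR):
    """L1 cycle consistency: real -> synthesized -> reconstructed.

    x_ct, y_mr: batches of real CT and MR images.
    G_CT maps MR to CT; G_MR maps CT to MR (as in the text).
    """
    rec_ct = G_CT(G_MR(x_ct))   # CT -> synthesized MR -> reconstructed CT
    rec_mr = G_MR(G_CT(y_mr))   # MR -> synthesized CT -> reconstructed MR
    return (rec_ct - x_ct).abs().mean() + (rec_mr - y_mr).abs().mean()
\end{verbatim}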
We extended the CycleGAN approach by explicitly adding the gradient consistency loss between real and synthesized images to improve the accuracy at the boundaries.
The gradient correlation (GC) \cite{penney1998comparison} has been used
as a similarity metric in medical image registration; it is defined as the normalized cross correlation between the gradients of two images. Given the gradients in the horizontal and vertical directions of two images $A$ and $B$, GC is defined as
\begin{eqnarray}
GC(A, B) &=& \frac{1}{2}\{NCC(\nabla_x A, \nabla_x B) + NCC(\nabla_y A, \nabla_y B) \}
\label{eq:gc} \\
\mathrm{where},\ NCC(A, B) &=& \frac{ \sum_{(i,j)}^{} (A-\bar{A}) (B-\bar{B})}{ \sqrt{ \sum_{(i,j)}^{} (A-\bar{A})^2 } \sqrt{ \sum_{(i,j)}^{} (B-\bar{B})^2 }} \nonumber
\end{eqnarray}
and $\nabla_x$ and $\nabla_y$ are the gradient operator of each direction, $\bar{A}$ is the mean value of $A$.
We formulate the gradient-consistency loss $\mathcal{L}_{GC}$ as
\begin{eqnarray}
\mathcal{L}_{GC} &=& \frac{1}{2}\{ \sum_{x\in I_{CT}} (1-GC(x, G_{MR}(x)))+ \sum_{y\in I_{MR}} (1-GC(y, G_{CT}(y)))\}
\end{eqnarray}
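A schematic implementation of this loss for a single pair of images is given below (illustrative PyTorch-style code, not our released implementation); single-channel image tensors and simple forward differences for the gradient operators are assumptions made for brevity.
\begin{verbatim}
import torch

def ncc(a, b, eps=1e-8):
    """Normalized cross correlation between two tensors of equal shape."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.pow(2).sum().sqrt() * b.pow(2).sum().sqrt() + eps)

def gradient_consistency_loss(real, synth):
    """1 - GC(real, synth), with GC the mean NCC of the horizontal and
    vertical finite-difference gradients, as in the definition above."""
    dx_r = real[..., :, 1:] - real[..., :, :-1]
    dx_s = synth[..., :, 1:] - synth[..., :, :-1]
    dy_r = real[..., 1:, :] - real[..., :-1, :]
    dy_s = synth[..., 1:, :] - synth[..., :-1, :]
    gc = 0.5 * (ncc(dx_r, dx_s) + ncc(dy_r, dy_s))
    return 1.0 - gc
\end{verbatim}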
\begin{figure}[!bt]
\centering
\includegraphics[width=0.60\textwidth]{figs/overview_4-eps-converted-to.pdf}
\caption{Workflow of the proposed method. $G_{CT}$ and $G_{MR}$ are generator networks that translate MR to CT images, and CT to MR images, respectively. $D_{CT}$ and $D_{MR}$ are discriminator networks to distinguish between real and synthesized images. The cycle consistency loss $\mathcal{L}_{Cycle}$ is a regularization term defined by the difference between real and reconstructed image. To improve the accuracy at the edges, loss function is regularized by gradient consistency loss $\mathcal{L}_{GC}$. }
\label{fig:overview}
\end{figure}
Finally, our objective function is defined as:
\begin{eqnarray}
\mathcal{L}_{total} &=& \mathcal{L}_{CT} + \mathcal{L}_{MR} + \lambda_{Cycle} \mathcal{L}_{Cycle} + \lambda_{GC} \mathcal{L}_{GC}
\end{eqnarray}
where $\lambda_{Cycle}$ and $\lambda_{GC}$ are weights to balance each loss. Then, we solve:
\begin{eqnarray}
\hat{G}_{MR}, \hat{G}_{CT} = \arg \min_{G_{CT},G_{MR}} \max_{D_{CT},D_{MR}} \mathcal{L}_{total}
\end{eqnarray}
In this paper, we used 2D CNN with 9 residual blocks for generator, similar to the one proposed in \cite{johnson2016perceptual}.
For the discriminators, we used $70 \times 70$ PatchGAN \cite{isola2017image}. We replaced the losses in Eq. (1) and Eq. (2) by least-squares losses as in \cite{mao2016multi}. These settings follow \cite{zhu2017unpaired,wolterink2017deep}.
The CycleGAN was trained using Adam \cite{kingma2014adam} for the first $1\times10^5$ iterations at a fixed learning rate of 0.0002, and for the last $1\times10^5$ iterations with the learning rate linearly decayed to zero. The balancing weights were empirically determined as $\lambda_{Cycle} = 3$ and $\lambda_{GC} = 0.3$. CT and MR volumes were normalized such that intensities of [-150, 350] HU and [0, 100] were mapped to [0, 255], respectively.
\section{Result}
\subsection{Quantitative evaluation on image synthesis}
To evaluate image synthesis, we investigated dependency of the accuracy on the number of training data and with or without the GC loss.
The CycleGAN was trained with datasets of different sizes, i) 20 MR and 20 CT volumes, ii) 302 MR and 613 CT volumes, and both with and without GC loss.
We conducted two experiments. The first experiment used three sets of paired MR and CT volumes of the same patient for test data. Because availability of paired MR and CT volumes was limited, we conducted the second experiment in which unpaired 10 MR and 20 CT volumes were used.
In the first experiment, we evaluated synthesized CT by means of mean absolute error (MAE) and peak-signal-to-noise ratio (PSNR) [dB] between synthesized CT and ground truth CT, both of which were normalized as mentioned in 2.2. The ground truth CT here is a CT registered to the MR of the same patient.
CT and MR volumes were aligned using landmark-based registration as initialization, and then aligned using rigid and non-rigid registration. The results of MAE and PSNR are shown in Table \ref{tab:mae}. PSNR is calculated as $PSNR = 20 \log_{10}\frac{255}{\sqrt{MSE}}$, where MSE is mean squared error.
The average MAE decreased and the average PSNR increased both with the larger training data size and with the inclusion of the GC loss. Fig.~\ref{fig:vis_paired} shows representative results.
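For reference, the two metrics can be computed as in the following sketch (assuming both volumes have already been mapped to the [0, 255] range as described above):
\begin{verbatim}
import numpy as np

def mae_psnr(synth, truth, max_val=255.0):
    """MAE and PSNR (dB) between synthesized and ground-truth CT volumes."""
    synth = np.asarray(synth, dtype=float)
    truth = np.asarray(truth, dtype=float)
    mae = np.mean(np.abs(synth - truth))
    mse = np.mean((synth - truth) ** 2)
    psnr = 20.0 * np.log10(max_val / np.sqrt(mse))
    return mae, psnr
\end{verbatim}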
\begin{table}[t]
\centering
\caption{Mean absolute error (MAE) and Peak-signal-to-noise ratio (PSNR) between synthesized and real CT volumes.}
\label{tab:mae}
\begin{tabular}{l|l|l|l|l|l|}
& & \multicolumn{2}{|c|}{20 volumes} & \multicolumn{2}{|c|}{$>$300 volumes} \\
& & \multicolumn{1}{|c|}{w/o GC} & \multicolumn{1}{|c|}{/w GC} & \multicolumn{1}{|c|}{w/o GC} & \multicolumn{1}{|c|}{/w GC} \\ \hline
\multirow{4}{*}{MAE}& Patient \#1 & 30.121 & 30.276 & 26.899 & 26.388 \\
& Patient \#2 & 26.927 & 26.911 & 22.319 & 21.593 \\
& Patient \#3 & 33.651 & 32.155 & 29.630 & 28.643 \\ \cdashline{2-6}
& Average $\pm$ SD & 30.233 $\pm$ 2.177 & 29.781 $\pm$ 1.777 & 26.283 $\pm$ 1.367 & 25.541 $\pm$ 1.129 \\ \hline
\multirow{4}{*}{PSNR} & Patient \#1 & 14.797 & 14.742 & 15.643 & 15.848 \\
& Patient \#2 & 15.734 & 15.628 & 17.255 & 17.598 \\
&Patient \#3 & 14.510 & 14.820 & 15.674 & 15.950 \\ \cdashline{2-6}
& Average $\pm$ SD & 15.014 $\pm$ 0.330 & 15.063 $\pm$ 0.380 & 16.190 $\pm$ 0.273 & 16.465 $\pm$ 0.296
\end{tabular}
\end{table}
\begin{figure}[!bt]
\centering
\includegraphics[width=0.9\textwidth]{figs/vis_paired-eps-converted-to.pdf}
\caption{Representative results of the absolute error between the ground truth paired CT and synthesized CT from two patients. Since the FOV of MR and CT volumes are slightly different, there is no corresponding region near the top edge of the ground truth volumes (filled with white color). This area was not used for evaluation.}
\label{fig:vis_paired}
\end{figure}
In the second experiment, we tested with 10 unpaired MR and 20 unpaired CT volumes.
Mutual information (MI) between synthesized CT and original MR was used for evaluation when the paired ground truth was not available.
The quantitative results are shown in Fig.~\ref{fig:eval_similarity}(a). The left side shows box-and-whisker plots of the slice-averaged MI between real CT and synthesized MR (i.e., 20 data points in total), and the right side shows the slice-averaged MI between real MR and synthesized CT (i.e., 10 data points in total). The results show that the larger number of training data yielded a statistically significant improvement in MI ($p<0.01$) according to the paired $t$-test. The GC loss also led to an increase in MI between MR and synthesized CT ($p<0.01$). Fig.~\ref{fig:eval_similarity}(b) and Fig.~\ref{fig:vis_synthesis} show examples of real MR and synthesized CT volumes. As indicated by the arrows, the synthesized volumes with the GC loss preserved the shape near the femoral head and the adductor muscles.
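The MI between a pair of aligned slices can be estimated from a joint intensity histogram, as in the following illustrative sketch (the number of histogram bins is an arbitrary choice and is not necessarily the value used for the reported results):
\begin{verbatim}
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two aligned 2D slices via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
\end{verbatim}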
\begin{figure}[!bt]
\centering
\includegraphics[width=1.0\textwidth]{figs/ttest_mi_6-eps-converted-to.pdf}
\caption{Evaluation of similarity between the real and synthesized volumes. (a) quantitative comparison of mutual information on different training data size with and without the gradient-consistency loss. (b) representative result of one patient. }
\label{fig:eval_similarity}
\end{figure}
\begin{figure}[!bt]
\centering
\includegraphics[width=0.9\textwidth]{figs/vis_synthesis_3-eps-converted-to.pdf}
\caption{Representative results of translation from real MR to synthesized CT of four patients with and without the gradient consistency loss. As indicated by arrows, synthesized volumes with gradient consistency loss helped to preserve the shape near the adductor muscles. }
\label{fig:vis_synthesis}
\end{figure}
\subsection{Quantitative evaluation on segmentation}
To demonstrate the applicability of image synthesis in segmentation task, we evaluated the segmentation accuracy. Twenty labeled CT datasets were used to train the segmentation network. Then, we evaluated the segmentation accuracy with 10 MR volumes with manual segmentation labels of the gluteus medius and minimus muscles and femur.
We employed the 2D U-net proposed by Ronneberger et al. \cite{ronneberger2015u} as segmentation network, which is widely used in medical image analysis and demonstrated high performance with a limited number of labeled volumes. In MRI, muscle boundaries are clearer while bone boundaries are clearer in CT. To incorporate the advantage of both CT and MR, we modified the 2D U-net to take the two-channel input of both CT and synthesized MR images.
We trained on 2D U-net using Adam \cite{kingma2014adam} for $1\times10^5$ iterations at learning rate of 0.0001. At the test phase, a pair of MR and synthesized CT was used as two-channel input.
The results with 4 musculoskeletal structures for 10 patients are shown in Fig.\ref{fig:eval_seg} (i.e., 10 data points in total on each plot).
The results show that the larger number of training data yielded a statistically significant improvement in DICE on the pelvis ($p<0.01$), femur ($p<0.01$), gluteus medius ($p<0.01$) and gluteus minimus regions ($p<0.05$) according to the paired $t$-test. The GC loss also led to an increase in DICE on the gluteus minimus region ($p<0.01$).
The average DICE coefficient in the cases trained with more than 300 cases and GC loss was 0.808$\pm$0.036 (pelvis), 0.883$\pm$0.029 (femur), 0.804$\pm$0.040 (gluteus medius) and 0.669$\pm$0.054 (gluteus minimus), respectively.
Fig.~\ref{fig:vis_segment} shows an example visualization of real MR, synthesized CT, and estimated labels for one patient.
The result with GC loss has smoother segmentation not only in the gluteus minimus but also near the adductor muscles.
\begin{figure}[!bt]
\centering
\includegraphics[width=0.85\textwidth]{figs/ttest_seg_ct_and_mr_4-eps-converted-to.pdf}
\caption{Evaluation of segmentation accuracy on different training data size in CycleGAN with and without the gradient-consistency loss. Segmentation of (a) pelvis, (b) femur, (c) gluteus medius and (d) gluteus minimus muscle in MR volumes were performed using MR-to-CT synthesis.}
\label{fig:eval_seg}
\end{figure}
\begin{figure}[!bt]
\centering
\includegraphics[width=0.85\textwidth]{figs/vis_segment_6-eps-converted-to.pdf}
\caption{Representative results of segmentation from one patient. The ground truth label consists of 4 musculoskeletal structures in MRI. Although we evaluated only these 4 structures because ground truth was not available for the other structures on MRI, all 22 estimated labels are shown for qualitative evaluation. In the right-most column, all estimated labels are overlaid on the real MRI. p, f, gmed, gmin denote the DICE of pelvis, femur, gluteus medius, and gluteus minimus, respectively.}
\label{fig:vis_segment}
\end{figure}
\section{Discussion and Conclusion} \label{sec:discussion section}
In this study, we proposed an image synthesis method which extended the CycleGAN approach by adding the GC loss to improve the accuracy at the boundaries. Specifically, the contributions of this paper are 1) introduction of GC loss in CycleGAN, and 2) quantitative and qualitative evaluation of the dependency of both image synthesis accuracy and segmentation accuracy on a large number of training data.
One limitation of this study is that we excluded patients with implants, while our target cohort (i.e., THA patients) sometimes has an implant on one side, for example, when planning a secondary surgery.
As a comparison against single-modality training, we performed 5-fold cross validation of MR segmentation using 10 labeled MR volumes (i.e., trained with 8 MR volumes and tested on the remaining 2 MR volumes) using the U-net segmentation network. The DICE was 0.815$\pm$0.046 (pelvis), 0.921$\pm$0.023 (femur), 0.825$\pm$0.029 (gluteus medius) and 0.752$\pm$0.045 (gluteus minimus), respectively.
We found the gap of accuracy between modality independent and dependent segmentation. A potential improvement of modality independent segmentation is to construct an end-to-end network that performs image synthesis and segmentation \cite{huo2017adversarial}.
Our future work also includes development of a method that effectively incorporates information in unlabeled CT and MR volumes to improve segmentation accuracy \cite{zhang2017deep}.
\section{Supplemental material}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{supp1.pdf}
\caption{Diffraction patterns from the bosons at $T/T_c=0.25$ and $\tilde{g} = 0.0217$. The intensity is normalized to 1 and in log scale.}
\label{fig:sup_diffs}
\end{figure*}
\myparagraph{Superfluidity and Bose-Einstein Condensation in two dimensions}
In this section, we clarify some confusion that may arise regarding the significance of superfluidity in our system, and its relation to the presence, or lack, of a Bose-Einstein condensate (BEC).
It is well-known that, in a homogeneous continuous system in $d=2$, there can be no second order phase transition, as fluctuations prevent long-range order from being established \cite{Hohenberg1967, Mermin1966}. In the case of Bose-Einstein condensation, the order parameter is represented by the condensate fraction \cite{pitaevski2003bose}; therefore, there can be no BEC in a homogeneous system in 2D.
On the other hand, the superfluid fraction is not an order parameter, and can instead be characterized as a response function to an external velocity field, a property that has been extensively used to characterize superfluidity through a reduction of the moment of inertia (we also do so in the next section). In this sense, it can be different from zero even when long-range order is absent. In $d=2$, even when long-range order is forbidden, a different kind of quasi-long-range order can be formed in the context of the Berezinskii-Kosterlitz-Thouless (BKT) transition, which leads to a non-zero superfluid fraction below a certain temperature \cite{KosterlitzThouless1973, ber70, cha95}.
When harmonic trapping is introduced, the system is not homogeneous anymore, and it is possible for the system to display BEC in 2D; this is indeed the case for non-interacting bosons in $d=2$. The question of whether interacting bosons in a trap undergo a transition of the BEC or BKT kind has led to investigations of what is called the BEC/BKT crossover, both theoretically and experimentally \cite{hol07, fle15}.
In this paper, we take the critical temperature of the $d=2$ trapped Bose gas as a reference point, but we do not concern ourselves with the intricacies related to boson condensation and the BEC/BKT crossover. For our purposes, what is important is that we can distinguish superfluid and insulating phases, and our methods, as described in the next section, rely only on the definition of superfluidity as a response function, with no explicit reference to condensation.
\myparagraph{Details on the Path integral Monte Carlo method} The core of the method lies in the application of Feynman's path integral to the partition function of a quantum system at finite temperature \cite{fey98, fey10}. Thermodynamic properties can then be measured on an equivalent, classical system, where each quantum particle is represented by a classical polymers. Quantum concepts, such as coherence and superfluidity, can be mapped across the equivalence as properties of the polymers, and can consequently be sampled by employing Monte Carlo procedures such as the Metropolis algorithm. In addition, we use the canonical Worm algorithm \cite{PhysRevLett.96.070601, Boninsegni2006} to efficiently sample configurations of connected polymers, which are crucial to the understanding of superfluidity. Reviews of and introductions to PIMC can be found in \cite{cep95, krauth2006statistical}.
The advantage of path integral techniques lies in their ability to determine the thermodynamic properties of the system starting from its basic constituents - the atoms and the microscopic interaction - within a precision limited only by numerical and statistical errors. In practice, the equivalence is realized approximately by breaking up the imaginary time interval $\beta$ into smaller intervals $\tau = \beta /M$. To each particle $i$ corresponds, then, a classical polymer made of $j=1\dots M$ beads, connected with each other through harmonic springs. Errors introduced by the equivalence are reduced as $M$ increases.
The basic version of our algorithm makes use of the harmonic propagator to efficiently simulate the behavior of bosons in the trapping potential, while the lattice is taken into account as an external potential in the sampling rates. The hard-core interaction is implemented through the pair-product approximation, requiring, in two dimensions, the use of tables for the propagator \cite{bar79, cep95, pil06}.
\myparagraph{Zonal superfluid fraction}
We define the zonal superfluid estimator, which was referenced in the main text. We begin with a quick review of the local estimator \cite{kwo06}.
In the context of the two-fluid model \cite{tis38, lon54}, the onset of superfluidity is described by separating the fluid into two components, a superfluid of density $\rho_s$ and a normal one of density $\rho_n$, contributing to the total density of the fluid:
\begin{equation}
\rho = \rho_n + \rho_s,
\end{equation}
The ratio of the superfluid density to the total one is the superfluid fraction,
\begin{equation}
n_s = \frac{\rho_s}{\rho}.
\end{equation}
The two components have different properties in terms of flow and entropy transport; in particular, the superfluid component displays zero viscosity, and is therefore unresponsive to the application of external velocity fields. When we consider angular velocities, this leads to a reduction of the total moment of inertia, compared to a classical fluid in the same conditions. This relationship is stated as
\begin{equation}
n_s = 1 - \frac{I}{I_{cl}},
\end{equation}
where $I$ is the measured moment of inertia, which only the normal component contributes to, while $I_{cl}$ is the classical moment of inertia, which is the one the same mass of fluid would have if it behaved classically.
In the context of PIMC, the expectation value of the angular momentum is given in terms of the area encircled by particle paths, leading to the estimator
\begin{equation} \label{sm_globsl}
n_s = \frac{2 m}{\lambda \beta} \frac{\langle A_z^2 \rangle}{I_{cl}},
\end{equation}
which is equation \eqref{global} in the main text, where we omitted the non-ergodic term for brevity, and $\lambda = \hbar^2/2m$. In this expression,
\begin{equation}
A_z = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{M} \textbf{r}_{i,j} \times \textbf{r}_{i,j+1}
\end{equation}
is the total area enclosed by particle paths, and $\textbf{r}_{i,j}$ is the position of the $j$-th bead in the $i$-th particle.
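As a schematic illustration of how this estimator can be evaluated from sampled configurations (omitting the non-ergodic term, and assuming for simplicity that each particle's beads close onto themselves; in an actual simulation with permuted paths the areas must follow the true bead connectivity):
\begin{verbatim}
import numpy as np

def winding_area_ns(paths, beta, lam, mass):
    """Schematic evaluation of Eq. (S5): n_s = (2 m / (lambda beta)) <A_z^2> / I_cl.

    paths: array of shape (n_samples, N, M, 2) of bead positions r_{i,j}
    for each sampled configuration; lam denotes hbar^2 / 2m as in the text.
    """
    r = np.asarray(paths, dtype=float)
    r_next = np.roll(r, -1, axis=2)                            # r_{i,j+1}
    cross_z = r[..., 0] * r_next[..., 1] - r[..., 1] * r_next[..., 0]
    A_z = 0.5 * cross_z.sum(axis=(1, 2))                       # area per configuration
    I_cl = mass * r.shape[1] * np.mean((r ** 2).sum(axis=-1))  # m * N * <r^2>
    return (2.0 * mass / (lam * beta)) * np.mean(A_z ** 2) / I_cl
\end{verbatim}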
We can manipulate the equations above to give
\begin{equation}\label{eq_iicl}
I = I_{cl} (1 - n_s) = I_{cl} - \frac{2 m}{\lambda \beta} \langle A_z^2 \rangle.
\end{equation}
In inhomogeneous systems, the fields describing the two components acquire a spatial dependence, and so does the superfluid fraction itself:
\begin{equation}
\rho(\textbf{r}) = \rho_n(\textbf{r}) + \rho_s(\textbf{r}),
\end{equation}
\begin{equation}
n_s(\textbf{r}) = \frac{\rho_s(\textbf{r})}{\rho(\textbf{r})}.
\end{equation}
This local superfluid fraction can be characterized by breaking up the estimator \eqref{sm_globsl} into local contributions.
$I_{cl}$ is written explicitly as
\begin{equation}
I_{cl} = \int d\textbf{r} \; \rho(\textbf{r}) r^2.
\end{equation}
Conversely, the measured moment of inertia is calculated by considering only the contribution of the normal component:
\begin{equation}
I = \int d\textbf{r} \; \rho_n(\textbf{r}) r^2 = \int d\textbf{r} \; \left[\rho(\textbf{r}) - \rho_s(\textbf{r})\right] r^2 = I_{cl} - \int d\textbf{r} \; \rho_s(\textbf{r}) r^2.
\end{equation}
This, by comparison with \eqref{eq_iicl}, leads us to
\begin{equation} \label{eq_nsicl}
\int d\textbf{r} \; \rho_s(\textbf{r}) r^2 = n_s I_{cl} = \frac{2 m}{\lambda \beta} \langle A_z^2 \rangle.
\end{equation}
A possible definition then suggests itself, as
\begin{equation}
\rho_s(\textbf{r}) = \frac{2 m}{\lambda \beta}\frac{ \langle A_z A_z(\textbf{r}) \rangle }{r^2};
\end{equation}
this will integrate to the appropriate amount as long as $ \int d\textbf{r} \; A_z(\textbf{r}) = A_z $. The most common choice \cite{kwo06} is to define
\begin{equation}
A_z(\textbf{r}) = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{M} \textbf{r} \times \textbf{r}_{i,j+1} \delta(\textbf{r}-\textbf{r}_{i,j}).
\end{equation}
The $r^2$ term in the denominator is sometimes named the ``local contribution to the classical moment of inertia''. This is not entirely correct, since the local contribution is actually $\rho(\textbf{r}) r^2$. It is possible to express the decomposition of the superfluid fraction so that the local moment of inertia becomes directly relevant. From \eqref{eq_nsicl}, we find that
\begin{equation}
n_s = \frac{1}{I_{cl}} \int d\textbf{r} \; \rho_s(\textbf{r}) r^2 = \frac{1}{I_{cl}} \int d\textbf{r} \; n_s(\textbf{r}) \rho(\textbf{r}) r^2
\end{equation}
meaning that the global superfluid fraction is given by the average of the local superfluid fraction, weighted by the local moment of inertia.
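A minimal sketch of how such a local estimator can be accumulated in practice (the radial binning and the array layout are assumptions of ours) is:
\begin{verbatim}
import numpy as np

def local_area_histogram(paths, r_edges):
    # paths: array (N, M, 2) for one configuration; r_edges: radial bins.
    # The contribution (1/2) r_{i,j} x r_{i,j+1} is deposited at |r_{i,j}|,
    # i.e. a binned version of A_z(r); correlating it with the total A_z
    # over many configurations yields <A_z A_z(r)>.
    r = paths
    r_next = np.roll(paths, -1, axis=1)
    contrib = 0.5 * (r[..., 0] * r_next[..., 1] - r[..., 1] * r_next[..., 0])
    radii = np.linalg.norm(r, axis=-1)
    hist, _ = np.histogram(radii.ravel(), bins=r_edges,
                           weights=contrib.ravel())
    return hist
\end{verbatim}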
As we mentioned in the main text, the local estimator is noisy and difficult to sample, especially in the localized phase. We can, however, exploit the integral decomposition to define superfluid fractions in different regions of the system. Given a region $A$, we can write
\begin{equation}
n^A_s = \frac{1}{I^A_{cl}} \int_A d\textbf{r} \; \rho_s(\textbf{r}) r^2 = \frac{1}{I^A_{cl}} \int_A d\textbf{r} \; n_s(\textbf{r}) \rho(\textbf{r}) r^2,
\end{equation}
with the same definitions as before, but limiting the integration to the $A$ region. If the system is partitioned into a finite number of regions $A$, $B$..., we can then recover the global superfluid fraction as
\begin{equation}
n_s = \frac{I^A_{cl}}{I_{cl}} n^A_s + \frac{I^B_{cl}}{I_{cl}} n^B_s + \dots
\end{equation}
This is, again, an average of the superfluid fractions of each region, weighted by the respective moment of inertia. Crucially, this decomposition shows that a region can have a finite superfluid fraction, but still give a negligible contribution to the global $n_s$, if the associated moment of inertia is small. This is the case for regions close to the trap center.
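For instance, the recombination of zonal fractions can be sketched as follows (the numbers are purely illustrative, not simulation results):
\begin{verbatim}
import numpy as np

def combine_zonal_fractions(I_cl_regions, ns_regions):
    # Global n_s as the I_cl-weighted average of the zonal fractions.
    I_cl_regions = np.asarray(I_cl_regions, dtype=float)
    ns_regions = np.asarray(ns_regions, dtype=float)
    return np.sum(I_cl_regions * ns_regions) / I_cl_regions.sum()

# A central region with a sizeable zonal fraction but a tiny moment of
# inertia barely moves the total:
print(combine_zonal_fractions([0.02, 1.0], [0.8, 0.05]))   # about 0.065
\end{verbatim}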
\myparagraph{Density profiles}
In the context of PIMC, two main ways are available to display the spatial configuration of the system.
The first is to select a system configuration at a given simulation step, and to plot the position of each bead $\textbf{r}_i^j$, drawing a line between each pair of connected beads. The resulting figures are usually called snapshots. One advantage of this approach is that it allows one to explicitly display connections between different particles, and therefore to obtain a visual representation of coherence. Such snapshots are the ones we plot in \figref[a-c]{phasediagram}.
The second method is to plot density profiles, which are obtained as averages over simulation steps, as well as over the positions of all the different beads associated with each particle. In continuous space, the average is usually performed by separating the simulation area into bins, and counting the number of beads in each bin at every simulation step. To obtain the density profiles shown in \figref[c-e]{geometry}, we counted beads in 360 bins in correspondence with the circles drawn in \figref[a]{geometry}.
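As an illustration, the angular binning along one circle could be implemented as in the following sketch (the radial tolerance selecting beads near the circle is an assumption of ours):
\begin{verbatim}
import numpy as np

def angular_density_profile(beads, radius, dr, n_bins=360):
    # beads: array (K, 2) of all bead positions accumulated over particles,
    # time slices and Monte Carlo steps; dr selects beads near the circle.
    r = np.linalg.norm(beads, axis=1)
    theta = np.arctan2(beads[:, 1], beads[:, 0])     # angle in (-pi, pi]
    sel = np.abs(r - radius) < dr
    hist, _ = np.histogram(theta[sel], bins=n_bins, range=(-np.pi, np.pi))
    return hist / hist.sum() if hist.sum() else hist
\end{verbatim}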
\myparagraph{Diffraction patterns}
The structure factor is a quantity directly related to the diffraction patterns that can be observed experimentally in scattering experiments. It is defined, for a particle density $n(\textbf{r}) = \sum_i \delta(\textbf{r}-\textbf{r}_i)$, as
\begin{equation}
\label{struct}
I(\textbf{q}) = \langle n(\textbf{q}) n(-\textbf{q}) \rangle,
\end{equation}
with
\begin{equation}
n(\textbf{q}) = \int d^2\textbf{r} \; e^{-i\textbf{q}\cdot\textbf{r}} n(\textbf{r}) = \sum_{j} e^{-i\textbf{q}\cdot\textbf{r}^j}
\end{equation}
the Fourier transform of the particle distribution \cite{cha95}. To measure this quantity, we compute the sum and average over beads and simulation steps, similarly to what we do for the density profiles. This is done for a set of wavevectors, on the vertices of a grid in $\textbf{q}$-space.
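A minimal sketch of this measurement (the data layout is an assumption of ours) is:
\begin{verbatim}
import numpy as np

def structure_factor(beads_per_step, q_grid):
    # beads_per_step: list of (K, 2) arrays, one per Monte Carlo step;
    # q_grid: array (Q, 2) of wavevectors on the grid.
    # I(q) = < n(q) n(-q) > = < |n(q)|^2 > for a real density.
    acc = np.zeros(len(q_grid))
    for beads in beads_per_step:
        n_q = np.exp(-1j * q_grid @ beads.T).sum(axis=1)   # n(q)
        acc += np.abs(n_q) ** 2
    return acc / len(beads_per_step)
\end{verbatim}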
In \figref{sup_temps1}, we display some diffraction patterns. These are the same as those reported in \figref[f-h]{geometry} of the main text, with the addition of the values $V_0=0$ and $V_0/E_r=0.5$. As expected, the structure factor evolves from a single peak in the fluid phase to a typical quasicrystalline pattern.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{supp2.pdf}
\caption{Depletion of the global superfluid fraction at different values of $T$, in the non-interacting case. The dashed line is obtained analytically, while the dots are simulation results.}
\label{fig:sup_temps1}
\end{figure}
\begin{figure}[b!]
\includegraphics[width=\linewidth]{supp3.pdf}
\caption{Global superfluid fraction at different values of $T$, at $\tilde{g}=0.0217$. The lines are a guide for the eye.}
\label{fig:sup_temps2}
\end{figure}
\myparagraph{Temperature behavior of the non-interacting gas}
For free bosons in a harmonic trap, the temperature behavior of the global superfluid fraction can be predicted by analytical estimates. First, $n_s$ is related to the number of particles in the condensate, from considerations on its moment of inertia \cite{sch00}:
\begin{equation} \label{sm_2dsuper}
n_s(T) \simeq \frac{1}{1 + \frac{N - \langle N_0 \rangle}{\langle N_0 \rangle} \frac{2k_BT}{\hbar\omega}} ,
\end{equation}
where $\langle N_0 \rangle$ is the number of particles in the condensate at temperature $T$. This quantity can be directly computed from the energy density of states, to give
\begin{equation} \label{sm_condensation}
\langle N_0 \rangle = N - \int_0^{\infty} d\epsilon \rho(\epsilon) n(\epsilon) = N - \frac{k_B^2T^2}{\hbar^2\omega^2} \frac{\pi^2}{6},
\end{equation}
$\rho(\epsilon)$ being the energy density of states. Plugging \eqref{sm_condensation} into \eqref{sm_2dsuper}, we obtain a formula for the global superfluid fraction as a function of the temperature, which we plot as a dashed line in \figref{sup_temps1}. The dots are values of $n_s$ estimated from our simulations, which show perfect agreement with the analytical prediction.
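The dashed line in \figref{sup_temps1} can be reproduced with a few lines (a sketch in trap units; the cutoff guarding the regime where $\langle N_0 \rangle$ would turn negative is our own):
\begin{verbatim}
import numpy as np

def ns_noninteracting(T, N, omega=1.0, hbar=1.0, kB=1.0):
    # Combine Eqs. (sm_condensation) and (sm_2dsuper): first the condensate
    # population, then the superfluid fraction of the trapped ideal gas.
    N0 = N - (np.pi ** 2 / 6.0) * (kB * T) ** 2 / (hbar * omega) ** 2
    N0 = max(N0, 1e-12)          # guard against the regime N0 <= 0
    return 1.0 / (1.0 + (N - N0) / N0 * 2.0 * kB * T / (hbar * omega))

print([round(ns_noninteracting(T, N=100), 3) for T in (1.0, 5.0, 10.0)])
\end{verbatim}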
In \figref[b]{globalns}, we showed plots of $n_s$ as a function of $V_0$, at different temperatures, for the interacting gas. In \figref{sup_temps2}, instead, we keep $V_0$ fixed and plot $n_s$ against $T$.
\end{document}
The classical congruence subgroup problem (CSP) asks for, say, $G=SL_{n}\left(\mathbb{Z}\right)$
or $G=GL_{n}\left(\mathbb{Z}\right)$, whether every finite index
subgroup of $G$ contains a principal congruence subgroup, i.e. a
subgroup of the form $G\left(m\right)=\ker\left(G\to GL_{n}\left(\mathbb{Z}/m\mathbb{Z}\right)\right)$
for some $0\neq m\in\mathbb{Z}$. Equivalently, it asks whether the
natural map $\hat{G}\to GL_{n}(\hat{\mathbb{Z}})$ is injective, where
$\hat{G}$ and $\hat{\mathbb{Z}}$ are the profinite completions of
the group $G$ and the ring $\mathbb{Z}$, respectively. More generally,
the CSP asks what is the kernel of this map. It is a classical $19^{\underline{th}}$
century result that the answer is negative for $n=2$. Moreover (but
not so classical, cf. \cite{key-17}, \cite{key-4}), the kernel in
this case is $\hat{F}_{\omega}$ - the free profinite group on a countable
number of generators. On the other hand, it was proved in 1962 by
Mennicke \cite{key-22} and Bass-Lazard-Serre \cite{key-23} that
for $n\geq3$ the answer is affirmative, and the kernel is therefore
trivial.
By the observation $GL_{n}\left(\mathbb{Z}\right)\cong Aut\left(\mathbb{Z}^{n}\right)=Out\left(\mathbb{Z}^{n}\right)$,
the CSP can be generalized as follows: Let $\Gamma$ be a group and
$G\leq Aut\left(\Gamma\right)$ (resp. $G\leq Out\left(\Gamma\right)$).
For a finite index characteristic subgroup $M\leq\Gamma$ denote:
\begin{eqnarray*}
G\left(M\right) & = & \ker\left(G\to Aut\left(\Gamma/M\right)\right)\\
(\textrm{resp.}\,\,\,G\left(M\right) & = & \ker\left(G\to Out\left(\Gamma/M\right)\right)).
\end{eqnarray*}
Such a $G\left(M\right)$ will be called a ``principal congruence
subgroup'' and a finite index subgroup of $G$ which contains $G\left(M\right)$
for some $M$ will be called a ``congruence subgroup''. The CSP
for the pair $\left(G,\Gamma\right)$ asks whether every finite index
subgroup of $G$ is a congruence subgroup. In some sense, the CSP
tries to understand whether every finite quotient of $G$ comes from
a finite quotient of $\Gamma$.
One can easily see that the CSP is equivalent to the question: Is
the congruence map $\hat{G}=\underleftarrow{\lim}G/U\to\underleftarrow{\lim}G/G\left(M\right)$
injective? Here, $U$ ranges over all finite index normal subgroups
of $G$, and $M$ ranges over all finite index characteristic subgroups
of $\Gamma$. When $\Gamma$ is finitely generated, it has only finitely
many subgroups of given index $m$, and thus, the characteristic subgroups:
$M_{m}=\cap\left\{ \Delta\leq\Gamma\,|\,\left[\Gamma:\Delta\right]=m\right\} $
are of finite index in $\Gamma$. Hence, one can write $\hat{\Gamma}=\underleftarrow{\lim}_{m\in\mathbb{N}}\Gamma/M_{m}$
and have\footnote{By the celebrated theorem of Nikolov and Segal which asserts that
every finite index subgroup of a finitely generated profinite group
is open \cite{key-17-1}, the second inequality is actually an equality.
However, we do not need it. }:
\begin{eqnarray*}
\underleftarrow{\lim}G/G\left(M\right) & = & \underleftarrow{\lim}_{m\in\mathbb{N}}G/G\left(M_{m}\right)\leq\underleftarrow{\lim}_{m\in\mathbb{N}}Aut(\Gamma/M_{m})\\
& \leq & Aut(\underleftarrow{\lim}_{m\in\mathbb{N}}(\Gamma/M_{m}))=Aut(\hat{\Gamma})\,\,\,\,(\textrm{resp.}\,\,Out(\hat{\Gamma})).
\end{eqnarray*}
Therefore, when $\Gamma$ is finitely generated, the CSP is equivalent
to the question: Is the congruence map: $\hat{G}\to Aut(\hat{\Gamma})$
(resp. $\hat{G}\to Out(\hat{\Gamma})$) injective? More generally,
the CSP asks what is the kernel $C\left(G,\Gamma\right)$ of this
map. For $G=Aut\left(\Gamma\right)$ we will also use the simpler
notation $C\left(\Gamma\right)=C\left(G,\Gamma\right)$.
The classical congruence subgroup results mentioned above can therefore
be reformulated as $C\left(\mathbb{Z}^{2}\right)=\hat{F}_{\omega}$
while $C\left(\mathbb{Z}^{n}\right)=\left\{ e\right\} $ for $n\geq3$.
So the finite quotients of $GL_{n}\left(\mathbb{Z}\right)$ are closely
related to the finite quotients of $\mathbb{Z}^{n}$ when $n\geq3$,
but the finite quotients of $GL_{2}\left(\mathbb{Z}\right)$ are far
from being understandable in terms of the finite quotients of $\mathbb{Z}^{2}$.
Very few results are known when $\Gamma$ is non-abelian. Most of
the results are related to $\Gamma=\pi_{g,n}$, the fundamental group
of $S_{g,n}$, the closed surface of genus $g$ with $n$ punctures.
In these cases one can take $G=PMod\left(S_{g,n}\right)$, the pure
mapping class group of $S_{g,n}$, and can naturally view it as a
subgroup of $Out\left(\pi_{g,n}\right)$ (cf. \cite{key-20}, chapter
8). Considering these cases, it is known that:
\begin{thm}
\label{thm:MCG}For $g=0,1,2$ and every $n\geq0,1,0$ respectively,
we have: $C\left(PMod\left(S_{g,n}\right),\pi_{g,n}\right)=\left\{ 1\right\} $.
\end{thm}
Note that when $g=1$ and $n=0$, $\pi_{1,0}\cong\mathbb{Z}^{2}$
and $PMod\left(S_{1,0}\right)\cong SL_{2}\left(\mathbb{Z}\right)$,
so: $C\left(PMod\left(S_{1,0}\right),\pi_{1,0}\right)=C\left(SL_{2}\left(\mathbb{Z}\right),\mathbb{Z}^{2}\right)=\hat{F}_{\omega}$.
The cases for $g=0$ were proved in \cite{key-16-1} (see also \cite{key-18}),
the cases for $g=1$ were proved in \cite{key-3} (see also \cite{key-19},
\cite{key-5}), and the cases for $g=2$ were proved in \cite{key-19}
(see also \cite{key-6-1} for the specific case where $g=2$ and $n=0$).
In particular, as $PMod\left(S_{1,1}\right)$ is isomorphic to the
special outer-automorphism group of $F_{2}$, we have an affirmative
answer for the full outer-automorphism group of $F_{2}$, and by some
standard arguments it follows that $C\left(F_{2}\right)$ is actually
trivial (see \cite{key-5}, \cite{key-7}). Note that for every $n>0$,
$\pi_{g,n}\cong F_{2g+n-1}$ = the free group on $2g+n-1$ generators.
Hence, the above solved cases give an affirmative answer for various
subgroups of the outer-automorphism group of finitely generated free
groups, while the CSP for the full $Aut\left(F_{d}\right)$ when $d\geq3$
is still unsettled, and so is the situation with $PMod\left(S_{g,n}\right)$
when $g\geq3$.
All the above settled cases have a common property which plays a crucial
role in the proof of Theorem \ref{thm:MCG}: There is an intrinsic
description of $G$ by an iterative extension process by virtually free
groups (groups which have a finite index free subgroup). Actually,
in these cases, in some sense, we do understand the finite quotients
of $G$, and the CSP tells us that these quotients are closely related
to the finite quotients of $\Gamma$. This situation changes when
we pass to $G=Aut\left(F_{d}\right)$ for $d\geq3$ or $PMod\left(S_{g,n}\right)$
for $g\geq3$. In these cases we do not have a description of $G$
that can help to understand the finite quotients of $G$. So in some
sense, none of the known cases gives us a new understanding of the
finite quotients of $G$. Considering the abelian case, what makes
the result of Mennicke and Bass-Lazard-Serre so special is that it
not only shows that the finite quotients of $GL_{n}\left(\mathbb{Z}\right)$
are related to the finite quotients of $\mathbb{Z}^{n}$, but also
gives us a description of the finite quotients of $GL_{n}\left(\mathbb{Z}\right)$,
which we would not have known without this result.
Denote now the free metabelian group on $n$ generators by $\Phi_{n}=F_{n}/F_{n}''$.
Considering the metabelian case, it was shown in \cite{key-7} (see
also \cite{key-6}) that $C\left(\Phi_{2}\right)=\hat{F}_{\omega}$.
In addition, it was proven there that $C\left(\Phi_{3}\right)\supseteq\hat{F}_{\omega}$.
So, the finite quotients of $Aut\left(\Phi_{2}\right)$ and $Aut\left(\Phi_{3}\right)$
are far from being connected to the finite quotients of $\Phi_{2}$
and $\Phi_{3}$, respectively.
Here comes the main theorem of this paper:
\begin{thm}
\label{thm:main}For every $n\geq4$, $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
is central in $\widehat{IA\left(\Phi_{n}\right)}$, where:
\[
IA\left(\Phi_{n}\right)=\ker\left(Aut\left(\Phi_{n}\right)\to Aut\left(\Phi_{n}/\Phi'_{n}\right)=GL_{n}\left(\mathbb{Z}\right)\right).
\]
\end{thm}
Using the commutative exact diagram (see $\varoint$\ref{sec:Inferences}):
\[
\begin{array}{ccccccc}
\widehat{IA\left(\Phi_{n}\right)} & \to & \widehat{Aut\left(\Phi_{n}\right)} & \to & \widehat{GL_{n}\left(\mathbb{Z}\right)} & \to & 1\\
& \searrow & \downarrow & & \downarrow\\
& & Aut(\hat{\Phi}_{n}) & \to & GL_{n}(\hat{\mathbb{Z}})
\end{array}
\]
and the fact that $\widehat{GL_{n}\left(\mathbb{Z}\right)}\to GL_{n}(\hat{\mathbb{Z}})$
is injective for $n\geq3$, we obtain that $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
is mapped onto $C\left(\Phi_{n}\right)$. Therefore we deduce that:
\begin{thm}
\label{thm:full}For every $n\geq4$, $C\left(\Phi_{n}\right)$ is
abelian.
\end{thm}
This is dramatically different from the cases of $n=2,3$ described
above. Theorem \ref{thm:full} tells us that when $n\geq4$ the situation
changes, and the finite quotients of $Aut\left(\Phi_{n}\right)$ are
closely related to the finite quotients of $\Phi_{n}$ in the following
manner:
\begin{cor}
\label{cor:description}Let $n\geq4$. Then, for every finite index
subgroup $H\leq G=Aut\left(\Phi_{n}\right)$, there exists a finite
index characteristic subgroup $M\leq\Phi_{n}$ and $r\in\mathbb{N}$
such that $G\left(M\right)'G\left(M\right)^{r}\subseteq H$.
\end{cor}
Note that by a theorem of Bachmuth and Mochizuki \cite{key-24}, $Aut\left(F_{n}\right)\to Aut\left(\Phi_{n}\right)$
is surjective for every $n\geq4$, and thus $G=Aut\left(\Phi_{n}\right)$
is finitely generated. Hence, the principal congruence subgroups of
the form $G\left(M\right)$ are finitely generated, and thus, the
subgroups of the form $G\left(M\right)'G\left(M\right)^{r}$ are also
of finite index in $Aut\left(\Phi_{n}\right)$. Therefore, the quotients
of the form $Aut\left(\Phi_{n}\right)/G\left(M\right)'G\left(M\right)^{r}$
describe all the finite quotients of $Aut\left(\Phi_{n}\right)$.
In particular, our theorem gives us a description of the finite quotients
of $Aut\left(\Phi_{n}\right)$ when $n\geq4$ - just like the theorem
of \cite{key-22} and \cite{key-23} gives for $GL_{n}\left(\mathbb{Z}\right)$
when $n\geq3$. Corollary \ref{cor:description} obviously does not
hold for $n=2,3$. So, the picture is that while the dichotomy in
the abelian case is between $n=2$ and $n\geq3$, in the metabelian
case we have a dichotomy between $n=2,3$ and $n\geq4$.
In \cite{key-24-1}, Kassabov and Nikolov showed that $\ker(\widehat{SL_{n}\left(\mathbb{Z}\left[x\right]\right)}\to SL_{n}(\widehat{\mathbb{Z}\left[x\right]}))$
is central and not finitely generated, when $n\geq3$. In \cite{key-14}
we use their techniques and an interesting surjective representation:
\[
IA\left(\Phi_{n}\right)\twoheadrightarrow\ker(GL_{n-1}\left(\mathbb{Z}[x^{\pm1}]\right)\overset{x\to1}{\longrightarrow}GL_{n-1}\left(\mathbb{Z}\right))
\]
to show also that:
\begin{thm}
\label{cor:not finitely}For every $n\geq4$, $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
is not finitely generated.
\end{thm}
We remark that despite the result of the latter theorem, we do not
know whether $C\left(\Phi_{n}\right)$ is also not finitely generated.
In fact, we cannot even prove at this point that it is non-trivial
(for more, see $\varoint$\ref{sec:Inferences}).
The main line of the proof of Theorem \ref{thm:main} is as follows:
For $G=IA\left(\Phi_{n}\right)$ we first take the principal congruence
subgroups $G\left(M_{n,m}\right)$ where $M_{n,m}=\left(\Phi'_{n}\Phi_{n}^{m}\right)'\left(\Phi'_{n}\Phi_{n}^{m}\right)^{m}$.
By \cite{key-6}, $\hat{\Phi}_{n}=\underleftarrow{\lim}\left(\Phi_{n}/M_{n,m}\right)$,
and thus we deduce that the subgroups of the form $G\left(M_{n,m^{4}}\right)$
are enough to represent the congruence subgroups of $IA(\Phi_{n})$
in the sense that every congruence subgroup contains one of these
principal congruence subgroups. Then, we follow the steps of the theorem
of Bachmuth and Mochizuki \cite{key-24}, showing that $Aut\left(F_{n}\right)\to Aut\left(\Phi_{n}\right)$
is surjective for $n\geq4$, and we try to build $G\left(M_{n,m^{4}}\right)$
with elements of $\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $.
This process, combined with some classical results from algebraic
K-theory, enables us to show that
\[
\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle G\left(M_{n,m^{4}}\right)/\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle
\]
is finite and central in $IA\left(\Phi_{n}\right)/\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $,
and thus, $\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $
is of finite index in $IA\left(\Phi_{n}\right)$. In particular, as
every normal subgroup of index $m$ in $IA\left(\Phi_{n}\right)$
contains $\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $,
we deduce that the groups of the form $\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $
are enough to represent the finite index subgroups of $IA\left(\Phi_{n}\right)$.
From here, it follows easily that $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
is central in $\widehat{IA\left(\Phi_{n}\right)}$.
We hope that the solution of the free metabelian case will help to
understand some more cases of non-abelian groups, such as the automorphism
group of a free group and the mapping class group of a surface. The
immediate next challenges are the automorphism groups of free solvable
groups.
Let us point out that, as remarked in $\varoint$5 of \cite{key-7},
one can deduce from Theorem \ref{thm:full} that for every $n\geq4$,
$Aut\left(\Phi_{n}\right)$ is not large, i.e. it does not contain a finite
index subgroup which can be mapped onto a free group. This is in contrast
with $Aut\left(\Phi_{2}\right)$ and $Aut\left(\Phi_{3}\right)$ which
are large.
The paper is organized as follows: In $\varoint$\ref{sec:structure}
we present some needed notations and discuss $IA\left(\Phi_{n}\right)$
and some of its subgroups. Then, up to a main lemma, in $\varoint$\ref{sec:structure-1}
we prove the main theorem of the paper, Theorem \ref{thm:main}. In
$\varoint$\ref{sec:elementary} we compute some elements of $\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $
which we use in the proof of the main lemma. In $\varoint$\ref{sec:The-main-lemma}
we prove the aforementioned main lemma. We end the paper with the proof
of Theorem \ref{thm:full}, and some remarks on the problem of computing
$C\left(\Phi_{n}\right)$ and $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$.
\textbf{Acknowledgements:} I wish to offer my deepest thanks to my
great supervisor Prof. Alexander Lubotzky for his sensitive and devoted
guidance, and to the Rudin foundation trustees for their generous
support during the period of the research.
\section{\label{sec:structure}Some properties of $IA\left(\Phi_{n}\right)$
and its subgroups}
Let $G=IA\left(\Phi_{n}\right)=\ker\left(Aut\left(\Phi_{n}\right)\to Aut\left(\Phi_{n}/\Phi'_{n}\right)=GL_{n}\left(\mathbb{Z}\right)\right)$.
We start by recalling some of the properties of $G=IA\left(\Phi_{n}\right)$
and its subgroups, as presented in Section 3 in \cite{key-14}. We
also refer the reader to \cite{key-14} for the proofs of the statements
in this section. We start with the following notations:
\begin{itemize}
\item $\Phi_{n}=F_{n}/F''_{n}$= the free metabelian group on $n$ elements.
Here $F''_{n}$ denotes the second derived subgroup of $F_{n}$, the free
group on $n$ elements.
\item $\Phi_{n,m}=\Phi_{n}/M_{n,m}$, where $M_{n,m}=\left(\Phi'_{n}\Phi_{n}^{m}\right)'\left(\Phi'_{n}\Phi_{n}^{m}\right)^{m}$.
\item $IG_{n,m}=G(M_{n,m})=\ker\left(IA\left(\Phi_{n}\right)\to Aut\left(\Phi_{n,m}\right)\right).$
\item $IA_{n}^{m}=\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $.
\item $R_{n}=\mathbb{Z}[\mathbb{Z}^{n}]=\mathbb{Z}[x_{1}^{\pm1},\ldots,x_{n}^{\pm1}]$
where $x_{1},\ldots,x_{n}$ are the generators of $\mathbb{Z}^{n}$.
\item $\mathbb{Z}_{m}=\mathbb{Z}/m\mathbb{Z}$.
\item $\sigma_{i}=x_{i}-1$ for $1\leq i\leq n$. We also denote by $\vec{\sigma}$
the column vector which has $\sigma_{i}$ in its $i$-th entry.
\item $\mathfrak{A}_{n}=\sum_{i=1}^{n}\sigma_{i}R_{n}$ = the augmentation
ideal of $R_{n}$.
\item $H_{n,m}=\ker\left(R_{n}\to\mathbb{Z}_{m}[\mathbb{Z}_{m}^{n}]\right)=\sum_{i=1}^{n}\left(x_{i}^{m}-1\right)R_{n}+mR_{n}$.
\end{itemize}
By the well known Magnus embedding (see \cite{key-36}, \cite{key-37},
\cite{key-35-1}), one can identify $\Phi_{n}$ with the matrix group:
\[
\Phi_{n}=\left\{ \left(\begin{array}{cc}
g & a_{1}t_{1}+\ldots+a_{n}t_{n}\\
0 & 1
\end{array}\right)\,|\,g\in\mathbb{Z}^{n},\,a_{i}\in R_{n},\,g-1=\sum_{i=1}^{n}a_{i}(x_{i}-1)\right\}
\]
where the $t_{i}$ form a free basis of a free $R_{n}$-module, under the identification
of the generators of $\Phi_{n}$ with the matrices
\[
\left(\begin{array}{cc}
x_{i} & t_{i}\\
0 & 1
\end{array}\right)\,\,\,\,\,1\leq i\leq n.
\]
Moreover, for every $\alpha\in IA\left(\Phi_{n}\right)$, one can
describe $\alpha$ by its action on the generators of $\Phi_{n}$,
by:
\[
\alpha:\left(\begin{array}{cc}
x_{i} & t_{i}\\
0 & 1
\end{array}\right)\mapsto\left(\begin{array}{cc}
x_{i} & a_{i,1}t_{1}+\ldots+a_{i,n}t_{n}\\
0 & 1
\end{array}\right)
\]
and this description gives an injective homomorphism (see \cite{key-13},
\cite{key-36}):
\begin{eqnarray*}
IA\left(\Phi_{n}\right) & \hookrightarrow & GL_{n}\left(R_{n}\right)\\
\textrm{defined by}\,\,\,\,\alpha & \mapsto & \left(\begin{array}{ccc}
a_{1,1} & \cdots & a_{1,n}\\
\vdots & & \vdots\\
a_{n,1} & \cdots & a_{n,n}
\end{array}\right)
\end{eqnarray*}
which gives an identification of $IA\left(\Phi_{n}\right)$ with the
subgroup:
\begin{eqnarray*}
IA\left(\Phi_{n}\right) & = & \left\{ A\in GL_{n}\left(R_{n}\right)\,|\,A\vec{\sigma}=\vec{\sigma}\right\} \\
& = & \left\{ I_{n}+A\in GL_{n}\left(R_{n}\right)\,|\,A\vec{\sigma}=\vec{0}\right\} .
\end{eqnarray*}
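For illustration (this example is added by us and is not part of the original argument), consider the inner automorphism $\alpha\in IA\left(\Phi_{n}\right)$ defined by $x_{1}\mapsto x_{2}^{-1}x_{1}x_{2}$ and $x_{i}\mapsto x_{i}$ for $i\geq2$. A direct computation in the Magnus embedding gives
\[
\alpha:\left(\begin{array}{cc}
x_{1} & t_{1}\\
0 & 1
\end{array}\right)\mapsto\left(\begin{array}{cc}
x_{1} & x_{2}^{-1}t_{1}+x_{2}^{-1}\sigma_{1}t_{2}\\
0 & 1
\end{array}\right),
\]
so the first row of the corresponding matrix is $\left(x_{2}^{-1},x_{2}^{-1}\sigma_{1},0,\ldots,0\right)$, while the remaining rows are those of $I_{n}$. Indeed,
\[
x_{2}^{-1}\sigma_{1}+x_{2}^{-1}\sigma_{1}\sigma_{2}=x_{2}^{-1}\sigma_{1}\left(1+\sigma_{2}\right)=x_{2}^{-1}\sigma_{1}x_{2}=\sigma_{1},
\]
so this matrix fixes $\vec{\sigma}$, in accordance with the identification above.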
One can find the proof of the following proposition in \cite{key-14}
(Propositions 3.1 and 3.2):
\begin{prop}
\label{prop:augmentation}Let $I_{n}+A\in IA\left(\Phi_{n}\right)$.
Then:\end{prop}
\begin{itemize}
\item \textit{If one denotes the entries of $A$ by $a_{k,l}$ for $1\leq k,l\leq n$,
then for every $1\leq k,l\leq n$, $a_{k,l}\in\sum_{l\neq i=1}^{n}\sigma_{i}R_{n}\subseteq\mathfrak{A}_{n}$.}
\item \textit{$\det\left(I_{n}+A\right)$ is of the form: $\det\left(I_{n}+A\right)=\prod_{r=1}^{n}x_{r}^{s_{r}}$
for some $s_{r}\in\mathbb{Z}$.}
\end{itemize}
Consider now the map:
\[
\begin{array}{c}
\Phi_{n}=\left\{ \left(\begin{array}{cc}
g & a_{1}t_{1}+\ldots+a_{n}t_{n}\\
0 & 1
\end{array}\right)\,|\,g\in\mathbb{Z}^{n},\,a_{i}\in R_{n},\,g-1=\sum_{i=1}^{n}a_{i}(x_{i}-1)\right\} \\
\downarrow\\
\left\{ \left(\begin{array}{cc}
g & a_{1}t_{1}+\ldots+a_{n}t_{n}\\
0 & 1
\end{array}\right)\,|\,g\in\mathbb{Z}_{m}^{n},\,a_{i}\in\mathbb{Z}_{m}[\mathbb{Z}_{m}^{n}],\,g-1=\sum_{i=1}^{n}a_{i}(x_{i}-1)\right\}
\end{array}
\]
which is induced by the projections $\mathbb{Z}^{n}\to\mathbb{Z}_{m}^{n}$,
$R_{n}=\mathbb{Z}[\mathbb{Z}^{n}]\to\mathbb{Z}_{m}[\mathbb{Z}_{m}^{n}]$.
Using a result of Romanovski\u{\i} \cite{key-40}, it is shown in \cite{key-6}
that this map is surjective and that $\Phi_{n,m}$ is canonically
isomorphic to its image. Therefore, we can identify the principal
congruence subgroup of $IA\left(\Phi_{n}\right)$, $IG_{n,m}$, with:
\begin{eqnarray*}
IG_{n,m} & = & \left\{ A\in\ker\left(GL_{n}\left(R_{n}\right)\to GL_{n}\left(\mathbb{Z}_{m}[\mathbb{Z}_{m}^{n}]\right)\right)\,|\,A\vec{\sigma}=\vec{\sigma}\right\} \\
& = & \left\{ I_{n}+A\in GL_{n}\left(R_{n},H_{n,m}\right)\,|\,A\vec{\sigma}=\vec{0}\right\} .
\end{eqnarray*}
Let us proceed with the following definitions:
\begin{defn}
Let $A\in GL_{n}\left(R_{n}\right)$, and for $1\leq i\leq n$, denote
by $A_{i,i}$ the minor obtained from $A$ by erasing its $i$-th
row and $i$-th column. Now, for every $1\leq i\leq n$, define the
subgroup $IGL_{n-1,i}\leq IA\left(\Phi_{n}\right)$, by:
\[
IGL_{n-1,i}=\left\{ I_{n}+A\in IA\left(\Phi_{n}\right)\,|\,\begin{array}{c}
\textrm{The\,\,}i\textrm{-th\,\, row\,\, of\,\,}A\textrm{\,\, is\,\,0,}\\
I_{n-1}+A_{i,i}\in GL_{n-1}\left(R_{n},\sigma_{i}R_{n}\right)
\end{array}\right\} .
\]
\end{defn}
The following proposition is proven in \cite{key-14} (Proposition 3.4):
\begin{prop}
\label{prop:iso}For every $1\leq i\leq n$ we have: $IGL_{n-1,i}\cong GL_{n-1}\left(R_{n},\sigma_{i}R_{n}\right)$.
\end{prop}
We recall the following definitions from Algebraic K-Theory:
\begin{defn}
Let $R$ be a commutative ring (with identity), $H\vartriangleleft R$
an ideal, and $d\in\mathbb{N}$. Then:\end{defn}
\begin{itemize}
\item $E_{d}\left(R\right)=\left\langle I_{d}+rE_{i,j}\,|\,r\in R,\,1\leq i\neq j\leq d\right\rangle \leq SL_{d}\left(R\right)$
where $E_{i,j}$ is the matrix which has $1$ in the $\left(i,j\right)$-th
entry and $0$ elsewhere.
\item $SL_{d}\left(R,H\right)=\ker\left(SL_{d}\left(R\right)\to SL_{d}\left(R/H\right)\right)$.
\item $E_{d}\left(R,H\right)$ = the normal subgroup of $E_{d}\left(R\right)$,
which is generated as a normal subgroup by the elementary matrices
of the form $I_{d}+hE_{i,j}$ for $h\in H$.
\end{itemize}
Under the above identification of $IGL_{n-1,i}$ with $GL_{n-1}\left(R_{n},\sigma_{i}R_{n}\right)$,
for every $1\leq i\leq n$ we define:
\begin{defn}
Let $H\vartriangleleft R_{n}$. Then:
\begin{eqnarray*}
ISL_{n-1,i}\left(H\right) & = & IGL_{n-1,i}\cap SL_{n-1}\left(R_{n},H\right)\\
IE_{n-1,i}\left(H\right) & = & IGL_{n-1,i}\cap E{}_{n-1}\left(R_{n},H\right)\leq ISL_{n-1,i}\left(H\right).
\end{eqnarray*}
\end{defn}
\section{\label{sec:structure-1}The main theorem's proof}
Using the above notations we prove in $\varoint$\ref{sec:The-main-lemma}
the following main lemma:
\begin{lem}
\label{thm:stage 1}For every $n\geq4$ and $m\in\mathbb{N}$ one
has:
\begin{eqnarray*}
IG_{n,m^{2}} & \subseteq & IA_{n}^{m}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(\sigma_{i}H_{n,m}\right)\\
& = & IA_{n}^{m}\cdot ISL_{n-1,1}\left(\sigma_{1}H_{n,m}\right)\cdot\ldots\cdot ISL_{n-1,n}\left(\sigma_{n}H_{n,m}\right).
\end{eqnarray*}
\end{lem}
Observe that it follows that when $n\geq4$, for every $m\in\mathbb{N}$:
\begin{eqnarray*}
IG_{n,m^{4}} & \subseteq & IA_{n}^{m^{2}}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(\sigma_{i}H_{n,m^{2}}\right)\\
& \subseteq & IA_{n}^{m}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(\sigma_{i}H_{n,m^{2}}\right)\\
& \subseteq & IA_{n}^{m}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(H_{n,m^{2}}\right).
\end{eqnarray*}
The following Lemma is proved in \cite{key-14}, using classical results
from Algebraic K-theory (Lemma 7.1 in \cite{key-14}):
\begin{lem}
\label{thm:stage 2}For every $n\geq4$, $1\leq i\leq n$ and $m\in\mathbb{N}$
one has:
\[
IE_{n-1,i}\left(H_{n,m^{2}}\right)\subseteq IA_{n}^{m}.
\]
\end{lem}
Let us now quote the following proposition (see \cite{key-14}, Corollary
2.3):
\begin{prop}
\label{cor:important}Let $R$ be a commutative ring, $H\vartriangleleft R$
an ideal of finite index, and $d\geq3$. Assume also that $E_{d}\left(R\right)=SL_{d}\left(R\right)$.
Then:
\[
SK_{1}\left(R,H;d\right)=SL_{d}\left(R,H\right)/E{}_{d}\left(R,H\right)
\]
is a finite group which is central in $GL_{d}\left(R\right)/E{}_{d}\left(R,H\right)$.
\end{prop}
Now, according to Proposition \ref{cor:important} and the fact that
$E_{d}\left(R_{n}\right)=SL_{d}\left(R_{n}\right)$ for every $d\geq3$
\cite{key-33}, we obtain that for every $n\geq4$:
\[
SL_{n-1}\left(R_{n},H_{n,m}\right)/E{}_{n-1}\left(R_{n},H_{n,m}\right)=SK_{1}\left(R_{n},H_{n,m};n-1\right)
\]
is a finite group. Thus
\[
ISL_{n-1,i}\left(H_{n,m}\right)/IE_{n-1,i}\left(H_{n,m}\right)\leq SL_{n-1}\left(R_{n},H_{n,m}\right)/E{}_{n-1}\left(R_{n},H_{n,m}\right)
\]
is also a finite group. Hence, the conclusion from Lemmas \ref{thm:stage 1}
and \ref{thm:stage 2} is that for every $m\in\mathbb{N}$, one can
cover $IG_{n,m^{4}}$ with a finite number of cosets of $IA_{n}^{m}$.
As $IG_{n,m^{4}}$ is obviously a finite index subgroup of $IA\left(\Phi_{n}\right)$,
we deduce that $IA_{n}^{m}$ is also a finite index subgroup of $IA\left(\Phi_{n}\right)$.
Therefore, as every normal subgroup of $IA\left(\Phi_{n}\right)$
of index $m$ contains $IA_{n}^{m}$, we deduce that one can write explicitly
$\widehat{IA\left(\Phi_{n}\right)}=\underleftarrow{\lim}\left(IA\left(\Phi_{n}\right)/IA_{n}^{m}\right)$.
On the other hand, it is proven in \cite{key-6} that $\hat{\Phi}_{n}=\underleftarrow{\lim}\Phi_{n,m}$,
and thus:
\begin{cor}
For every $n\geq4$:
\begin{eqnarray*}
C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right) & = & \ker\left(\underleftarrow{\lim}\left(IA\left(\Phi_{n}\right)/IA_{n}^{m}\right)\to\underleftarrow{\lim}\left(IA\left(\Phi_{n}\right)/IG_{n,m}\right)\right)\\
& = & \ker\left(\underleftarrow{\lim}\left(IA\left(\Phi_{n}\right)/IA_{n}^{m}\right)\to\underleftarrow{\lim}\left(IA\left(\Phi_{n}\right)/IG_{n,m^{4}}\right)\right)\\
& = & \underleftarrow{\lim}\left(IA_{n}^{m}\cdot IG_{n,m^{4}}/IA_{n}^{m}\right).
\end{eqnarray*}
\end{cor}
Now, Proposition \ref{cor:important} gives us also that for every
$m\in\mathbb{N}$ and $n\geq4$, the subgroup $SK_{1}\left(R_{n},H_{n,m};n-1\right)$
is central in $GL_{n-1}\left(R_{n}\right)/E{}_{n-1}\left(R_{n},H_{n,m}\right)$.
This fact is used in \cite{key-14} to prove that (see the arguments
in Section 5 of \cite{key-14}):
\begin{prop}
\label{thm:stage 3}For every $n\geq4$, $m\in\mathbb{N}$ and $1\leq i\leq n$
the subgroup:
\[
IA_{n}^{m}\cdot ISL_{n-1,i}\left(\sigma_{i}H_{n,m^{2}}\right)/IA_{n}^{m}
\]
is central in $IA\left(\Phi_{n}\right)/IA_{n}^{m}$.\end{prop}
\begin{cor}
For every $n\geq4$ and $m\in\mathbb{N}$ the elements of the set
\[
IA_{n}^{m}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(\sigma_{i}H_{n,m^{2}}\right)/IA_{n}^{m}
\]
belong to the center of $IA\left(\Phi_{n}\right)/IA_{n}^{m}$.
\end{cor}
The conclusion from the latter corollary is that for every $n\geq4$
and $m\in\mathbb{N}$, the set
\[
IA_{n}^{m}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(\sigma_{i}H_{n,m^{2}}\right)/IA_{n}^{m}
\]
is an \emph{abelian group} which is contained in the center of $IA\left(\Phi_{n}\right)/IA_{n}^{m}$.
In particular, $IA_{n}^{m}\cdot IG_{n,m^{4}}/IA_{n}^{m}$ is contained
in the center of $IA\left(\Phi_{n}\right)/IA_{n}^{m}$, and thus,
$C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$ is in the center
of $\widehat{IA\left(\Phi_{n}\right)}$. This finishes, up to the
proof of Lemma \ref{thm:stage 1}, the proof of Theorem \ref{thm:main}.
So it remains to prove Lemma \ref{thm:stage 1}. But before we start
to prove this lemma, we need to compute some elements of $IA_{n}^{m}$.
We will do this in the following section.
\section{\label{sec:elementary}Some elementary elements of $\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $}
In this section we compute some elements of $IA_{n}^{m}=\left\langle IA\left(\Phi_{n}\right)^{m}\right\rangle $
which are needed in the proof of Lemma \ref{thm:stage 1}. As one
can see below, we separate the elementary elements into two types. In
addition, we separate the treatment of the elements of type 1 into
two parts. We hope these separations will make the process clearer.
In addition to the previous notations, in this section, and also later
on, we will use the notation:
\[
\mu_{r,m}=\sum_{i=0}^{m-1}x_{r}^{i}\,\,\,\,\textrm{for}\,\,\,\,1\leq r\leq n.
\]
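For later reference, we record here (added for convenience) the elementary identity behind this notation:
\[
\sigma_{r}\mu_{r,m}=\left(x_{r}-1\right)\sum_{i=0}^{m-1}x_{r}^{i}=x_{r}^{m}-1,
\]
so that $H_{n,m}=\sum_{r=1}^{n}\left(x_{r}^{m}-1\right)R_{n}+mR_{n}=\sum_{r=1}^{n}\sigma_{r}\mu_{r,m}R_{n}+mR_{n}$.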
\subsection{Elementary elements of type 1}
\begin{prop}
\label{prop:type 1.1}Let $n\geq3$, $1\leq u\leq n$ and $m\in\mathbb{N}$.
Denote by $\vec{e}_{i}$ the $i$-th standard row vector. Then, the
elements of $IA\left(\Phi_{n}\right)$ of the form (the following
notation means that the matrix is similar to the identity matrix,
except the entries in the $u$-th row):
\[
\left(\begin{array}{ccccccc}
& I_{u-1} & & 0 & & 0\\
a_{u,1} & \cdots & a_{u,u-1} & 1 & a_{u,u+1} & \cdots & a_{u,n}\\
& 0 & & 0 & & I_{n-u}
\end{array}\right)\leftarrow u\textrm{-th\,\,\,\,\ row}
\]
when $\left(a_{u,1},\ldots,a_{u,u-1},0,a_{u,u+1},\ldots,a_{u,n}\right)$
is a linear combination of the vectors:
\begin{eqnarray*}
& 1. & \left\{ m\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\,|\,i,j\neq u,\,i\neq j\right\} \\
& 2. & \left\{ \sigma_{k}\mu_{k,m}\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\,|\,i,j,k\neq u,\,i\neq j\right\} \\
& 3. & \left\{ \sigma_{k}\mu_{i,m}\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\,|\,i,j,k\neq u,\,i\neq j,\,k\neq j\right\}
\end{eqnarray*}
with coefficients in $R_{n}$, belong to $IA_{n}^{m}$.
\end{prop}
Before proving this proposition, we present some more elements of
this type. Note that for the following proposition we assume $n\geq4$:
\begin{prop}
\label{prop:type 1.2}Let $n\geq4$, $1\leq u\leq n$ and $m\in\mathbb{N}$.
Then, the elements of $IA\left(\Phi_{n}\right)$ of the form:
\[
\left(\begin{array}{ccccccc}
& I_{u-1} & & 0 & & 0\\
a_{u,1} & \cdots & a_{u,u-1} & 1 & a_{u,u+1} & \cdots & a_{u,n}\\
& 0 & & 0 & & I_{n-u}
\end{array}\right)\leftarrow u\textrm{-th\,\,\,\,\ row}
\]
when $\left(a_{u,1},\ldots,a_{u,u-1},0,a_{u,u+1},\ldots,a_{u,n}\right)$
is a linear combination of the vectors:
\begin{eqnarray*}
& 1. & \left\{ \sigma_{u}^{2}\mu_{u,m}\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\,|\,i,j\neq u,\,i\neq j\right\} \\
& 2. & \left\{ \sigma_{u}\sigma_{j}\mu_{i,m}\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\,|\,i,j\neq u,\,i\neq j\right\}
\end{eqnarray*}
with coefficients in $R_{n}$, belong to $IA_{n}^{m}$.\end{prop}
\begin{proof}
(of Proposition \ref{prop:type 1.1}) Without loss of generality,
we assume that $u=1$. Observe now that for every $a_{i},b_{i}\in R_{n}$
for $2\leq i\leq n$ one has:
\[
\left(\begin{array}{cccc}
1 & a_{2} & \cdots & a_{n}\\
0 & & I_{n-1}
\end{array}\right)\left(\begin{array}{cccc}
1 & b_{2} & \cdots & b_{n}\\
0 & & I_{n-1}
\end{array}\right)=\left(\begin{array}{cccc}
1 & a_{2}+b_{2} & \cdots & a_{n}+b_{n}\\
0 & & I_{n-1}
\end{array}\right).
\]
Hence, it is enough to prove that the elements of the following forms
belong to $IA_{n}^{m}$ (when we write $a\vec{e}_{i}$ we mean that
the entry of the $i$-th column in the first row is $a$):
\begin{eqnarray*}
1. & \left(\begin{array}{cc}
1 & mf\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right) & i,j\neq1,\,i\neq j,\,f\in R_{n}\\
2. & \left(\begin{array}{cc}
1 & \sigma_{k}\mu_{k,m}f\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right) & i,j,k\neq1,\,i\neq j,\,f\in R_{n}\\
3. & \left(\begin{array}{cc}
1 & \sigma_{k}\mu_{i,m}f\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right) & i,j,k\neq1,\,i\neq j,\,k\neq j,\,f\in R_{n}.
\end{eqnarray*}
We start with the elements of form 1. Here we have:
\[
\left(\begin{array}{cc}
1 & mf\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right)=\left(\begin{array}{cc}
1 & f\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right)^{m}\in IA_{n}^{m}.
\]
We pass to the elements of form 2. In this case we have:
\begin{eqnarray*}
IA_{n}^{m} & \ni & \left[\left(\begin{array}{cc}
1 & f\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right)^{-1},\left(\begin{array}{cc}
x_{k} & -\sigma_{1}\vec{e}_{k}\\
0 & I_{n-1}
\end{array}\right)^{m}\right]\\
& = & \left(\begin{array}{cc}
1 & \sigma_{k}\mu_{k,m}f\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right).
\end{eqnarray*}
We finish with the elements of form 3. If $k=i$, it is a special
case of the previous case, so we assume $k\neq i$. So we assume that
$i,j,k$ are all different from each other and $i,j,k\neq1$ - observe
that this case is interesting only when $n\geq4$. The computation
here is more complicated than in the previous cases, so we will demonstrate
it for the special case: $n=4$, $i=2$, $j=3$, $k=4$. It is clear
that, symmetrically, with a similar argument, the same holds in general
when $n\geq4$ for every $i,j,k\neq1$ which are different from each other.
So:
\begin{eqnarray*}
IA_{4}^{m} & \ni & \left[\left(\begin{array}{cccc}
1 & 0 & -\sigma_{4}f & \sigma_{3}f\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right),\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & -\sigma_{3} & x_{2} & 0\\
0 & 0 & 0 & 1
\end{array}\right)^{-m}\right]\\
& = & \left(\begin{array}{cccc}
1 & -\sigma_{4}f\mu_{2,m}\sigma_{3} & \sigma_{4}f\sigma_{2}\mu_{2,m} & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right).
\end{eqnarray*}
\end{proof}
We pass now to the proof of Proposition \ref{prop:type 1.2}.
\begin{proof}
(of Proposition \ref{prop:type 1.2}) Also here, without loss of generality,
we assume that $u=1$. Thus, all we need to show is that the
elements of the following forms also belong to $IA_{n}^{m}$:
\begin{eqnarray*}
1. & \left(\begin{array}{cc}
1 & \sigma_{1}^{2}\mu_{1,m}f\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right) & i,j\neq1,\,i\neq j,\,f\in R_{n}\\
2. & \left(\begin{array}{cc}
1 & \sigma_{1}\sigma_{j}\mu_{i,m}f\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
0 & I_{n-1}
\end{array}\right) & i,j\neq1,\,i\neq j,\,f\in R_{n}.
\end{eqnarray*}
Also here, to simplify the notations, we will demonstrate the proof
in the special case: $n=4$, $i=2$, $j=3$. We start with the first
form. From Proposition \ref{prop:type 1.1} we have (an element of
form 2 in Proposition \ref{prop:type 1.1}):
\[
IA_{4}^{m}\ni\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & \sigma_{3}\sigma_{1}\mu_{1,m}f & -\sigma_{2}\sigma_{1}\mu_{1,m}f & 1
\end{array}\right).
\]
Therefore, we also have:
\begin{eqnarray*}
IA_{4}^{m} & \ni & \left[\left(\begin{array}{cccc}
x_{4} & 0 & 0 & -\sigma_{1}\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right),\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & \sigma_{3}\sigma_{1}\mu_{1,m}f & -\sigma_{2}\sigma_{1}\mu_{1,m}f & 1
\end{array}\right)\right]\\
& = & \left(\begin{array}{cccc}
1 & -\sigma_{3}\sigma_{1}^{2}\mu_{1,m}f & \sigma_{2}\sigma_{1}^{2}\mu_{1,m}f & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right).
\end{eqnarray*}
We pass to the elements of form 2. From Proposition \ref{prop:type 1.1}
we have (an element of form 3 in Proposition \ref{prop:type 1.1}):
\[
IA_{4}^{m}\ni\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & \sigma_{1}\sigma_{3}\mu_{2,m}f & -\sigma_{1}\sigma_{2}\mu_{2,m}f & 1
\end{array}\right)
\]
and therefore, we have:
\begin{eqnarray*}
IA_{4}^{m} & \ni & \left[\left(\begin{array}{cccc}
1 & 0 & \sigma_{4} & -\sigma_{3}\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right),\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & \sigma_{1}\sigma_{3}\mu_{2,m}f & -\sigma_{1}\sigma_{2}\mu_{2,m}f & 1
\end{array}\right)\right]\\
& = & \left(\begin{array}{cccc}
1 & -\sigma_{1}\sigma_{3}^{2}\mu_{2,m}f & \sigma_{3}\sigma_{1}\sigma_{2}\mu_{2,m}f & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right).
\end{eqnarray*}
\end{proof}
\subsection{Elementary elements of type 2}
\begin{prop}
\label{prop:type 2}Let $n\geq4$, $1\leq u<v\leq n$ and $m\in\mathbb{N}$.
Then, the elements of $IA\left(\Phi_{n}\right)$ of the form:
\[
\left(\begin{array}{ccccc}
I_{u-1} & 0 & 0 & 0 & 0\\
0 & 1+\sigma_{u}\sigma_{v}f & 0 & -\sigma_{u}^{2}f & 0\\
0 & 0 & I_{v-u-1} & 0 & 0\\
0 & \sigma_{v}^{2}f & 0 & 1-\sigma_{u}\sigma_{v}f & 0\\
0 & 0 & 0 & 0 & I_{n-v}
\end{array}\right)\begin{array}{c}
\leftarrow u\textrm{-th\,\,\,\,\ row}\\
\\
\leftarrow v\textrm{-th\,\,\,\,\ row}
\end{array}
\]
for $f\in H_{n,m}$, belong to $IA_{n}^{m}$.\end{prop}
\begin{proof}
As before, to simplify the notations we will demonstrate the proof
in the case: $n=4$, $u=1$ and $v=2$, and it will be clear from
the computation that the same holds in the general case, provided
$n\geq4$.
First observe that for every $f,g\in R_{n}$ we have:
\[
\left(\begin{array}{cccc}
1+\sigma_{1}\sigma_{2}f & -\sigma_{1}^{2}f & 0 & 0\\
\sigma_{2}^{2}f & 1-\sigma_{1}\sigma_{2}f & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right)\left(\begin{array}{cccc}
1+\sigma_{1}\sigma_{2}g & -\sigma_{1}^{2}g & 0 & 0\\
\sigma_{2}^{2}g & 1-\sigma_{1}\sigma_{2}g & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right)
\]
\[
=\left(\begin{array}{cccc}
1+\sigma_{1}\sigma_{2}\left(f+g\right) & -\sigma_{1}^{2}\left(f+g\right) & 0 & 0\\
\sigma_{2}^{2}\left(f+g\right) & 1-\sigma_{1}\sigma_{2}\left(f+g\right) & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right)
\]
so it is enough to consider the cases $f\in mR_{4}$ and $f\in\sigma_{r}\mu_{r,m}R_{4}$
for $1\leq r\leq4$, separately. Consider now the following computation.
For an arbitrary $f\in R_{n}$ we have:
\begin{eqnarray*}
& & \left[\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
-\sigma_{2}f & \sigma_{1}f & 0 & 1
\end{array}\right),\left(\begin{array}{cccc}
x_{4} & 0 & 0 & -\sigma_{1}\\
0 & x_{4} & 0 & -\sigma_{2}\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right)^{-1}\right]\\
& & \cdot\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
-\sigma_{4}\sigma_{2}f & \sigma_{4}\sigma_{1}f & 0 & 1
\end{array}\right)=\left(\begin{array}{cccc}
1+\sigma_{1}\sigma_{2}f & -\sigma_{1}^{2}f & 0 & 0\\
\sigma_{2}^{2}f & 1-\sigma_{1}\sigma_{2}f & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right).
\end{eqnarray*}
Therefore, we conclude that if:
\[
\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
-\sigma_{4}\sigma_{2}f & \sigma_{4}\sigma_{1}f & 0 & 1
\end{array}\right),\,\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
-\sigma_{2}f & \sigma_{1}f & 0 & 1
\end{array}\right)\in IA_{4}^{m}
\]
then also:
\[
\left(\begin{array}{cccc}
1+\sigma_{1}\sigma_{2}f & -\sigma_{1}^{2}f & 0 & 0\\
\sigma_{2}^{2}f & 1-\sigma_{1}\sigma_{2}f & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}\right)\in IA_{4}^{m}.
\]
Thus, the cases $f\in mR_{4}$ and $f\in\sigma_{r}\mu_{r,m}R_{4}$
for $r\neq4$, are obtained immediately from Proposition \ref{prop:type 1.1}.
Hence, it remains to deal with the case $f\in\sigma_{r}\mu_{r,m}R_{4}$
for $r=4$. However, it is easy to see that by switching the roles
of $3$ and $4$, the remaining case is also obtained by similar arguments.
\end{proof}
\section{\label{sec:The-main-lemma}A main lemma}
In this section we prove Lemma \ref{thm:stage 1} which states that
for every $n\geq4$ and $m\in\mathbb{N}$ we have:
\begin{eqnarray*}
IG_{n,m^{2}} & \subseteq & IA_{n}^{m}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(\sigma_{i}H_{n,m}\right)\\
& = & IA_{n}^{m}\cdot ISL_{n-1,1}\left(\sigma_{1}H_{n,m}\right)\cdot\ldots\cdot ISL_{n-1,n}\left(\sigma_{n}H_{n,m}\right).
\end{eqnarray*}
The proof will be presented in a few stages - each of which will have
a separate subsection. In this section $n\geq4$ will be constant,
so we simplify the notation and write:
\[
\begin{array}{ccccc}
R=R_{n}, & \mathfrak{A}=\mathfrak{A}_{n}, & H_{m}=H_{n,m}, & IA^{m}=IA_{n}^{m}, & IG_{m}=IG_{n,m}.\end{array}
\]
We will also use the following notations:
\[
\begin{array}{cc}
O_{m}=mR, & U_{r,m}=\mu_{r,m}R\,\,\,\,\textrm{when\,\,}\,\,\mu_{r,m}=\sum_{i=0}^{m-1}x_{r}^{i}\,\,\,\,\textrm{for}\,\,\,\,1\leq r\leq n\end{array}.
\]
Notice that it follows from the definitions that $H_{m}=\sum_{r=1}^{n}\sigma_{r}U_{r,m}+O_{m}$
(we note that in \cite{key-14} we used the notation $U_{r,m}$ for
$\sigma_{r}\mu_{r,m}R$).
\subsection{Reducing Lemma \ref{thm:stage 1}'s proof}
We start this subsection by introducing the following objects:
\begin{defn}
Let $m\in\mathbb{{N}}$. Define:
\begin{eqnarray*}
R\vartriangleright J_{m} & = & \sum_{r=1}^{n}\sigma_{r}^{3}U_{r,m}+\mathfrak{A}^{2}O_{m}+\mathfrak{A}O_{m}^{2}\\
\mathbb{\mathbb{J}}_{m} & = & \left\{ I_{n}+A\,|\,\begin{array}{c}
I_{n}+A\in IA\left(\Phi_{n}\right)\cap GL_{n}\left(R,J_{m}\right)\\
\det\left(I_{n}+A\right)=\prod_{r=1}^{n}x_{r}^{s_{r}m^{2}},\,\,s_{r}\in\mathbb{Z}
\end{array}\right\} .
\end{eqnarray*}
\end{defn}
\begin{prop}
\label{prop:reduction1}For every $m\in\mathbb{N}$ we have:
\[
IG_{m^{2}}=IA\left(\Phi_{n}\right)\cap GL_{n}\left(R,H_{m^{2}}\right)\subseteq\mathbb{\mathbb{J}}_{m}.
\]
\end{prop}
\begin{proof}
Let $x\in R$. Notice that $\sum_{i=0}^{m-1}x^{i}=m+\sum_{i=1}^{m-1}\left(x^{i}-1\right)\in\left(x-1\right)R+mR$.
In addition, by replacing $x$ by $x^{m}$ we obtain: $\sum_{i=0}^{m-1}x^{mi}\in\left(x^{m}-1\right)R+mR$.
Hence:
\begin{eqnarray*}
x^{m^{2}}-1 & = & \left(x-1\right)\sum_{i=0}^{m^{2}-1}x^{i}=\left(x-1\right)\sum_{i=0}^{m-1}x^{i}\sum_{i=0}^{m-1}x^{mi}\\
& \in & \left(x-1\right)\left(\left(x-1\right)R+mR\right)\left(\left(x^{m}-1\right)R+mR\right)\\
& \subseteq & \left(x-1\right)^{2}\left(x^{m}-1\right)R+\left(x-1\right)^{2}mR+\left(x-1\right)m^{2}R.
\end{eqnarray*}
Thus, we obtain that $H_{m^{2}}=\sum_{r=1}^{n}(x_{r}^{m^{2}}-1)R+m^{2}R\subseteq J_{m}+O_{m}^{2}$.
Now, let $I_{n}+A\in IG_{m^{2}}=IA\left(\Phi_{n}\right)\cap GL_{n}\left(R,H_{m^{2}}\right)$.
From the above observation and from Proposition \ref{prop:augmentation},
it follows that every entry of $A$ belongs to $\left(J_{m}+O_{m}^{2}\right)\cap\mathfrak{A}=J_{m}$.
In addition, by Proposition \ref{prop:augmentation}, the determinant
of $I_{n}+A$ is of the form $\prod_{r=1}^{n}x_{r}^{s_{r}}$. On the
other hand, we know that under the projection $x_{r}^{m^{2}}\mapsto1$
and $m^{2}\mapsto0$ one has: $I_{n}+A\mapsto I_{n}$ and thus also
$\prod_{r=1}^{n}x_{r}^{s_{r}}=\det\left(I_{n}+A\right)\mapsto1$.
Therefore, $\det\left(I_{n}+A\right)$ is of the form $\prod_{r=1}^{n}x_{r}^{m^{2}s_{r}}$,
as required.\end{proof}
\begin{cor}
\label{cor:first reduction}Let $n\geq4$ and $m\in\mathbb{N}$. Then,
for proving Lemma \ref{thm:stage 1} it suffices to prove that:
\[
\mathbb{\mathbb{J}}_{m}\subseteq IA^{m}\cdot\prod_{i=1}^{n}ISL_{n-1,i}\left(\sigma_{i}H_{m}\right).
\]
\end{cor}
We continue by defining the following objects:
\begin{defn}
\label{def:objects}For $0\leq u\leq n$ and $1\leq v\leq n$, define
the following ideals of $R=R_{n}=\mathbb{Z}[x_{1}^{\pm1},\ldots,x_{n}^{\pm1}]$:
\begin{eqnarray*}
\mathfrak{\tilde{A}}_{u} & = & \sum_{r=u+1}^{n}\sigma_{r}R\\
\tilde{J}_{m,u,v} & = & \begin{cases}
\mathfrak{\tilde{A}}_{u}\left(\sum_{r=1}^{u}\mathfrak{A}\sigma_{r}U_{r,m}+\mathfrak{A}O_{m}+O_{m}^{2}\right)+\\
\sum_{r=u+1}^{n}\sigma_{r}^{3}U_{r,m} & v\leq u\\
\mathfrak{\tilde{A}}_{u}\left(\sum_{r=1}^{u}\mathfrak{A}\sigma_{r}U_{r,m}+\mathfrak{A}O_{m}+O_{m}^{2}\right)+\\
\sum_{v\neq r=u+1}^{n}\sigma_{r}^{3}U_{r,m}+\mathfrak{A}\sigma_{v}^{2}U_{v,m} & v>u
\end{cases}
\end{eqnarray*}
and for $0\leq u\leq n$ define the groups: $\tilde{\mathbb{A}}_{u}=IA\left(\Phi_{n}\right)\cap GL_{n}(R,\mathfrak{\tilde{A}}_{u})$,
and:
\[
\mathbb{\mathbb{\tilde{J}}}_{m,u}=\left\{ I_{n}+A\in IA\left(\Phi_{n}\right)\,|\,\begin{array}{c}
\det\left(I_{n}+A\right)=\prod_{i=1}^{n}x_{i}^{s_{i}m^{2}}\textrm{,\,\,every\,\,entry\,\,in}\\
\textrm{the\,\,}v\textrm{-th\,\,column\,\,of\,\,}A\textrm{\,\,belongs\,\,to\,\,}\tilde{J}_{m,u,v}
\end{array}\right\} .
\]
\end{defn}
\begin{rem}
If $I_{n}+A\in\mathbb{\mathbb{\tilde{J}}}_{m,u}$, the entries of
the columns of $A$ may belong to different ideals in $R$, so it
is not obvious that $\mathbb{\tilde{\mathbb{J}}}_{m,u}$ is indeed
a group, i.e. closed under matrix multiplication and the inverse operation.
However, showing that $\mathbb{\tilde{\mathbb{J}}}_{m,u}$ is a group
is not difficult and we leave it to the reader.
\end{rem}
Notice now the extreme cases:
1. For $u=0$ we have (for every $v$ and $m$): $\mathfrak{\tilde{A}}_{0}=\mathfrak{A}$,
and $J_{m}\subseteq\tilde{J}_{m,0,v}$. Hence, we have $\mathbb{\mathbb{J}}_{m}\subseteq\mathbb{\mathbb{\tilde{J}}}_{m,0}$.
2. For $u=n$ we have (for every $v$ and $m$): $\mathfrak{\tilde{A}}_{n}=\tilde{J}_{m,n,v}=0$.
Hence, we also have $\mathbb{\mathbb{\tilde{J}}}_{m,n}=\left\{ I_{n}\right\} $.
\begin{cor}
\label{cor:reduction 2}For proving Lemma \ref{thm:stage 1}, it is
enough to prove that for every $1\leq u\leq n$:
\[
\mathbb{\mathbb{\tilde{J}}}_{m,u-1}\subseteq IA^{m}\cdot ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\cdot\mathbb{\mathbb{\tilde{J}}}_{m,u}.
\]
\end{cor}
\begin{proof}
Using that $IA^{m}$ is normal in $IA\left(\Phi_{n}\right)$ and the
latter observations, under the above assumption, one obtains that:
\begin{eqnarray*}
\mathbb{\mathbb{J}}_{m}\subseteq\mathbb{\mathbb{\tilde{J}}}_{m,0} & \subseteq & IA^{m}\cdot ISL_{n-1,1}\left(\sigma_{1}H_{m}\right)\cdot\mathbb{\mathbb{\tilde{J}}}_{m,1}\\
& \subseteq & \ldots\\
& \subseteq & \prod_{u=1}^{n}\left(IA^{m}\cdot ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\right)\cdot\mathbb{\mathbb{\tilde{J}}}_{m,n}\\
& = & IA^{m}\prod_{u=1}^{n}ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)
\end{eqnarray*}
which is the requirement of Corollary \ref{cor:first reduction}.
\end{proof}
We continue by defining the following objects:
\begin{defn}
For $0\leq u\leq n$ and $1\leq v\leq n$, define the following ideals
of $R$:
\begin{eqnarray*}
J_{m,u,v} & = & \begin{cases}
\mathfrak{A}\left(\sum_{r=1}^{u}\mathfrak{A}\sigma_{r}U_{r,m}+\mathfrak{A}O_{m}+O_{m}^{2}\right)+\\
\sum_{r=u+1}^{n}\sigma_{r}^{3}U_{r,m} & v\leq u\\
\mathfrak{A}\left(\sum_{r=1}^{u}\mathfrak{A}\sigma_{r}U_{r,m}+\mathfrak{A}O_{m}+O_{m}^{2}\right)+\\
\sum_{v\neq r=u+1}^{n}\sigma_{r}^{3}U_{r,m}+\mathfrak{A}\sigma_{v}^{2}U_{v,m} & v>u
\end{cases}
\end{eqnarray*}
and for $0\leq u\leq n$ define the group:
\[
\mathbb{J}_{m,u}=\left\{ I_{n}+A\in IA\left(\Phi_{n}\right)\,|\,\begin{array}{c}
\det\left(I_{n}+A\right)=\prod_{i=1}^{n}x_{i}^{s_{i}m^{2}}\textrm{,\,\,every\,\,entry\,\,in}\\
\textrm{the\,\,}v\textrm{-th\,\,column\,\,of\,\,}A\textrm{\,\,belongs\,\,to\,\,}J_{m,u,v}
\end{array}\right\} .
\]
\end{defn}
It follows from the definitions that for every $1\leq u\leq n$ we
have:
\begin{enumerate}
\item $J_{m,u-1,v}\subseteq J_{m,u,v}$, but $\mathfrak{\tilde{A}}_{u-1}\supseteq\mathfrak{\tilde{A}}_{u}$.
Thus, we have also
\item $\mathbb{J}_{m,u-1}\subseteq\mathbb{J}_{m,u}$, but $\tilde{\mathbb{A}}_{u-1}\supseteq\tilde{\mathbb{A}}_{u}$.
\end{enumerate}
Here is the connection between the latter objects and the objects
defined in Definition \ref{def:objects}.
\begin{prop}
\label{lem:connection}For every $0\leq u\leq n$ and $1\leq v\leq n$
we have: $J_{m,u,v}\cap\mathfrak{\tilde{A}}_{u}=\tilde{J}_{m,u,v}$,
and hence: $\mathbb{J}_{m,u}\cap\tilde{\mathbb{A}}_{u}=\mathbb{\mathbb{\tilde{\mathbb{J}}}}_{m,u}$.\end{prop}
\begin{proof}
It is clear from the definitions that we have: $\tilde{J}_{m,u,v}\subseteq J_{m,u,v}\cap\mathfrak{\tilde{A}}_{u}$,
so we have to show the inclusion in the opposite direction. Let $a\in J_{m,u,v}\cap\mathfrak{\tilde{A}}_{u}$.
As:
\[
\tilde{J}_{m,u,v}\supseteq\begin{cases}
\sum_{r=u+1}^{n}\sigma_{r}^{3}U_{r,m} & v\leq u\\
\sum_{v\neq r=u+1}^{n}\sigma_{r}^{3}U_{r,m}+\mathfrak{A}\sigma_{v}^{2}U_{v,m} & v>u
\end{cases}
\]
we can assume that: $a\in\mathfrak{A}\left(\sum_{r=1}^{u}\mathfrak{A}\sigma_{r}U_{r,m}+\mathfrak{A}O_{m}+O_{m}^{2}\right)\cap\mathfrak{\tilde{A}}_{u}$.
Observe now that by dividing an element $b\in R$ by $\sigma_{u+1},\ldots,\sigma_{n}$
(with residue), one can present $b$ as a sum of an element of
$\mathfrak{\tilde{A}}_{u}$ and an element of $R_{u}=\mathbb{Z}[x_{1}^{\pm1},\ldots,x_{u}^{\pm1}]$.
Hence, $R=\mathfrak{\tilde{A}}_{u}+R_{u}$ and $\mathfrak{A}=\mathfrak{\tilde{A}}_{u}+\mathfrak{A}_{u}$,
where $\mathfrak{A}_{u}$ is the augmentation ideal of $R_{u}$. Hence:
\begin{eqnarray*}
a & \in & (\mathfrak{\tilde{A}}_{u}+\mathfrak{A}_{u})^{2}\sum_{r=1}^{u}\sigma_{r}\mu_{r,m}(\mathfrak{\tilde{A}}_{u}+R_{u})\\
& & +\,(\mathfrak{\tilde{A}}_{u}+\mathfrak{A}_{u})^{2}m(\mathfrak{\tilde{A}}_{u}+R_{u})+(\mathfrak{\tilde{A}}_{u}+\mathfrak{A}_{u})m^{2}(\mathfrak{\tilde{A}}_{u}+R_{u})\\
& \subseteq & \tilde{J}_{m,u,v}+\mathfrak{A}_{u}^{2}\sum_{r=1}^{u}\sigma_{r}\mu_{r,m}R_{u}+\mathfrak{A}_{u}^{2}mR_{u}+\mathfrak{A}_{u}m^{2}R_{u}.
\end{eqnarray*}
Hence, we can assume that $a\in\left(\mathfrak{A}_{u}^{2}\sum_{r=1}^{u}\sigma_{r}\mu_{r,m}R_{u}+\mathfrak{A}_{u}^{2}mR_{u}+\mathfrak{A}_{u}m^{2}R_{u}\right)\cap\mathfrak{\tilde{A}}_{u}=\left\{ 0\right\} $,
i.e. $a=0\in\tilde{J}_{m,u,v}$, as required.
\end{proof}
Due to the above, we can now reduce Lemma \ref{thm:stage 1}'s proof
as follows.
\begin{cor}
\label{cor:reduction}For proving Lemma \ref{thm:stage 1} it suffices
to show that given $1\leq u\leq n$, for every $\alpha\in\mathbb{\tilde{\mathbb{J}}}_{m,u-1}$
there exist $\beta\in IA^{m}\cap\mathbb{J}_{m,u}$ and $\gamma\in ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\cap\mathbb{\mathbb{J}}_{m,u}$
such that $\gamma\alpha\beta\in\tilde{\mathbb{A}}{}_{u}$.\end{cor}
\begin{proof}
As clearly $\mathbb{J}_{m,u}\supseteq\mathbb{\mathbb{J}}_{m,u-1}\supseteq\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$,
we obtain from Proposition \ref{lem:connection} that: $\gamma\alpha\beta\in\tilde{\mathbb{A}}{}_{u}\cap\mathbb{J}_{m,u}=\mathbb{\mathbb{\tilde{J}}}_{m,u}$.
Thus:
\[
\alpha\in ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\cdot\mathbb{\mathbb{\tilde{J}}}_{m,u}\cdot IA^{m}=IA^{m}\cdot ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\cdot\mathbb{\mathbb{\tilde{J}}}_{m,u}.
\]
This yields that $\mathbb{\mathbb{\tilde{J}}}_{m,u-1}\subseteq IA^{m}\cdot ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\cdot\tilde{\mathbb{\mathbb{J}}}_{m,u}$
which is the requirement of Corollary \ref{cor:reduction 2}.
\end{proof}
\subsection{A technical lemma}
In this section we will prove a technical lemma, which will help us
in subsection \ref{sub:Finishing} to prove Lemma \ref{thm:stage 1}.
In the following subsections $1\leq u\leq n$ will be constant. Under
this statement, we will use the following notations:
\begin{itemize}
\item For $a\in R$ we denote its image in $R_{u}$ under the projection
$x_{u+1},\ldots,x_{n}\mapsto1$ by $\bar{a}$. In addition, we denote
its image in $R_{u-1}$ under the projection $x_{u},\ldots,x_{n}\mapsto1$
by $\bar{\bar{a}}$.
\item For $\alpha\in GL_{n}\left(R\right)$ we denote its image in $GL_{n}\left(R_{u}\right)$
under the projection $x_{u+1},\ldots,x_{n}\mapsto1$ by $\bar{\alpha}$.
\item Similarly, we will use the following notations for every $m\in\mathbb{N}$:
\begin{itemize}
\item $\mathfrak{\bar{A}}=\mathfrak{A}_{u}=\sum_{i=1}^{u}\sigma_{i}R_{u}$,
$\bar{U}_{r,m}=\mu_{r,m}R_{u}$ for $1\leq r\leq u$, $\bar{O}_{m}=mR_{u}$
and $\bar{H}_{m}=H_{u,m}=\sum_{r=1}^{u}\sigma_{r}\bar{U}_{r,m}+\bar{O}_{m}$.
\item $\bar{\bar{\mathfrak{A}}}=\mathfrak{A}_{u-1}=\sum_{i=1}^{u-1}\sigma_{i}R_{u-1}$,
$\bar{\bar{U}}_{r,m}=\mu_{r,m}R_{u-1}$ for $1\leq r\leq u-1$ and
$\bar{\bar{O}}_{m}=mR_{u-1}$.
\end{itemize}
\end{itemize}
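To illustrate these projections: the first one sends $\sigma_{u+1},\ldots,\sigma_{n}$ to $0$ and leaves $\sigma_{1},\ldots,\sigma_{u}$ unchanged, so that, for example, the image of $\mathfrak{A}$ under it is exactly $\bar{\mathfrak{A}}=\mathfrak{A}_{u}$; the second one additionally sends $\sigma_{u}$ to $0$.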
Now, let $\alpha=I_{n}+A\in\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$,
and denote the entries of $A$ by $a_{i,j}$. Consider the $u$-th
row of $A$. Under the above assumption, for every $v$ we have:
\[
a_{u,v}\in\begin{cases}
\mathfrak{\tilde{A}}_{u-1}\left(\sum_{r=1}^{u-1}\mathfrak{A}\sigma_{r}U_{r,m}+\mathfrak{A}O_{m}+O_{m}^{2}\right)+\\
\sum_{r=u}^{n}\sigma_{r}^{3}U_{r,m} & v<u\\
\mathfrak{\tilde{A}}_{u-1}\left(\sum_{r=1}^{u-1}\mathfrak{A}\sigma_{r}U_{r,m}+\mathfrak{A}O_{m}+O_{m}^{2}\right)+\\
\sum_{v\neq r=u}^{n}\sigma_{r}^{3}U_{r,m}+\mathfrak{A}\sigma_{v}^{2}U_{v,m} & v\geq u.
\end{cases}
\]
Hence we have:
\begin{equation}
\bar{a}_{u,v}\in\begin{cases}
\sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\bar{\mathfrak{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right)+\mathfrak{\bar{A}}\sigma_{u}^{2}\bar{U}_{u,m}\\
=\sigma_{u}\left(\sum_{r=1}^{u}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right) & v=u\\
\sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right)+\sigma_{u}^{3}\bar{U}_{u,m}\\
=\sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}^{2}\bar{U}_{u,m}+\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right) & v\neq u.
\end{cases}\label{eq:reminder}
\end{equation}
We can state now the technical lemma:
\begin{lem}
Let $\alpha=I_{n}+A\in\mathbb{\tilde{\mathbb{J}}}_{m,u-1}$. Then,
there exists $\delta\in IA^{m}\cap\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$
such that for every $v\neq u$, the $\left(u,v\right)$-th entry of
$\overline{\alpha\delta^{-1}}$ belongs to $\sigma_{u}^{2}\bar{H}_{m}$.
\end{lem}
We will prove the lemma in two steps. Here is the first step:
\begin{prop}
Let $\alpha=I_{n}+A\in\mathbb{\tilde{\mathbb{J}}}_{m,u-1}$. Then,
there exists $\delta\in IA^{m}\cap\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$
such that for every $v<u$, the $\left(u,v\right)$-th entry of $\overline{\alpha\delta^{-1}}$
belongs to $\sigma_{u}^{2}\bar{H}_{m}$.\end{prop}
\begin{proof}
So let $\alpha=I_{n}+A\in\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$, and
observe that for every $1\leq v\leq u-1$ one can write $\bar{a}_{u,v}=\sigma_{u}\bar{b}_{u,v}$
for some: $\bar{b}_{u,v}\in\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}^{2}\bar{U}_{u,m}+\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}$.
In addition, as it is easy to see that:
\[
\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}=\sum_{r=1}^{u-1}(\sigma_{u}R_{u}+\bar{\bar{\mathfrak{A}}})\sigma_{r}(\sigma_{u}\bar{U}_{r,m}+\bar{\bar{U}}_{r,m})\subseteq\sigma_{u}\sum_{r=1}^{u-1}\sigma_{r}\bar{U}_{r,m}+\sum_{r=1}^{u-1}\bar{\bar{\mathfrak{A}}}\sigma_{r}\bar{\bar{U}}_{r,m}
\]
\[
\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}=(\sigma_{u}R_{u}+\bar{\bar{\mathfrak{A}}})(\sigma_{u}\bar{O}_{m}+\bar{\bar{O}}_{m})+(\sigma_{u}\bar{O}_{m}+\bar{\bar{O}}_{m})^{2}\subseteq\sigma_{u}\bar{O}_{m}+\bar{\bar{\mathfrak{A}}}\bar{\bar{O}}_{m}+\bar{\bar{O}}_{m}^{2}
\]
one can write $\bar{b}_{u,v}=\sigma_{u}\bar{c}_{u,v}+\bar{\bar{b}}_{u,v}$
for every $1\leq v\leq u-1$, for some:
\begin{eqnarray*}
\bar{\bar{b}}_{u,v} & \in & \sum_{r=1}^{u-1}\bar{\bar{\mathfrak{A}}}\sigma_{r}\bar{\bar{U}}_{r,m}+\bar{\bar{\mathfrak{A}}}\bar{\bar{O}}_{m}+\bar{\bar{O}}_{m}^{2}\\
\bar{c}_{u,v} & \in & \sum_{r=1}^{u-1}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}\bar{U}_{u,m}+\bar{O}_{m}=\bar{H}_{m}.
\end{eqnarray*}
Notice, that as $A$ satisfies the condition $A\vec{\sigma}=\vec{0}$
we have the equality $\sigma_{1}a_{u,1}+\ldots+\sigma_{n}a_{u,n}=0$,
which yields the following equalities as well:
\begin{eqnarray*}
\sigma_{1}\bar{a}_{u,1}+\ldots+\sigma_{u-1}\bar{a}_{u,u-1}+\sigma_{u}\bar{a}_{u,u} & = & 0\\
& \Downarrow\\
\sigma_{1}\bar{b}_{u,1}+\ldots+\sigma_{u-1}\bar{b}_{u,u-1}+\bar{a}_{u,u} & = & 0\\
& \Downarrow\\
\sigma_{1}\bar{\bar{b}}_{u,1}+\ldots+\sigma_{u-1}\bar{\bar{b}}_{u,u-1} & = & 0.
\end{eqnarray*}
Observe now that for every $1\leq v\leq u-1$ we have:
\[
\sigma_{u}\bar{\bar{b}}_{u,v}\in\sigma_{u}\left(\sum_{r=1}^{u-1}\bar{\bar{\mathfrak{A}}}\sigma_{r}\bar{\bar{U}}_{r,m}+\bar{\bar{\mathfrak{A}}}\bar{\bar{O}}_{m}+\bar{\bar{O}}_{m}^{2}\right)\subseteq\tilde{J}_{m,u-1,v}
\]
and thus, if we define:
\[
\delta=\left(\begin{array}{ccccc}
& I_{u-1} & & 0 & 0\\
\sigma_{u}\bar{\bar{b}}_{u,1} & \cdots & \sigma_{u}\bar{\bar{b}}_{u,u-1} & 1 & 0\\
& 0 & & 0 & I_{n-u}
\end{array}\right)\leftarrow u\textrm{-th}\,\,\,\,\textrm{row}
\]
then $\delta\in\tilde{\mathbb{\mathbb{J}}}_{m,u-1}$. We claim now
that we also have $\delta\in IA^{m}$. We will prove this claim soon,
but assuming this claim, we can now multiply $\alpha$ from the right
by $\delta^{-1}\in\tilde{\mathbb{\mathbb{J}}}_{m,u-1}\cap IA^{m}$
and obtain an element in $\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$ such
that the image of its $\left(u,v\right)$-th entry for $1\leq v\leq u-1$,
under the projection $x_{u+1},\ldots,x_{n}\mapsto1$, is:
\begin{eqnarray*}
\bar{a}_{u,v}-\sigma_{u}\bar{\bar{b}}_{u,v}\left(1+\bar{a}_{u,u}\right) & = & \sigma_{u}^{2}\bar{c}_{u,v}-\sigma_{u}\bar{\bar{b}}_{u,v}\bar{a}_{u,u}\\
& \in & \sigma_{u}^{2}\bar{H}_{m}+\sigma_{u}^{2}\left(\sum_{r=1}^{u-1}\bar{\bar{\mathfrak{A}}}\sigma_{r}\bar{\bar{U}}_{r,m}+\bar{\bar{\mathfrak{A}}}\bar{\bar{O}}_{m}+\bar{\bar{O}}_{m}^{2}\right)\\
& = & \sigma_{u}^{2}\bar{H}_{m}
\end{eqnarray*}
as required.
\end{proof}
So it remains to prove the following claim:
\begin{claim}
Let $n\geq4$, $1\leq u\leq n$, and $\bar{\bar{b}}_{u,v}\in\sum_{r=1}^{u-1}\bar{\bar{\mathfrak{A}}}\sigma_{r}\bar{\bar{U}}_{r,m}+\bar{\bar{\mathfrak{A}}}\bar{\bar{O}}_{m}+\bar{\bar{O}}_{m}^{2}$
for $1\leq v\leq u-1$ which satisfy the condition:
\begin{equation}
\sigma_{1}\bar{\bar{b}}_{u,1}+\ldots+\sigma_{u-1}\bar{\bar{b}}_{u,u-1}=0.\label{eq:condition--}
\end{equation}
Then:
\[
u\textrm{-th\,\,\,\,row}\rightarrow\left(\begin{array}{ccccc}
& I_{u-1} & & 0 & 0\\
\sigma_{u}\bar{\bar{b}}_{u,1} & \cdots & \sigma_{u}\bar{\bar{b}}_{u,u-1} & 1 & 0\\
& 0 & & 0 & I_{n-u}
\end{array}\right)\in IA^{m}.
\]
\end{claim}
\begin{proof}
It will be easier to prove a bit more - we will prove that if for
every $1\leq v\leq u-1$:
\[
\bar{\bar{b}}_{u,v}\in\sum_{v\neq r=1}^{u-1}\bar{\bar{\mathfrak{A}}}\sigma_{r}\bar{\bar{U}}_{r,m}+\bar{\bar{\mathfrak{A}}}^{2}\bar{\bar{U}}_{v,m}+\bar{\bar{O}}_{m}
\]
then the vector: $\vec{b}=(\bar{\bar{b}}_{u,1},\ldots,\bar{\bar{b}}_{u,u-1},0,\ldots,0)$
is a linear combination of the vectors:
\[
\left\{ \begin{array}{c}
\sigma_{k}\mu_{k,m}\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\\
\sigma_{k}\mu_{i,m}\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)
\end{array},m\left(\sigma_{i}\vec{e}_{j}-\sigma_{j}\vec{e}_{i}\right)\,|\,i,j,k\leq u-1,\,i\neq j\right\}
\]
with coefficients in $R_{u-1}.$ This will show that $\sigma_{u}(\bar{\bar{b}}_{u,1},\ldots,\bar{\bar{b}}_{u,u-1},0,\ldots,0)$
is a linear combination of the vectors in Propositions \ref{prop:type 1.1}
and \ref{prop:type 1.2}, so the claim will follow.
We start with expressing $\bar{\bar{b}}_{u,1}$ explicitly by writing:
\[
\bar{\bar{b}}_{u,1}=\sum_{r=2}^{u-1}\sum_{i=1}^{u-1}\sigma_{i}\sigma_{r}\mu_{r,m}p_{i,r}+\sum_{i,j=1}^{u-1}\sigma_{i}\sigma_{j}\mu_{1,m}q_{i,j}+mr
\]
for some $p_{i,r},\,q_{i,j},\,r\in R_{u-1}$. Now, Equation \ref{eq:condition--}
gives that under the projection $\sigma_{2},\ldots,\sigma_{u-1}\mapsto0$,
$\bar{\bar{b}}_{u,1}\mapsto0$. It follows that $\bar{\bar{b}}_{u,1}\in\sum_{i=2}^{u-1}\sigma_{i}R_{u-1}\subseteq\bar{\bar{\mathfrak{A}}}$.
In particular, $r\in\bar{\bar{\mathfrak{A}}}$, so we can write:
\[
\bar{\bar{b}}_{u,1}=\sum_{r=2}^{u-1}\sum_{i=1}^{u-1}\sigma_{i}\sigma_{r}\mu_{r,m}p_{i,r}+\sum_{i,j=1}^{u-1}\sigma_{i}\sigma_{j}\mu_{1,m}q_{i,j}+\sum_{i=1}^{u-1}\sigma_{i}mr_{i}
\]
for some $p_{i,r},\,q_{i,j},\,r_{i}\in R_{u-1}$.
Observe now that by dividing $r_{1}$ by $\sigma_{2},\ldots,\sigma_{u-1}$
(with residue) we can write $r_{1}=r'_{1}+\sum_{i=2}^{u-1}\sigma_{i}r'_{i}$
where $r'_{1}$ depends only on $x_{1}$. Therefore, by replacing
$r_{1}$ by $r'_{1}$ and $r_{i}$ by $r_{i}+\sigma_{1}r'_{i}$ for
$2\leq i\leq u-1$, we can assume that $r_{1}$ depends only on $x_{1}$.
Similarly, by dividing $q_{1,1}$ by $\sigma_{2},\ldots,\sigma_{u-1}$,
we can assume that $q_{1,1}$ depends only on $x_{1}$. Now, by replacing
$\vec{b}$ with:
\begin{eqnarray*}
\vec{b} & - & \sum_{r=2}^{u-1}\sum_{i=1}^{u-1}\sigma_{i}\mu_{r,m}p_{i,r}\left(\sigma_{r}\vec{e}_{1}-\sigma_{1}\vec{e}_{r}\right)\\
& - & \sum_{i=2}^{u-1}\sum_{j=1}^{u-1}\sigma_{j}\mu_{1,m}q_{i,j}\left(\sigma_{i}\vec{e}_{1}-\sigma_{1}\vec{e}_{i}\right)-\sum_{j=2}^{u-1}\sigma_{1}\mu_{1,m}q_{1,j}\left(\sigma_{j}\vec{e}_{1}-\sigma_{1}\vec{e}_{j}\right)\\
& - & \sum_{i=2}^{u-1}mr_{i}\left(\sigma_{i}\vec{e}_{1}-\sigma_{1}\vec{e}_{i}\right)
\end{eqnarray*}
we can assume that $\bar{\bar{b}}_{u,1}$ is a polynomial which depends
only on $x_{1}$. On the other hand, we already saw that Equation
\ref{eq:condition--} yields that $\bar{\bar{b}}_{u,1}\in\sum_{i=2}^{u-1}\sigma_{i}R_{u-1}$,
so we can actually assume that $\bar{\bar{b}}_{u,1}=0$.
We continue in this manner by induction. At stage $v$, for $1\leq v\leq u-1$,
we assume that $\bar{\bar{b}}_{u,1}=\ldots=\bar{\bar{b}}_{u,v-1}=0$.
Then we write:
\[
\bar{\bar{b}}_{u,v}=\sum_{v\neq r=1}^{u-1}\sum_{i=1}^{u-1}\sigma_{i}\sigma_{r}\mu_{r,m}p_{i,r}+\sum_{i,j=1}^{u-1}\sigma_{i}\sigma_{j}\mu_{v,m}q_{i,j}+mr
\]
for some $p_{i,r},\,q_{i,j},\,r\in R_{u-1}$. The condition $\bar{\bar{b}}_{u,1}=\ldots=\bar{\bar{b}}_{u,v-1}=0$
and Equation \ref{eq:condition--} give that $\sigma_{v}\bar{\bar{b}}_{u,v}+\sigma_{v+1}\bar{\bar{b}}_{u,v+1}+\ldots+\sigma_{u-1}\bar{\bar{b}}_{u,u-1}=0$
and thus, under the projection $\sigma_{v+1},\ldots,\sigma_{u-1}\mapsto0$,
$\bar{\bar{b}}_{u,v}\mapsto0$, so $\bar{\bar{b}}_{u,v}\in\sum_{i=v+1}^{u-1}\sigma_{i}R_{u-1}\subseteq\bar{\bar{\mathfrak{A}}}$.
In particular, $r\in\bar{\bar{\mathfrak{A}}}$, so we can write:
\[
\bar{\bar{b}}_{u,v}=\sum_{v\neq r=1}^{u-1}\sum_{i=1}^{u-1}\sigma_{i}\sigma_{r}\mu_{r,m}p_{i,r}+\sum_{i,j=1}^{u-1}\sigma_{i}\sigma_{j}\mu_{v,m}q_{i,j}+\sum_{i=1}^{u-1}\sigma_{i}mr_{i}
\]
for some $p_{i,r},\,q_{i,j},\,r_{i}\in R_{u-1}$.
Now, as we explained previously, by dividing $p_{i,r},\,q_{i,j},\,r_{i}$
for $1\leq i,j,r\leq v$ by $\sigma_{v+1},\ldots,\sigma_{u-1}$, we
can assume that these polynomials depend only on $x_{1},\ldots,x_{v}$.
Thus, by replacing $\vec{b}$ with:
\begin{eqnarray*}
\vec{b} & - & \sum_{r=v+1}^{u-1}\sum_{i=1}^{u-1}\sigma_{i}\mu_{r,m}p_{i,r}\left(\sigma_{r}\vec{e}_{v}-\sigma_{v}\vec{e}_{r}\right)-\sum_{r=1}^{v-1}\sum_{i=v+1}^{u-1}\sigma_{r}\mu_{r,m}p_{i,r}\left(\sigma_{i}\vec{e}_{v}-\sigma_{v}\vec{e}_{i}\right)\\
& - & \sum_{i=v+1}^{u-1}\sum_{j=1}^{u-1}\sigma_{j}\mu_{v,m}q_{i,j}\left(\sigma_{i}\vec{e}_{v}-\sigma_{v}\vec{e}_{i}\right)-\sum_{i=1}^{v}\sum_{j=v+1}^{u-1}\sigma_{i}\mu_{v,m}q_{i,j}\left(\sigma_{j}\vec{e}_{v}-\sigma_{v}\vec{e}_{j}\right)\\
& - & \sum_{i=v+1}^{u-1}mr_{i}\left(\sigma_{i}\vec{e}_{v}-\sigma_{v}\vec{e}_{i}\right)
\end{eqnarray*}
we can assume that $\bar{\bar{b}}_{u,v}$ is a polynomial which depends
only on $x_{1},\ldots,x_{v}$, without changing the assumption that
$\bar{\bar{b}}_{u,w}=0$ for $w<v$. But we saw that in this situation
Equation \ref{eq:condition--} yields that $\bar{\bar{b}}_{u,v}\in\sum_{i=v+1}^{u-1}\sigma_{i}R_{u-1}$,
so we can actually assume that $\bar{\bar{b}}_{u,v}=0$, as required.
\end{proof}
Here is the second step of the technical lemma's proof:
\begin{prop}
Let $\alpha=I_{n}+A\in\mathbb{\tilde{\mathbb{J}}}_{m,u-1}$ such that
for every $v<u$, $\bar{a}_{u,v}\in\sigma_{u}^{2}\bar{H}_{m}$. Then,
there exists $\delta\in IA^{m}\cap\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$
such that for every $v\neq u$, the $\left(u,v\right)$-th entry of
$\overline{\alpha\delta^{-1}}$ belongs to $\sigma_{u}^{2}\bar{H}_{m}$.\end{prop}
\begin{proof}
So let $\alpha=I_{n}+A\in\mathbb{\tilde{\mathbb{J}}}_{m,u-1}$ such
that for every $v<u$, $\bar{a}_{u,v}\in\sigma_{u}^{2}\bar{H}_{m}$.
Recall that by Equation \ref{eq:reminder}, for every $v>u$ we
have: $\bar{a}_{u,v}\in\sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}^{2}\bar{U}_{u,m}+\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right)$.
Hence, we can write explicitly:
\[
\bar{a}_{u,v}=\sigma_{u}\left(\sum_{r=1}^{u-1}\sum_{i=1}^{u}\sigma_{i}\sigma_{r}\mu_{r,m}p_{r,i}+\sigma_{u}^{2}\mu_{u,m}q+\sum_{i=1}^{u}m\sigma_{i}r_{i}+m^{2}s\right)
\]
for some: $p_{r,i},\,q,\,r_{i},\,s\in R_{u}$. Clearly, as $\mathfrak{\bar{A}}\bar{O}_{m}\supseteq\mathfrak{\bar{A}}\bar{O}_{m}^{2}$,
by dividing $s$ by $\sigma_{i}$ for $1\leq i\leq u$ (with residue),
we can assume that $s\in\mathbb{Z}$. Consider now the following element:
\[
IA^{m}\ni\left(I_{n}+\sigma_{v}E_{u,u}-\sigma_{u}E_{u,v}\right)^{m^{2}}=I_{n}+\sigma_{v}\mu_{v,m^{2}}E_{u,u}-\sigma_{u}\mu_{v,m^{2}}E_{u,v}=\delta'.
\]
By the computation in the proof of Proposition \ref{prop:reduction1},
we obtain that:
\[
\mu_{v,m^{2}}\in\sigma_{v}^{2}U_{v,m}+\sigma_{v}O_{m}+O_{m}^{2}
\]
and thus (we remind that $v>u$):
\begin{eqnarray*}
\sigma_{v}\mu_{v,m^{2}} & \in & \sigma_{v}\left(\sigma_{v}^{2}U_{v,m}+\sigma_{v}O_{m}+O_{m}^{2}\right)\subseteq\tilde{J}_{m,u-1,u}\\
\sigma_{u}\mu_{v,m^{2}} & \in & \sigma_{u}\left(\sigma_{v}^{2}U_{v,m}+\sigma_{v}O_{m}+O_{m}^{2}\right)\subseteq\tilde{J}_{m,u-1,v}.
\end{eqnarray*}
In addition, the determinant of $\delta'$ is $x_{v}^{m^{2}}$. Therefore,
$\delta'\in\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$. Observe now that
as $v>u$, under the projection $\sigma_{u+1},\ldots,\sigma_{n}\mapsto0$,
$x_{v}\mapsto1$, and $\delta'$ is therefore mapped to:
\[
\bar{\delta}'=I_{n}-m^{2}\sigma_{u}E_{u,v}.
\]
Thus, if we multiply $\alpha$ from the right by $\delta'^{s}$, the
values of the entries in the $u$-th row under the projection
$\sigma_{u+1},\ldots,\sigma_{n}\mapsto0$ do not change, except for the
entry in the $v$-th column, which changes
to (see Equation \ref{eq:reminder} for the ideal which contains $\bar{a}_{u,u}$):
\begin{eqnarray*}
\bar{a}_{u,v}-sm^{2}\sigma_{u}\left(1+\bar{a}_{u,u}\right) & \in & \sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}^{2}\bar{U}_{u,m}+\mathfrak{\bar{A}}\bar{O}_{m}\right)\\
& & +\,\sigma_{u}^{2}\left(\sum_{r=1}^{u}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\bar{\mathfrak{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right)\\
& = & \sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}^{2}\bar{U}_{u,m}+\mathfrak{\bar{A}}\bar{O}_{m}\right).
\end{eqnarray*}
Hence, we can assume that $\bar{a}_{u,v}\in\sigma_{u}\sum_{i=1}^{u-1}\sigma_{i}f_{i}+\sigma_{u}^{2}\left(\sum_{r=1}^{u}\sigma_{r}\bar{U}_{r,m}+\bar{O}_{m}\right)=\sigma_{u}\sum_{i=1}^{u-1}\sigma_{i}f_{i}+\sigma_{u}^{2}\bar{H}_{m}$,
for some $f_{i}\in\sum_{r=1}^{u-1}\sigma_{r}\bar{U}_{r,m}+\bar{O}_{m}$.
Define now (the coefficient of $\vec{e}_{v}$ is the value of the
$\left(u,v\right)$-th entry):
\[
\delta_{v}=\left(\begin{array}{ccccc}
& I_{u-1} & & 0 & 0\\
-\sigma_{v}\sigma_{u}f_{1} & \cdots & -\sigma_{v}\sigma_{u}f_{u-1} & 1 & \left(\sigma_{u}\sum_{i=1}^{u-1}\sigma_{i}f_{i}\right)\vec{e}_{v}\\
& 0 & & 0 & I_{n-u}
\end{array}\right)\in\mathbb{\mathbb{\tilde{J}}}_{m,u-1}.
\]
By proposition \ref{prop:type 1.1}, we obviously have: $\delta_{v}\in IA^{m}$.
In addition, as $v>u$, under the projection $\sigma_{u+1},\ldots,\sigma_{n}\mapsto0$
we have:
\[
\bar{\delta}_{v}=\left(\begin{array}{ccc}
I_{u-1} & 0 & 0\\
0 & 1 & \sigma_{u}\left(\sum_{i=1}^{u-1}\sigma_{i}f_{i}\right)\vec{e}_{v}\\
0 & 0 & I_{n-u}
\end{array}\right).
\]
Thus, by multiplying $\alpha$ from the right by $\delta_{v}^{-1}$, the
values of the entries in the $u$-th row under the projection
$\sigma_{u+1},\ldots,\sigma_{n}\mapsto0$ do not change, except for the
entry in the $v$-th column, which changes
to:
\begin{eqnarray*}
\bar{a}_{u,v}-\sigma_{u}\left(\sum_{i=1}^{u-1}\sigma_{i}f_{i}\right)\left(1+\bar{a}_{u,u}\right) & \in & \sigma_{u}^{2}\bar{H}_{m}+\sigma_{u}^{2}\left(\sum_{r=1}^{u}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\mathfrak{A}\bar{O}_{m}+\bar{O}_{m}^{2}\right)\\
& = & \sigma_{u}^{2}\bar{H}_{m}.
\end{eqnarray*}
Thus, defining $\delta=\prod_{v=u+1}^{n}\delta_{v}$ finishes the
proof of the proposition, and hence also the proof of the technical
lemma.
\end{proof}
\subsection{\label{sub:Finishing}Finishing Lemma \ref{thm:stage 1}'s proof}
Recall that we have a fixed $1\leq u\leq n$. Recall also
that by Corollary \ref{cor:reduction}, it suffices to show that given
$\alpha\in\mathbb{\tilde{\mathbb{J}}}_{m,u-1}$ there exist $\beta\in IA^{m}\cap\mathbb{J}_{m,u}$
and $\gamma\in ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\cap\mathbb{\mathbb{J}}_{m,u}$
such that $\gamma\alpha\beta\in\tilde{\mathbb{A}}{}_{u}$.
So let $\alpha=I_{n}+A\in\mathbb{\mathbb{\tilde{J}}}_{m,u-1}$. By
the above technical lemma, there exists $\delta\in IA^{m}\cap\mathbb{\mathbb{\tilde{J}}}_{m,u-1}\subseteq IA^{m}\cap\mathbb{J}_{m,u}$
such that for every $v\neq u$, the $\left(u,v\right)$-th entry of
$\overline{\alpha\delta^{-1}}$ belongs to $\sigma_{u}^{2}\bar{H}_{m}$.
Thus, by replacing $\alpha$ with $\alpha\delta^{-1}$, without loss
of generality we can assume that $\bar{a}_{u,v}\in\sigma_{u}^{2}\bar{H}_{m}$
for every $v\neq u$, i.e. for every $v\neq u$ one can write $\bar{a}_{u,v}=\sigma_{u}^{2}\bar{b}_{u,v}$
for some $\bar{b}_{u,v}\in\bar{H}_{m}$.
Now, for every $v\neq u$ define the matrix:
\[
\delta_{v}=I_{n}+\left(\begin{array}{c}
\sigma_{1}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
\sigma_{2}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
\vdots\\
\sigma_{n}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)
\end{array}\right)\in\mathbb{J}_{m,u}
\]
which equals, by direct computation, the product of the
matrices:
\[
\mathbb{J}_{m,u}\ni\varepsilon_{v,k}=I_{n}+\left(\begin{array}{c}
0\\
\sigma_{k}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
0
\end{array}\right)\leftarrow k\textrm{-th\,\,\ row}
\]
for $k\neq u,v$ and the matrix (the following is an example for $v>u$):
\[
\mathbb{J}_{m,u}\ni\eta_{v}=I_{n}+\left(\begin{array}{c}
0\\
\sigma_{u}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
0\\
\sigma_{v}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
0
\end{array}\right)\begin{array}{c}
\leftarrow u\textrm{-th\,\,\ row}\\
\\
\leftarrow v\textrm{-th\,\,\ row}
\end{array}
\]
i.e. $\delta_{v}=\eta_{v}\cdot\prod_{u,v\neq k=1}^{n}\varepsilon_{v,k}$
(observe that the matrices $\varepsilon_{v,k}$ commute, so the product
is well defined). One can see that by Propositions \ref{prop:type 1.1}
and \ref{prop:type 1.2}, $\varepsilon_{v,k}\in IA^{m}$ for every
$k\neq u,v$. Moreover, by Proposition \ref{prop:type 2}, $\eta_{v}\in IA^{m}$.
Hence, $\delta_{v}\in IA^{m}\cap\mathbb{J}_{m,u}$. Now, as for every
$1\leq i\leq n$ we have $\sum_{j=1}^{n}a_{i,j}\sigma_{j}=0$ (by
the condition $A\vec{\sigma}=\vec{0}$), $\alpha\cdot\prod_{u\neq v=1}^{n}\delta_{v}$
equals:
\[
\left[I_{n}+\left(\begin{array}{ccc}
a_{1,1} & \cdots & a_{1,n}\\
\vdots & & \vdots\\
a_{n,1} & \cdots & a_{n,n}
\end{array}\right)\right]\prod_{u\neq v=1}^{n}\left[I_{n}+\left(\begin{array}{c}
\sigma_{1}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
\sigma_{2}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
\vdots\\
\sigma_{n}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)
\end{array}\right)\right]
\]
\[
=I_{n}+\left(\begin{array}{ccc}
a_{1,1} & \cdots & a_{1,n}\\
\vdots & & \vdots\\
a_{n,1} & \cdots & a_{n,n}
\end{array}\right)+\sum_{u\neq v=1}^{n}\left(\begin{array}{c}
\sigma_{1}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
\sigma_{2}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)\\
\vdots\\
\sigma_{n}\bar{b}_{u,v}\left(\sigma_{v}\vec{e}_{u}-\sigma_{u}\vec{e}_{v}\right)
\end{array}\right).
\]
It is easy to see now that if we denote $\alpha\cdot\prod_{u\neq v=1}^{n}\delta_{v}=I_{n}+C$,
then for every $v\neq u$, $\bar{c}_{u,v}=0$, where $c_{i,j}$ is
the $\left(i,j\right)$-th entry of $C$. Hence, we also have:
\[
\bar{c}_{u,u}\sigma_{u}=\sum_{v=1}^{n}\bar{c}_{u,v}\bar{\sigma}_{v}=0\,\,\,\,\Longrightarrow\,\,\,\,\bar{c}_{u,u}=0.
\]
Thus, we can write $\overline{\alpha\cdot\prod_{u\neq v=1}^{n}\delta_{v}}=I_{n}+\bar{C}$
where the matrix $\bar{C}$ has the following properties:
\begin{itemize}
\item The entries of the $u$-th row of $\bar{C}$ are all $0$.
\item As $a_{i,v}\in\tilde{J}_{m,u-1,v}$ for every $i,v$, by the computation
for Equation \ref{eq:reminder} we have: $\bar{a}_{i,v}\in\sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}^{2}\bar{U}_{u,m}+\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right)$
for every $i,v\neq u$. Hence, for every $i,v\neq u$ we have:
\begin{eqnarray*}
\bar{c}_{i,v} & \in & \sigma_{u}\left(\sum_{r=1}^{u-1}\mathfrak{\bar{A}}\sigma_{r}\bar{U}_{r,m}+\sigma_{u}^{2}\bar{U}_{u,m}+\mathfrak{\bar{A}}\bar{O}_{m}+\bar{O}_{m}^{2}\right)+\sigma_{u}\mathfrak{\bar{A}}\bar{H}_{m}\\
& = & \sigma_{u}\left(\mathfrak{\bar{A}}\bar{H}_{m}+\bar{O}_{m}^{2}\right).
\end{eqnarray*}
\end{itemize}
Now, as $\det(\delta_{v})=1$ for every $v\neq u$, $\det(\overline{\alpha\cdot\prod_{u\neq v=1}^{n}\delta_{v}})=\det(\overline{\alpha})=\prod_{i=1}^{u}x_{i}^{s_{i}m^{2}}$.
However, as the entries of $\bar{C}$ have the above properties, this
determinant is mapped to $1$ under the projection $\sigma_{u}\mapsto0$.
Thus, $\det(\overline{\alpha\cdot\prod_{u\neq v=1}^{n}\delta_{v}})$
is of the form $x_{u}^{s_{u}m^{2}}$. Now, set $i_{0}\neq u$, and
denote:
\[
\zeta=I_{n}+\sigma_{u}\mu_{u,m^{2}}E_{i_{0},i_{0}}-\sigma_{i_{0}}\mu_{u,m^{2}}E_{i_{0},u}=\left(I_{n}+\sigma_{u}E_{i_{0},i_{0}}-\sigma_{i_{0}}E_{i_{0},u}\right)^{m^{2}}\in IA^{m}.
\]
By the computation in the proof of Proposition \ref{prop:reduction1},
we obtain that:
\[
\mu_{u,m^{2}}\in\sigma_{u}^{2}U_{u,m}+\sigma_{u}O_{m}+O_{m}^{2}
\]
and thus:
\begin{eqnarray*}
\sigma_{u}\mu_{u,m^{2}} & \in & \sigma_{u}\left(\sigma_{u}^{2}U_{u,m}+\sigma_{u}O_{m}+O_{m}^{2}\right)\subseteq\sigma_{u}\left(\mathfrak{\bar{A}}\bar{H}_{m}+\bar{O}_{m}^{2}\right)\subseteq J_{m,u,i_{0}}\\
\sigma_{i_{0}}\mu_{u,m^{2}} & \in & \sigma_{i_{0}}\left(\sigma_{u}^{2}U_{u,m}+\sigma_{u}O_{m}+O_{m}^{2}\right)\subseteq J_{m,u,u}
\end{eqnarray*}
so $\zeta\in IA^{m}\cap\mathbb{J}_{m,u}$. In addition $\det\left(\zeta\right)=x_{u}^{m^{2}}$.
Therefore, $\overline{\alpha\cdot\prod_{v\neq u}\delta_{v}\zeta^{-s_{u}}}$,
written as $I_{n}+\bar{C}$, has the following properties:
\begin{itemize}
\item The entries of the $u$-th row of $\bar{C}$ are all $0$.
\item For every $i,v\neq u$ we have: $\bar{c}_{i,v}\in\sigma_{u}\left(\mathfrak{\bar{A}}\bar{H}_{m}+\bar{O}_{m}^{2}\right)$,
so we can write $\bar{c}_{i,v}=\sigma_{u}d_{i,v}$ for some $d_{i,v}\in\mathfrak{\bar{A}}\bar{H}_{m}+\bar{O}_{m}^{2}$.
\item For every $1\leq i\leq n$ we have: $\sum_{k=1}^{u}\sigma_{k}\bar{c}_{i,k}=0$,
so $\bar{c}_{i,u}=-\sum_{k=1}^{u-1}\sigma_{k}d_{i,k}$ .
\item $\det\left(I_{n}+\bar{C}\right)=1$.
\end{itemize}
I.e.:
\[
\bar{c}_{i,j}=\begin{cases}
0 & i=u\\
-\sum_{k=1}^{u-1}\sigma_{k}d_{i,k} & j=u\\
\sigma_{u}d_{i,j} & i,j\neq u
\end{cases}
\]
for some $d_{i,j}\in\mathfrak{\bar{A}}\bar{H}_{m}+\bar{O}_{m}^{2}$,
and $\det\left(I_{n}+\bar{C}\right)=1$.
Define now $\beta=\prod_{v\neq u}\delta_{v}\zeta^{-s_{u}}$ and $\gamma^{-1}$
as follows ($\gamma^{-1}{}_{i,j}$ denotes the $\left(i,j\right)$-th entry of
$\gamma^{-1}$; notice that the sums in the $u$-th column run up to
$n$ and not up to $u-1$ as in $I_{n}+\bar{C}$, since we need the following
matrix to be an element of $IA\left(\Phi_{n}\right)$; however, clearly,
this addition does not change the value of the determinant, which
remains $1$):
\[
\gamma^{-1}{}_{i,j}=\begin{cases}
0 & i=u\\
-\sum_{u\neq k=1}^{n}\sigma_{k}d_{i,k} & j=u\\
\sigma_{u}d_{i,j} & i,j\neq u.
\end{cases}
\]
So $\beta\in IA^{m}\cap\mathbb{J}_{m,u}$. In addition, as $d_{i,j}\in\mathfrak{\bar{A}}\bar{H}_{m}+\bar{O}_{m}^{2}\subseteq H_{m}$
and $\det\left(\gamma^{-1}\right)=\det\left(I_{n}+\bar{C}\right)=1$,
$\gamma\in ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)$. Moreover, $\gamma\in\mathbb{J}_{m,u}$.
Hence, we obtained $\beta\in IA^{m}\cap\mathbb{J}_{m,u}$ and $\gamma\in ISL_{n-1,u}\left(\sigma_{u}H_{m}\right)\cap\mathbb{J}_{m,u}$
such that $\overline{\gamma\alpha\beta}=I_{n}$, i.e. $\gamma\alpha\beta\in\tilde{\mathbb{A}}{}_{u}$,
as required.
\section{\label{sec:Inferences}Remarks and problems for further research}
We will now prove Theorem \ref{thm:full}, which asserts that $C\left(\Phi_{n}\right)$
is abelian for every $n\geq4$. But first, let us state the following
proposition, which is slightly more general than Lemma 2.1 in \cite{key-5},
but is proven by similar arguments:
\begin{prop}
\label{prop:exact-1}Let $1\to G_{1}\overset{\alpha}{\to}G_{2}\overset{\beta}{\to}G_{3}\to1$
be a short exact sequence of groups. Assume also that $G_{1}$ is
finitely generated. Then:
1. The sequence $\hat{G}_{1}\overset{\hat{\alpha}}{\to}\hat{G}_{2}\overset{\hat{\beta}}{\to}\hat{G}_{3}\to1$
is also exact.
2. The kernel $\ker(\hat{G}_{1}\overset{\hat{\alpha}}{\to}\hat{G}_{2})$
is central in $\hat{G}_{1}$.\end{prop}
\begin{proof}
(of Theorem \ref{thm:full}) By Proposition \ref{prop:exact-1}, the
commutative exact diagram:
\[
\begin{array}{ccccccccc}
1 & \to & IA\left(\Phi_{n}\right) & \to & Aut\left(\Phi_{n}\right) & \to & GL_{n}\left(\mathbb{Z}\right) & \to & 1\\
& & & \searrow & \downarrow & & \downarrow\\
& & & & Aut(\hat{\Phi}_{n}) & \to & GL_{n}(\hat{\mathbb{Z}}) & .
\end{array}
\]
gives rise to the commutative exact diagram:
\[
\begin{array}{ccccccc}
\widehat{IA\left(\Phi_{n}\right)} & \to & \widehat{Aut\left(\Phi_{n}\right)} & \to & \widehat{GL_{n}\left(\mathbb{Z}\right)} & \to & 1\\
& \searrow & \downarrow & & \downarrow\\
& & Aut(\hat{\Phi}_{n}) & \to & GL_{n}(\hat{\mathbb{Z}})
\end{array}
\]
Now, as $n\geq4$, by the CSP for $GL_{n}\left(\mathbb{Z}\right)$,
the map $\widehat{GL_{n}\left(\mathbb{Z}\right)}\to GL_{n}(\hat{\mathbb{Z}})$
is injective, so one obtains by diagram chasing that $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)=\ker(\widehat{IA\left(\Phi_{n}\right)}\to Aut(\hat{\Phi}_{n}))$
is mapped onto $C\left(\Phi_{n}\right)=\ker(\widehat{Aut\left(\Phi_{n}\right)}\to Aut(\hat{\Phi}_{n}))$
through the map $\widehat{IA\left(\Phi_{n}\right)}\to\widehat{Aut\left(\Phi_{n}\right)}$.
In particular, as by Theorem \ref{thm:main} $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
is central in $\widehat{IA\left(\Phi_{n}\right)}$ for every $n\geq4$,
it is also abelian, and thus $C\left(\Phi_{n}\right)$ is an image
of an abelian group, and therefore abelian, as required. \end{proof}
\begin{problem}
\label{prob:Is1}Is $C\left(\Phi_{n}\right)$ not finitely generated?
Is it trivial?
\end{problem}
We proved in \cite{key-14} that $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
is not finitely generated for every $n\geq4$. This may suggest that
$C\left(\Phi_{n}\right)$ is also not finitely generated, or at least
not trivial. Moreover, if $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
were not central in $\widehat{IA\left(\Phi_{n}\right)}$, we could
use the fact that $IA\left(\Phi_{n}\right)$ is finitely generated
for every $n\geq4$ \cite{key-24}, and by the second part of Proposition
\ref{prop:exact-1} we could derive that the image of $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$
in $\widehat{Aut\left(\Phi_{n}\right)}$ is not trivial. However,
we showed that $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$ is
central in $\widehat{IA\left(\Phi_{n}\right)}$, so it is possible
that $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)\subseteq\ker(\widehat{IA\left(\Phi_{n}\right)}\to\widehat{Aut\left(\Phi_{n}\right)})$
and thus $C\left(\Phi_{n}\right)$ is trivial.
We saw in \cite{key-14} that for every $i$ there is a natural surjective
map
\[
\hat{\rho}_{i}:\widehat{IA\left(\Phi_{n}\right)}\twoheadrightarrow\widehat{GL_{n-1}\left(\mathbb{Z}[x_{i}^{\pm1}],\sigma_{i}\mathbb{Z}[x_{i}^{\pm1}]\right)}.
\]
These maps enabled us to show in \cite{key-14} that for every $n\geq4$,
$C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)$ can be written as
\[
C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)=(C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)\cap_{i=1}^{n}\ker\hat{\rho}_{i})\rtimes\prod_{i=1}^{n}C_{i}
\]
where
\begin{eqnarray*}
C{}_{i} & \cong & \ker(\widehat{GL_{n-1}\left(\mathbb{Z}[x_{i}^{\pm1}],\sigma_{i}\mathbb{Z}[x_{i}^{\pm1}]\right)}\to GL_{n-1}(\widehat{\mathbb{Z}[x_{i}^{\pm1}]}))\\
& \cong & \ker(\widehat{SL_{n-1}\left(\mathbb{Z}[x_{i}^{\pm1}]\right)}\to SL_{n-1}(\widehat{\mathbb{Z}[x_{i}^{\pm1}]}))
\end{eqnarray*}
are central in $\widehat{IA\left(\Phi_{n}\right)}$. Here we showed
that also $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)\cap_{i=1}^{n}\ker\hat{\rho}_{i}$
lies in the center of $\widehat{IA\left(\Phi_{n}\right)}$, but we are still
not able to determine whether:
\begin{problem}
\label{prob:Is2}Is $C\left(IA\left(\Phi_{n}\right),\Phi_{n}\right)=\prod_{i=1}^{n}C_{i}$
or does it contain more elements?
\end{problem}
It seems that having the answer to Problem \ref{prob:Is2} will help
to solve Problem \ref{prob:Is1}.
|
1,116,691,498,550 | arxiv | \section{Introduction}
Efficient and natural human communication relies on implicit shared knowledge and underlying reasoning processes. Despite rapid progress in language-enabled AI agents for tasks like question answering and more, state-of-the-art systems still struggle to explain their decisions in natural language. To improve their interpretability and robustness, a number of multi-hop explanation generation and identification benchmarks based on large, unstructured corpora of facts have been created~\cite{mihaylov2018can,khot2020qasc,jhamtani-clark-2020-learning}. However, when generating explanation chains, powerful deep neural networks can be too cumbersome to use in large-scale applications, while the fastest systems lack reliability, as they depend on syntactic features and ignore semantic relations between concepts~\cite{banerjee2020knowledge,jhamtani-clark-2020-learning}. In this work, we present novel approaches to integrate efficient syntactic retrieval methods with flexible semantic modeling methods for multi-hop explanation.
Our methods simulate a multi-hop reasoning process from the retrieval and synthesis of evidence to re-ranking candidate explanations.
\section{Related Work}
Recent work has focused on different aspects of multi-hop reasoning for question answering and related natural language understanding tasks. One line of work has incorporated highly structured knowledge graphs into language understanding by
combining graphical methods with language models~\cite{lin-etal-2019-kagnet,ji-etal-2020-language,yasunaga-etal-2021-qa},
augmenting language model inputs with relational knowledge~\cite{zhang-etal-2019-ernie,chen-etal-2020-improving,xu-etal-2021-fusing},
and applying language models to relational knowledge to infer multi-hop reasoning paths through knowledge graphs \cite{wang-etal-2020-connecting}. Others have further explored training language models with semi-structured relational knowledge \cite{sap2019atomic,bosselut-etal-2019-comet,mostafazadeh-etal-2020-glucose,Hwang2020COMETATOMIC}, i.e., where nodes are natural language sentences rather than canonicalized concepts, to later use for generating multi-hop explanations in natural language \cite{shwartz-etal-2020-unsupervised,Bosselut2019DynamicKG}.
For generating multi-hop explanations from entirely unstructured corpora, other work has explored using multi-step syntactic information retrieval methods \cite{jhamtani-clark-2020-learning}, and modeling such corpora as knowledge graphs with relations induced by shared mentions of concepts between documents \cite{dhingra2020differentiable,lin-etal-2021-differentiable}. While the former approach lacks the ability to capture semantic relationships between evidence sentences, the latter demands high time and space complexity both in generating a graph from corpora of millions of facts, and in everyday uses of adding or removing facts from the corpus.
More recent work has used pre-trained word embeddings to add some lightweight semantic representation to syntactic evidence retrieval \cite{yadav-etal-2021-want}.
Unlike these approaches, we present a flexible and relatively lightweight pipeline to apply both syntactic and learned, contextualized semantic approaches in multi-hop explanation generation, including evidence retrieval, multi-hop reasoning over evidence, and re-ranking candidate explanations.
\section{Problem Statement}
In the research community, two types of explanation have been studied: introspective explanation and justification explanation~\cite{biran2017explanation}. The former explicates how a decision is made, and the latter gathers evidence to support a decision. In this study, we focus on the task of justification explanation. Specifically, we explore the problem of generating multi-hop explanations to support the answer to a natural language question, where the explanation chain is generated from an unstructured corpus of declarative facts. Unstructured natural language corpora are suitable knowledge resources for human-AI interaction, as humans can easily support reasoning by providing their own commonsense knowledge in short, natural language statements. This carefully restricted problem of explanation generation consists of two key challenges. First, we must solve the \textit{retrieval} task to gather candidate supporting evidence from the corpus. Second, we need to invoke a \textit{multi-hop reasoning} process to connect pieces of evidence to form the most valid explanation to justify the answer to the question.
\subsection{Datasets}
To explore this problem, we consider two datasets. First, the Question Answering via Sentence Composition (QASC) dataset provides about 10,000 multiple-choice science questions \cite{khot2020qasc}. QASC is a challenging problem, as each question requires composing two facts from a corpus of about 17 million declarative facts to connect the question and its correct answer.
For example, given the question ``\textit{Differential heating of air} can be harnessed for what?'' and correct answer ``\textit{electricity production},'' the answer can be explained by composing the facts ``\textit{Differential heating of air} produces wind'' and ``Wind is used for \textit{producing electricity},'' which connect the question and answer.
QASC includes a gold, human-curated 2-hop explanation from the corpus for each question-answer pair.
Meanwhile, the Explainable QASC (eQASC) dataset adds 10 automatically generated explanations for each question-answer pair, each of which are labeled by annotators as valid or invalid \cite{jhamtani-clark-2020-learning}.
While the state-of-the-art accuracy on QASC has reached up to 90\%,\footnote{See \url{https://allenai.org/data/qasc}.} only 76\% of questions have any valid explanation chains in eQASC. This indicates that \textit{explaining} the answers to questions in QASC is a more challenging problem than answering them. This motivates us to further explore the problem of generating multi-hop explanations for QASC.
\section{Methods}
In our experiments toward multi-hop explanation generation, we consider syntactic and semantic multi-hop retrieval methods, then explore ways to re-rank retrieved explanations to reduce the pool of candidates.
\subsection{Syntactic Methods}\label{sec:syntactic}
Syntactic information retrieval methods enable quick searching of millions of documents.
eQASC was originally generated using ElasticSearch, \footnote{https://www.elastic.co/} a fast but primarily syntactic search engine based on keyword overlap. After indexing the QASC corpus facts into an ElasticSearch index, \citet{jhamtani-clark-2020-learning} used a simple procedure (shown in Figure~\ref{fig:syntactic_pipeline}) to generate a 2-hop explanation for each question-answer pair from QASC. First, query the corpus for $N=20$ candidate first facts. For each candidate first fact, query the corpus for $M=4$ candidate second facts, where each candidate second fact must contain a word that appears in the question-answer pair, and a word that appears in the first fact. The purpose of this restriction is to force the resulting chain of facts to connect concepts in the question and answer through intermediate concepts. Lastly, from the set product of all candidate first facts and all candidate second facts, select up to $K=10$ candidate explanation chains, ranked by the sum of retrieval scores from the ElasticSearch engine.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{syntactic_pipeline.pdf}
\vspace{-5pt}
\caption{Syntactic pipeline used to generate multi-hop explanations in eQASC. First, the question-answer (Q-A) pair is used to query the ElasticSearch index for $N$ candidate first facts, each of which is used to query it for $M$ candidate second facts. All candidate first and second facts are paired, and the top-scored $K$ chains are returned as explanations. }
\vspace{-1em}
\label{fig:syntactic_pipeline}
\end{figure}
\paragraph{Expanding syntactic retrieval.}
This is a simple, fast approach to generate a large number of candidate explanations. To improve the likelihood of generating a valid explanation, we can expand and diversify the search results by increasing $N$, $M$, and $K$. Specifically, we increase each of them to 200.
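As a rough illustration, the expanded two-hop retrieval can be sketched as follows (here \texttt{search(query, k)} is a placeholder for a scored ElasticSearch query and the word-overlap test is simplified; this is a sketch of the control flow rather than the exact implementation used to generate eQASC):
\begin{verbatim}
def content_words(text):
    # naive whitespace tokenization; a real system would also
    # remove stop words and punctuation
    return set(text.lower().split())

def two_hop_chains(question, answer, search, N=200, M=200, K=200):
    qa = question + " " + answer
    qa_words = content_words(qa)
    chains = []
    for f1, s1 in search(qa, N):            # candidate first facts
        f1_words = content_words(f1)
        for f2, s2 in search(f1, M):        # candidate second facts
            f2_words = content_words(f2)
            # the second fact must touch both the Q-A pair and the first fact
            if f2_words & qa_words and f2_words & f1_words:
                chains.append((f1, f2, s1 + s2))
    chains.sort(key=lambda c: c[2], reverse=True)
    return chains[:K]
\end{verbatim}
Increasing $N$, $M$, and $K$ only enlarges and diversifies the candidate pool; the ranking criterion is unchanged.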
\subsection{Semantic Methods}\label{sec:semantic}
Alternatively, semantic information retrieval methods can enable stronger meaning representation than syntactic methods with a trade-off of search speed. Typical approaches generate a semantic vector embedding for all documents in a corpus. They then generate a comparable embedding of the query, and use vector similarity measures to rank documents.
\paragraph{Dense passage retrieval.}
Dense passage retrieval (DPR) is a recent approach to semantic information retrieval which learns dual encoders for queries and documents \cite{karpukhin-etal-2020-dense}. They are trained such that the query and document encoders generate similar embeddings for semantically similar queries and documents. Similarity is measured by inner product of vectors, and is maximized for matching queries and documents, but minimized for irrelevant queries and documents.
We can then use the document encoder to index the facts in a corpus, and efficiently query the index using the embedding from the query encoder.
For each question-answer pair in QASC and the two facts in its gold explanation chain, we can train a dense passage retriever to generate similar embeddings for the question-answer pair (query) and these facts (documents), then encode all facts in the QASC corpus for search purposes.
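Concretely, once the encoders are trained, retrieval reduces to maximum inner product search (MIPS) over the pre-computed fact embeddings. A minimal sketch in NumPy (here \texttt{fact\_vecs} and \texttt{encode\_query} stand in for the encoded corpus and the trained query encoder, respectively):
\begin{verbatim}
import numpy as np

def mips(query_vec, fact_vecs, k):
    # fact_vecs: (num_facts, dim) array from the fact encoder
    # query_vec: (dim,) array from the query encoder
    scores = fact_vecs @ query_vec        # inner products
    top = np.argsort(-scores)[:k]         # indices of the k best facts
    return top, scores[top]

# usage: ids, _ = mips(encode_query(question + " " + answer), fact_vecs, k=5)
\end{verbatim}
In practice, an approximate nearest-neighbor index would replace the exhaustive dot product for a corpus of about 17 million facts.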
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{semantic_pipeline.pdf}
\vspace{-1em}
\caption{Proposed semantic explanation pipeline. Facts are encoded using a fact encoder (in blue) and stored in a dense index, while the question-answer pair is encoded by a query encoder (in yellow). Maximum inner product search (MIPS) is used to query the index for $N$ candidate first facts, which are each re-encoded (in green), then used to query the index again for $M$ candidate second facts. All candidate first and second facts are paired, and the top-scored $K$ chains are returned as explanations. }
\vspace{-1em}
\label{fig:semantic expl pipeline}
\end{figure*}
\paragraph{Multi-hop reasoning.}
To generate a multi-hop explanation using this approach, we need to facilitate reasoning over the facts and queries. Given a question-answer pair from QASC, we first query the DPR index for {$N=5$} facts. To reduce error accumulation in generating the chain of facts, we then \textit{re-encode} each fact into a new query embedding incorporating the candidate fact and the original query.
The re-encoder is a lightweight feedforward network inspired by a similar fact-translating function proposed in \citet{lin-etal-2021-differentiable}. Given the embeddings $q_{QA}$ and $d_1$ for the question-answer pair query and first fact document respectively, we use the gold explanation chains from QASC to learn the re-encoder $g(q_{QA}, d_1)$. Specifically, if $d_1$ is the embedding of the first fact in the gold explanation chain, we maximize the inner product between the re-encoded output $q_{r}$ and the document embedding $d_2$ for the second gold fact. Next, we query the DPR index again using $q_r$ to obtain {$M=2$} candidate second facts. To reduce noise, we filter out any facts that mention no concepts in either the question or answer.
Lastly, from the set product of all candidate first facts and all candidate second facts, select up to $K=10$ candidate explanation chains, ranked by the sum of retrieval scores, i.e., inner products when querying the DPR index. Our semantic explanation pipeline is shown in full in Figure~\ref{fig:semantic expl pipeline}. It is worth noting that our lightweight re-encoder operation can extend to any number of hops.
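A minimal sketch of the re-encoder and its training signal is given below (PyTorch; the hidden size, depth, and loss shown are illustrative assumptions rather than the exact configuration used):
\begin{verbatim}
import torch
import torch.nn as nn

class ReEncoder(nn.Module):
    # maps (query embedding, first-fact embedding) -> new query embedding
    def __init__(self, dim=768, hidden=768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, q_qa, d1):
        return self.net(torch.cat([q_qa, d1], dim=-1))

# training signal on a gold chain (q_qa, d1, d2): maximize the inner
# product of the re-encoded query with the gold second fact, e.g.
#   loss = -(re_encoder(q_qa, d1) * d2).sum(dim=-1).mean()
\end{verbatim}
At inference time, each of the $N$ candidate first facts is re-encoded together with the original query and used to query the index again for the $M$ candidate second facts.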
\subsection{Re-Ranking Candidate Explanations}
Both of our syntactic and semantic multi-hop retrieval systems can quickly propose candidate explanation chains for questions in QASC. However, both approaches over-generate candidates, and the high number of candidates (i.e., up to 200) limits the practical usefulness of the systems to an end user. As such, we lastly propose a re-ranker for candidates based on large-scale, pre-trained language models \cite{devlin-etal-2019-bert,liu2019roberta}. Specifically, we use the gold explanation chains from QASC to fine-tune a language model to the classification task of whether or not a candidate explanation is valid for a question-answer pair. We then re-rank a pool of candidates based on the system's estimated likelihood that each explanation chain is valid, and keep only the top $K=10$ candidates for a direct comparison to the syntactic approach used to generate eQASC.
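A sketch of such a re-ranker is shown below (using the Hugging Face \texttt{transformers} library; the checkpoint name and the way the question, answer, and facts are concatenated are placeholders rather than the exact input format used):
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # valid vs. invalid chain

def rerank(question, answer, chains, k=10):
    texts = [f"{question} {answer} [SEP] {f1} [SEP] {f2}"
             for f1, f2, _ in chains]
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    scores = torch.softmax(logits, dim=-1)[:, 1]   # estimated P(valid)
    order = torch.argsort(scores, descending=True)[:k]
    return [chains[int(i)] for i in order]
\end{verbatim}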
\section{Experimental Results}
We next apply these approaches to the task of selecting 2-hop reasoning chains for question-answer pairs in QASC, and directly compare our results to the original procedure to generate eQASC. We compare systems by their individual \textit{gold retrieval rate} on the validation set for QASC, i.e., the percentage of question-answer pairs for which the gold explanation chain from QASC was successfully reproduced.\footnote{As the ordering of facts in QASC explanation chains does not typically matter, the gold retrieval rate counts both the forward and reverse forms of gold explanation chains.} This serves as an indicator of the quality of generated explanations, as it suggests that generated explanations tend to look more like those curated from the corpus by humans.
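For reference, the metric is straightforward to compute once the top-$K$ chains are collected (a small sketch; the data structures are illustrative):
\begin{verbatim}
def gold_retrieval_rate(predictions, gold):
    # predictions: {question_id: list of (fact1, fact2) chains (top-K)}
    # gold:        {question_id: (gold_fact1, gold_fact2)}
    hits = 0
    for qid, (g1, g2) in gold.items():
        chains = predictions.get(qid, [])
        if (g1, g2) in chains or (g2, g1) in chains:  # order-insensitive
            hits += 1
    return hits / len(gold)
\end{verbatim}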
\subsection{Expanded Syntactic Explanation}
As mentioned earlier, we first expanded the ElasticSearch-based approach used to generate eQASC by increasing the search hyperparameters $N$, $M$, and $K$ each to 200. Selected results from this are listed in Table~\ref{tab:expand syntactic}. By only increasing $K$ (i.e., the number of candidate explanation chains considered) to 200, the retrieval rate increases from 31.1\% to 37.0\%. When increasing $N$ and $M$ (i.e., the number of candidate first and second facts considered) also to 200, the retrieval rate further increases to 46.5\%, a net 15.4\% gain.
\begin{table}
\centering
\footnotesize
\begin{tabular}{ccc|c}
\toprule
\textbf{N} & \textbf{M} & \textbf{K} & \textbf{Gold Retrieval Rate (\%)} \\\midrule
20 & 4 & 10 & 31.1 \\
20 & 4 & 200 & 37.0 \\
200 & 200 & 200 & \textbf{46.5} \\
\bottomrule
\end{tabular}
\normalsize
\caption{Gold explanation chain retrieval rates for syntactic multi-hop retrieval with ElasticSearch on QASC validation set. The first row indicates the original search hyperparameters used to generate eQASC, while the last two rows increase hyperparameters to expand and diversify the search.}
\vspace{-1em}
\label{tab:expand syntactic}
\end{table}
\subsection{Syntactic-Semantic Multi-Hop Explanation}
Next, we incorporate our semantic multi-hop retrieval process powered by DPR.
\paragraph{Training details.}
The dual encoders for DPR are learned starting from pre-trained \textsc{BERT}-base~\cite{devlin-etal-2019-bert}. The best encoders are selected based on the mean squared error between embeddings for matching question-answer pairs and facts on the QASC validation set. The batch size is fixed at 16, while learning rate and number of training epochs are selected based on a grid search.
For the re-encoder, the training batch size, learning rate, and number of epochs are similarly selected based on a grid search, minimizing the mean squared error between the output re-encoded queries and target fact embeddings on the validation set.
\paragraph{Results.}
Table~\ref{tab:semantic} compares the gold retrieval rate of various combinations of the syntactic and semantic approaches for multi-hop retrieval on the QASC validation and testing sets.\footnote{When combining the syntactic and semantic approaches, we replace up to the lowest-ranked 25\% of syntactic candidate explanation chains with the top semantic candidate explanation chains.} Our results show that while using only the semantic candidate explanation chains leads to a 13.9\% gold retrieval rate at best, combining the expanded syntactic and semantic candidates gives us the best result of up to 51.1\% gold retrieval rate, outperforming the case where only the expanded syntactic candidates are considered. Thus, the semantic approach finds some of the missing gold explanations that the syntactic approach misses, suggesting that both syntactic and semantic approaches are needed for generating the best-quality explanations on QASC questions.
\begin{table}
\centering
\footnotesize
\begin{tabular}{c|cc}
\toprule
\textbf{Approach} & \multicolumn{2}{c}{\textbf{Gold Retrieval Rate (\%)}} \\
& \textit{Validation} & \textit{Test} \\\midrule
syntactic & 37.0 & 40.2 \\
syntactic (exp.) & 46.5 & 49.3 \\
semantic & 10.8 & 13.9 \\
syntactic (exp.) + semantic & \textbf{49.9} & \textbf{51.1} \\
\bottomrule
\end{tabular}
\normalsize
\caption{Gold explanation chain retrieval rates (top $K=200$ candidates) for combinations of multi-hop retrieval approaches on QASC. Syntactic refers to the second result from Table~\ref{tab:expand syntactic}, while syntactic (exp.) refers to the expanded third result. Semantic refers to the previously introduced DPR-based multi-hop retrieval approach.}
\vspace{-1em}
\label{tab:semantic}
\end{table}
\subsection{LM Re-Ranking}
While our results improve the gold retrieval rate by a wide margin, recall that our multi-hop retrieval approaches for QASC increase the number of candidate explanation chains $K$ to 200. Such a large set of candidates is not useful in practice, as a human user would have to sort through a cumbersome number of explanations in order to judge the machine's understanding of the question and answer. As such, we lastly present our experiments on \textit{re-ranking} candidate explanation chains, which enables us to truncate our results to $K=10$ top candidate explanation chains without massive performance drops, and consequently compare our approach directly to the original approach used to generate eQASC.
\begin{table}
\centering
\footnotesize
\begin{tabular}{c|c|cc}
\toprule
\textbf{Retrieval Approach} & \textbf{Re-Ranker} &\multicolumn{2}{c}{\textbf{Gold RR (\%)}} \\
& & \textit{Val.} & \textit{Test} \\\midrule
syntactic & -- & 31.1 & 34.1 \\\midrule
syntactic (exp.) & \textsc{BERT} & 36.3 & 34.0 \\
syntactic (exp.) + semantic & \textsc{BERT} & {36.4} & {34.1} \\\midrule
syntactic (exp.) & \textsc{RoBERTa} & 37.9 & 36.2 \\
syntactic (exp.) + semantic & \textsc{RoBERTa} & \textbf{38.1} & \textbf{36.4} \\
\bottomrule
\end{tabular}
\normalsize
\caption{Gold explanation chain retrieval rates (RR; top $K=10$ candidates) for combinations of multi-hop retrieval approaches on QASC, re-ranked by fine-tuned language models. Syntactic refers to the original approach used to generate eQASC, while syntactic (exp.) refers to the expanded third result from Table~\ref{tab:expand syntactic}. Semantic refers to our proposed DPR-based multi-hop retrieval approach.}
\vspace{-1em}
\label{tab:reranking}
\end{table}
\paragraph{Training details.}
Using the re-ranking approach described earlier, we fine-tune the \textsc{BERT}~\cite{devlin-etal-2019-bert} and \textsc{RoBERTa}~\cite{liu2019roberta}
pre-trained language models.\footnote{For both models, we use the ``base'' form which has 12 hidden layers, a hidden dimension of 768, and 12 attention heads.} Models are trained with a 1:2 ratio of gold and invalid explanation chains, with 3 unique invalid explanation chains randomly sampled from ElasticSearch results per gold explanation chain (forward and reverse forms). Models are selected based on instances achieving the highest top-1 gold retrieval rate, i.e., proportion of question-answer pairs where the gold explanation chain is ranked highest, on the QASC validation set similarly redistributed in this way.
\paragraph{Results discussion.}
Table~\ref{tab:reranking} compares the final gold retrieval rates for the top $K=10$ re-ranked candidates from various approaches. While the original syntactic approach for generating eQASC achieves a respective 31.1\% and 34.1\%
gold retrieval rate on the validation and testing sets, our expanded syntactic approach achieves up to 37.9\% and 36.2\% gold retrieval rate with \textsc{RoBERTa}. Again with \textsc{RoBERTa}, our syntactic-semantic multi-hop retrieval achieves the best results of 38.1\% and 36.4\% on the validation and testing sets, respectively, exceeding the baselines.
After narrowing down from 200 candidate explanations to 10 with the re-ranker, we retain up to a 7.0\% net improvement of gold retrieval rate compared with the baseline. Given that the net gain was 13.9\% with 200 candidate explanations, one future direction is to improve the re-ranker performance, so that we can retain more of this improvement.
To achieve this, one option is to revisit the re-ranker training, which did not incorporate negative examples proposed by DPR, and may experience generalization issues.
\section{Conclusion}
In this work, by utilizing a small amount of ground truth supervision, we explored approaches to improve the generation of multi-hop explanations from a corpus of declarative facts. We show that both fast, syntactic methods and slow, semantic methods are useful for gathering relevant evidence for explanation.
To facilitate multi-hop reasoning from one piece of evidence to the next, we had some success in using a lightweight feedforward re-encoder, as opposed to state-of-the-art graph-based approaches that consume too much time and memory for practical online use.
As many of our approaches over-generate candidate explanations, we lastly explored using pre-trained language models to re-rank and filter candidates. Our results suggest this is a significant challenge, and future work may further explore this problem.
|
1,116,691,498,551 | arxiv | \section{Introduction and main result}
This paper is concerned with estimates on moments of negative eigenvalues of Schr\"odinger operators $ (-\Delta)^s -\mathcal C_{s,d} |x|^{-2s} - V$ in $L_2(\mathbb{R}^d)$ in terms of integrals of the potential $V$. Here
\begin{equation} \label{eq:csd}
\mathcal C_{s,d} := 2^{2s} \frac{\Gamma((d+2s)/4)^2}{\Gamma((d-2s)/4)^2}
\end{equation}
is the sharp constant in the Hardy inequality
\begin{equation}\label{eq:hardy}
\int_{\mathbb{R}^d} |p|^{2s} |\hat u(p)|^2 \,dp
\geq \mathcal C_{s,d} \int_{\mathbb{R}^d} |x|^{-2s} |u(x)|^2 \,dx\,,
\qquad u\in C_0^\infty(\mathbb{R}^d)\,,
\end{equation}
which is valid for $0<s<d/2$ \cite{He} and we write $\hat u(p) := (2\pi)^{-d/2} \int_{\mathbb{R}^d} u(x) e^{-ip\cdot x}\,dx$ for the Fourier transform of $u$. In \cite{FrLiSe1} we have shown that for any $\gamma>0$, $0<s\leq 1$ and $0<s<d/2$ one has
\begin{equation}\label{eq:hltintro}
\tr\left((-\Delta)^s -\mathcal C_{s,d} |x|^{-2s} - V\right)_-^\gamma
\leq L_{\gamma,d,s}^{\mathrm{HLT}} \int_{\mathbb{R}^d} V(x)_+^{\gamma+d/2s} \,dx
\end{equation}
with a constant $L_{\gamma,d,s}^{\mathrm{HLT}}$ independent of $V$. Here and in the following, $t_\pm:=\max\{\pm t,0\}$ denote the positive and negative parts of a real number or a self-adjoint operator $t$. The case $s=1$ in \eqref{eq:hltintro} has been shown earlier in \cite{EkFr}. We refer to \eqref{eq:hltintro} as the \emph{Hardy-Lieb-Thirring inequality} since it is (up to the value of the constant) an improvement of the Lieb-Thirring inequality \cite{LiTh}
\begin{equation}\label{eq:lt}
\tr\left((-\Delta)^s - V\right)_-^\gamma
\leq L_{\gamma,d,s} \int_{\mathbb{R}^d} V(x)_+^{\gamma+d/2s} \,dx \,.
\end{equation}
It should be pointed out that if $0<s<d/2$, then \eqref{eq:lt} is valid even for $\gamma=0$ (as first shown by Cwikel, Lieb, and Rozenblum) while \eqref{eq:hltintro} is not. We refer to the surveys \cite{LaWe,Hu} for background and references concerning \eqref{eq:lt}.
The original motivation for \eqref{eq:lt} came from the problem of stability of non-relati\-vis\-tic matter \cite{LiSe}. Likewise, our motivation for \eqref{eq:hltintro} was stability of \emph{relativistic} matter in \emph{magnetic fields}. For this problem it is crucial that \eqref{eq:hltintro} continues to hold if $(-\Delta)^s$ is replaced by $(D-A)^{2s}$ with a magnetic vector potential $A\in L_{2,{\rm loc}}(\mathbb{R}^d,\mathbb{R}^d)$, and that the constant can be chosen independently of $A$. Here, as usual, $D=-i\nabla$ and the operator $(D-A)^{2s}:=((D-A)^2)^s$ is defined using the spectral theorem. Using the magnetic version of \eqref{eq:hltintro} we could prove stability of relativistic matter in magnetic fields up to and including the critical value of the nuclear charge $\alpha Z=2/\pi=\mathcal C_{1/2,3}$; see \cite{FrLiSe1} and also \cite{FrLiSe2}.
The purpose of this paper is fourfold.
\begin{enumerate}
\item We will give a new, much simpler proof of \eqref{eq:hltintro}. While the method in \cite{FrLiSe1} relied on rather involved relations between Sobolev inequalities and decay estimates on heat kernels, the present proof uses nothing more than \eqref{eq:lt} (with $\gamma=0$ and with $s$ replaced by some $t<s$) and the generalization of a powerful (though elementary to prove) new inequality by Solovej, S\o rensen and Spitzer \cite{SoSoSp}.
\item We will extend \eqref{eq:hltintro} to its optimal parameter range $0<s<d/2$. For $d\geq 3$ and $1<s<d/2$ this is a new result, even for integer values of $s$ when the operator is local. This result can not be attained with the method of \cite{FrLiSe1}, since positivity properties of the heat kernel break down for $s>1$.
\item Though our new proof of \eqref{eq:hltintro} does \emph{not} work in the presence of a magnetic field, we shall prove a new operator-theoretic result, which says that any non-magnetic Lieb-Thirring inequality implies a magnetic Lieb-Thirring inequality (with possibly a different constant). This recovers, in particular, that \eqref{eq:hltintro} holds if $(-\Delta)^s$ is replaced by $(D-A)^{2s}$ and $0<s\leq 1$. (The reason for the restriction $s\leq 1$ at this point is that we need a diamagnetic inequality.) Another application of this result concerns the recent inequality in \cite{KoVuWe} corresponding to the endpoint $\gamma=0$ of \eqref{eq:lt} with $s=1$, $d=2$ .
\item We show that an analog of inequality \eqref{eq:hltintro} for $s=1/2$, $d=3$ holds in a model for pseudo-relativistic electrons that includes spin. The difficulty here is that the potential energy is non-local. This new estimate simplifies some of the proofs in \cite{FrSiWa} and will be, we believe, a crucial ingredient in the proof of stability of matter in this model.
\end{enumerate}
Here is the precise statement of our result.
\begin{theorem}\label{hlt}
Let $d\geq 1$, $0<s<d/2$ and $\gamma>0$. Then there is a constant $L_{\gamma,d,s}^\mathrm{HLT}$ such that
\begin{equation}\label{eq:hlt}
\tr\left((-\Delta)^s -\mathcal C_{s,d} |x|^{-2s} - V\right)_-^\gamma
\leq L_{\gamma,d,s}^\mathrm{HLT} \int_{\mathbb{R}^d} V(x)_+^{\gamma+d/2s} \,dx \,.
\end{equation}
If $d\geq 2$, $0<s\leq 1$ and $(-\Delta)^s$ is replaced by $(D-A)^{2s}$ for some $A\in L_{2,{\rm loc}}(\mathbb{R}^d,\mathbb{R}^d)$, then \eqref{eq:hlt} remains valid if $L_{\gamma,d,s}^\mathrm{HLT}$ is replaced by $L_{\gamma,d,s}^\mathrm{HLT} \,(e/p)^p \,\Gamma(p+1)$ with $p=\gamma+d/2s$.
\end{theorem}
The crucial ingredient in our proof of \eqref{eq:hlt} is the following lower bound for the quadratic form
$$
h_s[u] := \int_{\mathbb{R}^d} |p|^{2s} |\hat u(p)|^2 \,dp - \mathcal C_{s,d} \int_{\mathbb{R}^d} |x|^{-2s} |u(x)|^2 \,dx
$$
of the operator $ (-\Delta)^s -\mathcal C_{s,d} |x|^{-2s}$.
\begin{theorem}\label{hardyrem}
Let $0<t<s<d/2$. Then there exists a constant $\kappa_{d,s,t}>0$ such that for all $u\in C_0^\infty(\mathbb{R}^d)$ one has
\begin{equation}\label{eq:hardyremscal}
h_s[u]^\theta \|u\|^{2(1-\theta)} \geq \kappa_{d,s,t} \|(-\Delta)^{t/2} u\|^2 \,,
\qquad \theta:=t/s\,.
\end{equation}
\end{theorem}
In the special case $d=3$ and $s=1/2$ this is a recent result by Solovej, S\o rensen and Spitzer \cite[Thm. 11]{SoSoSp}. The results reported here are motivated by their work. Below we shall show that their proof extends to arbitrary $0<s<d/2$.
Our original proof of \eqref{eq:hlt} in \cite{FrLiSe1} for $0<s\leq 1$ relied on the Gagliardo-Nirenberg-type inequality
\begin{equation}\label{eq:hs}
h_s[u]^\theta \|u\|^{2(1-\theta)} \geq \sigma_{d,s,q} \|u\|_q^2 \,,
\qquad \theta:=\frac ds\left(\frac12-\frac1q\right) \,,
\end{equation}
for $2<q<2d/(d-2s)$. This is weaker than \eqref{eq:hardyremscal} in view of the Sobolev inequality \cite[Thms. 4.3 and 8.3]{LiLo}
$$
\|(-\Delta)^{t/2} u\|^2 \geq S_{d,t} \|u\|_q^2 \,,
\qquad q=\frac{2d}{d-2t} \,.
$$
What makes \eqref{eq:hardyremscal} much easier to prove than \eqref{eq:hs} is that it is a \emph{linear} inequality, that is, all norms are taken in $L_2(\mathbb{R}^d)$. Indeed, \eqref{eq:hardyremscal} is easily seen to be equivalent to the operator inequality
\begin{equation}\label{eq:hardyrem}
(-\Delta)^{s} - \mathcal C_{s,d} |x|^{-2s} \geq K_{d,s,t} l^{-2(s-t)}(-\Delta)^t - l^{-2s} \,,
\qquad l>0\,,
\end{equation}
where $K_{d,s,t}= \left( s^{-s} t^t (s-t)^{s-t} \right)^{1/s} \kappa_{d,s,t}$, and this is the way we shall prove it in the next section.
\textbf{Acknowledgements.} The author would like to thank E. Lieb and R. Seiringer for very fruitful discussions, as well as J. P. Solovej, T. {\O}stergaard S{\o}rensen and W.~Spitzer for useful correspondence. Support through DAAD grant D/06/49117 and U.S. National Science Foundation grant PHY 06 52854 is gratefully acknowledged.
\section{Proof of Theorem \ref{hardyrem}}
Throughout this section we assume that $0<s<d/2$. Recall that for $0<\alpha<d$ the Fourier transform of $|x|^{-d+\alpha}$ is given by
\begin{equation}\label{ft1x}
b_{d-\alpha} \left(|\cdot|^{-d+\alpha}\right)^\wedge (p)
= b_{\alpha} |p|^{-\alpha},
\qquad b_\alpha := 2^{\alpha/2} \Gamma(\alpha/2)\,;
\end{equation}
see, e.g., \cite[Thm.~5.9]{LiLo}, where another convention for the Fourier transform is used, however. This implies that for $2s<\alpha<d$ one has
\begin{equation}\label{eq:convol}
\int_{\mathbb{R}^d} \frac1{|p-q|^{d-2s} |q|^{\alpha}} \,dq = \Psi_{s,d}(\alpha) \frac1{|p|^{\alpha-2s}} \,,
\end{equation}
where
\begin{equation*}\label{eq:psi}
\Psi_{s,d}(\alpha)
:= (2\pi)^{d/2} \frac{b_{2s} \, b_{\alpha-2s} \, b_{d-\alpha}}{b_{d-2s} \, b_{d-\alpha+2s} \, b_{\alpha}}
= \frac{\pi^{d/2} \,\Gamma(s)}{\Gamma((d-2s)/2)} \ \frac{\Gamma((\alpha-2s)/2)\,\Gamma((d-\alpha)/2)}{\Gamma((d-\alpha+2s)/2)\,\Gamma(\alpha/2)}
\,.
\end{equation*}
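For the reader's convenience we indicate the standard computation behind \eqref{eq:convol}. Writing $|p-q|^{-(d-2s)}=\tfrac{b_{2s}}{b_{d-2s}}\left(|\cdot|^{-2s}\right)^\wedge(p-q)$ and $|q|^{-\alpha}=\tfrac{b_{d-\alpha}}{b_{\alpha}}\left(|\cdot|^{-(d-\alpha)}\right)^\wedge(q)$ by \eqref{ft1x}, the convolution theorem (in the convention where $\hat f * \hat g=(2\pi)^{d/2}\,\widehat{fg}$) gives
\begin{equation*}
\int_{\mathbb{R}^d} \frac{dq}{|p-q|^{d-2s}\,|q|^{\alpha}}
= (2\pi)^{d/2}\,\frac{b_{2s}\,b_{d-\alpha}}{b_{d-2s}\,b_{\alpha}}
\left(|\cdot|^{-(d-\alpha+2s)}\right)^\wedge(p)\,,
\end{equation*}
and one further application of \eqref{ft1x} (with $\alpha$ replaced by $\alpha-2s$) yields \eqref{eq:convol} with the constant displayed above. The assumption $2s<\alpha<d$ guarantees that all exponents of $|x|$ lie strictly between $-d$ and $0$, so that \eqref{ft1x} applies to each factor.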
We shall need the following facts about $\Psi_{s,d}(\alpha)$ as a function of $\alpha\in(2s,d)$.
\begin{lemma}\label{incr}
$\Psi_{s,d}$ is an even function with respect to $\alpha=(d+2s)/2$ and one has
\begin{equation}\label{eq:psimin}
\Psi_{s,d}((d+2s)/2)= (2\pi)^{d/2} \frac{b_{2s}}{b_{d-2s}}\mathcal C_{s,d}^{-1}
\end{equation}
with $\mathcal C_{s,d}$ from \eqref{eq:csd}. Moreover, $\Psi_{s,d}$ is strictly decreasing on $(2s, (d+2s)/2)$ and strictly increasing on $((d+2s)/2,d)$.
\end{lemma}
This is Lemma 3.2 from \cite{FrLiSe1} in disguise.
\begin{proof}[Proof of Lemma \ref{incr}]
$\Psi_{s,d}(\alpha)$ is obviously invariant under replacing $\alpha$ by $d+2s-\alpha$, and its value at $\alpha=(d+2s)/2$ follows immediately from definition \eqref{eq:csd}. To prove the monotonicity we write
\begin{equation*}
\Psi_{s,d}(\alpha) = \frac{\pi^{d/2}\ \Gamma(s)}{\Gamma((d-2s)/2)} \ \frac{f(t)}{f(s+t)}\,,
\qquad t=(\alpha-2s)/2 \,,
\end{equation*}
where $T:=(d-2s)/2$ and $f(t):= \Gamma(t)/\Gamma(T+s-t)$. We need to show that $\log(f(t)/f(s+t))$ is strictly decreasing in $t\in (0,T/2)$. Noting that
$$
\frac{f'(t)}{f(t)} = \psi(t) + \psi(T+s-t)
$$
with $\psi:=\Gamma'/\Gamma$ the Digamma function, we have
$$
\frac{d}{dt} \log \frac{f(t)}{f(t+s)} = \psi(t) + \psi(T+s-t) - \psi(t+s) - \psi(T-t)
=-\int_{t}^{t+s} h(\tau) \,d\tau
$$
with $h(\tau):= \psi'(\tau)-\psi'(T+s-\tau)$ for $0<\tau<T+s$. Since $\psi'$ is strictly decreasing \cite[(6.4.1)]{AbSt}, $h$ is an odd function with respect to $\tau=(T+s)/2$ which is strictly positive for $\tau<(T+s)/2$. Since the midpoint of the interval $(t,t+s)$ lies to the left of $(T+s)/2$, the integral of $h$ over this interval is strictly positive, which proves the claim.
\end{proof}
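As a small numerical illustration (not needed for the arguments in this paper) one may evaluate $\Psi_{s,d}$ directly and observe the symmetry and monotonicity stated in Lemma \ref{incr}; the following short Python script, included purely for illustration, does this for $d=3$, $s=1/2$, where $(d+2s)/2=2$.
\begin{verbatim}
from math import pi
from scipy.special import gamma

def Psi(alpha, s=0.5, d=3):
    # Psi_{s,d}(alpha) as defined above, valid for 2s < alpha < d
    pref = pi**(d/2) * gamma(s) / gamma((d - 2*s)/2)
    return pref * gamma((alpha - 2*s)/2) * gamma((d - alpha)/2) / (
           gamma((d - alpha + 2*s)/2) * gamma(alpha/2))

print(Psi(1.3), Psi(2.7))              # equal: reflection alpha -> d+2s-alpha
print(Psi(1.2) > Psi(1.6) > Psi(2.0))  # True: decreasing on (2s,(d+2s)/2)
\end{verbatim}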
Now we prove \eqref{eq:hardyrem}, following the strategy of Solovej, S\o rensen and Spitzer \cite{SoSoSp} in the special case $d=3$, $s=1/2$; see also \cite[Thm. 11]{LiYa} for a related argument.
\begin{proof}[Proof of Theorem \ref{hardyrem}]
For technical reasons we prove the theorem only for $2s/3\leq t<s$. It is easy to see that this implies the result for all $0<t<s$.
By a well-known argument (going back at least to Abel and, in the present context, to \cite{KoPeSe}) based on the Cauchy-Schwarz inequality one has for any positive measurable function $h$ on $\mathbb{R}^d$
\begin{equation*}
(2\pi)^{d/2} \frac{b_{2s}}{b_{d-2s}} \int_{\mathbb{R}^d} \frac{|u|^2}{|x|^{2s}} \,dx
= \iint_{\mathbb{R}^d\times\mathbb{R}^d} \frac{\overline{\hat u(p)} \hat u(q)}{|p-q|^{d-2s}} \,dp\,dq
\leq \int_{\mathbb{R}^d} t_h(p) |\hat u(p)|^2 \,dp \,,
\end{equation*}
where
$$
t_h(p):= h(p)^{-1} \int_{\mathbb{R}^d} \frac{h(q)}{|p-q|^{d-2s}} \,dq \,.
$$
Below we shall choose $h$ (depending on $l>0$) in such a way that for some positive constants $A$ and $B$ (depending on $d$, $s$ and $t$, but not on $l$) one has
\begin{equation}
\label{eq:tbound}
t_h(p) \leq \Psi_{s,d}((d+2s)/2) |p|^{2s} - A l^{-2(s-t)} |p|^{2t} + B l^{-2s} \,.
\end{equation}
(By scaling it would be enough to prove this for $l=1$, but we prefer to keep $l$ free.) Because of \eqref{eq:psimin} this estimate proves \eqref{eq:hardyrem}.
We show that \eqref{eq:tbound} holds with $h(p)=(|p|^{(d+2s)/2} + l^{\beta-(d+2s)/2} |p|^\beta)^{-1}$ where $\beta$ is a parameter depending on $t$ that will be fixed later. (Indeed, we shall choose $\beta=2t+(d-2s)/2$.) Since the derivatives of the function $r\mapsto r^{-1}$ have alternating signs one has $(a+b)^{-1} \leq a^{-1} - a^{-2} b + a^{-3} b^2$ and therefore
$$
\int_{\mathbb{R}^d} \frac{h(q)}{|p-q|^{d-2s}} \,dq
\leq \int_{\mathbb{R}^d} \frac{1}{|p-q|^{d-2s}}\left(\frac1{|q|^{(d+2s)/2}} - \frac{l^{\beta-(d+2s)/2}}{|q|^{d+2s-\beta}} + \frac{l^{2\beta-d-2s}}{|q|^{3(d+2s)/2-2\beta}} \right) \,dq \,.
$$
If we assume that $(d+6s)/4<\beta<(3d+2s)/4$ then the right side is finite and, using notation \eqref{eq:convol} with $\Psi$ instead of $\Psi_{s,d}$, equal to
$$
\Psi\left(\frac{d+2s}2\right) \frac1{|p|^{(d-2s)/2}} - \Psi(d+2s-\beta) \frac{l^{\beta-(d+2s)/2}}{|p|^{d-\beta}}
+ \Psi\left(\frac{3(d+2s)}2-2\beta\right) \frac{l^{2\beta-d-2s}}{|p|^{3d/2-2\beta+s}} \,.
$$
Thus
\begin{align*}
t_h(p) \leq &
\Psi\left(\frac{d+2s}2\right) |p|^{2s}
- \left(\Psi(d+2s-\beta) - \Psi\left(\frac{d+2s}2\right) \right) l^{\beta-(d+2s)/2} |p|^{\beta-(d-2s)/2} \\
& + \left(\Psi\left(\frac{3(d+2s)}2-2\beta\right) - \Psi(d+2s-\beta) \right) l^{2\beta-d-2s} |p|^{2\beta-d} \\
& + \Psi\left(\frac{3(d+2s)}2-2\beta\right) l^{3\beta-3d/2-3s} |p|^{3\beta-3d/2-s} \,.
\end{align*}
If we assume that $\beta\leq (d+2s)/2$, then the exponents of $|p|$ on the right side satisfy $2s\geq\beta-(d-2s)/2 \geq 2\beta-d \geq 3\beta-3d/2-s$, and if $\beta\geq(3d+2s)/6$ then the last exponent is non-negative. Now we choose $\beta=2t+(d-2s)/2$, so that the exponent of the second term is $2t$ and the condition $\beta\geq(3d+2s)/6$ is satisfied, since we are assuming that $t\geq 2s/3$. Moreover, according to Lemma \ref{incr}, the coefficient of the second term is negative. Finally, we use that there are constants $C_1$ and $C_2$ such that for any $\epsilon>0$ one has
$$
|p|^{2\beta-d} \leq \epsilon |p|^{\beta-(d-2s)/2} + C_1 \epsilon^{-\frac{2(2\beta+d)}{d+2s-2\beta}} \,,
\quad
|p|^{3\beta-3d/2-s} \leq \epsilon |p|^{\beta-(d-2s)/2} + C_2 \epsilon^{-\frac{6\beta-3d-2s}{2(d+2s-2\beta)}} \,.
$$
This concludes the proof of \eqref{eq:tbound}.
\end{proof}
\section{Proof of Theorem \ref{hlt}}
We fix $0<s<d/2$ and $\gamma>0$ and write
$$
\tr\left((-\Delta)^s -\mathcal C_{s,d}|x|^{-2s} -V\right)_-^\gamma
=\gamma \int_0^\infty N(-\tau, (-\Delta)^s -\mathcal C_{s,d}|x|^{-2s} -V) \, \tau^{\gamma-1}\,d\tau \,,
$$
where $N(-\tau,H)$ denotes the number of eigenvalues less than $-\tau$, counting multiplicities, of a self-adjoint operator $H$. We shall use \eqref{eq:hardyrem} with $l^{-2s}=\sigma\tau$ and some $0<t<s$ and $0<\sigma<1$ to be specified below. Abbreviating $K_t=K_{d,s,t}$ we find that
\begin{align*}
N(-\tau, (-\Delta)^s -\mathcal C_{s,d}|x|^{-2s} -V) & \leq N(0, K_t (\sigma\tau)^{(s-t)/s}(-\Delta)^t -V + (1-\sigma)\tau) \\
& = N\left(0, (-\Delta)^t - K_t^{-1} (\sigma\tau)^{-(s-t)/s} \left(V - \left(1-\sigma\right)\tau\right) \right) \,.
\end{align*}
Now we use \eqref{eq:lt} with $\gamma=0$ and $s$ replaced by $t$ (see \cite{Da} for $t\leq 1$ and \cite{Cw} for $t<d/2$). Abbreviating $L_t=L_{0,d,t}$ we have
$$
N(-\tau, (-\Delta)^s -\mathcal C_{s,d}|x|^{-2s} -V)
\leq L_{t} K_t^{-d/2t} (\sigma\tau)^{-d(s-t)/2st} \int_{\mathbb{R}^d} \left(V - \left(1-\sigma\right)\tau\right)_+^{d/2t} \,dx
$$
and
\begin{align*}
& \tr\left( (-\Delta)^s -\mathcal C_{s,d}|x|^{-2s} -V\right)_-^\gamma \\
& \quad \leq \gamma L_{t} K_t^{-d/2t} \sigma^{-d(s-t)/2st} \int_{\mathbb{R}^d} dx \int_0^\infty d\tau \tau^{\gamma-1-d(s-t)/2st} \left(V - \left(1-\sigma\right)\tau\right)_+^{d/2t} \\
& \quad = \gamma L_{t} K_t^{-d/2t} \sigma^{-\frac{d(s-t)}{2st}} (1-\sigma)^{-\gamma+\frac{d(s-t)}{2st}} \ \frac{\Gamma(\gamma-\tfrac{d(s-t)}{2st}) \Gamma(\tfrac d{2t}+1)}{\Gamma(\gamma+\tfrac d{2s}+1)} \
\int_{\mathbb{R}^d} V_+^{\gamma+d/2s} \,dx \,.
\end{align*}
Here we assumed that $t>ds/(2\gamma s+d)$ so that the $\tau$ integral is finite. Finally, we optimize over $0<\sigma<1$ by choosing $\sigma=d(s-t)/2\gamma st$ and over $ds/(2\gamma s+d)<t<s$ to complete the proof of \eqref{eq:hlt}.
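For the reader's convenience we spell out the $\tau$-integration used above, which is a standard Beta-function identity: with $a=\gamma-\frac{d(s-t)}{2st}$ and $b=\frac d{2t}$, the substitution $\tau=V(x)u/(1-\sigma)$ gives (for $V(x)>0$; both sides vanish otherwise)
\begin{equation*}
\int_0^\infty \tau^{a-1}\left(V-(1-\sigma)\tau\right)_+^{b}\,d\tau
=(1-\sigma)^{-a}\,V_+^{a+b}\int_0^1 u^{a-1}(1-u)^{b}\,du
=(1-\sigma)^{-a}\,V_+^{\gamma+d/2s}\ \frac{\Gamma(a)\,\Gamma(b+1)}{\Gamma(a+b+1)}\,,
\end{equation*}
since $a+b=\gamma+d/2s$; this is the expression appearing in the last line of the display above.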
The statement about the inclusion of $A$ follows from Example \ref{diamagex} and Theorem \ref{diamagneg} in the following section.
\section{Magnetic Lieb-Thirring inequalities}
In this section we discuss Lieb-Thirring inequalities for magnetic Schr\"odinger operators, that is, \eqref{eq:lt} (and its generalizations) with $(-\Delta)^s$ replaced by $(D-A)^{2s}$ for some vector field $A\in L_{2,{\rm loc}}(\mathbb{R}^d,\mathbb{R}^d)$.
It is a remarkable fact that all presently known proofs of Lieb-Thirring inequalities, which allow for the inclusion of a magnetic field, yield the same constants in the magnetic case as in the non-magnetic case. It is not known whether the same is true for the (unknown) sharp constants. Note that the diamagnetic inequality implies that the lowest eigenvalue does not decrease when a magnetic field is added, but there is no such result for, e.g., the number or the sum of eigenvalues; see \cite{AvHeSi, Li2}. Rozenblum \cite{Ro} discovered, however, that any power-like bound on the number of eigenvalues in the non-magnetic case implies a similar bound in the magnetic case, with possibly a worse constant. Here we show the same phenomenon for \emph{moments} of eigenvalues.
We work in the following abstract setting. Let $(X,\mu)$ be a sigma-finite measure space and let $H$ and $M$ be non-negative operators in $L_2(X,\mu)$ such that for any $u\in L_2(X,\mu)$ and any $t>0$
\begin{equation}\label{eq:domination}
|\exp(-tM) u(x)| \leq (\exp(-tH)|u|)(x)
\qquad \mu-\text{a.e.}\ x\in X \,.
\end{equation}
Note that this implies that $\exp(-tH)$ is positivity preserving. We think of $H$ as a non-magnetic operator, $M$ a magnetic operator and \eqref{eq:domination} as a diamagnetic inequality. It might be useful to keep the following example in mind.
\begin{example}\label{diamagex}
Let $X=\mathbb{R}^d$ with Lebesgue measure, $H=(-\Delta)^s$, and $M=(D-A)^{2s}$ for some $0<s\leq 1$ and $A\in L_{2,{\rm loc}}(\mathbb{R}^d)$. The diamagnetic inequality \eqref{eq:domination} in the case $s=1$ was shown in \cite{Si1}, and in the case $0<s<1$ it follows from the $s=1$ case since the function $\lambda\mapsto\exp(-\lambda^s)$ is completely monotone and hence by Bernstein's theorem \cite{Do} the Laplace transform of a positive measure. More generally, \eqref{eq:domination} holds for $H=(-\Delta)^s+W$ and $M=(D-A)^{2s}+W$ with $s$ and $A$ as before and a, say, bounded function $W$. This can be seen using Trotter's product formula. By an approximation argument the inequality holds also for $W(x)=-\mathcal C_{s,d}|x|^{-2s}$.
\end{example}
The main result in this section is
\begin{theorem}\label{diamagneg}
Let $H$ and $M$ be as above and assume that there exist some constants $L>0$, $\gamma\geq 0$, $p>0$ and a non-negative function $w$ on $X$ such that for all $V\in L_p(X,w\,d\mu)$ one has
\begin{equation}\label{eq:diamagnegass}
\tr(H-V)_-^{\gamma} \leq L \int_X V_+^{p} w \,d\mu \,.
\end{equation}
Then one also has
\begin{equation}\label{eq:diamagneg}
\tr(M-V)_-^\gamma \leq L \left(\frac ep\right)^p \Gamma(p+1) \int_X V_+^p w \,d\mu \,.
\end{equation}
\end{theorem}
We do not know whether the factor $(e/p)^p \Gamma(p+1)$ in \eqref{eq:diamagneg} can be omitted. Results from \cite{FrLoWe} about the eigenvalues of the Landau Hamiltonian in a domain (but without potential) seem to indicate that a factor $>1$ is necessary. Our proof of Theorem \ref{diamagneg} uses some ideas from \cite{Ro} where the case $\gamma=0$ was treated; see also \cite{FrLiSe2} for a result about operators with discrete spectrum.
\begin{remark}
With the same proof one can deduce estimates on $\tr f(M)$ from estimates on $\tr f(H)$ for more general functions $f$. For example, let $d=2$ and $f(t):=|\ln |t||^{-1}$ if $- e^{-1}< t<0$, $f(t):=1$ if $t\leq -e^{-1}$, and $f(t):=0$ if $t\geq 0$. Then there exists a constant $L$ and for any $q>1$ a constant $L_q$ such that for all $l>0$ and $A\in L_{2,{\rm loc}}(\mathbb{R}^2,\mathbb{R}^2)$
$$
\tr f\left(l^2((D-A)^2-V)\right)
\leq L \int_{|x|<l} \! V(x)_+ \left|\log\frac{|x|}l\right| \,dx + L_q \int_0^\infty \!\!\left(\int_{\mathbb{S}} V(r\omega)_+^q \,d\omega \right)^{1/q} \!r\,dr \,.
$$
Indeed, this follows by Lemma \ref{average} via integration from the $A\equiv 0$ result of \cite{KoVuWe}.
\end{remark}
The key ingredient in the proof of Theorem \ref{diamagneg} is a bound on the negative eigenvalues of $M-V$ by those of $H-\alpha V$, averaged over all coupling constants $\alpha$. As before, we denote by $N(-\tau,A)$ the number of eigenvalues less than $-\tau$, counting multiplicities, of a self-adjoint operator $A$.
\begin{lemma}\label{average}
Let $H$ and $M$ be non-negative self-adjoint operators satisfying \eqref{eq:domination} and let $V\geq 0$. Then for any $\tau\geq 0$ and $t>0$ one has
\begin{equation}\label{eq:diamagnegproof}
N(-\tau,M- V) \leq t e^t \int_0^\infty N(-\tau,H-\alpha V) e^{-\alpha t}\,d\alpha \,.
\end{equation}
\end{lemma}
\begin{proof}
Since \eqref{eq:domination} remains valid with $H+\tau$ and $M+\tau$ in place of $H$ and $M$ we need only consider $\tau=0$. Moreover, by a density argument we may assume that $V>0$ a.e. We define $h:=V^{-1/2} H V^{-1/2}$ and $m:=V^{-1/2} M V^{-1/2}$ via quadratic forms and claim that \eqref{eq:domination} holds with $h$ and $m$ in place of $H$ and $M$. Since this fact is proved in \cite[Thm. 3]{Ro} we only sketch the main idea. Indeed, for any $\sigma>0$
\begin{equation*}
(m+\sigma)^{-1} = V^{1/2} (M+\sigma V)^{-1} V^{1/2}
= \int_0^\infty V^{1/2}\exp(-s(M+\sigma V)) V^{1/2} \,ds\,,
\end{equation*}
and by \eqref{eq:domination} and Trotter's product formula $|\exp(-s(M+\sigma V)) V^{1/2} u| \leq \exp(-s(H+\sigma V)) V^{1/2} |u|$ a.e. Hence $|(m+\sigma)^{-1}u| \leq (h+\sigma)^{-1} |u|$ a.e. Iterating this inequality and recalling that $(1+tm/n)^{-n} \to \exp(-tm)$ strongly as $n\to\infty$, we obtain \eqref{eq:domination} for $h$ and $m$.
By \cite[Thm. 4.1]{Si} this analog of \eqref{eq:domination} implies that
$$
\tr\exp(-tm) = \|\exp(-tm/2)\|_2^2\leq \|\exp(-th/2)\|_2^2 = \tr\exp(-th)
$$
with $\|\cdot\|_2$ the Hilbert-Schmidt norm. Hence, by the Birman-Schwinger principle and since each eigenvalue of $m\geq 0$ below $1$ contributes at least $e^{-t}$ to $\tr\exp(-tm)$,
\begin{equation*}
N(M-V) = N(1,m)\leq e^t \tr\exp(-tm) \leq e^t \tr\exp(-th) \,.
\end{equation*}
Using the Birman-Schwinger principle once more, we find
\begin{equation*}
\tr\exp(-th) = t \int_0^\infty N(\alpha,h) e^{-t\alpha}\,d\alpha
= t \int_0^\infty N(H-\alpha V) e^{-t\alpha}\,d\alpha \,,
\end{equation*}
proving \eqref{eq:diamagnegproof}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{diamagneg}]
By the variational principle we may assume that $V\geq 0$. By Lemma \ref{average} one has for any $t>0$
\begin{align*}
\tr(M-V)_-^\gamma
& = \gamma \int_0^\infty N(-\tau, M-V) \tau^{\gamma-1} \,d\tau \\
& \leq \gamma t e^t \int_0^\infty \int_0^\infty N(-\tau,H-\alpha V) \tau^{\gamma-1} \,d\tau e^{-\alpha t}\,d\alpha \\
& = t e^t \int_0^\infty \tr(H-\alpha V)_-^\gamma e^{-\alpha t}\,d\alpha \,,
\end{align*}
and by assumption \eqref{eq:diamagnegass} the right hand side can be bounded from above by
\begin{equation*}
L t e^t \left(\int_0^\infty \alpha^p e^{-\alpha t}\,d\alpha \right) \int_X V^p w \,d\mu
= L t^{-p} e^t \Gamma(p+1) \int_X V^p w \,d\mu \,.
\end{equation*}
Now the assertion follows by choosing $t=p$, which minimizes $t^{-p}e^t$ over $t>0$.
\end{proof}
\section{A pseudo-relativistic model including spin}
Throughout this section we assume that $d=3$. The helicity operator $h$ on $L_2(\mathbb{R}^3,\mathbb{C}^2)$ is defined as the Fourier multiplier corresponding to the matrix-valued function $p\mapsto \mathbf\sigma\cdot p/|p|$, where $\mathbf\sigma=(\sigma_1,\sigma_2,\sigma_3)$ denotes the triple of Pauli matrices. The properties of these matrices imply that $h$ is a unitary and self-adjoint involution. The analog of the Hardy (or Kato) inequality \eqref{eq:hardy} is
\begin{equation}\label{eq:eps}
\int_{\mathbb{R}^3} |\xi| |\hat u(\xi)|^2 \,d\xi
\geq \tilde{\mathcal C} \int_{\mathbb{R}^3} \frac{|u(x)|^2 + |(hu)(x)|^2}{2 \, |x|} \,dx\,,
\qquad u\in C_0^\infty(\mathbb{R}^3,\mathbb{C}^2)\,,
\end{equation}
with the sharp constant
$$
\tilde{\mathcal C} = \frac{2}{2/\pi+\pi/2} \,;
$$
see \cite{EvPeSi}. Note that this constant is strictly larger than
$$
\mathcal C:= \mathcal C_{1/2,3}=2/\pi \,,
$$
which is the constant one would get if $hu$ were replaced by $u$ on the right side of \eqref{eq:eps}.
For a function $V$ on $\mathbb{R}^3$ taking values in the Hermitian $4\times4$ matrices we introduce the non-local potential $$
\Phi(V) := \frac12 \begin{pmatrix}1_{L_2(\mathbb{R}^3,\mathbb{C}^2)} \\ h\end{pmatrix}^* V \begin{pmatrix}1_{L_2(\mathbb{R}^3,\mathbb{C}^2)} \\ h\end{pmatrix} \,,
$$
where $\begin{pmatrix}1_{L_2(\mathbb{R}^3,\mathbb{C}^2)} \\ h\end{pmatrix}$ is considered as an operator from $L_2(\mathbb{R}^3,\mathbb{C}^2)$ to $L_2(\mathbb{R}^3,\mathbb{C}^4)$. The operator $\sqrt{-\Delta} - \Phi(V)$ in $L_2(\mathbb{R}^3,\mathbb{C}^2)$ has been suggested by Brown and Ravenhall as the Hamiltonian of a massless, relativistic spin-1/2 particle in a potential $-V$. It results from projecting onto the positive spectral subspace of the Dirac operator. One of the advantages of this operator over the simpler $\sqrt{-\Delta} - V$ is that it is well-defined for nuclear charges $\alpha Z\leq \tilde{\mathcal C}$, which includes all known elements. We refer to \cite{LiSe} for more background about this model. Despite the efforts in \cite{LiSiSo,BaEv,HoSi} the problem of stability of matter for the corresponding many-particle system is not yet completely understood and the following result, we believe, might be useful in this respect.
\begin{theorem}\label{hltbr}
Let $d=3$ and $\gamma>0$. Then there is a constant $\tilde L_{\gamma}^\mathrm{HLT}$ such that
\begin{equation}\label{eq:hltbr}
\tr\left(\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})-\Phi(V) \right)_-^\gamma
\leq \tilde L_{\gamma}^\mathrm{HLT} \int_{\mathbb{R}^3} \tr_{\mathbb{C}^4} V(x)_+^{\gamma+3} \,dx \,.
\end{equation}
\end{theorem}
For the proof of this theorem we need some facts about the partial wave decomposition of the operator $\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})$ from \cite{EvPeSi}. This operator commutes with the total angular momentum operator $\mathbf J=\mathbf L+\frac12{\bf\sigma}$, where $\mathbf L=-i\nabla\times x$, as well as with the operator $\mathbf L^2$. The subspace corresponding to total angular momentum $j=1/2$ is of the form $\mathfrak H_{1/2,0} \oplus \mathfrak H_{1/2,1}$, where the subspaces $\mathfrak H_{1/2,l}$ correspond to the eigenvalues $l(l+1)$ of $\mathbf L^2$.
The next result, essentially contained in \cite{FrSiWa}, says that on the space $\mathfrak H_{1/2,0} \oplus \mathfrak H_{1/2,1}$ the operator $\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})$ is controlled by the operator $\sqrt{-\Delta} - \mathcal C |x|^{-1}$ with the \emph{smaller} coupling constant $\mathcal C$. (Strictly speaking, the latter operator should be tensored with $1_{\mathbb{C}^2}$, but we suppress this if there is no danger of confusion.)
\begin{lemma}\label{comp}
If $0\not\equiv\psi\in\mathfrak H_{1/2,0}\cap C_0^\infty(\mathbb{R}^3,\mathbb{C}^2)$, then
\begin{equation*}\label{eq:comp1}
\frac2{1+(2/\pi)^2} \geq
\frac{\left(\psi, \left(\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})\right)\psi \right)}{\left(\psi, \left(\sqrt{-\Delta} - \mathcal C |x|^{-1}\right)\psi \right)}
\geq \frac1{1+(2/\pi)^2} \,.
\end{equation*}
If $0\not\equiv\psi\in\mathfrak H_{1/2,1}\cap C_0^\infty(\mathbb{R}^3,\mathbb{C}^2)$, this bound is true provided $\left(\psi, \left(\sqrt{-\Delta} - \mathcal C |x|^{-1}\right)\psi \right)$ is replaced by $\left(h\psi, \left(\sqrt{-\Delta} - \mathcal C |x|^{-1}\right)h\psi \right)$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{comp}]
We prove the assertion only for $l=1$ since the lower bound for $l=0$ is contained in \cite[Lemma 2.7]{FrSiWa} and the upper bound is proved as below. By orthogonality we may assume that the Fourier transform of $\psi$ is of the form $\hat\psi(\xi) = |\xi|^{-2} g(|\xi|) \Omega_{1/2,1,m}(\frac{\xi}{|\xi|})$ where $m\in\{-1/2,1/2\}$ and $\Omega_{1/2,1,m}$ are explicit functions in $L_2(\mathbb{S}^2,\mathbb{C}^2)$. By the properties of these functions one has $\widehat{h\psi}(\xi) = - |\xi|^{-2} g(|\xi|) \Omega_{1/2,0,m}(\frac{\xi}{|\xi|})$. According to the ground state representation \cite[Lem\-ma 2.6]{FrSiWa} one has
\begin{align*}
\left(\psi, \left(\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})\right)\psi \right)
& = \frac{\tilde{\mathcal C}}{2\pi} \int_0^\infty \int_0^\infty |g(p)-g(q)|^2 \tilde k(\tfrac12(\tfrac pq +\tfrac qp)) \frac{dp}p \, \frac{dq}{q} \,, \\
\left(h\psi, \left(\sqrt{-\Delta} - \mathcal C |x|^{-1}\right)h\psi \right)
& = \frac{\mathcal C}{2\pi}
\int_0^\infty \int_0^\infty |g(p)-g(q)|^2 k(\tfrac12(\tfrac pq +\tfrac qp)) \frac{dp}p \, \frac{dq}{q} \,,
\end{align*}
where $\tilde k(t)= \frac12 (Q_0(t)+Q_1(t))$, $k(t)=Q_0(t)$, and $Q_l$ are the Legendre functions of the second kind \cite[8.4]{AbSt}. The assertion now follows from the fact that $Q_0\geq Q_1\geq 0$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{hltbr}]
We first claim that for any $0<t<1/2$ there is a $\tilde K_t>0$ such that
\begin{equation}\label{eq:hardyrembr}
\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})
\geq \tilde K_t l^{-1+2t} (-\Delta)^t - l^{-1} \,,
\quad l>0\,.
\end{equation}
Indeed, it follows from Lemma \ref{comp} and \eqref{eq:hardyrem} that on $\mathfrak H_{1/2,0}\oplus\mathfrak H_{1/2,1}$ one has for any $0<t<1/2$
$$
\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})
\geq \left(1+(2/\pi)^2\right)^{-1} \left( K_t l^{-1+2t} (-\Delta)^t - l^{-1} \right) \,,
\quad l>0 \,.
$$
On the other hand, the arguments of \cite{EvPeSi} show that there exists a constant $\tilde{\mathcal C}'>\tilde{\mathcal C}$ such that $\sqrt{-\Delta} \geq \tilde{\mathcal C}'\Phi(|x|^{-1})$ on
$\left(\mathfrak H_{1/2,0}\oplus\mathfrak H_{1/2,1}\right)^\bot$. Hence on that space
$$
\sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1})
\geq \frac{\tilde{\mathcal C}'-\tilde{\mathcal C}}{\tilde{\mathcal C}'} \sqrt{-\Delta}
\geq \frac{\tilde{\mathcal C}'-\tilde{\mathcal C}}{\tilde{\mathcal C}'}
\left( \frac 1{2t} l^{-1+2t} (-\Delta)^t - \frac{1-2t}{2t} l^{-1} \right) \,,
\quad l>0 \,.
$$
This proves \eqref{eq:hardyrembr}.
Given \eqref{eq:hardyrembr}, the proof of \eqref{eq:hltbr} is similar to that of \eqref{eq:hlt}. We may assume that $V(x)=v(x) I_{\mathbb{C}^4}$ for a non-negative, \emph{scalar} function $v$ (otherwise, replace $V(x)$ by $v(x) I_{\mathbb{C}^4}$ where $v(x)$ is the operator norm of the $4\times 4$ matrix $V(x)_+$). For a given $l>0$ and $0<t<1/2$ we introduce the operator $H:= \tilde K_t l^{-1+2t} (-\Delta)^t - v -l^{-1}$ in $L_2(\mathbb{R}^3,\mathbb{C})$. Then according to \eqref{eq:hardyrembr} one has
\begin{equation*}\label{eq:scalar}
N(-\tau, \sqrt{-\Delta} - \tilde{\mathcal C} \Phi(|x|^{-1}) -\Phi(V))
\leq N(-\tau, \tfrac12 (H\otimes 1_{\mathbb{C}^2} +h (H\otimes 1_{\mathbb{C}^2}) h))
\leq 4 N(-\tau, H) \,.
\end{equation*}
In the last inequality we used that $N(-\tau, \frac12(A+B))\leq N(-\tau,A) + N(-\tau,B)$ for any self-adjoint, lower semi-bounded operators $A$ and $B$, which follows from the variational principle. Now one can proceed in the same way as in the proof of \eqref{eq:hlt}.
\end{proof}
\bibliographystyle{amsalpha}
|
1,116,691,498,552 | arxiv | \section{Introduction}
Phenomenology at the LHC often involves high multiplicity final
states. For example, backgrounds to Higgs searches involve processes
such as $PP\rightarrow W^+W^- + 2~\mbox{jets}$ and $PP\rightarrow
t\bar{t}\ +\ b\bar{b}$. Both these examples involve $2\rightarrow 4$
scatterings. At leading order (LO) such high multiplicity final state
amplitudes can be evaluated using either numerical recursive
techniques~\cite{Berends:1987me,Mangano:2002ea,Draggiotis:2002hm} or
other numerical and/or algebraic
techniques~\cite{Ishikawa:1993qr,Stelzer:1994ta,Krauss:2001iv,Maltoni:2002qb,Boos:2004kh}.
However, ${\cal O}\left(\alpha_S\right)$, next-to-leading order (NLO)
corrections to the scattering amplitudes are desirable. Not only do
NLO corrections give a first reliable prediction of total rates, they
also give a good error estimate on the shapes of distributions. At
NLO the current state of the art for hadron colliders are $2
\rightarrow 3$ processes.
Thus NLO predictions for $PP\rightarrow 3\
\mbox{jets}$~\cite{Kilgore:1996sq,Nagy:2003tz} (based on virtual
corrections of ref.~\cite{Bern:1993mq,Bern:1994fz,Kunszt:1994tq}) and
$PP\rightarrow V\ +\ 2\ \mbox{jets}$~\cite{Campbell:2002tg} (based on
virtual corrections of
ref.~\cite{Glover:1996eh,Campbell:1997tv,Bern:1997sc}) are known, and
codes for $PP\rightarrow t\bar{t}\ +\
\mbox{jet}$~\cite{Brandenburg:2004fw,Uwer:2005tq} and $PP\rightarrow
H\ +\ 2\ \mbox{jets}$ via gluon fusion~\cite{Ellis:2005qe} are under
construction. Other processes such as $PP\rightarrow V_1V_2\ +\
\mbox{jet}$ and $PP\rightarrow V_1V_2V_3$ are now feasible.
By contrast the consideration of $2\rightarrow 4$ processes is still
in its infancy. In electroweak physics the full one-loop electroweak
corrections to $e^+e^-\rightarrow\ 4\ \mbox{fermions}$ were calculated
in Ref.~\cite{Denner:2005fg,Denner:2005nd}. However the calculation of
NLO $2\rightarrow 4$ QCD scattering cross sections is currently
unexplored. Such a calculation involves both the evaluation of the
one-loop six-point virtual corrections and the inclusion of the
$2\rightarrow 5$ scattering bremsstrahlung contributions through Monte
Carlo integration.
In this paper we consider the virtual corrections to six-gluon
scattering which is relevant for a calculation of $PP\rightarrow
4$~jets. By considering the one-loop corrections to $gg\rightarrow
gggg$ we select the most complicated QCD six-point process. If the
amplitude is calculated in terms of Feynman diagrams, the number of
diagrams is very large and the gauge cancellations between these
diagrams is the most severe. These cancellations could be a concern
in a semi-numerical procedure; the six-gluon amplitude therefore
provides a stringent test of the method. In this paper we consider
neither the bremsstrahlung contributions, nor the one-loop processes
involving external quarks, which are needed to obtain results for a
physical cross section.
The technique for the analytic calculation of the one-loop corrections
to multi-gluon amplitudes which is relevant for this paper
is the decomposition of the calculation into simpler
pieces with internal loops of ${\cal{N}}=4$ and ${\cal{N}}=1$
multiplets of super-symmetric Yang-Mills particles and a residue
involving only scalar particles in the
loops~\cite{Bern:1993mq,Bern:1994zx,Bern:1994cg}.
After recent
advances~\cite{Bidder:2004tx,Bern:2005ji,Bern:2005cq,Britto:2005ha},
all supersymmetric contributions have been computed analytically,
however not all of the scalar contributions for six-gluon amplitudes
(or higher) are known yet. We present here numerical results for
six-gluon contributions. For supersymmetric pieces we provide
completely independent cross-checks of analytical results.
Although all one-loop $2\rightarrow 2$ and almost all of the currently
known $2\rightarrow 3$ amplitudes were calculated using analytic
techniques, we believe that semi-numerical or hybrid
numerical/analytic techniques offer promise for more rapid progress.
This technique was demonstrated recently for the case of the one-loop
$\mbox{H}\ +\ 4\ \mbox{partons}$ amplitude~\cite{Ellis:2005qe}.
Many methods have been proposed to calculate NLO amplitudes, both
semi-numerical~\cite{Fleischer:1999hq,Passarino:2001jd,Binoth:2003ak,Duplancic:2003tv,
Nagy:2003qn,Belanger:2003sd,delAguila:2004nf,Denner:2002ii,Giele:2004iy,vanHameren:2005ed,Denner:2005nn}
or numerical~\cite{Soper:2001hu,Anastasiou:2005cb}. Of these methods
only a few have actually been used to evaluate one-loop amplitudes.
Only by using the methods in explicit calculations can one be sure
that all numerical issues have been addressed properly.
In section II we discuss the colour algebra involved with the
evaluation of a six-gluon amplitude. The numerical techniques used in
this paper are discussed in section III, while in section IV the
comparison is made with numerous super-symmetric and the few scalar
results, which exist in the literature. Finally, our conclusions in
section V summarize the paper.
\section{Six-gluon amplitude at one-loop}
At tree-level, amplitudes with $n$ external gluons can be decomposed
into colour-ordered sub-amplitudes, multiplied by a trace of $n$
colour matrices, $T^a$. The traceless, hermitian, $N_c\times N_c$
matrices, $T^a$, are the generators of the $SU(N_c)$ algebra.
Following the usual conventions for this branch of the QCD literature,
they are normalized so that $\mathop{\rm Tr}\nolimits( T^a T^b) = \delta^{ab}$. Summing
over all non-cyclic permutations the full amplitude ${\cal A}^{\rm
\scriptsize \mbox{\rm tree}}_ n$ is reconstructed from the sub-amplitudes
$A_n^{\rm \scriptsize
\mbox{\rm tree}}(\sigma)$~\cite{Berends:1987me,Mangano:1987xk},
\begin{equation}
{\cal A}_{n}^{\rm tree}(\{p_i,\lambda_i,a_i\}) =
g^{n-2} \sum_{\sigma \in S_n/Z_n} \mathop{\rm Tr}\nolimits( T^{a_{\sigma(1)}}
\cdots T^{a_{\sigma(n)}} )
\ A^{\rm tree}_n (p_{\sigma(1)}^{\lambda_{\sigma(1)}},\ldots,
p_{\sigma(n)}^{\lambda_{\sigma(n)}})\ .
\end{equation}
The momentum, helicity ($\pm$), and colour index of the $i$-th
external gluon are denoted by $p_i$, $\lambda_i$, and $a_i$
respectively. $g$ is the coupling constant, and $S_n/Z_n$ is the set
of $(n-1)!$ non-cyclic permutations of $\{1,\ldots, n\}$.
The expansion in colour sub-amplitudes is slightly more complicated at
one-loop level. Let us consider the case of massless internal
particles of spin $J=0,1/2,1$ corresponding to a complex scalar, a
Weyl fermion or a gluon. If all internal particles belong to the
adjoint representation of SU$(N_c)$, the colour decomposition for
one-loop $n$-gluon amplitudes is given by~\cite{Bern:1990ux},
\begin{equation}
{\cal A}_n^{[J]} ( \{p_i,h_i,a_i\} ) = g^n
\sum_{c=1}^{\lfloor{n/2}\rfloor+1}
\sum_{\sigma \in S_n/S_{n;c}}
{\rm Gr}_{n;c}( \sigma ) \,A_{n;c}^{[J]}(\sigma) \,,
\label{Oneloopform}
\end{equation}
where ${\lfloor{x}\rfloor}$ denotes the largest integer less than or
equal to $x$ and $S_{n;c}$ is the subset of $S_n$ which leaves the
double trace structure in ${\rm Gr}_{n;c}(1)$ invariant.
The leading-colour structure is simply given by,
\begin{equation}
{\rm Gr}_{n;1}(1) = N_c\ \mathop{\rm Tr}\nolimits (T^{a_1}\cdots T^{a_n} ) \,.
\end{equation}
The subleading-colour structures are given by products of colour traces
\begin{equation}
{\rm Gr}_{n;c}(1) = \mathop{\rm Tr}\nolimits( T^{a_1}\cdots T^{a_{c-1}} )\,
\mathop{\rm Tr}\nolimits ( T^{a_c}\cdots T^{a_n}) \,.
\end{equation}
The subleading sub-amplitudes $A_{n;c>1}$ are determined by the
leading ones $A^{[1]}_{n;1}$ through the merging
relation~\cite{Kleiss:1988ne,Bern:1990ux,Bern:1994zx,DelDuca:1999rs}
\begin{equation}
A^{[1]}_{n;c>1}(1,2,\ldots,c-1;c,c+1,\ldots,n)\ =\
(-1)^{c-1} \sum_{\sigma \in {\rm OP}\{\alpha\}\{\beta\}}
A^{[1]}_{n;1}(\sigma_1,\ldots,\sigma_n) \, ,
\label{Kleiss-Kuijf}
\end{equation}
where $\alpha_i \in \{\alpha\} \equiv \{c-1,c-2,\ldots,2,1\}$,
$\beta_i \in \{\beta\} \equiv \{c,c+1,\ldots,n-1,n\}$, and
${\rm OP}\{\alpha\}\{\beta\}$ is the set of ordered permutations of
$\{1,2,\ldots,n\}$ but with the last element $n$ fixed. The ordered
permutations are defined as a set of all mergings of $\alpha_i$ with
respect to the $\beta_i$, such that the cyclic ordering of the
$\alpha_i$ within the set $\{\alpha\}$ and of the $\beta_i$ within the
set $\{\beta\}$ is unchanged. In practice, since $n$ is fixed, no
further cycling of the set $\{\beta\}$ is required. Thus a complete
description can be given in terms of the leading colour sub-amplitudes
$A_{n;1}$ alone.
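To illustrate the combinatorics entering \eqref{Kleiss-Kuijf}, the following short Python sketch (added here purely for illustration; it is not part of the numerical code described below, and the function name is ours) enumerates the set ${\rm OP}\{\alpha\}\{\beta\}$ for $n=6$ and $c=3$, i.e.\ $\{\alpha\}=\{2,1\}$ and $\{\beta\}=\{3,4,5,6\}$ with gluon $6$ held fixed in the last position:
\begin{verbatim}
from itertools import combinations

def mergings(alpha, beta):
    # all interleavings of alpha and beta that preserve the relative
    # order within each list (the ordered permutations OP{alpha}{beta})
    n = len(alpha) + len(beta)
    for slots in combinations(range(n), len(alpha)):
        out, ia, ib = [], iter(alpha), iter(beta)
        for i in range(n):
            out.append(next(ia) if i in slots else next(ib))
        yield out

# n=6, c=3: merge alpha={2,1} with {3,4,5} and keep gluon 6 last, so that
# A_{6;3}(1,2;3,...,6) = (-1)^2 * sum of A_{6;1}(sigma,6) over these sigma
for sigma in mergings([2, 1], [3, 4, 5]):
    print(sigma + [6])
\end{verbatim}
The ten orderings printed in this example are exactly the terms appearing on the right-hand side of \eqref{Kleiss-Kuijf} for this case.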
The contribution of a single flavour of Dirac fermion
in the fundamental representation, (relevant for quarks in QCD) is
\begin{equation}
{\cal A}_{n}^{\rm Dirac}(\{p_i,\lambda_i,a_i\}) =
g^n \sum_{\sigma \in S_n/Z_n} \mathop{\rm Tr}\nolimits( T^{a_{\sigma(1)}}
\cdots T^{a_{\sigma(n)}} )
\ A^{[1/2]}_{n;1} (p_{\sigma(1)}^{\lambda_{\sigma(1)}},\ldots,
p_{\sigma(n)}^{\lambda_{\sigma(n)}})\ .
\end{equation}
Simple colour arguments~\cite{Bern:1990ux} allow one to demonstrate
that this colour sub-amplitude is the same as the leading colour
sub-amplitude for a single Weyl fermion in the adjoint representation
defined in Eq.~(2.2).
Since the subleading colour amplitudes are not independent, we shall
henceforth drop them from our discussion. To simplify the notation we
shall also drop the subscripts $n$ and $c$. The amplitude denoted by
$A$ will thus refer to leading colour amplitude with six external
gluons.
\section{Method of calculation}
The method we use is purposely kept as simple as possible. Especially
in numerical methods this is desirable for both keeping track of
numerical accuracy and code transparency.
To generate all the required Feynman diagrams we use
Qgraf~\cite{Nogueira:1991ex}. The Qgraf output is easily manipulated
using Form~\cite{Vermaseren:2000nd} to write the amplitude in the form
\begin{equation}
A(1,2,3,4,5,6)=\sum_{N=2}^6\sum_{M=0}^N
K_{\mu_1\cdots\mu_M}(p_1,\epsilon_1;\ldots;p_6,\epsilon_6)
I_N^{\mu_1\cdots\mu_M}(p_1,\ldots,p_6) \, ,
\end{equation}
where the kinematic tensor $K$ depends on the purely four-dimensional
external vectors and contains all the particle and process
information. The $N$-point tensor integrals of rank $M$ are defined in
$D$ dimensions as
\begin{equation}
I_N^{\mu_1\cdots\mu_M}(p_1,\ldots,p_6)=
\int \frac{d^Dl}{i \pi^{D/2}} \frac{l^{\mu_1}\ldots l^{\mu_M}}{d_1d_2 \ldots d_N},
\;\;\; d_i \equiv (l+q_i)^2,\;\;\; q_i \equiv \sum_{j=1}^i p_j\,,
\end{equation}
and can be evaluated semi-numerically.
For $N\leq 4$ we use the method of
\cite{Giele:2004ub,Giele:2004iy,Ellis:2005zh} which we already
developed, tested and used in the calculation of $\mbox{H} + 4\
\mbox{partons}$ at one-loop~\cite{Ellis:2005qe}. In general, the
basis integrals will contain divergences in $\epsilon=(4-D)/2$ from
soft, collinear and ultraviolet divergences and the answer returned by
the semi-numerical procedure will be a Laurent series in inverse
powers of $\epsilon$.
For the five~(six)-point tensor integrals the method we use relies on
the completeness (over-completeness) of the basis of external momenta
for a generic phase space point. We therefore use a technique for
tensor reduction which generalizes the methods of
ref.~\cite{vanNeerven:1983vr,vanOldenborgh:1989wn}. This technique is
valid as long as the basis of external momenta is
complete\footnote{For exceptional momentum configurations (such as
threshold regions or planar event configurations) this is not the
case. Exceptional configurations can be treated using a
generalization of the expanded relations proposed in
refs.~\cite{Giele:2004ub,Ellis:2005zh}. This is beyond the scope of
this paper.}. Assuming we have a complete basis of external momenta
we can select a set of 4 momenta $\{p_{k_1},p_{k_2},p_{k_3},p_{k_4}\}$
which form the basis of the four-dimensional space. We can then
decompose the loop momentum
\begin{equation}
l^\mu=\sum_{i=1}^4 l\cdot p_{k_i} v_{k_i}^\mu
=V^\mu+\frac{1}{2}\sum_{i=1}^4 \left(d_{k_i}-d_{k_i-1}\right) v_{k_i}^\mu\,,
\end{equation}
where the $v_{k_i}$ are defined as linear combinations of the basis
vectors
\begin{equation}\label{axial}
v^{\mu}_{k_i} = \sum_{j=1}^4 [G^{-1}]_{ij} p^\mu_{k_j}, \;\;\; G_{ij} =p_{k_i} \cdot p_{k_j}\,,
\end{equation}
where $G$ is the Gram matrix and
\begin{equation}
V^\mu=-\frac{1}{2}\sum_{i=1}^4 (r_{k_i}-r_{k_i-1})v^\mu_{k_i},\;\;\;
r_k=q_k^2\,.
\end{equation}
With this relation it is now easy to reduce an $N$-point function of
rank $M$ to a lower rank $N$-point function and a set of lower rank
$(N-1)$-point functions
\begin{equation}
I_N^{\mu_1\cdots\mu_M}=I_N^{\mu_1\cdots\mu_{M-1}}V^{\mu_M}
+\frac{1}{2}\sum_{i=1}^4\left(I_{N,k_i}^{\mu_1\cdots\mu_{M-1}}-I_{N,k_i-1}^{\mu_1\cdots\mu_{M-1}}\right)v_{k_i}^{\mu_M}\,,
\end{equation}
where $I_{N,j}$ is a $(N-1)$-point integral originating from $I_N$
with propagator $d_j$ removed. More explicitly, choosing without loss
of generality the base set $\{p_1,p_2,p_3,p_4\}$, we get
\begin{eqnarray}
\lefteqn{I_N^{\mu_1\cdots\mu_M}(p_1,p_2,p_3,p_4,p_5,\ldots,p_N)=
I_N^{\mu_1\cdots\mu_{M-1}}(p_1,p_2,p_3,p_4,p_5,\ldots,p_N) V^{\mu_M}(p_1,p_2,p_3,p_4)}
\nonumber\\&+&\frac{1}{2}
\left(I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_1+p_2,p_3,p_4,p_5,\ldots,p_N)
-I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_2,p_3,p_4,p_5,\ldots,p_N)\right)
\nonumber\\&&\times
v_1^{\mu_M}(p_1,p_2,p_3,p_4)
\nonumber\\&+&\frac{1}{2}
\left(I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_1,p_2+p_3,p_4,p_5,\ldots,p_N)
-I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_1+p_2,p_3,p_4,p_5,\ldots,p_N)\right)
\nonumber\\&&\times
v_2^{\mu_M}(p_1,p_2,p_3,p_4)
\nonumber\\&+&\frac{1}{2}
\left(I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_1,p_2,p_3+p_4,p_5,\ldots,p_N)
-I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_1,p_2+p_3,p_4,p_5,\ldots,p_N)\right)
\nonumber\\&&\times
v_3^{\mu_M}(p_1,p_2,p_3,p_4)
\nonumber\\&+&\frac{1}{2}
\left(I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_1,p_2,p_3,p_4+p_5,\ldots,p_N)
-I_{N-1}^{\mu_1\cdots\mu_{M-1}}(p_1,p_2,p_3+p_4,p_5,\ldots,p_N)\right)
\nonumber\\&&\times
v_4^{\mu_M}(p_1,p_2,p_3,p_4)\,.
\nonumber\\
\end{eqnarray}
For example, applying this relation repeatedly to the tensor six-point
integrals we will be left with the scalar six-point integral and
five-point tensor integrals. The five-point tensor integrals can be
reduced using the same technique. Subsequently we can use the method
of~\cite{Giele:2004ub,Giele:2004iy,Ellis:2005zh} to further
numerically reduce all remaining integrals to the basis of scalar 2-,
3- and 4-point integrals. This procedure turns out to be efficient and
straightforward to implement numerically.
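As a concrete illustration of \eqref{axial}, the dual vectors can be obtained numerically from the inverse Gram matrix; the following small Python/NumPy sketch (purely illustrative, with arbitrary example momenta of our own choosing) verifies the property $v_{k_i}\cdot p_{k_j}=\delta_{ij}$ that underlies the decomposition of the loop momentum:
\begin{verbatim}
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])          # metric (+,-,-,-)
dot = lambda a, b: a @ eta @ b                   # Minkowski product

# four linearly independent example momenta (rows)
p = np.array([[3.0,  1.0,  2.0,  1.0],
              [2.0, -1.0,  0.5,  1.5],
              [1.0,  0.3, -0.7,  0.2],
              [4.0,  2.0,  1.0, -3.0]])

G = np.array([[dot(p[i], p[j]) for j in range(4)] for i in range(4)])
v = np.linalg.inv(G) @ p                         # v_i = sum_j (G^{-1})_{ij} p_j

check = np.array([[dot(v[i], p[j]) for j in range(4)] for i in range(4)])
print(np.allclose(check, np.eye(4)))             # v_i . p_j = delta_ij
\end{verbatim}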
\section{Comparison with the literature}
Since we have directly calculated the loop amplitudes with internal
gluons and fermions we can easily obtain the result for QCD with an
arbitrary number $n_f$ of flavours of quarks,
\begin{equation}
{A}^{\rm QCD} = A^{[1]} + \frac{n_f}{N} A^{[1/2]}\, .
\end{equation}
However since the analytic calculations in the literature are
presented in terms of supersymmetric theories we need to re-organize
our results to compare with other authors.
\subsection{Supersymmetry}
Since we have calculated the amplitudes with massless spin $1$, spin
$1/2$ and spin $0$ particles in the internal loop we can combine our
results as follows
\begin{eqnarray}
{A}^{{\cal{N}}=4}&=&A^{[1]}+4 A^{[1/2]}+3 A^{[0]}\, , \\
{A}^{{\cal{N}}=1}&=& A^{[1/2]}+A^{[0]}.
\end{eqnarray}
${A}^{{\cal{N}}=4}$, so constructed, describes an amplitude where the full
supersymmetric ${\cal{N}}=4$ multiplet runs in the loop, and ${A}^{{\cal{N}}=1}$
denotes the contribution from an ${\cal{N}}=1$ super-multiplet running in
the loop.
In analytic calculations the intention is to proceed in the opposite
direction. Amplitudes with multiplets of supersymmetric Yang-Mills in
internal loops have much improved ultra-violet behavior and are
four-dimensional cut-constructible. For this reason, all of these
supersymmetric amplitudes have been calculated and most have been
presented in a form suitable for numerical evaluation. As far as
six-gluon amplitudes with scalars in the loop, ${A}^{[0]}$, are
concerned three of the needed eight independent helicity amplitudes
have been published so far. Only in the helicity combinations where
all contributions are known can one reconstruct the ingredients needed
for QCD amplitudes
\begin{eqnarray}
{A}^{[1]}&=&{A}^{{{\cal{N}}}=4}-4{A}^{{\cal{N}}=1}+{A}^{[0]} \, ,\\
{A}^{[1/2]}&=& {A}^{{\cal{N}}=1}-{A}^{[0]}\, .
\end{eqnarray}
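As a trivial consistency check, inserting the definitions of ${A}^{{\cal{N}}=4}$ and ${A}^{{\cal{N}}=1}$ given above into these relations one finds
\begin{equation*}
\left(A^{[1]}+4A^{[1/2]}+3A^{[0]}\right)-4\left(A^{[1/2]}+A^{[0]}\right)+A^{[0]}=A^{[1]}\,,
\qquad
\left(A^{[1/2]}+A^{[0]}\right)-A^{[0]}=A^{[1/2]}\,,
\end{equation*}
as expected.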
\subsection{Numerical results}
As a preparatory exercise we performed a check of the four- and
five-point gluon one-loop amplitudes. We found agreement with the
literature~\cite{Ellis:1985er,Kunszt:1993sd,Bern:1993mq}.
We now turn to the amplitude for six-gluons which is the main result
of this paper. Our numerical program allows the evaluation of the
one-loop amplitude at an arbitrary phase space point and for arbitrary
helicities. For a general phase space point it is useful to re-scale
all momenta so that the momenta of the gluons, (and the elements of
the Gram matrix), are of $O(1)$ before performing the tensor
reduction. Without loss of generality we can assume that this has been
done.
To present our numerical results we choose a particular phase space
point with the six momenta $p_i$ chosen as follows, $(E,p_x,p_y,p_z)$,
\begin{eqnarray}
\label{specificpoint}
p_1 & = & \frac{\mu}{2} (-1, +\sin\theta, +\cos\theta \sin\phi, +\cos\theta \cos\phi ), \nonumber \\
p_2 & = & \frac{\mu}{2} (-1, -\sin\theta, -\cos\theta \sin\phi, -\cos\theta \cos\phi ), \nonumber \\
p_3 & = & \frac{\mu}{3} (1,1,0,0), \nonumber \\
p_4 & = & \frac{\mu}{7} (1,\cos\beta,\sin\beta,0), \nonumber \\
p_5 & = & \frac{\mu}{6} (1,\cos\alpha \cos\beta, \cos\alpha \sin\beta,\sin\alpha), \nonumber \\
p_6 & = & -p_1-p_2-p_3-p_4-p_5\, ,
\end{eqnarray}
where $\theta= \pi/4,\phi= \pi/6,\alpha= \pi/3,\cos \beta= -7/19$.
Note that the energies of $p_1$ and $p_2$ are negative and $p_i^2=0$.
In order to have energies of $O(1)$ we make the choice for the scale
$\mu=n=6$~[GeV]. As usual $\mu$ also denotes the scale which is used
to carry the dimensionality of the $D$-dimensional integrals. The
results presented contain no ultraviolet renormalization.
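The following short Python script (an independent numerical cross-check added for this presentation; it is not the code used to obtain the results below, and we take $\sin\beta=+\sqrt{1-\cos^2\beta}$, a choice which does not affect these checks) verifies that all six momenta of Eq.~(\ref{specificpoint}) are light-like and sum to zero:
\begin{verbatim}
import numpy as np

mu = 6.0
th, ph, al = np.pi/4, np.pi/6, np.pi/3
cb, sb = -7.0/19.0, np.sqrt(1.0 - (7.0/19.0)**2)   # sin(beta) > 0 assumed
st, ct = np.sin(th), np.cos(th)

p1 = mu/2 * np.array([-1.0,  st,  ct*np.sin(ph),  ct*np.cos(ph)])
p2 = mu/2 * np.array([-1.0, -st, -ct*np.sin(ph), -ct*np.cos(ph)])
p3 = mu/3 * np.array([1.0, 1.0, 0.0, 0.0])
p4 = mu/7 * np.array([1.0, cb, sb, 0.0])
p5 = mu/6 * np.array([1.0, np.cos(al)*cb, np.cos(al)*sb, np.sin(al)])
p6 = -(p1 + p2 + p3 + p4 + p5)

msq = lambda p: p[0]**2 - np.sum(p[1:]**2)
print(max(abs(msq(p)) for p in (p1, p2, p3, p4, p5, p6)) < 1e-10)  # on-shell
print(np.allclose(p1 + p2 + p3 + p4 + p5 + p6, 0.0))               # conservation
\end{verbatim}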
Analytic results require the specification of eight helicity
combinations: all other amplitudes can be obtained by the parity
operation or cyclic permutations. We choose these eight combinations
to be the two finite amplitudes ($++++++,-+++++$), the maximal
helicity violating amplitudes ($--++++,-+-+++,-++-++$), and the
next-to-maximal helicity violating amplitudes
($---+++,--+-++,-+-+-+$). These eight amplitudes would not be
sufficient for a numerical evaluation, but the numerical approach
allows the evaluation of any helicity configuration at will.
In Table~\ref{tableneq4} we give results for a particular colour
sub-amplitude ${A}^{{\cal{N}}=4}(1,2,3,4,5,6)$ for the above eight choices
of the helicity. An overall factor of $i c_\Gamma$ has been removed from
all the results in the Tables~\ref{tableneq4}, \ref{tableneq1}, and
\ref{tablescalar}
\begin{equation}
c_\Gamma = {(4 \pi)^\epsilon \over 16 \pi^2 }
{\Gamma(1+\epsilon)\Gamma^2(1-\epsilon)\over\Gamma(1-2\epsilon)}\ .
\label{cgdef}
\end{equation}
The results for the ${\cal{N}}=4$ amplitudes depend on the number of
helicities of gluons circulating in internal loops.
For a recent description
of regularization schemes see, for example, ref.~\cite{Bern:2002zk}.
Our results are
presented in the 't Hooft-Veltman scheme.
The translation to the four-dimensional helicity scheme is immediate
\begin{equation}
{A}^{{\cal{N}}=4}_{\rm FDH} = {A}^{{\cal{N}}=4}_{\rm t-HV} + \frac{c_\Gamma}{3}
{A}_{\rm tree}\,.
\label{THVtoFDH}
\end{equation}
Note that analytic results from the literature are quoted in the
four-dimensional helicity scheme, which respects supersymmetry. These
results have been translated to the 't Hooft-Veltman scheme using
Eq.~(\ref{THVtoFDH}) before insertion in our tables.
\begin{scriptsize}
\TABLE{
\begin{tabular}{|c|c|c|c|c|}
\hline
Helicity & $1/\epsilon^2$ & $1/\epsilon$ & 1 &[Ref]/(Eq.\#) \\
\hline
$++++++$ & $0$ & $0$ & $0$ & \\
$++++++$ & $(-1.034+i~2.790 ) 10^{-8}$&$ (-9.615+i~3.708 ) 10^{-8}$&$ -(0.826+i~2.514) 10^{-7}$ & [SN-A] \\
\hline
$-+++++$ & $0$ & $0$ & $0$ & \\
$-+++++$ & $(1.568+ i~2.438) 10^{-8}$ & $ (-0.511 +i~1.129) 10^{-7}$&$ -(3.073+i~0.1223) 10^{-7}$ & [SN-A] \\
\hline
\hline
$--++++$ & $-161.917+i~54.826 $ & $ -489.024-i~212.415 $ & $ -435.281-i~1162.971 $ & \cite{Bern:1994zx}/(4.19)\\
$--++++$ & $(-0.933 +i~1.513) 10^{-8} $ & $ -(7.655+i~0.440)10^{-8} $ & $ -(-0.221+i~1.834)10^{-7} $ & [SN-A] \\
\hline
$-+-+++$ & $ -33.024 + i~44.423 $ & $ -169.358 + i~33.499 $ & $ -330.119 -i~229.549 $ & \cite{Bern:1994zx}/(4.19) \\
$-+-+++$ & $(-7.542+i~0.939) 10^{-8} $ & $ -(1.157 +i~0.363)10^{-8} $ & $ -(3.474 +i~2.856)10^{-8} $ & [SN-A] \\
\hline
$-++-++$ & $ -0.5720 - i~3.939 $ & $ 6.929 - i~10.302 $ & $ 28.469 -i~5.058 $ & \cite{Bern:1994zx}/(4.19) \\
$-++-++$ & $(-2.279 +i~1.803)10^{-8} $ & $ -(1.176 +i~0.399)10^{-7} $ & $ (0.054-i~3.307)10^{-7} $ & [SN-A] \\
\hline
\hline
$---+++$ & $ -6.478 -i~10.407 $ & $ 6.825 -i~37.620 $ & $ 75.857 - i~47.081 $ & \cite{Bern:1994cg}/(6.19) \\
$---+++$ & $ (2.686-i~1.668)10^{-8} $ & $ (1.232+i~0.554)10^{-7} $ & $ (0.020+i~3.334 )10^{-7} $ & [SN-A] \\
\hline
$--+-++$ & $ 14.074-i~22.908 $ & $ 80.503- i~23.464 $ & $ 169.047 + i~93.601 $ & \cite{Bern:1994cg}/(6.24) \\
$--+-++$ & $ -(1.619+i~0.943)10^{-8} $ & $ -(1.030+i~8.234)10^{-8} $ & $ (1.560 -i~0.801)10^{-8} $ & [SN-A] \\
\hline
$-+-+-+$ & $ 13.454+i~13.177 $ & $ 3.495+i~58.632 $ & $ -88.32+i~103.340 $ & \cite{Bern:1994cg}/(6.26) \\
$-+-+-+$ & $ (1.045-i~0.113)10^{-9} $ & $ (-0.772+i~1.652)10^{-8} $ & $ (-7.795+i~7.881)10^{-8} $ & [SN-A] \\
\hline
\hline
\end{tabular}
\caption{$\cal{N}$=4 color ordered sub-amplitudes evaluated at the specific point, Eq.~(\ref{specificpoint}).
The results are given in the 't Hooft-Veltman regularization scheme.
[SN-A] means the difference between the semi-numerical result and
the analytical one.}
\label{tableneq4}
}
\end{scriptsize}
\begin{scriptsize}
\TABLE{
\begin{tabular}{|c|c|c|c|c|}
\hline
Helicity & $1/\epsilon^2$ & $1/\epsilon$ & 1 &[Ref]/(Eq.\#) \\
\hline
$++++++$ & 0 & 0 & 0 & \\
$++++++$ & $(-3.470+i~9.320) 10^{-9}$&$(-3.226+i~1.253) 10^{-8}$&$ -(3.899+i~8.969) 10^{-8}$ & [SN-A] \\
\hline
$-+++++$ & 0 & 0 & 0 & \\
$-+++++$ & $(5.228+i~8.127) 10^{-9}$&$(-1.678+i~3.775) 10^{-8} $&$ -(1.013+i~0.2066) 10^{-7} $ & [SN-A] \\
\hline
\hline
$--++++$ & 0 & $26.986-i~9.1376$ & $101.825-i~52.222$ & \cite{Bern:1994cg}/(5.9)\\
$--++++$ &$(-3.297+i~5.194) 10^{-9}$ & $ -(-2.104+i~0.344) 10^{-8} $ & $(0.949 -i~4.895) 10^{-8} $ & [SN-A] \\
\hline
$-+-+++$ & $0$ & $ 5.504-i~7.404 $ & $ 21.811-i~29.051 $ & \cite{Bern:1994cg}/(5.12)\\
$-+-+++$ & $(-1.847 + i~0.8566) 10^{-10} $ & $ -(6.141+i~4.633 ) 10^{-10} $ & $ (3.095+i~2.138) 10^{-7} $ & [SN-A] \\
\hline
$-++-++$ & $0$ & $0.09533+i~0.6565$ & $ -2.183+i~3.260 $ & \cite{Bern:1994cg}/(5.12)\\
$-++-++$ & $(-7.599+i~6.018) 10^{-9}$ & $ -(3.929+i~1.304)10^{-8} $ & $(0.008-i~1.100)10^{-7} $ & [SN-A] \\
\hline
\hline
$---+++$ & $0$ & $1.080 +i~1.735$ & $ 0.722+i~5.285$ & \cite{Bidder:2004tx}/(9) \\
$---+++$ & $(8.965-i~5.555) 10^{-9}$ & $(4.107 +i~1.858)10^{-8} $ & $ (0.002+i~1.114)10^{-7} $ & [SN-A] \\
\hline
$--+-++$ & $0$ & $-2.346+i~3.819$ & & \cite{Britto:2005ha}/(5.4,2.3)\\
$--+-++$ & $(-5.351-i~2.825) 10^{-9}$ & $-2.346+i~3.819$ & $-2.238+i~17.687$ & [SN] \\
\hline
$-+-+-+$ & $0$ & $-2.242-i~2.196$ & & \cite{Britto:2005ha}/(5.13,2.3)\\
$-+-+-+$ & $(1.124-i~0.2060) 10^{-10}$ & $-2.242-i~2.196$ & $-1.721-i~7.433$ & [SN] \\
\hline
\hline
\end{tabular}
\caption{$\cal{N}$=1 color ordered sub-amplitudes evaluated at the specific point, Eq.~(\ref{specificpoint}).
[SN] means that the result is obtained using our semi-numerical
code, while [SN-A] denotes the difference between the semi-numerical
result and the analytical one.}
\label{tableneq1}
}
\end{scriptsize}
In Table~\ref{tableneq1} we give results for the colour sub-amplitudes
${A}^{{\cal{N}}=1}(1,2,3,4,5,6)$ for the same eight helicity choices and
where possible compare with analytical results.~\footnote{In
Eq.~(5.16) of ref.~\cite{Bern:1994cg} for the degenerate case
m=j-1=2 one has $\hat{{\cal C}}_m = \{j+1, \ldots, n-1 \} $, as can
be seen from Fig.~8 of this same paper. This point has also been
made in ref.~\cite{Cachazo:2004zb}. }
Note that because of the relation
\begin{equation}
{A}^{{\cal{N}}=1}|_{\rm singular} = \frac{c_\Gamma}{\epsilon} A^{\rm tree}\, ,
\end{equation}
the column giving the single pole can as well be considered as a
listing of the results for the colour-ordered sub-amplitudes at tree
graph level (stripped only of the overall factor of $i$).
We note that for two of the helicity amplitudes $--+-++$ and $-+-+-+$
we were unable to evaluate the analytic results numerically. This was
due to the fact that calculating the residue of certain poles as
required by the formula in ref.~\cite{Britto:2005ha}, resulted in zero
value denominators of sub-expressions\footnote {We thank the authors
of ref.~\cite{Britto:2005ha} for confirming that there are problems
with the numerical evaluation of the formula for these amplitudes in
their paper.}.
\begin{scriptsize}
\TABLE{
\begin{tabular}{|c|c|c|c|c|}
\hline
Helicity & $1/\epsilon^2$ & $1/\epsilon$ & 1 & [Ref]/(Eq.\#) \\
\hline
$++++++$ & $0$ & $0$ & $ (4.867 + i~2.092) 10^{-1}$&\cite{Bern:2005ji}/(4.3)\\
$++++++$ & $(3.672 +i~9.749) 10^{-9} $ & $(-3.404 + i~1.238) 10^{-8}$& $ -(3.016+ i~9.169) 10^{-8} $& [SN-A] \\
\hline
$-+++++$ & 0 & 0 & $-3.194 + i~0.6503 $ & \cite{Bern:2005ji}/(4.10)\\
$-+++++$ & $(5.921 +i~8.411) 10^{-9}$ & $(-1.606 +i~4.051) 10^{-8} $ & $ -(1.086 +i~0.038) 10^{-7} $ & [SN-A] \\
\hline
\hline
$--++++$ & $0$ & $8.995-i~3.046 $& {$43.089-i~20.288 $} &\cite{Bern:2005cq}/(4.27,4.28) \\
$--++++$ & $(1.280 + i~0.002) 10^{-8}$ & $(2.768+i~4.232) 10^{-8} $ & $ (-1.004+i~0.955)10^{-7} $ & [SN-A] \\
\hline
$-+-+++$ & $(1.045-i~0.580) 10^{-8}$ & $1.835-i~2.468 $ & $9.752-i~11.791$ & [SN] \\
\hline
$-++-++$ & $(-7.791+i~6.717) 10^{-9}$ & $3.178\cdot 10^{-2}+i~0.2188 $ & $-1.447+i~0.1955$ & [SN] \\
\hline
\hline
$---+++$ & $(8.934-i~5.359) 10^{-9}$ & $0.3599+ i~0.5782$ & $ 0.5617+i~5.8166$ & [SN] \\
\hline
$--+-++$ & $(0.1016 +i~1.276) 10^{-8}$ & $ -0.7819 +i~1.273 $ & $ -0.6249+i~6.552$ & [SN] \\
\hline
$-+-+-+$ & $(1.065- i~0.5417) 10^{-8}$ & $ -0.7475-i~0.7321 $ & $ -1.298 - i~3.255$ & [SN] \\
\hline
\hline
\end{tabular}
\caption{One loop six gluon colour ordered sub-amplitudes with a scalar loop
evaluated at the specific point, Eq.~(\ref{specificpoint}). [SN] means
that the result is obtained using our semi-numerical code, while
[SN-A] denotes the difference between the semi-numerical result and
the analytical one.}
\label{tablescalar}
}
\end{scriptsize}
Lastly in Table~\ref{tablescalar} we give results for the colour
sub-amplitudes $A^{[0]}(1,2,3,4,5,6)$ for scalar gluons, for the same
eight helicity choices.\footnote{In ref.~\cite{Bern:2005cq} [v1-v3]
the definition of $F_f$ has an overall sign missing, a typographical
error not present in the original calculation of the $\cal{N}$ = 1 term
in ref.~\cite{Bern:1994cg}.}
For all amplitudes for which no analytic result exists, we checked the
gauge invariance of the amplitudes by changing the gluon polarization.
The gauge invariance was obeyed with a numerical accuracy of ${\cal
O}\left(10^{-8}\right)$. To evaluate a single colour-ordered
sub-amplitude for a complex scalar took 9 seconds on a 2.8GHz Pentium
processor. Evaluating the complete set of 64 possible helicities takes
less than 64 times longer, because the scalar integrals stored
during the calculation of the first amplitude are applicable to all
other configurations with the same external momenta.
\section{Conclusions}
In this paper we have presented numerical results which demonstrate
that the complete one-loop amplitude for six-gluon scattering is now
known numerically.
By forming multiplets of SUSY Yang-Mills in the internal loops,
we were able to compare with most of the known analytic results.
In addition, we have presented numerical results for amplitudes which
are currently completely unknown. Note that the analytic and
semi-numerical results are complementary. The hardest piece to
calculate analytically is the scalar contribution $A^{[0]}$, which is
the easiest for the semi-numerical approach. Thus it is possible that
a numerical code involving both semi-numerical and analytic results
will be the most efficient and expedient. Our results demonstrate the
power of the semi-numerical method, which can supplant the analytic
method where it is too arduous and provide a completely independent
check where analytic results already exist.
After inclusion of the one-loop corrections to the other parton
subprocesses involving quarks it would be possible to proceed to a NLO
evaluation of the rate for four jet production. We intend to use
these methods to calculate NLO corrections to other processes which we
consider to be of more pressing phenomenological interest.
\section*{Acknowledgements}
We would like to thank Zvi Bern, Lance Dixon and David Kosower for
providing helpful comments on the draft of this manuscript. We also
acknowledge useful discussions with John Campbell, Vittorio Del Duca
and Fabio Maltoni.
|
1,116,691,498,553 | arxiv | \section{Introduction}
One of the main goals of the LHC is the identification of the mechanism
of electroweak symmetry breaking. The most frequently investigated
models are the Higgs mechanism within the Standard
Model (SM) and within the Minimal Supersymmetric Standard Model
(MSSM)~\cite{mssm}. Contrary to the case of the SM, in the MSSM
two Higgs doublets are required.
This results in five physical Higgs bosons instead of the single Higgs
boson in the SM. These are the light and heavy ${\cal CP}$-even Higgs bosons, $h$
and $H$, the ${\cal CP}$-odd Higgs boson, $A$, and the charged Higgs bosons,
$H^\pm$.
The Higgs sector of the MSSM can be specified at lowest
order in terms of the gauge couplings, the ratio of the two Higgs vacuum
expectation values, $\tan \beta \equiv v_2/v_1$, and the mass of the ${\cal CP}$-odd
Higgs boson, $M_A$ (or $M_{H^\pm}$, the mass of the charged Higgs boson).
Consequently, the masses of the ${\cal CP}$-even neutral and the charged Higgs
bosons are dependent quantities that can be
predicted in terms of the Higgs-sector parameters, e.g.\
$M_{H^\pm}^2 = M_A^2 + M_W^2$, where $M_W$ denotes the mass of the $W$~boson.
The same applies to
the production and decay properties of the MSSM Higgs bosons%
\footnote{If the production or decay involves SUSY particles at
tree-level, other MSSM parameters also enter the prediction at lowest
order.}%
.~Higgs phenomenology
in the MSSM is strongly affected by higher-order corrections, in
particular from the sector of the third generation quarks and squarks,
so that the dependencies on various other MSSM parameters can be
important, see e.g.\ \citeres{PomssmRep,habilSH,mhiggsAWB} for reviews.
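To make the lowest-order relation between the two mass parameters
explicit, the following short Python snippet (a sketch for orientation
only, not part of any of the codes used below) converts between $M_A$
and $M_{H^\pm}$ at tree level; the higher-order corrections discussed
later shift this relation.
\begin{verbatim}
import math

M_W = 80.4  # GeV, approximate W-boson mass

def mhp_from_ma(m_a):
    """Tree-level charged Higgs mass from the CP-odd Higgs mass."""
    return math.sqrt(m_a**2 + M_W**2)

def ma_from_mhp(m_hp):
    """Inverse relation, valid for M_H+- > M_W."""
    return math.sqrt(m_hp**2 - M_W**2)

for m_a in (100.0, 200.0, 400.0):
    print(f"M_A = {m_a:6.1f} GeV  ->  M_H+- = {mhp_from_ma(m_a):6.1f} GeV")
\end{verbatim}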
Searches for the charged Higgs bosons of the MSSM (or a more general
Two Higgs Doublet Model (THDM)) have been carried out at
LEP~\cite{LEPchargedHiggsPrel}, yielding a bound of
$M_{H^\pm} \gsim 80 \,\, \mathrm{GeV}$~\cite{LEPchargedHiggsProc,LEPchargedHiggs}.
The Tevatron placed additional bounds on the MSSM parameter space from
charged Higgs-boson searches, in particular at large $\tan \beta$ and low
$M_A$~\cite{Tevcharged}. At the LHC the charged Higgs bosons will be
best accessible at large $\tan \beta$ up to $M_A \lsim 800 \,\, \mathrm{GeV}$
\cite{atlastdr,cmstdr,benchmark3}. At the ILC, for
$M_{H^\pm} \lsim \sqrt{s}/2$ a high-precision determination of the charged
Higgs boson properties will be
possible~\cite{tesla,orangebook,acfarep,Snowmass05Higgs}.
The prospective sensitivities at
the LHC are usually displayed in terms of the parameters $M_A$ and $\tan \beta$
(or $M_{H^\pm}$ and $\tan \beta$) that characterize the MSSM Higgs sector at lowest
order. The other MSSM
parameters are conventionally fixed according to certain benchmark
scenarios~\cite{benchmark2}.
The respective LHC analyses of the $5\,\sigma$ discovery contours for the
charged Higgs boson are given in \citere{HchargedATLAS} for
ATLAS and in \citeres{lightHexp,heavyHexp} for CMS.
However, within these analyses the variation with relevant SUSY
parameters as well as possibly relevant loop corrections in the Higgs
production and decay~\cite{benchmark3} have been neglected.
We focus in this paper on the $5\,\sigma$ discovery contours for the
charged MSSM Higgs boson
for the two cases $M_{H^\pm} < m_{t}$ and $M_{H^\pm} > m_{t}$,
within the $m_h^{\rm max}$~scenario and the no-mixing
scenario~\cite{benchmark2,benchmark3} (i.e.\ we concentrate on the
${\cal CP}$-conserving case).
They are obtained by using the latest CMS
results~\cite{lightHexp,heavyHexp} derived in a model-independent
approach, i.e.\ making no assumption on the Higgs boson production
mechanism or decays. However, the detection relies on the decay mode
of the charged Higgs bosons to $\tau\nu_\tau$. Furthermore only SM
backgrounds have been assumed.
These experimental results are combined with up-to-date theoretical
predictions for charged Higgs production and decay in the MSSM, taking
into account also the decay to SUSY particles that can in principle
suppress the branching ratio of the charged Higgs boson decay to
$\tau\nu_\tau$.
For the interpretation of the exclusion bounds and prospective discovery
contours in the benchmark scenarios it is important to assess how
sensitively the results depend on those parameters that have been fixed
according to the benchmark prescriptions. In \citeres{benchmark3,cmsHiggs}
this issue has been analyzed for the neutral heavy MSSM Higgs bosons,
and it has been found that the by far largest effect arises from the
variation of the Higgs-mixing parameter~$\mu$.
Consequently, we investigate how the
5$\,\sigma$ discovery regions in the $M_{H^\pm}$--$\tan \beta$ plane
for the charged MSSM Higgs boson obtainable with the CMS experiment at
the LHC are affected by a variation of the
mixing parameter~$\mu$.
\section{Experimental analysis}
\label{sec:exp}
The main
production channels at the LHC are
\begin{equation}
pp \to t\bar t \; + \; X, \quad
t \bar t \to t \; H^- \bar b \mbox{~~or~~} H^+ b \; \bar t~
\label{pp2Hpm}
\end{equation}
and
\begin{equation}
gb \to H^- t \mbox{~~or~~} g \bar b \to H^+ \bar t~.
\label{gb2Hpm}
\end{equation}
The decay used in the analysis to detect the charged Higgs boson is
\begin{equation}
H^\pm \; \to \; \tau \nu_\tau \; \to \; {\rm hadrons~}\nu_\tau.
\label{Hbug}
\end{equation}
The analyses described below correspond to
CMS experimental sensitivities based on full simulation studies,
assuming an integrated luminosity of 30~$\mbox{fb}^{-1}$.
In these analyses a top quark mass of $m_{t} = 175 \,\, \mathrm{GeV}$ has been
assumed.
\subsection{The light charged Higgs Boson}
\label{sec:lightHpm}
The ``light charged Higgs boson'' is characterized by $M_{H^\pm} < m_{t}$.
The main production channel is given in \refeq{pp2Hpm}. Close to
threshold also \refeq{gb2Hpm} contributes. The relevant (i.e.\
detectable) decay channel is given by \refeq{Hbug}.
The experimental analysis, based on 30~$\mbox{fb}^{-1}$\ collected with CMS, is
presented in \citere{lightHexp}. The events were required to be
selected with the single lepton trigger, thus exploiting the
$W \to \ell \nu$ decay mode of a $W$~boson from the decay of
one of the top quarks in \refeq{pp2Hpm}.
The total number of events leading to final states with the signal
characteristics is evaluated, including their respective experimental
efficiencies. The various channels and the corresponding efficiencies
can be found in \refta{tab:lightHp}. The efficiencies are given for
$M_{H^\pm} = 160 \,\, \mathrm{GeV}$, but vary only insignificantly over the parameter
space under investigation.
The number of signal-like events is evaluated as the sum of
background and Higgs-boson signal events,
\begin{align}
N_{\rm ev} =& \;N_{\rm background}
\mathrm{(from~the~processes~in~\refta{tab:lightHp})} \nonumber \\
&+ {\cal L} \times \sigma(pp \to t \bar t + X)
\times {\rm BR}(t \to H^\pm b)
\times {\rm BR}(H^\pm \to \tau \nu_\tau) \\
&\mbox{}\hspace{41.5mm} \times {\rm BR}(\tau \to \mbox{hadrons})
\times \mbox{exp.\ eff.}~, \nonumber
\end{align}
where ${\cal L}$ denotes the luminosity, and the experimental efficiency
is given in \refta{tab:lightHp}.
A $5\,\sigma$ discovery can be achieved if a parameter point results in
more than 5260~events (with 30~$\mbox{fb}^{-1}$).\\
\newpage
\noindent
We furthermore used
\begin{align}
{\rm BR}(W^\pm \to \ell \nu_\ell) &~= ~0.217 ~~(\ell = \mu, e), \nonumber \\
{\rm BR}(W^\pm \to \tau \nu_\tau) &~= ~0.1085 , \nonumber \\
{\rm BR}(W^\pm \to \mbox{jets}) &~= ~0.67 , \\
{\rm BR}(\tau \to \mbox{hadrons}) &~= ~0.65 . \nonumber
\end{align}
The next-to-leading order LHC cross section for top quark pairs is
taken to be 840~pb~\cite{sigmatt}.
For the $W^\pm$+3 jets background the leading
order cross section for the process $pp \to W^{\pm} + \rm 3~jets$,
$W^{\pm} \to \ell^{\pm} \nu$ ($\ell=e,~\mu$) of 840~pb was used,
as given by the MadGraph~\cite{MadGraph} generator.
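For orientation, the counting of signal-like events entering the
$5\,\sigma$ criterion can be assembled directly from the numbers quoted
above. The Python sketch below evaluates only the signal term of the sum;
the branching ratios ${\rm BR}(t \to H^\pm b)$ and
${\rm BR}(H^\pm \to \tau\nu_\tau)$ are external inputs (in our analysis
they are taken from the theory evaluation of \refse{sec:theo}), and the
values inserted at the bottom, including the background placeholder, are
purely illustrative.
\begin{verbatim}
LUMI_FB = 30.0          # integrated luminosity [fb^-1]
SIGMA_TTBAR_FB = 840e3  # NLO t tbar cross section: 840 pb in fb
BR_TAU_HAD = 0.65       # BR(tau -> hadrons)
EFF_SIGNAL = 0.0052     # signal efficiency from Table 1 (M_H+- = 160 GeV)
N_5SIGMA = 5260         # signal-like events needed for a 5 sigma discovery

def signal_events(br_t_to_hb, br_h_to_taunu):
    """Signal part of N_ev for the light charged Higgs search."""
    return (LUMI_FB * SIGMA_TTBAR_FB * br_t_to_hb
            * br_h_to_taunu * BR_TAU_HAD * EFF_SIGNAL)

# Illustrative input values only (the real numbers depend on M_H+-,
# tan(beta) and the other MSSM parameters):
n_sig = signal_events(br_t_to_hb=0.05, br_h_to_taunu=0.95)
n_background = 4000.0   # placeholder for the summed background channels
print(f"signal = {n_sig:.0f}, total = {n_sig + n_background:.0f}, "
      f"5 sigma requires > {N_5SIGMA}")
\end{verbatim}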
\begin{table}[htb!]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c|c|} \hline
channel & exp.\ efficiency \\ \hline\hline
$pp \to t \bar t +X,\; t \bar t \to H^+ b \; \bar t
\to (\tau^+ \bar{\nu}_\tau) \; b \; (W^- \bar b)$;
~$\tau \to \mbox{hadrons}$, $W \to \ell \nu_\ell$ & 0.0052 \\
\hline
$pp \to t \bar t +X,\; t \bar t \to W^+ \; W^- \; b \bar b
\to (\tau \nu_\tau) \; (\ell \nu_\ell) \; b \bar b$;
~$\tau \to \mbox{hadrons}$ & 0.00217 \\
\hline
$pp \to t \bar t +X,\; t \bar t \to W^+ \; W^- \; b \bar b
\to (\ell \nu_\ell) \; (\ell \nu_\ell) \; b \bar b$ & 0.000859 \\
\hline
$pp \to t \bar t +X,\; t \bar t \to W^+ \; W^- \; b \bar b
\to (\mbox{jet jet}) \; (\ell \nu_\ell) \; b \bar b$ & 0.000134 \\
\hline
$pp \to W + \rm 3~jets$, $W \to \ell \nu$ & 0.000013 \\
\hline\hline
\end{tabular}
\end{center}
\vspace{-1em}
\caption{Relevant signal (first line) and background
channels for the light charged Higgs boson and their
respective experimental efficiencies. The charge-conjugated processes
are to be included. The efficiency for the charged Higgs production
is given for $M_{H^\pm} = 160 \,\, \mathrm{GeV}$, but varies only insignificantly
over the relevant parameter space. $\ell$ denotes $e$ or $\mu$.
}
\label{tab:lightHp}
\renewcommand{\arraystretch}{1.0}
\end{table}
\subsection{The heavy charged Higgs Boson}
\label{sec:heavyHpm}
The ``heavy charged Higgs boson'' is characterized by $M_{H^\pm} \gsim m_{t}$.
Here \refeq{gb2Hpm} gives the largest contribution to the production cross
section, and very close to
threshold \refeq{pp2Hpm} can contribute somewhat. The relevant decay
channel is again given in \refeq{Hbug}.
The experimental analysis, based on 30~$\mbox{fb}^{-1}$\ collected with CMS, has been
presented in \citere{heavyHexp}. The fully hadronic final state
topology was considered, thus events were selected with the single
$\tau$ trigger at Level-1 and the combined $\tau$-$E_{\rm T}^{\rm miss}$ High
Level trigger.
The backgrounds considered were $t \bar t$, $W^\pm t$,
$W^\pm + 3~{\rm jets}$ as well as QCD multi-jet background.
The $t \bar t$ and QCD multi-jet processes were generated with
PYTHIA~\cite{pythia}, $W^\pm t$ was
generated with the TopRex generator~\cite{toprex} and
$W^\pm + 3~{\rm jets}$ with MadGraph~\cite{MadGraph}.
The production cross sections for the $t\bar t$~background processes were
normalized to the NLO cross sections~\cite{sigmatt}.
The total background amounts (after cuts) to
$1.7 \pm 1$ events, independently of the charged Higgs boson mass.
\noindent
The number of signal events is evaluated as
\begin{equation}
N_{\rm ev} = {\cal L} \times \sigma(pp \to H^\pm + X)
\times {\rm BR}(H^\pm \to \tau \nu_\tau)
\times {\rm BR}(\tau \to \mbox{hadrons})
\times \mbox{exp.\ eff.}~,
\end{equation}
where ${\cal L}$ denotes the luminosity, and the experimental efficiency
is given in \refta{tab:heavyHp} as a function of $M_{H^\pm}$.
A $5\,\sigma$ discovery corresponds to a number of signal events larger
than $14.1$.
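Analogously to the light charged Higgs case, the signal count can be
estimated from the efficiencies of \refta{tab:heavyHp} (here linearly
interpolated in $M_{H^\pm}$). In the Python sketch below the production
cross section and ${\rm BR}(H^\pm \to \tau\nu_\tau)$ are external inputs,
and the numbers inserted at the bottom are illustrative only.
\begin{verbatim}
import numpy as np

LUMI_FB = 30.0      # integrated luminosity [fb^-1]
BR_TAU_HAD = 0.65   # BR(tau -> hadrons)
N_5SIGMA = 14.1     # signal events required for a 5 sigma discovery

# Efficiencies from Table 2 versus M_H+- [GeV]
MASS_GRID = np.array([171.6, 180.4, 201.0, 300.9, 400.7, 600.8])
EFF_GRID = np.array([3.5, 4.0, 5.0, 23.0, 32.0, 42.0]) * 1e-4

def signal_events(m_hp, sigma_fb, br_h_to_taunu):
    """N_ev = L * sigma(pp -> H + X) * BR(H -> tau nu) * BR(tau -> had) * eff."""
    eff = np.interp(m_hp, MASS_GRID, EFF_GRID)
    return LUMI_FB * sigma_fb * br_h_to_taunu * BR_TAU_HAD * eff

# Illustrative inputs; the physical cross section and branching ratio
# depend on M_H+-, tan(beta) and the other MSSM parameters.
n_sig = signal_events(m_hp=300.0, sigma_fb=500.0, br_h_to_taunu=0.3)
print(f"signal events = {n_sig:.1f}  (5 sigma requires > {N_5SIGMA})")
\end{verbatim}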
\begin{table}[htb!]
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c||cccccc|}
\hline\hline
$M_{H^\pm}$ [GeV] & 171.6 & 180.4 & 201.0 & 300.9 & 400.7 & 600.8 \\
\hline
exp.\ eff.\ [$10^{-4}$] & 3.5 & 4.0 & 5.0 & 23 & 32 & 42 \\
\hline\hline
\end{tabular}
\end{center}
\vspace{-1em}
\caption{Experimental efficiencies for the heavy charged Higgs boson
detection.
}
\label{tab:heavyHp}
\renewcommand{\arraystretch}{1.0}
\end{table}
The efficiency for the charged Higgs boson production over the
full mass range considered was evaluated with the PYTHIA~\cite{pythia}
generator processes 401 ($gg \to tbH^{\pm}$) and 402 ($qq \to tbH^{\pm}$)
implemented as described in~\citere{tbH}.
\section{Calculation of cross section and branching ratios}
\label{sec:theo}
While the phenomenology of the production and decay processes of the
charged MSSM Higgs bosons at the LHC is mainly characterized by
the parameters $M_A$ (or $M_{H^\pm}$) and $\tan \beta$ that govern the Higgs sector
at lowest
order, other MSSM parameters enter via higher-order contributions (see
e.g.\ \citere{benchmark3} and references therein),
and also via the kinematics of Higgs-boson decays into
supersymmetric particles. The other MSSM parameters are usually fixed
in terms of benchmark scenarios. The most commonly used scenarios are
the ``$m_h^{\rm max}$'' and ``no-mixing'' benchmark
scenarios~\cite{benchmark2,benchmark3}. According to the
definition of \citere{benchmark2} the $m_h^{\rm max}$ scenario is given by,
\begin{eqnarray}
\mbox{\underline{$m_h^{\rm max}:$}} &&
M_{\rm SUSY} = 1000 \,\, \mathrm{GeV}, \quad X_t = 2\, M_{\rm SUSY}, \quad A_b = A_t, \nonumber \\
&& \mu = 200 \,\, \mathrm{GeV}, \quad M_2 = 200 \,\, \mathrm{GeV}, \quad m_{\tilde{g}} = 0.8\,M_{\rm SUSY}~.
\label{mhmax}
\end{eqnarray}
Here $M_{\rm SUSY}$ denotes the diagonal soft SUSY-breaking parameters in the
sfermion mass matrices, $m_{t}\,X_t \equiv m_{t}\, (A_t - \mu/\tan \beta)$ is the
off-diagonal entry in the scalar top mass matrix. $A_{t(b)}$ denote the
trilinear Higgs-stop (-sbottom) couplings, $\mu$ is the Higgs mixing
parameter, $m_{\tilde{g}}$ the gluino mass, and $M_2$ and $M_1$ denote the soft
SUSY-breaking parameters in the chargino/neutralino sector.
The parameter $M_1$ is fixed via the GUT relation
$M_1 = (5s_W^2)/(3c_W^2) \, M_2$.
The no-mixing scenario differs from the $m_h^{\rm max}$ scenario only in the
definition of
vanishing mixing in the stop sector and a larger value of $M_{\rm SUSY}$,
\begin{eqnarray}
\mbox{\underline{no-mixing:}} &&
M_{\rm SUSY} = 2000 \,\, \mathrm{GeV}, \quad X_t = 0, \quad A_b = A_t, \nonumber \\
&& \mu = 200 \,\, \mathrm{GeV}, \quad M_2 = 200 \,\, \mathrm{GeV}, \quad m_{\tilde{g}} = 0.8\,M_{\rm SUSY}~.
\label{nomix}
\end{eqnarray}
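For reference, the two benchmark scenarios can be collected in
machine-readable form. The following plain Python sketch simply encodes
the parameter values listed above (it is not tied to the input format of
any particular code); $A_t$ is reconstructed from $X_t$ for a given
$\tan \beta$, and the approximate value $s_W^2 \simeq 0.23$ used for the
GUT relation is an assumption of the sketch.
\begin{verbatim}
SW2 = 0.23  # approximate weak mixing angle, only used for M_1 below

# Benchmark scenarios; all dimensionful parameters in GeV.
MHMAX = dict(MSUSY=1000.0, Xt_over_MSUSY=2.0, mu=200.0, M2=200.0,
             mgl_over_MSUSY=0.8)
NOMIX = dict(MSUSY=2000.0, Xt_over_MSUSY=0.0, mu=200.0, M2=200.0,
             mgl_over_MSUSY=0.8)

def resolve(scenario, tan_beta, mu=None):
    """Return derived soft parameters; mu may be overridden as below."""
    s = dict(scenario)
    if mu is not None:
        s["mu"] = mu
    s["Xt"] = s["Xt_over_MSUSY"] * s["MSUSY"]
    s["At"] = s["Xt"] + s["mu"] / tan_beta   # X_t = A_t - mu/tan(beta)
    s["Ab"] = s["At"]
    s["mgluino"] = s["mgl_over_MSUSY"] * s["MSUSY"]
    s["M1"] = 5.0 * SW2 / (3.0 * (1.0 - SW2)) * s["M2"]  # GUT relation
    return s

print(resolve(MHMAX, tan_beta=30.0, mu=-1000.0))
\end{verbatim}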
The value of the top-quark mass in \citere{benchmark2} was chosen
according to the experimental central value at that time. For our
numerical analysis below, we use
the value, $m_{t} = 175 \,\, \mathrm{GeV}$, see \refse{sec:exp}.
Using the current value of $m_{t} = 172.6 \,\, \mathrm{GeV}$~\cite{mt1726}
would lead to a small shift of the discovery contours right at
threshold, but is insignificant for the qualitative results of this
analysis.
In \citere{benchmark3} it was suggested that in the search for heavy
MSSM Higgs bosons the $m_h^{\rm max}$ and no-mixing scenarios, which originally
were mainly designed for the search for the light ${\cal CP}$-even Higgs boson
$h$, should be extended by several discrete values of $\mu$ (see below),
\begin{equation}
\mu = \pm 200, \pm 500, \pm 1000 \,\, \mathrm{GeV} ~.
\label{eq:variationmu}
\end{equation}
In our analyses here we focus on $\mu = \pm 200, \pm 1000 \,\, \mathrm{GeV}$.
\bigskip
For the calculation of cross sections and branching ratios we use a
combination of up-to-date theory evaluations. The
interaction of the charged Higgs boson with the $t/b$~doublet can be
expressed in terms of an effective Lagrangian~\cite{deltamb2},
\begin{equation}
\label{effL}
{\cal L} = \frac{g}{2M_W} \frac{\overline{m}_b}{1 + \Delta_b} \Bigg[
\sqrt{2} \, V_{tb} \, \tan \beta \; H^+ \bar{t}_L b_R \Bigg] + {\rm h.c.}
\end{equation}
Here $\overline{m}_b$ denotes the running bottom quark mass including SM QCD
corrections.
The prefactor $1/(1 + \Delta_b)$ in \refeq{effL} arises from the
resummation of the leading $\tan \beta$-enhanced corrections to all orders.
The explicit
form of $\Delta_b$ in the limit of heavy SUSY masses and $\tan \beta \gg 1$
reads~\cite{deltamb1}
\begin{equation}
\Delta_b = \frac{2\alpha_s}{3\,\pi} \, m_{\tilde{g}} \, \mu \, \tan \beta \,
\times \, I(m_{\tilde{b}_1}, m_{\tilde{b}_2}, m_{\tilde{g}}) +
\frac{\alpha_t}{4\,\pi} \, A_t \, \mu \, \tan \beta \,
\times \, I(m_{\tilde{t}_1}, m_{\tilde{t}_2}, |\mu|) ~.
\label{def:dmb}
\end{equation}
Here $m_{\tilde{t}_1}$, $m_{\tilde{t}_2}$, $m_{\tilde{b}_1}$, $m_{\tilde{b}_2}$ denote the $\tilde{t}$ and
$\tilde{b}$~masses. $\alpha_s$ is the strong coupling
constant, while $\alpha_t \equiv h_t^2 / (4 \pi)$ is defined via the top
Yukawa coupling. The analytical expression for $I(\ldots)$ can be found
in \citere{benchmark3}.
Large negative values of $(\mu\,m_{\tilde{g}})$ and $(\mu\,A_t)$ (it should be
noted that both
benchmark scenarios have positive $m_{\tilde{g}}$ and $A_t$) can lead to a
strong enhancement of the
$H^\pm t b$ coupling, while large positive values lead to a strong
suppression.
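To illustrate the size of these effects, the Python sketch below
evaluates $\Delta_b$ according to \refeq{def:dmb} and the resulting factor
$1/(1+\Delta_b)$ rescaling the $H^\pm tb$ coupling. We assume here the
standard form of the auxiliary function $I(a,b,c)$ quoted in
\citere{benchmark3}; the couplings and sfermion masses inserted at the
bottom are illustrative numbers only and are not meant to reproduce a
particular benchmark point.
\begin{verbatim}
import math

def loop_I(a, b, c):
    """Auxiliary function I(a, b, c); degenerate masses need the limit."""
    a2, b2, c2 = a * a, b * b, c * c
    num = (a2 * b2 * math.log(a2 / b2) + b2 * c2 * math.log(b2 / c2)
           + c2 * a2 * math.log(c2 / a2))
    return num / ((a2 - b2) * (b2 - c2) * (a2 - c2))

def delta_b(alpha_s, alpha_t, mu, tan_beta, m_gl, m_sb1, m_sb2,
            m_st1, m_st2, a_t):
    gluino = (2.0 * alpha_s / (3.0 * math.pi) * m_gl * mu * tan_beta
              * loop_I(m_sb1, m_sb2, m_gl))
    higgsino = (alpha_t / (4.0 * math.pi) * a_t * mu * tan_beta
                * loop_I(m_st1, m_st2, abs(mu)))
    return gluino + higgsino

# Illustrative inputs only (masses in GeV):
for mu in (-1000.0, 1000.0):
    db = delta_b(alpha_s=0.1, alpha_t=0.07, mu=mu, tan_beta=40.0,
                 m_gl=800.0, m_sb1=1000.0, m_sb2=1010.0,
                 m_st1=800.0, m_st2=1200.0, a_t=2000.0 + mu / 40.0)
    print(f"mu = {mu:+7.1f} GeV: Delta_b = {db:+.3f}, "
          f"1/(1+Delta_b) = {1.0 / (1.0 + db):.2f}")
\end{verbatim}
As expected, negative $\mu$ drives $\Delta_b$ negative and enhances the
coupling, while positive $\mu$ suppresses it.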
Concerning the $m_h^{\rm max}$ and the no-mixing benchmark scenarios,
as discussed in \citeres{cmsHiggs,benchmark3} the $\Delta_b$ effects are
much more pronounced in the $m_h^{\rm max}$ scenario, where the two terms in
\refeq{def:dmb} are of similar size. In the no-mixing scenario the first
term in \refeq{def:dmb} dominates, while the second term is small. A
further suppression is caused by the larger value of $M_{\rm SUSY}$ (see
\refeq{nomix}) in comparison with the $m_h^{\rm max}$
scenario. Consequently, the total effect of $\Delta_b$ is smaller in the
no-mixing scenario (see also the discussion in \citere{benchmark3}).
For the production cross section in \refeq{pp2Hpm} we use the SM cross
section $\sigma(pp \to t \bar t) = 840~\rm{pb}$~\cite{sigmatt}%
\footnote{
The corresponding SUSY corrections are small~\cite{sigmattSUSY} and have
been neglected.
}%
~times the ${\rm BR}(t \to H^\pm\, b)$ including the $\Delta_b$ corrections
described above.
The production cross section in \refeq{gb2Hpm} is evaluated as given in
\citeres{HpmXSa,HpmXSb}. In addition also the $\Delta_b$ corrections of
\refeq{effL} are applied. Finally the ${\rm BR}(H^\pm \to \tau \nu_\tau)$ is
evaluated taking into account all decay channels, among which the most
relevant are $H^\pm \to tb, cs, W^{(*)}h$. Also possible decays to
SUSY particles are taken into account. For the decay to $tb$ again
the $\Delta_b$ corrections are included.
All the numerical evaluations are performed with the program
{\tt FeynHiggs}~\cite{feynhiggs,mhiggslong,mhiggsAEC,mhcMSSMlong}, see
also \citere{mhcMSSM2L}.
\section{Numerical analysis}
\label{sec:numanal}
The numerical analysis has been performed in the $m_h^{\rm max}$~and the
no-mixing scenarios~\cite{benchmark2,benchmark3} for
$\mu = -1000, -200, +200, +1000 \,\, \mathrm{GeV}$.
We separately present the results for the light and the heavy charged
Higgs and finally compare with the results in the CMS PTDR, where the
results had been obtained fixing $\mu = +200 \,\, \mathrm{GeV}$ and neglecting the
$\Delta_b$ corrections, as well as neglecting the charged Higgs-boson decays
to SUSY particles.
\subsection{The light charged Higgs boson}
In \reffi{fig:reachlight} we show the
results for the $5\,\sigma$ discovery contours for the light
charged Higgs boson, corresponding to the experimental
analysis in \refse{sec:lightHpm}, where the charged Higgs boson
discovery will be possible in the areas above the curves shown in
\reffi{fig:reachlight}.
As described above, the experimental analysis was performed for the
CMS detector and 30~$\mbox{fb}^{-1}$. The top quark mass is set to $m_{t} = 175 \,\, \mathrm{GeV}$.
The thick (thin) lines correspond to positive (negative) $\mu$, and the
solid (dotted) lines have $|\mu| = 1000 (200) \,\, \mathrm{GeV}$.
The curves stop at $\tan \beta = 60$, where we stopped the evaluation of
production cross section and branching ratios. For negative $\mu$ very
large values of $\tan \beta$ result in a strong enhancement of the bottom
Yukawa coupling, and for $\Delta_b \to -1$ the MSSM enters a non-perturbative
regime, see \refeq{effL}.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.45\textwidth]{mhmax_lightChH_MHP.eps}\hspace{1em}
\includegraphics[width=0.45\textwidth]{nomix_lightChH_MHP.eps}
\caption{%
Discovery reach for the light charged Higgs boson of CMS with 30~$\mbox{fb}^{-1}$\ in the
$M_{H^\pm}$--$\tan \beta$~plane for the $m_h^{\rm max}$~scenario (left) and the no-mixing
scenario (right).
}
\label{fig:reachlight}
\end{center}
\end{figure}
Within the $m_h^{\rm max}$ scenario, shown in the left plot of
\reffi{fig:reachlight}, the search for the light charged Higgs boson covers
the area of large $\tan \beta$ and $M_{H^\pm} \lsim 130 \ldots 160 \,\, \mathrm{GeV}$.
The variation with
$\mu$ induces a strong shift in the $5\,\sigma$ discovery contours. This
corresponds to a shift in $\tan \beta$ of
$\Delta\tan \beta = 15$ for $M_{H^\pm} \lsim 110 \,\, \mathrm{GeV}$, rising up to $\Delta\tan \beta = 40$ for
larger $M_{H^\pm}$ values. The discovery region is largest (smallest) for
$\mu = -(+)1000 \,\, \mathrm{GeV}$, corresponding to the largest (smallest)
production cross section.
The results for the no-mixing scenario are shown in the right plot of
\reffi{fig:reachlight}. The effects of the variation of $\mu$ are much
less pronounced in this scenario, as discussed in \refse{sec:theo}, due
to the smaller
absolute value of $\Delta_b$ (see also the corresponding analysis for neutral
heavy Higgs bosons in \citere{cmsHiggs}). The shift in $\tan \beta$ for
$M_{H^\pm} = 110 \,\, \mathrm{GeV}$ is about $\Delta\tan \beta = 5$ going from $\mu = -1000 \,\, \mathrm{GeV}$ to
$+1000 \,\, \mathrm{GeV}$.
For $\tan \beta = 60$ (where we stop our analysis) the covered $M_{H^\pm}$ values
range from $150 \,\, \mathrm{GeV}$ to $164 \,\, \mathrm{GeV}$.
In this charged Higgs boson mass range for the considered benchmark
scenarios no decay channels into SUSY particles are open, i.e.\ the
observed effects are all due to higher-order corrections, in particular
associated with~$\Delta_b$.
\subsection{The heavy charged Higgs boson}
In \reffi{fig:reachheavy} we show the
results for the $5\,\sigma$ discovery contours for the heavy
charged Higgs boson, corresponding to the experimental
analysis in \refse{sec:heavyHpm}. The Higgs boson discovery will be
possible in the areas above the curves.%
\footnote{
An analysis in other benchmark scenarios that are in
agreement with the cold dark matter density constraint imposed by WMAP
and other cosmological data~\cite{WMAP} can be found in \citere{ehhow}.}%
~As before, the experimental analysis was performed for the
CMS detector and 30~$\mbox{fb}^{-1}$. The top quark mass is set to $m_{t} = 175 \,\, \mathrm{GeV}$.
The thick (thin) lines correspond to positive (negative) $\mu$, and the
solid (dotted) lines have $|\mu| = 1000 (200) \,\, \mathrm{GeV}$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.45\textwidth]{mhmax_heavyChH_MHP.eps}\hspace{1em}
\includegraphics[width=0.45\textwidth]{nomix_heavyChH_MHP.eps}
\caption{%
Discovery reach for the heavy charged Higgs boson of CMS with 30~$\mbox{fb}^{-1}$\ in the
$M_{H^\pm}$--$\tan \beta$~plane for the $m_h^{\rm max}$~scenario (left) and the no-mixing
scenario (right).
}
\label{fig:reachheavy}
\end{center}
\end{figure}
The $5\,\sigma$ discovery regions for the search for heavy charged Higgs
bosons in the $m_h^{\rm max}$ scenario are shown in the left plot of
\reffi{fig:reachheavy}. For $M_{H^\pm} = 170 \,\, \mathrm{GeV}$, where the experimental
analysis stops, we find a strong variation
in the accessible parameter space for $\mu = -(+)1000 \,\, \mathrm{GeV}$ of $\Delta\tan \beta = 40$.
It should be noted in this context that close to threshold, where both
production mechanisms, \refeqs{pp2Hpm} and (\ref{gb2Hpm}), contribute,
the theoretical
uncertainties are somewhat larger than in the other regions.
For $M_{H^\pm} = 300 \,\, \mathrm{GeV}$ the variation in the $5\,\sigma$ discovery contours
goes from $\tan \beta = 38$ to $\tan \beta = 54$. For $\mu = -1000 \,\, \mathrm{GeV}$ and larger
$\tan \beta$ values the bottom Yukawa coupling becomes so large
that a perturbative treatment would no longer be reliable in this
region, and correspondingly we do not continue the respective curve(s).
The shape of the $\mu = +1000 \,\, \mathrm{GeV}$ curve has a local minimum at
$M_{H^\pm} \approx 300 \,\, \mathrm{GeV}$ that is not (or only very weakly) present in the other
curves, and that is also not visible in the original CMS analysis in
\citere{heavyHexp} (obtained for $\mu = +200 \,\, \mathrm{GeV}$, but neglecting the
$\Delta_b$ effects). The reason for the local minimum can be traced back to
the strongly improved experimental efficiency going from
$M_{H^\pm} = 200 \,\, \mathrm{GeV}$ to $300 \,\, \mathrm{GeV}$, see \refta{tab:heavyHp}. The better
efficiency at $M_{H^\pm} = 300 \,\, \mathrm{GeV}$ corresponds to a lower required cross
section ($\propto \tan^2 \beta\hspace{1mm}$) and/or a lower ${\rm BR}(H^\pm \to \tau \nu_\tau)$
to obtain the same number of signal events.
On the other hand, going from $M_{H^\pm} = 200 \,\, \mathrm{GeV}$ to $300 \,\, \mathrm{GeV}$ this effect
is in most cases overcompensated by a decrease of the cross
section due to the increase in $M_{H^\pm}$. The overcompensation results in
an increase in $\tan \beta$ for the higher $M_{H^\pm}$ value.
For $\mu = +1000 \,\, \mathrm{GeV}$, however, $\Delta_b$ is very large,
suppressing strongly the charged Higgs production cross section as well
as the ${\rm BR}(H^\pm \to tb)$. The overall effect is a somewhat better
reach in $\tan \beta$ for $M_{H^\pm} = 300 \,\, \mathrm{GeV}$ than for $M_{H^\pm} = 200 \,\, \mathrm{GeV}$.
In comparison with the analysis of \citere{benchmark3}, based on the
older CMS analysis given in \citere{heavyHexpold}, several differences
can be observed. The feature of the local minimum is absent in
\citere{benchmark3}, the variation of the $5\,\sigma$ discovery contours
with $\mu$ is weaker, and the effect of the decay of the charged Higgs
boson to a chargino and a neutralino is more pronounced in
\citere{benchmark3}. The reason for these differences is the strongly
reduced discovery region in the new CMS analysis~\cite{heavyHexp}
employed here as compared to the old CMS analysis~\cite{heavyHexpold}
used in \citere{benchmark3}. The reach in $\tan \beta$ is worse by
$\sim 15 (30)$ for $M_A = 200 (400) \,\, \mathrm{GeV}$ in the new analysis.%
\footnote{
The old analysis uses $\mu = -200 \,\, \mathrm{GeV}$~\cite{heavyHexpold}, while the
new analysis set $\mu = +200 \,\, \mathrm{GeV}$~\cite{heavyHexp}. However, since the
$\Delta_b$ corrections are neglected in \citeres{heavyHexpold,heavyHexp},
the effect on the discovery regions should be small.
}%
~Thus, at the substantially worse (i.e.\ higher) $\tan \beta$ values employed
here the $\Delta_b$ effects are more pronounced, leading to the local minimum
for $\mu = +1000 \,\, \mathrm{GeV}$ and to a larger absolute variation in $\tan \beta$ with the
size and the sign of $\mu$, see \refse{sec:theo}.
In the high $\tan \beta$ region furthermore the $\Delta_b$ effects dominate over the
impact of the decay of the charged Higgs to charginos and neutralinos.
As an example, for $\mu = +200 \,\, \mathrm{GeV}$ and $M_{H^\pm} = 400 \,\, \mathrm{GeV}$ the old
analysis in \citere{benchmark3} found that the discovery region starts
at $\tan \beta = 32$, where ${\rm BR}(H^\pm \to \cha{}\neu{}) \approx 15\%$.
Here we find that the discovery region starts at $\tan \beta = 64$, where
${\rm BR}(H^\pm \to \cha{}\neu{}) \approx 3\%$.
The no-mixing scenario is shown in the right plot of
\reffi{fig:reachheavy}. The features are the same as in the $m_h^{\rm max}$
scenario. However, due to the smaller size of $|\Delta_b|$, see
\refse{sec:theo}, they are much less pronounced. The variation in $\tan \beta$
stays at or below the level of $\Delta\tan \beta = 10$ for the whole range of
$M_{H^\pm}$.
\subsection{Comparison with the CMS PTDR}
In \reffi{fig:reach} we show the
combined results for the $5\,\sigma$ discovery contours for the light and
the heavy charged Higgs boson, corresponding to the experimental
analyses in the $m_h^{\rm max}$ scenario as presented in the two previous
subsections. They are compared with the results presented in the CMS
PTDR~\cite{cmstdr}. Contrary to the previous sections, we now show the
$5\,\sigma$ discovery contours in the $M_A$--$\tan \beta$ plane.
The thick (thin) lines correspond to positive (negative) $\mu$, and the
solid (dotted) lines have $|\mu| = 1000 (200) \,\, \mathrm{GeV}$. The thickened
dotted (red/blue) lines represent the CMS PTDR results, obtained for
$\mu = +200 \,\, \mathrm{GeV}$ and neglecting the $\Delta_b$ effects.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.60\textwidth]{mhmax_ChHvsPTDR.eps}
\caption{%
Discovery reach for the charged Higgs boson of CMS with 30~$\mbox{fb}^{-1}$\ in the
$M_A$--$\tan \beta$~plane for the $m_h^{\rm max}$~scenario for
$\mu = \pm 200, \pm 1000 \,\, \mathrm{GeV}$ in comparison with the results from the CMS
PTDR (thickened dotted (red and blue) lines), obtained for
$\mu = +200 \,\, \mathrm{GeV}$ and neglecting the $\Delta_b$ effects.
}
\label{fig:reach}
\end{center}
\end{figure}
Apart from the variation in the $5\,\sigma$ discovery contours with the
size and the sign of $|\mu|$, two differences can be observed in the
comparison of the PTDR results to the new results obtained here, i.e.\
including the $\Delta_b$ corrections in the production and decay of the
charged Higgs boson as well as taking the decay to SUSY particles into
account.
For the light charged Higgs analysis the discovery contours are now
shifted to smaller $M_A$ values, for negative $\mu$ even ``bending over''
for larger $\tan \beta$ values. The reason is the more complete inclusion of
higher-order corrections (full one-loop and leading \order{\alpha_t\alpha_s}
two-loop) to the relation between $M_A$ and
$M_{H^\pm}$~\cite{mhcMSSMlong,mhcMSSM2L}.
The second feature is a small gap between the light and the heavy
charged Higgs analyses, while in the PTDR analysis all charged Higgs
masses could be accessed. The gap can be observed best by comparing the
$m_h^{\rm max}$ scenario in \reffis{fig:reachlight} and \ref{fig:reachheavy}.
This gap is largest for $\mu = +1000 \,\, \mathrm{GeV}$ and smallest for
$\mu = -1000 \,\, \mathrm{GeV}$, where it amounts to only $\sim 5 \,\, \mathrm{GeV}$.
Possibly the heavy charged Higgs analysis strategy exploiting the fully
hadronic final state can be extended to smaller $M_A$ values to
completely close the gap.
For the interpretation of \reffi{fig:reach} it should be kept in mind
that the accessible area in the heavy Higgs analysis also ``bends over''
to smaller $M_A$ values for larger $\tan \beta$, thus decreasing the visible
gap in \reffi{fig:reach}.
\section{Conclusions}
We have studied the variation of the $5\,\sigma$ discovery contours for the
search for the charged MSSM Higgs boson with the SUSY parameters.
We combine the latest results for the
CMS experimental sensitivities based on full simulation studies with
state-of-the-art theoretical predictions of MSSM Higgs-boson properties.
The experimental analyses are done assuming an integrated luminosity of
30~$\mbox{fb}^{-1}$\ for the two cases, $M_{H^\pm} < m_{t}$ and $M_{H^\pm} > m_{t}$.
The numerical analysis has been performed in the $m_h^{\rm max}$~and the
no-mixing scenarios for $\mu = \pm 200, \pm 1000 \,\, \mathrm{GeV}$.
The impact of the variation of $\mu$ enters in particular via the
higher-order correction $\Delta_b$, affecting
the charged Higgs production cross section and branching ratios. Also
the decays of the charged Higgs boson to SUSY particles have been taken
into account.
As a general feature, large negative $\mu$ values give the largest
reach, while large positive values yield the smallest $5\,\sigma$ discovery
areas.
The search for the light charged Higgs boson covers the area of
large $\tan \beta$ and $M_{H^\pm} \lsim 160 \,\, \mathrm{GeV}$.
The variation with $\mu$ within the $m_h^{\rm max}$ scenario induces a strong
shift in the $5\,\sigma$ discovery contours with
$\Delta\tan \beta = 15$ for $M_{H^\pm} = 100 \,\, \mathrm{GeV}$, rising up to $\Delta\tan \beta = 40$ for
larger $M_{H^\pm}$ values. The discovery region is largest (smallest) for
$\mu = -(+)1000 \,\, \mathrm{GeV}$, corresponding to the largest (smallest)
production cross section. The effects are similar, but much less
pronounced, in the no-mixing scenario.
The search for the heavy charged Higgs boson reaches up to $M_{H^\pm} \lsim
400 \,\, \mathrm{GeV}$ for large $\tan \beta$.
Within the $m_h^{\rm max}$ scenario the variation of $\mu$ induces a very
strong shift in the $5\,\sigma$ discovery contours of up to $\Delta\tan \beta = 40$
for $M_{H^\pm} \gsim m_{t}$. As in the light charged Higgs case, within the
no-mixing scenario the effects show the same qualitative behavior, but
are much less pronounced.
Combining the search for the light and the heavy charged Higgs boson, we
find a small gap, while in the CMS Physics Technical Design Report
analysis all charged Higgs masses could be accessed.
Possibly the heavy charged Higgs analysis strategy exploiting the fully
hadronic final state can be extended to smaller $M_A$ values to
completely close the gap. This issue deserves further studies.
\subsection*{Acknowledgements}
The work of S.H.\ was partially supported by CICYT (grant FPA~2007--66387).
Work supported in part by the European Community's Marie-Curie Research
Training Network under contract MRTN-CT-2006-035505
`Tools and Precision Calculations for Physics Discoveries at Colliders'.
{\color{red} \section*{Introduction}}
Ultracold molecules and in particular ultracold polar molecules are at the forefront of precision spectroscopy, sensing, controlled studies of chemical reactions,
quantum many-body physics, and quantum computing\cite{Carr_Review2009,OspelkausExpKKRb2010,bala2018,PerreaultScience2017,Rvachov2017,Rui_ExpNaK2017,PerreaultNatChem2018,croftbalahua2018,Ye_ExpNaRb2018,Hu_KRb2019}.
Polar molecules comprised of heteronuclear alkali metal dimers such as KRb, NaK, NaRb and LiNa have attracted considerable attention in recent years in controlled studies of chemical reactions\cite{OspelkausExpKKRb2010,Rvachov2017,Rui_ExpNaK2017,Ye_ExpNaRb2018,Hu_KRb2019}.
Electronically non-adiabatic effects are expected to play an important role in atom-dimer reactions involving these molecules.
The reactions proceed along a barrierless reaction pathway into a deep attractive potential well.
A conical intersection (CI) occurs between the ground electronic state and the first excited doublet electronic state within the attractive well region
and this CI is energetically accessible even for collision energies in the ultracold limit for ground state reactants.
Thus, a non-adiabatic quantum mechanical treatment is required that includes both electronic states.
Explicit quantum calculations for these reactions remain a formidable challenge even for dynamics on a single Born-Oppenheimer adiabatic electronic potential energy
surface (PES)\cite{makrides2015,croftNatCom2017,croftPhysRev2017}.
Fortunately, we have recently developed a new quantum reactive scattering methodology that has made it possible to treat non-adiabatic ultracold reactions occurring on
two coupled electronic states for the first time\cite{Kendrick2018nonad}.
In this work, we present a first principles full-dimensional quantum dynamics study of non-adiabatic effects in the
Li + LiNa($v=0$, $j=0$) $\to$ Li$_2$($v'$, $j'$) + Na reaction.
The rotationally resolved rate coefficients are computed as a function of collision energy from $1\,{\rm nK}$ to $10\,{\rm K}$ using a coupled two-state diabatic electronic representation\cite{Kendrick2018nonad,Kendrick2018nonadHH2,Kendrick2019nonadHHD}.
The non-adiabatic results are compared to a conventional Born-Oppenheimer calculation based on a single adiabatic electronic PES.
Both of these calculations are also compared to a universal model which is based on a simple one-dimensional reaction path consisting of a long-range van der Waals (C$_6$) potential\cite{C6_Hui2019}.
Quantum interference between the two reaction pathways which encircle the CI is shown to significantly enhance or suppress the rate coefficients at ultracold collision energies (i.e., $E_c < 1\,{\rm mK}$).
The geometric phase (GP) which is included in the non-adiabatic calculations reverses the nature of the quantum interference from constructive to destructive and vice
versa\cite{natcom2015,PRL2015,Hazra2015HO2}.
Thus, the non-adiabatic ultracold rate coefficients are significantly enhanced or suppressed relative to the conventional Born-Oppenheimer rates coefficients when quantum interference effects are significant.
The quantum dynamics calculations are based on accurate {\it ab initio} electronic PESs which are computed for both the ground and first excited states for the first time.
A state-of-the-art electronic structure code (MOLPRO) is used to compute the electronic PESs and the non-adiabatic coupling elements\cite{molpro}.
Strong fluctuations are observed in the rotationally resolved rate coefficient distributions.
A statistical analysis of these fluctuations reveals that they are Poissonian which is consistent with an underlying classically chaotic dynamics\cite{croftNatCom2017,croftPhysRev2017}.
The Poisson distributions are shown to be robust with respect to variations in the PES and chemical system and therefore appear to be a universal property of these types of reactions that proceed through a potential well.
{\color{red} \section*{Results}}
\subsection*{Potential Energy Surfaces for Li$_2$Na}
The Born-Oppenheimer electronic PESs are plotted in Fig.~\ref{fig01} for both the ground and first excited electronic states of the Li$_2$Na molecule.
These surfaces are computed in full dimensionality (i.e., as a function of all three bond lengths) from first principles (see {\bf Materials and Methods} for details)\cite{molpro}.
The PESs in Fig.~\ref{fig01} are two-dimensional slices plotted for a fixed Li$_2$ bond length of $6.25\,{\rm a}_{\rm 0}$ (close to its equilibrium bond length)
and show the topology of the effective interaction potential experienced by the Na nucleus in the vicinity of Li$_2$.
Notable features include the two deep attractive wells (blue colored regions) on the ground state surface (black contours) and the inverted cone of the excited electronic state (red contours).
All energies are reported relative to the bottom of the asymptotic potential well for the Li$_2$ + Na product channel.
The minimum energy of the symmetric potential wells is $-5\,814\,{\rm K}$
(see the thick solid black curve in Fig.~S1 in {\bf Supplementary Materials}).
The ground and excited state PESs exhibit a conical intersection for T-shaped (i.e., $C_{2v}$) geometries (see Fig.~\ref{fig01} inset).
The minimum energy of the conical intersection is $-3\,140\,{\rm K}$ (see the thick solid red curve in Fig.~S1).
The asymptotic energy of the Li + LiNa($v=0$, $j=0$) reactant channel is shown by the thick black contour line at $2\,228\,{\rm K}$ (see also the thick horizontal dashed line in Fig.~S1).
From Fig.~\ref{fig01} we see that for ultracold collisions of Li with LiNa in its ground vibrational and rotational state, {\it both} the ground and excited electronic states are energetically accessible in the interaction region.
Thus, both electronic states and the couplings between them must be included in the quantum dynamics calculations (see {\bf Materials and Methods} for details)\cite{Kendrick2018nonad}.
These couplings include the GP associated with the conical intersection shown in Fig.~\ref{fig01}.
As discussed in detail in the following section, the GP can lead to a dramatic enhancement or suppression of the ultracold rotationally
resolved rate coefficients\cite{natcom2015,PRL2015,Hazra2015HO2}.
We note that a traditional GP calculation\cite{natcom2015,PRL2015,Hazra2015HO2}
(which is computationally more feasible) on the ground adiabatic electronic state is not applicable for this system since the CI is located {\it below} the energy of the incident channel.
\subsection*{Rotationally Resolved Rate Coefficients as a function of collision energy}
Figure \ref{fig02} plots a representative rotationally resolved rate coefficient for the Li + LiNa($v=0$, $j=0$) $\to$ Li$_2$($v'=3$, $j'=5$) + Na reaction as a function of collision energy from $1\,{\rm nK}$ to $10\,{\rm K}$.
Unless otherwise stated, all rate coefficients include the appropriate nuclear spin statistical factors of 2/3 and 1/3 for even and odd exchange symmetry (associated with the two identical $^6$Li nuclei), respectively.
At ultracold collision energies ($< 1\,{\rm mK}$), only a single partial wave (i.e., $l=0$ where $l$ is the orbital angular momentum of Li about LiNa) contributes to the collision and the rate coefficient approaches a finite constant (often referred to as the Wigner regime)\cite{WignerLimit,balaThresh97}.
The specific values of the ultracold rate coefficients require exact quantum mechanical calculations on accurate PESs and are computationally demanding (see {\bf Materials and Methods} for details).
The red curve in Fig.~\ref{fig02} is from the coupled two-diabatic electronic states calculation ($2\times 2$) and the black curve is from the calculation on a single adiabatic ground electronic state which does {\it not} include the GP (denoted as NGP for No GP).
We see that in the ultracold limit the $2\times 2$ rate coefficient (red) is significantly enhanced ($\approx 50\times$) relative to the NGP one (black).
The enhancement is due to constructive quantum interference between the direct and looping contributions to the total scattering amplitude\cite{natcom2015}.
The GP associated with the conical intersection shown in Fig.~\ref{fig01} changes the sign of the interference term and hence the nature of the quantum interference from destructive to constructive and vice versa (for more details see the discussion and Eqs.~1 - 6 in
{\bf Supplementary Materials})\cite{natcom2015,PRL2015,meadtruhlar79,mead80H3,berry84,kendrick2003,althorpe2005,zygelman2017,izmaylov2017,Xi2017,GPexp2018,GPexp2020}.
Furthermore, due to the unique properties of ultracold collisions, according to Levinson's theorem\cite{Levinson1953}
the scattering phase shifts preferentially approach an integral multiple of $\pi$.
Thus, the quantum interference often approaches its maximal values effectively turning the reaction on or off (i.e., a quantum switch!)\cite{natcom2015,PRL2015}.
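The switching mechanism can be illustrated with a minimal two-path model
(a schematic sketch only; the precise definitions of the direct and
looping amplitudes are those of Eqs.~1 - 6 in the {\bf Supplementary
Materials}, and the complex numbers below are arbitrary illustrative
values). The GP simply flips the relative sign between the two
contributions:
\begin{verbatim}
import numpy as np

def rate(f_direct, f_loop, extra_gp_sign):
    """|f_d +/- f_l|^2: the GP changes the sign of the cross term."""
    sign = -1.0 if extra_gp_sign else +1.0
    return abs(f_direct + sign * f_loop) ** 2

# Nearly equal magnitudes and phases, as favoured in the ultracold
# (single partial wave) limit where phase shifts approach multiples of pi:
f_d = 1.00 * np.exp(1j * 0.10)
f_l = 0.95 * np.exp(1j * 0.15)

for gp in (False, True):
    print(f"extra GP sign: {gp!s:5}  relative rate = {rate(f_d, f_l, gp):.3f}")
\end{verbatim}
Which of the two signs corresponds to the GP-included case depends on the
encirclement convention; the point is that including the GP interchanges
constructive and destructive interference.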
The total rate coefficients summed over all final vibrational and rotational states of Li$_2$ are plotted in Fig.~S2.
The GP effects tend to wash out in the sum over final states so that the $2\times 2$ and NGP total ultracold rate coefficients are similar in magnitude (i.e., ${\rm K}_{\rm 2\times 2}/{\rm K}_{NGP}\approx 1.05$ at $E_c = 1.0\,{\rm nK}$).
Interestingly, both the $2\times 2$ and NGP ultracold rate coefficients lie below the universal value (i.e., ${\rm K}_{\rm 2\times 2}/{\rm K}_{\rm univ}\approx 0.89$ and
${\rm K}_{\rm NGP}/{\rm K}_{\rm univ}\approx 0.85$ at $E_c = 1.0\,{\rm nK}$).
The universal rate coefficient is computed using a simple one-dimensional model based only on the long-range $C_6$ potential along the reaction path and ignores all reflections\cite{C6_Hui2019}.
Thus, the smaller $2\times 2$ and NGP rates are most likely due to non-reactive (elastic) reflections that are included in the exact quantum mechanical calculations.
The sensitivity of the rate coefficients to the accuracy of the PES was also investigated.
Fig.~S3 plots the total $2\times 2$ and NGP rate coefficients as a function of a scaling parameter $\lambda$ for the PES.
The 3-body contribution to the PES is multiplied by $\lambda$ whereas the 2-body (pairwise) interaction potentials are left unchanged.
This ensures that the asymptotic energies and long-range interactions are unchanged and that only the effective depth of the Li$_2$Na PES is altered.
The range in $\lambda$ (i.e., $\pm 3\,\%$) was chosen to reflect the estimated uncertainty in the {\it ab initio} computed 3-body interaction PES.
Results for $\lambda=1$ correspond to the unscaled PES.
The NGP total rate coefficient oscillates between $2.16\times 10^{-10}$ and $3.74\times 10^{-10}\,{\rm cm}^3/{\rm s}$ (i.e., by $-20\,\%$ and $+38\,\%$ relative to the unscaled NGP rate coefficient).
The $2\times 2$ total rate coefficient oscillates between $2.46\times 10^{-10}$ and $4.49\times 10^{-10}\,{\rm cm}^3/{\rm s}$ (i.e., by $-13\,\%$ and $+58\,\%$ relative to the unscaled $2\times 2$ rate coefficient).
Interestingly, the effect of PES scaling on the rotationally resolved rate coefficients is much larger due to sudden changes in the nature of the quantum interference around the CI.
An example is plotted in Fig.~S4 for the Li$_2$($v'=3$, $j'=5$) + Na product state which shows large sudden enhancements or suppression in the rate coefficients as a function of $\lambda$ (see Figs. S5, S6, and Eqs. 1 - 6 for additional details).
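Schematically, the scaling used in Figs.~S3 and S4 modifies only the
three-body part of the interaction, leaving the asymptotic energies and
the long-range behavior untouched. In the sketch below the two-body and
three-body terms are toy stand-ins for the fitted RKHS expressions (in
the actual surfaces the pairwise terms differ for Li$_2$ and LiNa):
\begin{verbatim}
def scaled_potential(r12, r23, r31, lam, v2body, v3body):
    """Scale only the three-body term by lambda."""
    return (v2body(r12) + v2body(r23) + v2body(r31)
            + lam * v3body(r12, r23, r31))

# Toy stand-ins for the fitted terms (illustrative only):
v2 = lambda r: 4.0 * ((2.0 / r) ** 12 - (2.0 / r) ** 6)   # pairwise
v3 = lambda a, b, c: -0.1 / (a * b * c)                   # three-body

for lam in (0.97, 1.00, 1.03):
    print(lam, scaled_potential(5.0, 5.0, 6.0, lam, v2, v3))
\end{verbatim}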
\subsection*{Ultracold Rate Coefficient Distributions}
All of the rotationally resolved rate coefficients are plotted in Fig.~\ref{fig03} at the ultracold collision energy of $1\,{\rm nK}$ for each final
vibrational product state of Li$_2$ from $v'=0$ to $3$.
The red and black rate coefficients (vertical bars) correspond to the $2\times 2$ and NGP calculations, respectively.
Many of the $2\times 2$ rate coefficients are significantly enhanced or suppressed relative to the NGP rate coefficients.
As discussed above, this effect is due to the GP which is included in the $2\times 2$ calculations but not in the NGP calculations.
The sign change associated with the GP alters the nature of the quantum interference and hence the magnitude of the rate coefficients.
For $v'=0$ (panel A) particularly large GP effects are seen in the product rotational states $j'=4$, $7$, $15$, $23$, $30$, $35-37$ and $41$ for which the $2\times 2$ rate coefficients are suppressed relative to the NGP ones.
In contrast, the product rotational states for $j'=24$, $34$, and $38$ show significantly enhanced $2\times 2$ rates coefficients.
For $v'=1$ (panel B) notably suppressed $2\times 2$ rate coefficients are observed for the product rotational states $j'=12$, $20$, $27$, and $35$ whereas notably enhanced $2\times 2$ rate coefficients occur for $j'=14$, $21$, $24$, $26$, $30$, $32$, and $33$.
For $v'=2$ (panel C) notably suppressed $2\times 2$ rate coefficients are observed for the product rotational state $j'=17$ whereas notably enhanced $2\times 2$ rate coefficients occur for $j'=1$, $11$, $24$, $27$, and $28$.
Finally, for $v'=3$ (panel D) notably suppressed $2\times 2$ rate coefficients are observed for the product rotational states $j'=4$, $8$, $9$, $13$, and $17$ whereas notably enhanced $2\times 2$ rate coefficients occur for $j'=3$, $5$, and $15$.
In summary, the magnitude of the GP effect on the ultracold rotationally resolved rate coefficients varies significantly across all values of the product ro-vibrational states of Li$_2$($v'$, $j'$).
Figure \ref{fig04} plots the normalized distributions $s=K/\langle K\rangle$ where $\langle K\rangle$ denotes the average value of the rate coefficients $K$ for a given data set.
The probability distributions are computed by binning the $K_{v'j'}$ into eight equally spaced intervals up to five times the average value.
Four normalized data sets are plotted. The red and black data points denote the $2\times 2$ and NGP rate coefficients, respectively.
The circles and squares correspond to the results of even and odd exchange symmetry.
The four data sets span all of the vibrational and rotational states shown in Fig.~\ref{fig03}.
For reference, the Poisson distribution ($e^{-s}$) is also plotted (solid black curve).
We see that on average all four data sets are consistent with the Poisson distribution.
Thus, a statistical analysis of the erratic looking rotational rate coefficient distributions of Fig.~\ref{fig03}
provides a unified description of all the results.
We note that the Poisson nature of the rotational distributions was also reported previously for the ultracold K + KRb reaction\cite{croftNatCom2017,croftPhysRev2017}.
This property appears to be very robust and is independent of the details of the PES and occurs for both the $2\times 2$ and NGP results.
For example, in Figs. S7 and S8 the Poisson distributions are plotted for 25 different values
of the PES scaling parameter $\lambda$ for each exchange symmetry even and odd, respectively.
The collective set of $100$ distributions are consistent with the Poisson distribution.
The K + KRb results\cite{croftNatCom2017,croftPhysRev2017} together with the present work confirm what appears to be a universal property of ultracold chemical reactions
with a potential well supporting long-lived complex formation: the rotationally resolved rate coefficient probability distributions are Poissonian.
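The statistical analysis underlying Fig.~\ref{fig04} is straightforward
to reproduce from any set of state-to-state rate coefficients. The
NumPy sketch below normalizes the rates, bins them as described above
(eight equal intervals up to five times the average), and compares with
$e^{-s}$; the input array is a random placeholder standing in for the
computed $K_{v'j'}$.
\begin{verbatim}
import numpy as np

def normalized_distribution(k_values, n_bins=8, s_max=5.0):
    """Bin s = K/<K> into n_bins equal intervals on [0, s_max]."""
    s = np.asarray(k_values) / np.mean(k_values)
    hist, edges = np.histogram(s, bins=n_bins, range=(0.0, s_max),
                               density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

# Placeholder data: exponentially distributed rates mimic the expected
# Poisson statistics; replace with the computed K_{v'j'} values.
rng = np.random.default_rng(1)
k_fake = rng.exponential(scale=2.0e-12, size=40)   # cm^3/s, illustrative

centers, hist = normalized_distribution(k_fake)
for s, p in zip(centers, hist):
    print(f"s = {s:4.2f}   P(s) = {p:5.3f}   exp(-s) = {np.exp(-s):5.3f}")
\end{verbatim}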
{\color{red}\section*{Discussion}}
Many ultracold chemical reactions under active experimental investigation, such as Li + LiNa $\to$ Li$_2$ + Na, K + NaK $\to$ K$_2$ + Na, KRb + KRb $\to$ K$_2$ + Rb$_2$,
and NaRb + NaRb $\to$ Na$_2$ + Rb$_2$\cite{Rvachov2017,Rui_ExpNaK2017,Ye_ExpNaRb2018,Hu_KRb2019}
have a barrierless reaction pathway and a deep attractive potential well.
In addition, they also exhibit a CI between the ground and first excited electronic states in the interaction region.
This CI is energetically accessible even for ultracold collisions involving reactant diatomic molecules
in their ground ro-vibrational state (e.g., LiNa($v=0$, $j=0$)).
Thus, an exact quantum mechanical calculation is required which includes both electronic states using accurate {\it ab initio} PESs\cite{Kendrick2018nonad}.
To the authors' knowledge, the first non-adiabatic calculations of this kind are reported in this work for the ultracold Li + LiNa($v=0$, $j=0$) $\to$ Li$_2$($v'$, $j'$) + Na
reaction.
Two reaction pathways (direct and looping) which encircle the CI contribute to the ultracold rate coefficients for the Li + LiNa reaction
and the resulting quantum interference between these two pathways can be constructive or destructive.
Due to the unique properties of ultracold collisions, the quantum interference often approaches its maximal values
which leads to a significantly enhanced or suppressed rate coefficient (i.e., the reaction is effectively turned on or off).
Furthermore, the GP associated with the CI changes the sign on the interference term which
reverses the nature of the quantum interference.
Thus, a non-adiabatic calculation which includes the excited electronic state and its associated GP is crucial for obtaining the correct theoretical prediction of the rate coefficients.
A conventional Born-Oppenheimer calculation based on a single adiabatic ground electronic state PES will give the opposite (incorrect) prediction whenever significant quantum interference occurs.
The novel quantum interference mechanism associated with ultracold collisions represents a realization of a molecular quantum switch.
The large dynamic range of this quantum switch might be exploited by experimentalists to control the reaction outcome via
the application of external fields and/or the selection of a particular initial quantum state\cite{natcom2015,PRL2015,Hazra2015HO2}.
The large quantum interference effects observed in the rotationally resolved rate coefficients mostly cancel out in the total rate coefficient summed over all product states.
The total ultracold rate coefficients for the non-adiabatic and adiabatic calculations differ by only $5\,\%$.
Interestingly, the ultracold rate coefficients from both sets of calculations lie about $10$ to $15\,\%$ below the universal value based on a
simple one-dimensional long-range (C$_6$) potential. This non-universal behavior suggests that non-reactive (i.e., elastic) reflections are significant.
In contrast, excellent agreement between exact quantum dynamics calculations and a universal model was reported for the K + KRb reaction\cite{croftNatCom2017}.
The rotationally resolved rate coefficient distributions are also shown to exhibit Poisson behavior.
The ${\bf S}$ matrix for open chaotic quantum systems obeys the statistics of unitary symmetric random matrices,
one consequence of which is the Poisson law behavior of the squares of off-diagonal matrix elements\cite{Blumel88,Dyson62}.
Since state-to-state rates are directly proportional to the square of the corresponding ${\bf S}$ matrix element, this Poisson law
behavior follows directly from the underlying classically chaotic motion of the reaction\cite{Honvault2000}.
Chaotic classical trajectories are extremely complicated and tangled for reactions with long-lived intermediate complexes;
as such, these results show that the ultracold LiNa + Li reaction proceeds via complex formation.
Such intermediate complexes can be observed experimentally using a combination of mass spectrometry and velocity map imaging,
as was recently demonstrated for the ultracold KRb + KRb $\to$ K$_2$ + Rb$_2$ reaction\cite{Hu_KRb2019}.
As shown explicitly in this work for the first time, the Poisson nature of these rotational distributions is robust to variations in the PES,
occurs for different chemical systems (i.e., both light Li$_2$Na and heavy KKRb\cite{croftNatCom2017})
and theoretical methods (i.e., both non-adiabatic ($2\times 2)$ and adiabatic (NGP)).
The robust and universal nature of the Poisson ultracold rotational rate coefficient distributions makes this property an ideal experimental observable.
We hope that the theoretical results presented in this work will help stimulate new experimental and theoretical studies into the intriguing
ultracold energy regime. The unique properties of ultracold collisions are still largely unexplored.
Ultracold molecules continue to show exceptional promise for future technological applications in quantum control, sensing and precision measurements.
{\color{red} \section*{Materials and Methods}}
\noindent
{\bf {Potential Energy Surfaces of LiNaLi}.}
Accurate and complete information on the PESs of the LiNaLi collisional complex is absent in the literature and their computation required substantial effort due to the complexity of this multi-electron open-shell system.
Our electronic structure calculations have been carried out with the MOLPRO program package \cite{molpro}. Core electron shells of Li and Na are described by the Stuttgart/Cologne energy-consistent, single-valence electron, relativistic pseudo-potentials (ECPs), ECP2SDF and
ECP10SDF\cite{fuentealba1983psd},
leaving only three valence electrons in the active space for explicit treatment.
The polarization of the effective cores and residual core-valence correlations are modeled via the {\it l}-independent core polarization potential
(CPP) with M{\"u}ller-Meyer damping functions \cite{muller1984treatment}. The CPP parameters, {i.e.} the static dipole polarizabilities of the atomic cores, $\alpha^{+}_{\rm c}$, are taken from Ref.~\cite{mitroy2010theory} and the cutoff functions with exponents $0.95$ a.u. and $0.82$ a.u. for Li and Na, respectively, are employed. Here, a.u. stands for atomic unit. Basis sets from Ref.~\cite{zuchowski2010reactions} describe the three valence electrons, specifically, uncontracted $sp$ basis sets augmented by additional $s$, $p$, $d$ and $f$ polarization functions are used for both Li and Na. The multi-configurational
self-consistent field (MCSCF) \cite{werner1985seconda,werner1985_2} method is first used to obtain configuration state functions (CSFs).
An MRCI (multi-reference configuration interaction) calculation is then performed using a large active space constructed from the CSFs, giving the three-dimensional adiabatic surfaces of the two lowest energy states for LiNaLi, $V_1$ and $V_2$, as functions of the three bond lengths. Nonadiabatic coupling matrix elements between these two electronic surfaces are computed at the same level of MRCI theory with the numerical finite-difference method (DDR procedure). For use in the reactive scattering calculations the
non-adiabatic coupling function is spatially integrated to generate the three-dimensional mixing angle $\beta$ \cite{Domcke2004}.
Finally, fitted global full-dimensional PESs were constructed from the {\it ab~initio} energies using the reproducing kernel Hilbert space (RKHS) technique \cite{Ho1996, Unke2017}.
\vspace {5mm}
\noindent
{\bf Non-adiabatic Quantum Dynamics}.
The non-adiabatic quantum dynamics calculations solve the time-independent two-state ($2\times 2$) diabatic Schr\"odinger equation for the nuclear motion given by\cite{Kendrick2018nonad}
\begin{equation}
\left [
\left (
\begin{array}{cc}
{\hat T} & 0 \\
0 & {\hat T} \\
\end{array}
\right )
+ \left (
\begin{array}{cc}
{\tilde V}_{11} & {\tilde V}_{12}\\
{\tilde V}_{21} & {\tilde V}_{22}\\
\end{array}
\right )
\right ]\,\left (
\begin{array}{c}
{\tilde \psi}_1\\
{\tilde \psi}_2\\
\end{array}
\right ) = E\,
\left (
\begin{array}{c}
{\tilde \psi}_1\\
{\tilde \psi}_2\\
\end{array}
\right )
\label{Diab2x2eq}
\end{equation}
where the first term in brackets in Eq. \ref{Diab2x2eq} is the diabatic kinetic energy operator
for the nuclear motion with matrix elements ${\hat T}={-\hbar^2\over 2\mu}\,\nabla^2$ where
$\nabla$ denotes the derivatives with respect to the six nuclear coordinates (three bond lengths and three Euler angles)
relative to the center of mass and $\mu$ is the three-body reduced mass.
The second term is the diabatic potential matrix $\tilde{\bf V}$ which is a function of the three bond lengths with matrix elements given by
\begin{align}
{\tilde V}_{11}&= V_1\,\cos^2\beta + V_2\,\sin^2\beta\, , \label{V11eq}\\
{\tilde V}_{22}&= V_2\,\cos^2\beta + V_1\,\sin^2\beta\, , \label{V22eq}\\
{\tilde V}_{12}&= {\tilde V}_{21} = (V_2 - V_1)\,\cos\beta\,\sin\beta , \label{V12eq}
\end{align}
where $V_1$ and $V_2$ are the adiabatic PESs and $\beta$ is their mixing angle as discussed above.
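The construction of $\tilde{\bf V}$ from the adiabatic surfaces and the
mixing angle, and the consistency check that its eigenvalues reproduce
the adiabats, can be summarized in a few lines (a NumPy sketch; in
practice $V_1$, $V_2$ and $\beta$ are evaluated from the fitted surfaces
at each nuclear geometry, and the values below are arbitrary):
\begin{verbatim}
import numpy as np

def diabatic_matrix(v1, v2, beta):
    """2x2 diabatic potential matrix from the adiabats and mixing angle."""
    c, s = np.cos(beta), np.sin(beta)
    v11 = v1 * c**2 + v2 * s**2
    v22 = v2 * c**2 + v1 * s**2
    v12 = (v2 - v1) * c * s
    return np.array([[v11, v12], [v12, v22]])

# Illustrative values at a single geometry (energies in K, beta in rad):
V = diabatic_matrix(v1=-5000.0, v2=-1500.0, beta=0.4)
print(np.sort(np.linalg.eigvalsh(V)))   # recovers [-5000., -1500.]
\end{verbatim}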
In contrast to the $2\times 2$ diabatic Schr\"odinger Eq. \ref{Diab2x2eq}, the conventional Born-Oppenheimer (NGP) quantum dynamics calculations solve the adiabatic single surface Schr\"odinger equation
\begin{equation}
\Bigl [- {\hbar^2\over 2\,\mu}\,\nabla^2 + V_1({\bf x})\Bigr ] \,\psi_1({\bf x}) = E\,\psi_1({\bf x}) \, .
\label{BOeq}
\end{equation}
The quantum dynamics calculations use Adiabatically adjusting Principal axis Hyperspherical (APH) coordinates
in the interaction region and Delves hyperspherical coordinates in the long-range asymptotic region\cite{packparker87,Kendrick99,Kendrick2018nonad}.
The hyperradius $\rho$ is common to both coordinate systems which facilitates the coordinate transformation from the APH to Delves at an intermediate value of $\rho_m$ (determined by numerical convergence studies).
In the interaction region, the two-dimensional (2D) surface function Hamiltonian matrix is diagonalized on a discrete grid in $\rho$ ($144$ logarithmically spaced points were used between $\rho_i=6.0\,{\rm a}_{\rm 0}$
and $\rho_m=33.0\,{\rm a}_{\rm 0}$). The 2D basis functions consist of a hybrid FBR (Finite Basis Representation) in $\phi$ and DVR (Discrete Variable Representation) in $\theta$\cite{light85dvr_sdt,Kendrick99,Kendrick2018nonad}.
The size of the FBR and DVR varies with $\rho$ and is determined from numerical convergence studies. The size of the 2D Hamiltonian matrix is dramatically reduced by using SDT (Sequential Diagonalization Truncation)\cite{light85dvr_sdt}.
An efficient numerical eigensolver (PARPACK) is used to numerically diagonalize the sparse 2D Hamiltonian matrix\cite{arpack}.
For the zero total angular momentum ($J=0$) studied in this work, the matrix dimension varied between approximately $10\,000$ for small $\rho$ to $2\,500$ for large $\rho$.
The set of 2D eigensolutions form a basis for the one-dimensional coupled-channel propagation in $\rho$.
A log-derivative propagation technique is used to propagate a matrix of solutions (the log-derivative matrix) from $\rho=\rho_i$ to $\rho=\rho_m$.
The number of coupled channels propagated in this work was $820$ in the APH region and $500$ in the Delves region for each exchange symmetry even or odd.
The Delves functions for each diatomic arrangement channel consist of ro-vibrational wave functions computed numerically using a one-dimensional Numerov propagator for the vibrational motion
and a set of analytic spherical harmonics for the rotational part.
The log-derivative matrix is transformed from the APH to Delves coordinates at $\rho=\rho_m$ using the overlap matrix between the APH and Delves wave functions.
The log-derivative propagation is then continued using the Delves ro-vibrational basis across $482$ uniformly spaced $\rho$ values to the final asymptotic $\rho_f= 144.6\,{\rm a}_{\rm 0}$.
At the final value of $\rho=\rho_f$, the overlap matrix between the Delves functions and Jacobi basis functions is computed which enables the evaluation of the scattering ${\bf S}$ matrix\cite{packparker87}.
Once the ${\bf S}$ matrix is computed, cross sections $\sigma_{fi}$ and rate coefficients $K_{fi}=v\,\sigma_{fi}$ (where $v$ is the relative collision velocity) can be computed using standard expressions\cite{packparker87}.
We note that the $f,i$ denote the collective final and initial quantum numbers of the diatomic products and reactants (i.e., $f=(\tau',v',j')$ and $i=(\tau,v,j)$
where $\tau'$ and $\tau$ denote the diatomic arrangement channel Li$_2$ or LiNa), respectively.
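As a simple illustration of the relation $K_{fi}=v\,\sigma_{fi}$ quoted above, the sketch below converts a cross section into a rate coefficient in atomic units, taking the relative collision velocity as $v=\sqrt{2E_{\rm c}/\mu_{\rm rel}}$ with $\mu_{\rm rel}$ the reduced mass of the colliding partners (the Li + LiNa arrangement is used for definiteness). The collision energy, cross section and masses are placeholders, not results of this work.

\begin{verbatim}
# Illustrative conversion of a cross section into a rate coefficient,
# K = v * sigma, in atomic units.  All input values are placeholders.
import numpy as np

amu  = 1822.888486                  # 1 u in electron masses (a.u.)
m_Li = 7.016 * amu                  # 7Li mass (illustrative)
m_Na = 22.990 * amu                 # 23Na mass (illustrative)
mu   = m_Li * (m_Li + m_Na) / (2.0 * m_Li + m_Na)   # Li + LiNa reduced mass

Ec    = 1.0e-9                      # collision energy in hartree (placeholder)
sigma = 5.0e4                       # cross section in a0^2 (placeholder)

v = np.sqrt(2.0 * Ec / mu)          # relative collision velocity (a.u.)
K = v * sigma                       # rate coefficient (a0^3 per atomic time unit)
print(f"v = {v:.3e} a.u.,  K = {K:.3e} a.u.")
\end{verbatim}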
\pagebreak
{\color{red} \section*{Supplementary Materials}}
Supplementary material for this article is available at http://adavances.sciencemag.org/xxx
\clearpage
{\color{red}\section*{References and Notes}}
\section{Introduction}
\label{xxsec1}
Let $\mathcal{A}$ be an associative algebra over a field
$\mathbb{F}$. A linear mapping $\tau:
\mathcal{A}\longrightarrow \mathcal{A}$ is called a
\textit{derivation} if $\tau(AB)=\tau(A)B+A\tau(B)$ for all $A, B\in
\mathcal{A}$. Recall that a linear
mapping $g: \mathcal{A}\rightarrow \mathcal{A}$ is called a
\textit{generalized derivation} if there exists a derivation
$\tau$ such that
$$
g(AB)=g(A)B+A\tau(B)
$$
for all $A, B\in\mathcal{A}$. A linear mapping $L:
\mathcal{A}\longrightarrow \mathcal{A}$ is called a \textit{Lie
derivation} if $L([A, B])=[L(A), B]+[A, L(B)]$ for all $A, B\in
\mathcal{A}$, where $[A, B] = AB-BA$ is the usual Lie product. Clearly, every derivation on $\mathcal{A}$ is a Lie
derivation, but the converse statement is in general not true. More recently, there have been a number of papers studying conditions
under which mappings such as derivations and Lie derivations of noncommutative algebras or operator algebras can be completely determined by their action on some subsets of the given algebras (see
\cite{Bresar, ChebotarKeLeeWong, JiQi, LuJing, LiPanShen, Qi, QiCuiHou, QiHou1, QiHou2, QiHou3, Zhou, ZhuXiongZhang} and the references therein). Among these, the case of Lie-type mappings is particularly interesting and important.
Let $X$ be a Banach space over the real or complex field $\Bbb{F}$ with ${\rm dim}X\geq3$ and $\mathcal{B}(X)$ be the algebra of all bounded linear operators on $X$. Lu and Jing \cite{LuJing} gave a characterization of Lie derivations on $\mathcal{B}(X)$ by acting on zero products. Let $L: \mathcal{B}(X)\rightarrow \mathcal{B}(X)$ be a linear mapping satisfying $L([A,B]) = [L(A),B]+[A,L(B)]$ for any $A, B \in \mathcal{B}(X)$ with $AB = 0$ (resp. $AB = P$, where $P$ is a fixed nontrivial idempotent). Then $L=d+\tau$, where $d$ is a derivation of $\mathcal{B}(X)$ and $\tau : \mathcal{B}(X)\rightarrow \Bbb{F}I$ is a linear mapping vanishing
at commutators $[A,B]$ with $AB = 0$ (resp. $AB = P$). The related problem has also been investigated for prime rings and triangular algebras in \cite{JiQi, QiHou3}, for $\mathcal{J}$-subspace
lattice algebras in \cite{Qi, QiHou1}, respectively.
However, much less attention has been paid to the structure of Lie-type
higher derivations on algebras determined by local actions. The
objective of this article is to describe the structure of Lie ($\xi$-Lie)
higher derivations on $\mathcal{J}$-subspace
lattice algebras by acting on zero products.
Let us first recall some basic facts related to Lie higher
derivations of an associative algebra $\mathcal{A}$. Let $\mathbb{N}$ be
the set of all non-negative integers and
$G=\{L_k\}_{k=0}^\infty$ be a family of linear
mappings of $\mathcal{A}$ such that $L_0=id_{\mathcal{A}}$. $G$ is
called:
\begin{enumerate}
\item[(i)] a \emph{higher derivation} if
$$
L_k(xy)=\sum_{i+j=k}L_i(x)L_j(y)
$$
for all $x, y\in\mathcal{A}$ and for each $k\in\mathbb{N}$;
\item[(ii)] a \emph{Lie higher derivation} if
$$
L_k([x, y])=\sum_{i+j=k}[L_i(x), L_j(y)]
$$
for all $x, y\in\mathcal{A}$ and for each $k\in\mathbb{N}$;
\item [(iii)] a \emph{generalized higher derivation} if there exists a higher derivation $D=\{d_i\}_{i\in \mathbb{N}}$ such that
$$
L_k(xy)=\sum_{i+j=k}L_i(x)d_j(y)
$$
for all $x,y\in \mathcal {A}$ and for each $k\in \mathbb{N}$. Then $D$ is called an \textit{associated higher derivation} of $G$.
\end{enumerate}
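A standard illustration of these notions (not drawn from the above references): if $\tau$ is a derivation of an algebra over a field of characteristic zero, then $L_k=\tau^k/k!$ defines a higher derivation, and the same powers of a Lie derivation define a Lie higher derivation. The sketch below checks both identities numerically for the inner derivation ${\rm ad}_T$ on a matrix algebra; all matrices are random placeholders.

\begin{verbatim}
# Numerical sanity check: for d = ad_T on 4x4 matrices, the family
# L_k = d^k / k! satisfies the higher derivation rule
#   L_k(AB) = sum_{i+j=k} L_i(A) L_j(B)
# and the Lie higher derivation rule
#   L_k([A,B]) = sum_{i+j=k} [L_i(A), L_j(B)].
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n = 4
T, A, B = (rng.standard_normal((n, n)) for _ in range(3))

def d(X):                            # inner derivation ad_T
    return T @ X - X @ T

def L(k, X):                         # L_k = d^k / k!
    for _ in range(k):
        X = d(X)
    return X / factorial(k)

def lie(X, Y):
    return X @ Y - Y @ X

for k in range(5):
    assert np.allclose(L(k, A @ B),
                       sum(L(i, A) @ L(k - i, B) for i in range(k + 1)))
    assert np.allclose(L(k, lie(A, B)),
                       sum(lie(L(i, A), L(k - i, B)) for i in range(k + 1)))
\end{verbatim}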
Note that $L_1$ is always a Lie derivation if $G=\{L_k\}_{k\in\mathbb{N}}$ is a Lie higher
derivation. Obviously, every
higher derivation is a Lie higher derivation, but the converse
statement is in general not true. Various higher derivations,
each consisting of a family of additive mappings, frequently appear
in commutative and noncommutative contexts (see
\cite{FerreroHaetinger1, FerreroHaetinger2, Han, Han1, Hazewinkel, LiGuo, LiPanShen, QiHou4, WeiXiao, XiaoWei1, XiaoWei2} and so on).
In \cite{Han1} the first author gave a characterization of Lie-type higher derivations of associative algebras. That characterization
makes it easy to transfer problems about Lie-type higher derivations into the corresponding problems about
Lie-type derivations, and with it the first author described Lie-type higher derivations on several operator algebras. However, it seems difficult to give a unified approach of this kind for characterizing Lie-type higher derivations on associative algebras by local actions.
So for the problem studied in the current work, namely describing the form of Lie-type
higher mappings on operator algebras by local actions, the method mentioned above is not applicable. Hence in this article we turn our attention to Lie ($\xi$-Lie) higher derivations on $\mathcal{J}$-subspace
lattice algebras by acting on zero products.
Let $X$ be a Banach space over the real or complex field
$\mathbb{F}$. A family $\mathcal{L}$ of subspaces of $X$ is called a
subspace lattice of $X$ if it contains $\{0\}$ and $X$, and is
closed under the operations closed linear span $\bigvee$ and
intersection $\bigwedge$ in the sense that $\bigvee_{\gamma\in
\Gamma}L_{\gamma}\in \mathcal{L}$ and $\bigwedge_{\gamma\in
\Gamma}L_{\gamma}\in \mathcal{L}$ for every family
$\{L_{\gamma}\colon \gamma\in \Gamma\}$ of elements in
$\mathcal{L}$. For a subspace lattice $\mathcal{L}$ of $X$, the
associated subspace lattice algebra $\mathrm{Alg}\mathcal{L}$ is the
set of operators on $X$ leaving each subspace in
$\mathcal{L}$ invariant. Following the above definition, for an arbitrary
subspace $K$ in $\mathcal{L}$ we set
$$
K_- =\bigvee \{ L \in \mathcal{L}: K \nsubseteq L \}.
$$
The class of $\mathcal{J}$-subspace lattices was defined by Panaia
in his dissertation \cite{Panaia} and covers atomic Boolean
subspace lattices and pentagon subspace lattices.
$\mathcal{J}$-subspace lattices are a particular sort of
complemented lattice, satisfying certain other criteria. To be
precise, define
$$
\mathcal{J(L)}= \{K \in {\mathcal L}\colon K \neq \{0\}\ \text{and}\
K_-\neq X \}.
$$
Then $\mathcal{L}$ is called a \textit{$\mathcal{J}$-subspace
lattice} (JSL for short) on $X$, provided all of the following conditions are
satisfied:
\begin{enumerate}
\item[(1)] $\bigvee \{K\colon K \in \mathcal{J(L)}\}=X$;
\item[(2)] $\bigwedge\{K_- \colon K \in \mathcal{J(L)} \}=\{0\}$;
\item[(3)] $K \bigvee K_-=X$ for each $K$ in $\mathcal{J(L)}$;
\item[(4)] $K\bigwedge K_-=\{0\}$ for each $K$ in $\mathcal{J(L)}$.
\end{enumerate}
\noindent If $\mathcal{L}$ is a $\mathcal{J}$-subspace lattice, the associated
subspace lattice algebra ${\rm Alg}\mathcal{L}$ is called a
\textit{$\mathcal{J}$-subspace lattice algebra} (JSL algebra). The center is defined as usual by $Z({\rm Alg}\mathcal{L}) = \{Z \in {\rm Alg}\mathcal{L}: ZA = AZ \ \text{for all} \ A \in {\rm Alg}\mathcal{L}\}$.
The outline of our paper is organized as follows. Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the real or complex field $\Bbb{F}$ and $\mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace lattice algebra. In the
second section we give a characterization of a family $\{L_n\}_{n=0}^\infty$ of linear mappings satisfying
$$
L_n([A, B])=\sum_{i+j=n}[L_i(A), L_j(B)]
$$
for any $A, B \in {\rm Alg}\mathcal{L}$ with $AB=0$. This permits us to transfer the
problems related to Lie higher derivations by acting on zero products into the same problems of the corresponding Lie derivations. Then this result is applied to describe the Lie higher derivations by acting on zero products on ${\rm Alg}\mathcal{L}$. Building on the second section, we continue to investigate $\xi$-Lie higher derivations by acting on zero products in the third section, where $1\neq \xi\in \Bbb{F}$.
\section{Characterizations of Lie higher derivations by acting on zero products}
\label{xxsec2}
For any $A \in \mathcal{B}(X)$,
denote by $A^*$ the adjoint of $A$. For $x \in X$ and $f \in X^*$, the rank-one operator
$x \otimes f$ is defined by $(x \otimes f)y=f(y)x$ \ \text{for all}\ $y \in X$. For any non-empty subset $\mathfrak{L} \subseteq X$,
$\mathfrak{L}^{\bot}$
denotes its annihilator, that is, $\mathfrak{L}^{\bot} = \{f \in X^* : f(x) = 0 \ \text{for all} \ x \in \mathfrak{L}\}$.
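In finite dimensions the operator $x\otimes f$ is simply the outer product of $x$ with the vector representing the functional $f$. The following minimal sketch (purely illustrative; it ignores the lattice structure) shows the defining relation $(x\otimes f)y=f(y)x$ and the fact that $x\otimes f$ has rank one.

\begin{verbatim}
# Finite-dimensional illustration of the rank-one operator x \otimes f:
# identifying the functional f with a vector, (x \otimes f) y = f(y) x,
# and the corresponding matrix is the outer product of x and f.
import numpy as np

rng = np.random.default_rng(1)
x, f, y = rng.standard_normal(5), rng.standard_normal(5), rng.standard_normal(5)

x_tensor_f = np.outer(x, f)
assert np.allclose(x_tensor_f @ y, np.dot(f, y) * x)
assert np.linalg.matrix_rank(x_tensor_f) == 1
\end{verbatim}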
Let us first give some lemmas related to rank-one operators, which are crucial in what follows.
\begin{lemma} {\rm (}\cite{Longstaff}{\rm )}\label{xxsec2.l1}. Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$. Then
$x\otimes f\in \mathrm{Alg}\mathcal{L}$ if and only if there exists a subspace $K\in \mathcal{J}(\mathcal{L})$ such that $x \in K$
and $f \in K_-^{\bot} $.
\end{lemma}
\begin{lemma} {\rm (}\cite{LongstaffPanaia}{\rm )}\label{xxsec2.l2}. Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ and
$K\in \mathcal{J}(\mathcal{L})$. Then, for any nonzero vector $x \in K$
, there exists $f \in K_-^{\bot} $
such
that $f(x) =1$; dually, for any nonzero functional $f \in K_-^{\bot} $
, there exists $x \in K$
such that $f(x) =1$.
\end{lemma}
\begin{lemma} {\rm (}\cite{Qi}{\rm )}\label{xxsec2.l3}. Every rank one operator $x\otimes f\in \mathrm{Alg}\mathcal{L}$ is a linear combination
of idempotents in $\mathrm{Alg}\mathcal{L}$.
\end{lemma}
Moreover, finite-rank operators will also be used in later. Given a subspace lattice $\mathcal{L}$, by $\mathcal{F}_{\mathcal{L}}(K)$ we denote the subspace spanned by all rank one operators $x \otimes f$ with
$x \in K$ and $f \in K_-^{\bot}$ for arbitrary $K\in \mathcal{J}(\mathcal{L})$.
$\mathcal{F}(\mathcal{L})$ denotes the algebra of all finite rank operators in
$\mathrm{Alg}\mathcal{L}$.
\begin{lemma}{\rm (}\cite{LuLi} or \cite{Panaia}{\rm )}\label{xxsec2.l4}. Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$. Suppose that $A$ is an operator of rank $n$
in $\mathcal{F}(\mathcal{L})$. Then $A$ can be written as a sum of $n$ rank-one operators in $ \mathrm{Alg}\mathcal{L}$.
\end{lemma}
Let $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ be a family of linear mappings such that
$$
L_n([A, B])=\sum_{i+j=n}[L_i(A), L_j(B)]
$$ for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$. The following result shows that the restriction of $\{L_n\}_{n=0}^{\infty}$ to $\mathcal{F}(\mathcal{L})$ is actually a Lie higher derivation.
\begin{lemma}\label{xxsec2.l5}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $ \mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Suppose that $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a family of linear mappings such that
$$
L_n([A, B])=\sum_{i+j=n}[L_i(A), L_j(B)]
$$ for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$. Then $$
L_n([A,F])=[L_n(A),F]+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(A), L_j(F)]+[A, L_n(F)]
$$
for all $A \in \mathrm{Alg}\mathcal{L}$ and $F\in \mathcal{F}(\mathcal{L})$.
\end{lemma}
\begin{proof} The lemma will be proved through two claims.
\textbf{Claim 1.} $L_{n}(\mathbb{F}I) \subseteq Z(\mathrm{Alg}\mathcal{L})$.
Let us show this claim by induction on the index $n$. When $n=1$, $L_1$ satisfies
$L_1([A,B]) = [L_1(A),B] + [A,L_1(B)]$ for any $A,B \in \mathrm{Alg}\mathcal{L}$ with $AB = 0$, and $L_1(\mathbb{F}I) \subseteq Z(\mathrm{Alg}\mathcal{L})$ (see \cite{Qi}).
Now assume that $n\geq 1$ and that the conclusion holds for all indices not exceeding $n$.
For any scalar $\lambda$ and any idempotent $P \in \mathrm{Alg}\mathcal{L}$, in view of the fact $\lambda P(I - P) = 0$, we have
$$
\begin{aligned}
L_{n+1} ([\lambda P,I-P])&=[L_{n+1} (\lambda P),I-P]+[\lambda P,L_{n+1} (I-P)]
\\ \ \ &+\sum_{\substack{i+j={n+1} \\ 0<i,j<{n+1} }}[L_i(\lambda P), L_j(I-P)]\\
&=PL_{n+1} (\lambda P) - L_{n+1} ( \lambda P)P + \lambda PL_{n+1} (I) - \lambda PL_{n+1} (P)\\
&\ \ - \lambda L_{n+1} (I)P+ \lambda L_{n+1} (P)P+ \sum_{\substack{i+j={n+1} \\ 0<i,j<{n+1} }}[L_i(\lambda P), L_j(I-P)].
\end{aligned}
$$
On the other hand, by $(\lambda I - \lambda P)P = 0$ one can assert
$$
\begin{aligned}
L_{n+1} ([\lambda I - \lambda P,P])&=[L_{n+1} (\lambda I - \lambda P),P]+[\lambda I - \lambda P,L_{n+1} (P)]\\
& \ \ +\sum_{\substack{i+j={n+1} \\ 0<i,j<{n+1} }}[L_i(\lambda I - \lambda P), L_j(P)]\\
&=L_{n+1} (\lambda I)P - L_{n+1} (\lambda P)P - PL_{n+1} (\lambda I) + PL_{n+1} (\lambda P)\\
&- \lambda PL_{n+1} (P) + \lambda L_{n+1} (P)P+\sum_{\substack{i+j={n+1} \\ 0<i,j<{n+1} }}[L_i(\lambda I - \lambda P), L_j(P)].
\end{aligned}
$$
Comparing the above two equations, we get from the induction hypothesis that
\begin{equation}\label{xxsec2e1}
\lambda PL_{n+1}(I) - \lambda L_{n+1}(I)P = L_{n+1}(\lambda I)P - PL_{n+1}(\lambda I).
\end{equation}
A similar argument to that in \cite{Qi} shows that $L_{n+1}(\mathbb{F}I) \subseteq Z(\mathrm{Alg}\mathcal{L})$.
\textbf{Claim 2.}
For any $A \in \mathrm{Alg}\mathcal{L}$ and any rank one operator $x\otimes f\in \mathrm{Alg}\mathcal{L}$, we have
$$
L_n([A,x\otimes f])=[L_n(A),x\otimes f]+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(A), L_j(x\otimes f)]+[A, L_n(x\otimes f)].
$$
Take any $A \in \mathrm{Alg}\mathcal{L}$ and any idempotent $P \in \mathrm{Alg}\mathcal{L}$. For any scalar $\lambda$,
notice that $AP(\lambda I - \lambda P) = 0$. By Claim 1 it follows that
$$
\begin{aligned}
& L_n(\lambda PAP)-L_n(\lambda AP)=L_n([AP, \lambda I-\lambda P])\\
&=[L_n(AP),\lambda I-\lambda P]+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(AP), L_j(\lambda I-\lambda P)]+[AP, L_n(\lambda I-\lambda P)]\\
&=\lambda PL_n(AP) - \lambda L_n(AP)P - APL_n( \lambda P) + L_n(\lambda P)AP\\
&+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(AP), L_j(-\lambda P)]
\end{aligned}
$$
On the other hand, since $(A -AP)\lambda P = 0$, we obtain
$$
\begin{aligned}
& L_n(\lambda PAP)-L_n(\lambda PA)=L_n([A-AP, \lambda P])\\
&=[L_n(A-AP), \lambda P]+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(A-AP), L_j( \lambda P)]+[A-AP, L_n( \lambda P)]\\
&=\lambda L_n(A)P - \lambda PL_n(A) + \lambda PL_n(AP) - \lambda L_n(AP)P\\
&+AL_n(\lambda P) - APL_n(\lambda P) - L_n(\lambda P)A + L_n(\lambda P)AP\\
&+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(A-AP), L_j(\lambda P)]
\end{aligned}
$$
Comparing the last two relations leads to
\begin{equation}\label{xxsec2e2}
L_n([A,\lambda P])=[L_n(A),\lambda P]+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(A), L_j(\lambda P)]+[A, L_n(\lambda P)]
\end{equation}
for all $A \in \mathrm{Alg}\mathcal{L}$ and every idempotent $P\in \mathrm{Alg}\mathcal{L}$. Now, by Lemma \ref{xxsec2.l3}
and linearity, the claim is true. Furthermore, taking into account Lemma \ref{xxsec2.l4}, we get
$$
L_n([A,F])=[L_n(A),F]+\sum_{\substack{i+j=n \\ 0<i,j<n}}[L_i(A), L_j(F)]+[A, L_n(F)]
$$
for all $A \in \mathrm{Alg}\mathcal{L}$ and $F\in \mathcal{F}(\mathcal{L})$.
\end{proof}
We point out that the proof of the following lemma is essentially the same as that in \cite{Lu}, but it is carried out in a slightly different way.
\begin{lemma}\label{xxsec2.l7}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the real or complex field $\Bbb{F}$ and $\mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace lattice algebra. Suppose that $\delta: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a linear mapping such that
$\delta([A, F])=[\delta(A), F]+[A, \delta(F)]$ for all $A\in \mathrm{Alg}\mathcal{L}$ and $F\in \mathcal{F}(\mathcal{L})$.
Then for any $K \in \mathcal{J}(\mathcal{L})$, there is an operator
$S$ in $\mathcal{F}_{\mathcal{L}}(K)$ and a mapping $\tau$ taking values in $Z( \mathrm{Alg}\mathcal{L})$ such that $ \delta(x\otimes f) = [x\otimes f, S]+\tau(x\otimes f)$ for all $x \in K$ and $f\in K_-^{\bot}$.
\end{lemma}
\begin{proof}
Let $P$ be an idempotent operator in $\mathcal{F}_{\mathcal{L}}(K)$. Set $P_1 =P$ and $P_2 = I - P$.
Then for any $A_{11} \in P_1\mathrm{Alg}\mathcal{L}P_1$ we have
$$0 = \delta([P_1,A_{11}]) = \delta(P_1)A_{11} - A_{11} \delta(P_1) + P_1\delta(A_{11})- \delta(A_{11})P_1.$$
Multiplying the above equality by $P_1$ from left and right sides gives
$$
P_1\delta(P_1)P_1A_{11} = A_{11}P_1\delta(P_1)P_1.
$$
In an analogous manner, we also get
$$
\begin{aligned}
&P_2\delta(P_1)P_2A_{22} = A_{22}P_2\delta(P_1)P_2,\\
&P_1\delta(P_1)P_1A_{12} = A_{12}P_2\delta(P_1)P_2,\\
&P_2\delta(P_1)P_2A_{21} = A_{21}P_1\delta(P_1)P_1.
\end{aligned}
$$
These facts imply that $ \tau (P) := P_1\delta(P_1)P_1 + P_2\delta(P_1)P_2 \in Z(\mathrm{Alg}\mathcal{L})$. Now let $S=P_1\delta(P_1)P_2 - P_2\delta(P_1)P_1$; it is easy to check that $S=[\delta(P_1),P_2]=[P_1, \delta(P_1)]$. Note that $\mathcal{F}_{\mathcal{L}}(K)$ is an ideal of $\mathrm{Alg}\mathcal{L}$ (see Lemma 2.4 in \cite{Lu}), so
$S\in \mathcal{F}_{\mathcal{L}}(K)$, and a direct computation shows that $\delta(P) = [P, S] + \tau (P)$.
Now let $x\otimes f$ be an arbitrary rank-one operator with $x \in K$ and $f\in K_-^{\bot}$; we distinguish two cases.
\textbf{Case 1.} If $f(x)\neq 0$, then $f(x)^{-1}x\otimes f$ is an idempotent in $\mathcal{F}_{\mathcal{L}}(K)$, and the result follows from the above argument by linearity.
\textbf{Case 2.} Now assume that $f(x) = 0$. In light of Lemma \ref{xxsec2.l2} one can take $y \in K$ such that $f(y) = 1$.
By the previous fact it follows that $\delta(y\otimes f) = [y\otimes f, [y\otimes f,\delta(y\otimes f)]] + \tau(y\otimes f)$.
Let us define a mapping $\Delta: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ by
\begin{equation}\label{xxsec2e8}\Delta(A) =\delta(A) - [A,[y\otimes f,\delta(y\otimes f)]],\end{equation}
where $A\in \mathrm{Alg}\mathcal{L}$. Then $\Delta(y\otimes f)\in Z(\mathrm{Alg}\mathcal{L})$ and therefore,
$$
\begin{aligned}
&\ \Delta(x\otimes f) =\Delta([x\otimes f,y\otimes f]) =[\Delta(x\otimes f),y\otimes f]\\
&=\Delta(x\otimes f)y\otimes f-y\otimes f\Delta(x\otimes f),\\
\end{aligned}
$$
which implies that $\Delta(x\otimes f)=(I-y\otimes f)\Delta(x\otimes f)y\otimes f$. Furthermore, $ \Delta(x\otimes f)$ can be rewritten as $z\otimes f$, where
$z\in K$ and $f(z)=0$. Choosing $g \in K_-^{\bot}$ such that $g(x)=1$,
we arrive at
\begin{equation}\label{xxsec2e9}
\Delta(x\otimes f)=z\otimes f=(z\otimes g)(x\otimes f)-(x\otimes f)(z\otimes g).
\end{equation}
Combining (\ref{xxsec2e8}) with (\ref{xxsec2e9}) yields
$$
\begin{aligned}
\delta(x\otimes f)&=[x\otimes f,[y\otimes f,\delta(y\otimes f)]]-[x\otimes f,z\otimes g]\\
&=[x\otimes f,[y\otimes f,\delta(y\otimes f)]-z\otimes g],
\end{aligned}
$$
from which we can see that the conclusion also holds in this case.
\end{proof}
The following proposition will give a new characterization of the family of linear mappings $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ satisfying
$$
L_n([A, B])=\sum_{i+j=n}[L_i(A), L_j(B)]
$$
for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$. These properties will
partly enable us to transfer the problems of $\{L_n\}_{n=0}^{\infty}$ into the same problems related
to Lie derivations on $\mathcal{J}$-subspace
lattice algebras by acting on zero products.
\begin{proposition}\label{xxsec2.p1}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $\mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Suppose that $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a family of linear mappings satisfying
$$
L_n([A, B])=\sum_{i+j=n}[L_i(A), L_j(B)]
$$ for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$. Then there is a sequence $\{\delta_n\}_{n=0}^\infty :\mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ of linear mappings satisfying \begin{equation}
\label{xxsec2e9c1}\delta_n([A, F])=[\delta_n(A), F]+[A, \delta_n(F)]\end{equation}
for all $A\in \mathrm{Alg}\mathcal{L}$ and $F\in \mathcal{F}(\mathcal{L})$
such that
$$
(n+1)L_{n+1}=\sum_{k=0}^nL_{n-k} \delta_{k+1}
$$
for each non-negative integer $n$.
\end{proposition}
\begin{proof}
Let us prove this proposition by induction on the index $n$. If $n=0$, then
$$
L_1([A, B])=[L_1(A), L_0(B)]+[L_0(A),L_1(B)]=[L_1(A), B]+[A,
L_1(B)]
$$
for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$. If we set $\delta_1=L_1$, then
$\delta_1([A, B])=[\delta_1(A), B]+[A, \delta_1(B)]$ for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$. By Lemma \ref{xxsec2.l5} we know that $\delta_1([A, F])=[\delta_1(A), F]+[A, \delta_1(F)]$ holds for all $A\in \mathrm{Alg}\mathcal{L}$ and $F\in \mathcal{F}(\mathcal{L})$.
We now suppose that, for each $k\leq n$, the mapping $\delta_k$ has been constructed and satisfies (\ref{xxsec2e9c1}). Let us
define
$$
\delta_{n+1}=(n+1)L_{n+1}-\sum_{k=0}^{n-1}L_{n-k}\delta_{k+1}.
$$
It is sufficient to show that $\delta_{n+1}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ satisfies
the equality (\ref{xxsec2e9c1}).
For any $A \in \mathrm{Alg}\mathcal{L}$ and any rank one operator $x\otimes f\in \mathrm{Alg}\mathcal{L}$,
by the induction hypothesis, we can compute that
$$
\begin{aligned}
\delta_{n+1}([A,x\otimes f])&=(n+1)L_{n+1}([A,x\otimes f])-\sum_{k=0}^{n-1}L_{n-k}\delta_{k+1}([A,x\otimes f])\\
&=(n+1)L_{n+1}([A,x\otimes f])-\sum_{k=0}^{n-1}L_{n-k}[\delta_{k+1}(A),x\otimes f]\\&\ \ -\sum_{k=0}^{n-1}L_{n-k}[A,\delta_{k+1}(x\otimes f)].
\end{aligned}
$$
Applying Lemma \ref{xxsec2.l7}, we can further get
$$
\begin{aligned}
\delta_{n+1}([A,x\otimes f])&=(n+1)L_{n+1}([A,x\otimes f])-\sum_{k=0}^{n-1}L_{n-k}[\delta_{k+1}(A),x\otimes f]\\&
\ \ -\sum_{k=0}^{n-1}L_{n-k}[A,(x\otimes f){S}_{k+1}-{S}_{k+1}(x\otimes f)]\\
&=(n+1)\sum_{k=0}^{n+1}[L_k(A),
L_{n+1-k}(x\otimes f)]-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i\delta_{k+1}(A),
L_{n-k-i}(x\otimes f)]\\
&\ \ -\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i(A),
L_{n-k-i}((x\otimes f){S}_{k+1}-{S}_{k+1}(x\otimes f))].
\end{aligned}
$$
For convenience, let us write
$$
\begin{aligned}
U&=\sum_{k=0}^{n+1}k[L_k(A), L_{n+1-k}(x\otimes f)]-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i\delta_{k+1}(A), L_{n-k-i}(x\otimes f)],\\
V&=\sum_{k=0}^{n+1}(n+1-k)[L_k(A),
L_{n+1-k}(x\otimes f)]-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i(A),
L_{n-k-i}((x\otimes f){S}_{k+1}-{S}_{k+1}(x\otimes f))].
\end{aligned}
$$
Then it is easy to verify $\delta_{n+1}([A,x\otimes f])=U+V$. In the expression of sum
$\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}$, we know that $k\neq n$ and
$0\leq k+i\leq n$. If we set $r=k+i$, then
$$\begin{aligned}
U&=\sum_{k=0}^{n+1}k[L_k(A), L_{n+1-k}(x\otimes f)]-\sum_{r=0}^{n}\sum_{0\leq k\leq r,k\neq n}[L_{r-k}\delta_{k+1}(A),L_{n-r}(x\otimes f)]\\
&=\sum_{r=0}^{n}(r+1)[L_{r+1}(A),
L_{n-r}(x\otimes f)]-\sum_{k=0}^{n-1}[L_{n-k}\delta_{k+1}(A), x\otimes f]\\
&\hspace{10pt}- \sum_{r=0}^{n-1}\sum_{k=0}^{r}[L_{r-k}\delta_{k+1}(A),
L_{n-r}(x\otimes f)]\\
&=\sum_{r=0}^{n-1}[(r+1)L_{r+1}(A)-\sum_{k=0}^{r}L_{r-k}\delta_{k+1}(A),L_{n-r}(x\otimes f)]\hspace{45pt}\\
&\quad +(n+1)[L_{n+1}(A),
x\otimes f]-\sum_{k=0}^{n-1}[L_{n-k}\delta_{k+1}(A), x\otimes f].
\end{aligned}$$
Applying the induction hypothesis to the above equality, we obtain
$$
U=[(n+1)L_{n+1}(A)-\sum_{k=0}^{n-1}L_{n-k}\delta_{k+1}(A),
x\otimes f]=[\delta_{n+1}(A), x\otimes f].
$$
On the other hand, a direct computation gives
$$\begin{aligned}
V&=\sum_{i=0}^{n}[L_i(A), (n+1-i)L_{n+1-i}(x\otimes f)] -\sum_{i=1}^{n}\sum_{k=0}^{n-i}[L_i(A), L_{n-k-i}((x\otimes f){S}_{k+1}-{S}_{k+1}(x\otimes f))]\\
&\ \ \ \ -\sum_{k=0}^{n-1}[A, L_{n-k}((x\otimes f){S}_{k+1}-{S}_{k+1}(x\otimes f))]\\
&=\sum_{i=1}^{n}[L_i(A), (n+1-i)L_{n+1-i}(x\otimes f) -\sum_{k=0}^{n-i}L_{n-k-i}((x\otimes f){S}_{k+1}-{S}_{k+1}(x\otimes f))]\\
&\ \ \ +[A, (n+1)L_{n+1}(x\otimes f)]-\sum_{k=0}^{n-1}[A, L_{n-k}((x\otimes f){S}_{k+1}-{S}_{k+1}(x\otimes f))]
\end{aligned}$$
Furthermore, using the induction hypothesis again, we obtain
\begin{equation}\label{xxsec2e9c5}
\begin{aligned}
V&=\sum_{i=1}^{n}[L_i(A),
(n+1-i)L_{n+1-i}(x\otimes f)-\sum_{k=0}^{n-i}L_{n-k-i} \delta_{k+1}(x\otimes f)]\\
&\ \ +[A,(n+1)L_{n+1}(x\otimes f)-\sum_{k=0}^{n-1}L_{n-k} \delta_{k+1}(x\otimes f)]]\\
&\ \ +\sum_{i=1}^{n}[L_i(A),\sum_{k=0}^{n-i}L_{n-k-i} \tau_{k+1}(x\otimes f)]+[A,\sum_{k=0}^{n-1}L_{n-k} \tau_{k+1}(x\otimes f)]\\
&=[A, (n+1)L_{n+1}(x\otimes f)-\sum_{k=0}^{n-1}L_{n-k} \delta_{k+1}(x\otimes f)]\\
&\ \ +\sum_{i=1}^{n}[L_i(A),\sum_{k=0}^{n-i}L_{n-k-i} \tau_{k+1}(x\otimes f)]+[A,\sum_{k=0}^{n-1}L_{n-k} \tau_{k+1}(x\otimes f)].
\end{aligned}
\end{equation}
Note that
$$
\begin{aligned}
&\sum_{i=1}^{n}[L_i(A),\sum_{k=0}^{n-i}L_{n-k-i} \tau_{k+1}(x\otimes f)]+[A,\sum_{k=0}^{n-1}L_{n-k} \tau_{k+1}(x\otimes f)]\\
&=[A,L_{n} \tau_{1}(x\otimes f)+L_{n-1} \tau_{2}(x\otimes f)+\cdots+L_{1} \tau_{n}(x\otimes f)]\\
& \ \ +[L_1(A),L_{n-1} \tau_{1}(x\otimes f)+L_{n-2} \tau_{2}(x\otimes f) +\cdots+L_{1} \tau_{n-1}(x\otimes f)+L_{0} \tau_{n}(x\otimes f)]\\
& \ \ +\cdots\\
& \ \ +[L_{n-1}(A),L_{1} \tau_{1}(x\otimes f) + \tau_{2}(x\otimes f)]\\
& \ \ +[L_n(A), \tau_{1}(x\otimes f)]\\
&=L_n[A,\tau_1(x\otimes f)] +L_{n-1}[A,\tau_2(x\otimes f)]+\cdots+L_1[A,\tau_n(x\otimes f)].
\end{aligned}
$$
Taking into account the relation (\ref{xxsec2e9c5}) yields
$$
\begin{aligned}
V&=[A, (n+1)L_{n+1}(x\otimes f)-\sum_{k=0}^{n-1}L_{n-k}\delta_{k+1}(x\otimes f)]\\
&\ \ +L_n[A,\tau_1(x\otimes f)]+L_{n-1}[A,\tau_2(x\otimes f)]+\cdots+L_1[A,\tau_n(x\otimes f)]\\
&=[A, \delta_{n+1}(x\otimes f)].
\end{aligned}
$$
Finally, we conclude that
$$
\delta_{n+1}([A, x\otimes f])=U+V=[\delta_{n+1}(A), x\otimes f]+[A, \delta_{n+1}(x\otimes f)].
$$
It follows from Lemma \ref{xxsec2.l4} that $\delta_{n+1}$ satisfies (\ref{xxsec2e9c1}) due to the linearity.
\end{proof}
Before giving our main result we recall some of the basic concepts about inner higher derivations which will be used later. Let $\mathcal{A}$ be an associative algebra. We denote the set of higher derivations of order $m$ on $\mathcal{A}$ by $D_m(\mathcal{A})$, which is a group under the multiplication $*$ defined by
$$
(d*d')_n=\sum_{\substack{i+j=n}}d_i\circ d'_j,\ \ n\leq m,
$$
where $d, d'\in D_m(\mathcal{A})$(see \cite{Nowicki} and the
references therein). Let $\mathbf{a}=(a_n)_{n\leq m}$ be a sequence in $\mathcal{A}$. Denote by $\Delta(\mathbf{a})$ a family of mappings defined by
$$\Delta(\mathbf{a})_n=([a_1,1]*[a_2, 2]*\cdots*[a_n, n])_n, \ \ n\leq m \ \ \ \eqno(\bigstar)$$
where $$[a,k]_n(x)=\left\{
\begin{array}{ll}
x, & \hbox{if $n=0$ ;} \\
0, & \hbox{if $k\nmid n$;} \\
a^rx-a^{r-1}xa , & \hbox{if $n\neq0$, and $ n=kr$.}
\end{array}
\right.
$$
Then $\Delta(\mathbf{a})\in D_m(\mathcal{A})$ is called an \textit{inner higher derivation} of order $m$ (\cite{Nowicki}).
In the context of this article, $\mathbf{T}_K=(T_{Kn})_{n\in \mathbb{N}}$ is a sequence in $\mathcal{B}(K)$ and $\Delta(\mathbf{T})$ is a family of mappings defined by $$\Delta(\mathbf{T})_{Kn}=([{T_K}_1,1]*[{T_K}_2, 2]*\cdots*[{T_K}_n, n])_{n}. $$ For example, let $A \in \mathrm{Alg}\mathcal{L}$, then
$$\begin{aligned}
&\Delta(\mathbf{T})_{K1}(A)=T_{K1}A-AT_{K1}\\
&\Delta(\mathbf{T})_{K2}(A)=T_{K1}^2A-T_{K1}AT_{K1}+T_{K2}A-AT_{K2},\\
&\Delta(\mathbf{T})_{K3}(A)=T_{K1}^3A-T_{K1}^2AT_{K1}+T_{K1}T_{K2}A+A T_{K2}T_{K1}\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -T_{K1}AT_{K2}-T_{K2}AT_{K1}+T_{K3}A-AT_{K3}.
\end{aligned}$$
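The explicit expressions displayed above can be tested directly. The sketch below verifies numerically, on random matrices, that $\Delta(\mathbf{T})_{K1}$, $\Delta(\mathbf{T})_{K2}$ and $\Delta(\mathbf{T})_{K3}$ written in this form satisfy the higher Leibniz rule $\Delta_n(AB)=\sum_{i+j=n}\Delta_i(A)\Delta_j(B)$; this is only a sanity check with placeholder matrices, not part of the formal development.

\begin{verbatim}
# Check that the displayed expressions for Delta(T)_{K1}, Delta(T)_{K2},
# Delta(T)_{K3} satisfy Delta_n(AB) = sum_{i+j=n} Delta_i(A) Delta_j(B)
# for n = 1, 2, 3, on random 4x4 matrices (placeholders).
import numpy as np

rng = np.random.default_rng(2)
m = 4
T1, T2, T3, A, B = (rng.standard_normal((m, m)) for _ in range(5))

def D0(X): return X
def D1(X): return T1 @ X - X @ T1
def D2(X): return T1 @ T1 @ X - T1 @ X @ T1 + T2 @ X - X @ T2
def D3(X): return (T1 @ T1 @ T1 @ X - T1 @ T1 @ X @ T1
                   + T1 @ T2 @ X + X @ T2 @ T1
                   - T1 @ X @ T2 - T2 @ X @ T1
                   + T3 @ X - X @ T3)

D = [D0, D1, D2, D3]
for n in range(1, 4):
    rhs = sum(D[i](A) @ D[n - i](B) for i in range(n + 1))
    assert np.allclose(D[n](A @ B), rhs)
\end{verbatim}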
Now we are in a position to state the main theorem of this section.
\begin{theorem}\label{xxsec1.1}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $ \mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Suppose that $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a family of linear mappings. Then $\{L_n\}_{n=0}^{\infty}$ satisfies
$$
L_n([A, B])=\sum_{i+j=n}[L_i(A), L_j(B)]
$$ for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$ if and
only if for each $K \in \mathcal{J}(\mathcal{L})$, there exist a family of linear mappings $\{\Delta(\mathbf{T})_{Kn} \}_{n=1}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ and a sequence of linear functionals $\{h_{Kn}\}_{n\in \mathbb{N}}: \mathrm{Alg}\mathcal{L}\rightarrow \mathbb{F}$ satisfying $h_{Kn}([A,B]) = 0$ whenever $AB = 0$ such
that $ L_n(A)x = (\Delta(\mathbf{T})_{Kn} (A)+ h_{Kn}(A)I)x$ for all $A \in \mathrm{Alg}\mathcal{L}$ and all $x \in K$.
\end{theorem}
\begin{proof}
The ``if'' part is obvious. We will prove the ``only if''
part. The proof will be obtained by induction.
\textbf{Claim 1.} $ L_1(A)x = (\Delta(\mathbf{T})_{K1} (A)+ h_{K1}(A))x$ for all $A \in \mathrm{Alg}\mathcal{L}$ and for all $x\in K$, and the linear
mapping $h_{K1}: \mathrm{Alg}\mathcal{L}\rightarrow \mathbb{F}$ satisfies $h_{K1}([A,B])=0$ whenever $AB=0$.
It follows from Proposition \ref{xxsec2.p1} that there is a
sequence of linear mappings $\{\delta_n\}_{n=0}^\infty :\mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ satisfying $\delta_n([A, F])=[\delta_n(A), F]+[A, \delta_n(F)]$ for all $A\in \mathrm{Alg}\mathcal{L}$, $F\in \mathcal{F}(\mathcal{L})$ such that
$$
(n+1)L_{n+1}=\sum_{k=0}^nL_{n-k} \delta_{k+1}
$$
for each non-negative integer $n$.
Clearly, the restriction of each $\delta_i$ to $\mathcal{F}(\mathcal{L})$ is a Lie derivation. This implies that it is standard by \cite[Theorem 3.1]{Lu}. That is, there exists a derivation $d_i: \mathcal{F}(\mathcal{L})\rightarrow \mathrm{Alg}\mathcal{L}$ and a linear mapping $\tau _i: \mathcal{F}(\mathcal{L})\rightarrow Z(\mathrm{Alg}\mathcal{L})$ vanishing on every commutator such
that
\begin{equation}\label{xxsec2e10}\delta_i(F) = d_i(F) + \tau_i(F)\end{equation}
for every $F\in \mathcal{F}(\mathcal{L})$.
Let us fix an element $K\in \mathcal{J}(\mathcal{L})$ and choose a nonzero vector $x_K \in K$. In view of Lemma \ref{xxsec2.l2}, one can take $f_K \in K_-^{\bot} $
such that
$f_K(x_K) =1$. Note that $x \otimes f_K\in \mathcal{F}(\mathcal{L})$ for all $x \in K$. Thus we can define a linear mapping $R_{Ki}: K\rightarrow K$ by
$$
R_{Ki}(x) = d_i(x \otimes f_K)x_K
$$
for all $x \in K$. Then for any $F\in \mathcal{F}(\mathcal{L})$ and any $x \in K$, we have
$$R_{Ki}(Fx) = d_i(Fx \otimes f_K)x_K = d_i(F)(x \otimes f_K)x_K + Fd_i(x \otimes f_K)x_K,$$
from which we can get
\begin{equation}\label{xxsec2et19}d_i(F)x = (R_{Ki}F - FR_{Ki})x \end{equation}for all $x \in K$. Since each
$\delta_i(i\in \mathbb{N})$ in the sequence
$\{\delta_n\}_{n=0}^\infty$ is of the form (\ref{xxsec2e10}), each $\delta_i(i\in \mathbb{N})$ can be written as
\begin{equation}\label{xxsec2et20}\delta_i(F)x =(R_{Ki}F-FR_{Ki})x + \tau_i(F)x\end{equation}
for every $F\in \mathcal{F}(\mathcal{L})$ and for all $x \in K$. Here we omit the verification of the boundedness of each $R_{Ki}$, which is similar to the proof of \cite[The Main Theorem]{Lu}.
In view of equality (\ref{xxsec2et20}) we assert
$$
\delta_i([A, F])x= (R_{Ki}(AF - FA) - (AF - FA)R_{Ki})x + \tau_i([A, F])x
$$
for all $x \in K$. On the other hand, by Lemma \ref{xxsec2.l5} we conclude
$$
\begin{aligned}
\delta_i([A, F])x&=[\delta_i(A), F]x + [A, d_i(F)]x\\
&=(\delta_i(A)F - F\delta_i(A))x + (A(R_{Ki}F - FR_{Ki}) - (R_{Ki}F - FR_{Ki})A)x.
\end{aligned}
$$
for all $x \in K$. Comparing the last two equalities gives
$$(\delta_i(A) -(R_{Ki}A - AR_{Ki}))Fx = F(\delta_i(A) - (R_{Ki}A - AR_{Ki}))x + \tau_i ([A, F])x$$ for all $x \in K$.
Let $y \in K$. Choosing $f \in K_-^{\bot} $
with $f(x) = 1$ and then putting $F =y\otimes f$ in the last equation, we
arrive at
\begin{equation}\label{xxsec2e15}(\delta_i(A)- (R_{Ki}A- AR_{Ki}))y = f((\delta_i(A)-(R_{Ki}A - AR_{Ki}))x)y +\tau_i ([A,y \otimes f])x\end{equation}
for all $y \in K$.
Note that $\tau_i ([A, y \otimes f])$ is in the center
of $ \mathrm{Alg}\mathcal{L}$, so its restriction to $K$ acts as a scalar multiple of the identity; in particular, $\tau_i ([A, y \otimes f])x$ is a scalar multiple of $x$ for every $x\in K$. Let us now take $x=y$. It follows from equality (\ref{xxsec2e15}) that $(\delta_i(A)- (R_{Ki}A- AR_{Ki}))y$ is a scalar multiple of $y$. Consequently, there exists a scalar $h'_{Ki}(A)$ such that
$$(\delta_i(A) - (R_{Ki}A - AR_{Ki}))y = h'_{Ki}(A)I_Ky$$
for all $y \in K$ and all $A \in \mathrm{Alg}\mathcal{L}$. Using the linearity of $\delta_i$ one can easily check that $h'_{Ki}$ is a linear mapping.
Therefore,
\begin{equation}\label{xxsec2e20c1}\begin{aligned}L_1(A)x&= \delta_1(A)x =(R_{K1}A-AR_{K1})x + h'_{K1}(A)I_Kx\\
&=\Delta(\mathbf{T})_{K1} (A)x+ h_{K1}(A)I_Kx\end{aligned}\end{equation}
for all $A \in \mathrm{Alg}\mathcal{L}$ and for all $x\in K$. Moreover, by the assumption on $L_1$ we know that $h_{K1}([A,B]) = 0$ whenever $AB=0$ for any $A,B \in\mathrm{Alg}\mathcal{L}$.
We now suppose that $L_k$ has the desired form for each $k\leq n$. Then we only need to prove the following Claim 2.
\textbf{Claim 2.} $ L_{n+1}(A)x = (\Delta(\mathbf{T})_{Kn+1} (A)+ h_{Kn+1}(A))x$ for all $A \in \mathrm{Alg}\mathcal{L}$, $x\in K$,
and $h_{Kn+1}: \mathrm{Alg}\mathcal{L}\rightarrow \mathbb{F}$ satisfies $h_{Kn+1}([A,B])=0$ whenever $AB=0$.
The proof of this claim will be realized through the following two steps.
\textbf{Step 1.} $L_{n+1}(F)x=\Delta(\mathbf{T})_{K n+1}(F)x+S_{n+1}(F)x$ for all $F \in \mathcal{F}(\mathcal{L})$ and for all $x\in K$ and there exists a scalar $\lambda_{Kn+1}(F)$ such that $S_{n+1}(F)x=\lambda_{Kn+1}(F)x$.
Note that $\Delta(\mathbf{T})_{Ki}$ and $S_i$ ($1\leq i\leq n$) have already been established by Claim 1 and the induction hypothesis. Hence we have
\begin{equation}\label{xxsec2e20c3}
\begin{aligned}
L_{n+1}(F)x
&=\frac{1}{n+1}(L_n\delta_1 +L_{n-1}\delta_2 +\cdots +L_1\delta_{n} +L_0\delta_{n+1} )(F)x\\
&=\frac{1}{n+1}[(\Delta(\mathbf{T})_{Kn} + S_{n} )(R_{K1}F - FR_{K1} + h'_{K1}(F) )\\&\hspace{10pt}+(\Delta(\mathbf{T})_{Kn-1} + S_{n-1} )(R_{K2}F - FR_{K2} + h'_{K2}(F) ) +\cdots\\
&\hspace{10pt} +(\Delta(\mathbf{T})_{K1} + S_1 )(R_{Kn}F - FR_{Kn}+h'_{Kn}(F))+R_{Kn+1}F - FR_{Kn+1} + h'_{Kn+1}(F) ] x\\
&=\frac{1}{n+1}[\Delta(\mathbf{T})_{Kn} (R_{K1}F-FR_{K1})+\Delta(\mathbf{T})_{Kn-1} (R_{K2}F-FR_{K2})+\cdots\\&\hspace{10pt}+\Delta(\mathbf{T})_{K1}(R_{Kn}F-FR_{Kn})+R_{Kn+1}F - FR_{Kn+1}+S'_{n+1}]x\\
&\triangleq [\Delta(\mathbf{T})_{K n+1}(F) +S_{n+1}(F)]x\\
\end{aligned}
\end{equation}
for all $F \in \mathcal{F}(\mathcal{L})$ and all $x\in K$.
It is also easy to verify that $S_{n+1}(F)x=\lambda_{Kn+1}(F)x$
for all $F \in \mathcal{F}(\mathcal{L})$ and all $x\in K$; besides,
$ \Delta(\mathbf{T})_{Kn+1}$ defined above is a mapping of the form ($\bigstar$) of order $n+1$. Let us briefly justify the latter claim. Obviously (or see \cite{Hazewinkel}), $$(\Delta(\mathbf{T})_{K1},\cdots,\Delta(\mathbf{T})_{Kn},\Delta(\mathbf{T})_{Kn+1})$$ is a higher
derivation of order $n+1$ on $\mathcal{F}(\mathcal{L})$. We may assume that $\Delta'(\mathbf{T})_{Kn+1}$ is a mapping of the form ($\bigstar$) of order $n+1$, that is,
$$\Delta'(\mathbf{T})_{Kn+1}=([{T_K}_1,1]*[{T_K}_2, 2]*\cdots*[{T_K}_n, n]*[{T_K}'_{n+1}, n+1])_{n+1},$$
then $$(\Delta(\mathbf{T})_{K1},\cdots,\Delta(\mathbf{T})_{Kn},\Delta'(\mathbf{T})_{Kn+1})$$ is also a higher
derivation of order $n+1$ on $\mathcal{F}(\mathcal{L})$. A direct calculation shows that $\Delta(\mathbf{T})_{Kn+1}-\Delta'(\mathbf{T})_{Kn+1}$ is a usual derivation on $\mathcal{F}(\mathcal{L})$(or see \cite[Lemma 4.1]{Nowicki}). Note that equality (\ref{xxsec2et19}) implies that there exists some ${T_K}''_{n+1} \in \mathcal{B}(K)$ such that
$$\Delta(\mathbf{T})_{Kn+1}-\Delta'(\mathbf{T})_{Kn+1}=[{T_K}''_{n+1}, n+1]_{n+1}.$$
Hence, setting ${T_K}'_{n+1}+{T_K}''_{n+1}={T_K}_{n+1}$, we get
$$
\begin{aligned}
\Delta(\mathbf{T})_{Kn+1}&=([{T_K}_1,1]*[{T_K}_2, 2]*\cdots*[{T_K}_n, n]*[{T_K}'_{n+1},n+1])_{n+1}+[{T_K}''_{n+1},n+1]_{n+1}\\
&=([{T_K}_1,1]*[{T_K}_2, 2]*\cdots*[{T_K}_n, n]*[{T_K}'_{n+1}+{T_K}''_{n+1},n+1])_{n+1}\\
&=([{T_K}_1,1]*[{T_K}_2, 2]*\cdots*[{T_K}_n, n]*[{T_K}_{n+1},n+1])_{n+1},
\end{aligned}
$$
which is the desired form.
\textbf{Step 2.} $L_{n+1}$ has the desired form and the Claim 2 holds.
Take any operator $A\in \mathrm{Alg}\mathcal{L} $. For any $K \in \mathcal{J}(\mathcal{L})$ and any nonzero $x \in K$, by Lemma \ref{xxsec2.l2}, there exists some $f\in K_-^{\bot} $ such that $f(x)=1$. Note that $\mathcal{F}_{\mathcal{L}}(K)$ is an ideal of $\mathrm{Alg}\mathcal{L}$, so by equality (\ref{xxsec2e20c3}) we have
$$
L_{n+1}([A,x\otimes f])x= \Delta(\mathbf{T})_{K n+1}([A,x\otimes f])x+S_{n+1}([A,x\otimes f])x\\
$$
for all $x\in K$. On the other hand, using Lemma \ref{xxsec2.l5} and the induction hypothesis, we get
$$
\begin{aligned}
L_{n+1}([A,x\otimes f])x&=[L_{n+1}(A),x\otimes f]x+[A,L_{n+1}(x\otimes f)]x\\
&\ \ +\sum_{\substack{i+j=n+1 \\ 0<i,j<n+1}}[L_i(A), L_j(x\otimes f)]x\\
&=[L_{n+1}(A),x\otimes f]x+[A,\Delta(\mathbf{T})_{K n+1}(x\otimes f)]x\\
&\ \ +\sum_{\substack{i+j=n+1 \\ 0<i,j<n+1}}[\Delta(\mathbf{T})_{K i}(A), \Delta(\mathbf{T})_{K j}(x\otimes f)] x
\end{aligned}
$$for all $x \in K$. From the last two relations we obtain
$$\begin{aligned}
& (L_{n+1}(A)-\Delta(\mathbf{T})_{K n+1}(A))x\\
&=f((L_{n+1}(A)-\Delta(\mathbf{T})_{K n+1}(A))x)x+S_{n+1}([A,x\otimes f])x
\end{aligned}$$
for all $x \in K$. It follows from Step 1 that $S_{n+1}([A,x\otimes f])x$ is a scalar multiple of $x$.
Consequently, $(L_{n+1}(A)-\Delta(\mathbf{T})_{K n+1}(A))x$ is also a scalar multiple of $x$.
Namely,
there exists a scalar $h_{Kn+1}(A)$ such that
$$L_{n+1}(A) x= (\Delta(\mathbf{T})_{K n+1}(A)+ h_{Kn+1}(A)I_K)x$$ holds for all $A\in \mathrm{Alg}\mathcal{L} $ and all $x\in K$. Moreover, it is also easy
to check that $h_{Kn+1} $ is linear and $h_{Kn+1}([A,B]) = 0$ for any $A,B \in\mathrm{Alg}\mathcal{L}$ with $AB=0$.
\end{proof}
As an immediate consequence of the above theorem we recover the following result from \cite{Qi}.
\begin{corollary}\label{xxsec2.c1}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $ \mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Then a linear mapping $\delta: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ satisfies
$$
\delta([A, B])=[\delta(A), B]+[A, \delta(B)]
$$ for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$ if and
only if, for each $K \in \mathcal{J}(\mathcal{L})$, there exist an operator $T_K\in \mathcal{B}(K)$ and a linear functional $h_K: \mathrm{Alg}\mathcal{L}\rightarrow \mathbb{F}$ satisfying $h_{K}([A,B]) = 0$ whenever $AB = 0$ such
that $ \delta(A)x = (T_{K}A-AT_{K}+ h_K(A)I)x$ for all $A \in \mathrm{Alg}\mathcal{L}$ and all $x \in K$.
\end{corollary}
\section{Characterizations of $\xi$-Lie higher derivations by acting on zero
products}
Let $\mathcal{A}$ be an associative algebra over a field $\mathbb{F}$ and let $\xi\in\mathbb{F}$. The binary operation $[A,B]_{\xi} = AB - \xi BA$ is called the \textit{$\xi$-Lie
product} of $A, B\in\mathcal{A}$ (see \cite{QiHou}). Recall that a linear mapping
$L: \mathcal{A}\rightarrow \mathcal{A}$ is called a \textit{$\xi$-Lie derivation} if $L([A,B]_{\xi}) = [L(A),B]_{\xi} + [A,L(B)]_{\xi}$
for all $A, B\in\mathcal{A} $.
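Observe that every derivation is automatically a $\xi$-Lie derivation for every $\xi$, since $\tau(AB-\xi BA)=\tau(A)B+A\tau(B)-\xi(\tau(B)A+B\tau(A))=[\tau(A),B]_{\xi}+[A,\tau(B)]_{\xi}$; this observation underlies the easy (``if'') direction of Lemma \ref{xxsec3.l2} below. The sketch that follows illustrates the identity numerically for an inner derivation on a matrix algebra, with placeholder values of $\xi$.

\begin{verbatim}
# Illustration: any derivation d satisfies the xi-Lie derivation rule
#   d(AB - xi*BA) = [d(A),B]_xi + [A,d(B)]_xi   for every xi.
# Here d = ad_T on random 4x4 matrices (placeholders).
import numpy as np

rng = np.random.default_rng(3)
m = 4
T, A, B = (rng.standard_normal((m, m)) for _ in range(3))

def d(X):
    return T @ X - X @ T

def xi_lie(X, Y, xi):
    return X @ Y - xi * Y @ X

for xi in (0.0, -1.0, 0.5, 2.0):
    lhs = d(xi_lie(A, B, xi))
    rhs = xi_lie(d(A), B, xi) + xi_lie(A, d(B), xi)
    assert np.allclose(lhs, rhs)
\end{verbatim}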
In this section, we will give the characterization of $\xi$-Lie higher derivations with $\xi\neq 1$
on $\mathcal{J}$-subspace
lattice algebras by acting on zero products.
The following lemmas will be used in the sequel.
\begin{lemma}\label{xxsec3.l1}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $ \mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Suppose that $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a family of linear mappings satisfying
$$
L_n([A, B]_{\xi})=\sum_{i+j=n}[L_i(A), L_j(B)]_{\xi}
$$ for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$. Then for each $n\in \mathbb{N}$, $L_n(0)=0$.
\end{lemma}
\begin{proof} In the case of $k=1$, it is easy to check that $L_1(0) = [L_1(0),0]_{\xi} + [0,L_1(0)]_{\xi}=0$. Let $s\in \mathbb{N}$ with $s\geq 1$. Assume that the lemma is true for
all $s<k$. Then by the induction hypothesis we assert
$$
L_k(0)=L_k([0, 0]_{\xi})=\sum_{\substack{i+j=k \\ 0<i,j<k}}[L_i(0), L_j(0)]_{\xi}=0.
$$
\end{proof}
\begin{lemma}\cite[Theorem 3.1]{Qi}\label{xxsec3.l2}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the real or complex field $\mathbb{F}$. Suppose that $L : \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a linear mapping and $1\neq \xi \in \mathbb{F}$. Then $L$ satisfies $L([A,B]_{\xi}) = [L(A),B]_{\xi} + [A,L(B)]_{\xi}$ whenever
$ A, B \in \mathrm{Alg}\mathcal{L}$ with $AB = 0$ if and only if one of the following statements holds.
\begin{enumerate}
\item $\xi\neq 0$ and $L$ is a derivation of $\mathrm{Alg}\mathcal{L}$.
\item $\xi= 0$, $L(I) \in Z(\mathrm{Alg}\mathcal{L})$ and there exists a derivation $\delta : \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ such that $L(A) = \delta(A) + L(I)A$ for all $ A \in \mathrm{Alg}\mathcal{L}$. That is, $L$
is a generalized derivation of $\mathrm{Alg}\mathcal{L}$ with associated derivation $\delta$.
\end{enumerate}
\end{lemma}
The following proposition will permit us to transfer the problem of $\xi$-Lie higher derivations with $\xi\neq 1$ by acting on zero products on $\mathcal{J}$-subspace lattice algebras into the same problem for the corresponding $\xi$-Lie derivations.
\begin{proposition}\label{xxsec3.p1}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $ \mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Suppose that $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a family of linear mappings satisfying
$$
L_n([A, B]_{\xi})=\sum_{i+j=n}[L_i(A), L_j(B)]_{\xi}
$$ whenever $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB=0$. Then there is a family of linear mappings
$\{\delta_n\}_{n=0}^\infty$ satisfying $\delta_n([A, B]_{\xi})=[\delta_n(A), B]_{\xi}+[A, \delta_n(B)]_{\xi}$ for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$ such that
$$
(n+1)L_{n+1}=\sum_{k=0}^n\delta_{k+1}L_{n-k}
$$
for each non-negative integer $n$.
\end{proposition}
\begin{proof}
Let us prove it by induction on the index $n$. If $n=0$, then
$$
L_1([A, B]_{\xi})=[L_1(A), L_0(B)]_{\xi}+[L_0(A),L_1(B)]_{\xi}=[L_1(A), B]_{\xi}+[A,
L_1(B)]_{\xi}
$$
for all $A, B\in \mathrm{Alg}\mathcal{L}$ with $AB= 0$. Let us set $\delta_1=L_1$. Then
$\delta_1([A, B]_{\xi})=[\delta_1(A), B]_{\xi}+[A, \delta_1(B)]_{\xi}$ for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$.
We now suppose that, for each $k\leq n$, the mapping $\delta_k$ has been constructed with the stated property. Define
$$
\delta_{n+1}=(n+1)L_{n+1}-\sum_{k=0}^{n-1}\delta_{k+1}L_{n-k}.
$$
It is sufficient to show that $\delta_{n+1}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ satisfies $$\delta_{n+1}([A, B]_{\xi})=[\delta_{n+1}(A), B]_{\xi}+[A, \delta_{n+1}(B)]_{\xi}$$ for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$.
\textbf{Case 1.} $\xi\neq 0$.
For arbitrary elements $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$, we have
$$
\begin{aligned}
\delta_{n+1}([A,B]_{\xi})&=(n+1)L_{n+1}([A,B]_{\xi})-\sum_{k=0}^{n-1}\delta_{k+1}L_{n-k}([A,B]_{\xi})\\
&=(n+1)\sum_{k=0}^{n+1}[L_k(A),
L_{n+1-k}(B)]_{\xi}\\
& \hspace{10pt}-\sum_{k=0}^{n-1}\delta_{k+1}\left(\sum_{i=0}^{n-k}[L_i(A),
L_{n-k-i}(B)]_{\xi}\right).
\end{aligned}
$$
Using the induction hypothesis and Lemma \ref{xxsec3.l2}, we obtain
$$
\begin{aligned}
\delta_{n+1}([A, B]_{\xi})&=\sum_{k=0}^{n+1}(k+n+1-k)[L_k(A),
L_{n+1-k}(B)]_{\xi}-\\
&\hspace{10pt}\sum_{k=0}^{n-1}\delta_{k+1}\left(\sum_{i=0}^{n-k}[L_i(A), L_{n-k-i}(B)]_{\xi}\right)\\
&=\sum_{k=0}^{n+1}k[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[\delta_{k+1}(L_i(A)),
L_{n-k-i}(B)]_{\xi}\\
&\hspace{10pt}+\sum_{k=0}^{n+1}(n+1-k)[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i(A),
\delta_{k+1}(L_{n-k-i}(B))]_{\xi} .
\end{aligned}
$$
Let us write
$$
\begin{aligned}
U&=\sum_{k=0}^{n+1}k[L_k(A), L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[\delta_{k+1}(L_i(A)), L_{n-k-i}(B)]_{\xi},\\
V&=\sum_{k=0}^{n+1}(n+1-k)[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i(A),
\delta_{k+1}(L_{n-k-i}(B))]_{\xi}.
\end{aligned}
$$
Then $\delta_{n+1}([A,B]_{\xi})=U+V$. In the expression of sum $\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}$, we notice that $k\neq n$ and
$0\leq k+i\leq n$. If we set $r=k+i$, then
$$\begin{aligned}
U&=\sum_{k=0}^{n+1}k[L_k(A), L_{n+1-k}(B)]_{\xi}-\sum_{r=0}^{n}\sum_{0\leq k\leq r,k\neq n}[\delta_{k+1}(L_{r-k}(A)),L_{n-r}(B)]_{\xi}\\
&=\sum_{r=0}^{n}(r+1)[L_{r+1}(A),
L_{n-r}(B)]_{\xi}-\sum_{r=0}^{n-1}\sum_{k=0}^{r}[\delta_{k+1}(L_{r-k}(A)),
L_{n-r}(B)]_{\xi}\\
&\hspace{10pt}- \sum_{k=0}^{n-1}[\delta_{k+1}(L_{n-k}(A)), B]_{\xi}\\
&=\sum_{r=0}^{n-1}[(r+1)L_{r+1}(A)-\sum_{k=0}^{r}\delta_{k+1}(L_{r-k}(A)),L_{n-r}(B)]_{\xi}\hspace{45pt}\\
&\quad +(n+1)[L_{n+1}(A),
B]_{\xi}-\sum_{k=0}^{n-1}[\delta_{k+1}(L_{n-k}(A)), B]_{\xi}.
\end{aligned}$$
Since, by the induction hypothesis, $(r+1)L_{r+1}(A)=\sum_{k=0}^{r}\delta_{k+1}(L_{r-k}(A))$
for $r=0,\cdots, n-1$, we get
$$
U=[(n+1)L_{n+1}(A)-\sum_{k=0}^{n-1}\delta_{k+1}(L_{n-k}(A)),
B]_{\xi}=[\delta_{n+1}(A), B]_{\xi}.
$$
Similarly, one can deduce that $V=[A, \delta_{n+1}(B)]_{\xi}$. We therefore conclude
$$
\delta_{n+1}([A, B]_{\xi})=U+V=[\delta_{n+1}(A), B]_{\xi}+[A, \delta_{n+1}(B)]_{\xi}
$$
for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB=0$, which is the desired result.
\textbf{Case 2.} $\xi=0$.
Note that the sequence $\{\delta_{k+1}\}_{k=0}^{n-1}$ is a family of generalized derivations by Lemma \ref{xxsec3.l2}. That is, for each $k=0, 1,\cdots, n-1$ we have $\delta_{k+1}(A)=\tau_{k+1}(A)+\delta_{k+1}(I)A$ for all $A \in \mathrm{Alg}\mathcal{L}$, where $\tau_{k+1}$ is the associated derivation.
Hence we have
$$
\begin{aligned}
\delta_{n+1}([A,B]_{\xi})&=(n+1)L_{n+1}([A,B]_{\xi})-\sum_{k=0}^{n-1}\delta_{k+1}L_{n-k}([A,B]_{\xi})\\
&=(n+1)\sum_{k=0}^{n+1}[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\delta_{k+1}\left(\sum_{i=0}^{n-k}[L_i(A),
L_{n-k-i}(B)]_{\xi}\right)
\end{aligned}
$$
for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB=0$. In view of Lemma \ref{xxsec3.l1} we know that $L_{n-k}([A,B]_{\xi})=0$ in the above relation. Using the induction hypothesis one can compute
$$
\begin{aligned}
\delta_{n+1}([A, B]_{\xi})&=\sum_{k=0}^{n+1}(k+n+1-k)[L_k(A),
L_{n+1-k}(B)]_{\xi}\\
&\hspace{10pt}-\sum_{k=0}^{n-1}\tau_{k+1}\left(\sum_{i=0}^{n-k}[L_i(A), L_{n-k-i}(B)]_{\xi}\right)\\
&=\sum_{k=0}^{n+1}k[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[\tau_{k+1}(L_i(A)),
L_{n-k-i}(B)]_{\xi}\\
&\hspace{10pt}+\sum_{k=0}^{n+1}(n+1-k)[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i(A),
\tau_{k+1}(L_{n-k-i}(B))]_{\xi}\\
&=\sum_{k=0}^{n+1}k[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[\delta_{k+1}(L_i(A)),
L_{n-k-i}(B)]_{\xi}\\
&\hspace{10pt}+\sum_{k=0}^{n+1}(n+1-k)[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i(A),
\delta_{k+1}(L_{n-k-i}(B))]_{\xi}
\end{aligned}
$$
for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$. If we set
$$
U=\sum_{k=0}^{n+1}k[L_k(A), L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[\delta_{k+1}(L_i(A)), L_{n-k-i}(B)]_{\xi}$$
and
$$
V=\sum_{k=0}^{n+1}(n+1-k)[L_k(A),
L_{n+1-k}(B)]_{\xi}-\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}[L_i(A),
\delta_{k+1}(L_{n-k-i}(B))]_{\xi}.
$$
Then $\delta_{n+1}([A,B]_{\xi})=U+V$ for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$. Similarly, we can show that
$U=[\delta_{n+1}(A), B]_{\xi}$ and $V=[A, \delta_{n+1}(B)]_{\xi}$. Thus we have
$$
\delta_{n+1}([A, B]_{\xi})=U+V=[\delta_{n+1}(A), B]_{\xi}+[A, \delta_{n+1}(B)]_{\xi}
$$
for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$.
\end{proof}
Now we are in a position to state the main theorem of this section.
\begin{theorem}\label{xxsec3t1}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $ \mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Let $\xi\in \mathbb{F}$ with $\xi\neq1$. Suppose that $\mathcal{G}$ is the
set of all $\xi$-Lie higher derivations on $ \mathrm{Alg}\mathcal{L}$ by acting on zero products and that $\mathcal{H}$
is the set of all sequences of $\xi$-Lie derivations on $ \mathrm{Alg}\mathcal{L}$ by acting on zero products
with first component zero. Then there is a one-to-one correspondence
between $\mathcal{G}$ and $\mathcal{H}$.
\end{theorem}
\begin{proof}
It follows from Proposition \ref{xxsec3.p1} that for an arbitrary
$G=\{L_n\}_{n=0}^{\infty}\in\mathcal{G}$ on $ \mathrm{Alg}\mathcal{L}$ satisfying $$
L_n([A, B]_{\xi})=\sum_{i+j=n}[L_i(A), L_j(B)]_{\xi}
$$ whenever $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$, there is a sequence
$D=\{\delta_n\}_{n=0}^\infty$ of linear maps satisfying $\delta_n([A, B]_{\xi})=[\delta_n(A), B]_{\xi}+[A, \delta_n(B)]_{\xi}$ for any $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$ on $\mathrm{Alg}\mathcal{L}$ with $\delta_0=0$ such that
$$
(n+1)L_{n+1}=\sum_{k=0}^n \delta_{k+1}L_{n-k}
$$
for each non-negative integer $n$. Hence the following mapping
$$
\begin{aligned}
\varphi: \mathcal{G} &\longrightarrow \mathcal{H} \\
\{L_n\}_{n=0}^{\infty}=G &\longmapsto
D=\{\delta_n\}_{n=0}^{\infty}
\end{aligned}
$$
is well-defined. Note that the solution of the recursive
relation of Proposition \ref{xxsec3.p1} is unique. Therefore
$\varphi$ is injective.
We next prove that $\varphi$ is also surjective. For a given
sequence $D=\{\delta_n\}_{n=0}^\infty$ in $\mathcal{H}$, that is, a sequence of $\xi$-Lie derivations by acting on zero products with
$\delta_0=0$, one can define $L_0=id$ and
$$
(n+1)L_{n+1}=\sum_{k=0}^n \delta_{k+1}L_{n-k}
$$
for each $n$. It is sufficient to show
that $G=\{L_n\}_{n=0}^{\infty}$ is a $\xi$-Lie higher derivation on $ \mathrm{Alg}\mathcal{L}$ by acting on zero products.
Obviously, $L_1=\delta_1$ is a $\xi$-Lie derivation on $ \mathrm{Alg}\mathcal{L}$ by acting on zero products. Assume that $L_k([A,B]_{\xi})=\sum_{i=0}^k[L_i(A),
L_{k-i}(B)]_{\xi}$ for any $A,B\in \mathrm{Alg}\mathcal{L}$ with $AB=0$ and for each $k\leq n$.
Note that
$$\begin{aligned}
(n+1)L_{n+1}([A, B]_{\xi})&=\sum_{k=0}^n \delta_{k+1}L_{n-k}([A, B]_{\xi})\\
&=\sum_{k=0}^n \delta_{k+1}\left(\sum_{i=0}^{n-k}[L_i(A),
L_{n-k-i}( B)]_{\xi}\right)
\end{aligned}$$
for any $A,B\in \mathrm{Alg}\mathcal{L}$ with $AB=0$.
\textbf{Case 1.} $\xi\neq 0$
Using Lemma \ref{xxsec3.l2} and the induction hypothesis, we compute that
$$
\begin{aligned}
(n+1)L_{n+1}([A, B]_{\xi})&=\sum_{k=0}^n \sum_{i=0}^{n-k}\left\{[\delta_{k+1}(L_i(A)), L_{n-k-i}(B)]_{\xi}+[L_i(A), \delta_{k+1}(L_{n-k-i}(B))]_{\xi}\right\}\\
&=\sum_{i=0}^n[\sum_{k=0}^{n-i}\delta_{k+1}L_{n-i-k}(A), L_i(B)]_{\xi}+
\sum_{i=0}^n[L_i(A), \sum_{k=0}^{n-i}\delta_{k+1}L_{n-i-k}(B)]_{\xi}\\
&=\sum_{i=0}^n[(n-i+1)L_{n-i+1}(A), L_i(B)]_{\xi}+\sum_{i=0}^n[L_i(A), (n-i+1)L_{n-i+1}(B)]_{\xi}\\
&=\sum_{i=1}^{n+1}i[L_i(A), L_{n+1-i}(B)]_{\xi}+\sum_{i=0}^n(n+1-i)[L_i(A), L_{n-i+1}(B)]_{\xi}\\
&=(n+1)\sum_{k=0}^{n+1}[L_k(A), L_{n+1-k}(B)]_{\xi}
\end{aligned}
$$
for any $A,B\in \mathrm{Alg}\mathcal{L}$ with $AB=0$.
\textbf{Case 2.} $\xi=0$
By Lemma \ref{xxsec3.l2} it follows that $\delta_n$ is a generalized derivation with associated derivation $\tau_n$ for each $n\in \mathbb{N}$. That is, $\delta_n(A)=\tau_n(A)+\delta_n(I)A$ holds for all $A\in \mathrm{Alg}\mathcal{L}$. In an analogous manner, one can show
$$
\begin{aligned}
(n+1)L_{n+1}([A, B]_{\xi})&=\sum_{k=0}^n \sum_{i=0}^{n-k}\left\{[\tau_{k+1}(L_i(A)), L_{n-k-i}(B)]_{\xi}+[L_i(A), \tau_{k+1}(L_{n-k-i}(B))]_{\xi}\right\}\\
&=\sum_{i=0}^n[\sum_{k=0}^{n-i}\tau_{k+1}L_{n-i-k}(A), L_i(B)]_{\xi}+
\sum_{i=0}^n[L_i(A), \sum_{k=0}^{n-i}\tau_{k+1}L_{n-i-k}(B)]_{\xi}\\
&=\sum_{i=0}^n[\sum_{k=0}^{n-i}\delta_{k+1}L_{n-i-k}(A), L_i(B)]_{\xi}+
\sum_{i=0}^n[L_i(A), \sum_{k=0}^{n-i}\delta_{k+1}L_{n-i-k}(B)]_{\xi}\\
&=\sum_{i=0}^n[(n-i+1)L_{n-i+1}(A), L_i(B)]_{\xi}+\sum_{i=0}^n[L_i(A), (n-i+1)L_{n-i+1}(B)]_{\xi}\\
&=\sum_{i=1}^{n+1}i[L_i(A), L_{n+1-i}(B)]_{\xi}+\sum_{i=0}^n(n+1-i)[L_i(A), L_{n-i+1}(B)]_{\xi}\\
&=(n+1)\sum_{k=0}^{n+1}[L_k(A), L_{n+1-k}(B)]_{\xi}
\end{aligned}
$$
for any $A,B\in \mathrm{Alg}\mathcal{L}$ with $AB=0$.
In any case, one can get
$$
\begin{aligned}
L_{n+1}([A, B]_{\xi})=\sum_{k=0}^{n+1}[L_k(A), L_{n+1-k}(B)]_{\xi}
\end{aligned}
$$
for any $A,B\in \mathrm{Alg}\mathcal{L}$ with $AB=0$. This shows that
$G=\{L_n\}_{n=0}^{\infty}$ is a $\xi$-Lie higher derivation of
$\mathrm{Alg}\mathcal{L}$ by acting on zero products. Thus $G\in \mathcal{G}$ and this completes the proof.
\end{proof}
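To illustrate the surjectivity step of the above correspondence, the following sketch starts from inner derivations $\delta_k={\rm ad}_{T_k}$ on a matrix algebra, builds $\{L_n\}$ through the recursion $(n+1)L_{n+1}=\sum_{k=0}^{n}\delta_{k+1}L_{n-k}$ with $L_0=id$, and checks numerically that the resulting family satisfies the higher Leibniz rule (and hence, in particular, the $\xi$-Lie identity for every pair of operators). All matrices are random placeholders.

\begin{verbatim}
# Build L_n from derivations delta_k = ad_{T_k} via the recursion
#   (n+1) L_{n+1} = sum_{k=0}^{n} delta_{k+1} L_{n-k},   L_0 = id,
# and check L_n(AB) = sum_{i+j=n} L_i(A) L_j(B) on random 4x4 matrices.
import numpy as np

rng = np.random.default_rng(4)
m, N = 4, 4
Ts = [rng.standard_normal((m, m)) for _ in range(N)]
A, B = rng.standard_normal((m, m)), rng.standard_normal((m, m))

def delta(k, X):                    # delta_k = ad_{T_k}, k = 1..N
    return Ts[k - 1] @ X - X @ Ts[k - 1]

def L(n, X):                        # the recursion, with L_0 = id
    if n == 0:
        return X
    return sum(delta(k + 1, L(n - 1 - k, X)) for k in range(n)) / n

for n in range(N + 1):
    rhs = sum(L(i, A) @ L(n - i, B) for i in range(n + 1))
    assert np.allclose(L(n, A @ B), rhs)
\end{verbatim}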
Before stating the second main result in this section, we need a conclusion that characterizes generalized higher derivations in terms of generalized derivations.
\begin{lemma}\label{xxsec3.l5}
Let $\mathcal{A}$ be an associative algebra and $G=\{L_n\}_{n=0}^{\infty}$ be a generalized higher derivation with an associated higher derivation
$D=\{d_n\}_{n=0}^{\infty}$.
Then there is a family of generalized derivations $\{\gamma_n\}_{n\in \mathbb{N}}$ with the family of associated derivations $\{\tau_n\}_{n\in \mathbb{N}}$ such that
$$
(n+1)L_{n+1}=\sum_{k=0}^n \gamma_{k+1}L_{n-k},\ \
(n+1)d_{n+1}=\sum_{k=0}^n \tau_{k+1}d_{n-k}$$
for each nonnegative integer $n$.
\end{lemma}
\begin{proof}
Let us show it by induction on $n$. If $n=0$, then
$$
L_1(xy)=L_1(x)y+xd_1(y)
$$
for all $x, y\in\mathcal{A}$. Let us write $\gamma_1=L_1$ and $\tau_1=d_1$.
Then $\gamma_1$ is a generalized derivation of $\mathcal{A}$ with associated derivation $\tau_1$.
Suppose that $\gamma_k$ is a generalized derivation of $\mathcal{A}$ with associated derivation $\tau_k$ and that associated derivation $\tau_k$ satisfies
$kd_{k}=\sum_{s=0}^{k-1}\tau_{s+1}d_{k-1-s}$
for each $k\leq n$. Let us define
$$
\gamma_{n+1}=(n+1)L_{n+1}-\sum_{k=0}^{n-1}\gamma_{k+1}L_{n-k}.
$$
It is sufficient to prove that $\gamma_{n+1}$ is a generalized derivation of $\mathcal{A}$.
For any $x,y \in\mathcal{A}$, we have
$$\begin{aligned}
\gamma_{n+1}( x y )&=(n+1)L_{n+1}(xy)-\sum_{k=0}^{n-1}\gamma_{k+1}L_{n-k}(xy)\\
&=(n+1)\sum_{k=0}^{n+1}L_k(x)d_{n+1-k}(y) -\sum_{k=0}^{n-1}\gamma_{k+1}\left(\sum_{i=0}^{n-k}L_i(x)d_{n-k-i}(y) \right).
\end{aligned}
$$
Therefore,
$$\begin{aligned}
\gamma_{n+1} (x y) &=\sum_{k=0}^{n+1}(k+n+1-k) L_k(x) d_{n+1-k}(y) -
\sum_{k=0}^{n-1}\gamma_{k+1}\left(\sum_{i=0}^{n-k} L_i(x) d_{n-k-i}(y) \right)\\
&=\sum_{k=0}^{n+1}k L_k(x) d_{n+1-k}(y) +\sum_{k=0}^{n+1} (n+1-k)L_k(x)d_{n+1-k}(y) \\
&\quad -\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}\{ \gamma_{k+1} (L_i(x)) d_{n-k-i}(y) + L_i(x) \tau_{k+1} (d_{n-k-i}(y)) \}.
\end{aligned}$$
Let us write
$$\begin{aligned}
U&=\sum_{k=0}^{n+1}k L_k(x) d_{n+1-k}(y) -\sum_{k=0}^{n-1}\sum_{i=0}^{n-k} \gamma_{k+1}(L_i(x)) d_{n-k-i}(y), \\
V&=\sum_{k=0}^{n+1} (n+1-k)L_k(x)d_{n+1-k}(y) -\sum_{k=0}^{n-1}\sum_{i=0}^{n-k} L_i(x) \tau_{k+1}(d_{n-k-i}(y)).
\end{aligned}$$
Thus $\gamma_{n+1}(xy)=U+V$. In the double sums $\sum_{k=0}^{n-1}\sum_{i=0}^{n-k}$ we have $0\leq k+i\leq n$
and $k\neq n$. If we set $r=k+i$, then
$$\begin{aligned}
U&=\sum_{k=0}^{n+1}k L_k(x) d_{n+1-k}(y) -\sum_{r=0}^{n}\sum_{0\leq k\leq r, k\neq n} \gamma_{k+1}(L_{r-k}(x)) d_{n-r}(y) \\
&=\sum_{k=0}^{n+1}k L_k(x) d_{n+1-k}(y) -\sum_{r=0}^{n-1}\sum_{k=0}^{r} \gamma_{k+1}(L_{r-k}(x)) d_{n-r}(y) -
\sum_{k=0}^{n-1} \gamma_{k+1}(L_{n-k}(x)) y \\
&=\sum_{r=0}^{n}(r+1) L_{r+1}(x) d_{n-r}(y) -\sum_{r=0}^{n-1}\sum_{k=0}^{r} \gamma_{k+1}(L_{r-k}(x)) d_{n-r}(y) -
\sum_{k=0}^{n-1} \gamma_{k+1}(L_{n-k}(x)) y \\
&=\sum_{r=0}^{n-1}[ (r+1)L_{r+1}(x)-\sum_{k=0}^{r}\gamma_{k+1}(L_{r-k}(x)) ] d_{n-r}(y) \\
&\quad +(n+1) L_{n+1}(x) y -\sum_{k=0}^{n-1} \gamma_{k+1}(L_{n-k}(x)) y .
\end{aligned}$$
By the induction hypothesis, $(r+1)L_{r+1}(x)=\sum_{k=0}^{r}\gamma_{k+1}(L_{r-k}(x))$ for $r=0,\cdots,n-1$.
Therefore, we deduce that
$$
U= [(n+1)L_{n+1}(x)-\sum_{k=0}^{n-1}\gamma_{k+1}(L_{n-k}(x))] y=\gamma_{n+1}(x) y.
$$
On the other hand, a direct computation shows that
$$\begin{aligned}
V&=\sum_{k=0}^{n+1} (n+1-k)L_k(x)d_{n+1-k}(y) -\sum_{k=0}^{n-1}\sum_{i=0}^{n-k} L_i(x) \tau_{k+1}(d_{n-k-i}(y)) \\
&=\sum_{k=0}^{n+1} (n+1-k)L_k(x)d_{n+1-k}(y) -\sum_{i=0}^{n}\sum_{k=0}^{n-i} L_i(x) \tau_{k+1}(d_{n-k-i}(y)) + x \tau_{n+1}(y) \\
&=\sum_{i=0}^{n} (n+1-i)L_i(x)d_{n+1-i}(y) -\sum_{i=0}^{n}\sum_{k=0}^{n-i} L_i(x) \tau_{k+1}(d_{n-k-i}(y)) + x \tau_{n+1}(y).
\end{aligned}$$
Using the induction hypothesis again, we obtain
$$\begin{aligned}
V&=\sum_{i=0}^{n}L_i(x)[(n+1-i)d_{n+1-i}(y)-\sum_{k=0}^{n-i}\tau_{k+1}(d_{n-k-i}(y))] + x \tau_{n+1}(y) \\
&=x[(n+1)d_{n+1}(y)-\sum_{k=0}^{n-1}\tau_{k+1}(d_{n-k}(y))-\tau_{n+1}(y)] + x \tau_{n+1}(y) \\
&= x \tau_{n+1}(y) ,
\end{aligned}$$
where $\tau_{n+1}=(n+1)d_{n+1}-\sum_{k=0}^{n-1}\tau_{k+1}d_{n-k}$.
Hence $$\gamma_{n+1}(xy)=\gamma_{n+1}(x)y+x \tau_{n+1}(y).$$ Therefore $\gamma_{n+1}$ is a generalized derivation of $\mathcal{A}$ with associated derivation $\tau_{n+1}$, and
this completes the proof.
\end{proof}
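For concreteness, the first few instances of the recursion in Lemma \ref{xxsec3.l5} read
$$
L_1=\gamma_1 L_0,\qquad 2L_2=\gamma_1 L_1+\gamma_2 L_0,\qquad 3L_3=\gamma_1 L_2+\gamma_2 L_1+\gamma_3 L_0,
$$
so that, as in the proof above (which takes $\gamma_1=L_1$ and $\tau_1=d_1$), each $\gamma_{n+1}$ is obtained recursively via $\gamma_{n+1}=(n+1)L_{n+1}-\sum_{k=0}^{n-1}\gamma_{k+1}L_{n-k}$, for example $\gamma_2=2L_2-\gamma_1L_1$; the relations between the $\tau_n$ and the $d_n$ have exactly the same form.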
The second main result in this section reads as follows.
\begin{theorem}\label{xxsec2.1}
Let $\mathcal{L}$ be a $\mathcal{J}$-subspace lattice on a Banach space $X$ over the
real or complex field $\mathbb{F}$ and $\mathrm{Alg}\mathcal{L}$ be the associated $\mathcal{J}$-subspace
lattice algebra. Suppose that $\{L_n\}_{n=0}^{\infty}: \mathrm{Alg}\mathcal{L}\rightarrow \mathrm{Alg}\mathcal{L}$ is a family of linear mappings and $\xi\in \Bbb{F}$ with $\xi\neq 1$. Then $\{L_n\}_{n=0}^{\infty}$ satisfies
$$
L_n([A, B]_{\xi})=\sum_{i+j=n}[L_i(A), L_j(B)]_{\xi}
$$ for any $A, B\in\mathrm{Alg}\mathcal{L}$ with $AB = 0$ if and only if one of the following statements hold.
\begin{enumerate}
\item $\xi\neq 0, $ $\{L_n\}_{n=0}^{\infty}$ is a higher derivation of $\mathrm{Alg}\mathcal{L}$.
\item $\xi= 0, $ $\{L_n\}_{n=0}^{\infty}$ is a generalized higher derivation of $\mathrm{Alg}\mathcal{L}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{Case 1.} $\xi\neq 0$
The proof can be obtained by using Lemma \ref{xxsec3.l2}, Theorem \ref{xxsec3t1} and \cite[Theorem 2.4]{Han1}.
\textbf{Case 2.} $\xi= 0$
The ``if'' part can be obtained from Lemma \ref{xxsec3.l2}, Lemma \ref{xxsec3.l5} and Theorem \ref{xxsec3t1}.
We need only consider the ``only if'' part.
Proposition \ref{xxsec3.p1} means that there is a family $\{\delta_n\}_{n=0}^\infty$ of linear mappings satisfying $\delta_n([A, B]_{\xi})=[\delta_n(A), B]_{\xi}+[A, \delta_n(B)]_{\xi}$ whenever $A, B \in \mathrm{Alg}\mathcal{L}$ with $AB= 0$ such that
$
(n+1)L_{n+1}=\sum_{k=0}^n\delta_{k+1}L_{n-k}
$
for each nonnegative integer $n$. By Lemma \ref{xxsec3.l2}, each $\delta_n$ is a generalized derivation; denote its associated derivation by $\tau_n$ for each nonnegative integer $n$.
To prove that $\{L_n\}_{n=0}^{\infty}$ is a generalized higher derivation, we proceed by induction on $n$. It is clear that $L_1=\delta_1$ is a generalized derivation of $\mathrm{Alg}\mathcal{L}$ with associated derivation $\tau_1$. Suppose that
$L_k(AB)=\sum_{i=0}^kL_i(A)d_{k-i}(B)$ for all $A, B \in \mathrm{Alg}\mathcal{L}$ and
$kd_k=\sum_{s=0}^{k-1}\tau_{s+1}d_{k-1-s}$ for all $k\leq n$.
Let us now prove that $L_{n+1}(AB)=\sum_{i=0}^{n+1}L_i(A)d_{n+1-i}(B)$ for all $A, B \in \mathrm{Alg}\mathcal{L}$. Note that
$$\begin{aligned}
(n+1)L_{n+1}(AB)&=\sum_{k=0}^n \delta_{k+1}L_{n-k}(AB)\\
&=\sum_{k=0}^{n }\sum_{i=0}^{n-k}\delta_{k+1}(L_i(A)
d_{n-k-i}(B))\\&=\sum_{k=0}^n \sum_{i=0}^{n-k}\left\{[\delta_{k+1}(L_i(A))d_{n-k-i}(B)+L_i(A)\tau_{k+1}(d_{n-k-i}(B))]\right\}\\&=\sum_{i=0}^n\left(\sum_{k=0}^{n-i}\delta_{k+1}L_{n-i-k}(A)\right)d_i(B)+
\sum_{i=0}^nL_i(A)\left(\sum_{k=0}^{n-i}\tau_{k+1}d_{n-i-k}(B)\right).
\end{aligned}$$
So by the induction hypothesis we can do the following computation.
$$
\begin{aligned}
(n+1)L_{n+1}(AB)
&=\sum_{i=0}^n(n-i+1)L_{n-i+1}(A)d_i(B)+A \sum_{k=0}^{n}\tau_{k+1}d_{n-k}(B)\\
&\ +\sum_{i=1}^nL_i(A) (n-i+1)d_{n-i+1}(B)\\
&=\sum_{i=1}^{n+1}iL_i(A)d_{n+1-i}(B)+A \sum_{k=0}^{n}\tau_{k+1}d_{n-k}(B)\\
&\ +\sum_{i=1}^n(n+1-i)L_i(A)d_{n-i+1}(B)\\
&=\sum_{i=1}^{n+1}(n+1)L_i(A)d_{n+1-i}(B)+A \sum_{k=0}^{n}\tau_{k+1}d_{n-k}(B)\\
&=(n+1)\sum_{k=0}^{n+1}L_k(A)d_{n+1-k}(B),
\end{aligned}
$$
where $(n+1)d_{n+1}=\sum_{k=0}^{n}\tau_{k+1}d_{n-k}$. This completes the
proof.
\end{proof}
With the discovery of the missing piece, the Higgs boson at the Large Hadron Collider (LHC)~\cite{Chatrchyan:2012xdj, Aad:2012tfa} at CERN the Standard Model (SM) of particle physics
has been turned into a complete theory. Over the last several decades it has become well established that most of the theoretical predictions of this theory are in good agreement with experimental results. At the same time, however, experimental results in several directions compel us to formulate physics beyond the SM (BSM). For example, the dark matter relic density has been measured with
great precision from the temperature and polarization anisotropies of the cosmic
microwave background (CMB) radiation by experiments like WMAP \cite{Hinshaw:2012aka}
and Planck \cite{Ade:2015xua}. In addition, various indirect evidence, such as
galactic rotation curves \cite{Sofue:2000jx}, gravitational lensing of distant objects \cite{Bartelmann:1999yn}
and collisions between galaxy clusters (e.g. the Bullet cluster \cite{Clowe:2003tk}),
strongly supports the existence of dark matter. However, the SM contains no suitable dark matter candidate. On the other hand, neutrino oscillation
experiments \cite{Fukuda:1998mi, Ahmad:2002jz, Araki:2004mb, Abe:2011sj}
have firmly established the massive nature of at least two neutrinos
and have accurately measured the three intergenerational mixing angles, both of
which are missing in the SM due to the absence of right handed counterparts
of the left handed neutrinos. Besides, the CP-violation in the quark sector is
not at all sufficient to explain the observed baryon asymmetry of the Universe
\cite{Tanabashi:2018oca}. Furthermore, there is an enduring
$\sim 3.5\sigma$ discrepancy \cite{Tanabashi:2018oca} between the experimentally measured value of
the anomalous magnetic moment of the muon [$(g-2)_\mu$] and its SM prediction, which strongly indicates
the presence of new physics (NP) beyond the SM.
Apart from the above mentioned facts, over the last few years different flavour physics experiments such as LHCb, Belle and BaBar have consistently shown that experimental data for several observables are in significant disagreement with the corresponding SM predictions. This situation indeed calls for NP effects. Recently the LHCb collaboration has reported additional hints of the violation of Lepton Flavour Universality (LFU) between $b \to s \mu^+ \mu^-$ and $b \to s e^+ e^-$ processes. The LFU violation\footnote{Evidence of LFUV via the charged current semileptonic $b \to c \ell \nu$ transition has also been observed. For example, experimental results show significant deviations of the observables $R_{D^{(*)}}$~\cite{average} and $R_{J/\psi}$~\cite{Aaij:2017tyk} from the corresponding SM predictions.} (LFUV) can be quantified with the help of the following observables $R_K$ and $R_{K^*}$
\begin{eqnarray}
R_{K^{(*)}} =\frac{{\rm Br} \left( B \to K^{(*)} \mu^+ \mu^-\right)}
{ {\rm Br} \left( B \to K^{(*)} e^+ e^-\right)} \,.
\end{eqnarray}
A summary of the corresponding experimental results, together with their SM predictions, for different di-lepton invariant mass squared ($q^2$) ranges is given in Table~\ref{exp-data}.
\begin{table}[H]\label{exp-data}
\begin{center}
\begin{tabular}{|c|cr|cr|cr|}
\hline
Observable & ~~~~SM prediction & & Measurement & &Deviations &\\
\hline
$R_K : q^2 = [1.1,6] \, \text{GeV}^2$ & $1.00 \pm 0.01 $& \cite{Descotes-Genon:2015uva,Bordone:2016gaq} & $0.846^{+0.060+0.016}_{-0.054-0.014}$ & \cite{Aaij:2019wad} & 2.5$\sigma$ &\\
\hline
$R_{K^*} ^{\rm low}: q^2 = [0.045,1.1] \, \text{GeV}^2$ & $0.92 \pm 0.02$ & \cite{Capdevila:2017bsm} & $0.660^{+0.110}_{-0.070} \pm 0.024$ &
\cite{Aaij:2017vbb} & $2.1\sigma-2.3\sigma$ &\\
\hline
$R_{K^*}^{\rm central} : q^2 = [1.1,6] \, \text{GeV}^2$ & $1.00 \pm 0.01 $& \cite{Descotes-Genon:2015uva,Bordone:2016gaq} & $0.685^{+0.113}_{-0.069} \pm 0.047$ & \cite{Aaij:2017vbb} & $2.4\sigma-2.5\sigma$ &\\
\hline
\end{tabular}
\caption{The experimental values of the observables along with their SM predictions for different ranges of $q^2$.}
\end{center}
\end{table}
The deviations from the SM predictions shown in Table~\ref{exp-data}\footnote{For $R_{K^*}$, new preliminary measurements have been reported by Belle \cite{RKstar_Belle_update} for two $q^2$ ranges. For $q^2\in[0.1, 8]$ GeV$^2$ the value of $R_{K^*}$ is $0.90^{+0.27}_{-0.21}\pm 0.10$ while for $q^2\in[15, 19]$ GeV$^2$ the corresponding value is $1.18^{+0.52}_{-0.32}\pm 0.10$.} can be resolved by invoking additional
NP contributions to some of the Wilson Coefficients (WCs) which enter the effective Hamiltonian for
the $b\to s \ell \ell$ ($\ell\equiv$ charged lepton, i.e., electron ($e$) or muon ($\mu$)) transition.
Furthermore, when these anomalies are analysed together with other observables for the rare $b\to s \mu \mu$ transitions, it has been observed that a NP scenario with an {\it additional} contribution to the WC $C^\mu_9$ (but not to
$C^e_9$) is preferred. The operator corresponding to the WC $C^\ell_9$ is $\mathcal{O}_9\equiv \frac{e^2}{16\pi^2}(\bar{s} \gamma_{\alpha} P_{L} b)(\bar{\ell} \gamma^\alpha \ell)$. From Table~\ref{exp-data}, it is readily evident that the NP interferes destructively with the SM, which ensures that the sign of $C_9^{\text{NP},\mu}$
is negative. The best-fit value of $C_9^{\text{NP},\mu}$ is $\approx -1$
~\cite{Descotes-Genon:2013wba,Hiller:2014yaa,Ghosh:2014awa,Altmannshofer:2014rta,Descotes-Genon:2015uva,Hurth:2016fbr, Capdevila:2017bsm, Altmannshofer:2017yso, Aebischer:2019mlg}. Moreover, a NP scenario with $C_9^{\text{NP},\mu} = -C_{10}^{\text{NP},\mu}$ (where the WC $C^\ell_{10}$ is associated with the operator $\mathcal{O}_{10}\equiv \frac{e^2}{16\pi^2}(\bar{s} \gamma_{\alpha} P_{L} b)(\bar{\ell} \gamma^\alpha\gamma_5\ell)$) is also very appealing from the
model building point of view~\cite{Ghosh:2014awa,Altmannshofer:2014rta,Descotes-Genon:2015uva,Hurth:2016fbr,Capdevila:2017bsm, Altmannshofer:2017yso, Aebischer:2019mlg}.
Inspired by these results, several BSM scenarios involving an extra non-standard
$Z$-boson\;\cite{Gauld:2013qba,Glashow:2014iga,Bhattacharya:2014wla, Crivellin:2015mga,
Crivellin:2015era,Celis:2015ara,Sierra:2015fma,Belanger:2015nma,Gripaios:2015gra,
Allanach:2015gkd,Fuyuto:2015gmk,Chiang:2016qov,Boucenna:2016wpr,Boucenna:2016qad,Celis:2016ayl,
Altmannshofer:2016jzy,Bhattacharya:2016mcc,Crivellin:2016ejn,Becirevic:2016zri,GarciaGarcia:2016nvr,Bhatia:2017tgo,Ko:2017yrd,Chen:2017usq,Baek:2017sew,Bonilla:2017lsq,Barman:2018jhz} and leptoquark~\cite{Hiller:2014yaa,Biswas:2014gga,Gripaios:2014tna,Sahoo:2015wya,Becirevic:2015asa,
Alonso:2015sja,Calibbi:2015kma,
Huang:2015vpt,Pas:2015hca,Bauer:2015knc,Fajfer:2015ycq,Barbieri:2015yvd,
Sahoo:2015pzk,
Dorsner:2016wpm,Sahoo:2016nvx,Das:2016vkr,Chen:2016dip,Becirevic:2016oho,Becirevic:2016yqi,Bhattacharya:2016mcc,Sahoo:2016pet,
Barbieri:2016las,Cox:2016epl, Alok:2017sui, Hati:2018fzc}
have been shown to provide viable interpretations of the anomalies.
In this article, we ameliorate some of these problems in a correlated manner within a single framework by introducing an extra local ${\rm U}(1)_{L_\mu-L_\tau}$ symmetry to the SM gauge symmetry,
where $L_{\mu}$ and $L_{\tau}$ indicate lepton numbers for the second and third
generations of charged leptons and their corresponding neutrinos.
Apart from being an anomaly free gauged ${\rm U}(1)$ extension, the ${L_\mu-L_\tau}$
symmetry naturally violates LFU between $e$ and $\mu$ because the ${L_\mu-L_\tau}$
charges of the leptons are such that the corresponding new non-standard gauge boson
couples only to $\mu$ ($\tau$) but not to $e$. This scenario was originally formulated
by Volkas et al.\,\cite{He:1990pn,He:1991qd}. Thereafter, several variants of
${\rm U}(1)_{L_\mu-L_\tau}$ model have been studied in the context of different
phenomenological purposes: e.g.,\,\,contribution of the ${\rm U}(1)_{L_\mu-L_\tau}$
gauge boson to explain the $(g-2)_\mu$ anomaly~\cite{Ma:2001md, Baek:2001kca,
Heeck:2011wj, Harigaya:2013twa, Altmannshofer:2016brv, Biswas:2016yan, Biswas:2016yjr,
Banerjee:2018eaf}, dark matter phenomenology~\cite{Baek:2008nz, Das:2013jca, Patra:2016shz,
Biswas:2016yan, Biswas:2016yjr, Biswas:2017ait, Foldenauer:2018zrz},
generation of neutrino masses and mixing parameters~\cite{Ma:2001md, Choubey:2004hn,
Adhikary:2006rf, Baek:2015mna, Xing:2015fdg, Biswas:2016yan, Banerjee:2018eaf} etc.
For the purpose of explaining $b \to s \mu^+ \mu^-$ anomaly, this type of ${\rm U}(1)_{L_\mu-L_\tau}$ model has also been modified from its minimal version, {\it albeit} in a different approach \cite{Altmannshofer:2014cfa, Crivellin:2015mga,
Altmannshofer:2015mqa, Arnan:2016cpy, Altmannshofer:2016jzy, Chen:2017usq, Baek:2017sew,
Singirala:2018mio, Hutauruk:2019crc, Baek:2019qte}. In the present article, we introduce a $\mathbb{Z}_2$-odd bottom-quark-like non-standard fermion field
$\chi$ which is vectorial in nature under the ${\rm U}(1)_{L_\mu-L_\tau}$ symmetry.
It couples to all generations of the down-type SM quarks via a Yukawa like interaction involving
a $\mathbb{Z}_2$-odd scalar doublet $\Phi$. Moreover, we
introduce a $\mathbb{Z}_2$-odd singlet scalar $S$ which helps us to explain the flavour
anomaly, dark matter and $(g-2)_\mu$ anomaly simultaneously. A $\mathbb{Z}_2$-even complex
scalar singlet field $\eta$ with a nonzero $L_{\mu}-L_{\tau}$ charge
has been introduced in order to break the
U(1)$_{L_\mu-L_\tau}$ symmetry spontaneously. Apart from these fields we have
the usual Higgs doublet field $H$ which breaks the SU(2)$_{L}\times {\rm U}(1)_{Y}$ symmetry.
Therefore, in the broken phase of both electroweak (SU(2)$_{L}\times {\rm U}(1)_{Y}$)
and U(1)$_{L_\mu-L_\tau}$ symmetries, we have three physical $\mathbb{Z}_2$-odd
neutral scalars emerge from the mixing between $\Phi$ and $S$. The
lightest field among the three physical $\mathbb{Z}_2$-odd
neutral scalars can be considered as a potentially viable dark matter candidate. It is an admixture of the doublet and singlet
scalar representations and has a distinct phenomenology compared
to the standard Inert Doublet \cite{Barbieri:2006dq, LopezHonorez:2006gr, Lundstrom:2008ai}
and the Scalar Singlet models \cite{McDonald:1993ex, Burgess:2000yq, Biswas:2011td, Cline:2013gha},
where the low mass dark matter regime is almost ruled out by the latest bound on the spin independent
scattering cross section from XENON1T \cite{Aprile:2018dbl} as well as by the upper limit on Higgs invisible
branching fraction from LHC \cite{Khachatryan:2016whc}. This is mainly due to the
fact that in these models in the low mass regime ($M_{\rm DM}\leq 62.5$ GeV),
dark matter candidate predominantly
annihilates into $b\bar{b}$ final state.
On the contrary, in the present
scenario, the dark matter candidate in the low mass regime can annihilate
into a pair of $L_{\mu}-L_{\tau}$ gauge boson $Z_{\mu\tau}$ and the branching fraction of
this annihilation channel is controlled by dark sector mixing angle
$\theta_D$. This makes the dark matter freeze-out process
strongly correlated with the flavour physics anomalies and the $(g-2)_{\mu}$ anomaly,
where an $\mathcal{O}(\rm MeV)$ light $Z_{\mu\tau}$ plays a pivotal role.
Since, $Z_{\mu\tau}$ does not have direct couplings to the first generation
leptons and quarks, constraints from the LEP and more recently from the LHC
on the $g_{Z_{\mu\tau}}-M_{Z_{\mu\tau}}$ plane are relatively relaxed. Particularly,
a light gauge boson with $M_{Z_{\mu\tau}}\la 100$ MeV and
a moderate gauge coupling $g_{Z_{\mu\tau}}\la 10^{-3}$ is still allowed
by the experiments measuring neutrino trident processes, namely CCFR \cite{Mishra:1991bv}
and CHARM-II \cite{Geiregat:1990gz}. Moreover, apart from the $(g-2)_{\mu}$ anomaly and
flavour physics related issues, such a light gauge boson has excellent
cosmological implications. The reason is that it can relax the $\sim3\sigma$ tension
between the measurements of Hubble constant ($H_0$) from two
different epochs\footnote{At two different redshifts ($z$), one is from
the CMB experiment Planck \cite{Ade:2015xua} at high $z$ while another one
is from the local measurement using Hubble Space Telescope \cite{Riess:2016jrr}
at low $z$.}
by providing extra contribution to the
radiation energy density ($\Delta{N_{eff}}\sim 0.2-0.5$)
through the alteration of neutrino decoupling temperature \cite{Escudero:2019gzq}.
In the present scenario the non-standard neutral gauge boson $Z_{\mu\tau}$ emerges from
the three neutral gauge bosons associated with the ${\rm SU}(2)_{L}$,
${\rm U}(1)_{Y}$ and ${\rm U}(1)_{L_{\mu}-L_{\tau}}$ gauge groups after diagonalising a $3\times 3$ mixing matrix.
The additional contribution to the anomalous magnetic moment of the muon
comes from an effective $\mu^{+} \mu^{-} \gamma$ vertex which is generated from a one loop penguin
diagram involving $Z_{\mu\tau}$. Moreover, we also have a one
loop contribution from a diagram involving the other BSM scalar (the state orthogonal to
the SM-like Higgs boson, arising from the mixing between
$H$ and $\eta$ in the broken phase of the theory). However, its effect on $(g-2)_{\mu}$ is negligibly small.
To this end, we would like to mention another novel signature of the present scenario.
The correlation between the dark sector and the flavour physics sector arises
not only from the $L_{\mu}-L_{\tau}$ gauge boson but also from all
the $\mathbb{Z}_2$-odd neutral particles (including the dark matter candidate of the present scenario), which, along
with the coloured $\mathbb{Z}_2$-odd fermion $\chi$, generate non-standard one loop contributions
to the $b \to s\mu^+\mu^-$ transition. In the present scenario, one obtains
non-standard contributions to both the WCs $C^\mu_9$ and $C^\mu_{10}$;
however, the contribution to the latter is insignificant and hence our analysis will
be based on $C_9^{\text{NP},\mu}$ only. The NP contribution to $C^\mu_9$
is obtained from non-standard penguin and self-energy diagrams, and there is no
further NP contribution from box diagrams at the one loop level. Moreover, we consider the constraint from the branching ratio of another flavour changing neutral current (FCNC) process, $B \to X_s \gamma$.
Hence, we have computed the branching ratio of this decay in the present scenario.
Further, neutrino masses and mixings can easily be addressed in this class
of $L_{\mu}-L_{\tau}$ models via the Type-I seesaw mechanism by adding three right handed
neutrinos, which are singlets under the SM gauge group, two of which have equal
and opposite $L_{\mu}-L_{\tau}$ charges for anomaly cancellation. A detailed
analysis of neutrino masses and mixings in the present scenario is beyond
the scope of this article; for the sake of completeness, we simply add three
right handed neutrinos to the Lagrangian and find the Majorana mass matrix for the
light neutrinos. A more comprehensive analysis of the diagonalisation of the light neutrino
mass matrix, and thereby of the mass eigenvalues and mixing angles, in
the $L_{\mu}-L_{\tau}$ scenario has already been performed in \cite{Biswas:2016yan}.
Finally, in order to impose the constraints on the parameter space of the
present scenario from the LHC experiment, we use the latest ATLAS data \cite{ATLAS:2019vcr} of
non-observation of a resonant $\ell^{+}\ell^{-}$ signal at the LHC running at 13 TeV
for the high mass range of $Z_{\mu\tau}$. Hence, we will estimate the cross section
for the process $pp \to Z_{\mu\tau} \to \ell^+ \ell^-$ at the 13 TeV LHC in the present
scenario. An interesting part of this exercise will be to see how
the LHC data constrain the values of the non-standard gauge coupling constant as well
as the $Z-Z_{\mu\tau}$ mixing angle.
The article is organised as follows. In Sec. \ref{model} we introduce the model with its field content and interactions and set our notation. In Sec. \ref{flav} we present the calculational details of the flavour physics observables, and we then discuss the $(g-2)_\mu$ anomaly in Sec. \ref{g2}. In Sec. \ref{dm}, we show the viability of the dark matter candidate of the present scenario, considering all relevant bounds from ongoing experiments, and explain how we can correlate the dark matter with the flavour physics anomalies. We briefly discuss neutrino mass generation via the Type-I seesaw mechanism
in Sec. \ref{neu}. Sec. \ref{cldr} deals with the constraints obtained from the non-observation
of a resonant $\ell^{+}\ell^{-}$ signal at the LHC running at 13 TeV. Finally, we summarize our
results and conclude in Sec. \ref{con}.
\section{The \boldmath${\rm U(1)}_{L_{\mu}-L_{\tau}}$ model}\label{model}
In order to facilitate our motivations (discussed in Section \ref{intro}), we propose
an anomaly free ${\rm U}(1)_{L_{\mu}-L_{\tau}}$ gauge extension of the SM.
This scenario is free from mixed gauge-gravitational and axial vector
gauge anomalies because these anomalies cancel between second and third
generations of charged leptons and their corresponding neutrinos
due to their equal and opposite ${L_{\mu}-L_{\tau}}$ charges. The
Lagrangian which remains invariant under the
${\rm SU}(3)_{C}\times {\rm SU}(2)_{L} \times {\rm U}(1)_{Y}
\times {\rm U}(1)_{L_\mu-L_\tau}\times \mathbb{Z}_2$ symmetry is given by,
\begin{eqnarray}\label{LagT}
\mathcal{L}&=&\mathcal{L}_{\rm SM} + \mathcal{L}_{N} + \mathcal{L}_{\chi}
+ (D_{\alpha}{\eta})^{\dagger} (D^{\alpha}{\eta}) +
(D_{\alpha}{\Phi})^{\dagger} (D^{\alpha}{\Phi}) + \frac 12 \partial_{\alpha}S \partial^{\alpha}S\\ \nonumber
&-&\frac{1}{4} \hat{B}_{\alpha \beta} \hat{B}^{\alpha \beta}
- \frac{1}{4} \hat{X}_{\alpha \beta} \hat{X}^{\alpha \beta} +
\frac{\epsilon}{2} \hat{X}_{\alpha \beta} \hat{B}^{\alpha \beta}-V(H, \eta, \Phi, S)\;,
\end{eqnarray}
where
\begin{eqnarray}
\hat{B}_{\alpha \beta} &=& \partial_{\alpha} \hat{B}_\beta - \partial_{\beta}
\hat{B}_{\alpha} \,\,\,\, {\rm and} \,\,\, \hat{X}_{\alpha \beta} =
\partial_{\alpha} \hat{X}_\beta - \partial_{\beta} \hat{X}_{\alpha} \,\,,
\label{fieldtensor}
\end{eqnarray}
are field strength tensors for the two U(1) gauge fields\footnote{We are denoting the basis of gauge
fields having off-diagonal kinetic term by using a hat notation.} $\hat{B}_{\alpha}$
and $\hat{X}_{\alpha}$ respectively while the Lorentz indices $\alpha,\;\beta\equiv
0,1\ldots 3$. The term containing both field strength tensors is the
kinetic mixing term between $\hat{B}_{\alpha}$ and $\hat{X}_{\alpha}$,
which is not forbidden by any of the symmetries of the present model. Full list of particle contents
and their quantum numbers under various symmetry groups are given
in Table~\ref{gauqn}.
\begin{table}[h!]
\centering
\footnotesize
\resizebox{16cm}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline\hline
\multicolumn{1}{|c|}{Gauge groups}& \multicolumn{13}{|c|}{Fermion fields}
& \multicolumn{4}{|c|}{Scalar fields} \\
\cline{2-18}
{} &\multicolumn{3}{|c|}{Quark fields}&\multicolumn{9}{|c|}{Lepton fields}
& \multicolumn{1}{|c|}{} & {}& {} & {}&\\
\cline{2-13}
{} &$Q_{Li}$ & $u_{Ri}$ & $d_{Ri}$ & $L_{Le}$ & $L_{L\mu}$ &
$L_{L\tau}$ & $e_{R}$ & $\mu_{R}$ & $\tau_{R}$ & $N_{eR}$ &
$N_{\mu R}$ & $N_{\tau R}$ &~$\chi$~&~$H$~&~$\eta$~&~$\Phi$~&$S$
\\ \hline
\multicolumn{1}{|c|}{${\rm SU}(3)_C$} & 3 & 3 & 3 & 1 & 1 & 1 & 1 & 1 & 1
& 1 & 1 & 1 & 3 & 1 & 1 & 1 & 1\\ \hline
\multicolumn{1}{|c|}{${\rm SU}(2)_L$} & 2 & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 1
& 1 & 1 & 1 & 1 & 2 & 1 & 2 & 1\\ \hline
\multicolumn{1}{|c|}{${\rm U}(1)_Y$} & $\frac 16$ & $\frac 23$ & -$\frac 13$
& $\frac 12$ & $\frac 12$ & $\frac 12$ & -1 & -1 & -1 & 0 & 0 & 0 & -$\frac 13$
& $\frac 12$ & 0 & $\frac 12$ & 0\\ \hline
\multicolumn{1}{|c|}{${\rm U(1)}_{L_{\mu}-L_{\tau}}$} & 0 & 0 & 0 & 0 & 1 & -1
& 0 & 1 & -1 & 0 & 1 & -1 & -1 & 0 & -1 & 1 & 0\\ \hline\hline\hline
\multicolumn{1}{|c|}{$\mathbb{Z}_2$ symmetry} & + & + & + & + & + & + & + & + & + & + & +
& + & - & + & + & - & -\\ \hline\hline
\end{tabular}}
\caption{Gauge quantum numbers and $\mathbb{Z}_2$ parity of different SM and BSM
particles.}
\label{gauqn}
\end{table}
As discussed earlier, the $L_{\mu}-L_{\tau}$ extension
of the SM is anomaly free; however, for the purpose of neutrino
mass generation via the Type-I seesaw mechanism we invoke three
SM gauge singlet right handed neutrinos ($N_{Ri}$) whose
$L_{\mu}-L_{\tau}$ charges are assigned in such a manner that their
inclusion does not introduce any further anomaly. The Lagrangian
of right handed neutrinos is denoted by $\mathcal{L}_{N}$ which contains
kinetic energy terms, mass terms and Yukawa terms associated
with the SM lepton doublets ($L_{Li}$) allowed by
the symmetries of the present model.
\begin{eqnarray}
\mathcal{L}_{N}&=&
\sum_{j=e,\,\mu,\,\tau}\frac{i}{2}\,\overline{N^j_{R}}\gamma^{\alpha}D_{\alpha} N^j_{R}
-\dfrac{1}{2}\,M_{ee}\,\overline{(N^e_{R})^{c}}\,N^e_{R}
-\dfrac{M_{\mu \tau}}{2}(\overline{(N^\mu_{R})^{c}}\,N^\tau_{R}
+\overline{(N^\tau_{R})^{c}}\,N^\mu_{R}) \nonumber \\ &&
-\dfrac{y_{e \mu}}{2}(\overline{(N^e_{R})^{c}}\,N^\mu_{R}
+ \overline{(N^\mu_{R})^{c}}\,N^e_{R})\,\eta
- \dfrac{y_{e \tau}}{2}(\overline{(N^e_{R})^{c}}N^\tau_{R}
+ \overline{(N^\tau_{R})^{c}}\,N^e_{R})\,\eta^*
\nonumber \\ &&
-\sum_{i=e,\,\mu,\,\tau} y_{i}\,\overline{L^i}_{L}
\tilde {H} N^{i}_{R} + {\rm h.c.}\;,
\label{lagN}
\end{eqnarray}
where $\tilde {H}=i\,\sigma_2H^*$. $M_{ee}$, $M_{\mu \tau}$ are the bare mass parameters
while $y_{e\mu}$, $y_{e \tau}$ and $y_i$ are the dimensionless Yukawa couplings.
In order to generate the $b \to s$ transition at the one loop level
involving the $\mathbb{Z}_2$-odd scalars, a non-standard SU(2)$_{L}$ singlet
fermionic field $\chi$ carrying colour charge has been
introduced in this scenario. This fermion is also $\mathbb{Z}_2$-odd
and has an electric charge identical to that of the SM down-type quarks. Furthermore, both the left and right chiral
parts of the $\chi$ field have the same $L_{\mu}-L_{\tau}$ charge, making it
a vector like fermion under the ${\rm U}(1)_{L_{\mu}-L_{\tau}}$ symmetry.
The Lagrangian of this field is given by
\begin{eqnarray}
\mathcal{L}_{\chi}=i\,\bar{\chi}\gamma^\alpha D_\alpha\chi - M_{\chi}
\bar{\chi}\chi- \left(\sum_{j=1}^{3}\,f_j\, \overline{Q_{Lj}}\,\Phi\,\chi_R + {\rm h.c.}\right)\;,
\label{lchi}
\end{eqnarray}
where $M_\chi$ is the bare mass parameter for the $\chi$ field and $f_j$s are
couplings of the Yukawa type interactions among the SM quark doublets
($Q_{Lj}$), $\mathbb{Z}_2$-odd scalar doublet $\Phi$ and the right chiral
part of $\chi$. The above Yukawa type interaction terms involving the
$s$ and $b$ quarks play significant roles in the $b\rightarrow s$ transition
and hence in the explanation of the $R_{K^{(*)}}$ anomalies. The covariant derivative
$D_{\alpha}$ for the field $\chi$ is defined as
\begin{equation}
D_\alpha\chi\equiv \bigg(\partial_\alpha-i g_1 \frac 13 \hat{B}_\alpha +
i g_{Z_{\mu\tau}} n_\chi \hat{X}_\alpha + i g_3 \frac{\Lambda^a}{2} G^a_\alpha\bigg)\chi\;,
\label{Dchi}
\end{equation}
where $g_1$, $g_{Z_{\mu\tau}}$ and $g_3$ are the ${\rm U}(1)_Y$, ${\rm U}(1)_{L_\mu-L_\tau}$ and ${\rm SU}(3)_C$ gauge coupling constants respectively. $n_\chi$ is the ${L_\mu-L_\tau}$ charge
of $\chi$. Further, $\Lambda^a$s ($a = 1,2 \ldots 8$) are the eight Gell-Mann matrices
representing the generators for SU(3)$_{C}$ while the
corresponding gauge fields are denoted by $G^a_\alpha$.
The $4^{\rm th}$, $5^{\rm th}$ and $6^{\rm th}$
terms of the Eq.~(\ref{LagT}) represent the kinetic terms for
all the non-standard scalar representations ($\eta$, $\Phi$ and $S$)
introduced in the present model for specific purposes.
Particularly, the complex singlet (under the SM gauge
group) scalar $\eta$ is necessary to break the ${\rm U}(1)_{L_\mu-L_\tau}$
symmetry spontaneously as it is the only scalar field which has not only a
${\rm U}(1)_{L_\mu-L_\tau}$ charge but also has a nonzero vacuum expectation value (VEV) $v_2$. Consequently, after ${L_\mu-L_\tau}$
symmetry breaking one obtains a massive non-standard neutral gauge boson. It has played
crucial roles in different aspects:\,\,e.g., $(g-2)_\mu$ anomaly explanation,
amelioration of the anomalies that are related to $b\to s \mu \mu$ transition and most importantly it provides
new annihilation channels for the dark matter candidate, which alters its
dynamics from the standard case. Moreover, a $\mathbb{Z}_2$-odd ${{\rm SU}(2)_{L}}$
scalar doublet $\Phi$ carrying both ${\rm U}(1)_Y$ and ${\rm U}(1)_{L_\mu-L_\tau}$
charges is required to obtain the NP contribution to the $b\rightarrow s$
transition via the Yukawa like interaction given in Eq.\,(\ref{lchi}). Although
the lightest neutral component of $\Phi$ is stable, for
the simultaneous explanation of the dark matter enigma, the $(g-2)_{\mu}$
anomaly and the $R_{K^{(*)}}$ anomalies we include another real singlet scalar
field $S$ which is also odd under the $\mathbb{Z}_2$ symmetry. Covariant derivatives for the scalar fields $\eta$ and $\Phi$
are given as follows
\begin{eqnarray}
D_\alpha \eta &\equiv& \bigg(\partial_\alpha+ i g_{Z_{\mu\tau}}
n_\eta \hat{X}_\alpha \bigg)\eta\;,
\label{Deta} \\
D_\alpha\Phi &\equiv& \bigg(\partial_\alpha+i g_1 \frac 12
\hat{B}_\alpha + i g_{Z_{\mu\tau}} n_\Phi \hat{X}_\alpha + i g_2
\frac{\sigma^a}{2} W^a_\alpha\bigg)\Phi\;,
\label{Dphi}
\end{eqnarray}
where $\sigma^a$ are the three Pauli's spin matrices with $a$
runs from 1 to 3. $n_{X}$ denotes the ${L_\mu-L_\tau}$ charge
of the corresponding scalar fields $X=\Phi,\,\eta$.
Further, $g_2$ is the ${\rm SU}(2)_L$ gauge coupling constant
and $W^a_\alpha$s are the corresponding gauge bosons.
Finally, the scalar potential $V(H,\,\eta,\,\Phi,\,S)$ in Eq.\,(\ref{LagT}),
which contains those interaction terms among the scalar fields that
remain invariant under all the symmetries of the present model,
has the following form,
\begin{eqnarray}
V(H, \eta, \Phi, S)&=&-m^2_{H}(H^\dagger H)-m^2_\eta(\eta^\dagger \eta) + m^2_\Phi(\Phi^\dagger \Phi) + \frac{m^2_S}{2} S^2\\ \nonumber
&+& \lambda_H (H^\dagger H)^2 + \lambda_\eta (\eta^\dagger \eta)^2 + \lambda_\Phi (\Phi^\dagger \Phi)^2 + \frac{\lambda_S}{4} S^4\\ \nonumber
&+& \lambda_1(H^\dagger H)(\eta^\dagger \eta) + \lambda_2 (H^\dagger H)(\Phi^\dagger \Phi)
+ \lambda_3 (H^\dagger \Phi)(\Phi^\dagger H) \\ \nonumber
&+& \lambda_4(\Phi^\dagger \Phi)(\eta^\dagger \eta)
+ \lambda_5(\Phi^\dagger \Phi)S^2 + \lambda_6(\eta^\dagger \eta)S^2
+ \lambda_7(H^\dagger H)S^2 \\ \nonumber
&+& \left[\lambda_8(H^\dagger \Phi)S\eta+{\rm h.c.}\right]\;,
\label{Vpot}
\end{eqnarray}
where $m_{H}$, $m_{\eta}$, $m_\Phi$ and $m_S$ are real parameters
having dimension of mass, and the $\lambda_i$s $(i= H, \eta, S, 1,2 \ldots 7)$
are dimensionless, real quartic coupling constants because the corresponding
operators are self-conjugate in nature. However, the quartic coupling
$\lambda_8$ can in general be a complex parameter and thus can act as an
extra source of CP-violation. Since in this work we are not studying
any CP-violating effects, we take $\lambda_8$ as a real parameter;
this assumption will not alter our conclusions. Nevertheless, the term proportional to
$\lambda_8$ is of particular significance in this model as it
generates the mixing between $\Phi$ and $S$. We will
discuss this issue in more detail later. The component wise
structure of the scalar fields are given in the following
\begin{eqnarray}
H=
\begin{pmatrix}
h^+ \\
\dfrac{h_1+v_1+i z_1}{\sqrt{2}}
\end{pmatrix},
\,\,\,\,
\eta=
\begin{pmatrix}
\dfrac{h_2+ v_2+ i z_2}{\sqrt{2}}
\end{pmatrix},
\,\,\,\,
\Phi=
\begin{pmatrix}
\phi^+ \\
\dfrac{\phi^0+a^0}{\sqrt{2}}
\end{pmatrix},
\label{scalflds}
\end{eqnarray}
where $v_1$ and $v_2$ are the VEVs of the scalar fields\footnote{$H$ and $\eta$ are
even under $\mathbb{Z}_2$ symmetry and hence $\mathbb{Z}_2$ remains unbroken.} $H$ and
$\eta$ respectively. After breaking of both electroweak and $L_{\mu}-L_{\tau}$
symmetries by the respective VEVs $v_1$ and $v_2$, one can have mixing between the real components
$h_1$ and $h_2$ due to the presence of an interaction term proportional to
$\lambda_1$ in $V(H, \eta, \Phi, S)$. The mixing matrix in the basis
$\frac{1}{\sqrt{2}}(h_1\,\,\,h_2)^{T}$ has the following form,
\begin{eqnarray}
\mathcal{M}^2_{\rm scalar} = \left(\begin{array}{cc}
2\lambda_H v^2_1 ~~&~~ \lambda_1 v_1 v_2 \\
~~&~~\\
\lambda_1 v_1 v_2 ~~&~~ 2 \lambda_\eta v^2_2
\end{array}\right) \,\,.
\end{eqnarray}
Diagonalising the mass squared matrix by an orthogonal transformation, we obtain
two physical CP-even neutral scalars, $H_1$, which is identified with the SM like Higgs boson of mass 125.5 GeV,
and $H_2$. Like $h_1$ and $h_2$, these fields are even under the $\mathbb{Z}_2$ symmetry.
The physical states $H_1$ and $H_2$ are related to the original
states $h_1$ and $h_2$ by the following relation,
\begin{eqnarray}
\left(\begin{array}{c} H_1 \\ H_2\end{array}\right)
=\left(\begin{array}{cc}\cos\theta_s ~-\sin\theta_s
\\ \sin\theta_s ~~~~\cos\theta_s \end{array}\right)
\left(\begin{array}{c} h_1 \\ h_2\end{array}\right) \,\,,
\label{CP-massmatrix}
\end{eqnarray}
where $\theta_s$ is the mixing angle which can
be expressed as,
\begin{eqnarray}\label{CP-angle}
\theta_s &=& \frac{1}{2}~\tan^{-1}\left(\frac{\frac{\lambda_1}{\lambda_\eta}\frac{v_1}{v_2}}
{1 - \frac{\lambda_H}{\lambda_\eta}\frac{v^2_1}{v^2_2}}\right) \,\,.
\end{eqnarray}
Mass eigenvalues corresponding to the physical scalars $H_1$ and $H_2$
are given by,
\begin{eqnarray}
M_{H_1} &=& \sqrt{\lambda_H v^2_1 + \lambda_{\eta} v^2_2 +
\sqrt{(\lambda_H v^2_1 - \lambda_\eta v^2_2)^2 + (\lambda_1 v_1 v_2)^2} }\ , \\
\label{CP-mass1}
M_{H_2} &=& \sqrt{\lambda_H v_1^2 + \lambda_{\eta} v^2_2 -
\sqrt{(\lambda_H v^2_1 - \lambda_\eta v^2_2)^2 + (\lambda_1 v_1 v_2)^2} } \,\ .
\label{CP-mass2}
\end{eqnarray}
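As a quick numerical consistency check of Eqs.\,(\ref{CP-mass1}) and (\ref{CP-mass2}) (and not part of the model construction), one may diagonalise $\mathcal{M}^2_{\rm scalar}$ directly; the couplings and VEVs used in the short sketch below are purely illustrative assumed numbers chosen only for this check.
\begin{verbatim}
import numpy as np

# Purely illustrative (assumed) inputs: quartic couplings and VEVs in GeV.
lam_H, lam_eta, lam_1 = 0.13, 0.05, 0.01
v1, v2 = 246.0, 2000.0

# CP-even mass-squared matrix in the (h1, h2)/sqrt(2) basis.
M2 = np.array([[2.0*lam_H*v1**2, lam_1*v1*v2],
               [lam_1*v1*v2,     2.0*lam_eta*v2**2]])

# Numerical eigenvalues versus the closed-form expressions above.
eig = np.linalg.eigvalsh(M2)                      # ascending order
rad = np.sqrt((lam_H*v1**2 - lam_eta*v2**2)**2 + (lam_1*v1*v2)**2)
M_heavy = np.sqrt(lam_H*v1**2 + lam_eta*v2**2 + rad)
M_light = np.sqrt(lam_H*v1**2 + lam_eta*v2**2 - rad)

print(np.sqrt(eig), (M_light, M_heavy))           # the two sets agree
\end{verbatim}
For these illustrative inputs the lighter eigenvalue comes out close to 125 GeV, as one would want for the SM like state.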
Furthermore, similar to the $\mathbb{Z}_2$-even sector, the $\mathbb{Z}_2$-odd
sector also exhibits mass mixing between $\phi^0$ and $S$. This also
happens when both $H$ and $\eta$ get nonzero VEVs and in this case the term proportional to
$\lambda_8$ in $V(H, \eta, \Phi, S)$ is solely responsible for such mixing. Therefore,
the $\mathbb{Z}_2$-odd real singlet scalar $S$ mixes with CP-even component $\phi^0$
of the $\mathbb{Z}_2$-odd doublet $\Phi$. However, as there is no spontaneous CP-violation,
the CP-odd component $a^0$ remains decoupled from the CP-even fields and with respect
to the basis $\frac{1}{\sqrt{2}}(S\,\,\,\phi^0\,\,a^0)^{T}$, the $3\times 3$ odd-sector
mixing matrix has a block diagonal form,
\begin{eqnarray}
\mathcal{M}^2_{\rm DM} = \left(\begin{array}{ccc}
(m^2_S+v^2_1\lambda_7+v^2_2\lambda_6) & \frac{v_1v_2\lambda_8}{\sqrt{2}} & 0 \\
\frac{v_1v_2\lambda_8}{\sqrt{2}} & \frac 12\{2m^2_\Phi+v^2_1(\lambda_2+\lambda_3)+v^2_2\lambda_4\} & 0 \\
0 & 0 & \frac 12\{2m^2_\Phi+v^2_1(\lambda_2+\lambda_3)+v^2_2\lambda_4\} \end{array}
\right)\,. \nonumber \\
\label{dm-mixing}
\end{eqnarray}
One can easily diagonalise this matrix using an orthogonal transformation
by an angle $\theta_D$ between $S$ and $\phi^0$. Therefore, after diagonalisation
we have three physical states $\rho_1$, $\rho_2$ and $\rho_3$, where
$\rho_1$ and $\rho_2$ are orthogonal linear combinations of $S$ and $\phi^0$
while the remaining physical scalar $\rho_3$ exactly coincides with $a^0$.
In matrix notation, the basis transformation can be shown as
\begin{eqnarray}
\left(\begin{array}{c} \rho_1 \\ \rho_2 \\ \rho_3\end{array}\right)
=\left(\begin{array}{ccc}\cos\theta_D &-\sin\theta_D&0\\
\sin\theta_D& \cos\theta_D&0\\
0&0&1\end{array}\right)
\left(\begin{array}{c} S \\ \phi^0 \\ a^0\end{array}\right) \,\,,
\label{DM-massmatrix}
\end{eqnarray}
where the mixing angle $\theta_D$ can be expressed in terms
of parameters of the Lagrangian as,
\begin{eqnarray}\label{dm-angle}
\theta_D &=& \frac{1}{2}~\tan^{-1}\left(\frac{2\sqrt{2}v_1 v_2\lambda_8}
{2m^2_\Phi-2m^2_S+v^2_1(\lambda_2+\lambda_3-2\lambda_7)+v^2_2(\lambda_4-2\lambda_6)}\right) \,\,.
\end{eqnarray}
Among the three states ($\rho_1$, $\rho_2$ and $\rho_3$), we choose $\rho_1$ as the lightest
odd particle (LOP) which is regarded as the stable dark matter candidate in this scenario. Thus,
the dark matter candidate in this scenario is an admixture of singlet and doublet states.
The expressions for the masses of these $\mathbb{Z}_2$-odd scalar fields are given below
\begin{eqnarray}
M_{\rho_1}=\sqrt{(m^2_S+v^2_1\lambda_7+v^2_2\lambda_6)\cos^2\theta_D-\sqrt{2}v_1 v_2\lambda_8\cos\theta_D\sin\theta_D+M^2_{\rho_3}\sin^2\theta_D}\;,
\label{mrho1} \\
M_{\rho_2}=\sqrt{(m^2_S+v^2_1\lambda_7+v^2_2\lambda_6)\sin^2\theta_D+\sqrt{2}v_1 v_2\lambda_8\cos\theta_D\sin\theta_D+M^2_{\rho_3}\cos^2\theta_D}\;,
\label{mrho2}
\end{eqnarray}
where
\begin{eqnarray}
M_{\rho_3}&=& \sqrt{m^2_\Phi+\frac 12 \left[ v^2_1(\lambda_2+\lambda_3)+v^2_2\lambda_4\right]}\,.
\label{mrho3}
\end{eqnarray}
Further using Eqs.\,(\ref{mrho1}-\ref{mrho3}), one can establish a
relation between $M_{\rho_1}$, $M_{\rho_2}$, $M_{\rho_3}$ and $\theta_D$, which has the
following form
\begin{eqnarray}
M^2_{\rho_3} = {M^2_{\rho_1}\sin^2\theta_D+M^2_{\rho_2}\cos^2\theta_D}\;.
\label{mho3-relation}
\end{eqnarray}
Therefore, the mass of the CP-odd scalar $\rho_3$ is not an independent
quantity in the present scenario; it becomes fixed through the above
relation once the other parameters $M_{\rho_1}$, $M_{\rho_2}$ and $\theta_D$ are known.
This is a consequence of the fact that the $2\times2$ and $3\times3$ elements of
the dark sector mixing matrix $\mathcal{M}^2_{\rm DM}$ are identical. From the
symmetry point of view this can be understood as follows. The splitting between the
coefficients of ${\phi^0}^2$ ($\varpropto$ $2\times 2$ element of
${\mathcal{M}^2_{\rm DM}}$) and ${a^0}^2$ ($\varpropto$ $3\times 3$
element of ${\mathcal{M}^2_{\rm DM}}$) of a $\mathbb{Z}_2$-odd doublet $\Phi$ is
obtained from a term like $(H^\dagger\Phi)^2$ (usual $\lambda_5$ term in the Inert Doublet
Model \cite{Barbieri:2006dq}), which is forbidden here by the ${\rm U(1)}_{L_{\mu}-L_{\tau}}$
symmetry invariance. Additionally, in the dark sector we also have a charged scalar
$\phi^\pm$ and its mass term is given by
\begin{eqnarray}
M_{\phi^\pm}&=&\sqrt{M^2_{\rho_3}-\frac 12 v^2_1\lambda_3}\;.
\label{mphpm}
\end{eqnarray}
Let us now find out the effects of the extra ${\rm U(1)}_{L_{\mu}-L_{\tau}}$
local gauge symmetry on the gauge sector and generate the physical states
of the gauge bosons with their proper mass terms. In the Eq.\,(\ref{fieldtensor}),
$\hat{B}_{\alpha}$ and $\hat{X}_{\alpha}$ denote the gauge fields corresponding
to the gauge groups U(1)$_{Y}$ and U(1)$_{L_\mu-L_\tau}$ respectively. As mentioned
earlier, the kinetic terms for the two U(1) gauge fields in the hat notation are not diagonal,
as is clearly evident from the presence of a mixing term
between the two U(1) gauge fields proportional to $\epsilon$.
The kinetic mixing parameter is severely constrained
by electroweak precision data (sensitive mainly in the low mass
regime of the extra gauge boson) \cite{Hook:2010tw, Cline:2014dwa}
and also by di-lepton searches at the LHC (for the relatively high mass
regime, i.e., a few hundred GeV to a few TeV). Now, one can perform a basis transformation from ``hat'' states
to ``un-hat'' states, due to which the off-diagonal
kinetic term vanishes. This can be achieved by applying a
following transformation\footnote{This transformation matrix is
not a unique one. For a general $2\times 2$ real matrix,
we have four independent elements. However, using $c_1=c_2=1$ and $c_3$ = 0,
we have only three independent equations to solve for four variables. Here,
$c_1$, $c_2$ and $c_3$ are the coefficients of $\frac{1}{4}B_{\alpha\beta}B^{\alpha\beta}$,
$\frac{1}{4}X_{\alpha\beta}X^{\alpha\beta}$ and $\frac{\epsilon}{2}B_{\alpha\beta}X^{\alpha\beta}$
respectively. Thus, one can express three elements in terms of the
fourth one and for each real value of that element, we will have a different
transformation matrix which eventually cancels the kinetic mixing term.
For the particular matrix that we have used here is obtained by setting
$2\times 1$ element of the transformation matrix equal to zero. Such a special
choice easily reproduces all the phenomena of electromagnetism.},
\begin{eqnarray}
\left(\begin{array}{c} B_{\alpha} \\ X_{\alpha}\end{array}\right) = \left(\begin{array}{cc}
1 &-\epsilon\\ 0 &\sqrt{1-\epsilon^2}\end{array}\right)\left(\begin{array}{c} \hat{B}_{\alpha} \\
\hat{X}_{\alpha}\end{array}\right)\,\,,
\label{hat-unhat-matrix}
\end{eqnarray}
and since experiment dictates $\epsilon \ll 1$,
using the approximation $\mathcal{O}(\epsilon^2)\approx 0$ we have
\begin{eqnarray}
\hat{B}_{\alpha} \simeq B_{\alpha} + \epsilon X_{\alpha} \,\,\,\,
{\rm and}\,\,\,
\hat{X}_{\alpha} \simeq X_{\alpha} \,\, .
\label{hat-unhat-trans}
\end{eqnarray}
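Indeed, inserting Eq.\,(\ref{hat-unhat-trans}) into the kinetic part of Eq.\,(\ref{LagT}) one can check explicitly that the cross term cancels at this order,
$$
-\frac{1}{4}\hat{B}_{\alpha\beta}\hat{B}^{\alpha\beta}-\frac{1}{4}\hat{X}_{\alpha\beta}\hat{X}^{\alpha\beta}
+\frac{\epsilon}{2}\hat{B}_{\alpha\beta}\hat{X}^{\alpha\beta}
\simeq-\frac{1}{4}B_{\alpha\beta}B^{\alpha\beta}-\frac{1}{4}X_{\alpha\beta}X^{\alpha\beta}
+\mathcal{O}(\epsilon^{2})\,,
$$
so that in the un-hat basis the two U(1) gauge fields have diagonal and canonically normalised kinetic terms up to terms of order $\epsilon^{2}$.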
After the occurrence of both electroweak symmetry breaking (EWSB)\footnote
{In the present scenario, after EWSB one can readily determine
the mass of the $W^\pm$ gauge boson which is exactly equal to
that of the SM, i.e., $M_W = \frac 12 g_2v_1$.}
and ${L_\mu-L_\tau}$ breaking by the VEVs of the neutral components
of $H$ and $\eta$, we obtain a $3\times 3$ mass square matrix in the basis
of three neutral gauge bosons namely $W^\alpha_3$, $B^{\alpha}$, $X^{\alpha}$
using Eqs.\,(\ref{Deta}-\ref{Dphi}, \ref{hat-unhat-trans}),
\begin{eqnarray}
\mathcal{M}^2_{\rm gauge} = \left(\begin{array}{ccc}
\frac{1}{4}g^2_2v^2_1 &-\frac{1}{4}g_2 g_1v^2_1 & -\frac{1}{4}g_2 g_1v^2_1\epsilon \\
-\frac{1}{4}g_2 g_1v^2_1 & \frac{1}{4}g^2_1v^2_1 & \frac{1}{4} g^2_1 v^2_1\epsilon \\
-\frac{1}{4}g_2 g_1v^2_1\epsilon & \frac{1}{4} g^2_1 v^2_1\epsilon & g^2_{Z_{\mu\tau}}v^2_2 \end{array}\right)\,\, .
\label{gauge-mixing}
\end{eqnarray}
The above matrix has a special symmetry. If we rotate $W^{\alpha}_3$
and $B^{\alpha}$ by the Weinberg angle $\tan \theta_{\rm W} = \dfrac{g_1}{g_2}$,
the matrix $\mathcal{M}^2_{\rm gauge}$ reduces to a $2\times 2$ block diagonal
structure with respect to an intermediate state $\mathcal{Z}^\alpha \equiv \cos \theta_{\rm W}
W^\alpha_3 - \sin \theta_{\rm W} B^\alpha$ and $X^\alpha$ while the other
orthogonal state i.e. $A^\alpha = \sin \theta_{\rm W} W^\alpha_3 + \cos \theta_{\rm W} B^\alpha$
having zero mass eigenvalue becomes completely decoupled. This is possible
due to the special choice of the transformation matrix we have considered
in Eq.\,(\ref{hat-unhat-matrix}). Once the $3\times3$
matrix is reduced to a $2\times2$ block diagonal form, the remaining
task is simply to perform another orthogonal transformation
between the states $\mathcal{Z}^{\alpha}$ and $X^{\alpha}$ to finally obtain
the physical $Z$ and $Z_{\mu\tau}$ bosons. This is demonstrated
below for both the mass matrix and the eigenstates,
\begin{eqnarray}
\mathcal{M}^2_{\rm gauge}\,
\xRightarrow{\mathcal{O}(\theta_{\rm W})}
\left(\begin{array}{ccc}
\frac{1}{4}(g_1^2+g_2^2)v_1^2 & 0 & -\frac{\epsilon}{4} g_1
\sqrt{g_1^2+g_2^2} v_1^2 \\
0 & 0 & 0 \\
-\frac{\epsilon}{4} g_1 \sqrt{g_1^2+g_2^2} v_1^2
& 0 & g^2_{Z_{\mu\tau}} v^2_2
\end{array}
\right)\,
\xRightarrow{\mathcal{O}(\theta_{\mu\tau})}
\left(\begin{array}{ccc}
M_Z & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & M_{Z_{\mu\tau}}
\end{array}
\right)\,
\end{eqnarray}
and
\begin{eqnarray}
\hspace{2cm}\left(\begin{array}{c} W^{\alpha}_3 \\B^{\alpha}
\\X^{\alpha}\end{array}\right)
\xRightarrow{\mathcal{O}(\theta_{\rm W})^{T}}
\left(\begin{array}{c} \mathcal{Z}^\alpha_3\\ A^{\alpha}\\ X^{\alpha}
\end{array}\right)
\xRightarrow{\mathcal{O}(\theta_{\mu\tau})^{T}}
\left(\begin{array}{c} Z^\alpha\\ A^{\alpha}\\ Z^{\alpha}_{\mu\tau}
\end{array}\right)\,,
\end{eqnarray}
where, the masses of two massive neutral gauge bosons ($Z$ and $Z_{\mu\tau}$)
are respectively given as
\begin{eqnarray}
M_{Z} &=& \sqrt{\frac{(g^2_1+g^2_2)v^2_1}{4}\cos^2\theta_{\mu\tau} + g^2_{Z_{\mu\tau}}v^2_2\sin^2\theta_{\mu\tau} + \frac{g_1\sqrt{(g^2_1+g^2_2)} v^2_1\epsilon}{4}\sin 2\theta_{\mu\tau}}\;, \\
\label{Zmass}
M_{Z_{\mu\tau}} &=& \sqrt{\frac{(g^2_1+g^2_2)v^2_1}{4}\sin^2\theta_{\mu\tau} + g^2_{Z_{\mu\tau}}v^2_2\cos^2\theta_{\mu\tau}- \frac{g_1\sqrt{(g^2_1+g^2_2)} v^2_1\epsilon}{4}\sin 2\theta_{\mu\tau}}\;,
\label{Z'mass}
\end{eqnarray}
and the two orthogonal transformation matrices are given by,
\begin{eqnarray}
\mathcal{O(\theta_{\rm W})} = \left(\begin{array}{ccc}
\cos \theta_{\rm W} & \sin \theta_{\rm W} & 0\\
-\sin \theta_{\rm W} & \cos \theta_{\rm W} & 0 \\
0 & 0 & 1
\end{array}
\right)\,,\hskip 0.2in
\mathcal{O(\theta_{\mu\tau})} = \left(\begin{array}{ccc}
\cos \theta_{\mu\tau} & 0 & \sin \theta_{\mu\tau}\\
0 & 1 & 0 \\
-\sin \theta_{\mu\tau} & 0 & \cos \theta_{\mu\tau}
\end{array}
\right)\,\,.
\end{eqnarray}
Finally, the gauge basis and the mass basis of the neutral gauge bosons
are related by the following orthogonal transformation
\begin{eqnarray}
\hspace{2cm}\left(\begin{array}{c} Z^{\alpha} \\A^{\alpha} \\Z^{\alpha}_{\mu\tau}\end{array}\right)
&=& \mathcal{O}(\theta_{\rm W},\,\theta_{\mu\tau})^{T}
\left(\begin{array}{c} W^\alpha_3\\ B^{\alpha}\\ X^{\alpha}\end{array}\right)\,\, ,
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{O}(\theta_{\rm W},\,\theta_{\mu\tau})^{T} &=&
\mathcal{O}(\theta_{\mu\tau})^{T}\,\mathcal{O}(\theta_{\rm W})^{T} \nonumber \\
&=&
\left(\begin{array}{ccc}
\cos\theta_{\mu\tau} \cos\theta_{\rm W} &-\cos\theta_{\mu\tau}
\sin\theta_{\mu\tau}& -\sin\theta_{\mu\tau}\\
\sin\theta_{\rm W}&\cos\theta_{\rm W} & 0\\
\sin\theta_{\mu\tau} \cos\theta_{\rm W}&-\sin\theta_{\mu\tau}
\sin\theta_{\rm W}&\cos\theta_{\mu\tau}
\end{array}\right)\,\, ,
\label{u-matrix}
\end{eqnarray}
where $\theta_{\rm W}$, as mentioned above, is the familiar Weinberg angle
and $\theta_{\mu\tau}$ is the mixing angle between two neutral gauge bosons
$Z$ and $Z_{\mu\tau}$. These mixing angles can be expressed in terms of the
gauge coupling constants, the VEVs and the kinetic mixing parameter as follows,
\begin{eqnarray}
\theta_{\rm W} = \tan^{-1}\left(\dfrac{g_1}{g_2}\right)\ , \ \ \ \ \
\theta_{\mu\tau} = \frac{1}{2}\,\tan^{-1}\left(\dfrac{\dfrac{2\,\epsilon g_1}{\sqrt{g^2_1 + g^2_2}}}
{1 - \dfrac{4g^2_{Z_{\mu\tau}}}{g^2_1 + g^2_2} \dfrac{v^2_2}{v^2_1}}\right) \,\, .
\label{gauge-mix-angle}
\end{eqnarray}
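As a simple cross-check of this diagonalisation (a minimal numerical sketch, not used anywhere else in the analysis), one may diagonalise the matrix of Eq.\,(\ref{gauge-mixing}) directly. The SM gauge couplings quoted below are approximate assumed values, while $g_{Z_{\mu\tau}}$ and $M_{Z_{\mu\tau}}$ correspond to a benchmark used later in this work.
\begin{verbatim}
import numpy as np

# Approximate SM inputs (assumed values) and a benchmark for the new sector.
g1, g2, v1 = 0.357, 0.652, 246.0     # U(1)_Y, SU(2)_L couplings; EW VEV (GeV)
gx, MZp    = 0.93e-3, 0.076          # g_{Z_mutau} and target M_{Z_mutau} (GeV)
v2         = MZp/gx                  # VEV of eta, ~ 82 GeV
eps        = 1.0e-3                  # small kinetic mixing (assumed)

# Neutral gauge boson mass-squared matrix in the (W_3, B, X) basis.
M2 = np.array([
 [ 0.25*g2**2*v1**2,     -0.25*g1*g2*v1**2,     -0.25*g1*g2*v1**2*eps],
 [-0.25*g1*g2*v1**2,      0.25*g1**2*v1**2,      0.25*g1**2*v1**2*eps],
 [-0.25*g1*g2*v1**2*eps,  0.25*g1**2*v1**2*eps,  gx**2*v2**2         ]])

print(np.sqrt(np.abs(np.linalg.eigvalsh(M2))))
# -> one (numerically) vanishing eigenvalue, the photon, and two massive
#    states which, up to O(eps^2) corrections, reproduce
#    M_Z ~ sqrt(g1^2+g2^2)*v1/2 ~ 91 GeV and M_Zmutau ~ gx*v2 ~ 0.076 GeV.
\end{verbatim}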
Before we proceed any further, it is worthwhile to list the independent parameters.
In this model, in addition to the SM parameters, we have fourteen
new parameters in the scalar sector (excluding SM-Like Higgs boson mass
and VEV $v_1$), three additional Yukawa like coupling constants and
one mass term in the extended quark sector\footnote{Here, we are not considering
Yukawa like coupling constants and bare mass terms in the extended neutrino sector.}
and two more couplings in the gauge sector in the form of
new gauge coupling $g_{Z_{\mu\tau}}$ and kinetic mixing parameter $\epsilon$.
These twenty independent parameters are: $M_{H_2}$, $M_{\phi^{\pm}}$, $M_{\rho_1}$, $M_{\rho_2}$,
$M_{Z_{\mu\tau}}$, $\theta_D$, $\theta_s$, $\lambda_{\Phi}$, $\lambda_S$,
$\lambda_2$, $\lambda_4$, $\lambda_5$, $\lambda_6$, $\lambda_7$, $f_1$,
$f_2$, $f_3$, $M_{\chi}$, $g_{Z_{\mu\tau}}$ and $\theta_{\mu\tau}$.
In terms of these independent parameters the other parameters
appearing in the Lagrangian (Eq.\,(\ref{LagT})) can be
obtained using Eqs.\,(\ref{CP-angle}-\ref{CP-mass2}),
Eqs.\,(\ref{dm-angle},\,\ref{mrho1}), Eqs.\,(\ref{mrho3},
\ref{mphpm}) and Eqs.\,(\ref{Z'mass},\,\ref{gauge-mix-angle})\footnote{Additionally, one
needs to use minimization conditions of the scalar potential $V(H,\,\eta,\,\Phi,\,S)$.}.
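For orientation, a representative benchmark point collecting the reference values that will be quoted in the numerical study below may be written schematically as follows (only the parameters directly relevant for the flavour and $(g-2)_\mu$ discussion are listed; the remaining quartic couplings mainly affect the scalar spectrum and are left unspecified here):
\begin{verbatim}
# Representative benchmark point (masses in GeV, angles in rad);
# the values correspond to those quoted in the figures below.
benchmark = {
    "M_rho1":   26.5,      # dark matter candidate
    "M_rho2":   506.0,
    "M_chi":    1300.0,    # coloured Z2-odd fermion
    "M_Zmutau": 0.076,     # L_mu - L_tau gauge boson
    "g_Zmutau": 0.93e-3,
    "theta_D":  0.095,     # dark sector mixing angle
    "f2_f3":    0.8,       # product f_2*f_3 (2.53 is also used below)
}
\end{verbatim}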
\section{\boldmath${b \to s}$ flavour observables}\label{flav}
\subsection{\boldmath$R_{K^{(*)}}$ anomalies}\label{RKRKs}
In the present scenario the NP part of the effective Hamiltonian $\mathcal H_\text{eff}\,(\equiv \mathcal H_\text{eff}^\text{SM} +
\mathcal H_\text{eff}^\text{NP})$ that describes the $b \to s \ell\ell$ transitions is given by
\begin{equation}
\label{eq:HeffRK}
\mathcal{H}_\text{eff}^\text{NP} = - \frac{4\,G_F}{\sqrt{2}} V_{tb}V_{ts}^* \frac{e^2}{16\pi^2}
\sum_{\ell=e,\mu}
C^{\rm NP}_{9\;\ell} (\bar{s} \gamma_{\alpha} P_{L} b)(\bar{\ell} \gamma^\alpha \ell) +
C^{\rm NP}_{10\;\ell} (\bar{s} \gamma_{\alpha} P_{L} b)( \bar{\ell} \gamma^\alpha \gamma_5 \ell)+\text{h.c.} \,,
\end{equation}
where $G_F$ is the Fermi constant, $V_{ij}$ are the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements.
Here we neglect other dimension-six operators. For example, $C_7$ cannot give significant contributions to these processes, because it corresponds
to the dipole operator, which is strictly constrained by the branching ratio of $B\to X_s \gamma$ \cite{Kawamura:2017ecz}.
Also, four-quark operators \cite{Jager:2017gal} cannot
play any significant role in the violation of LFU, hence they are irrelevant in this work.
Moreover, four-fermion contact interactions with scalar currents could be
a natural source of LFU violation, although they are highly constrained
by existing measurements of the $B_s \to \mu^+\mu^-$ and $B_s \to e^+e^-$
branching ratios~\cite{Aaij:2017vad,Aaltonen:2009vr}. The NP contribution to the WC $C^{\rm NP,\ell}_{9}=C^{\ell}_{9Z}+C^{\ell}_{9Z_{\mu\tau}}$ can be obtained from
\begin{eqnarray}\label{dc9}
C^{\ell}_{9Z(Z_{\mu\tau})}&=&-\frac{\sqrt{2}}{16 \pi \alpha_{\rm em}G_F V_{tb}V_{ts}^*}\frac{\mathscr{L}^9_{Z(Z_{\mu\tau})}}{M^2_{Z(Z_{\mu\tau})}}\bigg(-\frac{\mathcal{G}_{Z(Z_{\mu\tau})} f_2 f_3}{4 }\bigg[-2\ln(m^2_\chi)-1\\ \nonumber
&&+h_q(x_1)(1-2x_1)\sin^2\theta_D+h_q(x_2)(1-2x_2)\cos^2\theta_D+h_q(x_3)(1-2x_3)\bigg] \\ \nonumber
&+&\frac{\mathcal{C}_{Z(Z_{\mu\tau})} f_2 f_3}{4 }\bigg[\{-\ln(M^2_{\rho_1})+h_w(x_1,r_1)\}\sin^2\theta_D+\{-\ln(M^2_{\rho_2})+h_w(x_2,r_2)\}\cos^2\theta_D\bigg]\\ \nonumber
&-&\frac{\mathcal{S}_{Z(Z_{\mu\tau})} f_2 f_3}{4 }\bigg[\{-\ln(M^2_{\rho_1})+h_s(x_1)\}\sin^2\theta_D+\{-\ln(M^2_{\rho_2})+h_s(x_2)\}\cos^2\theta_D\\ \nonumber
&&+\{-\ln(M^2_{\rho_3})+h_s(x_3)\}\bigg]\bigg)\;,
\end{eqnarray}
while the NP contribution to the WC
$C^{\rm NP,\ell}_{10}=C^{\ell}_{10Z}+C^{\ell}_{10Z_{\mu\tau}}$ is given by
\begin{eqnarray}\label{dc10}
C^{\ell}_{10Z(Z_{\mu\tau})}&=&-\frac{\sqrt{2}}{16 \pi \alpha_{\rm em} G_F V_{tb}V_{ts}^*}\frac{\mathscr{L}^{10}_{Z(Z_{\mu\tau})}}{M^2_{Z(Z_{\mu\tau})}}\bigg(-\frac{\mathcal{G}_{Z(Z_{\mu\tau})} f_2 f_3}{4 }\bigg[-2\ln(m^2_\chi)-1\\ \nonumber
&&+h_q(x_1)(1-2x_1)\sin^2\theta_D+h_q(x_2)(1-2x_2)\cos^2\theta_D+h_q(x_3)(1-2x_3)\bigg] \\ \nonumber
&+&\frac{\mathcal{C}_{Z(Z_{\mu\tau})} f_2 f_3}{4 }\bigg[\{-\ln(M^2_{\rho_1})+h_w(x_1,r_1)\}\sin^2\theta_D+\{-\ln(M^2_{\rho_2})+h_w(x_2,r_2)\}\cos^2\theta_D\bigg]\\ \nonumber
&-&\frac{\mathcal{S}_{Z(Z_{\mu\tau})} f_2 f_3}{4 }\bigg[\{-\ln(M^2_{\rho_1})+h_s(x_1)\}\sin^2\theta_D+\{-\ln(M^2_{\rho_2})+h_s(x_2)\}\cos^2\theta_D\\ \nonumber
&&+\{-\ln(M^2_{\rho_3})+h_s(x_3)\}\bigg]\bigg)\;,
\end{eqnarray}
although we have found that the contribution of $C^{\rm NP,\ell}_{10}$ ($\ell\equiv\mu$) is insignificant\footnote{For this reason there is no significant NP contribution to the decay $B_s\to \mu^+\mu^-$. Therefore, the branching ratio of this process does not impose any stringent constraint on our analysis.} and we will focus only on $C^{\ell}_{9Z(Z_{\mu\tau})}$ ($\ell\equiv\mu$) in the rest of the analysis\footnote{Therefore, the present scenario can be considered as a typical scenario which provides a NP contribution to $C^{\ell}_{9}$ ($\ell\equiv\mu$) only. Although there is a NP contribution to $C^{e}_{9}$, it is practically insignificant due to the very small mixing between $Z$ and $Z_{\mu\tau}$. Hence, the coupling between $Z_{\mu\tau}$ and the $e^+e^-$ pair is effectively vanishing.}. $\alpha_{\rm em}$ is the fine structure constant. Here $x_{1,2,3}=\frac{M^2_\chi}{M^2_{\rho_{1,2,3}}}$ and $r_{1,2}=\frac{M^2_{\rho_{3}}-M^2_{\rho_{1,2}}}{M^2_{\rho_{1,2}}}$. The expressions for the factors $\mathcal{G}_{Z(Z_{\mu\tau})}$, $\mathcal{C}_{Z(Z_{\mu\tau})}$, $\mathcal{S}_{Z(Z_{\mu\tau})}$, $\mathscr{L}^9_{Z(Z_{\mu\tau})}$, $\mathscr{L}^{10}_{Z(Z_{\mu\tau})}$ and the functions $h_q(x)$, $h_w(x,r)$, $h_s(x)$ are given in Appendix~\ref{flav_app}. In Fig.~\ref{newdia} we show the relevant Feynman diagrams responsible for the additional contribution to the $b\to s \mu\mu$ transition. It is clearly evident from these Feynman diagrams that the NP contribution to the WC $C^{\rm NP,\ell}_{9}$ is provided by the non-standard bottom like fermion field $\chi$ and the dark matter candidate $\rho_1$ together with its partners $\rho_2$ and $\rho_3$. Later we discuss the dark matter phenomenology of the weakly interacting massive particle (WIMP) type dark matter candidate $\rho_1$ and related issues, taking into account the constraints from the flavour physics observables considered in this article.
\begin{figure}[t]
\begin{center}
\subfloat[]{\label{fig1a}\includegraphics[scale=0.8,angle=0]{RK_1}}
\subfloat[]{\label{fig1b}\includegraphics[scale=0.8,angle=0]{RK_2}}
\subfloat[]{\label{fig1c}\includegraphics[scale=0.8,angle=0]{RK_3}}\\
\subfloat[]{\label{fig1d}\includegraphics[scale=0.8,angle=0]{RK_4}}
\subfloat[]{\label{fig1e}\includegraphics[scale=0.8,angle=0]{RK_5}}
\caption{$Z$ and $Z_{\mu\tau}$-penguin and self-energy diagrams that
contribute to the decay of $b\to s \mu\mu$ in addition to SM contribution.}
\label{newdia}
\end{center}
\end{figure}
To ameliorate the tension between the SM predictions and the experimental data
for $R_{K^{(*)}}$ we use the $2\sigma$ interval $C^{\rm NP,\mu}_{9} \in [-1.26, -0.63]$ \cite{Aebischer:2019mlg}.
For notational simplicity, from now on, we use $\Delta{C_9}$ for the total NP contribution to the WC $C_9$ for $\ell =\mu$, i.e., $C^{\rm NP,\mu}_{9} = C^{\mu}_{9Z}
+ C^{\mu}_{9Z_{\mu\tau}}=\Delta{C_9}$.
\begin{figure}[h!]
\centering
\subfloat[Variation of $\Delta{C_9}$ with $M_{\rho_1}$
for $M_{\rho_2} = 506$ GeV, $M_{\chi}= 1300$ GeV,
$g_{Z_{\mu\tau}}=0.93\times 10^{-3}$, $M_{Z_{\mu\tau}}=0.076$ GeV and
$\theta_D = 0.095$ rad. \label{Fig:dc9-vs-mrho1}]{
\includegraphics[height=6cm,width=7.8cm,angle=0]{dc9-vs-dm21.pdf}
}
\hskip 0.2in
\subfloat[Variation of $\Delta{C_9}$ with $M_{\chi}$
for $M_{\rho_2} = 506$ GeV, $M_{\rho_1}= 26.5$ GeV,
$g_{Z_{\mu\tau}}=0.93\times 10^{-3}$, $M_{Z_{\mu\tau}}=0.076$ GeV and
$\theta_D = 0.095$ rad. \label{Fig:dc9-vs-mchi}]{
\includegraphics[height=6cm,width=7.8cm,angle=0]{dc9-vs-mchi.pdf}
}
\vskip 0.2in
\subfloat[Variation of $\Delta{C_9}$ with $M_{Z_{\mu\tau}}$
for $M_{\rho_1}=26.5$ GeV, $M_{\rho_2} = 506$ GeV, $M_{\chi}= 1300$ GeV,
$f_2\times f_3=2.53$ and $\theta_D = 0.095$ rad.\label{Fig:dc9-vs-mZp}]{
\includegraphics[height=6cm,width=7.8cm,angle=0]{dc9-vs-mZp.pdf}
}
\hskip 0.2in
\subfloat[Variation of $\Delta{C_9}$ with $\theta_D$ for
$M_{\rho_2} = 506$ GeV, $M_{\chi}= 1300$ GeV, $M_{Z_{\mu\tau}}=0.076$ GeV,
$g_{Z_{\mu\tau}}=0.93\times 10^{-3}$ and $f_2\times f_3=0.8$. \label{Fig:dc9-vs-thD}]{
\includegraphics[height=6cm,width=7.8cm,angle=0]{dc9-vs-thd_rho1.pdf}
}
\caption{Variation of $\Delta{C_9}$ with respect to different parameters.}
\label{Fig:dc9lineplot}
\end{figure}
In order to understand the dependence of $\Delta{C_9}$ on the model parameters
we have shown the variation of $\Delta{C_9}$ in Fig.\,\ref{Fig:dc9lineplot}. In this figure there are four panels
which represent the variation of $\Delta{C_9}$ with respect to four important
parameters namely $M_{\rho_1}$, $M_{\chi}$, $M_{Z_{\mu\tau}}$ and $\theta_D$.
In Fig.\,\ref{Fig:dc9-vs-mrho1},
we have shown the variation of $\Delta{C_9}$ with mass of $\rho_1$ for three
different values of the product of Yukawa couplings $f_2$ and $f_3$. Here,
one can see that the magnitude of $\Delta{C_9}$ increases with decreasing
mass of $\rho_1$, which enters the loop diagrams (see the Feynman diagrams
shown in Fig.\,\ref{newdia}); the loop functions are then enhanced, which in turn increases the magnitude of $\Delta{C_9}$.
Moreover, as the NP contribution to
the WC $C_9$ (Eq.\,(\ref{dc9})) is proportional to the
Yukawa couplings $f_2$ and $f_3$, the magnitude of $\Delta{C_9}$
grows with $f_2\times f_3$. This feature is also clearly demonstrated
in Fig.\,\ref{Fig:dc9-vs-mrho1}. Similar to this plot, in Fig.\,\ref{Fig:dc9-vs-mchi},
we have illustrated the effect of $M_{\chi}$ on $\Delta{C_9}$ for the
same three different values of $f_2\times f_3$. Here also we have
found similar behaviour of $\Delta{C_9}$ with respect to
$M_{\chi}$ as we have observed for $M_{\rho_1}$. Further, we have also
displayed the effect of non-standard gauge boson mass $M_{Z_{\mu\tau}}$ on
$\Delta{C_9}$ in Fig.\,\ref{Fig:dc9-vs-mZp} for three different values of gauge
coupling $g_{Z_{\mu\tau}}=0.93\times 10^{-3}$, $0.35\times 10^{-3}$
and $0.1\times 10^{-3}$, respectively. In this case, the magnitude of
$\Delta{C_9}$ decreases for larger values of $M_{Z_{\mu\tau}}$ due to the propagator suppression.
This is clearly seen from Eq.\,(\ref{dc9}), where $\Delta{C_9}$ is inversely
proportional to $M^2_{Z_{\mu\tau}}$. On the other hand, in this plot $\Delta{C_9}$
increases significantly with the gauge coupling $g_{Z_{\mu\tau}}$ for
the considered mass range of $M_{Z_{\mu\tau}}$ ($0.01\leq
M_{Z_{\mu\tau}}\,({\rm GeV})\leq 0.1$). Finally, in Fig.\,\ref{Fig:dc9-vs-thD}
we have demonstrated the variation of $\Delta{C_9}$ with respect to the dark sector
mixing angle $\theta_D$ for three different choices of $M_{\rho_1}$. In this
plot, we have varied $\theta_D$ in the range 0 to $\pi/2$. The oscillatory behaviour
of $\Delta{C_9}$ with respect to $\theta_D$ is due to the combined effect of two
factors. One is the direct appearance of sine and cosine functions
in the expression of $\Delta{C_9}$. The other is the indirect effect
of the variation of $M_{\rho_3}$ with $\theta_D$: via Eq.\,(\ref{mho3-relation}),
$M_{\rho_3}$ undergoes a full oscillation between $M_{\rho_2}$ and $M_{\rho_1}$ when
$\theta_D$ changes from $0$ to $\pi$. The dependence
of $\Delta{C_9}$ on $\theta_D$ is well described
by a function of the form $-A\sin^2(2\theta_D)$, where the exact
value of the normalisation constant $A$ depends on the other
parameters, namely $M_{\rho_1}$, $M_{\rho_2}$, $g_{Z_{\mu\tau}}$,
$M_{Z_{\mu\tau}}$ and $M_{\chi}$. Moreover, the oscillatory behaviour
of $\Delta{C_9}$ vanishes if we set $M_{\rho_1}=M_{\rho_2}$. Under this condition,
the dependence of $\theta_D$ disappears from the expression of $M_{\rho_3}$
and consequently $\Delta{C_9}$ becomes independent of $\theta_D$. Furthermore,
in all four plots of Fig.\,\ref{Fig:dc9lineplot},
the grey coloured band represents the $2\sigma$ allowed range of the fit
value of $\Delta{C_9}$ required to explain the $R_{K^{(*)}}$ anomalies \cite{Aebischer:2019mlg}.
\subsection{\boldmath$B\to X_s\gamma$}\label{bsg}
The inclusive radiative $B$ decay $B\rightarrow X_s\gamma$ has also been measured precisely and can be compared with the corresponding SM prediction. The world average experimental value of the branching ratio of this process is \cite{Amhis:2016xyh}
\begin{equation}\label{br_exp_bsg}
{\rm Br}^{\rm Exp}(B\rightarrow X_s\gamma)=(3.32\pm 0.16)\times10^{-4},
\end{equation}
for photon energy $E_{\gamma} >1.6$ GeV in the $B$-meson rest frame. Under the same conditions the corresponding SM prediction with higher order corrections is \cite{Misiak:2015xwa}
\begin{equation}\label{br_sm_bsg}
{\rm Br}^{\rm SM}(B\rightarrow X_s\gamma)=(3.36\pm 0.23)\times10^{-4}.
\end{equation}
It is quite evident that the theoretical prediction is in good agreement with the experimental value. Hence this small difference can tightly constrain any NP which contributes to this process. Keeping this in mind we have evaluated the NP contributions to this decay process in the present scenario. Consequently, we use the branching ratio of this process as one of the constraints in our analysis.
At the quark level the $B\to X_s\gamma$ decay proceeds through the $b\to s\gamma$ transition. The effective Hamiltonian for this transition at the bottom quark mass scale ($\mu_b=m_b$) is given by (see refs.\;\cite{Buchalla:1995vs, Buras:1997fb})
\begin{equation} \label{Heff_at_mu}
{\cal H}_{\rm eff}(b\to s\gamma) = - \frac{G_{\rm F}}{\sqrt{2}} V_{ts}^* V_{tb}
\left[ \sum_{i=1}^6 C_i(\mu_b) \mathcal{O}_i + C_{7\gamma}(\mu_b) \mathcal{O}_{7\gamma}
+C_{8G}(\mu_b) \mathcal{O}_{8G} \right]\,.
\end{equation}
The WCs ($C_i$) are first calculated at the electroweak scale ($\mu_W$) and then evolved down to the scale $\mu_b = m_b$ using renormalisation group (RG) equations \cite{Buchalla:1995vs, Buras:1997fb, Buras:2003mk}. The local operators $\mathcal{O}_1,\ldots,\mathcal{O}_6$ represent four-quark interactions and their explicit form can be found in \cite{Buras:1998raa}. The remaining operators, $\mathcal{O}_{7\gamma}$ (electromagnetic dipole) and $\mathcal{O}_{8G}$ (chromomagnetic dipole), are the most important ones for this decay; at leading order they are given by
\begin{equation}\label{O6B}
\mathcal{O}_{7\gamma} = \frac{e}{8\pi^2} m_b \bar{s}_{\alpha'} \sigma^{\alpha\beta}
(1+\gamma_5) b^{\alpha'} F_{\alpha\beta},\qquad
\mathcal{O}_{8G} = \frac{g_s}{8\pi^2} m_b \bar{s}^{\alpha'} \sigma^{\alpha\beta}
(1+\gamma_5)\Lambda^a_{\alpha'\beta'} b^{\beta'} G^a_{\alpha\beta}\;,
\end{equation}
with $\sigma^{\alpha\beta}=\frac{i}{2}[\gamma^\alpha, \gamma^\beta]$. The expressions of the WCs at the $\mu_b$ scale are given by
\begin{eqnarray}
\label{C7eff}
C_{7\gamma}^{(0)eff}(\mu_b) & = &
\eta^\frac{16}{23} C_{7\gamma}^{(0)}(\mu_W) + \frac{8}{3}
\left(\eta^\frac{14}{23} - \eta^\frac{16}{23}\right) C_{8G}^{(0)}(\mu_W) +
C_2^{(0)}(\mu_W)\sum_{i=1}^8 h_i \eta^{a_i},
\\
\label{C8eff}
C_{8G}^{(0)eff}(\mu_b) & = &
\eta^\frac{14}{23} C_{8G}^{(0)}(\mu_W)
+ C_2^{(0)}(\mu_W) \sum_{i=1}^8 \bar h_i \eta^{a_i},
\end{eqnarray}
with
\begin{equation}
\eta = \frac{\alpha_s(\mu_W)}{\alpha_s(\mu_b)},~~~\alpha_s(\mu_b) = \frac{\alpha_s(M_Z)}{1
- \beta_0 \frac{\alpha_s(M_Z)}{2\pi} \, \ln(M_Z/\mu_b)}, \qquad
\beta_0=\frac{23}{3}~,
\label{eq:asmumz}
\end{equation}
and
\begin{eqnarray}\label{c2}
C^{(0)}_2(\mu_W) &=& 1,\\
C^{(0)}_{7\gamma} (\mu_W) &=& -\frac{1}{2} D'(x_t, x_1, x_2, x_3)=-\frac{1}{2}\left\{D'_0(x_t)+ D'(x_1, x_2, x_3)\right\},\\
C^{(0)}_{8G}(\mu_W) &=& -\frac{1}{2} E'(x_t, x_1, x_2, x_3)=-\frac{1}{2}\left\{E'_0(x_t)+ E'(x_1, x_2, x_3)\right\}.
\end{eqnarray}
Apart from these, the other WCs vanish at the electroweak scale $\mu_W$. The superscript ``0'' indicates the leading logarithmic (LO) approximation. The values of $a_i$, $h_i$ and $\bar h_i$ can be obtained from \cite{Buras:2003mk}. The total (SM+NP) contributions at LO are represented by the functions $D'(x_t, x_1, x_2, x_3)$ and $E'(x_t, x_1, x_2, x_3)$, while the functions $D'_0(x_t)$ and $E'_0(x_t)$ denote the corresponding SM contributions at the electroweak scale \cite{Inami:1980fz}
\begin{equation}\label{dp0}
D'_0(x_t)= -{{(8x_t^3 + 5x_t^2 - 7x_t)}\over{12(1-x_t)^3}}+
{{x_t^2(2-3x_t)}\over{2(1-x_t)^4}}\ln x_t~,
\end{equation}
\begin{equation}\label{ep0}
E'_0(x_t)=-{{(x_t^3-5x_t^2-2x_t)}\over{4(1-x_t)^3}} + {3\over2}
{{x_t^2}\over{(1 - x_t)^4}} \ln x_t~,
\end{equation}
with $x_t\equiv \frac{m^2_t}{M^2_W}$. The functions corresponding to the electromagnetic and chromomagnetic dipole operators generated by the NP particles (from the diagrams in Fig.~\ref{fig2}) are given, respectively, by
\begin{eqnarray}
\label{deldp}
D'(x_1, x_2, x_3)&=&-\frac{\sqrt{2}}{G_F V^\ast_{tb} V_{ts}}\frac{f_2f_3}{8}\frac{1}{3} \left(\frac{\sin^2\theta_D}{M^2_{\rho_1}}h_b(x_1)+\frac{\cos^2\theta_D}{M^2_{\rho_2}}h_b(x_2)+\frac{1}{M^2_{\rho_3}}h_b(x_3)\right) \;,\\
\label{delep}
E'(x_1, x_2, x_3)&=&\frac{\sqrt{2}}{G_F V^\ast_{tb} V_{ts}}\frac{f_2f_3}{8}\left(\frac{\sin^2\theta_D}{M^2_{\rho_1}}h_b(x_1)+\frac{\cos^2\theta_D}{M^2_{\rho_2}}h_b(x_2)+\frac{1}{M^2_{\rho_3}}h_b(x_3)\right) \;,
\end{eqnarray}
while the function $h_b(x)$ is given in the Appendix~\ref{flav_app}.
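
As a quick numerical cross-check of the SM ingredients entering Eqs.\,(\ref{C7eff})--(\ref{ep0}), the short Python sketch below evaluates $\eta$ from Eq.\,(\ref{eq:asmumz}) together with the Inami--Lim functions $D'_0(x_t)$ and $E'_0(x_t)$. The numerical inputs ($\alpha_s(M_Z)$, $m_t$, $M_W$, $\mu_b$) are illustrative assumptions and are not the values used in our scan.
\begin{verbatim}
import math

alpha_s_MZ, MZ = 0.1181, 91.19      # assumed inputs (GeV)
mt, MW, mub = 172.5, 80.38, 4.8     # assumed masses/scale (GeV)
beta0 = 23.0 / 3.0

def alpha_s(mu):
    """One-loop running of Eq. (eq:asmumz), 5 active flavours."""
    return alpha_s_MZ / (1.0 - beta0 * alpha_s_MZ / (2.0 * math.pi)
                         * math.log(MZ / mu))

eta = alpha_s(MW) / alpha_s(mub)    # eta of Eq. (eq:asmumz)

def D0p(x):   # Eq. (dp0)
    return (-(8*x**3 + 5*x**2 - 7*x) / (12*(1 - x)**3)
            + x**2*(2 - 3*x) / (2*(1 - x)**4) * math.log(x))

def E0p(x):   # Eq. (ep0)
    return (-(x**3 - 5*x**2 - 2*x) / (4*(1 - x)**3)
            + 1.5 * x**2 / (1 - x)**4 * math.log(x))

xt = (mt / MW)**2
print("eta                  =", round(eta, 3))          # ~0.59
print("D'_0(x_t), E'_0(x_t) =", round(D0p(xt), 3), round(E0p(xt), 3))
print("C7(muW), C8(muW) SM  =", round(-0.5*D0p(xt), 3),
      round(-0.5*E0p(xt), 3))
\end{verbatim}
With these assumptions one recovers the familiar LO matching values $C^{(0)}_{7\gamma}(\mu_W)\simeq -0.19$ and $C^{(0)}_{8G}(\mu_W)\simeq -0.10$; obtaining the full $\mu_b$-scale coefficients in Eqs.\,(\ref{C7eff}) and (\ref{C8eff}) additionally requires the coefficients $a_i$, $h_i$ and $\bar h_i$ of \cite{Buras:2003mk}.
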
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1,angle=0]{bsg}
\caption{The electromagnetic and chromomagnetic penguin
diagrams that contribute to the decay $B\to X_s \gamma$ in
addition to the SM contribution.}
\label{fig2}
\end{center}
\end{figure}
In the SM the branching ratio of $B\rightarrow X_s\gamma$ has been estimated to a very high level of accuracy, including higher order QED and QCD corrections. For example, in refs.\;\cite{Chetyrkin:1996vx, Kagan:1998ym} one can find the full next-to-leading order (NLO) QCD and QED corrections for this process, obtained in two different ways. The present precision of the experimental data requires that one also includes next-to-next-to-leading order (NNLO) QCD corrections in the analysis. In this regard, the first calculation of the NNLO QCD corrections for this process in the SM was described in ref.\;\cite{Misiak:2006zs}, while a recent article \cite{Misiak:2015xwa} provides an updated and more complete NNLO QCD calculation. Following \cite{Misiak:2015xwa}, one can compute the branching ratio of $B\rightarrow X_s\gamma$ including NNLO QCD corrections in a NP scenario. Therefore, in the current article we adopt the same approach\footnote{This approach has also been used in the context of other BSM scenarios to estimate the NP effects for this process: for example, for the nonminimal universal extra dimensional model
\cite{Datta:2016flx} and for the two Higgs doublet model \cite{Arhrib:2017yby}.} (as given in \cite{Misiak:2015xwa}) to estimate the NP contribution to this process with NNLO QCD corrections
\begin{equation}\label{nnlo}
{\rm Br}^{\rm NNLO}(B\rightarrow X_s\gamma)\times10^{4}=(3.36\pm 0.23) -8.22\Delta C_7-1.99\Delta C_8.
\end{equation}
Here $\Delta C_7$ and $\Delta C_8$ represent the NP contributions to the WCs of the electromagnetic and chromomagnetic
dipole operators. In our convention, $\Delta C_7=-\frac12 D'(x_1,x_2,x_3)$ and $\Delta C_8=-\frac12 E'(x_1,x_2,x_3)$.
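
To illustrate how Eq.\,(\ref{nnlo}) is used in practice, the minimal sketch below evaluates the branching ratio for assumed values of $\Delta C_7$ and $\Delta C_8$ and checks whether it lies inside the $3\sigma$ experimental window adopted later in this work; the NP inputs are purely illustrative numbers, not outputs of our parameter scan.
\begin{verbatim}
def br_bsgamma_nnlo(dC7, dC8):
    """Br(B -> X_s gamma) x 1e4 at NNLO, central value of Eq. (nnlo)."""
    return 3.36 - 8.22 * dC7 - 1.99 * dC8

# purely illustrative NP Wilson coefficients
dC7, dC8 = 0.01, 0.02
br = br_bsgamma_nnlo(dC7, dC8)

# 3-sigma experimental window used in this work (x 1e4)
allowed = 2.84 <= br <= 3.80
print(f"Br(B->Xs gamma) = {br:.3f} x 1e-4, allowed: {allowed}")
\end{verbatim}
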
\section{\boldmath$(g-2)_\mu$ anomaly}\label{g2}
Using the Dirac equation one can define the magnetic moment $\vec{\mathbb{M}}$ of the muon in terms of its spin $\vec{\mathbb{S}}$ and gyromagnetic ratio ($g_{\mu}$) as
\begin{eqnarray}
\vec{\mathbb{M}}= g_{\mu} \dfrac{e}{2\,m_\mu} \vec{\mathbb{S}},
\label{mug2}
\end{eqnarray}
which is one of the most accurately measured physical quantities.
At tree level the value of $g_{\mu}$ is exactly ``2''.
In the SM one can calculate the loop corrections to this quantity,
which produce a small shift from ``2''. Hence, to quantify the deviation of
$g_{\mu}$ from its tree level value one defines the quantity
\begin{eqnarray}
a_{\mu} = \dfrac{g_{\mu}-2}{2}\,.
\end{eqnarray}
This quantity has been precisely measured, first by the CERN experiments
and later by the E821 experiment. The current average experimental
value is \cite{Tanabashi:2018oca}
\begin{eqnarray}
a_{\mu}^{\rm exp} = (116592091.0\pm 54\pm 33) \times 10^{-11}\,.
\label{mug2exp}
\end{eqnarray}
On the other hand, the total theoretical prediction for
this quantity, considering all sources of contributions
in the SM, is \cite{Tanabashi:2018oca}
\begin{eqnarray}
a_{\mu}^{\rm th} = (116591823.1\pm 34\pm 26) \times 10^{-11}\,.
\label{mug2th}
\end{eqnarray}
It is evident from Eqs.\,(\ref{mug2exp}) and (\ref{mug2th}) that the experimentally measured
and theoretically predicted values of $a_{\mu}$
are close to each other; however, there still exists
a disagreement between these two quantities at the $3.5\sigma$
level, which is \cite{Tanabashi:2018oca}
\begin{eqnarray}
\Delta a_{\mu} = a_{\mu}^{\rm exp} - a_{\mu}^{\rm SM}
=(268\pm 63\pm43) \times 10^{-11}\,.
\label{mug2delta}
\end{eqnarray}
Therefore, this anomaly with respect to the SM expectation
calls for the intervention of BSM theories in which one obtains
extra contributions from NP particles. In the present model\footnote{See
Ref.\,\,\cite{Lindner:2016bgg} for a review on $(g-2)_{\mu}$
in various BSM extensions.}, apart from the SM contribution,
there are two additional one loop diagrams (see Fig.~\ref{muong2}) in which the extra neutral gauge boson $Z_{\mu \tau}$
and the extra CP-even scalar $H_2$ are involved.
\begin{figure}[H]
\begin{center}
\subfloat[]{\label{fig3a}\includegraphics[scale=1,angle=0]{g2_Zp}}
\subfloat[]{\label{fig3b}\includegraphics[scale=1,angle=0]{g2_s}}
\caption{Relevant penguin diagrams that contribute to $(g-2)_\mu$ in addition to the SM.}
\label{muong2}
\end{center}
\end{figure}
The additional contribution from Fig.~\ref{fig3a} is given by \cite{Gninenko:2001hx, Baek:2001kca},
\begin{equation}\label{g2_Z}
\delta a_\mu^{Z_{\mu\tau}} = \frac{1}{8\pi^2}~
\left(a^2_{Z_{\mu\tau}}F^a_{Z_{\mu\tau}}(R_{Z_{\mu\tau}})-b^2_{Z_{\mu\tau}}F^b_{Z_{\mu\tau}}(R_{Z_{\mu\tau}})\right)\;
\end{equation}
with $R_{Z_{\mu\tau}}\equiv M^2_{Z_{\mu\tau}}/m^2_{\mu}$ and
\begin{eqnarray}
\label{azz}
a_{Z_{\mu\tau}}&=&\frac{g_2}{4\cos\theta_W}(1-4\sin^2\theta_W)\sin\theta_{\mu\tau}-\bigg(g_{Z_{\mu\tau}}-\frac 34 \frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\bigg)\cos\theta_{\mu\tau}\;,\\
\label{bzz}
b_{Z_{\mu\tau}}&=&-\frac{g_2}{4\cos\theta_W}\bigg(\sin\theta_{\mu\tau}-\sin\theta_W\epsilon\cos\theta_{\mu\tau}\bigg)\;,
\end{eqnarray}
\begin{eqnarray}
F^a_{Z_{\mu\tau}}(R_{Z_{\mu\tau}}) &=& \int_0^1 dx\, \frac{2x(1-x)^2}{(1
-x)^2+R_{Z_{\mu\tau}} x}
\;, \\
F^b_{Z_{\mu\tau}}(R_{Z_{\mu\tau}}) &=& \int_0^1 dx\,\frac{2(1-x)(3+x)}{(1
-x)^2+R_{Z_{\mu\tau}} x}\;.
\end{eqnarray}
Furthermore, the contribution from the extra CP-even scalar $H_2$ is given by \cite{Krawczyk:1996sm, Dedes:2001nx}
\begin{eqnarray}
\delta a_\mu^{H_2} = \frac{G_Fm_\mu^2}{4\pi^2 \sqrt{2}}~
\sin^2{\theta_s} ~R_{H_2}~ F_{H_2}(R_{H_2}) \;,
\label{g2_H2}
\end{eqnarray}
with $R_{H_2}\equiv m_\mu^2/M_{H_2}^2$~and
\begin{eqnarray}\label{g2_H2_int}
F_{H_2}(R_{H_2}) = \int_0^1 dx\, \frac{x^2(2-x)}{R_{H_2} x^2-x+1}.
\end{eqnarray}
However, we have checked that the contribution of the CP-even scalar $H_2$ is insignificant compared to that of
$Z_{\mu \tau}$ in the allowed parameter space.
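
For orientation, the sketch below numerically evaluates Eq.\,(\ref{g2_Z}) in the simplifying limit of vanishing $Z$-$Z_{\mu\tau}$ mixing ($\theta_{\mu\tau}\to 0$, $\epsilon\to 0$), for which Eqs.\,(\ref{azz}) and (\ref{bzz}) give $a_{Z_{\mu\tau}}\to -g_{Z_{\mu\tau}}$ and $b_{Z_{\mu\tau}}\to 0$; this limit and the chosen values of $g_{Z_{\mu\tau}}$ and $M_{Z_{\mu\tau}}$ are assumptions made only for illustration.
\begin{verbatim}
import math

m_mu = 0.10566                      # muon mass in GeV
g_zp, M_zp = 0.93e-3, 0.076         # illustrative benchmark values

def Fa(R, n=200000):
    """F^a of the loop integral above, simple midpoint rule."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += 2.0 * x * (1.0 - x)**2 / ((1.0 - x)**2 + R * x)
    return s * h

R = (M_zp / m_mu)**2
# theta_mutau -> 0, epsilon -> 0  =>  a = -g_zp, b = 0 (assumption)
delta_a_mu = g_zp**2 * Fa(R) / (8.0 * math.pi**2)
print(f"delta a_mu (Z_mutau only) = {delta_a_mu:.2e}")   # ~3e-9
\end{verbatim}
The result, a few times $10^{-9}$, indeed falls within the $2\sigma$ band of Eq.\,(\ref{mug2delta}), in line with the allowed points shown in Fig.\,\ref{Fig:muong-2} below.
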
\begin{figure}[h!]
\centering
\includegraphics[height=8cm,width=10cm,angle=0]{muon-g-2.pdf}
\caption{Allowed region in the $g_{Z_{\mu\tau}}-M_{Z_{\mu\tau}}$
plane which explains the deviation between the theoretical (SM)
prediction and the experimental result within the $1\sigma$ (green coloured points)
and $2\sigma$ (red coloured points) ranges, respectively.}
\label{Fig:muong-2}
\end{figure}
In Fig.\,\ref{Fig:muong-2}, we show by red coloured points the allowed region
in the $g_{Z_{\mu\tau}}-M_{Z_{\mu\tau}}$ plane which can explain the discrepancy between
the theoretical (SM) prediction and the experimentally measured value of
the anomalous magnetic moment of the muon within the $2\sigma$ range. The corresponding
$1\sigma$ allowed region is also indicated by green coloured points. We will
come back to this parameter space ($g_{Z_{\mu\tau}}-M_{Z_{\mu\tau}}$ plane)
with a detailed analysis, which includes constraints like dark matter relic density,
direct detection, observables related to rare $B$-meson decays ($R_{K^{(*)}}$, Br($B\rightarrow X_s\gamma$))
and also bounds from ongoing and future experiments like CCFR, LHC, DUNE, Borexino
etc. in the next section (see Fig.\,\ref{Fig:mzp-gzp} and related discussions).
\section{Dark Matter}
\label{dm}
We are now in a position to discuss the dark matter phenomenology.
The scalar sector of the present scenario contains
two $\mathbb{Z}_2$-odd scalar representations: one of them
is an SU$(2)_{L}$ doublet $\Phi$ carrying a nonzero $L_{\mu}-L_{\tau}$
charge, while the other is a gauge singlet scalar $S$. As we have seen
earlier in Section \ref{model}, the term proportional to $\lambda_8$
in the scalar potential (Eq.\,\,\ref{Vpot}) enforces a mixing between
the CP-even component $\phi^0$ of the doublet $\Phi$ and the singlet
$S$. Therefore, in the odd sector we have three physical neutral scalars
namely, $\rho_1$, $\rho_2$ and $\rho_3$, out of which $\rho_1$ and
$\rho_2$ are two mutually orthogonal linear combinations of $S$
and $\phi^0$ while $\rho_3$ coincides with the CP-odd component
$a^0$ as the latter does not have any mixing with others. Being
$\mathbb{Z}_2$-odd, the lightest one among the neutral scalars
$\rho_1$, $\rho_2$ and $\rho_3$ is automatically stable and
can be an excellent dark matter candidate of the Universe.
In this work, we consider $\rho_1$ as the potential dark matter candidate
and depending upon the dark sector mixing angle $\theta_D$,
$\rho_1$ will be either ``singlet-like" or ``doublet-like" or
a mixed state. Later in this Section, we will show that although the
combined effect of the dark matter relic density bound and
the flavour physics anomalies (including $(g-2)_\mu$) considered
in this work dictates that the dark matter candidate $\rho_1$ is mostly
a ``singlet-like'' state, its freeze-out process involves extra annihilation
channels involving the $L_{\mu}-L_{\tau}$ gauge boson $Z_{\mu\tau}$, making this
scenario significantly different from the standard Scalar Singlet
dark matter case \cite{McDonald:1993ex, Burgess:2000yq, Biswas:2011td, Cline:2013gha}.
The viability of the proposed dark matter candidate $\rho_1$ has been
investigated first by computing its relic density\footnote{Here, DM is the short form of dark matter.}
$\Omega_{\rm DM} h^2$. This requires the comoving number density
$Y$ at the present epoch ($T=T_0$, where $T_0$ is the present temperature of the Universe),
which is a solution of the Boltzmann equation involving
all relevant annihilation and co-annihilation processes
in the collision term. The Boltzmann equation in terms of
$Y$ is given by \cite{Gondolo:1990dk, Griest:1990kh, Edsjo:1997bg},
\begin{eqnarray}
\dfrac{dY}{dx} = -\left(\dfrac{45\,G}{\pi}\right)^{-\frac{1}{2}}
\dfrac{M_{\rho_1}\,\sqrt{g_{\star}}}{x^2}
\langle{\sigma {\rm v}}\rangle_{\rm eff}\,
(Y^2-(Y^{\rm eq})^2)\,,
\label{eq:BEapprox}
\end{eqnarray}
where $Y=\sum_i Y_i$ with
$Y_i=\dfrac{n_i}{\rm s}$ being the comoving number density of $\mathbb{Z}_2$-odd particle
$i$ having number density $n_i$ and ${\rm s}$ stands for the entropy density
of the Universe. Moreover, $x=\dfrac{M_{\rho_1}}{T}$ is a dimensionless
variable and $G$ is Newton's gravitational constant. The function
$g_{\star}$ \cite{Gondolo:1990dk} depends on the degrees of freedom associated with the entropy and energy densities
of the Universe. The quantity $\langle{\sigma {\rm v}}\rangle_{\rm eff}$ has
been defined as \cite{Griest:1990kh}
\begin{eqnarray}
\langle{\sigma {\rm v}}\rangle_{\rm eff} =
\sum_{i\,j}\langle{\sigma_{i\,j} {\rm v}_{i\,j}}\rangle
\times r_i\,r_j\,,
\label{eq:sigmaveff}
\end{eqnarray}
where, $\langle{\sigma_{i\,j} {\rm v}_{i\,j}}\rangle$
is the thermal averaged annihilation cross section between
particle $i$ and $j$ having relative velocity ${\rm v}_{i\,j}$.
$\langle{\sigma_{i\,j} {\rm v}_{i\,j}}\rangle$ has the following
expression in terms of cross section $\sigma_{i\,j}$,
\begin{eqnarray}
\langle{\sigma_{i\,j} {\rm v}_{i\,j}}\rangle &=&
\frac{1}{2\,M^2_{i}\,M^2_{j}\,T\,{\rm K}_2\left(\dfrac{M_i}{T}\right)\,
{\rm K}_2\left(\dfrac{M_j}{T}\right)} \times \int^{\infty}_{(M_i+M_j)^2}
\sigma_{ij}\,\,p^2_{ij}\,\sqrt{s}\,{\rm K}_1
\left(\frac{\sqrt{s}}{T}\right)\,ds\,, \nonumber \\
p_{ij} &=& \dfrac{\sqrt{s - (M_i+M_j)^2}
\sqrt{s-(M_i-M_j)^2}}{2\,\sqrt{s}}\,,
\label{eq:sigmavij}
\end{eqnarray}
with
\begin{eqnarray}
r_i = \dfrac{Y^{\rm eq}_i}{Y} = \dfrac{n^{\rm eq}_i}{n}=
\dfrac{g_i\left(1+\Delta_i\right)^{3/2}\exp[-\Delta_i\,x]}
{\sum_{i} g_i\left(1+\Delta_i\right)^{3/2}\exp[-\Delta_i\,x]}\,,
\label{eq:ri}
\end{eqnarray}
where ${\rm K}_i$ is the $i^{\rm th}$ order modified Bessel function of
the second kind and $s$ is the Mandelstam variable. Further, $Y_i^{\rm eq}$ and $n^{\rm eq}_i$
are the equilibrium values of $Y_i$ and $n_i$ respectively, while $n=\sum_i n_i$ is the
total number density of all the odd sector particles. The total density is
the relevant quantity, rather than the individual $n_i$s,
since all heavier particles which survive annihilation will
eventually decay into the lightest odd particle (LOP), here $\rho_1$. This is the reason for
expressing the Boltzmann equation in terms of the total comoving number
density $Y$ instead of the individual $Y_i$s. In the above,
$\Delta_i=\dfrac{M_{i}-M_{\rho_1}}{M_{\rho_1}}$ represents
the mass splitting between the LOP and the other heavier $\mathbb{Z}_2$-odd
particles. After implementing the present model in {\tt FeynRules}~\cite{Alloul:2013bka}, we have solved the Boltzmann equation down to $T=T_0$
using \texttt{micrOMEGAs} \cite{Belanger:2013oya}. Finally, we have obtained
$Y(T_0)$ which is related to the relic density
of LOP through the following relation \cite{Edsjo:1997bg}
\begin{eqnarray}
\Omega_{\rm DM} h^2 = 2.755\times 10^8\,\left(\dfrac{M_{\rho_1}}
{\rm GeV}\right)\,Y(T_0)\,.
\label{eq:omega}
\end{eqnarray}
Relic density $\Omega_{\rm DM} h^2$ of dark matter
has been measured precisely by satellite borne experiments
like Planck and WMAP and its present acceptable range
is $0.1172 \leq \Omega_{\rm DM} h^2 \leq 0.1226$
at 67\% confidence level (C.L.) \cite{Ade:2015xua}.
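
To illustrate the freeze-out machinery encoded in Eqs.\,(\ref{eq:BEapprox}) and (\ref{eq:omega}) without running the full numerical chain, one may integrate the Boltzmann equation for a single species with a constant effective cross section. The sketch below does this under the simplifying assumptions of a temperature independent $\langle\sigma {\rm v}\rangle_{\rm eff}$, a constant $g_{\star}\simeq 100$ and the nonrelativistic approximation for $Y^{\rm eq}$; all numbers are illustrative only and are not taken from our scan.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G     = 6.708e-39          # Newton's constant in GeV^-2
M_dm  = 100.0              # illustrative DM mass in GeV
gstar = 100.0              # assumed constant g_* = g_*s
sigv  = 2.0e-9             # assumed constant <sigma v>_eff in GeV^-2

lam = np.sqrt(np.pi / (45.0 * G)) * M_dm * np.sqrt(gstar) * sigv

def Yeq(x):
    # nonrelativistic equilibrium yield, one real scalar d.o.f.
    return 0.145 * (1.0 / gstar) * x**1.5 * np.exp(-x)

def dYdx(x, Y):
    return [-(lam / x**2) * (Y[0]**2 - Yeq(x)**2)]

sol = solve_ivp(dYdx, (5.0, 1000.0), [Yeq(5.0)],
                method="Radau", rtol=1e-8, atol=1e-30)
Y0 = sol.y[0, -1]
omega_h2 = 2.755e8 * M_dm * Y0        # Eq. (eq:omega)
print(f"Y(T0) = {Y0:.2e},  Omega h^2 = {omega_h2:.3f}")
\end{verbatim}
With these inputs one obtains $\Omega_{\rm DM}h^2$ of order $0.1$, the familiar statement that $\langle\sigma {\rm v}\rangle_{\rm eff}\sim 10^{-9}\,{\rm GeV}^{-2}$ reproduces the observed abundance; the actual analysis relies on the full temperature dependent $\langle\sigma {\rm v}\rangle_{\rm eff}$, including co-annihilations, as computed by \texttt{micrOMEGAs}.
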
\begin{figure}[h!]
\includegraphics[scale=0.85]{DD.pdf}
\caption{Feynman diagram for the elastic scattering of
$\rho_1$ with nucleon $N$ through the exchange of scalar bosons
$H_1$ and $H_2$.}
\label{Fig:DD}
\end{figure}
Apart from this, one has to take into account the latest bound
on dark matter nucleon scattering cross section
from the ``ton-scale" direct detection experiment
namely XENON1T \cite{Aprile:2018dbl}, which till now provides
the most stringent upper bound on dark matter nucleon
spin independent scattering cross section ($\sigma_{\rm SI}$)
for dark matter mass ranging from 6 GeV to 1 TeV. Since the dark matter
candidate of the present scenario is a scalar, it has only spin independent
scattering with nucleons, and such scattering proceeds only
through the scalar bosons $H_1$ and $H_2$. Feynman diagrams
of such elastic scattering $\rho_1 + N \rightarrow \rho_1 +N$
are shown in Fig. \ref{Fig:DD}. The corresponding expression of $\sigma_{\rm SI}$
is given by
\begin{eqnarray}
\sigma_{\rm SI} = \dfrac{\mu^2_{\rm red}}{4\pi}\left[\dfrac{M_N\,f_N}
{M_{\rho_1}\,v_1}\left(\dfrac{\,g_{H_1\rho_1\rho_1}}{M^2_{H_1}}
+\dfrac{\,g_{H_2\rho_1\rho_1}}{M^2_{H_2}}\right)\right]^2\,,
\label{eq:sigmasi}
\end{eqnarray}
where $g_{H_1(H_2)\rho_1\rho_1}$ is the coupling between $H_1$($H_2$)
and a pair of $\rho_1$. Expressions of these couplings are listed
in Appendix \ref{Dmcouplings}. Moreover, $f_N$ and $M_N$ are nuclear form
factor and nucleon mass respectively. For dark matter
scattering mediated by scalars $f_N\sim 0.3$ \cite{Cline:2013gha}.
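
A quick numerical evaluation of Eq.\,(\ref{eq:sigmasi}) is sketched below; the trilinear couplings $g_{H_1\rho_1\rho_1}$ and $g_{H_2\rho_1\rho_1}$, the heavy scalar mass and the dark matter mass are hypothetical inputs chosen only to indicate the order of magnitude and are not outputs of our scan.
\begin{verbatim}
import math

GeV2_to_cm2 = 3.894e-28        # 1 GeV^-2 = 0.3894 mb

M_N, f_N   = 0.939, 0.3        # nucleon mass (GeV) and form factor
v1         = 246.0             # SM-like vev in GeV
M_rho1     = 26.5              # DM mass in GeV (illustrative)
M_H1, M_H2 = 125.0, 400.0      # scalar masses in GeV (illustrative)
g1, g2     = 0.4, 0.1          # hypothetical trilinear couplings (GeV)

mu_red = M_N * M_rho1 / (M_N + M_rho1)
amp    = (M_N * f_N / (M_rho1 * v1)) * (g1 / M_H1**2 + g2 / M_H2**2)
sigma_SI = mu_red**2 / (4.0 * math.pi) * amp**2      # in GeV^-2

print(f"sigma_SI = {sigma_SI * GeV2_to_cm2:.2e} cm^2")
\end{verbatim}
With these hypothetical inputs $\sigma_{\rm SI}$ comes out at the level of a few times $10^{-47}\,{\rm cm}^2$, i.e. below the current XENON1T limit in this mass range but within reach of upcoming ``multi-ton-scale'' experiments.
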
We already know that the non-observation of any dark matter
signal at direct detection experiments imposes a severe upper
bound on $\sigma_{\rm SI}$ as a function of the dark matter mass.
From the above expression of $\sigma_{\rm SI}$, it can be
clearly seen that such an exclusion limit on $\sigma_{\rm SI}$
in turn puts an upper bound on the involved couplings
$g_{H_1\rho_1\rho_1}$ and $g_{H_2\rho_1\rho_1}$.
Moreover, the coupling of the SM Higgs to $\rho_1\rho_1$ for the case $M_{H_1}> 2\,M_{\rho_1}$
is also constrained by the maximum allowed
Higgs invisible decay width. At present, the upper limit on the
invisible branching fraction of the SM Higgs boson is 0.24
at 95\% C.L. \cite{Khachatryan:2016whc}.
In the present model, the SM-like Higgs boson, in addition
to its ``standard decay modes'', can also decay into the
$Z_{\mu\tau}Z_{\mu\tau}$, $ZZ_{\mu\tau}$, $\rho_1\rho_1$ and
$\rho_1\rho_2$ final states\footnote{In this work, we are
focusing on a low mass $Z_{\mu\tau}$ ($\sim 1$ MeV$-$100 MeV)
to address the $(g-2)_\mu$ anomaly.}. The decay widths of these processes
are given below:
\begin{eqnarray}
\Gamma_{H_1\rightarrow Z_{\mu\tau}Z_{\mu\tau}} &=&
\dfrac{g_{H_1Z_{\mu\tau}Z_{\mu\tau}}^2\,M^3_{H_1}}{128\,\pi M^4_{Z_{\mu\tau}}}
\left(12 \frac{M^4_{Z_{\mu\tau}}}{M^4_{H_1}} -
4 \frac{M^2_{Z_{\mu\tau}}}{M^2_{H_1}} + 1\right)
\sqrt{1-4\frac{M^2_{Z_{\mu\tau}}}{M^2_{H_1}}}\,,
\end{eqnarray}
\begin{eqnarray}
\Gamma_{H_1\rightarrow ZZ_{\mu\tau}} &=&
\dfrac{g^2_{H_1ZZ_{\mu\tau}}}{64\,\pi\,M_{H_1}}
\left(8+\dfrac{\left(M^2_{H_1}-M^2_{Z_{\mu\tau}}-M^2_{Z}\right)^2}
{M^2_{Z}M^2_{Z_{\mu\tau}}}\right)
\sqrt{1-\left(\dfrac{M_Z+M_{Z_{\mu\tau}}}{M_{H_1}}\right)^2} \times \nonumber \\
&&\sqrt{1-\left(\dfrac{M_Z-M_{Z_{\mu\tau}}}{M_{H_1}}\right)^2}\,\,,\\
\Gamma_{H_1\rightarrow\rho_1\rho_1} &=& \dfrac{g^2_{H_1\rho_1\rho_1}}{32\,\pi\,M_{H_1}}\,
\sqrt{1-4\frac{M^2_{\rho_1}}{M^2_{H_1}}}\,\,,\\
\Gamma_{H_1\rightarrow\rho_1\rho_2} &=&
\dfrac{g^2_{H_1\rho_1\rho_2}}{16\,\pi\,M_{H_1}}
\sqrt{1-\left(\dfrac{M_{\rho_1}+M_{\rho_2}}{M_{H_1}}\right)^2}
\sqrt{1-\left(\dfrac{M_{\rho_2}-M_{\rho_1}}{M_{H_1}}\right)^2}\,\,,
\end{eqnarray}
and
\begin{eqnarray}
\Gamma_{H_1}^{\rm Inv} = \Gamma_{H_1\rightarrow Z_{\mu\tau}Z_{\mu\tau}}
+ \Gamma_{H_1\rightarrow ZZ_{\mu\tau}} +
\Gamma_{H_1\rightarrow \rho_1\rho_2} +
\Gamma_{H_1\rightarrow \rho_1\rho_1}\,.
\end{eqnarray}
Expressions of all the couplings involved in the above decay widths are
given in Appendix \ref{Dmcouplings}. According to the latest results from the LHC,
$\Gamma_{H_1}^{\rm Inv} \leq 0.24\,\,\Gamma^{\rm SM}_{\rm Higgs}$,
where $\Gamma^{\rm SM}_{\rm Higgs} = 4.13$ MeV is the total
decay width of the SM Higgs boson \cite{Denner:2011mq}.
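
The following sketch evaluates the $H_1\rightarrow\rho_1\rho_1$ partial width from the expression above and applies the condition $\Gamma_{H_1}^{\rm Inv} \leq 0.24\,\Gamma^{\rm SM}_{\rm Higgs}$; the portal coupling is a hypothetical illustrative number and, for brevity, the $Z_{\mu\tau}$ and $\rho_1\rho_2$ modes (assumed to be kinematically closed or negligible for this choice of masses) are omitted.
\begin{verbatim}
import math

M_H1   = 125.0          # SM-like Higgs mass in GeV
M_rho1 = 26.5           # DM mass in GeV (illustrative)
g_hrr  = 0.4            # hypothetical H1-rho1-rho1 coupling in GeV
Gam_SM = 4.13e-3        # total SM Higgs width in GeV

def gamma_H_to_rho1rho1(g, mH, m):
    if mH < 2.0 * m:
        return 0.0
    return g**2 / (32.0 * math.pi * mH) * math.sqrt(1.0 - 4.0 * m**2 / mH**2)

Gam_inv = gamma_H_to_rho1rho1(g_hrr, M_H1, M_rho1)
print(f"Gamma_inv = {Gam_inv*1e3:.3f} MeV,"
      f"  allowed: {Gam_inv <= 0.24 * Gam_SM}")
\end{verbatim}
For this choice of parameters the invisible width is at the $10^{-2}$ MeV level and easily satisfies the bound; larger portal couplings quickly saturate it, which is why the invisible width and the direct detection limit constrain essentially the same combination of parameters.
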
\begin{figure}[h!]
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_Z1Z1_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_Z1Z1_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_Z1Z1_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_Z1Z1_5.pdf}\\
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_ZZ1_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ZZ1_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ZZ1_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ZZ1_5.pdf} \\
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_ZZ_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ZZ_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ZZ_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ZZ_5.pdf} \\
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_ww_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ww_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ww_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_ww_5.pdf} \\
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_hh_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_hh_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_hh_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_hh_5.pdf} \\
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_hh2_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_hh2_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_hh2_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_hh2_5.pdf}\\
\vskip 0.05in
\includegraphics[height=2.7cm,width=2.7cm,angle=0]{RD_bb_3.pdf}
\includegraphics[height=2.7cm,width=2.7cm,angle=0]{RD_bb_4.pdf}
\includegraphics[height=2.7cm,width=2.7cm,angle=0]{RD_bb_1_2.pdf}
\includegraphics[height=2.7cm,width=2.7cm,angle=0]{RD_cc_1_2.pdf}
\includegraphics[height=2.7cm,width=2.7cm,angle=0]{RD_tptm_1_2.pdf}
\includegraphics[height=2.7cm,width=2.7cm,angle=0]{RD_tt_1_2.pdf}
\caption{Feynman diagrams of dark matter annihilation channels
contributing significantly to the freeze-out process.}
\label{Fig:FD_anni}
\end{figure}
\begin{figure}[h!]
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_21_hh_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_21_hh_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_21_hh_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_21_hh_5.pdf}
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_22_hh_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_22_hh_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_22_hh_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_22_hh_5.pdf}
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_33_hh_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_33_hh_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_33_hh_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_33_hh_5.pdf}
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_pm_WW_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_WW_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_WW_4_5_6.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_WW_7_8_9.pdf}
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_pm_AA_1.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_AA_2_3.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_AA_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_AA_5.pdf}
\vskip 0.05in
\hskip -0.2in
\includegraphics[height=2.7cm,width=3.0cm,angle=0]{RD_pm_AZ_1.pdf}
\hskip 0.1in
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_AZ_4.pdf}
\hskip 0.1in
\includegraphics[height=2.7cm,width=4.3cm,angle=0]{RD_pm_AZ_5.pdf}
\caption{Co-annihilation channels contributing to the relic density of
$\rho_1$ in the high mass region.}
\label{Fig:FD_Coanni}
\end{figure}
The dark matter candidate ($\rho_1$) of the present scenario is
a thermal WIMP, which remains in equilibrium with the
thermal bath until its freeze-out through annihilations and co-annihilations into
various final states allowed by the symmetries of the model.
In this work, we have considered $M_{\rho_1}$ between 10 GeV
and 1 TeV. For low dark matter masses (i.e. $M_{\rho_1}<100$ GeV),
$\rho_1$ predominantly annihilates into a pair of $Z_{\mu\tau}$. In some
cases, depending upon the relevant couplings, $b\bar{b}$, $c\bar{c}$
and $\tau\bar{\tau}$ final states are also possible. Moreover, co-annihilations among
the $\mathbb{Z}_2$-odd particles in the low mass regime are insignificant,
as we have taken all heavier $\mathbb{Z}_2$-odd particle masses to be
larger than 100 GeV throughout this analysis to evade experimental
bounds \cite{Lundstrom:2008ai}. Alternatively, for the heavier mass range
of $\rho_1$, there are various possibilities. First of all depending
upon the mass splitting between $\rho_1$ and other $\mathbb{Z}_2$-odd
particles (parametrised by a quantity $\Delta_i$, defined earlier)
there can either be annihilation or co-annihilations. In the former
case, depending on the values of the associated couplings $\rho_1 \rho_1 \rightarrow
Z_{\mu\tau}Z_{\mu\tau},\,ZZ_{\mu\tau},\,H_2H_2,\,H_1H_1,\,H_1H_2,\,W^+W^-,\,ZZ$
and $t\bar{t}$ final states can be important. On the other hand, co-annihilation
plays a pivotal role during the freeze-out of $\rho_1$ when $\Delta_i \leq 0.2$
\cite{Griest:1990kh} for any $\mathbb{Z}_2$-odd particle
$i$ ($i=\rho_2$, $\rho_3$, $\phi^{\pm}$).
In these circumstances, various co-annihilations among the dark sector
particles, such as $\rho_1\rho_2\rightarrow H_1H_1$, $\rho_i\rho_i \rightarrow H_1 H_1\,\,(i=1-3)$,
$\phi^+\phi^- \rightarrow W^+W^-,\,\gamma\gamma,\,\gamma Z,\,ZZ$ etc., become
predominant. Feynman diagrams of all significant annihilation and
co-annihilation channels are shown in Figs.\,\ref{Fig:FD_anni}
and \ref{Fig:FD_Coanni} respectively.
\begin{figure}[h!]
\includegraphics[height=9cm,width=12cm,angle=0]{MDM_sigmaSI_all_flavour.pdf}
\caption{Allowed parameter space in the $\sigma_{\rm SI}$ vs $M_{\rho_1}$ plane
subject to the various experimental bounds indicated in the legend.}
\label{Fig:mdm-sigmaSI}
\end{figure}
In Fig.\,\ref{Fig:mdm-sigmaSI}, we have plotted the spin independent scattering
cross section $\sigma_{\rm SI}$ of $\rho_1$ against its mass $M_{\rho_{1}}$,
varied between 10 GeV and 1 TeV. In this plot all red coloured points
satisfy the relic density constraint, i.e., $0.1172\leq\Omega_{\rm DM}h^2\leq0.1226$,
as well as the bound related to the Higgs invisible decay modes.
The blue dashed-dot line represents the latest bound on $\sigma_{\rm SI}$
from the XENON1T experiment. All the parameter space below the
blue dashed-dot line is still allowed and can be probed in the near future
by ``multi-ton-scale'' direct detection experiments like XENONnT. Therefore, if we consider
only the direct constraints, namely relic density, direct detection and Higgs
invisible decay, there is still enough parameter space
left for the entire considered mass range of $\rho_1$ (although a small portion,
mostly in the low mass dark matter regime,
has already been ruled out). However, the situation does not remain the same when one
tries to explain the flavour physics anomalies and the $(g-2)_\mu$ anomaly within this framework. The allowed parameter space in
$\sigma_{\rm SI}-M_{\rho_1}$ plane gets severely restricted
when we impose bound on the NP contribution
to the WC $C_9$ (i.e. $-1.26\leq\Delta{C_9}\leq-0.63$
in $2\sigma$ range \cite{Aebischer:2019mlg}) to explain $R_{K^{(*)}}$ anomalies.
This has been indicated by green coloured points in the above plot where
one can notice that the low dark matter mass regime (i.e. $M_{\rho_1}\la 100$ GeV)
is the most favourable to address $R_{K^{(*)}}$ anomalies. This can be understood
from the behaviour of $\Delta{C_9}$ (Eq.\,(\ref{dc9}))
with respect to the mass of ${\rho_1}$ as illustrated in
Fig.\,\ref{Fig:dc9-vs-mrho1}, where the magnitude
of $\Delta{C_9}$ sharply decreases with the increase of
$M_{\rho_1}$. Furthermore, in this framework, we have also tried to accommodate
both Br($B\rightarrow X_s \gamma$) and the $(g-2)_{\mu}$ anomaly,
two long-standing precision tests of the SM. These are indicated by cyan and yellow coloured
points respectively in the $\sigma_{\rm SI}-M_{\rho_1}$ plane.
For the branching ratio of $B\rightarrow X_s \gamma$, we have
used the $3\sigma$ range
($2.84 \leq {\rm Br}(B\rightarrow X_s \gamma)\times 10^4
\leq 3.80$ \cite{Amhis:2016xyh}), while for $(g-2)_{\mu}$ the $2\sigma$ band,
i.e. $115.44 \leq \Delta{a_{\mu}} \times 10^{11} \leq 420.56$ \cite{Tanabashi:2018oca},
has been taken into account\footnote{Here we would like to mention another constraint, namely $B^0_s-\bar{B^0_s}$ mixing, which could be relevant for the present scenario. However, the NP contributions to $B^0_s-\bar{B^0_s}$ mixing arise in the present scenario via box diagrams and are negligibly small. The reason is that, apart from the dark matter particle, all non-standard particles which generate the box diagrams are sufficiently massive (especially the non-standard fermion $\chi$, whose mass we have taken to be $\geq 1$ TeV throughout the analysis). At this point it is relevant to mention that, from the recent 13 TeV LHC data \cite{Aaboud:2018pii}, a down-type quark ($\mathcal{B}$) with charge (-1/3) is excluded for masses below 1.22 TeV for the decay channels $\mathcal{B}\to Z b/Wt/ {\rm SM~Higgs}~b$. However, this bound is not applicable in our case, since in our model the field $\chi$ is odd under the $\mathbb{Z}_2$ symmetry, so such decays are forbidden by the $\mathbb{Z}_2$ symmetry. Nevertheless, to be conservative, we use $M_\chi \geq 1$ TeV in our analysis. Hence, the loop functions are substantially suppressed and the NP contribution to $B^0_s-\bar{B^0_s}$ mixing does not put any stringent constraint on our scenario.}.
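
The $2\sigma$ window quoted above follows from Eq.\,(\ref{mug2delta}) by combining the two quoted uncertainties in quadrature, as the short check below illustrates.
\begin{verbatim}
import math

central, err1, err2 = 268.0, 63.0, 43.0      # in units of 1e-11
err_tot = math.sqrt(err1**2 + err2**2)

lo, hi = central - 2.0 * err_tot, central + 2.0 * err_tot
print(f"2-sigma band: {lo:.2f} <= Delta a_mu x 1e11 <= {hi:.2f}")
# 115.45 and 420.55, i.e. the band quoted above up to rounding
\end{verbatim}
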
We have checked that in the low dark matter mass
region ($M_{\rho_1}\leq 100$ GeV), $\rho_1$
predominantly annihilates into a $Z_{\mu\tau}$ pair. This
makes the dark matter physics strongly correlated with the
physics of rare $B$-decays and the anomalous magnetic moment of the muon,
where the role of the new gauge boson $Z_{\mu\tau}$ is crucial.
Moreover, it also helps us to evade the strong bounds
on low mass scalar dark matter \cite{Athron:2017kgt, Casas:2017jjg, Biswas:2017dxt}
coming from direct detection \cite{Aprile:2018dbl} and
indirect detection \cite{Ahnen:2016qkx} experiments, and also from
the collider bound on the Higgs invisible branching fraction \cite{Khachatryan:2016whc},
all of which apply when the $b\bar{b}$ final state is the principal annihilation channel.
Therefore, although $\rho_1$ is dominantly a gauge singlet $\mathbb{Z}_2$-odd
scalar field, its mixing with another $\mathbb{Z}_2$-odd field
(part of an ${\rm SU(2)}_{L}$ doublet) carrying nonzero $L_{\mu}-L_{\tau}$
charge makes the dynamics of our dark matter
candidate strikingly different from the standard
Scalar Singlet dark matter scenario \cite{McDonald:1993ex,
Burgess:2000yq, Biswas:2011td, Cline:2013gha}.
Finally, for completeness we would like to mention that the yellow
coloured points in the $\sigma_{\rm SI}-M_{\rho_1}$ plane are those
which satisfy all the experimental results considered in
this work.
\begin{figure}[t!]
\includegraphics[height=7.5cm,width=8.2cm,angle=0]{MDM-Mrho2.pdf}
\includegraphics[height=7.5cm,width=8.2cm,angle=0]{MDM-thetaD.pdf}
\caption{Left(Right) panel: Allowed region in
$M_{\rho_2}-M_{\rho_1}$($\theta_D-M_{\rho_1}$) plane from
various experimental results considered in this work.}
\label{Fig:mrho1-mrho2}
\end{figure}
In the left panel of Fig.\,\ref{Fig:mrho1-mrho2}, we have
shown ranges of $M_{\rho_1}$ and $M_{\rho_2}$ allowed
by various experimental results. The allowed region in the
$M_{\rho_2}-M_{\rho_1}$ plane from both the relic
density and the direct detection bounds is indicated
by the green coloured points. Similar to the previous
plot in Fig.\,\ref{Fig:mdm-sigmaSI}, here also, once the various
flavour physics constraints are imposed, the allowed parameter
space shrinks to a smaller region concentrated mainly
in the low mass regime of $\rho_1$. The parameter
space which reproduces $\Delta{C_9}$
in $2\sigma$ range for explaining $R_{K^{(*)}}$ anomalies has been
shown by the blue coloured points. On the other hand, the red coloured
points are indicating those values of $M_{\rho_1}$ and $M_{\rho_2}$
which in addition to above mentioned experimental results also
satisfy Br($B\rightarrow X_s\gamma$) in the $3\sigma$ range. Moreover,
as we already know, the dark matter candidate $\rho_1$ is an admixture
of a real scalar singlet $S$ and the CP-even neutral component ($\phi^0$) of the
doublet $\Phi$. While both $S$ and $\Phi$ are $\mathbb{Z}_2$-odd,
only $\Phi$ has nonzero $L_{\mu}-L_{\tau}$ charge. Therefore, the
interaction of $\rho_1$ with the $L_{\mu}-L_{\tau}$ gauge boson $Z_{\mu\tau}$
(e.g. the annihilation of $\rho_1$ into a pair of $Z_{\mu\tau}$) is governed
by the mixing angle $\theta_D$. The larger the mixing angle, the larger the annihilation
rate into the $Z_{\mu\tau}Z_{\mu\tau}$ final state, making $\rho_1$ less abundant
at the present epoch. Therefore, the relic density bound puts an upper limit
on the maximum allowed value of $\theta_D$, which is more stringent
in the low dark matter mass region where $Z_{\mu\tau}Z_{\mu\tau}$
is the principal annihilation mode. This feature is clearly visible
in the right panel of Fig.\,\ref{Fig:mrho1-mrho2}, where we
have shown the allowed range of $\theta_D$ with respect to $M_{\rho_1}$.
However, in the high mass regime ($M_{\rho_1}\geq500$ GeV), large values of $\theta_D\ga0.3$ rad
are still allowed because for such large $\theta_D$, $\rho_1$ is mostly an ${\rm SU(2)}_L$
doublet like state (similar to the Inert Doublet dark matter in high
mass range \cite{Hambye:2009pw, Chakrabarty:2015yia, Biswas:2017dxt})
which attains the present abundance of dark matter through co-annihilations with other
$\mathbb{Z}_2$-odd fields into various bosonic final states (both vector and scalar).
Moreover, we have also seen earlier (cf. Fig.\,\ref{Fig:dc9-vs-mrho1})
that the magnitude of $\Delta{C_9}$ (Eq.\,(\ref{dc9})) decreases with increasing masses
of the particles $\rho_1$, $\rho_2$ and $\rho_3$ involved
in the $b\rightarrow s$ transition loops. Now, although the masses of
$\rho_1$ and $\rho_2$ are indeed free parameters of the
present model, the mass of the remaining scalar $\rho_3$
becomes fixed for a particular choice of $M_{\rho_1}$,
$M_{\rho_2}$ and $\theta_D$ via Eq.\,(\ref{mho3-relation}).
Here, $M_{\rho_3}$ varies between
$M_{\rho_2}$ and $M_{\rho_1}$ as we vary $\theta_D$
from $0$ to $\pi/2$. As we are working
in the limit $M_{\rho_1}<M_{\rho_2}$ (since $\rho_1$ is our
dark matter candidate), a large $\theta_D$
ensures a low mass for $\rho_3$ (using Eq.\,(\ref{mho3-relation}))
and hence enhances the loop contribution to $\Delta{C_9}$.
Thus, the $R_{K^{(*)}}$ anomalies prefer larger values of $\theta_D$,
in contrast with the low mass regime
of $\rho_1$, where the relic density bound favours relatively smaller
values of the mixing angle in order to suppress a too large
annihilation rate into $Z_{\mu\tau}Z_{\mu\tau}$. As a result, both the dark matter
relic density bound and the $R_{K^{(*)}}$ anomalies can be
addressed simultaneously for $0.01< \theta_D\,({\rm rad})< 0.3$, with
$M_{\rho_1}$ mostly concentrated below 100 GeV. This has been
demonstrated by the blue coloured points in the $\theta_D-M_{\rho_1}$ plane.
Similar to the left panel, here also the red coloured points represent the portion of the parameter space which additionally satisfies the constraint from Br($B\rightarrow X_s\gamma$).
\begin{figure}[h!]
\includegraphics[height=8cm,width=10cm,angle=0]{Mrho2-Mrho3.pdf}
\caption{Allowed values of $M_{\rho_2}$ and $M_{\rho_3}$
satisfying all the considered experimental constraints. The degeneracy
between $M_{\rho_2}$ and $M_{\rho_3}$ reflects the fact that
only low values of $\theta_D$ ($0.01< \theta_D\,({\rm rad})
< 0.3$) are allowed, while the dark matter mass
$M_{\rho_1}$ lies below 100 GeV (shown by the colour code).}
\label{Fig:Mrho2-Mrho3}
\end{figure}
Since the allowed values of $\theta_D$ which satisfy all
the experimental results considered in this work fall in the
range $0.01< \theta_D\,({\rm rad})< 0.3$ (red coloured points
in the right panel of Fig.\,\ref{Fig:mrho1-mrho2}), $M_{\rho_2}$
and $M_{\rho_3}$ become almost degenerate. This is demonstrated
in Fig.\,\ref{Fig:Mrho2-Mrho3}, where the colour bar indicates the
corresponding values of the mass of the dark matter candidate $\rho_1$.
\begin{figure}[h!]
\includegraphics[height=10cm,width=12cm,angle=0]{MZp-gZp.pdf}
\caption{Current status of the $g_{Z_{\mu\tau}}-M_{Z_{\mu\tau}}$
plane in the light of various experimental results. In this plane
we have shown the allowed regions satisfying bounds from Planck + XENON1T (cyan coloured
points), Planck + XENON1T + $R_{K^{(*)}}$ anomalies (green coloured points) and
Planck + XENON1T + $R_{K^{(*)}}$ anomalies + Br($B \rightarrow X_s \gamma$)
(yellow coloured points) respectively. Moreover, red coloured points
indicate those values of $M_{Z_{\mu\tau}}$ and
$g_{Z_{\mu\tau}}$ which address $(g-2)_\mu$ within the
$2\sigma$ range. In this plot, we have also taken into account
the invisible decay branching constraint of the SM-like
Higgs boson $H_1$.}
\label{Fig:mzp-gzp}
\end{figure}
Finally, in Fig.\,\ref{Fig:mzp-gzp} we show our results in the $g_{\mu\tau}-M_{Z_{\mu\tau}}$ plane, which is at present extremely constrained by
various experimental results. In this figure, the red coloured points represent those values of $g_{\mu\tau}$ and $M_{Z_{\mu\tau}}$ which
explain $(g-2)_\mu$ within the $2\sigma$ range. In
the $g_{\mu\tau}-M_{Z_{\mu\tau}}$ plane, the strongest constraint to date
comes from neutrino trident production. Neutrino trident production is the
production of a $\mu^+\mu^-$ pair via neutrino scattering in the Coulomb
field of a target nucleus ($N$), i.e. $\nu_{\mu} (\overline{\nu_{\mu}}) + N \rightarrow
\nu_{\mu} (\overline{\nu_{\mu}}) + \mu^+ \mu^- + N$. In the SM, this process
is possible via $W^\pm$ and $Z$ bosons only. Moreover, if there exists any new
neutral gauge boson (similar to $Z_{\mu\tau}$ in the present work) which couples
to both muons and muon-neutrinos then that gauge boson can also contribute significantly to
the trident production cross section. The experimental collaborations,
namely CCFR \cite{Mishra:1991bv}, CHARM-II \cite{Geiregat:1990gz} and NuTeV \cite{Adams:1999mn},
have measured neutrino trident events, and their measured cross sections are in good agreement with
the SM prediction, i.e. $\dfrac{\sigma_{\rm CCFR}}{\sigma_{\rm SM}} = 0.82\pm 0.28$,
$\dfrac{\sigma_{\rm CHARM-II}}{\sigma_{\rm SM}} = 1.58\pm 0.57$
and $\dfrac{\sigma_{\rm NuTeV}}{\sigma_{\rm SM}} = 0.72^{+1.73}_{-0.72}$ respectively.
These results therefore put a strong constraint on the mass-coupling plane of the
new gauge boson. In Fig.\,\ref{Fig:mzp-gzp}, the crossed region
above the black dashed line represents the 95\% C.L. upper bound \cite{Altmannshofer:2014pba}
on $g_{\mu\tau}$ as a function of $M_{Z_{\mu\tau}}$,
obtained using the neutrino trident cross section measured by
the CCFR collaboration\footnote{Furthermore, it is clearly evident from Fig.\,\ref{Fig:mzp-gzp} that, by taking the CCFR experimental data into account, we naturally incorporate the constraint from the branching ratio of $\tau\to\mu\nu_\tau\bar{\nu}_\mu$. The reason is that the parameter space (in the $g_{\mu\tau}-M_{Z_{\mu\tau}}$ plane) which describes all the concerned observables simultaneously does not overlap with the portion that has already been ruled out by the branching ratio of $\tau\to\mu\nu_\tau\bar{\nu}_\mu$ \cite{Altmannshofer:2014cfa}. Moreover, we have explicitly checked that the NP contribution to the decay $\tau\to\mu\nu_\tau\bar{\nu}_\mu$ due to $Z_{\mu\tau}$ is practically vanishing in the allowed parameter space of the present scenario.}. Consequently, all the crossed regions above the black dashed line are excluded by neutrino trident production. Besides, there is a further constraint from the measurement of the SM $Z$ boson decay to the $4\mu$ final state at the LHC. This has been indicated by the
grey region in the topmost right corner of this plot.
Cyan coloured points represent those values of $g_{Z_{\mu\tau}}$ and $M_{Z_{\mu\tau}}$
which satisfy bounds related to dark matter physics namely, relic density,
direct detection and Higgs invisible branching ratio. On top of
the existing dark matter constraints, the effects of flavour physics
observables like $R_{K^{(*)}}$ anomalies ($2\sigma$ bound on $\Delta C_9$)
and $R_{K^{(*)}}$ + Br($B \rightarrow X_s \gamma$) on the mass as well
as the coupling of $Z_{\mu\tau}$ have been shown by green and
yellow coloured points respectively. Therefore, from this plot
it can easily be seen that, although most of the $g_{\mu\tau}-M_{Z_{\mu\tau}}$
plane has already been excluded by the results of the CCFR collaboration,
there is still a small but interesting region left in this parameter
space, namely $0.01 \leq M_{Z_{\mu\tau}}\,({\rm GeV})\leq 0.1$ and
$3\times 10^{-4}\leq g_{\mu\tau} \leq 10^{-3}$. This region of the parameter space of the present model can address dark matter, the $(g-2)_{\mu}$ anomaly,
the $R_{K^{(*)}}$ anomalies and Br($B \rightarrow X_s \gamma$) simultaneously,
and, more excitingly, it can be probed
within a few years by the DUNE experiment \cite{Acciarri:2015uup} through measurements of neutrino
trident events (shown by the black dashed line) \cite{Altmannshofer:2019zhy}.
This will be a direct test of our model, at least of the benchmark
points (if not the full model) in the low mass dark matter region
which are compatible with both the dark matter and
flavour physics requirements. For completeness, in Table \ref{tab:BP} we present three plausible benchmark points (BP1, BP2 and BP3) and the corresponding numerical values of several physical quantities of the present scenario.
\begin{table}[h!]
\begin{center}
\vskip 0.5cm
\begin{tabular} {|c|c|c|c|}
\hline
Parameters/& BP1& BP2 & BP3\\
Observables & & &\\
\hline
\hline
$M_{\rho_1}$ (GeV) & 14.499& 26.515 & 36.767 \\
$M_{\rho_2}$ (GeV) & 478.254& 506.009 & 450.276 \\
$M_{\rho_3}$ (GeV) & 475.201& 503.742 & 449.255\\
$M_{\phi^{\pm}}$ (GeV) & 160.591& 121.443 & 101.748\\
$M_{H_2}$ (GeV) & 353.418& 401.503 & 352.41\\
$M_{\chi}$ (GeV) &1107.840 & 1300.660 & 1087.52\\
$M_{Z_{\mu\tau}}$ (GeV) & $5.052\times10^{-2}$ & $7.577\times10^{-2}$
& $3.167\times10^{-2}$\\
$v_2$ (GeV) & 76.328& 81.151 & 71.229\\
$g_{Z_{\mu\tau}}$ & $6.619\times10^{-4}$& $9.339\times10^{-4}$&
$4.447\times10^{-4}$\\
$\tan \theta_{\mu\tau}$ & $2.752\times10^{-6}$&$1.637\times10^{-5}$&
$5.337\times10^{-6}$ \\
$\tan \theta_{D}$ & 0.1135 & $9.511\times10^{-2}$&
$6.769\times10^{-2}$ \\
$\tan \theta_{s}$ & $3.203\times10^{-4}$& $9.893\times10^{-4}$ &
$4.643\times10^{-3}$ \\
$\lambda_{\Phi}$ & 0.1 & 0.1&0.1\\
$\lambda_{S}$ & 0.1& 0.1 &0.1\\
$\lambda_{2}$ & $9.520\times10^{-3}$ & $7.935\times10^{-2}$ &
$2.360\times10^{-4}$ \\
$\lambda_{4}$ & $2.499\times10^{-3}$& $9.691\times10^{-2}$ &
$1.128\times10^{-3}$ \\
$\lambda_{5}$ & $9.236\times10^{-3}$ & $1.994\times10^{-4}$ &
$1.066\times10^{-3}$ \\
$\lambda_{6}$ & $5.364\times10^{-4}$ & $1.835\times10^{-2}$ &
$8.175\times10^{-4}$ \\
$\lambda_{7}$ & $4.724\times10^{-2}$ & $2.243\times10^{-3}$ &
$2.599\times10^{-4}$ \\
$f_2 \times f_3$ & 1.657 &2.533& 3.228\\
$\Omega_{\rm DM}h^2$ & 0.1218 & 0.1206 & 0.1213\\
$\sigma_{\rm SI}$ (cm$^2$)& $5.480\times10^{-47}$& $1.688\times10^{-47}$ &
$1.076\times10^{-48}$ \\
${\rm Br}(\Gamma^{\rm Inv}_{H_1})$ & $1.639\times10^{-4}$& $1.954\times 10^{-3}$&
$2.094\times10^{-2}$\\
$\Delta{C_9}$ & -0.973& -0.7578 & -0.684\\
${\rm Br}(B \rightarrow X_s \gamma)$ &$3.196\times 10^{-4}$
& $3.173\times10^{-4}$ & $2.974\times10^{-4}$\\
$\Delta{a_{\mu}}$ & $218.495\times10^{-11}$ & $311.557\times10^{-11}$
& $129.438\times10^{-11}$\\
\hline
\hline
\end{tabular}
\end{center}
\caption{Viable benchmark points (BP1, BP2 and BP3) and corresponding numerical values of several physical quantities of the present scenario.}
\label{tab:BP}
\end{table}
\section{Neutrino masses and mixings}\label{neu}
In this section, we briefly discuss neutrino masses
and mixings. It has now been firmly established from the phenomenon of
neutrino oscillations that there exist two tiny mass squared differences
between the three neutrino mass eigenstates, i.e. $\Delta{m^2_{21}}
=7.39^{+0.21}_{-0.20} \times 10^{-5}$ eV$^{2}$\footnote{$\Delta{m^2_{ij}}$
is defined as $m^2_i-m^2_j$.} and $\Delta{m^2_{31}} = 2.525^{+0.033}_{-0.032}
(-2.512^{+0.034}_{-0.032}) \times 10^{-3}$ eV$^{2}$
for the normal (inverted) hierarchy \cite{Esteban:2018azc} in the $3\sigma$
range. This also indicates that, to explain the solar, atmospheric
and reactor neutrino anomalies through three flavour neutrino
oscillations, we need at least two neutrino mass eigenstates
with nonzero masses corresponding to the mass squared differences
mentioned above. Moreover, there are also precise measurements
of three intergenerational mixing angles namely the atmospheric
mixing angle ($40.3^{\degree}(40.6^{\degree})\leq\theta_{23}
\leq 52.4^{\degree}(52.5^{\degree})$)\footnote{Where numbers without(within)
brackets are for the normal(inverted) hierarchical scenario.},
the solar mixing angle ($31.61^{\degree}\leq\theta_{12}\leq 36.27^{\degree}$)
and the reactor mixing angle ($8.22^{\degree}(8.27^{\degree})\leq\theta_{13}
\leq 8.99^{\degree}(9.03^{\degree})$) \cite{Esteban:2018azc}. The latter
one is the most recent entry in that list. In the present model, although
we do not need any extra fermionic degrees of freedom to cancel
the $L_{\mu}-L_{\tau}$ anomaly, which cancels between the $\mu$
and $\tau$ generations of charged leptons and the corresponding neutrinos,
one can still introduce three right handed neutrinos $N_{Ri}$ ($i=e,\,\mu,\,\tau$)
into the model in an anomaly free manner to address neutrino masses and mixings
via the Type I seesaw mechanism. The Lagrangian for the right handed neutrinos
is given in Eq.\,(\ref{lagN}). The light neutrino mass matrix $m_{\nu}$ after
spontaneous breaking of both SU(2)$_{L}\times {\rm U(1)}_Y$ and U(1)$_{L_{\mu}-L_{\tau}}$
symmetries has the following structure
\begin{eqnarray}
m_{\nu} &=& -{M_D}\,\mathcal{M_R}^{-1}\,M_D^T\,, \nonumber \\
&=& \dfrac{1}{2\,p} \left(\begin{array}{ccc}
y_{e}^{2}M_{\mu \tau}\,v^2_1\,e^{i\xi} &
-\dfrac{y_{e}\,y_{\mu}\,y_{e \tau} v^2_1\,v_2}{\sqrt{2}}\, &
-\dfrac{y_{e}\,y_{\tau}\,y_{e \mu} v^2_1\,v_2}{\sqrt{2}}\\
-\dfrac{y_{e}\,y_{\mu}\,y_{e \tau} v^2_1\,v_2}{\sqrt{2}}\, &
\dfrac{y_{\mu}^{2}\,y_{e\tau}^2\,v_1^2\,v_2^2\,e^{-i\xi}}{2\,M_{\mu\tau}} &
\dfrac{y_{\mu}\,y_{\tau}\,v_1^2}{2\,M_{\mu\tau}}(M_{ee}\,M_{\mu\tau}-p\,e^{-i\xi})\\
-\dfrac{y_{e}\,y_{\tau}\,y_{e \mu} v^2_1\,v_2}{\sqrt{2}}\, &
\dfrac{y_{\mu}\,y_{\tau}\,v_1^2}{2\,M_{\mu\tau}}(M_{ee}\,M_{\mu\tau}-p\,e^{-i\xi}) &
\dfrac{y_{\tau}^{2}\,y_{e\mu}^2\,v^2_1\,v_2^2\,e^{-i\xi}}{2\,M_{\mu\tau}} \\
\end{array}\right) \,\,,
\label{mass-matrix}
\end{eqnarray}
while the mass matrix for the heavy neutrinos coincides with $\mathcal{M}_R$.
In the above, $p=y_{e\mu}\,y_{e\tau}\,v_2^2-M_{ee}\,M_{\mu\tau}\,e^{i\xi}$.
Majorana mass matrix $\mathcal{M}_R$ and Dirac mass matrix
$M_D$ are given by,
\begin{eqnarray}
\mathcal{M}_{R} = \left(\begin{array}{ccc}
M_{ee} ~~&~~ \dfrac{v_2}{\sqrt{2}} y_{e \mu}
~~&~~\dfrac{v_2}{\sqrt{2}} y_{e \tau} \\
~~&~~\\
\dfrac{v_2}{\sqrt{2}} y_{e \mu} ~~&~~ 0
~~&~~ M_{\mu \tau} \,e^{i\xi}\\
~~&~~\\
\dfrac{v_2}{\sqrt{2}} y_{e \tau} ~~&
~~ M_{\mu \tau}\,e^{i\xi} ~~&~~ 0 \\
\end{array}\right) \,,\,\,\,\,
M_{D} = \dfrac{v_1}{\sqrt{2}}\left(\begin{array}{ccc}
y_e ~~&~~ 0 ~~&~~ 0 \\
~~&~~\\
0 ~~&~~ y_{\mu} ~~&~~ 0 \\
~~&~~\\
0 ~~&~~ 0 ~~&~~ y_{\tau} \\
\end{array}\right) \,.
\label{MR-md}
\end{eqnarray}
In the present case, due to $L_{\mu}-L_{\tau}$ flavour
symmetry, the Dirac mass matrix is exactly diagonal while
before U(1)$_{L_{\mu}-L_{\tau}}$ symmetry breaking only
three elements (only two are independent) are there in the
Majorana mass matrix $\mathcal{M}_{R}$. Only after symmetry
breaking we get additional elements proportional to $v_2$. Therefore,
$L_{\mu}-L_{\tau}$ symmetry breaking plays a crucial role here to get
desire structure of $m_{\nu}$ matrix. Also, looking at
both $M_D$ and $\mathcal{M}_R$ matrices, one can easily notice
that there can only be one complex element. Phases of other elements
can be absorbed by redefining both SM leptons and right handed
neutrinos. Now, one can calculate mass eigenvalues and mixing angles of light
neutrinos by diagonalising this $m_{\nu}$ matrix, which is a
complex symmetric matrix, indicating the Majorana nature
of the light neutrinos. If we consider, $v_2\sim 10^2$ GeV
(in the right ballpark to produce desired contribution to $(g-2)_{\mu}$),
$0.1 \la y_{e\mu},\,y_{e\tau} \la 1.0$ and $10\,{\rm GeV} \la M_{ee},\,M_{\mu\tau} \la$1 TeV
(100 GeV to TeV scale right handed neutrinos)
then we need Dirac couplings $10^{-7} \la y_{e},\,y_{\mu},\,y_{\tau}\la 10^{-5}$
to reproduce neutrino oscillation parameters.
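As a rough numerical illustration of these magnitudes (an order-of-magnitude estimate only, taking $v_1\simeq 246$ GeV), the usual Type I seesaw scaling gives
\begin{eqnarray}
m_{\nu} \sim \frac{y^2\,v_1^2}{2\,M_R} &\approx& \frac{(10^{-6})^2\,(246~{\rm GeV})^2}{2\times 100~{\rm GeV}} \approx 3\times 10^{-10}~{\rm GeV} \sim 0.3~{\rm eV}\,, \nonumber
\end{eqnarray}
so that Dirac couplings in the quoted $10^{-7}-10^{-5}$ window naturally bring the light neutrino masses down to the sub-eV regime for electroweak-scale right handed neutrinos.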
A detailed analysis of the mass matrix diagonalisation and the comparison
with the latest 3$\sigma$ ranges of the oscillation parameters has been
performed in Ref. \cite{Biswas:2016yan}.
Moreover, we would like to mention here that although only two right handed
neutrinos ($N^{\mu}_R$ and $N^{\tau}_R$) are sufficient to make the present
model anomaly free, such a scenario is unable to reproduce all neutrino
oscillation parameters due to the special flavour structure of the Dirac mass matrix.
\section{Constraint from di-lepton resonance search at 13 TeV LHC}\label{cldr}
Depending on its mass range, the {\it non-standard} $Z$ boson (which we designate as $Z_{\mu\tau}$ in this article) faces constraints from collider searches. For example, if $Z_{\mu\tau}$ is lighter than the SM $Z$ boson, some viable parameter region remains for models of this kind among the various NP scenarios in the literature. Furthermore, since $Z_{\mu\tau}$ has no direct coupling to the electron\footnote{Only possible via $Z$-$Z_{\mu\tau}$ mixing. Therefore, the interaction strength is insignificant.}, LEP searches cannot place a direct constraint on a light $Z_{\mu\tau}$. On the other hand, the Tevatron~\cite{Abazov:2010ti, Aaltonen:2011gp} and LHC~\cite{Khachatryan:2016zqb, Aaboud:2017buh, ATLAS:2019vcr} searches for $Z_{\mu\tau}$ in di-lepton final states apply only for $M_{Z_{\mu\tau}}>100~$GeV. Moreover, the only relevant limit for the light $Z_{\mu\tau}$ case is obtained from the LHC search for $ p p \to Z\to 4\mu$ \cite{Altmannshofer:2014pba}. At this point we remark in passing that, although in the low mass regime of $Z_{\mu\tau}$ we have obtained a certain region of parameter space (depicted in Fig.\,\ref{Fig:mzp-gzp}) which satisfies the flavour physics data, the dark matter constraints and the $(g-2)_\mu$ anomaly, the cross section for a process like $pp \to Z_{\mu\tau} \to \ell^+ \ell^-$ in that region of parameter space is extremely tiny at the 13 TeV LHC.
On the other hand, in the high mass region of $Z_{\mu\tau}$, the LHC searches in the di-muon final state put the tightest bound on its mass (in the $3-5$ TeV range~\cite{Khachatryan:2016zqb, Aaboud:2017buh, ATLAS:2019vcr}). Thus, in the present article we use the exclusion data obtained by the ATLAS collaboration \cite{ATLAS:2019vcr} in a di-lepton resonance search at the LHC to constrain the parameter space of the present scenario. In order to embed this limit in the present scenario, we first implement the model using {\tt FeynRules}~\cite{Alloul:2013bka}. Then we generate the cross section for the process $pp \to Z_{\mu\tau} \to \ell^+ \ell^-$ using {\tt Madgraph5}~\cite{Alwall:2014hca} with the default parton distribution functions {\tt NNPDF3.0}~\cite{Ball:2014uwa} at the 13 TeV LHC\footnote{Production of $Z_{\mu\tau}$ at the LHC in the present model is possible due to the $q_i \bar{q_i}Z_{\mu\tau}$ couplings which are generated via $Z$-$Z_{\mu\tau}$ mixing.}. Here $\ell(\equiv e, \mu)$; however, the dominant contribution comes from the $\mu^+\mu^-$ final state. Finally, for a specific combination of the coupling $g_{Z_{\mu\tau}}$ and the $Z$-$Z_{\mu\tau}$ mixing angle $\theta_{\mu\tau}$, we compare the theoretical prediction of the cross section for each value of the mass of $Z_{\mu\tau}$ (confined within the range [0.5, 5] TeV) with the corresponding experimental data given by the ATLAS collaboration \cite{ATLAS:2019vcr}.
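Schematically, for a given combination of $g_{Z_{\mu\tau}}$ and $\theta_{\mu\tau}$, a mass point is regarded as excluded at 95\% C.L. whenever the predicted signal rate exceeds the observed upper limit reported in \cite{ATLAS:2019vcr},
\begin{eqnarray}
\sigma(pp\to Z_{\mu\tau})\times {\rm BR}(Z_{\mu\tau}\to \ell^+\ell^-)\Big|_{\rm theory} &>& \sigma^{95\%\,{\rm C.L.}}_{\rm ATLAS}\left(M_{Z_{\mu\tau}}\right)\,, \nonumber
\end{eqnarray}
and the exclusion curves presented below are obtained by repeating this comparison for each value of $M_{Z_{\mu\tau}}$ at fixed $\theta_{\mu\tau}$.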
\begin{figure}[t]
\begin{center}
\includegraphics[height=9cm,width=12cm,angle=0]{MZp_gZp}
\caption{Exclusion curves at 95\% C.L. in the $M_{Z_{\mu\tau}}-g_{Z_{\mu\tau}}$ plane for four different values of the $Z$-$Z_{\mu\tau}$ mixing angle $\theta_{\mu\tau}$, obtained from the non-observation of a resonant $\ell^{+}\ell^{-}$ signal in the 13 TeV run of the LHC, using the latest ATLAS data \cite{ATLAS:2019vcr} and considering the mass range [0.5, 5] TeV. The region above each curve is ruled out.}
\label{mz_gz}
\end{center}
\end{figure}
In Fig.~\ref{mz_gz} we show the exclusion curves at 95\% C.L. in the $M_{Z_{\mu\tau}}-g_{Z_{\mu\tau}}$ plane for four different values of the $Z$-$Z_{\mu\tau}$ mixing angle $\theta_{\mu\tau}$, using the ATLAS data \cite{ATLAS:2019vcr} on the non-observation of a resonant $\ell^{+}\ell^{-}$ signal at the LHC running at 13 TeV with an integrated luminosity of 139 ${\rm fb}^{-1}$. The region above a particular curve is ruled out at 95\% C.L. by these data. If we focus on the curve corresponding to a particular value of the mixing angle $\theta_{\mu\tau}$, we observe that at lower masses the coupling $g_{Z_{\mu\tau}}$ falls rapidly with increasing $M_{Z_{\mu\tau}}$. This behaviour can be explained in the following way. In the lower mass range, as the mass is varied, the cross section does not fall as rapidly as required by the ATLAS limit; hence, to obtain the proper cross section for a particular mass, one has to decrease the value of the coupling $g_{Z_{\mu\tau}}$. Beyond this lower mass range, with increasing mass the curve closely follows the exclusion plot given in \cite{ATLAS:2019vcr}. Another notable feature of the exclusion curves, valid over the entire mass range, is that for a fixed value of the mass an increasing mixing angle requires a decreasing value of the coupling $g_{Z_{\mu\tau}}$ in order to satisfy the ATLAS data \cite{ATLAS:2019vcr}. Furthermore, it is clearly evident from Fig.~\ref{mz_gz} that as the mixing angle $\theta_{\mu\tau}$ increases, a larger area of the $M_{Z_{\mu\tau}}-g_{Z_{\mu\tau}}$ plane is ruled out by the ATLAS data. Both features can be understood by analysing the structure of the coupling\footnote{The relevant couplings have been given in Appendix~\ref{Dmcouplings}.} between ${Z_{\mu\tau}}$ and $\ell^+ \ell^-$. Decomposing this coupling, one finds a vectorial part and an axial-vectorial part; the latter plays no significant role in the process under consideration, which is essentially controlled by the vectorial part. We have also checked that for small values of the mixing angle $\theta_{\mu\tau}$ one can tune the coupling (and thereby its vectorial part) so as to satisfy the exclusion data. However, as the mixing increases one loses this handle, i.e., the coupling can no longer be varied appreciably for larger mixing angles. Therefore, with a larger mixing angle one cannot adjust the cross section properly, and hence cannot reach the required cross section for a given mass. For example, if the mixing is set at $4.5\times 10^{-4}$ rad, one cannot go beyond $M_{Z_{\mu\tau}}=1500$ GeV, since above this mass the desired cross section can no longer be obtained by changing the value of $g_{Z_{\mu\tau}}$. Therefore, in order to translate the exclusion limit obtained from the ATLAS di-lepton resonance search \cite{ATLAS:2019vcr} into our model, we have restricted ourselves to relatively small values of the mixing angle $\theta_{\mu\tau}$.
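For reference, reading off the $\bar{\mu}\mu Z_{\mu\tau}$ vertex listed in Appendix~\ref{Dmcouplings}, the vectorial and axial-vectorial parts referred to above are (the notation $C_{V,A}$ is introduced here only for convenience)
\begin{eqnarray}
C_{V}&=&\frac{g_2}{4\cos\theta_W}\bigg(1-4\sin^2\theta_W\bigg)\sin\theta_{\mu\tau}- \bigg(g_{Z_{\mu\tau}}-\frac 34 \frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\bigg)\cos\theta_{\mu\tau}\,, \nonumber \\
C_{A}&=&-\frac{g_2}{4\cos\theta_W}\bigg(\sin\theta_{\mu\tau}-\epsilon\sin\theta_W\cos\theta_{\mu\tau}\bigg)\,, \nonumber
\end{eqnarray}
so that for small $\theta_{\mu\tau}$ and small $\epsilon$ one has $C_{V}\simeq -g_{Z_{\mu\tau}}$, which can be adjusted freely, while $C_{A}$ remains mixing suppressed.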
\section{Conclusions}\label{con}
In order to resolve the $R_{K^{(*)}}$ anomalies and the dark matter enigma
simultaneously, we have proposed a unified scenario by introducing an
extra local ${\rm U(1)}_{L_{\mu}-L_{\tau}}$ gauge symmetry to the Standard Model.
This ${\rm U(1)}_{L_{\mu}-L_{\tau}}$ gauge symmetry provides a neutral non-standard
gauge boson $Z_{\mu\tau}$ which has versatile effects on different phenomenological
aspects that have been considered in this article. In order to break
the ${\rm U(1)}_{L_{\mu}-L_{\tau}}$ symmetry spontaneously, a complex scalar field
$\eta$ has been added to the scalar sector in addition to the usual Standard Model
Higgs doublet $H$. Three singlet right handed neutrinos have also been introduced
in order to explain the observed oscillation data by incorporating neutrino masses
and mixings via the Type-I seesaw mechanism. Furthermore, in order to properly
establish the correlation between the $R_{K^{(*)}}$ anomalies and the dark matter
puzzle, a bottom quark like coloured fermion field $\chi$ has been included in this scenario. This non-standard
fermion field $\chi$ transforms vectorially under the ${\rm U(1)}_{L_{\mu}-L_{\tau}}$
symmetry and is furthermore odd under the $\mathbb{Z}_2$ parity. Apart from these,
an SU(2)$_L$ scalar doublet $\Phi$ with nonzero ${\rm U(1)}_{L_{\mu}-L_{\tau}}$
charge and a real scalar singlet $S$ have also been incorporated in the present
scenario. Both of these non-standard scalar fields are odd under $\mathbb{Z}_2$ symmetry.
The mixing (which is parametrised by a mixing angle $\theta_D$) between these two
$\mathbb{Z}_2$-odd scalar fields gives a potential dark matter candidate $\rho_1$
and also two heavier $\mathbb{Z}_2$-odd physical particles $\rho_2$ and $\rho_3$.
All three of these scalar fields provide significant contributions not only to dark
matter phenomenology but also to rare $B$-meson decay processes.
The possible violation of lepton flavour universality in the neutral current sector
is probed by the ratios $R_{K^{(*)}}$, which involve the $b\to s \ell^+\ell^-$
($\ell\equiv e, \mu$) transition.
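For definiteness, we recall the standard definition of these ratios, measured in specified bins of the di-lepton invariant mass squared $q^2$,
\begin{eqnarray}
R_{K^{(*)}} &=& \frac{{\rm Br}(B\to K^{(*)}\,\mu^+\mu^-)}{{\rm Br}(B\to K^{(*)}\,e^+e^-)}\,. \nonumber
\end{eqnarray}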
This type of flavour changing neutral current is highly suppressed in the Standard
Model, and therefore even a small deviation between the experimental data and the
Standard Model prediction could play a significant role in the search for new physics
effects. In this work, the newly introduced particles play a crucial role in the
relevant $b\to s$ transition processes, which are in general loop
induced\footnote{Apart from leptoquark scenarios where the $b\to s$ transition is possible at tree level.}. In particular, the dark matter particle $\rho_1$, together with the two heavier $\mathbb{Z}_2$-odd
neutral scalar fields $\rho_2$, $\rho_3$ and the non-standard fermion $\chi$,
generates extra loop contributions. Furthermore, the extra non-standard gauge
boson $Z_{\mu\tau}$ behaves as a propagator (in addition to the SM $Z$ boson)
for the process $b\to s \ell^+\ell^-$. Now, due to the very basic structure of
our model, the process $b\to s \mu^+\mu^-$ is favoured over
$b\to s e^+e^-$; consequently, one obtains a significant non-standard
contribution to the Wilson coefficient $C^{\rm NP}_{9}$ for ``$\mu$" but not for ``$e$".
Therefore, in our work, we have easily satisfied the current fit result for
$C^{\rm NP,\mu}_{9} \in [-1.26, -0.63]$ within the $2\sigma$ interval to explain
the $R_{K^{(*)}}$ anomalies and thereby we have constrained the parameter
space of the proposed scenario. On top of that, we have also calculated
another rare decay process, $B\to X_s\gamma$, which also belongs to the class of
processes characterised by the $b\to s$ transition. We have estimated the
branching ratio for the $B\to X_s\gamma$ process, and have used the corresponding
experimental data within the $3\sigma$ interval as one of the constraints in
our analysis. Moreover, we have calculated the contribution of the non-standard
gauge boson $Z_{\mu\tau}$ to $(g-2)_\mu$ and, considering the recent experimental
data within its $1\sigma$ and $2\sigma$ uncertainties, we have further constrained the
parameter space allowed by dark matter and flavour physics observables.
In the present scenario, we have extensively studied the dark matter phenomenology
by choosing $\rho_1$ as a WIMP type dark matter candidate. This $\rho_1$ is an admixture
of a real scalar singlet $S$ and the CP-even neutral component ($\phi^0$) of the
doublet $\Phi$. In our work, we have first calculated the dark matter relic abundance
by considering all possible annihilation and co-annihilation channels for a
wide range (10 GeV $\leq M_{\rho_1} \leq$ 1 TeV) of the mass of $\rho_1$. Thereafter, we have
imposed the necessary constraints, such as the Planck limit on the relic density
($0.1172\leq\Omega_{\rm DM} h^2\leq 0.1226$), the latest direct detection bounds on
$\sigma_{\rm SI}$ from XENON1T and also the bound on the Higgs invisible branching
ratio from the LHC, in order to find the allowed parameter space. We have found that in the
low mass region ($M_{\rho_1}< 100$ GeV), our dark matter candidate $\rho_1$
predominantly annihilates into a $Z_{\mu\tau}$ pair, while co-annihilations among the other
$\mathbb{Z}_2$-odd particles are insignificant, as we have taken the masses of all heavier
$\mathbb{Z}_2$-odd particles to be larger than 100 GeV throughout this analysis
to respect the experimental bounds from the LEP collider. Owing to this primary
annihilation channel ($\rho_1\rho_1\to Z_{\mu\tau}Z_{\mu\tau}$), the mixing of this
gauge singlet $\mathbb{Z}_2$-odd scalar field with another $\mathbb{Z}_2$-odd field
(part of an ${\rm SU(2)}_{L}$ doublet) carrying nonzero $L_{\mu}-L_{\tau}$ charge
makes the entire dynamics of our dark matter candidate $\rho_1$ remarkably different
from the standard scalar singlet dark matter scenario, where the $b\bar{b}$ final state
is in general the principal annihilation channel and the low mass region has already
been ruled out by direct detection, indirect detection and also by the upper limit
on the Higgs invisible decay branching ratio.
On the other hand, for higher values of $M_{\rho_1}$, depending upon the mass
splitting between $\rho_1$ and the other $\mathbb{Z}_2$-odd particles, several
annihilation or co-annihilation channels may open up and contribute significantly
to the relic density. Since one of the prime motivations of this article
is to correlate the dark matter puzzle with some specific flavour physics anomalies
associated with FCNC processes, we have used the experimental data of some flavour
physics observables (e.g., $R_{K^{(*)}}$ anomalies and Br($B\to X_s\gamma$)) as
further constraints on the parameter space which is already allowed by experiments
related to dark matter physics. As a consequence, both the effects of $R_{K^{(*)}}$
anomalies and dark matter phenomenology allow only a very restricted range of
the dark sector mixing angle $\theta_D$, which remains confined within a certain
range (0.01$<$ $\theta_D$ (rad) $<$ 0.3) when $M_{\rho_1}\leq 100$ GeV. This is
a unique feature of our proposed model.
Additionally, we have used some other constraints which are relevant
to our present scenario. For example, we have imposed the constraint from neutrino
trident production and for that purpose we have used the CCFR experimental data
which is currently the most stringent one for the neutrino trident production
process. Furthermore, we have imposed the constraint from the measurement of the
Standard Model $Z$ boson decay to the 4$\mu$ final state at the LHC. As a consequence,
there is a substantial reduction of the parameter space due to the inclusion
of such constraints. However, there still exists a portion of the parameter space
of the present model which can address dark matter, $R_{K^{(*)}}$ anomalies, $(g-2)_{\mu}$ and
Br($B \rightarrow X_s \gamma$) simultaneously. Most importantly, our predicted parameter
space, and hence our model, can be tested within a few years by neutrino trident processes
at DUNE. Therefore, in view of the above discussion
we can readily conclude that our proposed scenario can reasonably connect the dark matter
puzzle with some of the flavour physics anomalies. Besides, within the scope of our proposed model,
we have also briefly discussed the origin of neutrino masses and mixing angles via
Type-I seesaw mechanism, which is a common feature of most of the ${L_{\mu}-L_{\tau}}$
models.
Finally, for the purpose of constraining the parameter space of the present scenario from the LHC, we have used the latest ATLAS data on the non-observation of a resonant $\ell^{+}\ell^{-}$ signal at the LHC running at 13 TeV with an integrated luminosity of 139 ${\rm fb}^{-1}$. For this purpose we have estimated the cross section for the process $pp \to Z_{\mu\tau} \to \ell^+ \ell^-$ at the 13 TeV LHC for the mass range $M_{Z_{\mu\tau}}\in [0.5, 5]$ TeV in the present scenario. By comparing the theoretical prediction of the cross section with the corresponding ATLAS upper limit for the non-observation of a resonant $\ell^{+}\ell^{-}$ signal at the 13 TeV LHC, one obtains, for each mass, the combinations of the coupling $g_{Z_{\mu\tau}}$ and the $Z$-$Z_{\mu\tau}$ mixing angle $\theta_{\mu\tau}$ that are disfavoured. Consequently, with those combinations we have excluded a portion of the parameter space of the present scenario at 95\% C.L. From our analysis it has been observed that, for larger values of the mixing angle, a larger region of parameter space in the $M_{Z_{\mu\tau}}-g_{Z_{\mu\tau}}$ plane is excluded. For example, if the mixing angle is $4.5\times 10^{-5}$ rad, then one can maximally exclude the region of parameter space in the $M_{Z_{\mu\tau}}-g_{Z_{\mu\tau}}$ plane.
\noindent{\bf Acknowledgments}
A.S. would like to thank Heerak Banerjee for useful discussions.
A.B. would like to acknowledge the cluster computing facility
(http://www.hri.res.in/cluster/) of Harish-Chandra Research Institute, Allahabad.
He also thanks Alexander Pukhov for a few email conversations
regarding the package micrOMEGAs. Moreover, A.B. acknowledges all the
members of Particle Group Meeting of IACS, especially Sourov Roy,
Satyanarayan Mukhopadhyay, Heerak Banerjee, Sougata Ganguly,
Ananya Tapadar and Disha Bhatia for a useful discussion
on kinetic mixing between two U(1) gauge groups.
\begin{appendices}
\renewcommand{\thesection}{\Alph{section}}
\renewcommand{\theequation}{\thesection-\arabic{equation}}
\setcounter{equation}{0}
\section{Multiplicative factors and functions that are involved in flavour physics}\label{flav_app}
\begin{eqnarray}
\mathscr{L}^9_{Z(Z_{\mu\tau})}&=&\frac{g_2}{4\cos\theta_W}\bigg(1-4\sin^2\theta_W\bigg)\cos(\sin)\theta_{\mu\tau}\pm \bigg(g_{Z_{\mu\tau}}-\frac 34 \frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\bigg)\sin(\cos)\theta_{\mu\tau}\;,\\\label{lzz9}
\mathscr{L}^{10}_{Z(Z_{\mu\tau})}&=&-\frac{g_2}{4\cos\theta_W}\bigg(\cos(\sin)\theta_{\mu\tau}\pm\epsilon\sin\theta_W\sin(\cos)\theta_{\mu\tau}\bigg)\;,\\
\label{lzz10}
\mathcal{G}_{Z(Z_{\mu\tau})}&=&\frac{g_2}{3\cos\theta_W}\sin^2\theta_W\bigg(\cos(\sin)\theta_{\mu\tau}\pm\frac{\epsilon}{\sin\theta_W}\sin(\cos)\theta_{\mu\tau}\bigg)\pm g_{Z_{\mu\tau}}\sin(\cos)\theta_{\mu\tau}\;,\\
\label{gzz}
\mathcal{C}_{Z(Z_{\mu\tau})}&=&\frac{g_2}{\cos\theta_W}\cos(\sin)\theta_{\mu\tau}\pm \bigg(2g_{Z_{\mu\tau}}+\frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\bigg)\sin(\cos)\theta_{\mu\tau}\;,\\
\label{czz}
\mathcal{S}_{Z(Z_{\mu\tau})}&=&\frac{g_2}{\cos\theta_W}\bigg(\left(\frac 12 -\frac{\sin^2\theta_W}{3}\right)\cos(\sin)\theta_{\mu\tau}\pm\epsilon\sin\theta_W\sin(\cos)\theta_{\mu\tau}\bigg)\;.
\label{szz}
\end{eqnarray}
\begin{eqnarray}
h_q(x)&=&\frac{1}{1-x}+\frac{\ln(x)}{(1-x)^2}\;,\\
\label{hq}
h_w(x,r)&=&\frac 32 -\frac{(1+r)^2\ln(1+r)}{r(1+r-x)}-\frac{x^2\ln(x)}{(1-x)(1+r-x)}\;,\\
\label{hw}
h_s(x)&=&\frac 12\left(\frac{1-3x}{1-x}-\frac{2x^2\ln(x)}{(1-x)^2}\right)\;,\\
\label{hs}
h_b(x)&=&-\frac{x^2-5x-2}{12(1-x)^3}+\frac{x\ln(x)}{6(1-x)^4}\;.
\label{hb}
\end{eqnarray}
\section{Couplings required for dark matter phenomenology, flavour physics observables and LHC analysis}\label{Dmcouplings}
$\bullet$ \underline {Trilinear couplings of different SM fermions with $Z(Z_{\mu\tau})$ gauge fields:}
\begin{eqnarray}
\bar{u}_i u_i Z^\alpha&:&i\frac{g_2\gamma^\alpha}{12\cos\theta_W}\Bigg[\Bigg(\bigg(-3+8\sin^2\theta_W\bigg)\cos\theta_{\mu\tau}+5\epsilon\sin\theta_W\sin\theta_{\mu\tau}\Bigg) \\ \nonumber
&&+\Bigg(3\cos\theta_{\mu\tau}+3\epsilon\sin\theta_W\sin\theta_{\mu\tau}\Bigg)\gamma^5\Bigg]
\label{uuz}
\end{eqnarray}
\begin{eqnarray}
\bar{u}_i u_i Z^\alpha_{\mu\tau}&:&i\frac{g_2\gamma^\alpha}{12\cos\theta_W}\Bigg[\Bigg(\bigg(-3+8\sin^2\theta_W\bigg)\sin\theta_{\mu\tau}-5\epsilon\sin\theta_W\cos\theta_{\mu\tau}\Bigg) \\ \nonumber
&&+\Bigg(3\sin\theta_{\mu\tau}-3\epsilon\sin\theta_W\cos\theta_{\mu\tau}\Bigg)\gamma^5\Bigg]
\label{uuz1}
\end{eqnarray}
\begin{eqnarray}
\bar{d}_i d_i Z^\alpha&:&-i\frac{g_2\gamma^\alpha}{12\cos\theta_W}\Bigg[\Bigg(\bigg(-3+4\sin^2\theta_W\bigg)\cos\theta_{\mu\tau}+\epsilon\sin\theta_W\sin\theta_{\mu\tau}\Bigg) \\ \nonumber
&&+\Bigg(3\cos\theta_{\mu\tau}+3\epsilon\sin\theta_W\sin\theta_{\mu\tau}\Bigg)\gamma^5\Bigg]
\label{ddz}
\end{eqnarray}
\begin{eqnarray}
\bar{d}_i d_i Z^\alpha_{\mu\tau}&:&-i\frac{g_2\gamma^\alpha}{12\cos\theta_W}\Bigg[\Bigg(\bigg(-3+4\sin^2\theta_W\bigg)\sin\theta_{\mu\tau}-\epsilon\sin\theta_W\cos\theta_{\mu\tau}\Bigg) \\ \nonumber
&&+\Bigg(3\sin\theta_{\mu\tau}-3\epsilon\sin\theta_W\cos\theta_{\mu\tau}\Bigg)\gamma^5\Bigg]
\label{ddz1}
\end{eqnarray}
In the above, $i=1,2,3$.
\begin{eqnarray}
\bar{e}e Z^\alpha&:&i\gamma^\alpha\Bigg[\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(1-4\sin^2\theta_W\bigg)\cos\theta_{\mu\tau}-\frac 34 \frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\sin\theta_{\mu\tau}\Bigg)\\ \nonumber
&&-\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(\cos\theta_{\mu\tau}+\epsilon\sin\theta_W\sin\theta_{\mu\tau}\bigg)\Bigg)\gamma^5\Bigg]
\label{eez}
\end{eqnarray}
\begin{eqnarray}
\bar{e}e Z^\alpha_{\mu\tau}&:&i\gamma^\alpha\Bigg[\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(1-4\sin^2\theta_W\bigg)\sin\theta_{\mu\tau}+\frac 34 \frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\cos\theta_{\mu\tau}\Bigg)\\ \nonumber
&&-\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(\sin\theta_{\mu\tau}-\epsilon\sin\theta_W\cos\theta_{\mu\tau}\bigg)\Bigg)\gamma^5\Bigg]
\label{eez1}
\end{eqnarray}
\begin{eqnarray}
\bar{\mu}\mu Z^\alpha&:&i\gamma^\alpha\Bigg[\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(1-4\sin^2\theta_W\bigg)\cos\theta_{\mu\tau}+ \bigg(g_{Z_{\mu\tau}}-\frac 34 \frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\bigg)\sin\theta_{\mu\tau}\Bigg)\\ \nonumber
&&-\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(\cos\theta_{\mu\tau}+\epsilon\sin\theta_W\sin\theta_{\mu\tau}\bigg)\Bigg)\gamma^5\Bigg]
\label{mumuz}
\end{eqnarray}
\begin{eqnarray}
\bar{\mu}\mu Z^\alpha_{\mu\tau}&:&i\gamma^\alpha\Bigg[\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(1-4\sin^2\theta_W\bigg)\sin\theta_{\mu\tau}- \bigg(g_{Z_{\mu\tau}}-\frac 34 \frac{g_2\sin\theta_W\epsilon}{\cos\theta_W}\bigg)\cos\theta_{\mu\tau}\Bigg)\\ \nonumber
&&-\Bigg(\frac{g_2}{4\cos\theta_W}\bigg(\sin\theta_{\mu\tau}-\epsilon\sin\theta_W\cos\theta_{\mu\tau}\bigg)\Bigg)\gamma^5\Bigg]
\label{mumuz1}
\end{eqnarray}
\newpage
$\bullet$ \underline {Trilinear couplings of $\rho_i~(i\equiv 1,2,3)$ with $H_1$ and $H_2$ scalar fields:}
\begin{eqnarray}
\rho_1 \rho_1 H_1 &:& i \Bigg(2\cos^2\theta_D\bigg(v_1\lambda_7\cos\theta_s-v_2\lambda_6\sin\theta_s\bigg)\\ \nonumber
&&+\sqrt{2}\lambda_8\cos\theta_D\sin\theta_D\bigg(v_2\cos\theta_s-v_1\sin\theta_s\bigg)\\ \nonumber
&&+ \sin^2\theta_D\bigg(v_1(\lambda_2+\lambda_3)\cos\theta_s-v_2\lambda_4\sin\theta_s\bigg)\Bigg)\\
\rho_1 \rho_1 H_2 &:& i \Bigg(2\cos^2\theta_D\bigg(v_1\lambda_7\sin\theta_s+v_2\lambda_6\cos\theta_s\bigg)\\ \nonumber
&&+\sqrt{2}\lambda_8\cos\theta_D\sin\theta_D\bigg(v_2\sin\theta_s+v_1\cos\theta_s\bigg)\\ \nonumber
&&+ \sin^2\theta_D\bigg(v_1(\lambda_2+\lambda_3)\sin\theta_s+v_2\lambda_4\cos\theta_s\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_2 \rho_2 H_1 &:& i \Bigg(2\sin^2\theta_D\bigg(v_1\lambda_7\cos\theta_s-v_2\lambda_6\sin\theta_s\bigg)\\ \nonumber
&&-\sqrt{2}\lambda_8\cos\theta_D\sin\theta_D\bigg(v_2\cos\theta_s-v_1\sin\theta_s\bigg)\\ \nonumber
&&+ \cos^2\theta_D\bigg(v_1(\lambda_2+\lambda_3)\cos\theta_s-v_2\lambda_4\sin\theta_s\bigg)\Bigg)\\
\rho_2 \rho_2 H_2 &:& i\Bigg(2\sin^2\theta_D\bigg(v_1\lambda_7\sin\theta_s+v_2\lambda_6\cos\theta_s\bigg)\\ \nonumber
&&-\sqrt{2}\lambda_8\cos\theta_D\sin\theta_D\bigg(v_2\sin\theta_s+v_1\cos\theta_s\bigg)\\ \nonumber
&&+ \cos^2\theta_D\bigg(v_1(\lambda_2+\lambda_3)\sin\theta_s+v_2\lambda_4\cos\theta_s\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_1 \rho_2 H_1 &:& \frac{i}{2} \Bigg(\sqrt{2}\cos 2\theta_D\lambda_8\bigg(v_2\cos\theta_s-v_1\sin\theta_s\bigg)\\ \nonumber
&&+ \sin 2\theta_D\bigg(v_1(\lambda_2+\lambda_3-2\lambda_7)\cos\theta_s-v_2(\lambda_4-2\lambda_6)\sin\theta_s\bigg)\Bigg)\\
\rho_1 \rho_2 H_2 &:& \frac{i}{2} \Bigg(\sqrt{2}\cos 2\theta_D\lambda_8\bigg(v_2\sin\theta_s+v_1\cos\theta_s\bigg)\\ \nonumber
&&+ \sin 2\theta_D\bigg(v_1(\lambda_2+\lambda_3-2\lambda_7)\sin\theta_s+v_2(\lambda_4-2\lambda_6)\cos\theta_s\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_3 \rho_3 H_1 &:& i\Bigg(v_1(\lambda_2+\lambda_3)\cos\theta_s-v_2\lambda_4\sin\theta_s\Bigg) \\
\rho_3 \rho_3 H_2 &:& i\Bigg(v_1(\lambda_2+\lambda_3)\sin\theta_s+v_2\lambda_4\cos\theta_s\Bigg)
\end{eqnarray}
$\bullet$ \underline {Quartic couplings of $\rho_i~(i\equiv 1,2,3)$ with $H_1$ scalar fields:}
\begin{eqnarray}
\rho_1 \rho_1 H_1 H_1 &:&i \Bigg(-2\sqrt{2}\lambda_8\cos\theta_s\sin\theta_s\cos\theta_D\sin\theta_D \\ \nonumber
&&+\cos^2\theta_s\bigg(2\lambda_7\cos^2\theta_D+(\lambda_2+\lambda_3)\sin^2\theta_D\bigg) \\ \nonumber
&&+\sin^2\theta_s\bigg(2\lambda_6\cos^2\theta_D+\lambda_4\sin^2\theta_D\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_2 \rho_2 H_1 H_1 &:&i \Bigg(2\sqrt{2}\lambda_8\cos\theta_s\sin\theta_s\cos\theta_D\sin\theta_D \\ \nonumber
&&+\cos^2\theta_s\bigg(2\lambda_7\sin^2\theta_D+(\lambda_2+\lambda_3)\cos^2\theta_D\bigg) \\ \nonumber
&&+\sin^2\theta_s\bigg(2\lambda_6\sin^2\theta_D+\lambda_4\cos^2\theta_D\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_1 \rho_2 H_1 H_1 &:&i \Bigg(-\sqrt{2}\lambda_8\cos\theta_s\sin\theta_s\cos 2\theta_D+\cos\theta_D\sin\theta_D \\ \nonumber
&&\bigg((\lambda_4-2\lambda_6)\sin^2\theta_s+(\lambda_2+\lambda_3-2\lambda_7)\cos^2\theta_s\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_3 \rho_3 H_1 H_1 &:& i \Bigg((\lambda_2+\lambda_3)\cos^2\theta_s+\lambda_4\sin^2\theta_s\Bigg)
\end{eqnarray}
$\bullet$ \underline {Quartic couplings of $\rho_i~(i\equiv 1,2,3)$ with $H_2$ scalar fields:}
\begin{eqnarray}
\rho_1 \rho_1 H_2 H_2 &:&i \Bigg(2\sqrt{2}\lambda_8\cos\theta_s\sin\theta_s\cos\theta_D\sin\theta_D \\ \nonumber
&&+\sin^2\theta_s\bigg(2\lambda_7\cos^2\theta_D+(\lambda_2+\lambda_3)\sin^2\theta_D\bigg) \\ \nonumber
&&+\cos^2\theta_s\bigg(2\lambda_6\cos^2\theta_D+\lambda_4\sin^2\theta_D\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_2 \rho_2 H_2 H_2 &:&i \Bigg(-2\sqrt{2}\lambda_8\cos\theta_s\sin\theta_s\cos\theta_D\sin\theta_D \\ \nonumber
&&+\sin^2\theta_s\bigg(2\lambda_7\sin^2\theta_D+(\lambda_2+\lambda_3)\cos^2\theta_D\bigg) \\ \nonumber
&&+\cos^2\theta_s\bigg(2\lambda_6\sin^2\theta_D+\lambda_4\cos^2\theta_D\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_1 \rho_2 H_2 H_2 &:&i \Bigg(\sqrt{2}\lambda_8\cos\theta_s\sin\theta_s\cos 2\theta_D+\cos\theta_D\sin\theta_D \\ \nonumber
&&\bigg((\lambda_4-2\lambda_6)\cos^2\theta_s+(\lambda_2+\lambda_3-2\lambda_7)\sin^2\theta_s\bigg)\Bigg)
\end{eqnarray}
\begin{eqnarray}
\rho_3 \rho_3 H_2 H_2 &:& i \Bigg((\lambda_2+\lambda_3)\sin^2\theta_s+\lambda_4\cos^2\theta_s\Bigg)
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings between $\mathbb{Z}_2$ odd particles with gauge fields:}
\begin{eqnarray}
\rho_1\phi^\pm W^{\mp_\alpha} &:& \mp i\frac{e\sin\theta_D}{2\sin\theta_W}(p_1-p_2)^\alpha\\
\rho_2\phi^\pm W^{\mp_\alpha} &:& \mp i\frac{e\cos\theta_D}{2\sin\theta_W}(p_1-p_2)^\alpha\\
\rho_3\phi^\pm W^{\mp_\alpha} &:& -\frac{e}{2\sin\theta_W}(p_1-p_2)^\alpha\\
\rho_1\rho_3 Z^\alpha &:& \frac{\sin\theta_D}{2} \Bigg(\frac{e}{2\sin\theta_W \cos\theta_W}\cos\theta_{\mu\tau}+\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\sin\theta_{\mu\tau}\Bigg)(p_1-p_2)^\alpha\\
\rho_1\rho_3 Z^\alpha_{\mu\tau} &:& \frac{\sin\theta_D}{2} \Bigg(\frac{e}{\sin\theta_W \cos\theta_W}\sin\theta_{\mu\tau}-\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\cos\theta_{\mu\tau}\Bigg)(p_1-p_2)^\alpha\\
\rho_2\rho_3 Z^\alpha &:& \frac{\cos\theta_D}{2} \Bigg(\frac{e}{2\sin\theta_W \cos\theta_W}\cos\theta_{\mu\tau}+\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\sin\theta_{\mu\tau}\Bigg)(p_1-p_2)^\alpha\\
\rho_2\rho_3 Z^\alpha_{\mu\tau} &:& \frac{\cos\theta_D}{2} \Bigg(\frac{e}{\sin\theta_W \cos\theta_W}\sin\theta_{\mu\tau}-\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\cos\theta_{\mu\tau}\Bigg)(p_1-p_2)^\alpha
\end{eqnarray}
$\bullet$ \underline {Quartic couplings of dark matter with gauge fields:}
\begin{eqnarray}
\rho_1 \rho_1 W^{+_\alpha} W^{-_\beta} &:& i\frac{e^2\sin^2\theta_D}{2\sin^2\theta_W}g^{\alpha\beta}
\end{eqnarray}
\begin{eqnarray}
\rho_1 \rho_1 Z^{\alpha}Z^{\beta} &:& i \frac {\sin^2\theta_D}{2} \Bigg(\bigg(2g_{Z_{\mu\tau}}\sin\theta_{\mu\tau}+\frac{e \cos\theta_{\mu\tau}}{\cos\theta_W\sin\theta_W}\bigg)\\ \nonumber
&&\bigg(2\left(g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W}\right)\sin\theta_{\mu\tau}+\frac{e \cos\theta_{\mu\tau}}{\cos\theta_W\sin\theta_W}\bigg)\Bigg)g^{\alpha\beta}
\end{eqnarray}
\begin{eqnarray}
\rho_1 \rho_1 Z^{\alpha}_{\mu\tau}Z^{\beta}_{\mu\tau} &:& i \frac {\sin^2\theta_D}{2} \Bigg(\bigg(2g_{Z_{\mu\tau}}\cos\theta_{\mu\tau}-\frac{e \sin\theta_{\mu\tau}}{\cos\theta_W\sin\theta_W}\bigg)\\ \nonumber
&&\bigg(2\left(g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W}\right)\cos\theta_{\mu\tau}-\frac{e \sin\theta_{\mu\tau}}{\cos\theta_W\sin\theta_W}\bigg)\Bigg)g^{\alpha\beta}
\end{eqnarray}
\begin{eqnarray}
\rho_1 \rho_1 Z^{\alpha}_{\mu\tau}Z^{\beta}&:& i \frac {\sin^2\theta_D}{2}\Bigg(\frac{e^2\cos\theta_{\mu\tau}\sin\theta_{\mu\tau}}{\cos^2\theta_W\sin^2\theta_W} \\ \nonumber
&&-\frac{e}{\cos\theta_W \sin\theta_W}\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W}\right)\cos 2\theta_{\mu\tau} \\ \nonumber
&& -2g_{Z_{\mu\tau}} \left(g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W}\right) \sin 2\theta_{\mu\tau}
\Bigg)g^{\alpha\beta}
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings between $\mathbb{Z}_2$ odd charged particles with $H_1$ and $H_2$ scalar fields:}
\begin{eqnarray}
\phi^+\phi^-H_1&:& i\Bigg(v_1\lambda_2\cos\theta_s-v_2\lambda_4\sin\theta_s\Bigg)\\
\phi^+\phi^-H_2&:& i\Bigg(v_1\lambda_2\sin\theta_s+v_2\lambda_4\cos\theta_s\Bigg)
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings between $\mathbb{Z}_2$ odd charged particles with gauge fields:}
\begin{eqnarray}
\phi^+\phi^-\gamma^\alpha&:&-ie(p_1-p_2)^\alpha \\
\phi^+\phi^-Z^\alpha&:&\frac{i}{2}\Bigg(\frac{e\cos 2\theta_W}{\sin\theta_W \cos\theta_W}\cos\theta_{\mu\tau}-\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\sin\theta_{\mu\tau}\Bigg) (p_1-p_2)^\alpha\\
\phi^+\phi^-Z^\alpha_{\mu\tau}&:&\frac{i}{2}\Bigg(\frac{e \cos 2\theta_W}{\sin\theta_W \cos\theta_W}\sin\theta_{\mu\tau}+\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\cos\theta_{\mu\tau}\Bigg) (p_1-p_2)^\alpha
\end{eqnarray}
$\bullet$ \underline {Quartic couplings between $\mathbb{Z}_2$ odd charged particles with gauge fields:}
\begin{eqnarray}
\phi^+\phi^-W^{+\alpha} W^{-\beta} &:&i\frac{e^2}{2\sin^2\theta_W}g^{\alpha\beta}\\
\phi^+\phi^-\gamma^\alpha \gamma^\beta &:&i2e^2g^{\alpha\beta}
\end{eqnarray}
\begin{eqnarray}
\phi^+\phi^-\gamma^\alpha Z^\beta &:&i\frac{e}{2}\Bigg(\frac{e\cos 2\theta_W}{\sin\theta_W \cos\theta_W}\cos\theta_{\mu\tau}-\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\sin\theta_{\mu\tau}\Bigg) g^{\alpha\beta} \\
\phi^+\phi^-Z^\alpha Z^\beta &:&\frac{i}{2}g^{\alpha\beta}\Bigg(\frac{e\cos 2\theta_W}{\sin\theta_W \cos\theta_W}\cos\theta_{\mu\tau}-2g_{Z_{\mu\tau}}\sin\theta_{\mu\tau}\Bigg) \\ \nonumber
&&\Bigg(\frac{e\cos 2\theta_W}{\sin\theta_W \cos\theta_W}\cos\theta_{\mu\tau}-\left(2g_{Z_{\mu\tau}}+\epsilon\frac{e}{\cos\theta_W} \right)\sin\theta_{\mu\tau}\Bigg)
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings between CP-even scalar fields:}
\begin{eqnarray}
H_1 H_1 H_1 &:& i \Bigg(6 v_1\lambda_H \cos^3\theta_s-3\lambda_1\bigg(\cos^2\theta_s\sin\theta_s-\cos\theta_s\sin^2\theta_s\bigg)-6v_2\lambda_\eta\sin^3\theta_s\Bigg)\\
H_2 H_1 H_1 &:& i\Bigg(v_2\lambda_1\cos^3\theta_s+2v_1(3\lambda_H-\lambda_1)\cos^2\theta_s\sin\theta_s\\ \nonumber
&&+2v_2(3\lambda_\eta-\lambda_1)\cos\theta_s\sin^2\theta_s+v_1\lambda_1\sin^3\theta_s\Bigg)
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings of CP-even scalar fields with gauge fields:}
\begin{eqnarray}
H_1 W^{+\alpha} W^{-\beta} &:& i\frac{e^2v_1}{2\sin^2\theta_W}\cos\theta_s g^{\alpha\beta}\\
H_2 W^{+\alpha} W^{-\beta} &:& i\frac{e^2v_1}{2\sin^2\theta_W}\sin\theta_s g^{\alpha\beta}\\
H_1 Z^{\alpha} Z^{\beta} &:& i\Bigg(\frac{e v_1}{2\sin\theta_W\cos\theta_W}\cos\theta_{\mu\tau}\bigg(\cos\theta_{\mu\tau}\frac{e}{\sin\theta_W\cos\theta_W}\\ \nonumber
&&+2\sin\theta_{\mu\tau}\epsilon\frac{e}{\cos\theta_W}\bigg)\cos\theta_s -2g^2_{Z_{\mu\tau}}v_2\sin^2\theta_{\mu\tau}\sin\theta_s\Bigg)g^{\alpha\beta}\\
H_2 Z^{\alpha} Z^{\beta} &:& i\Bigg(\frac{e v_1}{2\sin\theta_W\cos\theta_W}\cos\theta_{\mu\tau}\bigg(\cos\theta_{\mu\tau}\frac{e}{\sin\theta_W\cos\theta_W}\\ \nonumber
&&+2\sin\theta_{\mu\tau}\epsilon\frac{e}{\cos\theta_W}\bigg)\sin\theta_s +2g^2_{Z_{\mu\tau}}v_2\sin^2\theta_{\mu\tau}\cos\theta_s\Bigg)g^{\alpha\beta}\\
H_1 Z^{\alpha}_{\mu\tau} Z^{\beta}_{\mu\tau} &:& i\Bigg(\frac{e v_1}{2\sin\theta_W\cos\theta_W}\sin\theta_{\mu\tau}\bigg(\sin\theta_{\mu\tau}\frac{e}{\sin\theta_W\cos\theta_W}\\ \nonumber
&&-2\cos\theta_{\mu\tau}\epsilon\frac{e}{\cos\theta_W}\bigg)\cos\theta_s -2g^2_{Z_{\mu\tau}}v_2\cos^2\theta_{\mu\tau}\sin\theta_s\Bigg)g^{\alpha\beta}\\
H_2 Z^{\alpha}_{\mu\tau} Z^{\beta}_{\mu\tau} &:& i\Bigg(\frac{e v_1}{2\sin\theta_W\cos\theta_W}\sin\theta_{\mu\tau}\bigg(\sin\theta_{\mu\tau}\frac{e}{\sin\theta_W\cos\theta_W}\\ \nonumber
&&-2\cos\theta_{\mu\tau}\epsilon\frac{e}{\cos\theta_W}\bigg)\sin\theta_s +2g^2_{Z_{\mu\tau}}v_2\cos^2\theta_{\mu\tau}\cos\theta_s\Bigg)g^{\alpha\beta}
\end{eqnarray}
\begin{eqnarray}
H_1 Z^{\alpha} Z^{\beta}_{\mu\tau} &:& i\Bigg(\frac{e v_1}{2\sin\theta_W\cos\theta_W}\bigg(\sin 2\theta_{\mu\tau}\frac{e}{2\sin\theta_W\cos\theta_W}\\ \nonumber
&&-\cos 2\theta_{\mu\tau}\epsilon\frac{e}{\cos\theta_W}\bigg)\cos\theta_s +g^2_{Z_{\mu\tau}}v_2\sin 2\theta_{\mu\tau}\sin\theta_s\Bigg)g^{\alpha\beta}
\end{eqnarray}
\begin{eqnarray}
H_2 Z^{\alpha} Z^{\beta}_{\mu\tau} &:& i\Bigg(\frac{e v_1}{2\sin\theta_W\cos\theta_W}\bigg(\sin 2\theta_{\mu\tau}\frac{e}{2\sin\theta_W\cos\theta_W}\\ \nonumber
&&-\cos 2\theta_{\mu\tau}\epsilon\frac{e}{\cos\theta_W}\bigg)\sin\theta_s -g^2_{Z_{\mu\tau}}v_2\sin 2\theta_{\mu\tau}\cos\theta_s\Bigg)g^{\alpha\beta}
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings between gauge fields:}
\begin{eqnarray}
\gamma^\sigma W^{+\alpha}W^{-\beta}&:& ie \bigg( g^{\sigma\alpha} (p_2
-p_1)^\beta + g^{\sigma\beta} (p_1 -p_3)^\alpha +
g^{\beta\alpha} (p_3 -p_2)^\sigma \bigg) \\
Z^\sigma W^{+\alpha}W^{-\beta}&:& ie \frac{\cos\theta_W \cos\theta_s}{\sin\theta_W}\bigg( g^{\sigma\alpha} (p_2
-p_1)^\beta + g^{\sigma\beta} (p_1 -p_3)^\alpha +
g^{\beta\alpha} (p_3 -p_2)^\sigma \bigg) \\
Z^\sigma_{\mu\tau} W^{+\alpha}W^{-\beta}&:& ie \frac{\cos\theta_W \sin\theta_s}{\sin\theta_W}\bigg( g^{\sigma\alpha} (p_2
-p_1)^\beta + g^{\sigma\beta} (p_1 -p_3)^\alpha +
g^{\beta\alpha} (p_3 -p_2)^\sigma \bigg)
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings of CP-even fields scalar with different SM fermion fields:}
\begin{eqnarray}
H_1 c \bar{c} &:& -i\frac{e\; m_c}{\sqrt{2}\sin\theta_W M_W}\cos\theta_s\\
H_2 c \bar{c} &:& -i\frac{e\; m_c}{\sqrt{2}\sin\theta_W M_W}\sin\theta_s\\
H_1 t \bar{t} &:& -i\frac{e\; m_t}{\sqrt{2}\sin\theta_W M_W}\cos\theta_s\\
H_2 t \bar{t} &:& -i\frac{e\; m_t}{\sqrt{2}\sin\theta_W M_W}\sin\theta_s\\
H_1 b \bar{b} &:& -i\frac{e\; m_b}{\sqrt{2}\sin\theta_W M_W}\cos\theta_s\\
H_2 b \bar{b} &:& -i\frac{e\; m_b}{\sqrt{2}\sin\theta_W M_W}\sin\theta_s\\
H_1 \tau^+\tau^- &:& -i\frac{e \;m_\tau}{\sqrt{2}\sin\theta_W M_W}\cos\theta_s\\
H_2 \tau^+\tau^- &:& -i\frac{e\; m_\tau}{\sqrt{2}\sin\theta_W M_W}\sin\theta_s
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings of $\chi$ with SM down-type quarks and $\rho_i~(i\equiv 1,2,3)$ field:}
\begin{eqnarray}
\bar{\chi} \rho_1 b_j: -i\frac{f_j}{2\sqrt{2}}(1-\gamma^5)\sin\theta_D,\;\;\;\;\bar{b}_j \rho_1\chi: -i\frac{f_j}{2\sqrt{2}}(1+\gamma^5)\sin\theta_D
\label{rho1xd}
\end{eqnarray}
\begin{eqnarray}
\bar{\chi} \rho_2 b_j: -i\frac{f_j}{2\sqrt{2}}(1-\gamma^5)\cos\theta_D,\;\;\;\;\bar{b}_j \rho_2\chi: -i\frac{f_j}{2\sqrt{2}}(1+\gamma^5)\cos\theta_D
\label{rho2xd}
\end{eqnarray}
\begin{eqnarray}
\bar{\chi} \rho_3 b_j: -\frac{f_j}{2\sqrt{2}}(1-\gamma^5),\;\;\;\;\bar{b}_j \rho_3\chi: -\frac{f_j}{2\sqrt{2}}(1+\gamma^5)
\label{rho3xd}
\end{eqnarray}
$\bullet$ \underline {Trilinear couplings of $\chi$ with $Z(Z_{\mu\tau})$ gauge field:}
\begin{eqnarray}
\bar{\chi}\chi Z^\alpha&=&-i\gamma^\alpha\Bigg[\frac{g_2}{3\cos\theta_W}\sin^2\theta_W\bigg(\cos\theta_{\mu\tau}+\frac{\epsilon}{\sin\theta_W}\sin\theta_{\mu\tau}\bigg)+ g_{Z_{\mu\tau}}\sin\theta_{\mu\tau}\Bigg]\\
\label{chichiz}
\bar{\chi}\chi Z^\alpha_{\mu\tau}&=&-i\gamma^\alpha\Bigg[\frac{g_2}{3\cos\theta_W}\sin^2\theta_W\bigg(\sin\theta_{\mu\tau}-\frac{\epsilon}{\sin\theta_W}\cos\theta_{\mu\tau}\bigg)- g_{Z_{\mu\tau}}\cos\theta_{\mu\tau}\Bigg]
\label{chichiz1}
\end{eqnarray}
\end{appendices}
\bibliographystyle{jhep}
\section{Introduction}
Every now and then we are faced with field theories containing a charged sector interacting with monopole-like defects. The most remarkable example is associated with the scenario of dual superconductivity for confinement in $SU(N)$ Yang-Mills theories \cite{N}-\cite{hooft1}, \cite{cp}-\cite{KLSW}.
In these theories, the charged sector corresponds to the ``off-diagonal" modes living in the Cartan subalgebra of the nonabelian group, while mono\-poles arise as defects when defining abelian projection gauge fixing conditions \cite{ap}. Monopoles can also be introduced as defects of the local color frame $\hat{n}_a$, $a=1,2,3$, to decompose the gauge fields \cite{cho-a,FN}, \cite{cho2}-\cite{Shaba}. This procedure has the advantage of not relying on any a priori gauge fixing condition.
In both situations, we have to deal with the associated Dirac strings or worldsheets, depending on whether the monopole defects are point-like or loop-like. Considering that these objects are not observable (their location can be changed by means of a topologically trivial gauge transformation), a natural question arises about the possibility of representing physical quantities, such as the partition function, only in terms of their gauge invariant borders (where monopoles are located).
In this paper, we will present an exact procedure to achieve this goal in compact $QED(3)$ with charged fields and in the framework of the Cho-Faddeev-Niemi decomposition of pure $SU(2)$ Yang-Mills theory.
This is particularly relevant in the latter case, when using the gauge field decomposition to guide the obtention of effective theories associated with ensembles of monopoles and center vortices (see ref. \cite{LEO}).
In particular, if our procedure were not applied before any approximation scheme, the effective theories obtained could fail to make sense physically, as Dirac worldsheets would become observable because of the approximations.
On the other hand, once we have a partition function representation only in terms of the monopole locations, assuming a phase where monopoles condense, we can reobtain the effective model of ref. \cite{cho-a}, proposed by following physical heuristic arguments to deal with the Dirac worldsheets.
Another closely related example occurs in refs. \cite{cho1, kondo.sky}, where the effective Skyrme model \cite{FN, Shaba, F} has been discussed in the Cho-Faddeev-Niemi framework, by following heuristic arguments assuming a magnetic condensate, and by implementing a series of approximations to compute the one-loop effective action in a monopole background.
In fact, in these references the singular terms where the worldsheets are concentrated are missing (see the discussion in ref. \cite{LEO}). Of course, any heuristic reasoning only deals with physical objects and the effective theory must be directly constructed in terms of them. Then, the effective models have been constructed in terms of the third component $\hat{n}=\hat{n}_3$ of the local color frame, as monopoles can be seen as defects of this component, with no reference to any Dirac worldsheet.
The main point is that, as discussed in ref. \cite{LEO}, when monopole defects are present for $\hat{n}$, necessarily the components $\hat{n}_1$, $\hat{n}_2$ must also contain defects, and therefore we have two possibilities: Firstly, we could have Dirac worldsheet defects where the components $\hat{n}_1$, $\hat{n}_2$ rotate twice, as we go close and around them. This corresponds to a magnetic flux $4\pi/g$ carried by the Dirac worldsheet, matching the magnetic flux $4\pi/g$ emanating from monopoles in nonabelian theories. Secondly, it is also possible to attach monopoles with a pair of center vortices carrying flux $2\pi/g$, which are also given by defects in the components $\hat{n}_1$, $\hat{n}_2$; in this case, when we go around the vortex they rotate once.
Therefore, when looking for effective models written only in terms of $\hat{n}$, if on the one hand no information about unphysical Dirac worldsheets is introduced, on the other, we miss information about the $\hat{n}_1$, $\hat{n}_2$ sector, which contains physical information about center vortex ensembles. For this reason, it is important to have a careful discussion about how to get rid of Dirac worldsheets in the Cho-Faddeev-Niemi decomposition framework, and to understand why this procedure fails to get rid of center vortices, so that they can be associated with interesting phases displaying confinement, $N$-ality \cite{debbio3}-\cite{quandt} or Abelian dominance \cite{LEO}.
Moreover, the interest in looking for possible extensions to the Skyrme effective model is also supported by recent limitations of this model observed in the lattice \cite{DHW}.
Technically, the above question about the possibility of representing the partition function with no reference to Dirac strings or worldsheets is a nontrivial one, as in a field theory problem the charge current is distributed on the whole Euclidean spacetime. This is in contrast with the problem of representing the path integral for the propagation of a one-particle system, where the relevant electric current is concentrated on the integration path and the Dirac string does not appear, as long as the Dirac quantization condition is imposed.
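To make the contrast explicit, recall the standard one-particle argument: there the monopole potential enters only through the Wilson factor along the particle worldline ${\cal C}$, and displacing the Dirac string across ${\cal C}$ changes the flux linked by the loop by one unit of $g_m$, so that
\begin{eqnarray}
e^{\,iq\oint_{{\cal C}} dx_{\mu}\, C_{\mu}} &\longrightarrow& e^{\,iq\oint_{{\cal C}} dx_{\mu}\, C_{\mu}}\; e^{\,iq\, g_m}\,, \nonumber
\end{eqnarray}
which is unobservable precisely when $q\,g_m=2\pi n$. No such simple loop argument is available when the charge is carried by fields distributed over the whole Euclidean spacetime.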
Initially, we will perform a change of variables with trivial Jacobian that only introduces given closed Dirac strings or worldsheets in the partition function. In compact $QED(3)$ and $SU(2)$ Yang-Mills theory, this will be possible by working in the Lorentz and the Maximal Abelian gauges, respectively, and considering a gauge transformation with multivalued phase $\chi$, satisfying the Laplace equation $\partial_\mu \partial_\mu \chi =0$. The explicit form of this transformation is obtained by means of the expressions derived in refs. \cite{engelhardt1,reinhardt} to describe closed thin center vortices. In this respect, note that, as is well known, neither the MAG condition nor the Landau condition fixes the gauge completely. In \S\ref{YM-sub}, we will discuss this issue in the context of Gribov ideas for the implementation of a properly defined path integral (see refs. \cite{G,UERJ} and references therein).
Next, we will show that it is always possible to choose the closed Dirac strings or worldsheets, in such a way that the total effect is the decoupling of open plus closed Dirac defects from the charged sector, in the integrand of the partition function, leaving only the effect of their associated gauge invariant borders, where the physical monopoles are placed.
This article is organized as follows. In section \S \ref{c}, we review monopoles in compact $QED(3)$ with charged matter and the Cho-Faddeev-Niemi scenario for $SU(2)$ Yang-Mills theory. Section \S \ref{b} is devoted to discuss the associated partition functions in minimal coupling form and to define the gauge fixing conditions. In \S\ref{d},
we separate, by means of a Hodge decomposition, the terms coupling the Dirac strings or worldsheets to the charged sector from those coupling their borders, where the physical monopoles are placed. In section \S\ref{in}, we carefully discuss the Dirac string or worldsheet independence of the partition functions, and show the central result of this work, namely, how to get rid of Dirac defects by decoupling them from the charged sector. Finally, in section \S\ref{conc} we present our conclusions and discuss exactly where our procedure fails to get rid of physical center vortices.
\section{Charged fields and monopole-like defects}
\label{c}
\subsection{Compact $QED(3)$}
As shown in \cite{polya}, pure compact $QED(3)$ is a confining model. Here, we consider its coupling to a charged matter
sector. In this case, the action \footnote{Throughout this paper we work in Euclidean spacetime.} for an instanton/anti-instanton pair is given by,
\begin{equation}
S = \int d^{3}x \Biggl(\bar{D}_{\mu}\bar{\Phi}D_{\mu}\Phi +
\frac{1}{2}(f_{\mu} + h_{\mu})^{2} \Biggr),
\label{action}
\end{equation}
where
\begin{equation}
D_{\,\mu} = \partial_{\,\mu} - iq\biggl(A_{\,\mu} +
C_{\,\mu}\biggr)
\label{1a}
\makebox[.5in]{,}
f_{\mu} = \epsilon_{\mu\nu\rho}\partial_{\nu}A_{\rho}.
\label{6}
\end{equation}
The field $h_\mu$ added to the dual field strength tensor $f_\mu$ in the action (\ref{action}) is such that
\begin{equation}
\partial_{\mu} h_{\mu}= g_m [\delta^{(3)}(x - x^{+}) - \delta^{(3)}(x - x^{-})] ,
\label{1}
\end{equation}
and the vector potential $C_{\mu}$, satisfying
$h_{\mu}=\epsilon_{\mu\nu\rho}\partial_{\nu}C_{\rho}$, can be
introduced only outside a region containing a Dirac string
$x_{s}(\sigma)$, $\sigma \in [0,1]$, running from
$x^{-}$ to $x^{+}$,
\begin{equation}
x_{s}(0)=x^{-}
\makebox[.5in]{,}
x_{s}(1)=x^+.
\label{2}
\end{equation}
In order to extend the vector potential to the whole space $\mathcal{R}^3$, $h_{\mu}$ and $\epsilon_{\mu\nu\rho}\partial_{\nu}C_{\rho}$ must differ by a singular term $d_{\mu}$,
\begin{equation}
h_{\mu}= \epsilon_{\mu\nu\rho}\partial_{\nu}C_{\rho}+ d_{\mu}
\makebox[.5in]{,}
d_{\mu}=g_m \int_{[x_s]} dy_{\mu}\, \delta^{(3)} (x - y).
\label{3}
\end{equation}
This implies that the flux of $d_{\mu}$
through a surface crossed by the Dirac string is $\pm g_m$ .
The independence of physical quantities on the choice of $C_{\mu}$
must also include the independence on the possible associated
Dirac strings. As is well known, this nonobservability implies the
famous Dirac charge quantization condition
\begin{equation}
q = n e
\makebox[.5in]{,}
e= 2\pi/g_m
\label{dirac_condition}
\end{equation}
where $n$ is an integer.
\subsection{$SU(2)$ Yang-Mills and the Cho-Faddeev-Niemi decomposition}
In $SU(2)$ Yang-Mills theory in four dimensions the action is given by,
\begin{equation}
{S_{YM}}=\frac{1}{2}\int d^4 x\; tr\, (F_{\mu \nu} F_{\mu \nu })
\makebox[.5in]{,}
F_{\mu \nu}=F_{\mu \nu}^{a}T^{a}.
\end{equation}
The generators can be realized as $T^a=\tau^a/2$, $a=1,2,3$, where $\tau^a$ are the Pauli matrices, and the field strength tensor is written in terms of the gauge fields $A_{\mu }^{a}$, $a=1, 2, 3$,
\begin{equation}
\vec{F}_{\mu \nu}=\partial_\mu \vec{A}_\nu -\partial_\nu \vec{A}_\mu +g \vec{A}_\mu\times \vec{A}_\nu,
\makebox[.5in]{,}
\vec{A}_\mu=A_\mu^a\, \hat{e}_a
\makebox[.5in]{,}
\vec{F}_{\mu \nu}=F_{\mu \nu}^a\, \hat{e}_a,
\end{equation}
where $\hat{e}_a$ is the canonical basis in color space.
The Cho-Faddeev-Niemi decomposition \cite{cho-a,FN} is done in terms of a general local frame in color space, $\hat{n}_a$, $a=1,2,3$, which can be parametrized by means of an orthogonal local transformation $R\in SO(3)$ ,
\begin{equation}
\hat{n}_a=R\, \hat{e}_a.
\end{equation}
This frame can be used to represent the gauge field $\vec{A}_\mu$ as,
\begin{equation}
\vec{A}_\mu=A_\mu \hat{n}-\frac{1}{g} \hat{n}\times \partial_\mu \hat{n} + \vec{X}_\mu
\makebox[.5in]{,}
\hat{n}.\vec{X}_\mu=0 ,
\label{dec}
\end{equation}
\begin{equation}
\hat{n}_a.\hat{n}_b=\delta_{ab}
\makebox[.5in]{,}
a,b=1,2,3
\makebox[.5in]{,}
\hat{n}\equiv \hat{n}_3,
\end{equation}
where $\vec{X}_\mu$ transforms in the adjoint representation.
The field strength tensor, for the decomposition (\ref{dec}) defined in the whole Euclidean spacetime, is given by,
\begin{equation}
\vec{F}_{\mu \nu}=(F_{\mu \nu}+H_{\mu \nu}+K_{\mu \nu}) \hat{n}+\vec{G}_{\mu \nu}+\vec{L}_{\mu \nu},
\label{FHK}
\end{equation}
\begin{equation}
F_{\mu \nu}=\partial_\mu A_\nu -\partial_\nu A_\mu
\makebox[.5in]{,}
H_{\mu \nu}=-\frac{1}{g} \hat{n}.(\partial_\mu \hat{n} \times \partial_\nu \hat{n}),
\label{HK}
\end{equation}
\begin{equation}
K_{\mu \nu}=-i g (\bar{\Phi}_\mu \Phi_\nu-\Phi_\mu \bar{\Phi}_\nu)
\makebox[.5in]{,}
\vec{G}_{\mu \nu}=G^1_{\mu \nu} \hat{n}_1 +G^2_{\mu \nu} \hat{n}_2
\end{equation}
with,
\begin{equation}
\Phi_\mu=\frac{1}{\sqrt{2}}(X^1_\mu+iX^2_\mu)
\makebox[.5in]{,}
G_{\mu \nu}=\frac{1}{\sqrt{2}}(G^1_{\mu \nu}+iG^2_{\mu \nu}),
\label{Fi}
\end{equation}
\begin{equation}
G_{\mu \nu}=
[\partial_\mu+ig(A_\mu+C^{(n)}_\mu)]\Phi_\nu -[\partial_\nu+ig(A_\nu+C^{(n)}_\nu)]\Phi_\mu ,
\end{equation}
and the monopole vector potential is given by,
\begin{equation}
C^{(n)}_\mu=-\frac{1}{g} \hat{n}_1.\partial_\mu \hat{n}_2.
\label{Cmu}
\end{equation}
Finally, $\vec{L}_{\mu \nu}= -(1/g) \hat{n}\times [\partial_\mu,\partial_\nu] \hat{n}$
is a term concentrated on the defects of the color direction $\hat{n}$ (see ref. \cite{LEO}). In addition, while in refs. \cite{cho2}-\cite{cho5}, $H_{\mu \nu}$ is computed to be $\partial_\mu C_\nu -\partial_\nu C_\mu$, obtaining simpler ``abelianized'' expressions for the field strength tensor, when dealing with gauge fields containing defects this relationship must be revised. In fact, when defined on the whole Euclidean spacetime, both quantities differ by singular terms \cite{LEO},
\begin{equation}
H_{\mu \nu}=\partial_\mu C^{(n)}_\nu -\partial_\nu C^{(n)}_\mu
+D_{\mu \nu}
\makebox[.5in]{,}
D_{\mu \nu}=\frac{1}{g} \hat{n}_1. [\partial_\mu,\partial_\nu]\hat{n}_2.
\label{HCD}
\end{equation}
To study magnetic defects, it will also be convenient to consider the associated dual expressions, defining the dual tensors using lower-case letters. For instance, the dual form of the first equation in (\ref{HCD}) reads,
\begin{equation}
h_{\mu \nu}=\epsilon_{\mu \nu \rho \sigma}\partial_\rho C^{(n)}_\sigma+d_{\mu \nu}
\makebox[.3in]{,}
h_{\mu \nu}=\frac{1}{2} \epsilon_{\mu \nu \rho \sigma} H_{\rho \sigma}
\makebox[.3in]{,}
d_{\mu \nu}=\frac{1}{2} \epsilon_{\mu \nu \rho \sigma} D_{\rho \sigma}.
\label{hcd}
\end{equation}
The monopole configurations are obtained from nontrivial $\hat{n}$ mappings \cite{cho-a}, \cite{cho2}-\cite{Shaba},
\begin{equation}
g_m=\oint ds_i\, h_{0i} = \pm \frac{4\pi}{g},
\label{m-ch}
\end{equation}
where the integral is on a surface enclosing a monopole (resp. anti-monopole). The factor of two, with respect to the magnetic charge of a Dirac monopole, is associated with the nonabelian nature of the fields.
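As an explicit example (overall signs depend on orientation conventions), consider a static hedgehog mapping $\hat{n}=x_i\hat{e}_i/|x|$, for which $\hat{n}.(\partial_i \hat{n} \times \partial_j \hat{n})=\epsilon_{ijk}\, x_k/|x|^3$, so that
\begin{eqnarray}
h_{0i}=\frac{1}{2}\,\epsilon_{0ijk}H_{jk} &=& -\frac{1}{g}\,\frac{x_i}{|x|^3}
\makebox[.5in]{,}
\Big|\oint ds_i\, h_{0i}\Big| = \frac{4\pi}{g}\,, \nonumber
\end{eqnarray}
reproducing the magnetic charge quoted in eq. (\ref{m-ch}).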
For mappings like these, the term $\vec{L}_{\mu \nu}$ must vanish since $\hat{n}$ does not contain defects localized on two-dimensional worldsheets. On the other hand, the local directions $\hat{n}_1$, $\hat{n}_2$ will be necessarily singular on two-dimensional worldsheets, and therefore they give a nontrivial contribution to $d_{\mu \nu}$ of the form,
\begin{eqnarray}
d_{\mu\nu} &=&\frac{4\pi}{g} \int d\sigma_1 d\sigma_2\,
\left(\frac{\partial x_w^\mu}{\partial \sigma_1}\frac{\partial x_w^\nu}{\partial \sigma_2}-
\frac{\partial x_w^\mu}{\partial \sigma_2}\frac{\partial x_w^\nu}{\partial \sigma_1}\right) \delta^{(4)}(x-x_{w}(\sigma_1,\sigma_2))\nonumber \\
&=&\frac{4\pi}{g} \int d^2 \sigma_{\mu \nu}\, \delta^{(4)}(x-x_{w}(\sigma_1,\sigma_2)),
\end{eqnarray}
where $x_{w}(\sigma_1,\sigma_2)$ is the Dirac worldsheet.
It will be useful to know that for a monopole/anti-monopole pair localized on the loops ${\cal C}^+$ and ${\cal C}^-$, we have,
\begin{eqnarray}
\partial_\nu d_{\mu \nu}&=&
\frac{4\pi}{g} \left( \oint_{{\cal C}^+} dy_\mu\, \delta^{(4)}(x-y)- \oint_{{\cal C}^-} dy_\mu\, \delta^{(4)}(x-y) \right).
\label{divd}
\end{eqnarray}
\section{Partition functions}
\label{b}
\subsection{Compact $QED(3)$}
The partition function of compact $QED(3)$ with an instanton/anti-instanton pair is,
\begin{equation}
Z = \int [{\cal D} A] [{\cal D}\Phi] [{\cal D}\bar{\Phi}] F_{gf}\, e^{-S}.
\label{partition_function}
\end{equation}
For example, we can consider the gauge fixing condition,
\begin{equation}
\partial_\mu (A_\mu +C_\mu) = 0
\end{equation}
introducing a Lagrange multiplier $\beta$, which corresponds to the measure,
\begin{equation}
F_{gf}=[{\cal D} \beta] e^{i\int d^3x\, \beta\, \partial_\mu (A_\mu + C_\mu)}.
\end{equation}
In order to single out the terms that depend explicitly on the Dirac string, we linearize the coupling with $C_{\mu}$ by introducing the auxiliary fields $\Lambda_{\mu}$, $\bar{\Lambda}_{\mu}$, and $\lambda_{\mu}$,
\begin{equation}
S = \int d^{3}x \Biggl(\frac{1}{2}\lambda^{2}_{\mu} +
\bar{\Lambda}_{\mu}\Lambda_{\mu} - \frac{i
}{2}\biggl(\bar{\Lambda}_{\mu}D_{\mu}\Phi +
\bar{D}_{\mu}\bar{\Phi}\Lambda_{\mu}\biggr) -
i\lambda_{\mu}\biggl(f_{\mu} +
h_{\mu}\biggr)\Biggr).
\label{11}
\end{equation}
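Up to a field-independent normalization, this is just the standard Gaussian identity applied pointwise,
\begin{eqnarray}
\int [{\cal D}\lambda]\; e^{-\int d^{3}x\, \left(\frac{1}{2}\lambda_{\mu}^{2}-i\lambda_{\mu}(f_{\mu}+h_{\mu})\right)} &\propto& e^{-\int d^{3}x\, \frac{1}{2}(f_{\mu}+h_{\mu})^{2}}\,, \nonumber
\end{eqnarray}
so that integrating out $\lambda_\mu$ restores the term $\frac{1}{2}(f_{\mu}+h_{\mu})^{2}$, while the analogous Gaussian integration over $\Lambda_{\mu}$, $\bar{\Lambda}_{\mu}$ restores the charged kinetic term of the action (\ref{action}).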
The partition function becomes,
\begin{equation}
Z = \int [{\cal D} \Psi][{\cal D}\beta] \, e^{ - S_{c}-\int d^{3}x\, \frac{1}{2}\lambda_{\mu}^{2} +
i\int d^{3}x\, (\lambda_{\mu}\,(f_{\mu}+h_{\mu}) - J_{\mu}(A_{\mu}+C_\mu)+\beta\, \partial_\mu (A_\mu +C_\mu))},
\label{partition2}
\end{equation}
where $[{\cal D} \Psi]$ is the measure over all fields, physical and auxiliary, while,
\begin{equation}
S_{c} = \int d^{3}x \biggl(\bar{\Lambda}_{\mu}\Lambda_{\mu} - \frac{i}{2}(\bar{\Lambda}_{\mu}
\partial_{\mu} \Phi +
\Lambda_{\mu}\partial_{\mu}\bar{\Phi})\biggr),
\label{13}
\end{equation}
\begin{equation}
J_{\mu} = \frac{iq}{2}(\bar{\Lambda}_{\mu}\Phi - \bar{\Phi}\Lambda_{\mu}).
\label{14}
\end{equation}
We also note that a constraint is implicit in eq. (\ref{partition2}), because of the $A_\mu$ path integral,
\begin{equation}
\epsilon_{\mu \nu \rho} \partial_\nu \lambda_\rho =J_\mu^c \makebox[.5in]{,}
J_\mu^c = J_\mu +\partial_\mu \beta ,
\label{ampere}
\end{equation}
which, taking the divergence and using $\partial_\mu\, \epsilon_{\mu \nu \rho} \partial_\nu \lambda_\rho =0$, implies
\begin{equation}
\beta =-\frac{1}{\partial^2} \partial_\mu J_\mu.
\label{betaJ}
\end{equation}
That is,
\begin{equation}
Z = \int [{\cal D} \Psi][{\cal D}\beta] \, e^{ - S_{c}-\int d^{3}x\, \frac{1}{2}\lambda_{\mu}^{2} +
i\int d^{3}x\, [A_\mu (\epsilon_{\mu \nu \rho}\partial_\nu \lambda_\rho-J_\mu^c) + \lambda_{\mu} d_\mu]}.
\label{ZQ-exa}
\end{equation}
where we used,
\begin{eqnarray}
\int d^{3}x\, (\lambda_\mu h_\mu - J_{\mu} C_{\mu}+\beta \partial_\mu C_\mu)=\int d^{3}x\, \lambda_{\mu} (h_\mu -\epsilon_{\mu \nu \rho} \partial_{\nu} C_{\rho}).
\label{17}
\end{eqnarray}
\subsection{$SU(2)$ Yang-Mills in four dimensions}
The Yang-Mills action on the monopole background is,
\begin{eqnarray}
S_{YM}&=&\int d^4x\, \left[ \frac{1}{4} (f_{\mu \nu} +h_{\mu \nu}+k_{\mu \nu})^2 + \frac{1}{2} \bar{g}^{\mu \nu} g^{\mu \nu}\right],
\label{SM}
\end{eqnarray}
where,
\begin{equation}
g^{\mu \nu}=\frac{1}{2} \epsilon_{\mu \nu \rho \sigma}G_{\rho \sigma}=\epsilon_{\mu \nu \rho \sigma}[\partial_\rho+ig(A_\rho+C^{(n)}_\rho)]\Phi_\sigma,~~{\rm etc.}
\end{equation}
Introducing real and complex auxiliary fields, $\lambda_{\mu \nu}$ and $\Lambda_{\mu \nu}$, we obtain,
\begin{eqnarray}
&&S_{YM}=\nonumber \\
&&\phantom{S_{YM}}=S_c+\int d^4x\, \left[\frac{1}{4}\lambda_{\mu \nu} \lambda_{\mu \nu} -\frac{i}{2}
\lambda_{\mu \nu}(f_{\mu \nu}+h_{\mu \nu}+k_{\mu \nu})+i J^\mu (A_\mu+C^{(n)}_\mu) \right],\nonumber \\
&&S_c=\int d^4x\, \left[\frac{1}{2}\bar{\Lambda}^{\mu \nu} \Lambda^{\mu \nu}-\frac{i}{2} (\bar{\Lambda}^{\mu \nu}
\epsilon^{\mu \nu \rho \sigma}\partial_\rho \Phi_\sigma + {\Lambda}^{\mu \nu}
\epsilon^{\mu \nu \rho \sigma}\partial_\rho \bar{\Phi}_\sigma)\right],
\end{eqnarray}
that is, the Yang-Mills action with $(A_\mu+C^{(n)}_\mu)$ minimally coupled to the current for charged fields,
\begin{equation}
J^\mu = -\frac{i}{2}\, g \epsilon^{\mu \nu \rho \sigma} \bar{\Lambda}_{\nu \rho}\Phi_\sigma + \frac{i}{2}\, g \epsilon^{\mu \nu \rho \sigma} {\Lambda}_{\nu \rho}\bar{\Phi}_\sigma.
\label{Jlambda}
\end{equation}
\subsection{Gauge fixing}
As gauge fixing, we will adopt the Maximal Abelian gauge (see \cite{verS1} and references therein). For its extension in the context of the Cho-Faddeev-Niemi decomposition, see \cite{kondo6}. Then, for the charged modes we consider,
\begin{equation}
\hat{D}_\mu \vec{X}^{(n)}_\mu =0
\makebox[.5in]{,}
\hat{D}_\mu \vec{X}^{(n)}_\nu=\partial_\mu \vec{X}^{(n)}_\nu+g \hat{A}_\mu\times \vec{X}^{(n)}_\nu ,
\end{equation}
\begin{equation}
\hat{A}_\mu=A_\mu \hat{n}-\frac{1}{g} \hat{n}\times \partial_\mu \hat{n},
\label{Arest}
\end{equation}
while for the diagonal fields, we have,
\begin{equation}
\partial_\mu (A_\mu+C^{(n)}_\mu)=0.
\label{lorentz}
\end{equation}
These conditions can be imposed by means of lagrange multipliers $\vec{b}=b_1 \hat{n}_1+b_2 \hat{n}_2$ and $\beta$, respectively.
The condition for the charged modes can be rewritten as,
\begin{equation}
{\cal D}_\mu \Phi_\mu =0
\makebox[.5in]{,}
\bar{{\cal D}}_\mu\bar{\Phi}_\mu =0
\makebox[.5in]{,}
{\cal D}_\mu=[\partial_\mu +ig(A_\mu+C^{(n)}_\mu)],
\label{MAG}
\end{equation}
so that eqs. (\ref{lorentz}) and (\ref{MAG}) can be implemented by including a factor,
\begin{equation}
e^{i\int_M d^4x\, \left[\beta \partial_\mu (A_\mu+C^{(n)}_\mu)+\bar{b}\, {\cal D}_\mu \Phi_\mu +
b\, \bar{{\cal D}}_\mu \bar{\Phi}_\mu \right]}
\makebox[.5in]{,}
b=\frac{1}{\sqrt{2}}(b_1 +i b_2).
\end{equation}
We will also have a Faddeev-Popov determinant, exponentiated by means of the associated ghost fields $\vec{c}=c_1 \hat{n}_1+c_2 \hat{n}_2$. The action for the ghosts contains a term quadratic in $\hat{D}_\mu$,
which can be linearized by considering additional auxiliary fields $\vec{a}^\mu=a^\mu_1 \hat{n}_1+a^\mu_2 \hat{n}_2$. Here, we can also define charged fields,
\begin{equation}
c=\frac{1}{\sqrt{2}}(c_1 +i c_2)
\makebox[.5in]{,}
a_\mu=\frac{1}{\sqrt{2}}(a_\mu^1 +i a_\mu^2),
\end{equation}
and introduce a factor whose exponent contains ${\cal D}_\mu$ derivatives linearly (see ref. \cite{kondo6,LEO}).
The final form for the integration measure fixing the above mentioned gauge conditions depends on the combination $A_\mu+C^{(n)}_\mu$, and can be written as,
\begin{equation}
F_{gf}=\tilde{F}_{gf}\, e^{-i\int d^4x\, (A_\mu+C^{(n)}_\mu)K_\mu} ,
\label{int-measure}
\end{equation}
\begin{equation}
K_\mu=\partial_\mu \beta + \tilde{K}_\mu ,
\end{equation}
where $\tilde{F}_{gf}$ collects all the other factors, independent of $A_\mu +C_\mu$, and the integration measure for ghosts and auxiliary fields. The part of the current $\tilde{K}_\mu$ depends on the charged fields, $a_\mu$, $b$, $c$ and $\Phi_\mu$, and is invariant under U(1) phase transformations of these fields.
In general, for a given gauge field $A^a_\mu$, $a=1,2,3$, many different local frames $\hat{n}_a$ can be introduced to decompose it.
In refs. \cite{kondo3,kondo7}, Cho variables have been incorporated by including, in the partition function for Yang-Mills theory, an identity written as an integral over local color directions $\hat{n}$, satisfying $\hat{n}.\hat{n}=1$, and then showing that the Jacobian of the transformation,
\[
\vec{A}_\mu, \hat{n} \rightarrow A_\mu, \Phi_\mu, \bar{\Phi}_\mu , \hat{n} ,
\]
is trivial.
Then, according to the previous discussions, gauge fields with monopole defects are taken into account by considering local color frames where $\hat{n}$ contains defects concentrated on loops. Necessarily, $\hat{n}_a$, $a=1,2$, will be singular on the associated Dirac worldsheets.
Therefore, the Yang-Mills partition function can be represented as (see ref. \cite{LEO}),
\begin{eqnarray}
Z_{YM}
& =& \int [{\cal D}A][{\cal D}\Phi][{\cal D}\bar{\Phi}][{\cal D}\hat{n}] F_{gf}\, e^{-S_{YM}} \nonumber \\
&= &\int [{\cal D} \Psi] \tilde{F}_{gf}\, e^{-S_c-\int d^4x\, \frac{1}{4}\lambda_{\mu \nu} \lambda_{\mu \nu}+i\int d^4x\, [A_\mu (\frac{1}{2}\epsilon_{\mu \nu \rho \sigma} \partial_\nu \lambda_{\rho \sigma}-
J^c_\mu ) +\frac{1}{2}\lambda_{\mu \nu}(d_{\mu \nu}+k_{\mu \nu})]},\nonumber \\
\label{ZYMb}
\end{eqnarray}
\begin{equation}
J^c_\mu=J^\mu +K^\mu,
\end{equation}
where $[{\cal D} \Psi]$, besides the integration measure for $A_\mu$, $\Phi_\mu$, $\bar{\Phi}_\mu$ and $\hat{n}$, also integrates over the auxiliary fields $\lambda_{\mu \nu}$ and $\Lambda_{\mu \nu}$. Again, because of the path integration over the diagonal field $A_\mu$, we obtain the implicit constraint,
\begin{equation}
J^c_\mu=\frac{1}{2}\epsilon_{\mu \nu \rho \sigma} \partial_\nu \lambda_{\rho \sigma}.
\label{Jconst4}
\end{equation}
\section{Treatment of Dirac strings and worldsheets}
\label{d}
As is well known, in the formalism of first quantization it is simple to express a physical quantity, such as the probability density for the propagation of a particle, in such a way that the Dirac string is no longer apparent. This comes about because in that case the relevant electric current is concentrated on a closed path, formed by composing the integration path with a given reference path joining the fixed initial and final particle positions; this gives a relative phase that only depends on the pierced magnetic flux, as long as the Dirac quantization condition is imposed.
On the other hand, in a field theory problem, the possibility of representing physical quantities in a way that does not refer to a Dirac string or worldsheet is nontrivial, since the charge current is distributed on the whole Euclidean spacetime.
In order to obtain a similar result for quantum field theories with a charged sector, we will proceed in three steps. First, we introduce the Hodge decomposition for $\lambda_\mu$ and $\lambda_{\mu \nu}$ so as to isolate the string or worldsheet dependent terms from gauge invariant objects such as their borders, where the monopoles are located. Next, we verify that Dirac strings and worldsheets can be changed by means of an appropriate change of variables associated with a gauge transformation. Finally, we show that it is always possible to change to an appropriate set of Dirac strings or worldsheets such that the partition function only depends on the monopole positions.
\subsection{The Hodge decomposition}
In order to isolate the unphysical terms in a physical quantity such as the partition function, we first note that the Dirac string and worldsheet dependence is contained in (cf. eqs. (\ref{ZQ-exa}) and (\ref{ZYMb})),
\begin{equation}
\int d^{3}x\, \lambda_{\mu} d_\mu
\makebox[.5in]{,}
\int d^4x\, \frac{1}{2}\lambda_{\mu \nu} d_{\mu \nu},
\end{equation}
for compact $QED(3)$ and $SU(2)$ Yang-Mills, respectively.
In the first case, it will be convenient to consider the following decomposition,
\begin{equation}
\lambda_{\mu}=\partial_\mu \phi+ B_{\mu},
\end{equation}
with,
\begin{equation}
\partial_\mu B_{\mu}=0 ,
\label{g-fixing}
\end{equation}
and because of eq. (\ref{ampere}), we also have the implicit constraint,
\begin{equation}
\epsilon_{\mu \nu \rho} \partial_\nu B_\rho =J_\mu^c .
\label{curr3}
\end{equation}
Therefore, we have,
\begin{equation}
\int d^{3}x\, \lambda_{\mu} d_\mu = g_m [\phi (x^-)-\phi(x^+)] + \int d^{3}x\, B_{\mu} d_\mu ,
\label{3dim}
\end{equation}
\begin{equation}
\int d^{3}x\, B_{\mu} d_\mu = g_m\int_{[x_{s}]} dx_{\mu}\, B_{\mu}.
\label{18}
\end{equation}
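As a quick numerical illustration of the statement behind eq.~(\ref{3dim}), that the gradient part of $\lambda_\mu$ couples to the Dirac string only through its endpoints, the following Python sketch integrates $\partial_\mu \phi$ along two different discretized strings sharing the same endpoints; the scalar $\phi$ and the string shapes are arbitrary illustrative choices, not taken from the main derivation.
\begin{verbatim}
import numpy as np

def phi(p):                                    # arbitrary smooth scalar field
    x, y, z = p
    return np.sin(x) * y + 0.3 * z**2

def grad_phi(p, h=1e-6):                       # central finite differences
    e = np.eye(3)
    return np.array([(phi(p + h * e[i]) - phi(p - h * e[i])) / (2 * h)
                     for i in range(3)])

def line_integral(path):                       # midpoint rule along the string
    total = 0.0
    for a, b in zip(path[:-1], path[1:]):
        total += grad_phi(0.5 * (a + b)) @ (b - a)
    return total

x_minus = np.array([0.0, 0.0, 0.0])            # string endpoints (monopole pair)
x_plus  = np.array([1.0, 2.0, 0.5])
s = np.linspace(0.0, 1.0, 2000)[:, None]
straight = x_minus + s * (x_plus - x_minus)
wiggly = straight + np.column_stack([np.sin(4 * np.pi * s[:, 0]),
                                     np.zeros(len(s)),
                                     np.sin(2 * np.pi * s[:, 0])])

# both integrals agree with phi(x_plus) - phi(x_minus), up to discretization error
print(line_integral(straight), line_integral(wiggly), phi(x_plus) - phi(x_minus))
\end{verbatim}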
We should also change the measure appropriately in (\ref{ZQ-exa}),
\begin{equation}
[{\cal D}\lambda]\to [{\cal D}B][{\cal D}\phi] F^{B}_{gf},
\label{delambda}
\end{equation}
where $F^{B}_{gf}$ is the part of the measure fixing the condition $\partial_\mu B_{\mu}=0$,
\begin{equation}
F^{B}_{gf}=[{\cal D}\xi] e^{i\int d^{3}x\, \xi \partial_\mu B_{\mu}}.
\label{xi-fixing}
\end{equation}
Similarly, for $SU(2)$ Yang-Mills in four dimensions we decompose the auxiliary field $\lambda_{\mu \nu}$ in the following way,
\begin{equation}
\lambda_{\mu \nu}=\partial_\mu \phi_\nu-\partial_\nu \phi_\mu +B_{\mu \nu},
\end{equation}
\begin{equation}
\partial_\mu \phi_\mu=0 \makebox[.5in]{,} \partial_\nu B_{\mu \nu}=0,
\label{g-fixing2}
\end{equation}
with the implicit constraint,
\begin{equation}
\frac{1}{2}\epsilon_{\mu \nu \rho \sigma} \partial_\nu B_{\rho \sigma}=J^c_\mu.
\label{curr4}
\end{equation}
That is,
\begin{equation}
\int d^4x\, \frac{1}{2}\lambda_{\mu \nu} d_{\mu \nu}=\frac{4\pi}{g} \left( \oint_{{\cal C}^+} dy_\mu\, \phi_\mu- \oint_{{\cal C}^-} dy_\mu\, \phi_\mu \right) +\int d^4x\, \frac{1}{2} B_{\mu \nu} d_{\mu \nu},
\label{dirac.term}
\end{equation}
\begin{equation}
\int d^4x\, \frac{1}{2} B_{\mu \nu} d_{\mu \nu} = \frac{4\pi}{g} \int_{[x_w]} d^2 \sigma_{\mu \nu}\, B_{\mu \nu} .
\label{184}
\end{equation}
The first term in eq. (\ref{dirac.term}) depends on the (gauge invariant) monopole locations, while the Dirac string and worldsheet have been isolated in the second term.
\section{Getting rid of Dirac strings and worldsheets}
\label{in}
\subsection{Compact $QED(3)$}
In compact $QED(3)$, let us consider the change of variables,
\begin{equation}
\Phi' = e^{iq\chi}\,\Phi
\makebox[.5in]{,}
A'_\mu = A_\mu + \chi_\mu,
\label{change-abe}
\end{equation}
which has a trivial Jacobian. The phase $\chi$ is multivalued; when we go along any loop encircling a
closed Dirac string $\partial \Sigma$, given by the border of a surface $\Sigma$, it changes by an amount $\Delta \chi$.
In order for $e^{i q\,\chi}$ to be single-valued, we must have,
\begin{equation}
q\Delta \chi =2n \pi.
\label{Dcon}
\end{equation}
Under this condition, $e^{i q\,\chi}$ is continuous on any $\Sigma$, so that we obtain,
\begin{equation}
\partial_\mu e^{i q\,\chi}=i q\, e^{i q\,\chi}\, \chi_\mu ,
\label{derivada}
\end{equation}
where $\chi_\mu$ is locally given by $\partial_\mu \chi$, containing no $\delta$-distribution localized on $\Sigma$.
Now, under the change in eq. (\ref{change-abe}), the transformed action is,
\begin{equation}
S' = \int d^{3}x \Biggl(\bar{D}_{\mu}\bar{\Phi}D_{\mu}\Phi +
\frac{1}{2}(f_{\mu} + h'_{\mu})^{2} \Biggr),
\label{action'}
\end{equation}
\begin{equation}
h'_{\mu}=h_{\mu}+\epsilon_{\mu\nu\rho}\partial_{\nu} \chi_{\rho}.
\label{hh}
\end{equation}
As $\partial_\mu h'_{\mu}=\partial_\mu h_{\mu}$, no new monopoles are introduced in this process. The second term in eq. (\ref{hh}) only represents a flux concentrated on the closed Dirac string $\partial \Sigma$,
\begin{equation}
\pm g_m=\int dS_{\mu}\, \epsilon_{\mu\nu\rho}\partial_{\nu}\chi_{\rho} =\oint_l dx_{\mu}\, \chi_{\mu} =\Delta \chi,
\label{multi}
\end{equation}
where the first integral is done over a surface which is crossed by $\partial \Sigma$, so that its border is a loop $l$ encircling $\partial \Sigma$. In particular, this transformation can be used to change the string attached to monopoles from $d_{\mu}$ to $d'_{\mu}$, by choosing,
\begin{equation}
\epsilon_{\mu\nu\rho}\partial_{\nu} \chi_{\rho} = d'_{\mu}-d_{\mu}.
\label{8}
\end{equation}
Of course, considering eq. (\ref{Dcon}) and the multivaluedness of $\chi$ in eq. (\ref{multi}), Dirac's quantization condition (\ref{dirac_condition}) is obtained.
At the quantum level, in the representation of $Z$ (cf. eq. (\ref{ZQ-exa})), we have also introduced a charged field $\Lambda_\mu$. Performing the change of variables given in eq. (\ref{change-abe}), together with,
\begin{equation}
\Lambda'_\mu = e^{iq\chi}\,\Lambda_\mu ,
\end{equation}
we obtain,
\begin{equation}
Z = \int [{\cal D} \Psi][{\cal D}\beta] \, e^{ - S_{c}+i\int d^{3}x\, \chi_\mu \, J_\mu
-\int d^{3}x\, \frac{1}{2}\lambda_{\mu}^{2} +
i\int d^{3}x\, [(A_\mu +\chi_\mu )(\epsilon_{\mu \nu \rho}\partial_\nu \lambda_\rho-J_\mu^c) + \lambda_{\mu} d_\mu]}. \label{Zqed}
\end{equation}
Now, according to eq. (\ref{ampere}), the only difference between $J_\mu$ and $J_\mu^c$ is $\partial_\mu \beta$. Therefore, we can replace $J_\mu \rightarrow J_\mu^c$ in the exponent of eq. (\ref{Zqed}), since the difference is
\begin{equation}
\int d^3 x\, \chi_\mu \partial_\mu \beta = -\int d^3 x\, \beta\, \partial_\mu \chi_\mu,
\end{equation}
which can be nullified by means of a multivalued phase such that,
\begin{equation}
\partial_\mu \chi_\mu =\partial_\mu \partial_\mu \chi = 0.
\end{equation}
The possibility of such a choice will be discussed in the next subsection.
That is, we obtain,
\begin{equation}
Z = \int [{\cal D} \Psi][{\cal D}\beta] \, e^{ - S_{c}
-\int d^{3}x\, \frac{1}{2}\lambda_{\mu}^{2} +
i\int d^{3}x\, [(A_\mu +\chi_\mu)\epsilon_{\mu \nu \rho}\partial_\nu \lambda_\rho + \lambda_{\mu} d_\mu]}.
\end{equation}
Finally, integrating by parts the term containing $\chi_\mu \epsilon_{\mu \nu \rho}\partial_\nu \lambda_\rho$, and recalling eq. (\ref{8}), we obtain the partition function in (\ref{ZQ-exa})
with $d_\mu$ replaced by $d'_\mu$, thus showing that $Z$ is independent of the Dirac string choice.
\subsection{Yang-Mills}
\label{YM-sub}
In $SU(2)$ Yang-Mills, let us consider a gauge transformation of the gauge field $\vec{A}_\mu$ given in eq. (\ref{dec}), decomposed in terms of a general frame $\hat{n}_a$,
\begin{equation}
\vec{A}^S_\mu.\vec{T}= S\vec{A}_\mu.\vec{T}S^{-1}+\frac{i}{g}S\partial_\mu S^{-1},
\end{equation}
which has a trivial Jacobian.
As we have seen in ref. \cite{LEO}, in terms of the Cho-Faddeev-Niemi variables, the gauge transformed field is,
\begin{equation}
\vec{A}^S_\mu=A'_\mu \hat{n}'-\frac{1}{g} \hat{n}'\times \partial_\mu \hat{n}' + X^1_\mu\, \hat{n}'_1+ X^2_\mu\, \hat{n}'_2,
\end{equation}
\begin{equation}
A'_\mu=A_\mu+ C^{(n)}_\mu - C^{(n')}_\mu
\makebox[.5in]{,}
\hat{n}'_a = R(S)\, \hat{n}_a,
\label{nonabe-abe}
\end{equation}
where $C^{(n')}_\mu$ is computed with the transformed basis.
In particular, we can consider a singular gauge transformation $S$ along the direction $\hat{n}$, living in the trivial topological sector of $SU(2)$, representing a frame rotation with phase $\chi$. This phase is multivalued when we go along a loop $l$ linking the closed Dirac worldsheet $\partial \Sigma$ to be introduced, given as the border of a three-volume $\Sigma$. In this case, as $\hat{n}'=\hat{n}$, we still have vanishing $\vec{L}'_{\mu \nu}$ (cf. eq. (\ref{FHK})).
For gauge transformations representing a rotation along the $\hat{n}$-axis, that rotates the basis elements $\hat{n}_1$, $\hat{n}_2$ by an angle $\chi$, $C^{(n)}_\mu - C^{(n')}_\mu$ turns out to be $\chi_\mu $, with $\chi_\mu$ locally given by $\partial_\mu \chi$. Similarly to what happens in the compact $QED(3)$ case, $\chi_\mu$ cannot contain singularities (a $\delta$-distribution) on
any three-volume $\Sigma$. This comes about as $C^{(n)}_\mu$ and $C^{(n')}_\mu$ depend on derivatives of the local color frame (cf. eq. (\ref{Cmu})), which is single valued along any loop $l$.
The second transformation in eq. (\ref{nonabe-abe}) can be equivalently translated to a phase change $\chi$ of the charged sector, which in the partition function representation of eq. (\ref{ZYMb}) includes not only the fields $\Phi_\mu$, $\bar{\Phi}_\mu$ but also the fields $\Lambda_{\mu \nu}$, $\bar{\Lambda}_{\mu \nu}$, the charged ghosts, charged Lagrange multipliers and charged auxiliary fields in the gauge fixing measure given in eq. (\ref{int-measure}).
Then, the effect of this transformation on the Yang-Mills action is (see ref. \cite{LEO}),
\begin{eqnarray}
S'_{YM}&=& \int d^4x\, [\frac{1}{4}(f_{\mu \nu}+h'_{\mu \nu}+k_{\mu \nu})^2 + \frac{1}{2} \bar{g}^{\mu \nu} g^{\mu \nu}],\nonumber \\
\label{YM-linha}
\end{eqnarray}
\begin{equation}
h'_{\mu \nu}=h_{\mu \nu}+\epsilon_{\mu \nu \rho \sigma} \partial_\rho \chi_\sigma ,
\end{equation}
where the second term is localized on $\partial \Sigma$. In particular, to change the Dirac worldsheet attached to monopoles, we should consider,
\begin{equation}
\epsilon_{\mu \nu \rho \sigma} \partial_\rho \chi_\sigma = d'_{\mu \nu} -d_{\mu \nu},
\end{equation}
representing a trivial flux $4\pi/g$, concentrated on the composition of the initial and final worldsheets.
Then, after performing the change of variables (\ref{nonabe-abe}), we get,
\begin{eqnarray}
Z_{YM}
& =& \int [{\cal D}A][{\cal D}\Phi][{\cal D}\bar{\Phi}][{\cal D}\hat{n}] F_{gf}\, e^{-S_{YM}} \nonumber \\
&= &\int [{\cal D} \Psi] \tilde{F}_{gf}\, e^{-S_c+i\int d^{3}x\, \chi_\mu \, (J_\mu+\tilde{K}_\mu)}\times \nonumber \\
&& \times e^{-\int d^4x\, \frac{1}{4}\lambda_{\mu \nu} \lambda_{\mu \nu}+i\int d^4x\, [ (A_\mu +\chi_\mu ) (\frac{1}{2}\epsilon_{\mu \nu \rho \sigma} \partial_\nu \lambda_{\rho \sigma}-
J^c_\mu ) +\frac{1}{2}\lambda_{\mu \nu}(d_{\mu \nu}+k_{\mu \nu})]}.\nonumber \\
\label{YMchi}
\end{eqnarray}
Again, with the choice $\partial_\mu \chi_\mu =\partial_\mu \partial_\mu \chi = 0$,
we can replace $J_\mu +\tilde{K}_\mu \rightarrow J_\mu + \tilde{K}_\mu +\partial_\mu \beta =J_\mu + K_\mu=J_\mu^c$, and similarly to the $QED(3)$ case, we obtain the partition function in (\ref{ZYMb}) where $d_{\mu \nu}$ is replaced by $d'_{\mu \nu}$, thus showing the independence of $Z_{YM}$ with respect to the change of Dirac worldsheet joining the instanton/anti-instanton defects.\\
In order to have an explicit form for $\chi_\mu$, we note that it can be associated with a pair of closed center vortices placed at $\partial \Sigma$. As $\chi_\mu$ does not contain any $\delta$-distribution localized on $\Sigma$, we can use the results given in refs. \cite{engelhardt1,reinhardt} for closed thin center vortices, taking into account the appropriate factors,
\begin{equation}
\chi_{\mu } = -g_m
\int_{\Sigma } d^{D-1} \tilde{\sigma }_{\nu }
(\delta_{\mu \nu } \partial^{2} - \partial_{\mu } \partial_{\nu } )
D(x-\bar{x} (\sigma )),
\label{dechi}
\end{equation}
where the (minimum) magnetic charge $g_m$ is given by $2\pi/e$ or $4\pi/g$, in the abelian or nonabelian case, respectively, and $\bar{x} (\sigma )$ is a parametrization of $\Sigma$, a surface or a three-volume in $D=3,4$, respectively. The integration measure is,
\begin{equation}
d^{D-1} \tilde{\sigma }_{\mu}=\frac{1}{(D-1)!} \epsilon_{\mu \alpha_1 ... \alpha_{D-1}} d^{D-1} \sigma_{\alpha_1...\alpha_{D-1}},
\end{equation}
\begin{equation}
d^{D-1} \sigma_{\alpha_1...\alpha_{D-1}}=\epsilon_{k_1 ... k_{D-1}}\frac{\partial \bar{x}_{\alpha_1}}{\partial \sigma_{k_1}}... \frac{\partial \bar{x}_{\alpha_{D-1}}}{\partial \sigma_{k_{D-1}}}\, d\sigma_1 ... d\sigma_{D-1},
\end{equation}
and $D(x)$ is the Green function for the Laplacian operator.
As shown in refs. \cite{engelhardt1,reinhardt}, using Stokes' theorem, $\chi_{\mu } $ can be written only in terms of
$\partial \Sigma$ which corresponds to the manifold where the closed Dirac defects are placed ($\partial \partial \Sigma =0$),
\begin{equation}
\chi_{\mu } = \frac{4\pi }{g}
\int_{\partial \Sigma } d^{D-2} \tilde{\sigma }_{\mu \kappa }
\partial_{\kappa }^{x} D(x-\bar{x} (\sigma ) ).
\end{equation}
For instance, in three dimensions, if a Dirac string along the $z$-axis is considered, we obtain $\chi_0=0$, $\chi_{i}=-(2/g)\,\epsilon_{ij}\partial_j \ln \rho$, which contains no singularity on any plane whose border is the $z$-axis, and can be locally written as, $\chi_{\mu }=(2/g)\, \partial_\mu \varphi $, where $\varphi$ is the multivalued polar angle, in accordance with our previous discussion.
Note also that in general, because of the index structure in eq. (\ref{dechi}), we have $\partial_\mu \chi_\mu =0$.
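The explicit three-dimensional example above can be checked symbolically. The short Python/SymPy sketch below (an independent sanity check, not part of the derivation) verifies that $\chi_i=-(2/g)\,\epsilon_{ij}\partial_j \ln \rho$ coincides locally with $(2/g)\,\partial_i\varphi$, that $\partial_\mu \chi_\mu=0$, and that the circulation of $\chi_\mu$ around the string equals the minimal flux $4\pi/g$.
\begin{verbatim}
import sympy as sp

x, y, g, t = sp.symbols('x y g t', real=True)
ln_rho = sp.log(sp.sqrt(x**2 + y**2))

# chi_i = -(2/g) eps_{ij} d_j ln(rho), with eps_{12} = 1 and i, j = 1, 2
chi1 = -(2 / g) * sp.diff(ln_rho, y)
chi2 = -(2 / g) * (-sp.diff(ln_rho, x))

varphi = sp.atan2(y, x)                          # one branch of the polar angle
print(sp.simplify(chi1 - (2 / g) * sp.diff(varphi, x)))   # -> 0
print(sp.simplify(chi2 - (2 / g) * sp.diff(varphi, y)))   # -> 0
print(sp.simplify(sp.diff(chi1, x) + sp.diff(chi2, y)))   # -> 0 (divergence free)

# circulation of chi around the string (unit circle in the transverse plane)
on_circle = {x: sp.cos(t), y: sp.sin(t)}
integrand = chi1.subs(on_circle) * (-sp.sin(t)) + chi2.subs(on_circle) * sp.cos(t)
print(sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi)))  # -> 4*pi/g
\end{verbatim}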
Finally, it is interesting to discuss the change of variables we have performed here, after the implementation of the MAG gauge fixing condition, in the light of Gribov's ideas. In this respect, we would like to underline that there is an important research program based on the implementation of a properly defined path integral, so as to avoid the so-called Gribov copies (see refs. \cite{G,UERJ} and references therein). The path integral restriction is usually done by adding a Gribov-Zwanziger term to the Yang-Mills action. In fact, this procedure only erases copies connected to each other by infinitesimal gauge transformations, so that even after it is applied, there is still room for large copies living in the trivial topological sector of the theory \cite{H,UERJ}. These are precisely associated with the change of variables we have performed here, which is along a gauge transformation that lives in the trivial topological sector, as it includes a frame defect such that $\hat{n}_1$, $\hat{n}_2$ rotate twice when we go around the closed Dirac worldsheet.
Moreover, as shown in ref. \cite{UERJ}, in the case of the MAG, the Gribov region in SU(2) Euclidean Yang-Mills theories can be seen as a cylinder, bounded in all off-diagonal directions, and unbounded along the diagonal one.
Therefore, our procedure would also work after the implementation of the Gribov restriction, as it only involves
operations on the diagonal direction; namely, the use of the implicit constraint (\ref{Jconst4}), derived from the path integration over the diagonal field $A_\mu$, and diagonal gauge transformations with multivalued phase $\chi$.
In other words, the developments in the following section can be seen as a natural way to fix the freedom associated with large copies when Gribov's scenario is applied to the MAG.
\subsection{Decoupling Dirac strings and worldsheets from the charged sector}
Now, it is desirable to express a physical quantity such as the partition function only in terms of observable properties of the monopoles. In this regard, we will show that the line integral in eq. (\ref{18}) can always be nullified for a given choice of Dirac strings, that is, by considering an appropriate change of variables.
As shown in the previous section, when compact $QED(3)$ and $SU(2)$ Yang-Mills theory are considered in the Lorentz and Maximal Abelian gauge, respectively, and a change of variables associated with a multivalued phase satisfying $\partial_\mu \chi_\mu =\partial_\mu \partial_\mu \chi =0$ is performed, the only change in the integrand of the partition function is the substitution,
\begin{equation}
\int d^D x\, \lambda d \to \int d^D x\, \lambda d' = \int d^D x\, \lambda ( d + \epsilon \partial \chi) ,
\label{Dxl}
\end{equation}
where we have simplified the notation by defining,
\begin{equation}
\lambda (d +\epsilon \partial \chi) =\left\{ \begin{array}{ll}
\lambda_\mu (d_\mu + \epsilon_{\mu \nu \rho}\partial_\nu \chi_\rho ) & {\rm or,} \\
\lambda_{\mu \nu} (d_{\mu \nu}+ \epsilon_{\mu \nu \rho \sigma} \partial_\rho \chi_\sigma ) ,&
\end{array}\right.
\end{equation}
in $D=3,4$ dimensions, respectively. On the other hand, in \S \ref{d}, we have introduced a Hodge decomposition of $\lambda$ in terms of the fields $\phi$, $B_\mu$ or $\phi_{\mu}$, $B_{\mu \nu}$ in three and four dimensions, respectively. As $\epsilon \partial \chi$ introduces a closed Dirac string or worldsheet, the borders in $d'$ are the same as in $d$. That is, the
terms involving $\phi$ and $\phi_\mu$ in eqs. (\ref{3dim}) and (\ref{dirac.term}) do not change after the above-mentioned substitution (they are couplings with the gauge invariant monopole locations). Therefore, the only change in those equations is in the couplings of the Dirac defects with the charged sector,
\begin{equation}
\int d^D x\, B d \to \int d^D x\, B ( d + \epsilon \partial \chi),
\label{Dxb}
\end{equation}
(recall that $\epsilon \partial B$ represents the charged currents, cf. eqs. (\ref{curr3}) and (\ref{curr4})).
As already discussed, in compact $QED(3)$ and $SU(2)$ Yang-Mills theory, because of the single-valuedness of $e^{i q\,\chi}$ and the local color frame, in the change of variables for $A_\mu$, the function $\chi_\mu$ cannot contain singularities on the surface or three-volume $\Sigma$ whose border gives the Dirac string or worldsheet. This means that $\chi_\mu$ can be globally written as,
\begin{equation}
\chi_\mu=\partial_\mu \Theta + R_\mu,
\end{equation}
where $\Theta$ coincides with a given branch of $\chi$ on the Euclidean spacetime minus $\Sigma$, and $R_\mu$ is localized on $\Sigma$. When crossing $\Sigma$, $\Theta$ contains a discontinuity, defining a single-valued function, which jumps back to its initial value when we go around any loop linking the Dirac defect $\partial \Sigma$. Therefore, the calculation of $\partial_\mu \Theta$ contains a $\delta$-distribution on $\Sigma$, and $R_\mu$ must be designed to compensate it, giving a nonsingular $\chi_\mu$.
In this regard, it is useful to consider the formula obtained in refs. \cite{engelhardt1,reinhardt} to separate the so called thin and ideal center vortices, namely,
\begin{equation}
-\int_{\Sigma } d^{D-1} \tilde{\sigma }_{\mu }\,\delta^{(D)}(x-\bar{x} (\sigma ) )
-\int_{\Sigma } d^{D-1} \tilde{\sigma }_{\nu }\,(\delta_{\mu \nu } \partial^{2} - \partial_{\mu } \partial_{\nu } )
D(x-\bar{x} (\sigma ) ) = \partial_\mu \Omega,
\end{equation}
where $\Omega$ is the solid angle (normalized to $1$) subtended by $\Sigma$ when viewed from $x$. This solid angle is
single valued when we go along any loop linking $\partial\Sigma$. In other words, using eq. (\ref{dechi}), we obtain,
\begin{equation}
\Theta = g_m \Omega
\makebox[.5in]{,}
R_\mu=g_m \int_{\Sigma } d^{D-1} \tilde{\sigma }_{\mu }\,\delta^{(D)}(x-\bar{x} (\sigma )).
\end{equation}
As $\Theta$ is single valued, we have $\epsilon_{\mu \nu \rho} \partial_\nu \partial_\rho \Theta =0$, $\epsilon_{\mu \nu \rho \sigma} \partial_\rho \partial_\sigma \Theta =0$, that is,
\begin{equation}
\int d^Dx\, [\epsilon \partial \chi] B = \int d^Dx\, [\epsilon \partial R] B,
\end{equation}
for any well behaved $B_\mu(x)$. For example, in $D=3$,
\begin{eqnarray}
\int d^3x\, [\epsilon_{\mu \nu \rho}\partial_{\nu}\chi_\rho ] B_\mu &=& g_m \int d^3x\, \int_{\Sigma } d^{2} \tilde{\sigma }_{\rho}\, \epsilon_{\mu \nu \rho}\partial_{\nu}[\delta^{(3)}(x-\bar{x} (\sigma ))B_\mu(\bar{x} (\sigma ))],\nonumber \\
&=& g_m\int_{\partial\Sigma } dy_{\mu}\, B_\mu(y),
\end{eqnarray}
where we used Stokes' theorem. In this manner, we can explicitly verify that $\chi_\mu$ introduces a closed Dirac string $\partial\Sigma$ (cf. eqs. (\ref{18}) and (\ref{Dxb})),
\begin{equation}
\int_{[x_{s}]} dx_{\mu}\, B_{\mu} \to \int_{[x'_{s}]} dx_{\mu}\, B_{\mu}.
\end{equation}
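The Stokes-theorem step used above can also be checked numerically in $D=3$. In the sketch below (the divergence-free field $B_\mu$ and the disk $\Sigma$ are illustrative choices, not taken from the text), the circulation of $B$ along $\partial\Sigma$ agrees with the flux of its curl through $\Sigma$.
\begin{verbatim}
import numpy as np

def B(x, y):            # B = (-y, x, 0) * exp(-(x^2 + y^2)), divergence free
    w = np.exp(-(x**2 + y**2))
    return -y * w, x * w

# circulation of B along the border of Sigma (unit circle in the z = 0 plane)
t = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
bx, by = B(np.cos(t), np.sin(t))
circulation = np.sum(bx * (-np.sin(t)) + by * np.cos(t)) * (t[1] - t[0])

# flux of curl(B) = (0, 0, 2 (1 - r^2) exp(-r^2)) through the unit disk
r = np.linspace(0.0, 1.0, 4000)
curl_z = 2 * (1 - r**2) * np.exp(-r**2)
flux = np.trapz(curl_z * 2 * np.pi * r, r)

print(circulation, flux, 2 * np.pi / np.e)      # the three values agree
\end{verbatim}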
Following a similar procedure in $D=4$, from eqs. (\ref{184}) and (\ref{Dxb}), the change of variables is equivalent to introduce a closed Dirac worldsheet $\partial \Sigma$,
\begin{equation}
\int_{[x_w]} d^2 \sigma_{\mu \nu}\, B_{\mu \nu} \to \int_{[x'_w]} d^2 \sigma_{\mu \nu}\, B_{\mu \nu}.
\end{equation}
Now, in order to show that it is always possible to decouple the Dirac defects from the charged sector in the integrand of the partition functions, let us first consider a simple situation, in three-dimensional spacetime, where the charged fields are such that the monopole and the anti-monopole happen to be placed on a given field line of $B_{\mu}$. The field $B_{\mu}$, which satisfies eqs. (\ref{g-fixing}) and (\ref{curr3}), can be seen as a ``magnetic'' field generated by the charge current $J^c_{\mu}$, so that the
associated field lines must be closed and oriented. Now, as the
monopole and the anti-monopole are at the endpoints of the Dirac
strings, we can consider two strings, $[x_s]$ and
$[x'_s]$, contained on the field line, with tangent
vectors oriented parallel or anti-parallel to $B_{\mu}$,
respectively. That is, when we change from $d_\mu$ to $d'_\mu$ we have,
\begin{equation}
P=\int_{[x_s]} dx_{\mu}\, B_{\mu} > 0
\makebox[.5in]{,} N=\int_{[x'_s]} dx_{\mu}\, B_{\mu} <
0 .
\label{20}
\end{equation}
Then, if the system is defined on ${\cal R}^3$, we can deform
continuously $[x'_s]$ into $[x_s]$, keeping
the endpoints fixed. In this process, the line integral of
$B_{\mu}$ changes continuously between a negative and a positive
value, so that an intermediate string $[x^0_s]$ must exist for which,
\begin{equation}
\int_{[x^0_s]} dx_{\mu}\, B_{\mu} = 0.
\label{21}
\end{equation}
We will present a general proof. We start by defining,
\begin{equation}
I_{[x]}=\left\{ \begin{array}{ll}
\int_{[x_{s}]} dx_{\mu}\, B_{\mu}& {\rm or,} \\
\int_{[x_w]} d^2 \sigma_{\mu \nu}\, B_{\mu \nu} .&
\end{array}\right.
\end{equation}
Let us consider a Dirac string (worldsheet) $[x]$ joining the anti-monopole and the
monopole, placed at $x^{-}$ and $x^{+}$ (${\cal C}^{-}$ and ${\cal C}^{+}$).
If $I_{[x]}$ is zero, we are done. If not, we
can assume without loss of generality that it gives a positive result. Now, by considering the above mentioned change of variables, we will gain a term,
\begin{eqnarray}
\lefteqn{I_{[\partial \Sigma]} =\int d^D x\, B \epsilon \partial \chi =\int d^D x\, R \epsilon \partial B}\nonumber \\
&& = g_m \int_{\Sigma } d^{D-1} \tilde{\sigma }_{\mu }\,[\epsilon \partial B]_\mu.
\label{Isig}
\end{eqnarray}
If $\epsilon \partial B \equiv 0$ on the whole Euclidean spacetime, this together with the defining property of $B$, ($\partial_\mu B_\mu =0$, $\partial_\nu B_{\mu \nu} =0$) in eqs. (\ref{g-fixing}), (\ref{g-fixing2}), would imply $B$ identically zero, and the term containing the Dirac string would be trivially zero. Therefore, we can suppose that a region of spacetime exists such that $\epsilon \partial B$ is nonzero. In this case, in order to have a nonzero $I_{[\partial \Sigma]}$, it is sufficient to consider $\Sigma$ as a small disk or three-volume placed on that region with $d^{D-1}\tilde{\sigma }_\mu$ oriented along the local direction of $[\epsilon \partial B]_\mu$. Of course, if necessary we can use $-\chi$ instead of $\chi$ so as to render,
\begin{equation}
I_{[\partial \Sigma]} < 0 .
\end{equation}
The important point is that the phase $n \chi$, with $n$ a natural number, also defines a possible singular gauge transformation, as it also leads to a single valued transformation of the charged fields along any closed loop. Therefore, for the associated change of variables, we have,
\begin{equation}
I_{[x']} = I_{[x]} + n I_{[\partial \Sigma]},
\label{25}
\end{equation}
which can be rendered negative for a large enough value of $n$.
Again, $[x']$ can be continuously deformed into
$[x]$, by shrinking $[\partial \Sigma]$ to zero, and in this process
an intermediate string or worldsheet $[x^0]$ must exist such that $I_{[x^0]}=0$ is
verified.\vspace{.1in}
Summarizing, in this section we have shown that it is always possible to make a change of variables with trivial Jacobian, not altering the initial gauge fixing condition, such that the Dirac strings or worldsheets are decoupled from the charged sector of the theory. For instance, in the Cho-Faddeev-Niemi decomposition of $SU(2)$ Yang-Mills theory, this leads to a representation of the partition function where the only effect of Dirac worldsheets is
given by the coupling of the gauge invariant associated borders (monopoles) and the dual field $\phi_\mu$ (see eq. (\ref{dirac.term})),
\begin{equation}
\int d^4x\, \frac{1}{2}\lambda_{\mu \nu} d_{\mu \nu} \to \frac{4\pi}{g} \left( \oint_{{\cal C}^+} dy_\mu\, \phi_\mu- \oint_{{\cal C}^-} dy_\mu\, \phi_\mu \right).
\end{equation}
Of course, this procedure simplifies the study of effective monopole ensembles, as discussed in ref. \cite{LEO}. Once the Dirac worldsheets become decoupled, the ensemble integration over the string-like monopoles can be represented by means of a second quantized complex field $\psi$, coupled to the gauge field $\phi_\mu$ (see refs. \cite{polya1}, \cite{bardakci}-\cite{halpern2}, \cite{antonov} and references therein). In this language, contact interactions between the string-like monopoles generate a quartic term $\lambda (\bar{\psi}\psi)^2$ which stabilizes the system in a phase with spontaneous symmetry breaking, if the correlation between monopoles and the gluon fields generates an effective negative mass term $-m^2 \bar{\psi} \psi$. In the context of the Cho-Faddeev-Niemi decomposition, this effective theory, representing the condensation of monopole degrees of freedom, has been derived in ref. \cite{cho-a} relying on a heuristic treatment of the Dirac worldsheets.
\section{Conclusions}
\label{conc}
Dirac strings and worldsheets are unobservable objects; however, the presence of a charged sector, which in the case of $SU(2)$ Yang-Mills theory is associated with the off-diagonal modes, implies that these unphysical objects appear in the integrand of the partition function representation.
Although Dirac strings and worldsheets can be changed at will, it would be desirable to have a representation of the partition functions where they are eliminated in favor of their gauge invariant borders, where the monopoles are located.
This is particularly relevant when using the Cho-Faddeev-Niemi gauge field decomposition to guide the construction of effective theories associated with ensembles of defects. As Dirac worldsheets and center vortices are described as defects of the components $\hat{n}_1$, $\hat{n}_2$ of the local color frame, it is important to have a careful discussion about how to eliminate Dirac worldsheets, and to understand why this procedure cannot be applied to eliminate center vortices. In this respect, note that in effective models constructed only in terms of $\hat{n}=\hat{n}_3$, while no information about unphysical Dirac worldsheets is introduced, the information about the $\hat{n}_1$, $\hat{n}_2$ vortex sector is lost.
In this work, we have seen that Dirac strings in compact $QED(3)$ and Dirac worldsheets in the Cho-Faddeev-Niemi representation of $SU(2)$ Yang-Mills theory, in the Maximal Abelian gauge, can be handled in a similar manner. In particular, the realization of gauge transformations in terms of the Cho-Faddeev-Niemi variables shows that
the consideration of a multivalued phase $\chi$, $\partial_\mu \partial_\mu \chi =0$, has the only effect of including in the integrand of the gauge-fixed partition function a term containing a closed Dirac defect.
In general, by introducing auxiliary fields $B_\mu$, $B_{\mu \nu}$ (representing the char\-ged current $J_\mu^c$) and $\phi$, $\phi_\mu$, for $D=3,4$ respectively, we have been able to isolate the ($B$-dependent) terms where Dirac strings and worldsheets are coupled from those ($\phi$-dependent) where the associated borders (gauge invariant monopole locations) are coupled.
Then, we have presented the main result of this work, namely, a procedure showing that it is always possible to choose Dirac strings and worldsheets, in such a manner that the $B$-dependent terms vanish. This can be seen as a natural way to fix the remaining freedom, associated with large copies, after the introduction of a Gribov-Zwanziger term to erase copies connected to each other by infinitesimal gauge transformations. Note that, in the MAG, the Gribov region is a cylinder, bounded in all off-diagonal directions, and unbounded along the diagonal one. Therefore, our procedure also works after the implementation of the Gribov restriction, as it only involves operations on the diagonal direction.
This procedure is specially useful as we are generally interested in studying ensembles of monopoles, so that it is important to write the theory in a form only depending on physical properties of the ensembles to be integrated.
In particular, in the Cho-Faddeev-Niemi decomposition of $SU(2)$ Yang-Mills theory, the ensemble integration, assuming a phase where monopoles condense, is easily related to an effective model for $\phi_\mu$ and a complex field $\psi$ displaying spontaneous symmetry breaking. This model has been obtained in \cite{cho-a} by following heuristic physical arguments to deal with the Dirac worldsheets, arguments which can be justified by the exact treatment we have presented here to decouple them from the charged sector.
In the presence of a sector of closed center vortices, the $d_{\mu \nu}$ tensor simply gains a term concentrated on the closed thin center vortices \cite{LEO}. While in the percolating case, on the lattice, closed center vortices display a confining phase exhibiting $N$-ality (see \cite{greensite} and references therein), in the nonpercolating situation they could be associated with Abelian dominance \cite{LEO}.
It is also possible to attach monopoles to a pair of open center vortices carrying flux $2\pi/g$. In the nonpercolating case, center vortex chains would tend to erase magnetic monopoles, forming magnetic dipoles and a nonconfining phase, as occurs in compact $QED(3)$ coupled to massless fermions, where dipoles are formed because of the existence of quasi-zero modes \cite{FO}. On the other hand, from lattice studies \cite{AGG}-\cite{GKPSZ}, the percolating case is a promising phase, possibly displaying not only confinement but also the observed dependence of the confining string tension on the group representation.
In this regard, it could be argued that the argument in \S \ref{in} can also be used to get rid of open or closed center vortices, as they would appear in eq. (\ref{Dxl}) parametrized by $d_{\mu \nu}$ (see \cite{LEO}), and an appropriate unobservable closed Dirac worldsheet could be introduced to compensate the center vortex contribution.
However, while for fixed monopole positions it is possible to change the Dirac worldsheet by performing a (singular) topologically trivial $SU(2)$ gauge transformation, for center vortices it is not \cite{LEO}, \cite{engelhardt1,reinhardt}, so that the latter are expected to be physical objects. From the perspective provided by our procedure, this means that a nontrivial correlation between center vortices and charged fields must be generated. This would imply a nontrivial Jacobian for the phase transformation of the charged fields, precluding the elimination of center vortices by a simple extension of the procedure derived for open Dirac worldsheets. A similar situation applies to the $k_{\mu \nu}$-dependent term in eq. (\ref{YMchi}) (containing nonabelian information): of course, it cannot be eliminated as $k_{\mu \nu}$ depends on the charged fields and the Jacobian for the necessary transformation would be nontrivial.
Then, when open or closed physical center vortices are considered, their coupling to the dual field $B_{\mu \nu}$ cannot be made to vanish. In this case, the analysis of the possible phases becomes highly nontrivial, as it involves ensembles of two-dimensional worldsheets correlated with charged fields and loop-like monopoles at the borders.
\section*{Acknowledgements}
We would like to acknowledge S. P. Sorella and R. Sobreiro for useful discussions. The Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico (CNPq-Brazil), the Funda{\c {c}}{\~{a}}o de Amparo {\`{a}} Pesquisa do Estado do Rio de Janeiro (FAPERJ), and the Pr\'o-Reitoria de P\'os-Gradua\c c\~ao e Pesquisa da Universidade Federal Fluminense (PROPP-UFF), are acknowledged for the financial support.
\section{Introduction} \label{sec:intro}
In AI in general and in data mining in particular, there is an increasing interest in developing general methods for data analysis. In order to be useful, such methods should be easy to extend with domain-specific knowledge.
In pattern mining, the frequent sequence mining problem has already been studied in depth, but usually with a focus on efficiency and less on generality and extensibility. An important step in the development of more general approaches was the cSpade algorithm~\cite{zaki2000sequence}, which supports a variety of constraints,
such as constraints on the length of the pattern, on the maximum gap in embeddings, or on the discriminative power of the patterns between datasets.
Many other constraints have been integrated into specific mining algorithms (e.g. \cite{han2001prefixspan,yan2003clospan,wang2004bide,OhtaniKUA09}).
However, none of these are truly generic in that adding extra constraints usually amounts to changing the data-structures used in the core of the algorithm.
For \textit{itemset} mining, the simplest form of pattern mining, it has been shown that constraint programming (CP) can be used as a generic framework for constraint-based mining~\cite{cp4im_aij} and beyond~\cite{DBLP:conf/cpaior/RojasBLCL14,dominance_dp}.
Recent works have also investigated the usage of CP-based approaches for mining sequences with explicit wildcards~\cite{coquery2012sat,JSS-13-3,kemmarictai}. A wildcard represents the presence of exactly one arbitrary symbol in that position in the sequence.
The main difference between mining itemsets, sequences with wildcards and standard sequences lies in the complexity of testing whether a pattern is included in another itemset/sequence, e.g. from the database. For itemsets, this is simply testing the subset inclusion relation which is easy to encode in CP. For sequences with wildcards and general sequences, one has to check whether an \textit{embedding} exists (matching of the individual symbols). But when only a few embeddings are possible, as in sequences with explicit wildcards, this can be done with a disjunctive constraint over all possible embeddings \cite{kemmarictai}. In general sequences (the setting we address in this paper), a pattern of size $m$ can be embedded into a sequence of size $n$ in $O(n^m)$ different ways, which prohibits a direct encoding or enumeration.
The contributions of this paper are as follows:
\begin{itemize}
\item
We present four categories of user constraints; this categorization will be useful to compare the generality of the two proposed models.
\item We introduce an {\em exists-embedding} global constraint for sequences, and show the relation to projected databases and \textit{projected frequency} used in the sequence mining literature to speed up the mining process \cite{han2001prefixspan,zaki2001spade}.
\item We propose a more general formulation using a decomposition of the {\em exists-embedding} constraint. Searching whether an embedding exists for each transaction is not easily expressed in CP and requires a modified search procedure.
\item We investigate the effect of adding constraints and compare our method with state-of-the-art sequence mining algorithms.
\end{itemize}
\noindent The rest of the paper is organized as follows: Section~\ref{sec:preliminaries} formally introduces the sequence mining problem and the constraint categories. Section~\ref{sec:seq_cp} explains the basics of encoding sequence mining in CP. Section~\ref{sec:first-model} and \ref{sec:second-model} present the model with the global constraint and the decomposition respectively. Section~\ref{sec:experiments} presents the experiments. After an overview of related work (Section~\ref{sec:related}), we discuss the proposed approach and results in Section~\ref{sec:conclusions}.
\section{Sequence mining}
\label{sec:preliminaries}
Sequence mining~\cite{agrawal1995mining} can be seen as a variation of the well-known itemset mining problem proposed in \cite{agrawal1994fast}. In itemset mining, one is given a set of \textit{transactions}, where each transaction is a set of items, and the goal is to find patterns (i.e. sets of items) that are included in a large number of transactions. In sequence mining, the problem is similar except that both transactions and patterns are ordered (i.e. they are sequences instead of sets) and symbols can be repeated.
For example, $\seq{b,a,c,b}$ and $\seq{a,c,c,b,b}$ are two sequences, and the sequence $\seq{a,b}$ is one possible pattern included in both.
This problem is known in the literature under multiple names, such as {\em embedded subsequence mining}, {\em sequential pattern mining}, {\em flexible motif mining}, or {\em serial episode mining} depending on the application.
\subsection{Frequent sequence mining: problem statement}
\label{sec:freq-sequ-mining}
A key concept of any pattern mining setting is the pattern inclusion relation.
In sequence mining, a pattern is included in a transaction if there exists an embedding of that sequence in the transaction, where an embedding is a mapping of every symbol in the pattern to the same symbol in the transaction such that the order is respected.
\begin{definition}[Embedding in a sequence]
\label{def:seq-embedding}
Let $S = \langle s_1,\ldots, s_m \rangle$ and $S' = \langle s'_1, \ldots, s'_n\rangle$ be two sequences of size $m$ and $n$ respectively with $m\leq n$. The tuple of integers $e = (e_1, \ldots, e_m)$ is an \textbf{embedding} of $S$ in $S'$ (denoted $S \sqsubseteq_e S'$) if and only if:
\begin{align}
S \sqsubseteq_e S' \leftrightarrow e_1 < \ldots < e_m ~\mbox{and}~ \forall i \in 1,\ldots,m: s_i = s'_{e_i}
\end{align}
\end{definition}
For example, let $S=\seq{a,b}$ be a pattern, then $(2,4)$ is an embedding of $S$ in $\seq{b,a,c,b}$ and $(1,4),(1,5)$ are both embeddings of $S$ in $\seq{a,c,c,b,b}$. An alternative
setting considers sequences of \textit{itemsets} instead of sequences of individual symbols. In this case, the definition is $S \sqsubseteq_e S' \leftrightarrow e_1 < \ldots < e_m ~\mbox{and}~ \forall i \in 1,\ldots,m: s_i \subseteq s'_{e_i}$. We do not consider this setting further in this paper, though it is an obvious extension.
We can now define the sequence inclusion relation as follows:
\begin{definition}[Inclusion relation for sequences]
\label{def:seq-incl-rel}
Given two sequences $S$ and $S'$, $S$ \textbf{is included in} $S'$ (denoted $S \sqsubseteq S'$) if there exists an embedding $e$ of $S$ in $S'$:
\begin{align} \label{eq:sec-incl-rel}
S \sqsubseteq S' \leftrightarrow \exists e ~\mbox{s.t.}~ S \sqsubseteq_e S'.
\end{align}
\end{definition}
To continue on the example above, $S=\seq{a,b}$ is included in both $\seq{b,a,c,b}$ and $\seq{a,c,c,b,b}$ but not in $\seq{c,b,a,a}$.
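Definitions~\ref{def:seq-embedding} and~\ref{def:seq-incl-rel} translate directly into code. The Python sketch below (sequences are represented as tuples of symbols) tests the inclusion relation with a single greedy left-to-right scan, which finds an embedding whenever one exists.
\begin{verbatim}
def embeds(pattern, transaction):
    """Return True iff some embedding of pattern in transaction exists."""
    pos = 0
    for symbol in pattern:
        while pos < len(transaction) and transaction[pos] != symbol:
            pos += 1                  # scan for the next occurrence of symbol
        if pos == len(transaction):
            return False              # no position left for this symbol
        pos += 1                      # match found, continue after it
    return True

# the examples used in the text
print(embeds(("a", "b"), ("b", "a", "c", "b")))        # True
print(embeds(("a", "b"), ("a", "c", "c", "b", "b")))   # True
print(embeds(("a", "b"), ("c", "b", "a", "a")))        # False
\end{verbatim}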
\begin{definition}[Sequential dataset]
Given an alphabet of symbols $\Sigma$, a {\em sequential dataset} $D$ is a multiset of sequences defined over symbols in $\Sigma$.
\end{definition}
Each sequence in $D$ is called a {\em transaction} using the terminology from itemset mining. The number of transactions in $D$ is denoted $|D|$ and the sum of the lengths of every transaction in $D$ is denoted $||D||$ ($||D|| = \sum_{i = 1}^{|D|}|T_i|$).
Furthermore, we use {\em dataset} as a shorthand for {\em sequential dataset} when it is clear from context.
Given a dataset $D = \{T_1, \ldots, T_n\}$, one can compute the {\bf cover} of a sequence $S$ as the set of all transactions $T_i$ that contain $S$:
\begin{equation}
\label{eq:cover}
cover(S, \ensuremath{D}) = \{T_i \in \ensuremath{D} : S \sqsubseteq T_i\}
\end{equation}
We can now define frequent sequence mining, where the goal is to find all patterns that are frequent in the database; namely, the size of their cover is sufficiently large.
\begin{definition}[Frequent sequence mining]
\label{def:spm}
Given:
\begin{enumerate}
\item an alphabet $\Sigma$
\item a sequential dataset $\ensuremath{D} = \{T_1, \ldots, T_{n}\}$ defined over $\Sigma$
\item a minimum frequency threshold $\theta$,
\end{enumerate}
\noindent enumerate all sequences $S$
such that
$
|cover(S, \ensuremath{D})| \ge \theta
$.
\end{definition}%
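For concreteness, the following unoptimized Python sketch implements Definition~\ref{def:spm} directly, reusing the \texttt{embeds} helper above; it grows candidate patterns one symbol at a time and prunes with the anti-monotonicity of frequency (every extension of an infrequent pattern is infrequent). It is only a reference implementation, not the CP approach developed in this paper.
\begin{verbatim}
def cover(pattern, database):
    return [t for t in database if embeds(pattern, t)]

def frequent_sequences(database, theta):
    """All patterns with |cover| >= theta (assumes theta >= 1 for termination)."""
    alphabet = sorted({s for t in database for s in t})
    frequent, level = [], [()]                  # start from the empty pattern
    while level:
        next_level = []
        for prefix in level:
            for symbol in alphabet:
                candidate = prefix + (symbol,)
                if len(cover(candidate, database)) >= theta:
                    frequent.append(candidate)
                    next_level.append(candidate)
        level = next_level
    return frequent

db = [("b", "a", "c", "b"), ("a", "c", "c", "b", "b"), ("c", "b", "a", "a")]
print(frequent_sequences(db, theta=2))          # contains ("a", "b"), among others
\end{verbatim}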
In large datasets, the number of frequent sequences is often too large to be analyzed by a human. Extra constraints can be added to extract fewer, but more relevant or interesting patterns. Many such constraints have been studied in the past.
\subsection{Constraints}
\label{sec:constraints-max-1}
Constraints typically capture background knowledge and are provided by the user. We identify four categories of constraints for sequence mining: 1) constraints over the pattern, 2) constraints over the cover set, 3) constraints over the inclusion relation and 4) preferences over the solution set.
\subsubspace
\subsubsection{Constraints on the pattern}
These put restrictions on the structure of the pattern. Typical examples include size constraints
or regular expression constraints.\\
{\em Size constraints:}
A size constraint is simply $|S| \gtrless \alpha$ where $\gtrless \in \{=,\neq,>,\geq,<,\leq\}$ and $\alpha$ is a user-supplied threshold. It is used to discard small patterns.\\
{\em Item constraints:} One can constrain a symbol $t$ to surely be in the pattern: $\exists s \in S: s = t$; or that it can not appear in the pattern: $\forall s \in S: s \neq t$, or more complex logical expressions over the symbols in the pattern.\\
{\em Regular expression constraints:}
Let $R$ be a regular expression over the vocabulary $V$ and $L_R$ be the language of sequences recognised by $R$, then for any sequence pattern $S$ over $V$, the {\em \ensuremath{match\mbox -regular}\xspace} constraint requires that $S \in L_R$~\cite{han2001prefixspan}.
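Checking these pattern constraints on a candidate sequence is straightforward. The small Python sketch below (symbols are assumed to be single characters, so a pattern can be joined into a string for the regular-expression test) gives one possible predicate for each constraint above.
\begin{verbatim}
import re

def size_at_least(pattern, alpha):          # |S| >= alpha
    return len(pattern) >= alpha

def contains_item(pattern, t):              # exists s in S: s = t
    return t in pattern

def excludes_item(pattern, t):              # forall s in S: s != t
    return t not in pattern

def matches_regex(pattern, regex):          # S in L_R
    return re.fullmatch(regex, "".join(pattern)) is not None

print(matches_regex(("a", "b", "b", "c"), "ab*c"))   # True
\end{verbatim}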
\subsubspace
\subsubsection{Constraints on the cover set.}
The {\em minimum frequency} constraint $|cover(S,D)| \geq \theta$ is the most common example of a constraint over the cover set. Alternatively, one can impose the {\em maximum frequency} constraint: $|cover(S,D)| \leq \beta$.\\
{\em Discriminating constraints:} In case of multiple datasets, discriminating constraints require that patterns effectively distinguish the datasets from each other.
Given two datasets $D_1$ and $D_2$, one can require that the ratio between the size of the cover of both is above a threshold: $\frac{|cover(S,D_1)|}{|cover(S,D_2)|} \geq \alpha$. Other examples include more statistical measures such as information gain and entropy~\cite{correlated_cp}.
\subsubspace
\subsubsection{Constraints over the inclusion relation.}
The inclusion relation in definition~\ref{def:seq-incl-rel} states that $S \sqsubseteq S' \leftrightarrow \exists e ~\mbox{s.t.}~ S \sqsubseteq_e S'$. Hence, an embedding of a pattern can match symbols that are far apart in the transaction. For example, the sequence $\seq{a,c}$ is embedded in the transaction $\seq{a,b,b,b,\ldots,b,c}$ independently of the distance between $\lit a$ and $\lit c$ in the transaction. This is undesirable when mining datasets with long transactions. The \ensuremath{max\mbox -gap}\xspace and \ensuremath{max\mbox -span}\xspace constraints \cite{zaki2000sequence} impose a restriction on the embedding, and hence on the inclusion relation.
\label{sec:maxgap-constraint}
The {\em \ensuremath{max\mbox -gap}\xspace constraint} is satisfied on a transaction $T_i$ if an embedding $e$ maps every two consecutive symbols in $S$ to symbols in $T_i$ that are close to each other: $\ensuremath{max\mbox -gap}\xspace_i(e) \Leftrightarrow \forall{j \in 2..|S|}, (e_{j} - e_{j-1} - 1) \le \gamma$.
For example, the sequence $\seq{abc}$ is embedded in the transaction \seq{adddbc} with a maximum gap of 3 whereas \seq{ac} is not.
\label{sec:maxspan-constraint}
The {\em \ensuremath{max\mbox -span}\xspace constraint} requires that the distance between the first and the last position of the embedding in a transaction $T_i$ is below a
threshold~$\gamma$:
$
\ensuremath{max\mbox -span}\xspace_i(e) \Leftrightarrow e_{|S|} - e_1 + 1 \le \gamma
$.
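Under gap or span restrictions a greedy scan is no longer sufficient, since a later occurrence of a symbol may be needed to satisfy the constraint. The Python sketch below (an illustrative check, independent of the CP models that follow) searches over embeddings with backtracking and reproduces the \ensuremath{max\mbox -gap}\xspace example given above.
\begin{verbatim}
def embeds_constrained(pattern, transaction, max_gap=None, max_span=None):
    """True iff an embedding exists that satisfies max-gap / max-span."""
    def search(p_idx, last_pos, first_pos):
        if p_idx == len(pattern):
            return True
        start = 0 if last_pos is None else last_pos + 1
        for pos in range(start, len(transaction)):
            if transaction[pos] != pattern[p_idx]:
                continue
            if last_pos is not None and max_gap is not None \
                    and pos - last_pos - 1 > max_gap:
                break                 # positions only grow, the gap only worsens
            if first_pos is not None and max_span is not None \
                    and pos - first_pos + 1 > max_span:
                break
            if search(p_idx + 1, pos, pos if first_pos is None else first_pos):
                return True
        return False
    return search(0, None, None)

# the max-gap example from the text (gamma = 3)
print(embeds_constrained("abc", "adddbc", max_gap=3))   # True
print(embeds_constrained("ac",  "adddbc", max_gap=3))   # False
\end{verbatim}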
\subsubspace
\subsubsection{Preferences over the solution set.}
A pairwise preference over the solution set expresses that a pattern $A$ is preferred over a pattern $B$. In~\cite{dominance_dp} it was shown that condensed representations like closed, maximal and free patterns can be expressed as pairwise preference relations. Skypatterns~\cite{DBLP:conf/cpaior/RojasBLCL14} and multi-objective optimisation can also be seen as preference over patterns. As an example, let $\Delta$ be the set of all patterns; then, the set of all closed patterns is $\{S \in \Delta | \nexists S' \mbox{ s.t. } S \sqsubset S' \mbox{ and } cover(S,\ensuremath{D}) = cover(S',\ensuremath{D})\}$.
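As an illustration of how such a preference can be realized, the following Python sketch filters a set of frequent patterns down to the closed ones, reusing the \texttt{embeds} and \texttt{frequent\_sequences} helpers sketched earlier; post-processing is only one possible way to enforce such a preference.
\begin{verbatim}
def closed_patterns(frequent, database):
    """Keep S if no strict super-pattern with the same cover is in the set."""
    covers = {p: frozenset(i for i, t in enumerate(database) if embeds(p, t))
              for p in frequent}
    return [p for p in frequent
            if not any(q != p and embeds(p, q) and covers[q] == covers[p]
                       for q in frequent)]

db = [("b", "a", "c", "b"), ("a", "c", "c", "b", "b"), ("c", "b", "a", "a")]
print(closed_patterns(frequent_sequences(db, theta=2), db))
\end{verbatim}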
\section{Sequence Mining in Constraint Programming}\label{sec:seq_cp}
In constraint programming, problems are expressed as a constraint satisfaction problem (CSP), or a constraint optimisation problem (COP). A CSP $X=(V,D,C)$ consists of a set of variables $V$, a finite domain $D$ that defines for each variable $v \in V$ the possible values that it can take, and a set of constraints $C$ over the variables in $V$. A solution to a CSP is an assignment of each variable to a value from its domain such that all constraints are satisfied. A COP additionally consists of an optimisation criterion $f(V)$ that expresses the quality of the solution.
There is no restriction on what a constraint $C$ can represent. Examples include logical constraints like ${\bf X} \wedge {\bf Y}$ or ${\bf X} \rightarrow {\bf Y}$ and mathematical constraints such as ${\bf Z} = {\bf X} + {\bf Y}$ etc. Each constraint has a corresponding \textit{propagator} that ensures the constraint is satisfied during the search. Many \textit{global constraints} have been proposed, such as \textit{alldifferent}, which have a custom propagator that is often more efficient than if one were to \textit{decompose} that constraint in terms of simple logical or mathematical constraints. A final important concept used in this paper is that of \textit{reified constraints}. A reified constraint is of the form ${\bf B} \leftrightarrow C'$ where ${\bf B}$ is a Boolean variable which will be assigned to the truth value of constraint $C'$. Reified constraints have their own propagator too.
\parspace
\paragraph{Variables and domains for modeling sequence mining.}
Modeling a problem as a CSP requires the definition of a set of variables with a finite domain, and a set of constraints. One solution to the CSP will correspond to one pattern, that is, one frequent sequence.
We model the problem using an array ${\bf S}$ of integer variables representing the characters of the sequence and an array ${\bf C}$ of Boolean variables representing which transactions include the pattern. This is illustrated in Fig. \ref{fig:principle1}:
\begin{figure}[t]
\centering
\Large
\begin{tikzpicture}[scale=0.55, transform shape]
\tikzstyle{seq_item}=[draw, fill=blue!20, shape=rectangle, minimum size=1cm];
\tikzstyle{seq_item_small}=[draw, fill=blue!20, shape=rectangle, minimum size=1cm, minimum width=0.5cm];
\tikzstyle{seq_item_static}=[draw, fill=white, shape=rectangle, minimum size=1cm];
\tikzstyle{myedge}=[thick, -latex, color=red!80!black];
\newcounter{y}
\node at (0.2, 0) {${\bf S}:~$};
\setcounter{y}{1}
\foreach \i in {A, B, $\epsilon$, $\epsilon$}{
\node[seq_item, label=above:{\small p=\arabic{y}}] at (\arabic{y}, 0) (j\arabic{y}) {\i};
\stepcounter{y}
}
\node[seq_item_small] (t1) at (-1.1, -1.4) {$1$};
\node at (-1.8, -1.4) {${\bf C_1}:~$};
\node at (0.15, -1.4) {$T_1:~$};
\setcounter{y}{1}
\foreach \i in {A, C, B}{
\node[seq_item_static] at (0+\arabic{y}, -1.4) (x1\arabic{y}) {\i};
\stepcounter{y}
}
\node[seq_item_small] (t2) at (-1.1, -2.4) {$0$};
\node at (-1.8, -2.4) {${\bf C_2}:~$};
\node at (0.15, -2.4) {$T_2:~$};
\setcounter{y}{1}
\foreach \i in {B, A, A, C}{
\node[seq_item_static] at (0+\arabic{y}, -2.4) (x2\arabic{y}) {\i};
\stepcounter{y}
}
\end{tikzpicture}
\caption{Example assignment; blue boxes represent variables, white boxes represent data.}
\label{fig:principle1}
\end{figure}%
\begin{enumerate}
\item $T_1$ and $T_2$ represent two transactions given as input. We denote the number of transactions by $n$;
\item The array of variables ${\bf S}$ represents the sequence pattern.
Each variable ${\bf S_j}$ represents the character in the $j$th position of the sequence.
The size of ${\bf S}$ is determined by the length of the longest transaction (in the example this is $4$).
We want to allow patterns that have fewer than $\max_i(|T_i|)$ characters, hence we use $\epsilon$ to represent an unused position in ${\bf S}$.
The domain of each variable ${\bf S_j}$ is thus $\Sigma \cup \{\epsilon\}$;
\item Boolean variables ${\bf C_i}$ represent whether the pattern is included in transaction $T_i$, that is, whether ${\bf S} \sqsubseteq T_i$. In the example, this is the case for $T_1$ but not for $T_2$.
\end{enumerate}
What remains to be defined is the constraints. The key part here is how to model the inclusion relation; that is, the constraint that verifies whether a pattern is included in the transaction. Conceptually, this is the following reified constraint: ${\bf C_i} \leftrightarrow \exists e ~\mbox{s.t.}~ {\bf S} \sqsubseteq_e T_i$.
As mentioned in the introduction, the number of possible embeddings is exponential in the size of the pattern. Hence, one can not model this as a disjunctive constraint over all possible embeddings (as is done for sequences with explicit wildcards~\cite{kemmarictai}).
We propose two approaches to cope with this problem: one with a global constraint that verifies the inclusion relation directly on the data,
and one in which the inclusion relation is decomposed and the embedding is exposed through variables.
\section{Sequence mining with a global \textit{exists-embedding} constraint}
\label{sec:first-model}
The model consists of three parts: encoding of the pattern, of the minimum frequency constraint and finally of the inclusion relation using a global constraint.
\parspace
\paragraph{Variable-length\ pattern:} The array ${\bf S}$ has length $k$; patterns with $l < k$ symbols are represented with $l$ symbols from $\Sigma$ and $(k - l)$ times an $\epsilon$ value. To avoid enumerating the same pattern with $\epsilon$ values in different positions, $\epsilon$ values can only appear at the end:
\begin{equation}
\label{eq:well-formed}
\begin{gathered}
\forall j \in 1..(k-1): {\bf S_{j}} = \epsilon \rightarrow {\bf S_{j+1}} = \epsilon
\end{gathered}
\end{equation}
\optional{
This can be encoded with $k-1$ auxiliary variables and reified ${\bf B} \leftrightarrow {\bf S_j} = \epsilon$ constraints, and $1$ lexicographic 'less than or equal' constraint over the array of auxiliary variables (using the observation that ${\bf B_1} \rightarrow {\bf B_2} \equiv {\bf B_1} \leq {\bf B_2}$ for Boolean variables).
}
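To illustrate the intent of this encoding outside the solver, the following minimal Python sketch (the helper names and the use of \texttt{None} for $\epsilon$ are ours, not part of the CP model) shows that the rule of Equation~(\ref{eq:well-formed}) and its Boolean reformulation agree on concrete assignments:
\begin{verbatim}
EPS = None  # stands for the epsilon value

def well_formed(S):
    """Once an epsilon appears, every later position must be epsilon too."""
    return all(not (S[j] is EPS and S[j + 1] is not EPS)
               for j in range(len(S) - 1))

def well_formed_via_booleans(S):
    """Same rule through auxiliary Booleans B_j <-> (S_j = eps) and B_j <= B_{j+1}."""
    B = [s is EPS for s in S]
    return all(B[j] <= B[j + 1] for j in range(len(B) - 1))

assert well_formed(["A", "B", EPS, EPS]) == well_formed_via_booleans(["A", "B", EPS, EPS]) == True
assert well_formed(["A", EPS, "B", EPS]) == well_formed_via_booleans(["A", EPS, "B", EPS]) == False
\end{verbatim}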
\parspace
\parspace
\paragraph{Minimum frequency:} At least $\theta$ transactions should include the pattern. This inclusion is indicated by the array of Boolean variables ${\bf C}$:
\parspace
\begin{equation}
\label{eq:minfreq}
\begin{gathered}
\sum_{i = 1}^n {\bf C_i} \geq \theta
\end{gathered}
\end{equation}
\optional{
This can be encoded with $1$ linear inequality constraint over the ${\bf C}$ variables.
}
\parspace
\parspace
\paragraph{Global exists-embedding constraint:}
The goal is to encode the relation:
$
{\bf C_i} \leftrightarrow \exists e ~\mbox{s.t.}~ {\bf S} \sqsubseteq_e T_i.
$
\begin{algorithm}[t]
\algtext*{EndIf}
\footnotesize
\caption{Incremental propagator for ${\bf C_i} \leftrightarrow \exists e ~\mbox{s.t.}~ {\bf S} \sqsubseteq_e T_i$:\label{alg:global_emp}}
\textit{internal state, $pos_S$: current position in ${\bf S}$ to check, initially 1}\\
\textit{internal state, $pos_e$: current position in $T_i$ to match to, initially 1}
\begin{algorithmic}[1]
\While{$pos_S \leq |T_i|$ and ${\bf S}$[$pos_S$] is assigned\label{a1:while}} \Comment{note that $|T_i| \leq |{\bf S}|$}
\If{${\bf S}$[$pos_S$] $\neq \epsilon$}\label{a1:neq_eps}
\While{not ($T_i[pos_e] = {\bf S}[pos_S])$ and $pos_e \leq |T_i|$\label{a1:while2}} \Comment find match
\State{$pos_e \gets pos_e + 1$}
\EndWhile
\If{$pos_e \leq |T_i|$} \Comment match found, on to next one
\State{$pos_S \gets pos_S + 1$; $pos_e \gets pos_e + 1$}
\Else
\State propagate ${\bf C_i} = False$ and return
\EndIf
\Else \Comment previous ones matched and rest is $\epsilon$
\State propagate ${\bf C_i} = True$ and return \label{a1:eps}
\EndIf
\EndWhile \label{a1:endwhile}
\If{$pos_S > |{\bf S}|$} \Comment previous ones matched and reached end of sequence
\State propagate ${\bf C_i} = True$ and return \label{a1:eos}
\EndIf
\If{$pos_S > |T_i|$ and $|T_i| < |{\bf S}|$} \label{a1:longer:start}
\State{{\bf let} $R \gets {\bf S}[|T_i|+1]$}
\If{$R$ is assigned and $R = \epsilon$} \Comment{{\bf S} should not be longer than this transaction}
\State propagate ${\bf C_i} = True$ and return
\EndIf
\If{$\epsilon$ is not in the domain of $R$}
\State propagate ${\bf C_i} = False$ and return
\EndIf
\EndIf \label{a1:longer:stop}
\If{${\bf C_i}$ is assigned and ${\bf C_i} = True$} \label{a1:revprop:start}
\State{propagate by removing from ${\bf S}[pos_S]$ all symbols not in $\langle T_i[pos_e]..T_i[|T_i|] \rangle$ except $\epsilon$}
\EndIf \label{a1:revprop:stop}
\end{algorithmic}
\end{algorithm}%
The propagator algorithm for this constraint is given in Algorithm~\ref{alg:global_emp}. It is an incremental propagator that should be run when one of the ${\bf S}$ variables is assigned. Line~\ref{a1:while} will loop over the
variables in ${\bf S}$ until reaching an unassigned one at position $pos_S$. In the sequence mining literature, the sequence $\langle {\bf S_{1}}..{\bf S_{pos_S-1}} \rangle$ is called the \textit{prefix}. For each assigned ${\bf S_j}$ variable, a matching element in the transaction is sought, starting from the position $pos_e$ after the element that matched the previous ${\bf S_{j-1}}$ assigned variable. If no such match
is found then an embedding cannot be found and ${\bf C_i}$ is set to false.
Line~\ref{a1:eps} is called when an ${\bf S_j}$ variable is assigned to $\epsilon$. This line can only be reached if all previous values of ${\bf S}$ are assigned and were matched in $T_i$, hence the propagator can set ${\bf C_i}$ to true and quit. Similarly for line~\ref{a1:eos} when the end of the sequence is reached, and lines~\ref{a1:longer:start}-\ref{a1:longer:stop} in case the transaction is smaller than the sequence. Lines~\ref{a1:revprop:start}-\ref{a1:revprop:stop} propagate the remaining possible symbols from $T_i$ to the first unassigned ${\bf S}$ variable in case ${\bf C_i} = True$.
The propagator algorithm has complexity $O(|T_i|)$: the loop on line~\ref{a1:while} is run up to $|T_i|$ times and on line~\ref{a1:while2} at most $|T_i|$ times in total, as $pos_e$ is monotonically increasing.
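To make the control flow concrete, the matching core of Algorithm~\ref{alg:global_emp} can be sketched as follows. This is a simplified, non-incremental Python rendering for illustration only: it omits the internal state kept between propagator calls and the reverse propagation of lines~\ref{a1:revprop:start}-\ref{a1:revprop:stop}, and the markers used for $\epsilon$ and for unassigned variables are ours.
\begin{verbatim}
EPS, UNASSIGNED = "eps", "?"   # illustrative markers, not part of the actual model

def propagate_exists_embedding(S, T):
    """Returns True/False when C_i can already be fixed, or None when undecided."""
    pos_e = 0                          # next candidate position in T
    for s in S:
        if s == UNASSIGNED:
            return None                # stop at the first unassigned variable
        if s == EPS:
            return True                # assigned prefix matched, rest of S is epsilon
        while pos_e < len(T) and T[pos_e] != s:
            pos_e += 1                 # scan T for a match of s
        if pos_e == len(T):
            return False               # no embedding possible -> C_i = False
        pos_e += 1                     # match found, continue after it
    return True                        # the whole pattern was matched in T

S = ["A", "B", EPS, EPS]
print(propagate_exists_embedding(S, ["A", "C", "B"]))       # True  -> C_1 = 1 (Figure 1)
print(propagate_exists_embedding(S, ["B", "A", "A", "C"]))  # False -> C_2 = 0 (Figure 1)
print(propagate_exists_embedding(["A", UNASSIGNED, EPS, EPS], ["A", "C", "B"]))  # None
\end{verbatim}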
\optional{
There are $n$ global \textit{exists-embedding} constraints needed.
}
\subsection{Improved pruning with \textit{projected frequency}}
As in specialised sequence mining algorithms, $pos_S$
in Algorithm~\ref{alg:global_emp} points to the first position in ${\bf S}$ after the current \textit{prefix}. Dually, $pos_e$ points to the position after the first match of the prefix in the transaction. If one would project the prefix away, only the symbols in the transaction from $pos_e$ on would remain; this is known as \textit{prefix projection}~\cite{han2001prefixspan}. Given prefix $\seq{a,c}$ and transaction $\seq{b,a,a,e,c,b,c,b,b}$ the projected transaction is $\seq{b,c,b,b}$.
The concept of a prefix-projected database can be used to recompute the frequency of all symbols in the projected database. If a symbol is present but not frequent in the projected database, one can avoid searching over it. This is known to speed up specialised mining algorithms considerably~\cite{han2001prefixspan,wang2004bide}.
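Outside the solver, prefix projection is easy to state directly on the data; the following Python sketch (helper names ours) reproduces the example above and counts the projected frequencies:
\begin{verbatim}
def project(prefix, T):
    """Suffix of T after the left-most embedding of prefix, or None if none exists."""
    pos_e = 0
    for s in prefix:
        while pos_e < len(T) and T[pos_e] != s:
            pos_e += 1
        if pos_e == len(T):
            return None
        pos_e += 1
    return T[pos_e:]

def projected_frequencies(prefix, database):
    """Per symbol, the number of projected transactions in which it still occurs."""
    freq = {}
    for T in database:
        proj = project(prefix, T)
        for symbol in set(proj or []):
            freq[symbol] = freq.get(symbol, 0) + 1
    return freq

print(project(["a", "c"], ["b", "a", "a", "e", "c", "b", "c", "b", "b"]))
# ['b', 'c', 'b', 'b']
\end{verbatim}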
To achieve this in the above model, we need to adapt the global propagator so that it exports the symbols that still appear after $pos_e$.
We introduce an auxiliary integer variable ${\bf X_i}$ for every transaction $T_i$, whose domain represents these symbols (the set of symbols is monotonically decreasing). To avoid searching over infrequent symbols, we define a custom search routine (brancher) over the ${\bf S}$ variables. It first computes the local frequencies of all symbols based on the domains of the ${\bf X_i}$ variables; symbols that are locally infrequent will not be branched over. See Appendix~\ref{app:branch}
for more details.
\subsection{Constraints}
\label{sec:user-constraints1}
This formulation supports a variety of constraints, namely on the pattern (type 1), on the cover set (type 2) and over the solution set (type 4).
For example, the type 1 constraint \ensuremath{min\mbox -size}\xspace constrains the size of the pattern to be at least a user-defined threshold~$\alpha$. This constraint can be formalised as follows.
\begin{equation}
\label{eq:minsize}
\begin{gathered}
\sum_{j = 1}^k\bf \left[S_j \neq \epsilon\right] \ge \alpha
\end{gathered}
\end{equation}
\optional{Alternatively, we can make use of the $\bf B$ auxiliary variables defined in the previous section to simplify this formulation. By observing that $\bf B_j \leftrightarrow S_j = \epsilon$, Equation~\ref{eq:minsize} can be reformulated as: $(\sum_{j = 1}^k\lnot \bf B_j) \ge \alpha$.}
{\em Minimum frequency} in Equation~(\ref{eq:minfreq}) is an example of a constraint of type~2, over the cover set. Another example is the {\it discriminative} constraint mentioned in Section~\ref{sec:constraints-max-1}: given two datasets $D_1$ and $D_2$, one can require that the ratio between the cover in the two datasets is larger than a user defined threshold $\alpha$: $\frac{|cover(S,D_1)|}{|cover(S,D_2)|} \ge \alpha$. Let $D = D_1 \cup D_2$ and let $t_1 = \{ i | T_i \in D_1\}$ and $t_2 = \{ i | T_i \in D_2\}$ then we can extract the discriminant patterns from $D$ by applying the following constraint.
\begin{equation}
\label{eq:discr}
\begin{gathered}
\frac{\sum_{i \in t_1} {\bf C_i}}{\sum_{i \in t_2} {\bf C_i}} \ge \alpha
\end{gathered}
\end{equation}
Such a constraint can also be used as an optimisation criterion in a CP framework.
Type~4 constraints a.k.a. preference relations have been proposed in~\cite{dominance_dp} to formalise well-known pattern mining settings such as $maximal$ or $closed$ patterns.
Such preference relations can be enforced dynamically during search for any CP formulation~\cite{dominance_dp}. The preference relation for closed is $S' \succ S \equiv S \sqsubset S' \wedge cover(S,D) = cover(S',D)$ and one can reuse the global reified \textit{exists-embedding} constraint for this.
Finally, type~3 constraints over the inclusion relation are not possible in this model. Indeed, a new global constraint would have to be created for every possible (combination of) type~3 constraints. For example for \ensuremath{max\mbox -gap}\xspace, one would have to modify Algorithm~\ref{alg:global_emp} to check whether the gap is smaller than the threshold, and if not, to search for an alternative embedding instead (thereby changing the complexity of the algorithm).
\section{Decomposition with explicit embedding variables} \label{sec:second-model}
In the previous model, we used a global constraint to assign the ${\bf C_i}$ variables to their appropriate value, that is:
${\bf C_i} \leftrightarrow \exists e\mbox{ s.t. }{\bf S} \sqsubseteq_e T_i$. The global constraint efficiently tests the existence of one embedding, but does not expose the value of this embedding, thus it is impossible to express constraints over embeddings such as the \ensuremath{max\mbox -gap}\xspace constraint.
To address this limitation, we extend the previous model with a set of {\em embedding} variables ${\bf E_{i1},\ldots,E_{i|T_i|}}$ that will represent an embedding $e = (e_1, \ldots, e_{|T_i|})$ of sequence ${\bf S}$ in transaction $T_i$. In case there is no possible match for a character ${\bf S_j}$ in $T_i$, the corresponding ${\bf E_{ij}}$ variable will be assigned a {\em no-match} value.
\subsection{Variables and constraints}
\newcommand{{\em no-match~}}{{\em no-match~}}
\subsubsection{Embedding variables.}
For each transaction $T_i$ of length $|T_i|$, we introduce integer variables ${\bf E_{i1}}, \ldots, {\bf E_{i|T_i|}}$. Each variable ${\bf E_{ij}}$ is an index in $T_i$, and an assignment to ${\bf E_{ij}}$ maps the variable ${\bf S_j}$ to a position in $T_i$; see Figure~\ref{fig:principle2}, the value of the index is materialized by the red arrows. The domain of ${\bf E_{ij}}$ is initialized to all possible positions of $T_i$, namely $1, \ldots, |T_i|$ plus a {\em no-match~} entry which we represent by the value $|T_i| + 1$.
\subsubspace
\subsubsection{The $\ensuremath{position\mbox -match}\xspace$ constraint.\label{par:posmatch}} This constraint ensures that the variables ${\bf E_i}$ either represent an embedding $e$ such that ${\bf S} \sqsubseteq_e T_i$ or otherwise at least one ${\bf E_{ij}}$ has the {\em no-match~} value. Hence, each variable ${\bf E_{ij}}$ is assigned the value $x$ only if the character in ${\bf S_j}$ is equal to the character at position $x$ in $T_{i}$. In addition, the constraint also ensures that the values between two consecutive variables ${\bf E_{ij}, E_{i(j+1)}}$ are increasing so that the order of the characters in the sequence is preserved in the transaction. If there exists no possible match satisfying these constraints, the {\em no-match} value is assigned.
\parspace
\begin{align}
\label{eq:posmatch}
\forall i \in 1, \ldots, n, \forall j \in 1,\ldots, |T_i|: &\quad ({\bf S_j} = T_i[{\bf E_{ij}}]) \lor ({\bf E_{ij}} = |T_i|+1)\\
\forall i \in 1, \ldots, n, \forall j \in 2,\ldots, |T_i|: &\quad ({\bf E_{i(j-1)}} < {\bf E_{ij}}) \lor ({\bf E_{ij}} = |T_i|+1)
\end{align}
Here ${\bf S_j} = T_{i}[{\bf E_{ij}}]$ means that the symbol of ${\bf S_j}$ equals the symbol at index ${\bf E_{ij}}$ in transaction $T_i$. See Appendix~\ref{app:decomp} for an effective reformulation of these constraints.
\subsubspace
\subsubsection{\ensuremath{Is\mbox -embedding}\xspace constraint.} Finally, this constraint ensures that a variable ${\bf C_i}$ is $true$ if the embedding variables ${\bf E_{i1},\ldots,E_{i|T_i|}}$ together form a valid embedding of sequence ${\bf S}$ in transaction $T_i$. More precisely: if each character ${\bf S_j} \neq \epsilon$ is mapped to a position in the transaction that is different from the {\em no-match~} value.
\begin{equation}
\label{eq:isemb}
\forall i \in 1, \ldots, n:\quad
{\bf C_i} \leftrightarrow \forall j \in 1,\ldots, |T_i|: ~ ({\bf S_j} \neq \epsilon) \rightarrow ({\bf E_{ij}} \neq |T_i|+1)
\end{equation}
\noindent Note that depending on how the ${\bf E_{ij}}$ variables will be searched over, the above constraints may or may not be equivalent to enforcing ${\bf C_i} \leftrightarrow \exists e\mbox{ s.t. }{\bf S} \sqsubseteq_e T_i$. This is explained in the following section.
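For illustration, the following Python sketch (names ours; positions are 1-based and the {\em no-match~} value is $|T_i|+1$, as above) checks a full assignment of the embedding variables against these two constraints, using the example assignment of Figure~\ref{fig:principle2}:
\begin{verbatim}
EPS = None  # epsilon value

def position_match_ok(S, T, E):
    """Every E_ij either equals no-match or points to a matching, increasing position."""
    no_match = len(T) + 1
    for j, e in enumerate(E):             # j is 0-based here, positions are 1-based
        if e == no_match:
            continue
        if S[j] != T[e - 1]:              # S_j must equal the symbol at position e of T_i
            return False
        if j > 0 and not (E[j - 1] < e):  # consecutive matches must be increasing
            return False
    return True

def is_embedding(S, T, E):
    """C_i is true iff every non-epsilon S_j is mapped to a real position."""
    no_match = len(T) + 1
    return all(e != no_match for s, e in zip(S, E) if s is not EPS)

S = ["A", "B", EPS, EPS]
T1, E1 = ["A", "C", "B"], [1, 3, 4, 4]        # Figure 2: no-match value is 4
T2, E2 = ["B", "A", "A", "C"], [2, 5, 5, 5]   # Figure 2: no-match value is 5
print(position_match_ok(S, T1, E1), is_embedding(S, T1, E1))  # True True  -> C_1 = 1
print(position_match_ok(S, T2, E2), is_embedding(S, T2, E2))  # True False -> C_2 = 0
\end{verbatim}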
\begin{figure}[t]
\centering
\Large
\begin{tikzpicture}[scale=0.55, transform shape]
\tikzstyle{seq_item}=[draw, fill=blue!20, shape=rectangle, minimum size=1cm];
\tikzstyle{seq_item_small}=[draw, fill=blue!20, shape=rectangle, minimum size=1cm, minimum width=0.5cm];
\tikzstyle{seq_item_static}=[draw, fill=white, shape=rectangle, minimum size=1cm];
\tikzstyle{myedge}=[thick, -latex, color=red!80!black];
\node at (0.3, 1) {${\bf S}:~$};
\setcounter{y}{1}
\foreach \i in {A, B, $\epsilon$, $\epsilon$}{
\node[seq_item, label=above:{\small p=\arabic{y}}] at (\arabic{y}, 0.3) (j\arabic{y}) {\i};
\stepcounter{y}
}
\node[seq_item_small] (t1) at (-2.0, -1.4) {$1$};
\node at (-3, -1.4) {${\bf C_1}:~$};
\node at (0.15, -1.4) {$T_1:~$};
\setcounter{y}{1}
\foreach \i in {A, C, B}{
\node[seq_item_static] at (0+\arabic{y}, -1.4) (x1\arabic{y}) {\i};
\stepcounter{y}
}
\node at (6.0, -1.4) {${\bf E_1}:~$};
\setcounter{y}{1}
\foreach \i in {1, 3, {\it 4}, {\it 4}}{
\node[seq_item, label=above:{\small j=\arabic{y}}] at (6+\arabic{y}, -1.4) (e1\arabic{y}) {\i};
\stepcounter{y}
}
\draw (e11) edge[myedge, bend right, in=-130] (x11);
\draw (e12) edge[myedge, bend right, out=300] (x13);
\node[seq_item_small] (t2) at (-2.0, -2.4) {$0$};
\node at (-3, -2.4) {${\bf C_2}:~$};
\node at (0.15, -2.4) {$T_2:~$};
\setcounter{y}{1}
\foreach \i in {B, A, A, C}{
\node[seq_item_static] at (0+\arabic{y}, -2.4) (x2\arabic{y}) {\i};
\stepcounter{y}
}
\node at (6.0, -2.4) {${\bf E_2}:~$};
\setcounter{y}{1}
\foreach \i in {2, {\it 5}, {\it 5}, {\it 5}}{
\node[seq_item] at (6+\arabic{y}, -2.4) (e2\arabic{y}) {\i};
\stepcounter{y}
}
\draw (e21) edge[myedge, bend left, out=35, in=125] (x22);
\end{tikzpicture}
\caption{Example assignment; blue boxes represent variables, white boxes represent data. The cursive values in ${\bf E_1}$ and ${\bf E_2}$ represent the {\em no-match~} value for that transaction.}
\label{fig:principle2}
\end{figure}%
\subsection{Search strategies for checking the existence of embeddings}
CP's standard enumerative search would search for all satisfying assignments to the ${\bf S_j}, {\bf C_i}$ and ${\bf E_{ij}}$ variables. Since, for each sequence of size $m$, the number of embeddings in a transaction of size $n$ can be $O(n^m)$, such a search would not perform well. Instead, we only need to search whether {\em one} embedding exists for each transaction.
\optional{
\subsubspace
\subsubsection{Without additional constraints on ${\bf E_{ij}}$ and ${\bf C_i}$.}
The reformulation in Appendix \ref{app:decomp} of the \ensuremath{position\mbox -match}\xspace constraint is \textit{lower}-bound consistent on the ${\bf E_{ij}}$ variables, assuming there are no other constraints on the ${\bf E_{ij}}$ or ${\bf C_i}$ variables. Indeed, for any assignment to ${\bf S}$, taking the smallest value of each ${\bf E_{ij}}$ variable results in an assignment that is a valid embedding for all transactions that admit one. Thus in that specific case, there is no need to search over the ${\bf E_{ij}}$ or ${\bf C_i}$ variables. Indeed, after searching over the ${\bf S_j}$ variables, either ${\bf C_i}$ is {\em false} or it is unassigned and can be set to {\em true} by assigning each ${\bf E_{ij}}$ to the lowest value in its domain (non {\em no-match~} value, otherwise ${\bf C_i}$ would be \textit{false}).
}
\subsubspace
\subsubsection{With additional constraints on ${\bf E_{ij}}$ but not ${\bf C_i}$.}
When there are additional constraints on the ${\bf E_{ij}}$ variables
such as \ensuremath{max\mbox -gap}\xspace, one has to perform backtracking search to find a valid embedding. We do this after the ${\bf S}$ variables have been assigned.
We call the search over the ${\bf S}$ variables the \textit{normal} search, and the search over the ${\bf E_{ij}}$ variables the \textit{sub} search. Observe that one can do the \textit{sub} search for each transaction $i$ independently of the other transactions as the different ${\bf E_{i}}$ have no influence on each other, only on ${\bf C_i}$. Hence, one does not need to backtrack across different \textit{sub} searchers.
The goal of a \textit{sub} search for transaction $i$ is to find a valid embedding for that transaction. Hence, that \textit{sub} search should search for an assignment to the ${\bf E_{ij}}$ variables with ${\bf C_i}$ set to \textit{true} first. If a valid assignment is found, an embedding for $T_i$ exists and the \textit{sub} search can stop. If no assignment is found, ${\bf C_i}$ is set to false and the \textit{sub} search can stop too.
See Appendix \ref{app:subsearch}
for more details on the sub search implementation.
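A minimal illustration of such a \textit{sub} search, outside the CP solver, is a per-transaction backtracking search for a single embedding that satisfies an extra condition such as \ensuremath{max\mbox -gap}\xspace. The Python sketch below uses our own helper names; the actual implementation instead reuses the solver's propagation and branching.
\begin{verbatim}
def find_embedding(S, T, max_gap=None, EPS=None):
    """Backtracking search for one embedding of the non-epsilon part of S in T.
    If max_gap is given, consecutive matches may skip at most max_gap positions.
    Returns a list of 1-based positions, or None if no valid embedding exists."""
    symbols = [s for s in S if s is not EPS]

    def search(j, start, prev):
        if j == len(symbols):
            return []
        for pos in range(start, len(T)):
            if T[pos] != symbols[j]:
                continue
            if max_gap is not None and prev is not None and pos - prev - 1 > max_gap:
                break                    # later matches only increase the gap
            rest = search(j + 1, pos + 1, pos)
            if rest is not None:
                return [pos + 1] + rest
        return None                      # no match for symbols[j] -> backtrack

    return search(0, 0, None)

T = ["b", "a", "a", "e", "c", "b", "c", "b", "b"]
print(find_embedding(["a", "c"], T))             # [2, 5]
print(find_embedding(["a", "c"], T, max_gap=1))  # [3, 5]: the gap constraint forces backtracking
\end{verbatim}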
\subsubspace
\subsubsection{With arbitrary constraints.} The constraint formulation in Equation \eqref{eq:isemb} is not equivalent to ${\bf C_i} \leftrightarrow \exists e\mbox{ s.t. }{\bf S} \sqsubseteq_e T_i$. For example, let us say some arbitrary constraint propagates ${\bf C_i}$ to \textit{false}. For the latter constraint, this would mean that it will enforce that ${\bf S}$ is such that there does not exist an embedding of it in $T_i$. In contrast, the constraint in Equation \eqref{eq:isemb} will propagate some ${\bf E_{ij}}$ to the {\em no-match~} value, even if there exists a valid match for the respective ${\bf S_j}$ in $T_i$!
To avoid an ${\bf E_{ij}}$ being set to the {\em no-match~} value because of an assignment to ${\bf C_i}$, we can replace Equation \eqref{eq:isemb} by the half-reified $\forall i: ~{\bf C_i} \rightarrow ( \forall j ~({\bf S_j} \neq \epsilon) \rightarrow ({\bf E_{ij}} \neq |T_i|+1)~)$ during \textit{normal} search.
The \textit{sub} search then has to search for a valid embedding, even if ${\bf C_i}$ is set to \textit{false} by some other constraint.
One can do this in the \textit{sub} search of a specific transaction $i$ by replacing the respective half-reified constraint by the constraint $~{\bf C'_i} \leftrightarrow ( \forall j ~({\bf S_j} \neq \epsilon) \rightarrow ({\bf E_{ij}} \neq |T_i|+1)~)$ over a new variable ${\bf C'_i}$ that is local to this \textit{sub} search. The \textit{sub} search can then proceed as described above, by setting ${\bf C'_i}$ to \textit{true} and searching for a valid assignment to ${\bf E_{i}}$. Consistency between ${\bf C'_i}$ and the original ${\bf C_i}$ must only be checked after the \textit{sub} search for transaction $i$ is finished. This guarantees that for any solution found, if ${\bf C_i}$ is \textit{false} and so is ${\bf C'_i}$ then indeed, there exists no embedding of ${\bf S}$ in $T_i$.
\subsection{Projected frequency}
Each ${\bf E_{ij}}$ variable represents the positions in $T_i$ that ${\bf S_j}$ can still take. This is more general than the projected transaction, as it also applies when the previous symbol in the sequence ${\bf S_{j-1}}$ is not assigned yet.
Thus, we can also use the ${\bf E_{ij}}$ variables to require that every symbol of ${\bf S_j}$ must be frequent in the (generalised) projected database. This is achieved as follows.
\begin{equation}
\label{eq:freq-char-reif}
\begin{gathered}
\forall j \in 1\ldots k, \forall x \in \Sigma,
{\bf S_j} = x \rightarrow |\{ i : {\bf C_i} \wedge T_i[{\bf E_{ij}}] = x \}| \ge \theta
\end{gathered}
\end{equation}
\noindent See Appendix~\ref{app:projfreq}
for a more effective reformulation.
\optional{
Even when reformulated, this is a costly constraint which propagates to all ${\bf S_j}$, independent of the current prefix. An alternative is to devise another specialised search routine that checks the frequency over all $T_i$ and ${\bf E_{ij}}$, just before branching over an ${\bf S_j}$.
}
\subsection{Constraints}
\label{sec:user-constraints}
All constraints from Section~\ref{sec:user-constraints1} are supported in this model too. Additionally, constraints over the inclusion relations are also supported; for example, \ensuremath{max\mbox -gap}\xspace and \ensuremath{max\mbox -span}\xspace. Recall from Section~\ref{sec:constraints-max-1} that for an embedding $e = (e_1, \ldots, e_k)$, we have $\ensuremath{max\mbox -gap}\xspace_i(e) \Leftrightarrow \forall j \in 2\ldots |T_i|, (e_{j} - e_{j-1} - 1) \le \gamma$. One can constrain all the embeddings to satisfy the \ensuremath{max\mbox -gap}\xspace constraint as follows (note how $x$ is smaller than the {\em no-match~} value $|T_i|+1$):
\begin{align}
\label{eq:maxgap}
\forall i \in 1\ldots n, \forall j \in 2\ldots |T_i|, x \in 1\ldots |T_i|:
\quad {\bf E_{ij}} = x \rightarrow x - {\bf E_{i(j-1)}} \le \gamma+1
\end{align}
\ensuremath{Max\mbox -span}\xspace was formalized as $\ensuremath{max\mbox -span}\xspace_i(e) \Leftrightarrow e_{|T_i|} - e_1 + 1 \le \gamma$ and can be formulated as a constraint as follows:
\begin{align}
\label{eq:maxspan}
\forall i \in 1\ldots n, \forall j \in 2\ldots |T_i|, x \in 1\ldots |T_i|:
\quad {\bf E_{ij}} = x \rightarrow x - {\bf E}_{{\bf i}1} \le \gamma-1
\end{align}
In practice, we implemented a simple \textit{difference-except-no-match} constraint that achieves the same without having to post a constraint for each $x$ separately.
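As a small illustration (Python, names ours), the two constraints can be checked on a full assignment of the ${\bf E_i}$ variables, skipping the {\em no-match~} value exactly as the \textit{difference-except-no-match} constraint does:
\begin{verbatim}
def max_gap_ok(E, T_len, gamma):
    """Consecutive matched positions may skip at most gamma symbols of the transaction."""
    no_match = T_len + 1
    return all(E[j] == no_match or E[j] - E[j - 1] - 1 <= gamma
               for j in range(1, len(E)))

def max_span_ok(E, T_len, gamma):
    """All matched positions must fit inside a window of gamma consecutive positions."""
    no_match = T_len + 1
    matched = [e for e in E if e != no_match]
    return len(matched) <= 1 or matched[-1] - matched[0] + 1 <= gamma

E1 = [1, 3, 4, 4]              # embedding of <A,B> in T_1 = <A,C,B>; no-match value is 4
print(max_gap_ok(E1, 3, 0))    # False: the symbol C is skipped between the two matches
print(max_gap_ok(E1, 3, 1))    # True
print(max_span_ok(E1, 3, 3))   # True: the embedding spans positions 1 to 3
\end{verbatim}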
\section{Experiments}
\label{sec:experiments}
\optional{
\todo{adapt}
\renewcommand{\r}[1]{\parbox[t]{2mm}{{\rotatebox[origin=l]{60}{#1}}}}
\newcommand{\checkmark}{\checkmark}
\begin{table*}
\centering\small
\begin{tabular}{|c||cccc|cccc|cccc|}
\hline
task $\rightarrow$ & \multicolumn{4}{c}{frequent} & \multicolumn{4}{c}{closed} & \multicolumn{4}{c|}{relevant} \\\hline
solver $\downarrow$ & none & gap & span & both & none & gap & span & both & none & gap & span & both \\\hline
cSpade & \checkmark & \checkmark & \checkmark & & & & & & & & & \\\hline
Bide & & & & & \checkmark & & & & & & & \\\hline
CP-SM & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & & & & \\\hline
Bide+P.P. & & & & & & & & & \checkmark & & & \\\hline
{\bf RCP} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\\hline
\end{tabular}
\caption{Capabilities of various solvers.}
\label{tab:cando}
\end{table*}
}
The goal of these experiments is to answer the four following questions:
{\bf Q1:} What is the overhead of exposing the embedding variables in the {\em decomposed} model?
{\bf Q2:} What is the impact of using projected frequency in our models?
{\bf Q3:} What is the impact of adding constraints on runtime and on number of results?
{\bf Q4:} How does our approach compare to existing methods?
\parspace
\paragraph{Algorithm and execution environment:}
All the models described in this paper have been implemented in the Gecode solver\footnote{http://www.gecode.org}. We compare our {\em global} and {\em decomposed} models (Section~\ref{sec:first-model} and Section~\ref{sec:second-model}) to the state-of-the-art algorithms cSpade\cite{zaki2000sequence} and PrefixSpan~\cite{han2001prefixspan}. We use the author's cSpade implementation\footnote{http://www.cs.rpi.edu/~zaki/www-new/pmwiki.php/Software/} and a publicly available PrefixSpan implementation by Y. Tabei\footnote{https://code.google.com/p/prefixspan/}. We also compare our models to the CP-based approach proposed by \cite{metivierconstraint}. No implementation of this is available so we reimplemented it in Gecode. Gecode does not support non-deterministic automata so we use a more compact DFA encoding that requires only $O(n*|\Sigma|)$ transitions, by constructing it back-to-front. We call this approach {\em regular-dfa}. Unlike the non-deterministic version, this
does not allow the addition of constraints of type~3 such as \ensuremath{max\mbox -gap}\xspace.
All algorithms were run on a Linux PC with 16~GB of memory. Algorithm runs taking more than 1 hour or
more than 75\% of the RAM were terminated. The implementation and the datasets used for the experiments are available online \footnote{https://dtai.cs.kuleuven.be/CP4IM/cpsm}.
\parspace
\paragraph{Datasets:}
The datasets used are from real data and have been chosen to represent a variety of application domains.
In {\bf Unix user}\footnote{https://archive.ics.uci.edu/ml/datasets/}, each transaction is a series of shell commands executed by a user during one session. We report results on User~3; results are similar for the other users.
\noindent{\bf JMLR} is a natural language processing dataset; each transaction is an abstract of a paper from the {\em Journal of Machine Learning Research}.
\noindent{\bf iPRG} is a proteomics dataset from the application described in \cite{trypticcleavage}; each transaction is a sequence of peptides that is known to cleave in presence of a Trypsin enzyme.
{\bf FIFA} is a click stream dataset\footnote{{http://www.philippe-fournier-viger.com/spmf/}} from logs of the website of the 1998 FIFA world cup; each transaction is a sequence of webpages visited by a user during a single session. Detailed characteristics of the datasets are given in Table~\ref{tab:dataset-spec}. Note that the characteristics of these datasets are very diverse due to their different origins.
In our experiments, we vary the minimum frequency threshold ($minsup$). Lower values for $minsup$ result in larger solution sets, thus in larger execution times.
\begin{table}[t]
\centering
{\footnotesize
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
dataset & $|\Sigma|$ & $|\ensuremath{D}|$ & $||\ensuremath{D}||$ & $\displaystyle\max_{ T \in \ensuremath{D}} |T|$ & $\mbox{avg}~|T|$ & density \\\hline
Unix user & 265 & 484 & 10935 & 1256 & 22.59 & 0.085 \\\hline
JMLR & 3847 & 788 & 75646 & 231 & 96.00 & 0.025 \\\hline
iPRG & 21 & 7573 & 98163 & 13 & 12.96 & 0.617 \\\hline
FIFA & 20450 & 2990 & 741092& 100 & 36.239 & 0.012 \\\hline
\end{tabular}}
\caption{Dataset characteristics. Respectively: dataset name, number of distinct symbols, number of transactions, total number of symbols in the dataset, maximum transaction length, average transaction length, and density calculated by $\frac{||\ensuremath{D}||}{|\Sigma| \times |\ensuremath{D}|}$.}
\label{tab:dataset-spec}
\end{table}
\parspace
\paragraph{Experiments:}
First we compare the {\em global} and the {\em decomposed} models. The execution times for these models are shown in Fig.~\ref{fig:time_all}, both without and with projected frequency (indicated by {\em -p.f.}). We first look at the impact of exposing the embedding variables in the {\em decomposed} model ({\bf Q1}). Perhaps unsurprisingly, the {\em global} model is up to one order of magnitude faster than the {\em decomposed} model, which has $O(n*k)$ extra variables. This is the overhead required to allow one to add constraints over the inclusion relation. We also study the impact of the projected frequency on both models ({\bf Q2}). In the \textit{global} model this is done as part of the search, while in the \textit{decomposed} model this is achieved with an elaborate constraint formulation. For {\em global-p.f.} we always observe a speedup in Fig.~\ref{fig:time_all}. Not so for {\em decomposed-p.f.} for the two largest (in terms of $||D||$) datasets.
\begin{figure*}[h]
\centering
\includegraphics[width=0.255\textwidth]{plots_global_vs_decomposed_user3_num_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_global_vs_decomposed_jmlr_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_global_vs_decomposed_iprg_pos_6_num_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_global_vs_decomposed_fifa_time.pdf}
\caption{Global model vs. decomposed model: Execution times. (Timeout 1 hour.)}
\label{fig:time_all}
\end{figure*}
We now evaluate the impact of user constraints on the number of results and on the execution time ({\bf Q3}). Fig.~\ref{fig:numsol} shows the number of patterns and the execution times for various combinations of constraints. We can see that adding constraints
enables users to control the explosion of the number of patterns, and that the execution times decrease accordingly. The constraint propagation
allows early pruning of invalid solutions which effectively compensates the computation time of checking the constraints.
For example, on the Unix user dataset, it is not feasible to mine for patterns at 5\% minimum frequency without constraints, let alone do something with the millions of patterns found.
On the other hand, by adding constraints one can look for interesting patterns at low frequency without being overwhelmed by the number of results (see also later).
\optional{By using combinations of relevant constraints, analysts can look for interesting patterns at low frequency without being overwhelmed by the number of results. }
\begin{figure*}[t]
\centering
\includegraphics[width=0.243\textwidth]{plots_constraints_user3_num_sol.pdf}\hfill
\includegraphics[width=0.227\textwidth]{plots_constraints_jmlr_num_sol.pdf}\hfill
\includegraphics[width=0.227\textwidth]{plots_constraints_iprg_pos_num_sol.pdf}\hfill
\includegraphics[width=0.227\textwidth]{plots_constraints_fifa_num_sol.pdf} \\
\includegraphics[width=0.255\textwidth]{plots_constraints_user3_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_constraints_jmlr_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_constraints_iprg_pos_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_constraints_fifa_time.pdf}
\caption{Number of patterns (top) and execution times (bottom) for the decomposed model with various combinations of constraints.}
\label{fig:numsol}
\end{figure*}
The last experiment compares our models to existing algorithms. Fig.~\ref{fig:comparatives} shows the execution times for our {\em global} model compared with {\em regular-dfa}, PrefixSpan and cSpade ({\bf Q4}).
First, we can observe that {\em regular-dfa} is always slowest. On iPRG it performs reasonably well, but the number of transitions in the DFAs does not permit it to perform well on datasets with a large alphabet or large transactions, such as Unix user, JMLR or FIFA. Furthermore, it cannot make use of projected frequencies.
\textit{global} shows similar, but much faster, behaviour than \textit{regular-dfa}. On datasets with many symbols such as JMLR and FIFA, we can see that not using projected frequency is a serious drawback; indeed, \textit{global-p.f.} performs much better than \textit{global} there.
Of the specialised algorithms, \textit{cSpade} performs better than \textit{PrefixSpan}; it is the most advanced algorithm and is the fastest in
all experiments (not counting the highest frequency thresholds). \textit{global-p.f.} has taken inspiration from \textit{PrefixSpan} and
we can see that they indeed behave similarly, although for the dense iPRG dataset \textit{PrefixSpan} performs better than \textit{global-p.f.}, and the reverse holds for the large and sparse FIFA dataset. This might be due to implementation choices in the CP solver and the \textit{PrefixSpan} software.
\begin{figure*}[t]
\centering
\includegraphics[width=0.255\textwidth]{plots_comparatives_user3_num_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_comparatives_jmlr_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_comparatives_iprg_pos_6_num_time.pdf}\hfill
\includegraphics[width=0.24\textwidth]{plots_comparatives_fifa_time.pdf}\\
\caption{Global model vs. other approaches. Execution times. (Timeout 1 hour.)}
\label{fig:comparatives}
\end{figure*}
\paragraph{Analysis of the pattern quality:}
Finally, we use our constraint-based framework to perform exploratory analysis of the Unix user datasets.
Table~\ref{tab:qual} shows different settings we tried and patterns we found interesting. Few constraints lead to too many patterns while more constrained settings lead to fewer and more interesting patterns.
\begin{table}[b]
\centering
\footnotesize
\begin{tabular}{|l|c|c|c|}
\hline
setting & \# of patterns & interesting pattern & comment \\\hline
${\bf F_1}$ & 627 & - & Too many patterns \\\hline
${\bf F_2}$ & 512 & - & Long sequences of \lit{cd} and \lit{ls} \\\hline
${\bf F_3}$ & 36 & \seq{{\tt latex,bibtex,latex}} & User2 is using Latex to write a paper \\\hline
${\bf D_1}$ & 7 & \seq{{\tt emacs}} & User2 uses $Emacs$, his/her collaborators use $vi$ \\\hline
${\bf D_2}$ & 9 & \seq{{\tt quota, rm, ls, quota}} & User is out of disc quota \\\hline
\end{tabular}
\caption{Patterns with various settings (User~2): ${\bf F_1}$: $minfreq = 5\%$, ~ ${\bf F_2}$: $~{\bf F_1} \land \ensuremath{min\mbox -size}\xspace=3$, ~ ${\bf F_3}$: ${\bf F_2} \land \ensuremath{max\mbox -gap}\xspace=2 \land \ensuremath{max\mbox -span}\xspace=5$, ~ ${\bf D_1}$: $minfreq=5\% \land \ensuremath{discriminant}\xspace=8$ (w.r.t. all other users), ${\bf D_2}$: $minfreq=0.4\% \land \ensuremath{discriminant}\xspace=8 \land member(\lit{quota})$}
\label{tab:qual}
\end{table}
\section{Related work}
\label{sec:related}
The idea of mining patterns in sequences dates from earlier work by Agrawal et al. \cite{agrawal1995mining} shortly after their well-known work on frequent itemset mining~\cite{agrawal1994fast}. The problem introduced in \cite{agrawal1995mining}
consisted of finding frequent sequences of {\em itemsets}; that is: sequences of sets included in a database of sequences of sets.
Mining sequences of individual symbols was introduced later by \cite{mannila1997discovery};
the two problems are closely related and one can
adapt one to the other
\cite{wang2004bide}. Sequence mining was driven by the application of market basket analysis for customer data spread over
multiple days. Other applications include bio-medical ones where a large number of DNA and protein sequence datasets are available (e.g. \cite{ye2007efficient}), or natural language processing where sentences can be represented as sequences of words~(e.g. \cite{DBLP:conf/kdd/TattiV12}).
Several specialised algorithms have addressed the problem of constrained sequence mining. The cSpade algorithm \cite{zaki2000sequence} for example is an extension of the Spade sequence mining algorithm~\cite{zaki2001spade} that supports constraints of type 1, 2 and 3. PrefixSpan~\cite{han2001prefixspan} mentions regular expression constraints too. The LCMseq algorithm~\cite{OhtaniKUA09} also supports a range of constraints, but does not consider all embeddings during search.
Other sequence mining algorithms have often focussed on constraints of type 4, and on closed sequence mining in particular. CloSpan~\cite{yan2003clospan} and Bide~\cite{wang2004bide} are both extensions of PrefixSpan to mine \textit{closed} frequent sequences. We could do the same in our CP approach by adding constraints after each solution found, following~\cite{dominance_dp,kemmarictai}.
Different flavors of sequence mining have been studied in the context of a generic framework, and constraint programming in particular. They all study constraints of type 1, 2 and 4. In \cite{coquery2012sat} the setting of sequence patterns with explicit wildcards in a single sequence is studied: such a pattern has a linear number of embeddings.
As only a single sequence is considered, frequency is defined as the number of embeddings in that sequence, leading to a similar encoding to itemsets. This is extended in~\cite{JSS-13-3} to sequences of itemsets (with explicit wildcards over a single sequence). \cite{kemmarictai} also studies patterns with explicit wildcards, but in a database of sequences. Finally, \cite{metivierconstraint} considers standard sequences in a database, just like this paper; they also support constraints of type 3. The main difference is in the use of a costly encoding of the inclusion relation using non-deterministic automata and the inherent inability to use projected frequency.
\section{Conclusion and discussion}
\label{sec:conclusions}
We have investigated a generic framework for sequence mining, based on constraint programming. The difficulty, compared to itemsets and sequences with explicit wildcards, is that the number of embeddings can be huge, while knowing that one embedding exists is sufficient.
We proposed two models for the sequence mining problem: one in which the exists-embedding relation is captured in a global constraint. The benefit is that the complexity of dealing with the existential check is hidden in the constraint. The downside is that modifying the inclusion relation requires modifying the global constraint; it is hence not generic towards such constraints.
We were able to use the same \textit{projected frequency} technique as well-studied algorithms such as PrefixSpan~\cite{han2001prefixspan}, by altering the global exists-embedding constraint and using a specialised search strategy. Doing this does amount to implementing specific propagators and search strategies into a CP solver, making the problem formulation not applicable to other solvers out-of-the-box. On the other hand, it allows for significant efficiency gains.
The second model exposes the actual embedding through variables, allowing for more constraints and making it as generic as can be.
However, it has extra overhead and requires a custom two-phased search strategy.
Our observations are not just limited to sequence mining. Other pattern mining tasks such as tree or graph mining also have multiple (and many) embeddings, hence they will also face the same issues with a reified exists relation. Whether a general framework exists for all such pattern mining problems is an open question.
\subsection*{Acknowledgments}
The authors would like to thank Siegfried Nijssen, Anton Dries and R\'emi Coletta for discussions on the topic, and the reviewers for their valuable comments. This work was supported by the European Commission under project FP7-284715 ``Inductive Constraint Programming'' and a Postdoc grant by the Research Foundation -- Flanders.
\section*{Appendix}
\section{Introduction}
Our present-day theoretical framework of the Universe is the general theory of relativity (GTR; with a final piece in \cite{Ein15}), celebrating 100 years of its existence. GTR can, in an appropriate limit, be well substituted with Newtonian gravity, since it was constructed to do so: at some point GTR was adjusted to observations made in Newton's era. To explain the modern large-scale observations of the Universe with GTR, we have to insist on a nearly flat, non-monotonically accelerating Universe filled with never directly observed ingredients, the so-called dark energy (well represented by the cosmological constant $\Lambda$) and non-baryonic dark matter (DM, or CDM for cold dark matter), both having very finely-tuned properties (e.g. \citealp{Cop+06, FM13}).
Unfortunately the $\Lambda$CDM model of the Universe is mute in addressing observed dynamical regularities of galaxies, the building blocks of the Universe: the baryonic Tully-Fisher relation \citep{TF77, McG+00, McG05b}, the Faber-Jackson relation \citep{FJ76, San10}, or the mass discrepancy-acceleration correlation \citep{McG04, McG05a}. These observations reveal a strong coupling between the baryonic matter and the hypothetical DM. Moreover, they self-consistently point to the existence of a special acceleration scale (\citealt{FM12}).
Observations of our closest cosmic neighbourhood, the Local Group, highly disfavour the standard cosmology based on the particle dark matter (e.g. \citealp{Kro+10, Kro12}). One of the observations that is hard to accommodate within $\Lambda$CDM, even after baryonic physics is incorporated into the model, is the highly anisotropic distribution of the Local Group members - existence of thin co-orbiting planes of satellites around the Galaxy and M31 \citep{Paw+12b, Paw+13, Paw+14, Paw+15, Iba+13}. It has recently been discovered that similarly anisotropic distributions of satellites are possibly common in a low redshift Universe ($z < 0.05$; \citealp{Iba+14, Iba+15}).
All these issues signal that, after 100 years, we have probably reached the boundaries of GTR and it happened very naturally with empirical progress. Thus we should try to find and test a new theory that provides a better explanation for present-day observations.
The aforementioned galactic phenomenology can be well explained within the framework of
Milgromian dynamics (MD or MOND; \citealp{Mil83b, FM12} for a review of 30 years of its evolution). For instance, the thin co-orbiting planes of Local Group satellites can be a by-product of a past close fly-by that the Galaxy and M31 have undergone about 7 - 11 Gyr ago \citep{Zha+13, Paw+12a}. Thus, we can make the claim that the new theoretical framework of the Universe that we are looking for will explain why everything happens as if galaxies are Milgromian, and not Newtonian objects.
The current status of MD is quite analogous to Newton's gravitational law, explaining the Kepler laws of planetary motion, in Newton's era: MD has strong predictive power although its parent (generally-covariant) theory is still absent (\citealt{FM12}). MD proposes a modification of dynamics that is most apparent in low-acceleration regions of astrophysical systems. In MD, a test particle in a point mass gravitational field accelerates towards the point mass with magnitude $(g^{N}a_{0})^{1/2}$ if $g^{N}\ll a_{0}$, where $g^{N}$ is expected Newtonian gravitational acceleration and $a_{0}$ is a constant with units of acceleration. The constant $a_{0}\sim10^{-10}$ m s$^{-2}$ plays the role of a moderator and vice-versa when $g^{N}\gg a_{0}$ the classical limit is recovered. However, MD also states (at least when considered as modified gravity) that the internal gravitational dynamics of a system is influenced by the existence of a constant external gravitational field\footnote{In spatially varying gravitational field we also have standard tidal effects.} in which the system is embedded \citep{Mil83b}. In MD, external gravity does not decouple from internal dynamics as it does in Newtonian dynamics; the strong equivalence principle is apparently broken. This so-called external field effect (EFE) can attenuate or erase MD effects in the presence of an external field of magnitude that is larger than $a_{0}$, even when internal accelerations are well below $a_{0}$, see Sect. \ref{Sec:EFE}.
Many of the new comets entering the inner solar system can be good probes of modified dynamics, as we expect them to originate at large heliocentric distances where the Sun-comet acceleration is very small\footnote{But note that EFE always attenuates classical MD effects, see Sect. \ref{Sec:EFE} for discussion.}.
When astronomers observing motion of new comets entering the inner solar system interpret these observations in the framework of Newtonian dynamics (Newtonian astronomers) they end up with the idea of a vast reservoir (radius $\sim$ 100 kau) of bodies, the Oort cloud (OC; \citealp{Opi32, Oor50}), from which the comets are steadily replenished. In the language of the Newtonian orbital elements this happens because they find: (1) a sharp peak in the distribution of the original (i.e. before entering the planetary zone) reciprocal semi-major axes $0<1/a_{orig}\lesssim10^{-4}$ (i.e. orbital energies), and, (2) nearly isotropically distributed perihelia directions.
We reserve the terms ``near-parabolic comet'' and ``Oort-spike comet'' for a comet with semi-major axis greater than 10 kau and perihelion distance between 0 and $\sim8$ au (i.e. to be observable), as derived by a Newtonian astronomer.
In this paper, we investigate the change of the view about the solar system cometary reservoir when Newtonian dynamics is substituted with Milgromian.
We consider exclusively the quasi-linear formulation of MD (QUMOND; \citealt{Mil10}), the classical modified gravity theory that was constructed in the spirit of MOND (\citealt{Mil83b}).
We emphasize that the comet is observable only in the deep Newtonian regime where gravity is much larger than the MD threshold value $a_{0}$.
The basic structure of the hypothetical cloud in MD can be thus probed by tracing the motion of Oort-spike comets back in time, with the actual observations (positions and velocities) serving as the initial conditions. Extending our mainly qualitative analysis into quantitative type presents a profound test of MD.
In the rest of Sect. 1 we briefly review the classical picture of the cometary reservoir. In Sect. \ref{MD} we introduce a quasi-linear formulation of MD (QUMOND) and the numerical procedure of ``how to move things'' in QUMOND. Sect. \ref{models} presents various models of the solar system that is nested in the local Galactic environment, as considered in this paper. The crude picture of the Milgromian OC (MOC) is presented in Sect. \ref{MilgromianOC}. In Sect. \ref{simul} we examine past QUMOND trajectories of 31 observed near-parabolic comets. In Sect. \ref{XXZ} we investigate torquing of perihelia induced by the MD's EFE. Constraints on the MD interpolating function families, as recently found by \citet{Hee+15}, are taken into account in Sect. \ref{if}. We conclude and discuss our results in Sect. \ref{sum}.
\subsection{The classical Oort cloud}\label{classicalOC}
We refer to the OC, whose existence, size and structure are inferred by a Newtonian astronomer as ``the classical OC''.
The standard picture is that the OC with a radius of several tens of kau is a natural product of an interplay between the scattering of planetesimals by the giant planets - inflating bodies' semi-major axes - and tidal torquing by the Galaxy, and random passing stars - lifting bodies' perihelia out of the planetary zone \citep{Dun+87, Don+04}.
Conversely, the reinjection of these bodies
into the inner solar system is moderated by the same dynamical agents \citep{HT86, KQ09}. The pivotal role of the Galactic tide, in both enriching and eroding the OC, was fully recognized after the paper of \citet{HT86}. Their simplified analytical theory of the Galactic disk tide, taking only its vertical component into consideration (if we assume that the Galactic equatorial plane is ``horizontal''), as the radial components are nearly an order of magnitude weaker, reveals that the effect of the tides is analogous to the effect of the planets on comets of shorter periods -- causing the Lidov-Kozai cycles. The vertical component of the comet's orbital angular momentum is conserved and comets follow closed trajectories in the $q-\omega$ plane ($q$ is the perihelion distance and $\omega$ is the argument of perihelion). Thus, $q$ can be traded for a Galactic inclination back and forth, while $\omega$ librates around some fixed value. Since the component of the tidal force that brings comets into visibility is $\sim\sin(2b_{G})$, where $b_{G}$ is the galactic latitude of the comet's aphelion, a comet experiences the most rapid changes of $q$ per orbit when $b_{G}=\pm\pi/4$, while when $b_{G}=0$, or $b_{G}=\pm \pi/2$, the changes in the perihelion distance are nil \citep{Tor86}.
Using a sample of long periodic comets (LPCs), with periods longer than 10 000 yrs and accurately known original orbits, \citet{Del87} also noted these features observationally in the distribution of $b_{G}$ among the sample comets, confirming the significance of the Galactic tide.
The comets with $q<15$ au are usually considered lost from the OC, either to the interstellar region or a more tightly bound orbit, owing to planetary perturbations (a phenomenon also called the Jupiter-Saturn barrier). The planetary kick they receive is typically much larger than the width of the Oort spike. Thus, to be observable, a comet has to decrease its perihelion distance by at least $\sim$10 au during the revolution that precedes its possible discovery, from the zone where planets have a minor effect down to the observability zone (typically less than 5 au from the Sun). Only comets with $a>20 - 30$ kau (defining the outer OC; $a$ is the semi-major axis) experience a large enough tidal torque to cause this kind of large decrease in $q$ in one revolution (e.g. \citealp{Don+04, Ric14}). However, there are many observed Oort spike comets with much smaller semi-major axes (\citealp{DK11}, hereafter \citetalias{DK11}).
The concept of the Jupiter-Saturn barrier should actually be revised as about 15$\%$ of the near-parabolic comets can migrate through it without any significant orbital change (\citetalias{DK11}; \citealp{DK15}).
\citet{KQ09} demonstrate the importance of a special dynamical pathway capable of delivering inner OC bodies (initial $a<20$ kau, often even $<10$ kau) into the observable orbits -- but at first into the outer OC region $a>20$ kau -- by a cooperation between the planetary perturbations and the Galactic tide. According to \citet{KQ09}, the new comets entering the inner solar system could originate in both the inner, and the outer, OC, with nearly equal probability.
Passing Galactic-field stars, although their implied injection rate is 1.5 - 2 times less than that of the Galactic tide\footnote{If we do not consider very close encounters (which can occur because the process is stochastic) occurring on very large time scales, probably leading to comet showers (\citealp{Hil81}).} (\citealp{HT86}), have their own important role -- they keep the OC isotropic. The trajectories with ``course inner solar system'' would quickly be depleted if there were no passing stars. Synergy between the Galactic tide and the passing stars ensures almost steady flow of new comets into the inner solar system (\citealp{Ric+08}). Thus all above-mentioned dynamical agents are important in the delivery process.
\subsection{Puzzles}\label{puzzles}
Here we briefly review some of the persistent puzzles that challenge the classical OC theory.
Simulations of OC formation indicate that only 1 - 3 \% of all bodies that are scattered by the giant planets are trapped to the present day outer OC orbits (or $\sim 5 \%$ into the whole cloud; \citealp{Don+04, Kai+11}). This low trapping efficiency leads to some inconsistencies in the standard theory, if we presume that the outer OC is the source of the observed LPCs. Specifically, the primordial protoplanetary disk of planetesimals of the total mass 70 - 300 $M_{\oplus}$ is required to explain the observed LPC flux near Earth. Such a massive disk is at odds with giant planets formation theory, leading to their excessive migration and/or formation of additional giant planets (\citealp{Don+04} and references therein).
The existence of the mentioned special dynamical pathway described in \citet{KQ09} could serve as a possible solution to this problem, because the trapping efficiency of the inner OC can be an order of magnitude larger than that in the outer OC if the OC formation began in an open cluster (\citealp{KQ08}). In any case, the Sun was more probably born in an embedded cluster (\citealp{LL03}), encased in interstellar gas and dust. The sketched simple solution could be problematic in the presence of a vast amount of gas as in the embedded cluster environment. Aerodynamic gas drag on planetesimals prevents kilometre-sized bodies from entering the cloud and, in the most extreme case, this first stage of the solar system evolution does not make any contribution to the cloud (\citealp{Bra+07}).
Another outstanding puzzle concerns the observed population ratio between the OC and the scattered disk\footnote{It is believed that the scattered disk is the source region of the Jupiter-family comets (\citealp{DL97}).} (SD). Observations suggest that this ratio lies between 100 and 1000 but simulations that produce these two reservoirs simultaneously, yield the value of the order of 10 \citep{DL97, Lev+08, KQ09}. The populations are inferred from the observed fluxes of new LPCs and Jupiter-family comets (JFCs), which are brighter than some reference total magnitude. However, the population ratio estimated in the simulations of the OC and SD formation refers to objects larger than a given size. Accounting for the fact that ``an LPC is smaller than a JFC with the same total absolute magnitude'', \citet{BM13} arrive at the discrepancy of a factor of ``only'' 4.
As early as the first numerical simulations of the OC formation were performed, it was recognized that only bodies with semi-major axes $a$ beyond $\sim2000$ au could have their perihelia torqued out of the planetary zone into the OC (\citealp{Dun+87}). Bodies with smaller $a$ would still have their perihelia settled near planets. The observed orbital distribution of trans-Neptunian objects (TNOs) have largely agreed with this result. In any case, two striking exceptions have been found - the orbit of Sedna (\citealp{Bro+04}) and 2012 VP$_{113}$ (\citealp{TS14}). With perihelia $(q)$ of 76 and 80 au respectively, these objects no longer interact with planets, yet their large semi-major axes of $\sim 500$ and $\sim 250$ respectively, point to strong planetary perturbations in the past. Although their semi-major axes are larger than most TNOs, they are still too small to be significantly perturbed by the current local Galactic tide. Thus, these orbits remain unexplained by any known dynamical process in the solar system (\citealp{ML04}). An interesting solution to this problem was offered by \citet{Kai+11}, namely, radial migration of the Sun (\citealp{SB02}), which has not been accounted properly in any past study. The simulation of \citet{Kai+11} began with the formation of the Galaxy in a large N-body + smooth-particle-hydrodynamics simulation where solar analogues were identified. Then the OC formation around these stars (often substantial radial migrants) were followed under the influence of the four giant planets, the Galaxy and randomly passing stars, leading to the conclusion that Sedna can be a classical OC body. Unfortunately, the enhanced tidal field that is due to the Sun's radial migration (inward with respect to its current position, if we are looking back in time) also enhances erosion of the outer OC, and thus deepens the primordial disk-mass problem \citep{Kai+11}.
\subsection{Basics of MOND}
According to the MOND\footnote{From now on, when we write ``MOND'' we mean the 1983 Milgrom's formulation \citep{Mil83b}, the simple formula in Eq. (\ref{a1}). When we write ``Milgromian dynamics (MD)'' we mean general theory, like in \citet{BM84} on the classical level or in \citet{Bek04} on the Lorentz-covariant level.} algorithm (\citealp{Mil83b}), the true gravitational acceleration in spherically symetric systems has to be calculated as
\begin{eqnarray}\label{a1}
{\bf g}~=~\nu(g^{N}/a_{0})~{\bf g^{N}}~,
\end{eqnarray}
where $a_{0}\approx10^{-10}$ m s$^{-2}\sim c~H_{0}\sim c^{2}~\Lambda^{1/2}$ is the transition acceleration, $c$ is the speed of light, $H_{0}$ is the Hubble constant, $\Lambda$ is the cosmological constant, ${\bf g^{N}}$ is the expected Newtonian acceleration, $|{\bf g^{N}}|\equiv g^{N}$, and $\nu(\beta)$ is an interpolating function that reflects the underlying general theory with properties $\nu(\beta)\rightarrow 1$ for $\beta\gg1$ and $\nu(\beta)\rightarrow \beta^{-1/2}$ for $\beta\ll1$. Eq. (\ref{a1}) implies that
\begin{eqnarray}\label{a10}
\vert{\bf g}\vert\equiv g=(g^{N}a_{0})^{1/2}~~\Leftrightarrow~~g^{N}\ll a_{0}~,
\end{eqnarray}
and thus it yields exactly the well-known scaling relations \citep{McG+00, FJ76, Mil83a}.
The basics of MOND in Eq. (\ref{a1}) can be written equivalently in the form
\begin{eqnarray}\label{a11}
\mu(g/a_{0})~{\bf g}~=~{\bf g^{N}}~,
\end{eqnarray}
where $\mu(\alpha)=1/\nu(\beta)$, $\beta=\alpha~\mu(\alpha)$, satisfies $\mu(\alpha)\rightarrow 1$ for $\alpha\gg1$ and $\mu(\alpha)\rightarrow \alpha$ for $\alpha\ll1$.
Eq. (\ref{a10}), the backbone of MOND/MD, is the equivalent of stating that: (i) equations of motion are invariant under transformation $(t,{\bf r})\rightarrow(\lambda t,\lambda {\bf r})$, $\lambda\in\mathbb{R}$ \citep{Mil09b}, or (ii) the gravitational field is enhanced by anti-screening of ordinary masses in some gravitationally polarizable medium that is characterized by ``gravitational permittivity'' equal to $g/a_{0}$ \citep{BLT08, BLT09, BB14}. Eventually, MOND can be related to quantum-mechanical processes in the vacuum \citep{Mil99}. Another interesting theory taking the best of both worlds of MD and $\Lambda$CDM, is the recent DM superfluid model \citep{BK15b, bk15a}.
As MD has higher predictive power in galaxies than the $\Lambda$CDM model, although its parent (generally-covariant) theory is still missing, and as most of the classical OC lies in the MD acceleration regime, which is modulated by the external field of the Galaxy $\sim 2a_{0}$, it is natural to investigate the motion of the Oort spike comets as prescribed by MD.
Science is mainly about formulating and testing hypotheses.
Any irreconcilable tension between the theory and observations could disprove some formulations of MD incorporating Eq. (\ref{a10}).
Perhaps the application of this non-standard physics does not yield inconsistencies between the OC formation and OC body-injection models, which are calibrated by the observed LPC flux, and those of giant-planet formation, which are calibrated by the observed architecture of the outer planet region.
\section{Milgromian dynamics}\label{MD}
The simple formula of Eq. (\ref{a1}), when considered as modified gravity\footnote{Eqs. (\ref{a1}) and (\ref{a11}) can equivalently be considered as modified inertia, and the whole theory can be built around modifying the kinetic part of the classical action \citep{Mil94, Mil11}. We do not consider modified inertia theories in this paper. Note that these are generically non-local theories \citep{Mil94}.}, cannot be regarded as a universal theory that is applicable to any self-gravitating system of interest, because, for example, it does not obey the conservation laws outside of highly symmetric problems \citep{FM12}. However, it was recognised early on, by \citet{BM84} at the classical level and by \citet{Bek04} at the Lorentz-covariant level, that the construction of a universal theory, reproducing Eq. (\ref{a1}) in the special case of the static weak-field limit and spherical symmetry, is possible.
\subsection{Quasi-linear formulation of MD}\label{sec:QUMOND}
Several Lorentz-covariant theories of MD have been devised in recent years (e.g. \citealp{Bek04, San05, Zlo+07, Mil09a}), which reproduce Eq. (\ref{a1}) in the static weak-field limit and spherical symmetry, but differ from each other outside of it \citep{ZF10}. At the classical level these theories generally reduce to one of two types of modified Poisson equation \citep{BM84, Mil10}. Both classical theories are derived from an action and thus benefit from the standard conservation laws.
The theory from \cite{Mil10}, dubbed QUMOND for the quasi-linear formulation of MD, is especially attractive for its computational friendliness.
In QUMOND the field equation that determines MD potential, $\Phi$, reads
\begin{eqnarray}\label{QFE}
\nabla\cdot\left(\nabla\Phi\right)=\nabla\cdot\left[\nu\left(\vert\nabla\phi^{N}\vert/a_{0}\right)\nabla\phi^{N}\right],
\end{eqnarray}
where $\phi^{N}$ is the Newtonian potential fulfilling $\nabla\cdot(\nabla\phi^{N})=4\pi G \varrho_{b}$, and $\varrho_{b}$ is the baryonic mass density. QUMOND follows from modifying only the gravitational part of the classical action; hence the equation of motion retains its usual form
\begin{eqnarray}\label{eqm}
{\bf g}~=~-\nabla\Phi~.
\end{eqnarray}
Let us define the so-called phantom matter density (PMD)
\begin{eqnarray}\label{roph}
\varrho_{ph}~=~\frac{\nabla\cdot[\widetilde{\nu}(|\nabla\phi^{N}|/a_{0})\nabla\phi^{N}]}{4\pi G},
\end{eqnarray}
$\widetilde{\nu}(\beta)\equiv\nu(\beta)-1$. The PMD of Eq. (\ref{roph}) does not represent any real physical quantity, particle, or field. The PMD is only a mathematical object that allows us to take advantage of the QUMOND formulation of MD mentioned above and to write the equations in our intuitive Newtonian sense with ``dark matter''.
With aid of Eq. (\ref{roph}), the MD potential $\Phi$ can be written as a sum
\begin{eqnarray}\label{MDphisum}
\Phi~=~\phi^{N}+\phi_{ph}~,
\end{eqnarray}
where the phantom potential $\phi_{ph}$ fulfils the ordinary Poisson equation
\begin{eqnarray}\label{phiph}
\nabla\cdot\left(\nabla\phi_{ph}\right)~=~4\pi G \varrho_{ph}~.
\end{eqnarray}
Once the Newtonian potential is specified, PMD can be found and hence the motion in MD can be traced.
A widely used family of $\widetilde{\nu}(\beta)$ functions, satisfying the limiting behaviour of $\nu(\beta)$ required by Eq. (\ref{a1}), is
\begin{eqnarray}\label{A3}
\widetilde{\nu}_{n}(\beta)~=~\left[\frac{1+\left(1+4\beta^{-n}\right)^{1/2}}{2}\right]^{1/n}-1~,
\end{eqnarray}
see, e.g. \citet{FM12}.
It is well known that the simple $n=1$ function \citep{FB05} reproduces the rotation curves of most spiral galaxies well, e.g. \citet{Gen+11}.
However, because of its rather gradual transition to the Newtonian regime, this function is excluded by solar system tests, e.g. \citet{SJ06}, \citet{BN11}.
It is possible to construct an interpolating function with a more rapid transition to the Newtonian regime (and hence less impact on the solar system) that is, at the same time, very similar to the simple interpolating function on galactic scales, where accelerations are $\sim a_{0}$ (see Fig. 19 in \citealp{FM12}). An example of this is \cite{McG08}:
\begin{eqnarray}\label{expinterpol}
\widetilde{\nu}(\beta)~=~\left(1-e^{-\beta^{1/2}}\right)^{-1}-1~.
\end{eqnarray}
Unless stated otherwise, we use this function throughout the paper, together with the standard value $a_{0}=1.2\times10^{-10}$ m s$^{-2}$= 3700 km$^{2}$ s$^{-2}$ kpc$^{-1}$ \citep{Beg+91, Gen+11, FM12}.
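For illustration, the behaviour of these interpolating functions can be checked with a few lines of Python; the following minimal sketch (purely illustrative and not part of our computational pipeline) evaluates the family of Eq. (\ref{A3}) for $n=1$ and the exponential function of Eq. (\ref{expinterpol}), confirming the limits $\nu(\beta)\rightarrow1$ for $\beta\gg1$ and $\nu(\beta)\rightarrow\beta^{-1/2}$ for $\beta\ll1$:

\begin{verbatim}
import numpy as np

def nu_n(beta, n=1):
    # nu_n(beta) = tilde-nu_n(beta) + 1, from Eq. (A3); n = 1 is the
    # "simple" interpolating function
    return ((1.0 + np.sqrt(1.0 + 4.0 * beta**(-float(n)))) / 2.0)**(1.0 / n)

def nu_exp(beta):
    # exponential interpolating function, tilde-nu of Eq. (expinterpol) + 1
    return 1.0 / (1.0 - np.exp(-np.sqrt(beta)))

for beta in (1e-4, 1.0, 1e4):
    print(beta, nu_n(beta), nu_exp(beta), beta**-0.5)
# beta = 1e-4: both are ~beta^(-1/2) = 100 (deep-MOND limit)
# beta = 1e4 : both are ~1            (Newtonian limit)
\end{verbatim}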
MD greatly reduces the missing mass in galaxy clusters but leaves a consistent mass discrepancy of a factor of about 2 (e.g. \citealp{San03}, see also \citealp{FM12}). This fact is frequently used as a reason to completely refute any consideration of MD\footnote{The short argument of MD sceptics often goes simply as ``the Bullet cluster''. In MD theories the mass discrepancies are uniquely predicted by the distribution of baryons, but they do not need to follow the distribution of baryons exactly.}. It has been suggested that the remaining discrepancy can be avoided by a variation of $a_{0}$, with $a_{0}$ being larger in clusters than it is in galaxies (e.g., \citealp{ZF12, Kho15}). We do not develop this idea in this paper.
In MD, the remaining missing mass does not need to be non-baryonic. Instructed by history and motivated by the missing-baryons problem\footnote{$\sim 30\%$ of the baryons predicted by big bang nucleosynthesis have not yet been detected. Only a fraction of these hidden baryons would be necessary to account for the mass discrepancy in galaxy clusters in MD \citep{FM12}.}, it is entirely possible that we still do not know the whole baryonic budget of galaxy clusters. The recent discovery of more than a thousand ultra-diffuse galaxy-like objects in the Coma cluster \citep{Kod+15} further promotes this suggestion \citep{Mil15}.
\subsection{Solving for the Milgromian potential of the Galaxy on a grid}\label{PMD}
One can convert a known baryonic matter distribution into the QUMOND potential and hence the true acceleration, but in general this has to be done numerically.
According to the scheme sketched in Eqs. (\ref{eqm}) - (\ref{phiph}), we first have to know the Newtonian potential $\phi^{N}({\bf r})$, and thus we have to solve the Poisson equation $\Delta\phi^{N}({\bf r})=4\pi G \varrho_{b}({\bf r})$, where the baryonic mass density $\varrho_{b}({\bf r})$ is specified by the adopted model of the Galaxy, see Sect. \ref{Galaxymodel}. For this purpose, we employ a fast Poisson solver on a cartesian grid with the boundary condition corresponding to a point mass, $\phi^{N}(r)=-GM_{b}/r$, on the last grid point, where $r$ is the distance from the centre of mass of the baryonic mass density on the grid and $M_{b}$ is the total baryonic mass.
For a given Newtonian potential $\phi^{N}$ discretised on a cartesian grid $(x,~y,~z)$ of step $h$, the discretised version of Eq. (\ref{roph}) is given on a grid point $(i,~j,~k)$ by:
\begin{eqnarray}\label{A4}
\varrho_{ph}^{i,j,k}~=~\frac{1}{4\pi G h^{2}}&\Big{[}&
\left(\phi^{N}_{~i+1,j,k}-\phi^{N}_{~i,j,k}\right)\widetilde{\nu}_{B_{x}} \nonumber \\
&-&\left(\phi^{N}_{~i,j,k}-\phi^{N}_{~i-1,j,k}\right)\widetilde{\nu}_{A_{x}}\nonumber \\
&+&\left(\phi^{N}_{~i,j+1,k}-\phi^{N}_{~i,j,k}\right)\widetilde{\nu}_{B_{y}}\nonumber \\
&-&\left(\phi^{N}_{~i,j,k}-\phi^{N}_{~i,j-1,k}\right)\widetilde{\nu}_{A_{y}}\nonumber \\
&+&\left(\phi^{N}_{~i,j,k+1}-\phi^{N}_{~i,j,k}\right)\widetilde{\nu}_{B_{z}}\nonumber \\
&-&\left(\phi^{N}_{~i,j,k}-\phi^{N}_{~i,j,k-1}\right)\widetilde{\nu}_{A_{z}}~\Big{]}~,
\end{eqnarray}
where the $\widetilde{\nu}$ function is evaluated at the corresponding midpoint, e.g. $\widetilde{\nu}_{B_{x}}$ at $(i+1/2,~j,~k)$, $\widetilde{\nu}_{A_{y}}$ at $(i,~j-1/2,~k)$, and so on, half a cell from $(i,j,k)$ in each of the three orthogonal directions; see, e.g. \citet{FM12,Lug+13,Lug+14,Lug+15} for an illustration. The gradient of $\phi^{N}$ in $\widetilde{\nu}_{B_{x}}(|\nabla\phi^{N}|/a_{0})$ is approximated by $\nabla\phi^{N}=(4\phi^{N}_{~i+1,j,k}-4\phi^{N}_{~i,j,k}~,~\phi^{N}_{~i+1,j+1,k}-\phi^{N}_{~i+1,j-1,k}+\phi^{N}_{~i,j+1,k}-
\phi^{N}_{~i,j-1,k}~,~\phi^{N}_{~i,j,k+1}-\phi^{N}_{~i,j,k-1}
+\phi^{N}_{~i+1,j,k+1}-\phi^{N}_{~i+1,j,k-1})/(4h)$, and so forth.
Finally, knowing the PMD, we can solve for the effective Milgromian potential $\Phi({\bf r})$ in $\Delta\Phi({\bf r})=4\pi G [\varrho_{b}({\bf r})+\varrho_{ph}({\bf r})]$ on the same grid. On the last grid point we assume the boundary condition
\begin{eqnarray}\label{A5}
\Phi(r)=(G M_{b}a_{0})^{1/2}\ln(r)~,
\end{eqnarray}
where $r$ is the distance from the centre of mass of the ``mass density'' on the grid and $M_{b}$ is the total baryonic mass, in accordance with Eq. (\ref{a1}). In the whole procedure of obtaining $\Phi$, we assume that the Galaxy is isolated from external gravitational fields\footnote{To avoid confusion, we treat the Galaxy as being isolated but we consider the solar system as being embedded in the field of the Galaxy.}, see Sect. \ref{Sec:EFE} for a discussion of EFE. This is a good approximation until the internal gravity becomes comparable with the external field generated by the large-scale structure, which is of the order of $a_{0}/100$ \citep{Fam+07}. At the position of the Sun the internal gravity is $\sim a_{0}$.
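To make the above scheme concrete, the following simplified Python sketch computes the PMD sourced by an isolated point mass on a small cartesian grid, using plain central differences instead of the exact midpoint stencil of Eq. (\ref{A4}) (the grid size, resolution, and the use of an isolated point mass are illustrative assumptions only); the enclosed phantom mass is compared with the spherically symmetric expectation $M_{ph}(r)=\widetilde{\nu}(g^{N}/a_{0})\,g^{N}r^{2}/G$ that follows from Gauss's theorem:

\begin{verbatim}
import numpy as np

G     = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
A0    = 1.2e-10        # m s^-2
AU    = 1.496e11       # m

def nu_tilde(beta):
    # exponential interpolating function of Eq. (expinterpol)
    return 1.0 / (1.0 - np.exp(-np.sqrt(beta))) - 1.0

# illustrative grid: 64^3 cells, 500 au resolution, Sun at the centre
n, h = 64, 500.0 * AU
x = (np.arange(n) - n / 2 + 0.5) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
R = np.sqrt(X**2 + Y**2 + Z**2)

phi_N = -G * M_SUN / R                     # Newtonian point-mass potential
gx, gy, gz = np.gradient(phi_N, h)         # grad(phi_N), central differences
nt = nu_tilde(np.sqrt(gx**2 + gy**2 + gz**2) / A0)

# rho_ph = div[ nu_tilde(|grad phi_N|/a0) grad phi_N ] / (4 pi G), Eq. (roph)
div = (np.gradient(nt * gx, h, axis=0) + np.gradient(nt * gy, h, axis=1)
       + np.gradient(nt * gz, h, axis=2))
rho_ph = div / (4.0 * np.pi * G)

# enclosed phantom mass versus the 1D (Gauss theorem) expectation
for r_kau in (4.0, 8.0, 12.0):
    r = r_kau * 1e3 * AU
    M_num = rho_ph[R < r].sum() * h**3
    gN = G * M_SUN / r**2
    M_ana = nu_tilde(gN / A0) * gN * r**2 / G
    print(r_kau, M_num / M_SUN, M_ana / M_SUN)
\end{verbatim}

The two estimates agree to within the discretisation error of this deliberately coarse grid; the actual calculations in this paper use the midpoint scheme of Eq. (\ref{A4}) at the resolutions quoted in the relevant sections.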
\subsection{External field effect}\label{Sec:EFE}
A special feature of MD as modified gravity is that its formulation breaks the strong equivalence principle \citep{Mil86b}.
Consider a system $s$ that rests in the gravitational field of a larger system $S$, and say that $S$ generates the gravitational acceleration ${\bf g_{e}}=-\nabla\Phi_{e}$ within $s$.
We assume that the gravitational field that is acting on a body within $s$, ${\bf g}=-\nabla\Phi$, can be separated into internal ${\bf g_{i}}=-\nabla\Phi_{i}$ ($\vert{\bf g_{i}}\vert\equiv g_{i}$) and external ${\bf g_{e}}=-\nabla\Phi_{e}$ ($\vert{\bf g_{e}}\vert\equiv g_{e}$) part. We can then substitute $\nabla\phi^{N}=\nabla\phi^{N}_{~i}+\nabla\phi^{N}_{~e}=-{\bf g^{N}_{i}}-{\bf g^{N}_{e}}$ into Eq. (\ref{QFE}), where ${\bf g^{N}_{i}}$ ($\vert{\bf g^{N}_{i}}\vert\equiv g^{N}_{i}$) and ${\bf g^{N}_{e}}$ ($\vert{\bf g^{N}_{e}}\vert\equiv g^{N}_{e}$) are internal and external Newtonian gravitational accelerations. After removing divergences, dropping the curl-field and considering only directions in the plane perpendicular to the external field this gives \citep{Ang+14}
\begin{eqnarray}\label{simplealgebra}
{\bf g_{i}}=\nu\left(\frac{\sqrt{\left(g^{N}_{i}\right)^{2}+\left(g^{N}_{e}\right)^{2}}}{a_{0}}\right){\bf g^{N}_{i}}~,
\end{eqnarray}
where we have further assumed ${\bf g_{e}}=\nu(g^{N}_{e}/a_{0}){\bf g^{N}_{e}}$.
The internal gravity in $s$ depends not only on internal gravitational sources (in our case - the Sun) but also on the strength of the external field at the position of $s$ (in our case - the local strength of the Galactic gravitational field), even when the external field is considered as being constant within $s$.
This effect should not be confused with tidal forces that arise from the non-uniformity of the external gravitational field across the system $s$. A person in an (arbitrarily small) falling elevator in $s$ can find out about the existence and properties of the external gravitational field through its influence on the internal dynamics. Say $g^{N}_{e}$ is constant; if $g^{N}_{i} < a_{0} \ll g^{N}_{e}$ in Eq. (\ref{simplealgebra}), the system $s$ behaves purely in the Newtonian way, with no sign of the modified dynamics, since $\nu(g^{N}/a_{0})$ then tends to 1, just as in the case $g^{N}_{i} \gg a_{0}$. The opposite, deep-MD, regime applies when $g^{N}_{e} < g^{N}_{i} \ll a_{0}$. The standard MD effects are observed only when both the internal and the external gravity are sufficiently small $(\lesssim a_{0})$ and, moreover, the external field does not dominate over the internal one.
Finally, if the hierarchy goes as $g^{N}_{i} < g^{N}_{e} \sim a_{0}$, the dynamics is Newtonian
with a rescaled gravitational constant $G/\mu(g_{e}/a_{0})=\nu(g^{N}_{e}/a_{0})G$, where $G$ is the Newtonian gravitational constant. Moreover, the dynamics is anisotropic, with a dilatation along the direction of the external field\footnote{This is not seen in the approximative Eq. (\ref{simplealgebra}), but see Sect. \ref{MilgromianOC} where a more rigorous approach is applied and anisotropic dynamics emerge.}.
The external field of the Galaxy, ${\bf g_{e}}$, thus has to be considered carefully, beyond its tidal effects, when modelling the Milgromian OC (MOC). We use the constant value $g_{e}= V^{2}_{0}/R_{0}=240^{2}$ km$^{2}$ s$^{-2}$/(8.3 kpc)$~\doteq1.87~a_{0}$, where $V_{0}$ is the circular speed of the Sun at $R_{0}$, and $R_{0}$ is the distance between the Sun and the Galactic centre (GC), throughout the paper. Compare the values of $V_{0}$ and $R_{0}$ with, for example, those given by \citet{Sch12}. We take the Newtonian value $g^{N}_{e}$ as a solution of
\begin{eqnarray}\label{getogen}
g_{e}=\nu(g^{N}_{e}/a_{0})g^{N}_{e}~.
\end{eqnarray}
Eq. (\ref{getogen}) is known to be a good approximation at the position of the Sun \citep{BM95} (the Galaxy can be well modelled as being made up of bulge plus exponential disks).
We note that the Galactic tide is modelled as a separate effect, see Sect. \ref{tide} for details.
To better visualise the gravity-boosting effect of MD and also the importance of EFE on solar system scales, we plot the $\nu$ interpolating function as a function of heliocentric distance $\Xi$ in Fig. \ref{img:MONDSS}. The simple $\nu(\beta)=[1+(1+4\beta^{-1})^{1/2}]/2$ and the exponential $\nu(\beta)=[1-\exp(-\beta^{1/2})]^{-1}$ interpolating functions are depicted.
$\beta\equiv g_{N}/a_{0}$ is approximated with $[(g^{N}_{e})^{2} + (GM_{\odot}/\Xi^{2})^{2}]^{1/2}/a_{0}$, i.e. the vectors of the external and internal Newtonian gravitational acceleration are assumed to be perpendicular to each other for simplicity. The characteristic distance scale (the MD transition scale) is $\sim\sqrt{GM_{\odot}/a_{0}}\approx 7$ kau. Because of the action of EFE, $\nu(\beta)$ does not diverge as $\Xi\rightarrow\infty$, but asymptotes to the constant value $\nu(g^{N}_{e}/a_{0})$.
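The construction of Fig. \ref{img:MONDSS} is easily reproduced; the following hedged Python sketch first solves Eq. (\ref{getogen}) for $g^{N}_{e}$ by root finding (giving $\approx1.2$-$1.3~a_{0}$, depending on the interpolating function) and then evaluates $\nu(\beta)$ along the heliocentric distance, showing the saturation at $\nu(g^{N}_{e}/a_{0})$ beyond the transition scale:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

A0 = 1.2e-10        # m s^-2
GM = 1.327e20       # m^3 s^-2, G M_sun
AU = 1.496e11       # m
GE = 1.87 * A0      # Milgromian external field, g_e = V0^2/R0

def nu(beta):       # exponential interpolating function
    return 1.0 / (1.0 - np.exp(-np.sqrt(beta)))

# Newtonian external field from Eq. (getogen): g_e = nu(geN/a0) geN
geN = brentq(lambda x: nu(x / A0) * x - GE, 1e-12, 1e-8)
print("geN =", geN / A0, "a0")   # ~1.26 a0 here; the simple nu gives ~1.22 a0

# nu along the heliocentric distance, internal and external Newtonian
# fields assumed perpendicular (as in the figure)
for xi_au in (30.0, 1e3, 7e3, 1e5):
    giN = GM / (xi_au * AU)**2
    print(xi_au, nu(np.sqrt(giN**2 + geN**2) / A0))
print("asymptote      :", nu(geN / A0))
print("transition [au]:", np.sqrt(GM / A0) / AU)   # ~7 kau
\end{verbatim}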
EFE is important even in the high-acceleration regime, where the gravity-boosting effect of MD is very weak. It has been shown that, at $\Xi\ll \sqrt{GM_{\odot}/a_{0}}$, which is well fulfilled in the planetary region, EFE manifests itself primarily through an anomalous quadrupolar correction to the Newtonian potential, which increases with the heliocentric distance $\Xi$ \citep{Mil09c, BN11}. This dynamical effect is thus analogous to that of a massive body hidden at a large heliocentric distance, lying in the direction of the GC, ${\bf g_{e}}/g_{e}$ \citep{Hog+91,Ior10b}. As the external field ${\bf g_{e}}$ rotates with a period of $\sim210$ Myr, this corresponds to a configuration that is unfeasible in Newtonian dynamics (an overly massive body on a very distant circular orbit around the Sun). Hence the effect of MD should be distinguishable from that of a distant planet in simulations that are carried out over long timescales.
\begin{figure}
\begin{center}
\resizebox{0.85\hsize}{!}{\includegraphics{MONDSS_3.pdf}}
\caption{Interpolating functions $\nu(\beta)=[1+(1+4\beta^{-1})^{1/2}]/2$ (dot-dashed line) and $\nu(\beta)=[1-\exp(-\beta^{1/2})]^{-1}$ (solid line) as functions of heliocentric distance $\Xi$. $\beta\equiv g_{N}/a_{0}$ is approximated with $[(g^{N}_{e})^{2} + (GM_{\odot}/\Xi^{2})^{2}]^{1/2}/a_{0}$, i.e. the vectors of the external and internal Newtonian gravity are assumed to be perpendicular to each other for simplicity. The two topmost horizontal dashed lines are the values that the $\nu$-functions asymptote to as $\Xi\rightarrow\infty$ (then $g_{N}\rightarrow g^{N}_{e}$); the lowest one, $\nu=1$, marks the Newtonian limit. Vertical dashed lines from left to right indicate the aphelia of Neptune and Sedna and the distances where $GM_{\odot}/\Xi^{2}=g_{e}=1.9 a_{0}$ and $GM_{\odot}/\Xi^{2}=a_{0}$. The dotted line is $\beta^{-1/2}= [(GM_{\odot}/\Xi^{2})/a_{0}]^{-1/2}$, the deep-MOND limit of $\nu(\beta)$, in the case of no external field.}
\label{img:MONDSS}
\end{center}
\end{figure}
\section{Models}\label{models}
In Sect. \ref{Galaxymodel}, the adopted model of the Galactic matter distribution is presented and the appropriate PMD for this model is calculated. The model of the Galaxy is considered solely to estimate the matter density in the solar neighborhood and hence estimate the effect of the Galactic tide, see sections \ref{tide}, \ref{simul} and \ref{XXZ}. In Sect. \ref{sec:simpleOC} the simplified model of the MOC that is embedded in a constant external field is introduced. The majority of the qualitative analysis performed in the paper is carried out assuming this simple model.
Firstly, we erect a rectangular Galilean coordinate system $O_{\odot}(\xi',\eta',\zeta')$ that is centred on the Sun. At time $t=0$ (present time), the inertial reference frame $O_{\odot}(\xi',\eta',\zeta')$ coincides with the rotating Galactic rectangular coordinate system, i.e. $\xi'$ axis is directed from the Sun to the GC at $t=0$. We also use an inertial frame that is centred on the GC, denoted $O_{GC}(x,y,z)$, with $x-y$ plane being the Galactic plane and $x$ axis directed from the GC to the Sun at $t=0$.
\subsection{The Galaxy}\label{Galaxymodel}
We adopt the Galaxy mass model of \citet{McG08}, similar to that used in \citet{Lug+14}. \citet{McG08} concluded that MOND prefers short disk scale lengths in the range $2.0<r_{d}<2.5$ kpc. The modelled Galaxy consists of a stellar double-exponential disk with the scale length $R_{d}=2.3$ kpc and the scale height $z_{d}=0.3$ kpc with the disk mass $4.2\times10^{10}~M_{\odot}$. Moreover, it has a thin gas disk of the total mass $1.2\times10^{10}~M_{\odot}$ with the same scale length and half scale height as the stellar one and a bulge modelled as a Plummer's sphere, with the mass $0.7\times10^{10}~M_{\odot}$ and the half-mass radius 1 kpc.
\subsubsection{Phantom matter density}\label{pomodoro}
MD predicts the complex structure of a ``Newtonist's dark halo'' with a pure disk component and rounder component with radius-dependent flattening that becomes spherical at great distances \citep{Mil01}, see also Fig. 5 in \citet{Lug+15}.
We calculated the PMD of the Galaxy model according to the numerical scheme of Sect. \ref{PMD}. A cartesian $(x,y,z)$ grid with $512\times512\times256$ cells and a resolution of $0.1\times0.1\times0.02$ kpc was used. This resolution was tested to be sufficiently fine that the calculated PMD changes only negligibly if the resolution is further increased.
Fig. \ref{img:pmd} shows the vertical PMD $\varrho_{ph}(z)$ at $R=R_{0}=8.3$ kpc within $\vert z \vert<1$ kpc.
The $K_{z}$ force perpendicular to the Galactic plane is obviously enhanced in this case, compared to a Galaxy residing in a spherical DM halo, as predicted by Milgrom already in his pioneering paper \citep{Mil83b}.
Owing to small stellar samples (Hipparcos data), one cannot precisely recover the shape of $K_{z}(z)$ or of the dynamical density, only the surface density below some $\vert \overline{z} \vert$, where $\overline{z}$ is the mean distance of the samples from the Galactic
plane \citep{Bie+09}.
We should compare the calculated surface density of the baryonic matter plus the phantom matter with observations. \citet{HF04} find the dynamical surface density $\Sigma_{0}=74\pm 6$ M$_{\odot}$ pc$^{-2}$ within $\vert z \vert<1.1$ kpc.
By fitting the calculated local PMD with a superposition of three exponential disks, we find $\Sigma_{0}=80$ M$_{\odot}$ pc$^{-2}$ within $\vert z \vert<1.1$ kpc, which is consistent with the value of \citet{HF04}. Of this, 43 M$_{\odot}$ pc$^{-2}$ resides in the normal matter and 37 M$_{\odot}$ pc$^{-2}$ in the phantom matter.
\begin{figure}\centering
\resizebox{0.5\hsize}{!}{\includegraphics{roph.pdf}}
\caption{PMD of the Galaxy (solid), modelled as in Sect. \ref{Galaxymodel} at $R=R_{0}=8.3$ kpc within $\vert z \vert<1$ kpc. NFW dark matter density (dashed line) is also depicted.}
\label{img:pmd}
\end{figure}
\subsubsection{The dark matter halo of Newtonian Galaxy}\label{halo}
The Navarro-Frenk-White (NFW) halo model \citep{Nav+97}
\begin{eqnarray}\label{NFW}
\varrho_{h}~=~\frac{\varrho_{h,0}}{\delta(1+\delta)^{2}}~,
\end{eqnarray}
where $\delta\equiv r/r_{h}$, $r_{h}$ is the scale radius (the halo is spherically symmetric and $r$ is the radial coordinate), and $\varrho_{h,0}$ is a constant, represents the culmination of present-day theoretical knowledge in the standard CDM-based cosmology.
In Sect. \ref{XXZ} we aim to compare the effect of the Galaxy on the MOC and the classical OC.
We use the NFW model as the model of the Galaxy's dark matter halo in the Newtonian framework in order to find local mass density in the solar neighbourhood and quantify the Galactic tide.
CDM haloes are routinely described in terms of their virial mass, $M_{vir}$, which is the mass contained within the virial radius $r_{vir}$, and the concentration parameter $c = r_{vir}/r_{-2}$, where $r_{-2}$ is the radius at which the logarithmic slope of the density profile is $d\log \varrho_{h}/d \log r = -2$ (for the NFW profile, $r_{-2} = r_{h}$). The virial radius $r_{vir}$ is defined as the radius of a sphere, centred on the halo centre, within which the average density is $\Delta$ times the critical density $\varrho_{crit}=3H^{2}_{0}/(8\pi G)$, where $H_{0}$ is the Hubble constant. $\Delta$ varies with redshift, with $\Delta\approx100$ today. For the NFW model
\begin{eqnarray}\label{NFW2}
\frac{\varrho_{h,0}}{\varrho_{crit}}~=~\frac{\Delta}{3}~\frac{c^{3}}{\ln(1+c)-c/(1+c)}
\end{eqnarray}
holds. Thus knowing the concentration parameter $c$ we can find $\varrho_{h,0}$ of Eq. (\ref{NFW}). \citet{Boy+10} examined (NFW) haloes taken from the Millennium-II simulations at redshift zero, in the mass range $10^{11.5}$ $\leq$ $M_{vir} [h^{-1} M_{\odot}]$ $\leq$ $10^{12.5}$, a mass
range that the Galaxy's halo is likely to lie in, and determined that the probability distribution of the concentration parameter was well-fitted by a Gaussian distribution in $\ln c$, with $\langle\ln c\rangle=2.56$ and $\sigma_{\ln c}=0.272$. We adopt $c=\exp(2.56)$ as the concentration parameter of the Galaxy. The remaining degree of freedom in Eq. (\ref{NFW}), represented by the scale radius $r_{h}$, can be eliminated by fitting the circular speed $V_{0}$ at radial distance $R_{0}$: $V^{2}_{0}=V^{2}_{d,s}+V^{2}_{d,g}+V^{2}_{b}+V^{2}_{h}$, where the added squared speeds represent particular Galactic components (stellar disk, gas disk, bulge, dark halo) determined by the particular masses that are enclosed within $R_{0}$. Doing so for $V_{0}=240$ km/s and $R_{0}=8.3$ kpc, we find: $\varrho_{h,0}=5.750\times 10^{6}$ M$_{\odot}$ kpc$^{-3}$, $r_{h}=28.4$ kpc. The surface density of the NFW halo within $\vert z \vert<1.1$ kpc is 26 M$_{\odot}$ pc$^{-2}$, consistent with the lower bound on $\Sigma_{0}$ \citep{HF04}.
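The halo normalisation can be verified with the short sketch below, which evaluates Eq. (\ref{NFW2}) for $\Delta=100$ and $c=\exp(2.56)$, together with the local NFW density at $R_{0}$ for the quoted $r_{h}$; the critical density assumes $H_{0}\approx70$ km s$^{-1}$ Mpc$^{-1}$, a value adopted here for illustration only:

\begin{verbatim}
import numpy as np

G_KPC = 4.301e-6        # kpc (km/s)^2 / Msun
H0    = 70.0 / 1000.0   # km/s/kpc, assumed Hubble constant (illustrative)
DELTA = 100.0
C     = np.exp(2.56)    # adopted concentration parameter

rho_crit = 3.0 * H0**2 / (8.0 * np.pi * G_KPC)          # ~136 Msun/kpc^3
rho_h0 = rho_crit * (DELTA / 3.0) * C**3 \
         / (np.log(1.0 + C) - C / (1.0 + C))            # Eq. (NFW2)
print("rho_h,0   =", rho_h0, "Msun/kpc^3")              # ~5.7e6, cf. above

# local NFW density at the solar galactocentric radius, Eq. (NFW)
r_h, R0 = 28.4, 8.3                                     # kpc, values as quoted
d = R0 / r_h
print("rho_h(R0) =", rho_h0 / (d * (1.0 + d)**2) / 1e9, "Msun/pc^3")
\end{verbatim}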
\subsubsection{Galactic tide}\label{tide}
We use a 1D model of the Sun's motion through the Galaxy with the Sun moving in a circular orbit upon which are superimposed small vertical oscillations. For the vertical (perpendicular to the Galactic midplane) acceleration of the Sun at $z=z_{\odot}$ we assume
\begin{eqnarray}\label{osc}
\ddot{z}(z_{\odot})~=~-\frac{\partial\Phi}{\partial z}(z_{\odot})~=~-4\pi G \int^{z_{\odot}}_{0}\varrho(z)dz~,
\end{eqnarray}
where in MD, $\varrho(z)=\varrho_{b}(z)+\varrho_{ph}(z)$ is the local vertical ``matter density'', the sum of the baryonic and the phantom density at $R=R_{0}$, and $\Phi$ is the QUMOND potential of the Galaxy, see sections \ref{PMD} and \ref{Galaxymodel}. In Newtonian dynamics, $\varrho(z)=\varrho_{b}(z)+\varrho_{h}(z)$, where $\varrho_{h}(z)$ is the vertical density of the DM halo at $R=R_{0}$. Eq. (\ref{osc}) relies on the fact that the rotation curve of the Galaxy is approximately flat at the position of the Sun - for an axisymmetric model of the Galaxy, $(1/R)\partial(R\partial\Phi/\partial R)/\partial R+\partial^{2}\Phi/\partial z^{2}=4\pi G \varrho(R,z)$ with $\partial(R\partial\Phi/\partial R)/\partial R\approx0$ holds. Fig. \ref{img:sz} shows the oscillations of the Sun through the Galactic disk governed by Eq. (\ref{osc}). The oscillations have a period of 76.7 Myr. The model of the Galaxy of Sect. \ref{Galaxymodel} is employed.
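A minimal sketch of this vertical oscillation is given below. The vertical density profile is an illustrative single exponential with an assumed midplane density of $0.12~M_{\odot}$ pc$^{-3}$ and a scale height of 300 pc - not the fitted baryonic plus phantom profile used for Fig. \ref{img:sz} - so the resulting period is only indicative of the value quoted above:

\begin{verbatim}
import numpy as np

G    = 4.50e-3   # pc^3 Msun^-1 Myr^-2
RHO0 = 0.12      # Msun pc^-3, assumed midplane (baryonic + phantom) density
H    = 300.0     # pc, assumed vertical scale height (illustrative profile)

def zddot(z):
    # Eq. (osc): zddot = -4 pi G int_0^z rho(z') dz', exponential rho(z)
    col = RHO0 * H * (1.0 - np.exp(-abs(z) / H))     # Msun pc^-2
    return -np.sign(z) * 4.0 * np.pi * G * col       # pc Myr^-2

# leapfrog integration; initial conditions as in the figure below
z, vz, dt = 30.0, 7.25 * 1.0227, 0.01    # pc, pc/Myr (= 7.25 km/s), Myr
zs = []
for _ in range(20000):                   # 200 Myr
    vz += 0.5 * dt * zddot(z)
    z  += dt * vz
    vz += 0.5 * dt * zddot(z)
    zs.append(z)

zs = np.array(zs)
up = np.where(np.diff(np.sign(zs)) > 0)[0]   # upward midplane crossings
if len(up) > 1:
    print("period ~", np.diff(up).mean() * dt, "Myr")   # of order ~77 Myr
print("max |z| ~", np.abs(zs).max(), "pc")
\end{verbatim}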
\begin{figure*}\centering
\includegraphics[width=0.7\linewidth]{sunz.pdf}
\caption{\textbf{Left}: Oscillation of the Sun governed by Eq. (\ref{osc}) in MD. We used $z_{\odot}(0)=30$ pc and $v_{z_{\odot}}(0)=7.25$ km s$^{-1}$ as the initial conditions of the Sun's motion. \textbf{Middle}: Local ``total matter density'' $\varrho=\varrho_{b}+\varrho_{ph}$, as experienced by the oscillating Sun. \textbf{Right}: Local PMD, as experienced by the oscillating Sun.}\label{img:sz}
\end{figure*}
We approximate the tidal acceleration of a comet\footnote{MD is non-linear. One cannot a priori sum up partial accelerations to get a net acceleration vector. The usage of Eq. (\ref{tidesimple}) in MD is further discussed and justified in Sect. \ref{simul}.} in the inertial frame of reference $O_{\odot}(\xi',\eta',\zeta')$ centred on the Sun as $(0,~0,~\ddot{\zeta'}_{tide}\equiv\ddot{z}_{c}-\ddot{z}_{\odot})$ with
\begin{eqnarray}\label{tidesimple}
\ddot{\zeta'}_{tide}~=~-4\pi G \varrho(z_{\odot})\zeta' + \mathcal{O}(\zeta'^{2})~,
\end{eqnarray}
where $z_{c}$ and $z_{\odot}$ are vertical components (perpendicular to the Galactic midplane) of the position vector of the comet and the Sun with respect to the GC and $z_{c}=z_{\odot}+\zeta'$ holds.
We omit the $\xi'$ and $\eta'$ components of the tide since these are approximately an order of magnitude smaller than the $\zeta'$ component \citep{HT86}. We note that this is true not only in Newtonian dynamics, but also in MD, as the distribution of the phantom matter resembles that of a disk close to the Galactic midplane.
\subsection{Simple model of the Milgromian Oort cloud}\label{sec:simpleOC}
Here we introduce a simple model of the MOC embedded in an external field of constant magnitude (no tides). Accounting for the external field is a necessary step as in MD the external field does not decouple from the internal dynamics.
We assume that the Sun travels with angular frequency $\omega_{0}=V_{0}/R_{0}$ in a circular orbit of radius $R_{0}$ which lies in the Galactic midplane $(z=0)$.
Let the Newtonian external field of the Galaxy at the position of the Sun be approximated by the time-dependent vector:
\begin{eqnarray}\label{gesimple}
{\bf g^{N}_{e}}=[g^{N}_{e}~\cos(\omega_{0}t),~g^{N}_{e}~\sin(\omega_{0}t),~0]
\end{eqnarray}
in $O_{\odot}(\xi',\eta',\zeta')$.
Thus at $t=0$, ${\bf g^{N}_{e}}=g^{N}_{e}~{\bf \hat{\xi'}}$, where ${\bf \hat{\xi'}}$ is the unit vector along the $\xi'$ axis.
In Eq. (\ref{gesimple}), we assume that the Sun orbits counterclockwise in the plane $\xi'-\eta'$ of $O_{\odot}(\xi',\eta',\zeta')$. The sense of rotation of the Sun does not play a role in the analysis.
In Eq. (\ref{roph}) we now have $\nabla\phi^{N}_{S.S.}=GM_{\odot}{\bf \Xi'}/\Xi'^{3}-
{\bf g^{N}_{e}}$, where ${\bf \Xi'}=[\xi',\eta',\zeta']$,
$\Xi'\equiv(\xi'^{2}+\eta'^{2}+\zeta'^{2})^{1/2}$ and the lower index ``S.S.'' stresses that we are dealing with the solar system embedded in the external field of the Galaxy. For the PMD we thus obtain
\begin{eqnarray}\label{rophsimple}
\varrho_{ph,S.S.}~=~\frac{\nabla\widetilde{\nu}\cdot(GM_{\odot}{\bf \Xi'}/\Xi'^{3}-
{\bf g^{N}_{e}})}{4\pi G}~,
\end{eqnarray}
where $\widetilde{\nu}\equiv\widetilde{\nu}(|GM_{\odot}{\bf \Xi'}/\Xi'^{3}-
{\bf g^{N}_{e}}|/a_{0})$.
The phantom potential $\phi_{ph,S.S.}$ can be found by solving the ordinary Poisson equation
\begin{eqnarray}\label{phpot}
\Delta\phi_{ph,S.S.}=4\pi G \varrho_{ph,S.S.}~,
\end{eqnarray}
with the boundary condition: $\phi_{ph,S.S.}=-{\bf g_{e}}\cdot{\bf \Xi'}$.
The equation of motion in $O_{\odot}(\xi',\eta',\zeta')$ then reads
\begin{eqnarray}\label{eqmo}
{\bf \ddot{\Xi'}}=-\nabla\Phi_{S.S.}-{\bf g_{e}}~,
\end{eqnarray}
where $\Phi_{S.S.}=-GM_{\odot}/\Xi'+\phi_{ph,S.S.}$.
As QUMOND equations are linear when formulated with the aid of phantom matter, we can also look for a solution of Eq. (\ref{phpot}) with the vacuum boundary condition ($\phi_{ph,S.S.}=0$ at the boundary) and then evolve a body with
\begin{eqnarray}\label{eqmo2}
{\bf \ddot{\Xi'}}=-\nabla\Phi_{S.S.}~.
\end{eqnarray}
\subsubsection{Simple model of the Oort cloud - numerical solution at $t=0$}
For integration of cometary orbits throughout the paper we employ the well-tested RA15 routine \citep{Eve85} as part of the {\small MERCURY 6} gravitational dynamics software package \citep{Cha99}, which we have modified appropriately to be compatible with the MD framework. Eq. (\ref{gesimple}) has to be transformed from $O_{\odot}(\xi',\eta',\zeta')$ to a coordinate system used by {\small MERCURY 6}. This transformation and subsequent modification of Eqs. (\ref{rophsimple}) and (\ref{eqmo}) are straightforward. $O_{\odot}(\xi,\eta,\zeta)$ denotes from now on the rectangular coordinate system we use in {\small MERCURY 6}, i.e. Galilean coordinates coinciding at $t=0$ with the heliocentric ecliptical coordinate system\footnote{$O_{\odot}(\xi',\eta',\zeta')$ vs. $O_{\odot}(\xi,\eta,\zeta)$, primed are Galactic and non-primed are ecliptic coordinates at $t=0$.}.
During short time periods, compared to the period of the Sun's revolution around the GC, $\sim210$ Myr, one can approximate Eq. (\ref{gesimple}) with the constant vector ${\bf g^{N}_{e}}=[g^{N}_{e},~0,~0]$~, $g^{N}_{e}\approx 1.22~a_{0}$, in $O_{\odot}(\xi',\eta',\zeta')$.
We used this approximation to find the phantom potential $\phi_{ph,S.S.}$ experienced by a body in the MOC model that is represented by Eqs. (\ref{gesimple}) - (\ref{eqmo}). The numerical procedure is analogous to the one described in Sect. \ref{PMD}. The boundary conditions are described under Eq. (\ref{phpot}).
We employed a regular cartesian grid with $512^{3}$ cells and a resolution of 390 au that is centred on the Sun. This resolution was tested to be sufficiently fine that the trajectories of comets do not change significantly if the resolution is further increased. In the case of inner OC orbits in sections \ref{Sedna} and \ref{Sedna2} we used a resolution of 78 au with the same result. The calculated phantom acceleration, $-\nabla\phi_{ph,S.S.}$, is linearly interpolated to the instantaneous position of the body within each integration cycle.
We refer to this simplified dynamical model of the MOC as ``simple model of the MOC''.
\subsection{Escape speed}
In MD, an isolated point mass $M$ is, at distances $r\gg(GM/a_{0})^{1/2}$, the source of a potential of the form
\begin{eqnarray}\label{A55}
\Phi(r)\sim (GMa_{0})^{1/2}\ln(r)~.
\end{eqnarray}
Eq. (\ref{A55}) yields asymptotically flat rotation curves, but it also means that there is no escape from the central field produced by an isolated point mass in MD, since $V^{2}_{esc}(r)\sim\Phi(\infty)-\Phi(r)$. However, an external field (which is always intrinsically present) regularises the formerly divergent potential, so that it is possible to escape from non-isolated point masses in MD \citep{Fam+07}, as we have already seen in Sect. \ref{Sec:EFE}.
The escape speed of a comet can be well defined as \citep{Wu+07, Wu+08}
\begin{eqnarray}\label{vesc1}
V_{esc}(\xi,\eta,\zeta)=\sqrt{-2\Phi_{i}(\xi,\eta,\zeta)}~,
\end{eqnarray}
with $-\nabla\Phi_{i}={\bf \ddot{\Xi}}$.
The escape speed in the direction perpendicular to the external field can be estimated by approximating the Galactic EFE acting on the OC with the simple curl-free formula of Eq. (\ref{simplealgebra}), where now ${\bf g_{i}^{N}}=-GM_{\odot}{\bf \Xi}/\Xi^{3}$.
For the escape speed at $\Xi=r_{c}$, we then have
\begin{eqnarray}\label{vesc}
V_{esc}(r_{c})=\left[2\int^{\infty}_{r_{c}}g_{i}(\Xi)d\Xi\right]^{1/2}~,
\end{eqnarray}
where $g_{i}(\Xi)\equiv \vert{\bf \ddot{\Xi}}\vert$. We use Eq. (\ref{vesc}) in sections \ref{escapingcomets} and \ref{JSBinMOND} to estimate the binding energy of a comet.
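The integral in Eq. (\ref{vesc}) is straightforward to evaluate numerically. The Python sketch below assumes the exponential interpolating function, internal and external Newtonian fields perpendicular to each other, and $g^{N}_{e}\approx1.22~a_{0}$ (the value adopted in Sect. \ref{sec:simpleOC}); it prints the Milgromian and the purely Newtonian (Keplerian) escape speeds at a few heliocentric distances:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

A0, GM, AU = 1.2e-10, 1.327e20, 1.496e11   # SI units; GM = G M_sun
GEN = 1.22 * A0                            # Newtonian external field (assumed)

def nu(beta):                              # exponential interpolating function
    return 1.0 / (1.0 - np.exp(-np.sqrt(beta)))

def g_i(xi):
    # EFE-regularised internal gravity, Eq. (simplealgebra), fields perpendicular
    giN = GM / xi**2
    return nu(np.sqrt(giN**2 + GEN**2) / A0) * giN

def v_esc(r):
    # Eq. (vesc), integrated in log-space; a large finite upper limit suffices
    # since the integrand decays as 1/Xi^2 far beyond the transition scale
    f = lambda u: g_i(np.exp(u)) * np.exp(u)
    return np.sqrt(2.0 * quad(f, np.log(r), np.log(1e7 * AU), limit=200)[0])

for r_au in (3.0, 8.0, 7e3, 5e4):
    print(r_au, "au:", v_esc(r_au * AU) / 1e3, "km/s (MD) vs",
          np.sqrt(2.0 * GM / (r_au * AU)) / 1e3, "km/s (Newton)")
\end{verbatim}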
\begin{figure*}
\begin{center}
\resizebox{0.65\hsize}{!}{\includegraphics{OC10kadg.pdf}}
\resizebox{0.65\hsize}{!}{\includegraphics{OC50kbeh.pdf}}
\resizebox{0.65\hsize}{!}{\includegraphics{OC100kcfi.pdf}}
\caption{Past Milgromian trajectories of $3\times 100$ Monte Carlo particles projected to 3 mutually orthogonal planes of $O_{\odot}(\xi,\eta,\zeta)$. The particles were initialised with original Newtonian orbital elements: $a=10$ (\textbf{top row}), 50 (\textbf{middle row}), 100 (\textbf{bottom row}) kau, $q$ distributed uniformly on the interval $(0,8)$ au, $\cos(i)$ distributed uniformly on the interval $(-1,1)$, $\omega$ and $\Omega$ distributed uniformly on the interval $(0,2\pi)$, among the particles, and mean anomaly $M=0$. Then the particles were evolved back in time in the simple model of the MOC, Sect. \ref{sec:simpleOC}, for one Keplerian period ($\approx$ 1 Myr) in the case of $a=10$ kau, and for 10 Myr in the case of $a=50$ and 100 kau. The concentric circles at the top right corner of figures A,B, and C represent relative radii of the Milgromian (always the smaller circle) and Newtonian OC (radius $=2a$; always the larger circle) as determined by the simulation and assuming that the cloud is the smallest sphere encompassing all orbits of given initial $a$.
The Sun resides at [0,0], as indicated by the symbol.}
\label{img:oo}
\end{center}
\end{figure*}
\section{The Oort cloud as seen by a Milgromian astronomer}\label{MilgromianOC}
Do the observations lead us to hypothesise the existence of a vast cloud of bodies as a reservoir of new comets also if we interpret the data with the laws of MD? If so, how vast would the cloud be, and what rough shape would it have, compared to the classical one?
\citetalias{DK11} studied the dynamical evolution of 64 Oort spike comets with orbits that were determined with the highest precision, discovered after 1970, with their original semi-major axes larger than 10 kau and osculating perihelion distances $q>3$ au (to minimise non-gravitational effects). They identified 31 comets as dynamically new (having their first approach to the zone of significant planetary perturbations; for the detailed definition see the paper), and one of these comets as possibly hyperbolic\footnote{\citetalias{DK11} found the original reciprocal semi-major axis of the comet C/1978 G2 to be $-22.4\pm 37.8~\times 10^{-6}$ au$^{-1}$.}. The median value of the original reciprocal semi-major axis for the 30 comets on certainly bound orbits is 22.385 $\times$ 10$^{-6}$ au$^{-1}$, which corresponds to 44.7 kau; the maximum and minimum values in the sample are 250.6 and 21.9 kau, respectively. All the orbits have osculating $q<9$ au. The orbits of dynamically new comets are free from planetary perturbations and can be used to study the source region of these comets. We emphasise that a comet that is dynamically new under Newtonian dynamics is not necessarily dynamically new under MD. A reconsideration of the dynamical status in MD would require an approach similar to that of \citetalias{DK11}, with an extensive use of orbital clones to cover the large errors in the original orbital energy.
To gain initial insight, we used a more straightforward approach as a first step. Employing the aforementioned simple model of the MOC, we traced the past trajectories of 300 Monte Carlo test particles that represent a sample of Oort spike comets. We consider this fairly small sample sufficient since, in reality, observed samples are of similar or even smaller size. We considered three values of the particle's initial semi-major axis, $a=$ 10, 50, and 100 kau. For each of the three values of $a$ we initialised $100$ test particles at their perihelia - all the perihelia lie in the deep Newtonian regime - with the following randomly generated original Newtonian orbital elements: $q$ distributed uniformly on the interval $(0,8)$ au, $\cos(i)$ distributed uniformly on the interval $(-1,1)$, and $\omega$ and $\Omega$ distributed uniformly on the interval $(0,2\pi)$ among the test particles; here $q$ is the perihelion distance, $i$ is the inclination with respect to the ecliptic, $\omega$ is the argument of periapsis, and $\Omega$ is the longitude of the ascending node.
The initial Newtonian orbital elements are immediately transformed into the initial cartesian positions and velocities, which are notions independent of the dynamical framework; these are also the observables on the basis of which the orbital elements are calculated\footnote{Published catalogues and papers usually offer only the Newtonian orbital elements, not the observables.}.
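This initialisation step can be summarised by the following sketch, which draws the elements as described above and applies the standard (framework-independent) transformation from Keplerian elements at perihelion to heliocentric state vectors; only three particles are printed here for brevity:

\begin{verbatim}
import numpy as np

GM = 1.327e20      # m^3 s^-2, G M_sun
AU = 1.496e11      # m

def perihelion_state(a, q, i, omega, Omega):
    """Heliocentric position [m] and velocity [m/s] at perihelion (M = 0)."""
    e = 1.0 - q / a
    r_pf = np.array([q, 0.0, 0.0])                       # perifocal frame
    v_pf = np.array([0.0, np.sqrt(GM * (1.0 + e) / q), 0.0])
    co, so = np.cos(omega), np.sin(omega)
    ci, si = np.cos(i), np.sin(i)
    cO, sO = np.cos(Omega), np.sin(Omega)
    # rotation perifocal -> reference frame: Rz(Omega) Rx(i) Rz(omega)
    R = (np.array([[cO, -sO, 0], [sO, cO, 0], [0, 0, 1]])
         @ np.array([[1, 0, 0], [0, ci, -si], [0, si, ci]])
         @ np.array([[co, -so, 0], [so, co, 0], [0, 0, 1]]))
    return R @ r_pf, R @ v_pf

rng = np.random.default_rng(1)
a = 10e3 * AU                                 # a = 10 kau
for _ in range(3):                            # 100 particles per a in the runs
    q = rng.uniform(0.0, 8.0) * AU
    i = np.arccos(rng.uniform(-1.0, 1.0))
    omega, Omega = rng.uniform(0.0, 2.0 * np.pi, 2)
    r, v = perihelion_state(a, q, i, omega, Omega)
    print(r / AU, v / 1e3)                    # au, km/s
\end{verbatim}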
We followed the particles with $a=10$ kau back in time for one Keplerian period (which is by no means the real period assuming MD), $2\pi$ ($a$[au])$^{3/2}$/$k$ days, where $k$ is the Gaussian gravitational constant, and the particles with $a=50$ and 100 kau for 10 Myr. We do not use an integration time of one Keplerian period in the latter case because, during this time, the change in the external field direction cannot be neglected (an $a=100$ kau orbit has a Keplerian period of $T_{Kep}\approx 32$ Myr). Nevertheless, as will be shown, all the particles with initial $a=50$ and 100 kau revolve many times during 10 Myr.
By the term ``original orbit'' we want to emphasise the fact that, in reality, the outer planets and non-gravitational effects are important dynamical agents, primarily changing the value of the semi-major axis. We can imagine the ensemble of the initial orbital elements as the result of backward integration of observed osculating (instantaneous) orbits to the time when the comets/particles enter the planetary zone.
The past QUMOND trajectories of the particles are shown in Fig. \ref{img:oo}. The trajectories can typically be described as ellipses with a rapidly precessing line of apsides. Moreover, the external field often changes the perihelion distances of the particles rapidly and almost irrespectively of their initial semi-major axis. This important fact is discussed in Sect. \ref{XXZ}. In this case, the orbits change their shape dramatically, as was previously illustrated in \citet{Ior10} for the deep-MD orbits only.
A small departure from isotropy of the cloud can be seen in Fig. \ref{img:oo}. The cloud is elongated in the direction of the $\eta$ axis. Also, an indistinct pac-man shape of the $\xi-\eta$ and $\eta-\zeta$ plane cuts emerges. This is because of the external field of the Galaxy, which points in the direction of $-{\bf\hat{x}}$ of $O_{GC}(x,y,z)$ (the direction Sun-GC at $t=0$), which also approximately corresponds to the direction of -\boldmath${\hat{\eta}}$\unboldmath~ of $O_{\odot}(\xi,\eta,\zeta)$. The gravity is stronger at negative $\eta$ than at positive $\eta$. This can be seen most easily from the dependence of $\nu(\beta)$ on the vector sum in the grossly approximate formula ${\bf g_{i}}=\nu(\vert{\bf g^{N}_{i}} + {\bf g^{N}_{e}}\vert /a_{0}){\bf g^{N}_{i}}$ (note that a larger $\beta$ means a smaller $\nu(\beta)$). We also note the smaller precession rate of the projected orbits in the $\xi-\zeta$ plane. Again, this is because the $\xi$ and $\zeta$ components of the Galactic external field are much smaller than the $\eta$ component.
In any case, the most important result is that even the orbits with initial $a=100$ kau are confined within a cube of side $\sim 28$ kau. The corresponding Newtonian cube would have a side of $\sim 400$ kau. This implies that the OC, as revealed by comets with original $0<a<100$ kau and interpreted by MD, could be much more compact than the Newtonian one.
These findings look problematic for MD at first sight. The classical picture of the Galactic tide, as the most effective comet injector, is that a sufficient decrease in a comet's perihelion distance during one revolution - enough to penetrate the Jupiter-Saturn barrier - can be achieved only for comets with $a>20 - 30$ kau (e.g. \citealp{Lev+01, Ric14}), hence for comets with aphelion distances larger than 40 - 60 kau if the eccentricity is close to 1. These are much larger heliocentric distances than those reached by the particles in the MOC simulation. Also, comets of the classical inner OC, which take advantage of the Jupiter-Saturn barrier by inflating their semi-major axes, pass through this outer region ($a>20 - 30$ kau; i.e. the comets appear to be from the outer OC), where the final decrease in perihelion distance is effectively made \citep{KQ09}. All these findings are of course Newtonian. The tidal field of the Newtonian Galaxy embedded in the DM halo is a little different from the QUMOND one, especially its vertical (perpendicular to the Galactic midplane) part. Moreover, quite apart from the tides, the EFE of MD can have a decisive influence on the dynamics.
We address this issue more rigorously in Sect. \ref{XXZ}, where the injection of bodies from the inner OC (in the classical jargon) is studied. Since MD enhances the binding energy of a comet, the classical effect of the Jupiter-Saturn barrier in fact has to be revised, see Sect. \ref{JSBinMOND}. Last but not least, we have to emphasise that the steady-state distribution of the bodies in the cloud could look different in MD, see the discussion in Sect. \ref{sum}.
\begin{figure}\centering
\resizebox{0.55\hsize}{!}{\includegraphics{hyperbolic.pdf}}
\caption{Past trajectories of two slightly hyperbolic comets in the simple model of the MOC. Both were initialised at their perihelia, one with $q=8$ au, $e=1.00150$, $\omega=\pi/4$ (solid line), the other with $q=3$ au, $e=1.00055$, $\omega=\pi/4$ (dot-dashed line). All the other orbital elements were set to 0. The integration time was 20 Myr. As can be seen these comets are bound (returning) in MD. The Sun resides at [0,0], as indicated by the symbol.}
\label{fig:fig2}
\end{figure}
\subsection{Escaping comets?}\label{escapingcomets}
We use the term ``hyperbolic comet'' for a comet whose Newtonian two-body orbital energy is positive and which is, according to a Newtonian astronomer, not bound (not returning) to the solar system. In this section, we investigate the idea that slightly hyperbolic comets can be bound to the Milgromian solar system, as first pointed out by \citet{Mil86b}.
The statistics of the original reciprocal semi-major axes, $1/a_{orig}$, also reveals, besides the famous Oort spike, a small but non-negligible number of slightly hyperbolic comets ($e$ slightly larger than 1; e.g. Fig. 1b in \citet{Don+04}). These are usually considered to follow very eccentric elliptical orbits in reality, rather than to be interstellar intruders, but owing to observational errors or to inappropriate modelling of non-gravitational forces they seem to occupy hyperbolic orbits \citep{Don+04}. Thanks to the boosted gravity in MD, a slightly hyperbolic comet could still be bound to the solar system\footnote{To be thorough, in Newtonian dynamics it is, vice versa, possible for a comet to appear to be bound but to originate in interstellar space as a result of the Galactic tidal influence \citep{NJ13}. However, this special configuration is highly improbable \citep{NJ13}.}.
By comparing the escape speed at perihelion, $V_{esc}(q)$, see Eqs. (\ref{simplealgebra}) and (\ref{vesc}), with the tangential speed at perihelion, $V_{peri}(e,q)$, we can decide whether a comet is bound or not. $V_{peri}(e,q)$ can be computed in the usual Newtonian way, since the perihelion lies in the deep Newtonian regime and the speed depends only on the local gravitational field.
The escape speed, by contrast, has to be calculated from the MD gravity, no matter where we start from, see Eq. (\ref{vesc}). Assuming motion in the ecliptic plane, $i=0$, we have
\begin{eqnarray}\label{Cec}
V_{peri}~=~\sqrt{\frac{GM_{\odot}}{q}\left(1+e\right)}~.
\end{eqnarray}
The radial speed at perihelion is 0. Thus, for a given $q$, we can find the limiting eccentricity $e_{lim}$ such that $e>e_{lim}$ implies $V_{peri}(e,q)>V_{esc}(q)$. For example, $q=3$ au implies $e_{lim}=1.00075$ and $q=8$ au leads to $e_{lim}=1.00199$. Slightly hyperbolic comets with $e<e_{lim}$ are bound in MD. Fig. \ref{fig:fig2} shows the trajectories of two comets that were initialised with the orbital elements $q=3$ au, $e=1.00055$, $\omega=\pi/4$ (all the other elements set to 0) and $q=8$ au, $e=1.00150$, $\omega=\pi/4$ (all the other elements set to 0), and then integrated backwards for 20 Myr, assuming the simple model of the MOC. This is quite a long time interval over which to assume stationarity of the external field, so the real trajectories would be a little different, as the external field changes its direction. In any case, we only intend to illustrate that slightly hyperbolic comets can be bound in MD, and this qualitative result remains the same.
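These limiting eccentricities follow from $V_{peri}(e_{lim},q)=V_{esc}(q)$, i.e. $e_{lim}=q\,V^{2}_{esc}(q)/(GM_{\odot})-1$, and can be checked with the compact sketch below (same assumptions as in the escape-speed sketch above: exponential interpolating function, perpendicular fields, $g^{N}_{e}\approx1.22~a_{0}$); it returns values close to those quoted:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

A0, GM, AU = 1.2e-10, 1.327e20, 1.496e11
GEN = 1.22 * A0
nu  = lambda b: 1.0 / (1.0 - np.exp(-np.sqrt(b)))
g_i = lambda x: nu(np.sqrt((GM / x**2)**2 + GEN**2) / A0) * GM / x**2

def v_esc2(r):     # square of Eq. (vesc), integrated in log-space
    f = lambda u: g_i(np.exp(u)) * np.exp(u)
    return 2.0 * quad(f, np.log(r), np.log(1e7 * AU), limit=200)[0]

for q_au in (3.0, 8.0):
    q = q_au * AU
    e_lim = q * v_esc2(q) / GM - 1.0   # V_peri(e_lim,q) = V_esc(q), Eq. (Cec)
    print(q_au, "au:", e_lim)          # close to the values quoted above
\end{verbatim}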
Observations of comets with similar original orbital elements could inflate the former conservative estimate of the MOC size to sizes comparable with the classical OC.
In Sect. \ref{simul} we take real cometary data and look at what they say about the size and shape of the MOC.
\subsection{Do Jupiter and Saturn act as a barrier in MD?}\label{JSBinMOND}
The enhanced binding energy of MOC comets raises a question: how does the mechanism of the planetary barrier that is operating in the classical OC change in the MD case?
QUMOND conserves energy. We use Eqs. (\ref{simplealgebra}) and (\ref{vesc}) to approximate QUMOND and assume energy conservation. We take a comet at perihelion, lying deeply in the Newtonian regime, with kinematics characterised by the Newtonian orbital elements, $a$ and $q$. We can find its specific binding energy in MD, simply as
\begin{eqnarray}\label{orb}
E_{BM}~=~-\frac{1}{2}\left[V_{peri}^{2}(a,q)-V^{2}_{esc}(q)\right],
\end{eqnarray}
where we can use Eq. (\ref{Cec}) under the assumption $i=0$. We note that we have put a minus sign in front of the factor 1/2 on the RHS of Eq. (\ref{orb}) because the binding energy is defined as a positive number.
For comets with $a=10$, 50, and 100 kau, the ratio $E_{BM}/E_{BN}$, where $E_{BN}=\left[GM_{\odot}/(2a)\right]$ is the Newtonian binding energy per unit mass, is approximately equal to 3, 13, and 26 respectively. Using the 1D QUMOND approximation, Eq. (60) in \citet{FM12}, instead of Eq. (\ref{simplealgebra}), these ratios are 2, 7, and 13 respectively. For near-parabolic orbits the value of $E_{BM}$ depends only weakly on $q$.
A comet of the classical OC in a typical orbit of, for example, $a=50$ kau experiences an energy change per perihelion passage comparable to its own binding energy\footnote{This certainly depends on the orbital inclination, as can be seen in Fig. 1 in \citet{FB00}. The footnoted sentence is true for highly inclined orbits with $i\in(120,~150)~\deg$. For orbits close to the ecliptic, the planetary kick at 15 au is about 6 times larger.} at $q\sim15$ au, see Fig. 1 in \citet{FB00}. With the binding energy of this comet being $\sim10$ times larger in MD, this criterion is met at $q\sim7$ au. Roughly speaking, this means that MOC comets with $q<7$ au, instead of the classical value of $\sim15$ au, are removed from the cloud due to planetary perturbations. The planetary barrier, like the whole cloud, thus shifts inward in MD. It can still act by inflating the semi-major axes of comets with $q>7$ au, but these comets are not a priori prevented from being injected into the inner solar system, as the removed comets of the classical OC are.
\begin{figure*}
\begin{center}
\resizebox{0.75\hsize}{!}{\includegraphics{nearparabolic2.pdf}}
\caption{Past Milgromian trajectories of 31 near-parabolic comets, those identified as dynamically new in \citetalias{DK11}, projected to 3 mutually orthogonal planes of $O_{\odot}(\xi,\eta,\zeta)$.
Dynamical model of the OC includes the stationary Galactic field coupled to the QUMOND equations, see Sect. \ref{sec:simpleOC}, and the Galactic tide model, see Sect. \ref{tide}.
The comets with Keplerian periods $T_{Kep}$ shorter than 10 Myr were followed for a time $T_{Kep}$; those with $T_{Kep}>$ 10 Myr were followed for 10 Myr. The inferred MOC is much smaller than the classical OC, see Table \ref{comets} for a comparison with the Newtonian orbits. The Sun resides at [0,0], as indicated by the symbol.}
\label{img:nearp}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\resizebox{0.75\hsize}{!}{\includegraphics{1978g2_v2.pdf}}
\caption{Past Milgromian trajectory of the comet C/1978 G2, the slightly hyperbolic comet, with initial $q=6.28$ au and $e$ = 1.00014083. The Sun resides at [0,0], as indicated by the symbol.}
\label{img:1978g2}
\end{center}
\end{figure*}
\section{Observed near-parabolic comets in Milgromian dynamics}\label{simul}
Motivated by the crude picture of the OC outlined in Sect. \ref{MilgromianOC}, we have used real cometary data to investigate the origin of the near-parabolic comets in the framework of MD.
We have approximated the action of QUMOND by the simple model of the MOC, with the constant external field of the Galaxy ${\bf g_{e}}$ coupled to the QUMOND equations, see Sect.~\ref{sec:simpleOC}. The rotation of ${\bf g_{e}}$ has a period of $\sim210$ Myr; we therefore use the Keplerian period as the integration time for those comets whose Keplerian period is shorter than 10 Myr. For those with Keplerian periods longer than 10 Myr we use an integration time of 10 Myr, as all of these have much shorter real (QUMOND) ``periods'', i.e. times between two successive perihelia.
Moreover, the tidal effect, which comes from the Galactic gravity gradient across the OC, is also accounted for. The Galactic tide model is described in Sect. \ref{tide}. This model reflects the local density of the baryonic + phantom matter as determined by QUMOND for the adopted baryonic model of the Galaxy, see sections \ref{PMD} and \ref{Galaxymodel}.
We have simply added the tidal acceleration (0, 0, $\ddot{\zeta}_{tide}$), Eq. (\ref{tidesimple}), to the RHS of Eq. (\ref{eqmo}). This is only an approximation in the non-linear MD.
However, it proves to be a good approach to model EFE (assuming a spatially invariant field) and the tides as two separate effects of the Galaxy, see sections \ref{results} and \ref{amch}.
We have taken the original orbits from the sample of near-parabolic comets that were identified as dynamically new in \citetalias{DK11}, converted them to initial positions and velocities of test particles, and integrated these back in time, looking for their past Milgromian trajectories.
\subsection{Data}\label{data}
Our sample consists of the 31 comets identified as dynamically new in \citetalias{DK11}. We omitted the errors in the original semi-major axes $a_{orig}$, the only orbital element with a significant error, and instead took only their expected values, as these are fairly typical for Oort spike comets. A more exact approach would proceed in a similar manner as \citetalias{DK11} did, covering the error in the orbital energy determination with a large number of virtual orbits, but this is much more processor-time consuming in MD than in Newtonian dynamics.
The sample also contains one slightly hyperbolic comet, C/1978 G2, with perihelion $q=6.28$ au and eccentricity $e= 1.00014083$. We also note the orbit of the comet C/2005 B1, with a very large semi-major axis of 250.6 kau.
The original orbital elements of the sample comets were retrieved from \citet{Kro14} and are displayed in Table \ref{comets}. These were calculated at a heliocentric distance of 250 au, which is still well within the Newtonian regime.
\subsection{Results}\label{results}
The past QUMOND trajectories of the sample comets are shown in Fig. \ref{img:nearp}. The resulting size and overall shape of the MOC are in broad agreement with those obtained in Sect. \ref{MilgromianOC}. The trajectory of the single comet with $e>1$ in our sample, C/1978 G2, is redrawn in Fig. \ref{img:1978g2}. In the Milgromian framework the comet is bound, visiting heliocentric distances similar to those of the other comets in our sample.
In MD, we expect the Galactic tide to be stronger than in the Newtonian dynamics, see Fig. \ref{img:pmd} and Sect. \ref{tide}.
However, the changes in the orbits - perihelia positions and precession rates - induced by the Galactic tide are negligible compared to those induced by the EFE, see also Sect. \ref{XXZ}. Figs. \ref{img:nearp} and \ref{img:1978g2} would not look different if the Galactic tide model as presented in Sect. \ref{tide} were not incorporated. This is a natural consequence of the compactness of the cloud. The comets cruise up to $\Xi\sim$ 13 kau, where the tidal torquing is still minute, but EFE plays a dominant role. As mentioned above, we model the EFE and the Galactic tide as separate effects.
In Fig. \ref{img:L1974v1} we show the specific angular momentum as a function of time, $L(t)$, for the comet C/1974 V1 in the simple model of the MOC. Tides are omitted this time. Periodic changes in angular momentum are induced purely by EFE. Similar behaviour can also be found by checking the other comets in the sample. Taking the Galactic tide into account has only a minor effect, and $L(t)$ remains very much the same.
\section{Galactic torque}\label{XXZ}
We have shown that the MOC is much smaller than the classical OC. The MOC boundary, as found by tracing Oort spike comets with an initial eccentricity $e<1$ (which is the vast majority of observed comets) back in time, lies at heliocentric distances that correspond to the classical inner OC. Also, the single comet with $e>1$ in the Sect. \ref{simul} sample, C/1978 G2, moves on a bound orbit at similarly small heliocentric distances in MD. It is presumed that the tidal force at these heliocentric distances is not large enough to decrease the perihelion distance sufficiently quickly for a comet to bypass the Jupiter-Saturn barrier, e.g. \citet{Don+04}. In MD, the compactness of the OC does not need to be an obstacle for the injection of a comet into the inner solar system, because of the action of EFE.
\subsection{Angular momentum change}\label{amch}
In this section, we preserve the classical idea of the Jupiter-Saturn barrier at $\sim$ 15 au, although in Sect. \ref{JSBinMOND} we have shown that the barrier actually shifts inwards in MD. This shift naturally increases the inflow of comets.
To illustrate the capability of the EFE to deliver OC bodies into the inner solar system, we have run simulations similar to those in Sect. \ref{MilgromianOC}. In this case, we intended to mimic a sample of comets that are about to enter/leave the planetary zone. Consequently, we chose the initial perihelion distance of each particle, $q$, to be a random number that is uniformly distributed on the interval $(15,100)$ au. All the other initial orbital elements of the test particles were randomly generated in the same way as in Sect. \ref{MilgromianOC}.
The orbital elements were, at $t=0$, transformed into initial cartesian positions and velocities, the real observables.
We employed two distinct dynamical models of the OC, one of which is Milgromian and the other, Newtonian: (i) the simple model of the MOC, and, (ii) Sun + Galactic tide in the Newtonian framework.
We verified that incorporating the Galactic tide model, as described in Sect. \ref{tide}, into the simple model of the MOC has negligible effects over times that correspond to one revolution of a comet. This is obviously because the comets of the MOC orbit at $\Xi\lesssim$ 15 kau, and at these heliocentric distances the tidal force is too weak. Two distinct $\varrho(z_{\odot})$ were used: $\varrho(z_{\odot})=\varrho_{b}(z_{\odot})+\varrho_{ph}(z_{\odot})$ in MD and $\varrho(z_{\odot})=\varrho_{b}(z_{\odot})+\varrho_{h}(z_{\odot})$ in Newtonian dynamics, where $\varrho_{b}(z_{\odot})$ is the local vertical density of baryons and $\varrho_{h}(z_{\odot})$ is the local vertical density of the NFW DM halo.
Figs. \ref{img:dl00} ($a=10$ kau), \ref{img:dl11} ($a=50$ kau), and \ref{img:dl22} ($a=100$ kau) show the heliocentric distance, $\Xi(t)$, and the change in the magnitude of the specific angular momentum, $\delta L(t)\equiv L(t)-L(0)$, of the particles as a function of time. The time window followed, $T_{rev}$, corresponds approximately to the one revolution that follows the perihelion initialisation. In Figs. \ref{img:dl0} ($a=10$ kau), \ref{img:dl1} ($a=50$ kau), and \ref{img:dl2} ($a=100$ kau) we show the value of $\Delta L\equiv L_{max}-L_{min}$ of the individual particles, where $L_{max}\equiv [L(t)]_{max}$ and $L_{min}\equiv [L(t)]_{min}$ are the maximal and minimal values of $L(t)$ during $T_{rev}$.
\begin{figure}
\begin{center}
\resizebox{0.65\hsize}{!}{\includegraphics{L.pdf}}
\caption{Specific angular momentum $L$ as a function of time for the comet C/1974 V1. We have assumed the simple model of the MOC (tides are omitted). The periodic changes are induced solely by EFE. The negative time means that we are dealing with the past trajectory of the comet.}
\label{img:L1974v1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{dl_10k1.pdf}}
\caption{Heliocentric distance, $\Xi(t)$, and change in magnitude of the specific angular momentum, $\delta L(t)\equiv L(t)-L(0)$, as a function of time, $t$, for 100 Monte Carlo test particles initialised with $a=10$ kau, and $q$ uniformly distributed on the interval $(15,100)$ au. The top row represents an output of the Milgromian simulation, the bottom row, the Newtonian simulation. In the MD simulation, the follow-up time, $T_{rev}$, is set to 0.26 Myr (see the top left quarter of the figure for motivation); in the Newtonian simulation, $T_{rev}$ is set to the Keplerian period $T_{Kep}$($a$=10 kau) $\approx$ 1 Myr.}
\label{img:dl00}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\resizebox{0.65\hsize}{!}{\includegraphics{DL_10k.pdf}}
\caption{Histogram of $\Delta L\equiv L_{max}-L_{min}$ for 100 Monte Carlo test particles initialised with $a=$10 kau and $q$ uniformly distributed on the interval $(15,100)$ au. Here $L_{max}$ ($L_{min}$) is maximal (minimal) magnitude of the specific angular momentum, as found during one revolution, $T_{rev}$, succeeding the initialisation of a comet at perihelion. In the MD simulation $T_{rev}=0.26$ Myr, in the Newtonian simulation $T_{rev}=T_{Kep}$($a$ = 10 kau) $\approx$ 1 Myr. A single bin corresponds to a single test particle in the simulation. Solid bins are $\Delta L$ in the simple model of the MOC, shaded bins (here barely visible), stacked on the solid bins, are $\Delta L$ in Newtonian dynamics, with the gravity of the Sun and the Galactic tide accounted for.}
\label{img:dl0}
\end{center}
\end{figure*}
When interpreting these figures, we have to bear in mind the timescales of the angular momentum changes, which are $\sim4$ ($a=10$ kau) to $\sim80$ ($a=100$ kau) times shorter in the MOC than in the classical OC. We also note that the particles that are initialised with $a$ as large as 100 kau are travelling in $\Xi\lesssim 15$ kau in the MOC.
It is evident that the injection could be very efficient in the MOC, even though the MOC is much more radially compact than the classical OC. In MD, the rapid changes in the angular momentum are induced by EFE. Moreover, bodies that are hidden in the classical OC - i.e. unable to reach the observability region, either because of their immunity to the action of the external perturbers (the hypothesized inner core) or because of their inability to overshoot the Jupiter-Saturn barrier (the inner OC bodies with $a\sim 10$ kau) - can, because of EFE, also be delivered from the MOC into the inner solar system, see also Sect. \ref{Sedna}.
Figs. \ref{img:dl1} and \ref{img:dl2} show that Newtonian tides (OC) overcome EFE (MOC) in $\Delta q$ per revolution only for comets with $a$ as large as $\sim$ 50 - 100 kau. This is 9 out of 30 comets with $e<1$ in the Sect. \ref{simul} sample.
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{dl_50k2.pdf}}
\caption{Same as for Fig. \ref{img:dl00}, but now the particles are initialised with $a=50$ kau. The top row represents an output of the Milgromian simulation, the bottom row, the Newtonian simulation. In MD simulation, the follow-up time, $T_{rev}$, is set to 0.4 Myr (see top left quarter of this figure for motivation), in the Newtonian simulation $T_{rev}$ is set to be the Keplerian period $T_{Kep}$($a$=50 kau) $\approx$ 11 Myr.}
\label{img:dl11}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\resizebox{0.65\hsize}{!}{\includegraphics{DL_50k.pdf}}
\caption{Same as for Fig. \ref{img:dl0} but now the particles are initialized with $a=50$ kau. In the MD simulation $T_{rev}=0.4$ Myr, in the Newtonian simulation $T_{rev}=T_{Kep}$($a$ = 50 kau) $\approx$ 11 Myr. A single bin corresponds to a single test particle in the simulation. Solid bins are $\Delta L$ in the simple model of the MOC, shaded bins, stacked on the solid bins, are $\Delta L$ in Newtonian dynamics, with the gravity of the Sun and the Galactic tide accounted for.}
\label{img:dl1}
\end{center}
\end{figure*}
\subsection{Sedna}\label{Sedna}
We have shown that cometary perihelia can be very effectively torqued in and out by EFE, even for those comets that travel at fairly small heliocentric distances, $\sim10$ kau.
Is torquing due to EFE important at even smaller heliocentric distances? Is EFE responsible for the shape of the current puzzling orbit of the trans-Neptunian planetoid Sedna? To address these questions we ran the following simulation: 100 Monte Carlo test particles (Sedna progenitors - Sednitos) with initial $a=524$ au (Sedna's heliocentric $a$ at epoch 2,457,000.5 JD, according to JPL's service {\small HORIZONS}) and, among the particles, uniformly distributed $q$ in bounds $(5,30)$ au, $i$ in bounds $(0,10)~\deg$, $\omega$ and $\Omega$ in bounds $(0,2\pi)$, were initialised at their perihelia and then followed for 5.9 Myr in the simple model of the MOC. These initial orbital elements have been chosen to mimic the protoplanetary disk origin of Sedna. We assumed that Sednito's semi-major axis was already pumped to the current Sedna's value at the beginning of the simulation, owing to past planetary encounters. The planets were omitted in the simulation.
In Fig. \ref{img:Sedna} we show $\Delta q \equiv q_{max}-q_{min}$ for 100 simulated Sednitos, where $q_{max}$ and $q_{min}$ are a Sednito's maximal and minimal values of $q$ during the 5.9 Myr. As can be seen, in some cases $\Delta q$ is as large as $100$ au. At the end of the simulation, 7 Sednitos had $q\sim75$ au, hence very close to Sedna's perihelion distance. Sedna-like orbits (here simplified as specific $a$ and $q$ values) can be produced by EFE in a few Myr. The catch is that, as $q$ oscillates in and out on timescales of millions of years, trans-Neptunian bodies with orbits similar to Sedna's could possibly wander into the inner solar system. It is also possible that substantial migrants have already been removed from these orbits and the current population is relatively stable against migration.
Fig. \ref{img:Sednitos} depicts perihelion distance as a function of time for all known bodies with $q>30$ au and $a>150$ au in the simple model of the MOC during the next 10 Myr. Initial orbital elements of the bodies were retrieved from \citet{TS14}, see their Table 1 and extended data Table 2. Only one of the followed objects, 2010 GB$_{174}$, migrates under 30 au in the next 10 Myr. To investigate the migration of these objects thoroughly, we would have to improve our dynamical model of the MOC to account for the change in the external field direction, since long integration times would be necessary, and also to account for the planetary perturbations. We leave this task to our future studies.
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{dl_100k3.pdf}}
\caption{Same as for Fig. \ref{img:dl00}, but now the particles are initialised with $a=100$ kau. The top row represents an output of the Milgromian simulation, the bottom row, the Newtonian simulation. In the MD simulation, the follow-up time, $T_{rev}$, is set to 0.4 Myr (see top left quarter of this figure for motivation), in the Newtonian simulation, $T_{rev}$ is set to be the Keplerian period $T_{Kep}$($a$=100 kau) $\approx$ 32 Myr.}
\label{img:dl22}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\resizebox{0.65\hsize}{!}{\includegraphics{DL_100k.pdf}}
\caption{Same as for Fig. \ref{img:dl0}, but now the particles are initialized with $a=100$ kau. In the MD simulation $T_{rev}=0.4$ Myr, in the Newtonian simulation, $T_{rev}=T_{Kep}$($a$ = 100 kau) $\approx$ 32 Myr. A single bin corresponds to a single test particle in the simulation. Solid bins are $\Delta L$ in the simple model of the MOC, shaded bins, stacked on the solid bins, are $\Delta L$ in Newtonian dynamics, with the gravity of the Sun and the Galactic tide accounted for.}
\label{img:dl2}
\end{center}
\end{figure*}
\section{Varying interpolating function and $a_{0}$}\label{if}
\citet{Hee+15} (hereafter \citetalias{Hee+15}) recently constrained the most frequently used families of the MD interpolating (transition) function (e.g. Sect. 6.2 in \citealp{FM12}) with the Cassini spacecraft radio tracking data \citep{Hee+14}. These constraints come from EFE, which produces a small quadrupole correction to the Newtonian potential in the planetary region.
They derived the following constraints (on $n$):
\begin{subequations}
\begin{align}
\nu_{n}(\beta)
&~=~\left[\frac{1+\left(1+4\beta^{-n}\right)^{1/2}}{2}\right]^{1/n}~~~~~~~~~~~~~,~~~n\geq7~,\label{if1}\\
\widehat{\nu}_{n}(\beta) &~=~\left[1-\exp(-\beta^{~n/2})\right]^{-1/n}~~~~~~~~~~~~~~~,~~~n\geq6~,\label{if2}\\
\overline{\nu}_{n}(\beta)
&~=~\left[\widehat{\nu}_{2n}(\beta)\right] +\left(1-\frac{1}{2n}\right)\exp(-\beta^{n})~~~,~~~n\geq2~,\label{if3}
\end{align}
\end{subequations}
where Eqs. (\ref{if1}) - (\ref{if3}) are three different families of the interpolating function $\nu$.
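For reference, the three families are straightforward to evaluate numerically; the following Python fragment (illustrative only) implements Eqs. (\ref{if1})-(\ref{if3}) and allows a quick check of the identity noted below:
\begin{verbatim}
import numpy as np

def nu(beta, n):         # the nu_n family
    return ((1.0 + np.sqrt(1.0 + 4.0 * beta**(-float(n)))) / 2.0)**(1.0 / n)

def nu_hat(beta, n):     # the nu-hat_n family
    return (1.0 - np.exp(-beta**(n / 2.0)))**(-1.0 / n)

def nu_bar(beta, n):     # the nu-bar_n family
    return nu_hat(beta, 2.0 * n) + (1.0 - 1.0 / (2.0 * n)) * np.exp(-beta**float(n))

beta = np.linspace(0.5, 3.0, 11)
assert np.allclose(nu_hat(beta, 1.0), nu_bar(beta, 0.5))   # nu-hat_1 = nu-bar_{1/2}
\end{verbatim}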
We note that $\widehat{\nu}_{1}=\overline{\nu}_{1/2}$. So far we have only used $\overline{\nu}_{1/2}$ in our calculations.
But according to the findings of \citetalias{Hee+15} this interpolating function is ruled out in the planetary region.
\citetalias{Hee+15} also revised the value of $a_{0}$, based on rotation curve fits (taking care of whether EFE plays a role), and found the optimum (best-fit) value for a given interpolating function. For example, $\overline{\nu}_{n\geq 2}$ yields $a_{0}\lesssim 8.1\times 10^{-11}$ m s$^{-2}$, where the boundary value is somewhat smaller than the standard value $a_{0}=1.2\times 10^{-10}$ m s$^{-2}$, but still well compatible with the baryonic Tully-Fisher or Faber-Jackson relation. In what follows, we consider only the $\overline{\nu}_{n}$ family.
Fig. \ref{img:cassiniif} shows $\overline{\nu}_{\alpha}(\beta)$ for $1\leq\beta\leq3$ and three different values of $\alpha$: 0.5, 1.5, and 2.0. The rightmost dashed vertical line indicates the smallest possible $\beta$ for the solar system in the field of the Galaxy, assuming $\overline{\nu}_{1.5}(\beta)$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$. This was found by assuming external field dominance and solving for $g^{N}_{e}$ in $g_{e}-\overline{\nu}_{1.5}(g^{N}_{e}/a_{0})~g^{N}_{e}=0$, which yields $\beta=2.75$. If the internal field is non-negligible then $\beta$ is always larger. We note that $\overline{\nu}_{1.5}(2.75)\approx\overline{\nu}_{2.0}(2.75)$; because of this, and because of the numerical convenience of using $\overline{\nu}_{1.5}$, we adopt the combination $\overline{\nu}_{1.5}$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ in our calculations.
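The quoted $\beta$ is the root of a simple one-dimensional equation; a minimal sketch of the computation follows (illustrative only; the numerical value adopted for $g_{e}=V_{0}^{2}/R_{0}$ below is an assumption, the actual Galactic parameters being those adopted earlier in the paper):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def nu_bar(beta, n=1.5):
    nu_hat_2n = (1.0 - np.exp(-beta**n))**(-1.0 / (2.0 * n))
    return nu_hat_2n + (1.0 - 1.0 / (2.0 * n)) * np.exp(-beta**n)

a0 = 8.1e-11          # m s^-2
g_e = 2.2e-10         # m s^-2, illustrative value of V_0^2 / R_0

# external-field dominance: solve g_e - nu_bar(g_e^N / a0) g_e^N = 0 for g_e^N
gN_e = brentq(lambda g: g_e - nu_bar(g / a0) * g, 1e-12, 1e-8)
beta_min = gN_e / a0  # ~2.7 for this illustrative g_e; the adopted parameters give 2.75
\end{verbatim}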
PMD, Eq. (\ref{rophsimple}), as a function of heliocentric distance, $\Xi$, is depicted in Fig. \ref{img:cassini2} along the $\xi$, $\eta$, and $\zeta$ axes of $O_{\odot}(\xi,\eta,\zeta)$. The simple model of the MOC, the $\overline{\nu}_{1.5}(\beta)$ interpolating function, and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ were assumed. The peaks in the positive values of $\varrho_{ph}$ are $\sim$800 (left) and $\sim$400 (right) times smaller than in the case of $\overline{\nu}_{0.5}(\beta)$ and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$. We note that MD predicts the existence of regions with negative PMD in the solar system, which is immersed in the gravitational field of the Galaxy \citep{Mil86a}.
This peculiarity of the PMD makes MD potentially distinguishable from the DM hypothesis by observations.
We can conclude that, by adopting $\overline{\nu}_{\alpha}(\beta)$ with $\alpha=1.5$ or $2.0$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ - or even larger $\alpha$ and smaller $a_{0}$ \citepalias{Hee+15} - we can expect the MOC to be very much Newtonian, since EFE, in this case, essentially suppresses the Milgromian regime at any distance from the Sun. The MOC is then of comparable overall size, comets have similar binding energies, and the Jupiter-Saturn barrier operates in a similar way to that found in Newtonian dynamics. Aphelia directions of observed, dynamically new comets were shown to avoid Galactic latitudes, $b_{G}$, close to the polar caps and the Galactic equator \citep{Del87}. This is conventionally considered to be a signature of the Galactic-tide-induced injection of the comets \citep{Tor86, Del87}. We note that in the case of $\overline{\nu}_{0.5}(\beta)$ and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$ the MOC was shown to be compact and weakly influenced by the Galactic tide, which therefore also suggests that an interpolating function that is steeper in the transition regime should be favoured.\footnote{In Newtonian dynamics, anisotropy in the $b_{G}$ distribution (see Sect. \ref{classicalOC}) is introduced owing to the existence of a preferred direction, perpendicular to the Galactic midplane. By considering the MOC with $\overline{\nu}_{0.5}(\beta)$ and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$ - where injection of a comet due to tides is secondary - there is also a preferred direction in the cloud, which is, although varying with time, the direction of the external field of the Galaxy. Perhaps the long-term effect of the external field is to produce this kind of anisotropy in the $b_{G}$ distribution for MOC comets.}
In any event, even when interpolating functions and $a_{0}$ that are in line with the Cassini data are applied, some effects of MD can still be present. Torquing of cometary perihelia owing to EFE at heliocentric distances where the Galactic tide is weak can be important. To illustrate and quantify this effect, we plotted $\Delta L$ for the same $a=10$ kau Monte Carlo comets as in Sect. \ref{amch} but now assuming QUMOND with $\overline{\nu}_{1.5}(\beta)$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ instead of $\overline{\nu}_{0.5}(\beta)$ and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$. One revolution of a comet now corresponds well to a Keplerian period since we are effectively in the Newtonian regime.
Although \citetalias{Hee+15} found that $\alpha$ is constrained to $\alpha\geq2.0$, we have used $\alpha=1.5$ because of its numerical convenience, i.e. to speed up our numerical calculations. Our aim is to see how effective the torquing owing to EFE is when the whole MOC is essentially in the Newtonian regime, see Fig. \ref{img:cassiniif}. In this sense, $\alpha=1.5$ with $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ serves us well.
Fig. \ref{img:dl0_nu15} shows the value of $\Delta L\equiv L_{max}-L_{min}$ of the individual comets, where, again $L_{max}\equiv [L(t)]_{max}$ and $L_{min}\equiv [L(t)]_{min}$ are the maximal and the minimal value of $L(t)$ during $T_{rev}$, assumed to be the Keplerian period $T_{Kep}$, since now $T_{rev}\approx T_{Kep}$.
We can directly compare Figs. \ref{img:dl0} and \ref{img:dl0_nu15}. On average, $\Delta L$ is naturally smaller in the case of a steeper interpolation function and smaller $a_{0}$, but extremal values in both cases are similar. This could explain why we observe comets with relatively small semi-major axes, which should be prevented from being delivered into the inner solar system due to the Jupiter-Saturn barrier \citepalias{DK11}, and at the same time, an imprint of the Galactic tide as inferred from the majority of observed comets.
\begin{figure}\centering
\resizebox{0.7\hsize}{!}{\includegraphics{Sedna.pdf}}
\caption{$\Delta q \equiv q_{max}-q_{min}$ per 5.9 Myr for 100 Sedna progenitors (Sednitos) moving under the action of EFE. Sednitos were initialised at perihelia with $a=524$ au, uniformly distributed $q\in(5,30)$ au, $i\in(-10,10)\deg$, $\omega$ and $\Omega\in(0,2\pi)$.}
\label{img:Sedna}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.7\hsize}{!}{\includegraphics{Sednitos.pdf}}
\caption{Perihelion distance as a function of time under the action of EFE for the known population of trans-Neptunian bodies with $a>150$ au and $q>30$ au. Thick solid lines are 2012 VP$_{113}$ and Sedna.}
\label{img:Sednitos}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{cassiniif.pdf}}
\caption{Transition to the Newtonian regime for different alphas in $\overline{\nu}_{\alpha}$ family. Vertical dashed lines, from left to right, mark values $\beta\equiv g^{N}/a_{0}=1.20$, 2.10 and 2.75, respectively.
These came from $g_{e}-\overline{\nu}_{\alpha}(g^{N}_{e}/a_{0})~g^{N}_{e}=0$, using $\alpha=0.5$,
$a_{0}=1.2\times10^{-10}$ m s$^{-2}$ ($\beta=1.20$); $\alpha=0.5$,
$a_{0}=8.11\times10^{-11}$ m s$^{-2}$ ($\beta=2.10$); $\alpha=1.5$,
$a_{0}=8.11\times10^{-11}$ m s$^{-2}$ ($\beta=2.75$), and assuming that the external field dominates over the internal, $g^{N}\approx g^{N}_{e}$. When this is not the case the values of $\beta$ are even larger. Note that $\overline{\nu}_{1.5}(2.75)\approx\overline{\nu}_{2.0}(2.75)$.
$g_{e}$ is always the same constant $V^{2}_{0}/R_{0}$, but what matters is that the value of $g_{e}$ varies in units of $a_{0}$ as one varies $a_{0}$.}
\label{img:cassiniif}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.85\hsize}{!}{\includegraphics{cassini.pdf}}
\caption{PMD in the simple model of the MOC in the direction of the $\xi$ (solid line), $\eta$ (dotted line), and $\zeta$ (dashed line) axes of $O_{\odot}(\xi,\eta,\zeta)$. $\overline{\nu}_{1.5}$ and $a_{0}=8.11\times10^{-11}$ m s$^{-2}$ are assumed.}
\label{img:cassini2}
\end{center}
\end{figure}
\begin{figure*}\centering
\resizebox{0.65\hsize}{!}{\includegraphics{DL10k_nu15.pdf}}
\caption{Histogram of $\Delta L\equiv L_{max}-L_{min}$ for 100 Monte Carlo test particles initialised at perihelia with $a=$10 kau and $q$ uniformly distributed on the interval $(15,100)$ au and then evolved in the simple model of the MOC for $T_{rev}\approx T_{Kep}\approx 1$ Myr. $\overline{\nu}_{1.5}(\beta)$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ were used here instead of $\overline{\nu}_{0.5}(\beta)$ and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$ as in Fig. \ref{img:dl0}. A single bin corresponds to a single test particle in the simulation.
}
\label{img:dl0_nu15}
\end{figure*}
\subsection{Sedna}\label{Sedna2}
We are interested in how important the torquing of perihelion is owing to EFE in the trans-Neptunian region when one substitutes $\alpha=0.5$ of $\overline{\nu}_{\alpha}(\beta)$ interpolating function and the standard value $a_{0}=1.2\times10^{-10}$ m s$^{-2}$ with larger values of $\alpha$ and $a_{0}\leq 8.1\times 10^{-11}$ m s$^{-2}$ \citepalias{Hee+15}.
We ran a QUMOND simulation similar to that in Sect. \ref{Sedna}, assuming $\overline{\nu}_{1.5}(\beta)$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$.
We used the same initial orbit assignment for Sednitos as in Sect. \ref{Sedna}, except for the value of $q$ which is now a random number uniformly distributed on the interval (25, 30) au, to maximise $\Delta q$.
In Fig. \ref{img:Sedna2}, we show $\Delta q$ per 10.0 Myr for 100 simulated Sednitos. The extremal $\Delta q$ is about 50 au per 10.0 Myr. At the end of the simulation, two Sednitos had $q\sim75$ au, hence very close to Sedna's perihelion distance. Sedna-like orbits (here simplified as specific $a$ and $q$ values) can be produced by EFE from orbits of protoplanetary-disk origin in $\sim10$ Myr.
If an interpolating function $\overline{\nu}_{\alpha}$ with $\alpha\geq2.0$ were used instead, we can expect that longer times would be necessary to produce a given $\Delta q$. We have tested this in an approximation of the EFE-induced quadrupole anomaly\footnote{The inner OC can be crudely investigated with the aid of the multipole expansion approach \citep{Mil09c,BN11}, taking into account only the dominant quadrupole term and assuming the constancy of the parameter $Q_{2}$ \citep{BN11,Hee+15}. Rotation of the external field can, in this case, be easily incorporated.} \citep{Mil09c,BN11}, using the quadrupole strengths that are listed in \citetalias{Hee+15}. For $\overline{\nu}_{2}$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ the timescale for producing a given $\Delta q$ is, on average, a few times longer.
In Fig. \ref{img:Sednitos2}, we depict the perihelion distance as a function of time, $q(t)$, for the known trans-Neptunian objects with $q>30$ au and $a>150$ au, in the next 10 Myr, assuming $\overline{\nu}_{1.5}(\beta)$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$.
None of the followed objects migrates under 30 au in the next 10 Myr.
\begin{figure}\centering
\resizebox{0.7\hsize}{!}{\includegraphics{Sedna2.pdf}}
\caption{$\Delta q \equiv q_{max}-q_{min}$ per 10.0 Myr for 100 Sedna progenitors (Sednitos) moving under the action of EFE. $\overline{\nu}_{1.5}(\beta)$ and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ were used instead of $\overline{\nu}_{0.5}(\beta)$ and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$ as in Fig. \ref{img:Sedna}. Sednitos were initialised at perihelia with $a=524$ au, uniformly distributed $q\in(25,30)$ au, $i\in(-10,10)\deg$, $\omega$, and $\Omega\in(0,2\pi)$.}
\label{img:Sedna2}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.7\hsize}{!}{\includegraphics{Sednitos_nu15.pdf}}
\caption{Perihelion distance as a function of time under action of EFE for the known population of trans-Neptunian bodies, with $a>150$ au and $q>30$ au. $\overline{\nu}_{1.5}(\beta)$, and $a_{0}=8.1\times10^{-11}$ m s$^{-2}$ were used instead of $\overline{\nu}_{0.5}(\beta)$ and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$, as in Fig. \ref{img:Sednitos}. Thick solid lines are 2012 VP$_{113}$ and Sedna.}
\label{img:Sednitos2}
\end{center}
\end{figure}
\section{Discussion and conclusion}\label{sum}
We have investigated how the (Newtonian) paradigm of a vast cometary reservoir, the Oort cloud (OC), changes in Milgromian dynamics (MD), specifically in the modified gravity QUMOND. The results are dependent on the choice of the MD interpolating function and value of the constant $a_{0}$.
For the popular pair, $\overline{\nu}_{0.5}$ [Eq. (\ref{if3}) with $\alpha=0.5$] and $a_{0}=1.2\times10^{-10}$ m s$^{-2}$, we have found the following qualitative properties of the Milgromian OC (MOC):
\begin{itemize}
\item The observationally inferred MOC is compact with a radius $\sim15$ kau.
\item Binding energies of comets are significantly increased compared to those of the classical OC. The planetary barrier shifts significantly inward.
\item An injection of comets into the inner solar system is mainly driven by the external field effect (EFE) from the Galaxy, the specific feature of nonlinear MD, see Sect. \ref{Sec:EFE}. The Galactic tide can be, owing to small heliocentric distances of MOC bodies, neglected.
\item EFE-induced injection of comets is very efficient and the cometary influx can be significantly larger than in the Newtonian case, if we assume, as a zeroth approximation, the same (Newtonian) source population and distribution in both frameworks.
\item The orbit of a body with a proto-planetary disk origin can, under the action of EFE, be transformed into a Sedna-like orbit on a timescale of several Myr.
\end{itemize}
Trans-Neptunian bodies in Sedna-like orbits are not ``fossil'' objects with frozen perihelia (like in the Sun-in-a-cluster model); rather, they could repeatedly migrate through the inner solar system as EFE raises and lowers their perihelia.
During the preparation of this paper, \citetalias{Hee+15} published constraints on various commonly-used MD interpolating function families. The constraints are based on the Cassini spacecraft radio tracking data \citep{Hee+14}. Many popular MD interpolating functions have been proven incompatible with the data, including $\overline{\nu}_{0.5}$.
When adopting $\overline{\nu}_{\alpha}(\beta)$ with $\alpha\gtrsim1.5$ and $a_{0}\leq 8.11\times10^{-11}$ m s$^{-2}$ in line with \citetalias{Hee+15}, the MD regime is essentially suppressed at any distance from the Sun owing to EFE, see Fig. \ref{img:cassiniif}. The cloud is, in this case, very much Newtonian in its overall size, binding energies of comets and operation of the Jupiter-Saturn barrier. However, even in this case, EFE substantially torques orbits in the inner parts of the cloud where the tidal force is weak, with the potential to transform primordial orbits to Sedna-like orbits, as was shown for the case $\alpha=1.5$ and $a_{0}=8.11\times10^{-11}$ m s$^{-2}$ on the timescale of $\sim10$ Myr. Steeper interpolating functions imply larger timescales for these transformations.
To sum up, if the results presented in \citet{Hee+14} are correct, and there is no other hidden dynamical effect acting on the spacecraft, then the results presented in sections \ref{MilgromianOC} - \ref{XXZ} are only of academic character. Still, it is instructive to see how the MOC changes with a varying description of the transition regime.
We further discuss the MOC in light of the new constraints on the MD interpolating function.
We emphasise that the influence of EFE on inner OC bodies and on Centaurs, Kuiper belt objects, and scattered disk objects in high-$a$ orbits is substantial even under these circumstances. Consequently, Sedna-like orbits and orbits of large-semi-major-axis Centaurs are easily comprehensible in MD; they both belong to the same population, just in different modes of their evolution.
MD could eventually shed light on many open problems in the cis- and trans-Neptunian region. Besides the already mentioned puzzling orbits of Sedna and 2012 VP$_{113}$, the clustering in the argument of perihelion, $\omega$, near $\omega\approx 0\deg$ for bodies in orbits with $q>30$ au and $a>150$ au \citep{TS14}, and the origin of high-$a$ Centaurs \citep{Gom+15} could be elucidated in MD.
With regard to the $\omega$-clustering, EFE would manifest in this region through an anomalous force that increases with heliocentric distance and is aligned with the direction to the Galactic centre \citep{Mil09c,BN11}. Hence bodies that are protected from frequent planetary encounters should bear an imprint of EFE, similarly as if there were a distant massive body hidden deep in the OC. In MD, one could expect nodal ($\Omega$), or eventually both nodal and apsidal ($\omega+\Omega$), confinement (Pau\v{c}o 2016, in preparation). The fact that a subsample of the stable objects (with $a>250$ au) is actually clustered in the physical space was recently shown in \cite{BB16}.
EFE is an important dynamical agent, raising and lowering perihelia in the inner parts of the outer solar system very effectively, with no such counterpart in Newtonian dynamics. Thus, we could intuitively expect MOC, and especially its inner part, to be more populous at the formation phase than the classical OC, as planetesimals with mildly pumped semi-major axes ($a\sim$ 0.1 - 1 kau) could have their perihelia lifted sufficiently rapidly to be protected from being ejected or captured by planets. Also, we could expect this primordial outward migration to be followed by a period of high influx of interplanetary material, after which (or after several such cycles) this inner region was radically depleted. Here timing is important because this phenomenon could coincide with the late heavy bombardment, hinted at by the Moon's petrology record \citep{Har+00}, at $\sim$700 Myr after planets formed. Although this kind of event is rather abrupt and of relatively short duration, it was well accounted for in the Newtonian framework with the model of rapid migration of the outer planets \citep{Gom+05}. We plan to investigate this topic in a subsequent work.
It is questionable whether the primordial disk mass and OC-to-scattered-disk population ratio problems arise in the context of MD since nobody has ever simulated solar system formation and evolution (with its outermost parts) in MD. EFE torquing is important in the context of the (re)distribution of material within the cloud, which could then be expected to differ in MD from that in Newtonian dynamics. The preference for high semi-major axis orbits (where tides are sufficiently strong) in the classical OC does not need to be so prominent in the MOC. In the perihelion distance, $q$, vs. semi-major axis, $a$, diagram, where in the classical OC theory there is more or less empty space at $q\gtrsim 100$ au, $a\sim1000$ au, we expect some residual population in MD. In the future, this could be tested against observations. Also, a simulation similar to that in sections \ref{Sedna} and \ref{Sedna2}, but including the outer planets and more Sednitos, would yield steady populations of bodies with $q>30$ au and $q<30$ au (high-$a$ Centaurs), after some time, which could be tested against observations on a similar basis to \cite{Gom+15}. There is obviously some tension between the theory and observations in the Newtonian framework \citep{Gom+15}.
At this stage, we cannot claim MD to be a self-consistent solution of the puzzles that trouble classical OC theory, but we have shown that it can form a new, testable paradigm with a specific signature in the outer parts of the solar system.
\begin{acknowledgements}
We are thankful to Leonard Korno\v{s} and Lubo\v{s} Neslu\v{s}an for valuable discussions on orbital integrators and the classical Oort cloud. We also thank the referee, Rodney Gomes, for an open-minded review and comments, which helped to improve the clarity of the paper and also inspired us to realize an additional motivation for consideration of Milgromian dynamics in the solar system.
J.K. is supported by the Slovak National Grant Agency VEGA, grant No. 1/067/13.
\end{acknowledgements}
\begin{table*}
\caption{Original barycentric orbital elements of Sect. \ref{simul} near-parabolic comets. These 31 comets were identified as dynamically new (assuming Newtonian dynamics) in the sample of \citetalias{DK11}. Presented orbital elements are expected values retrieved from \cite{Kro14}, errors are omitted. Successive columns are: comet designation, osculation date, perihelion distance, eccentricity, inclination, argument of perihelion, longitude of ascending node (all angles in equinox J2000.0), semi-major axis and perihelion passage time.}\label{comets}
\begin{center}
\begin{tabular}{lcccrrrrc}
\hline\hline
Comet & Epoch & $q$\tablefootmark{$\dag$} & $e$ & $i$\tablefootmark{$\dag$} & $\omega$\tablefootmark{$\dag$} & $\Omega$\tablefootmark{$\dag$} & $a$\tablefootmark{$\dag$} & $T$\tablefootmark{$\ddag$}\\
\hline
[...] & [yyyymmdd] & [au] & [...] & [$\deg$] & [$\deg$] & [$\deg$] & [kau] & [yyyymmdd]\\
\hline
C/1974 V1 & 16670721 & 6.02 & 0.99989464 & 60.9 & 151.8 & 226.1 & 57.110 & 19740808\\
C/1978 A1 & 16701212 & 5.61 & 0.99978957 & 116.9 & 343.4 & 211.7 & 26.652 & 19770722\\
C/1978 G2 & 16710809 & 6.28 & 1.00014083 & 153.2 & 229.7 & 72.2 & -44.603 & 19780826\\
C/1984 W2\tablefootmark{$\diamond$} & 16820212 & 4.00 & 0.99991890 & 89.3 & 255.3 & 250.2 & 49.383 & 19850929\\
C/1987 W3 & 16850706 & 3.32 & 0.99991866 & 76.8 & 195.1 & 198.4 & 40.850 & 19880119\\
C/1988 B1 & 16811015 & 5.03 & 0.99989942 & 80.6 & 124.2 & 325.2 & 50.025 & 19870319\\
C/1992 J1 & 16910824 & 3.00 & 0.99991839 & 124.3 & 83.5 & 203.3 & 36.765 & 19930904\\
C/1997 A1 & 16950405 & 3.16 & 0.99993098 & 145.1 & 40.0 & 135.7 & 45.830 & 19970620\\
C/1997 BA6\tablefootmark{$\diamond$} & 16970213 & 3.44 & 0.99989050 & 72.6 & 285.9 & 317.7 & 31.417 & 19991128\\
C/1999 J2 & 16910317 & 7.11 & 0.99984298 & 86.4 & 127.1 & 50.1 & 45.310 & 20000405 \\
C/1999 K5 & 16980208 & 3.25 & 0.99993034 & 89.5 & 241.5 & 106.3 & 46.707 & 20000703\\
C/1999 U4 & 16960618 & 4.89 & 0.99984462 & 52.1& 77.8 & 32.4 & 31.447 & 20011029\\
C/2000 A1 & 16861029 & 9.74 & 0.99960423 & 24.6& 14.3 & 111.9 & 24.612 & 20000715 \\
C/2001 C1 & 16960906 & 5.11 & 0.99991858 & 68.9 & 220.0 & 33.8 & 62.775 & 20020328 \\
C/2001 K3 & 16990214 & 3.07 & 0.99990440 & 52.0 & 3.4& 289.8 & 32.103 & 20010423\\
C/2001 K5 & 16970325 & 5.19 & 0.99994997 & 72.5 & 47.1& 237.5 & 103.734 & 20021011 \\
C/2002 A3 & 16960906 & 5.14 & 0.99989342 & 48.1 & 329.6 & 136.7 & 48.263 & 20020425\\
C/2002 J4 & 17000817 & 3.64 & 0.99987674 & 46.5 & 230.7 & 70.9 & 29.516 & 20031003 \\
C/2002 L9 & 16950224 & 7.04 & 0.99974285 & 68.4 & 231.4 & 110.5 & 27.360 & 20040405\\
C/2003 G1 & 16971120 & 4.92 & 0.99993277 & 66.8 & 11.5 & 246.1 & 73.206 & 20030204 \\
C/2003 S3 & 16920530 & 8.13 & 0.99971781 & 151.5 & 154.4 & 226.3 & 28.802 & 20030409\\
C/2004 P1 & 16960509 & 6.02 & 0.99981290 & 28.8 & 16.5 & 284.2 & 32.185 & 20030809 \\
C/2004 T3 & 16901227 & 8.87 & 0.99959502 & 71.9 & 259.7 & 50.4 & 21.891 & 20030414\\
C/2004 X3 & 17010414 & 4.39 & 0.99994104 & 81.2 & 202.4 & 343.0 & 74.460 & 20050618\\
C/2005 B1\tablefootmark{$\diamond$} & 17040109 & 3.21 & 0.99998720 & 92.5 & 103.1& 195.6 & 250.627 & 20060222\\
C/2005 G1 & 17001215 & 4.95 & 0.99991785 & 108.4 & 113.9 & 299.6 & 60.314 & 20060226\\
C/2005 K1\tablefootmark{$\diamond$} & 17021205 & 3.69 & 0.99996944 & 77.8 & 135.0 & 106.3 & 120.773 & 20051121\\
C/2005 Q1 & 16971120 & 6.40 & 0.99985473 & 105.3 & 44.8 & 87.7& 44.053 & 20050826 \\
C/2006 E1 & 16991001 & 6.04 & 0.99980395 & 83.2 & 232.8 & 95.1 & 30.788 & 20070106 \\
C/2006 K1 & 17030404 & 4.42 & 0.99992817 & 53.9 & 296.5 & 72.2& 61.576 & 20070720 \\
C/2007 Y1 & 17050722 & 3.34 & 0.99988596 & 110.1 & 357.1 & 133.1 & 29.317 & 20080318\\
\end{tabular}
\end{center}
\tablefoot{
\tablefoottext{$\diamond$}{Non-gravitational effects are accounted for in the orbit determination, see \citetalias{DK11}.}
\tablefoottext{$\dag$}{These orbital elements are rounded off from those of \cite{Kro14}.}
\tablefoottext{$\ddag$}{The fractional-day part (.dddddd) is omitted compared to \cite{Kro14}.}
}
\end{table*}
\bibliographystyle{aa}
\section{Introduction}
Controllable text generation is a challenging task in natural language generation, which aims to generate fluent text with desired attributes.
Pilot studies attempt single-aspect control by directly finetuning a conditional model \cite{ziegler2019finetuning, keskarCTRL2019} or turn to methods that keep the language model fixed \cite{Dathathri2020Plug}, owing to the high cost of tuning large-scale pre-trained language models \cite{NEURIPS2020_1457c0d6, zhang2022opt}.
Recent works focus on a more practical setting, multi-aspect\footnote{For example, \textit{positive} is an attribute from sentiment aspect while \textit{sports} is an attribute from topic aspect.} controllable text generation, with existing approaches mainly divided into three technical routes: weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative}, multi-objective optimization \cite{kumar2021controlled, mireshghallah-etal-2022-mix}, and prefix-tuning \cite{qian-etal-2022-controllable}, which explore ways to combine controllers learned from single-aspect and apply them to a fixed language model yet suffering from attribute degeneration caused by the mutual interference of controllers.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/intersection.pdf}
\vspace{-0.8cm}
\caption{
Probability space of attributes. \textcolor{HoneyOrange}{Orange} background denotes the estimated distribution over natural language. \textcolor{blue}{Blue} and \textcolor{lgreen}{green} areas represent distributions over sentences containing attributes from two different aspects, respectively. The darker a region, the higher its probability in the space. The shaded areas are the distributional centers, the regions with the highest probability density.
}
\label{fig:1}
\end{figure}
We provide a distributional perspective to observe and alleviate this problem.
In the current text generation paradigm, a language model forms an estimated distribution over sentences, with the training data amounting to samples from the natural language distribution \cite{NEURIPS2021_260c2432}.
For single-aspect control, these methods train a classifier or a prefix for each attribute independently, which can be regarded as estimating a center of the distribution over attribute-relevant sentences, before biasing the language model's distribution towards this center.
Correspondingly, when generalizing to multi-aspect control, their fusion strategy directly takes the interpolation or average of these centers, which may be too simplistic.
As shown in Figure \ref{fig:1}, the \textbf{\textcolor{Optimal}{interpolation}} point denotes the position they acquired after combining multiple centers in the probability space. And the \textbf{\textcolor{Intersection}{intersection}} represents where oracle sentences that simultaneously satisfy multiple attributes lie.
In the left part of Figure \ref{fig:1}, when the distributions of attributes are symmetric\footnote{We plot distributions of attributes in \S \ref{attributes}.}, the interpolation point is indeed within the intersection area.
However, there could be a mismatch between the interpolation point and intersection. For example, as illustrated in the right part of Figure \ref{fig:1}, two skewed distributions intersect on the tails, leaving the interpolation point out of the intersection area and thus making it lack the ability to express all desired attributes together.
In this paper, different from approximating the intersection area with the interpolation point, we propose a strategy for directly acquiring the intersection.
We first deploy an autoencoder structure to map attribute-relevant sentences to latent representations constituting an estimated attribute space.
With our specially designed constraints, this space can model relationships among attributes.
Afterward, we provide an effective intersection searching algorithm that can walk around the long tail regions in distributions of all desired attributes and iteratively find where they combine more tightly.
Finally, we utilize a prefix-tuning-based decoder to construct sentences from the searched intersection.
We experiment on three-aspect control with two attributes from the sentiment aspect, four from the topic, and one from detoxification, with datasets IMDb movie reviews \cite{maas-etal-2011-learning}, AGNews \cite{NIPS2015_250cf8b5}, and Jigsaw Toxic Comment Classification Challenge Dataset, respectively. We evaluate the relevance of each attribute independently and calculate their average as the final relevance metric. Besides, we assess the text quality with perplexity and distinctness concerning fluency and diversity.
Results show that our method can significantly outperform strong baseline models on multi-aspect control. Furthermore, we find out in our analytical experiments that our intuitive assumptions fit well with our observation. The main contributions are as follows:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt]
\item We propose a distributional perspective that models multi-aspect control more practically.
\item We provide a method that directly searches for intersections in the attribute space and generates sentences with desired attributes.
\item We experimentally reveal the effectiveness of our method on multi-aspect control compared to strong baselines and achieve the SOTA.
\end{itemize}
\section{Related Work}
Variational autoencoders are often used for controllable text generation in early work \cite{10.5555/3305381.3305545, duan-etal-2020-pre, mai-etal-2020-plug} where they spend a lot of effort into improving text fluency.
The prosperity of large-scale pre-trained language models \cite{radford2019language} provides more exploration directions for attribute control such as fine-tuning \cite{ficler-goldberg-2017-controlling, ziegler2019finetuning, keskarCTRL2019}. Recent work has made gratifying progress on single-aspect control \cite{krause-etal-2021-gedi-generative}, leading studies gradually turn to a more difficult task, multi-aspect control, including the following three main approaches.
\paragraph{Weighted Decoding}
As the scale of language models increases rapidly, weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative, yang-klein-2021-fudge, liu-etal-2021-dexperts, gu-etal-2022-improving} becomes a simple and practical choice.
It is a framework that decomposes the probability of sentences conditioned on attributes into a language model and a classifier with the bayesian rule directly at decoding time.
When handling multi-aspect control, it can be easily generalized by interpolating classifiers \cite{lin-riedl-2021-plug}.
\paragraph{Multi-Objective Optimization}
Controllable text generation task is naturally a multi-objective optimization problem when regarding its decoding process as an optimization objective.
Some approaches, such as DGC \cite{khalifa2020distributional}, Mix\&Match \cite{mireshghallah-etal-2022-mix}, and COLD Decoding \cite{qin2022cold}, adopt Energy-based Models \cite{lecun2006tutorial} to blend multiple objectives.
Others like MUCOCO \cite{kumar2021controlled} convert the optimization objectives of multi-aspect control to inequality constraints and thereby apply the lagrange multiplier method for this constrained optimization problem.
\paragraph{Prefix-Tuning}
GPT-3 \cite{brown2020language} provides a new paradigm named prompt-based learning \cite{liu2021pre}, which is able to perform few-shot learning on downstream tasks. Prefix-Tuning \cite{li-liang-2021-prefix} leverages the learned lightweight prompts to trigger the conditional generation capability of the language model.
Applying Prefix-Tuning to multi-aspect controllable text generation \cite{yu-etal-2021-attribute-alignment, qian-etal-2022-controllable, carlsson-etal-2022-fine, yang2022tailor} can be regarded as optimizing on multi-objective implicitly.
\begin{figure*}[ht]
\centering
\vspace{-0.5cm}
\resizebox{2\columnwidth}{!}{
\includegraphics[scale=0.55]{pic/model.pdf}
}
\vspace{-0.5cm}
\caption{An overview of our method. \textbf{Top}: Illustration of our autoencoder structure with prefix-tuning deployed on the fixed decoder, where latent representations $\mathcal{H}_i$ constitute an estimated attribute space. \textbf{Bottom Left}: Illustration of attribute classification loss $\mathcal{L}_C$ and aspect gap loss $\mathcal{L}_G$ attached to the attribute space. \textbf{Bottom Right}: Inferencing stage with prefix mapped from the intersection of attributes.}
\label{fig:2}
\end{figure*}
\section{Methodology}
In this section, we first introduce the motivation and overall process of our method, after which we describe each module in detail.
\subsection{Overview}
As illustrated in Figure \ref{fig:2}, our method mainly revolves around the attribute space including estimating the attribute space, searching for intersections, and mapping intersections to sentences.
Firstly, we aim to construct an attribute space using sampled sentences to estimate the real space as accurately as possible. We employ an autoencoder structure with the latent representations denoting points that constitute our estimated attribute space. To ensure that our estimated space reliably models the attributes, such as their probability distributions and relationships between different attributes, we further attach three constraints to the representation.
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Reconstruction Loss} $\mathcal{L}_R$ aims to bridge the gap between points in attribute space and natural attribute-relevant sentences, which is recovering attributes reflected by contents.
\item \textbf{Attribute Classification Loss} $\mathcal{L}_C$ forces the encoder to focus more on capturing attributes by distinguishing points of different attributes from the same aspect.
\item \textbf{Aspect Gap Loss} $\mathcal{L}_G$ %
penalizes the discrepancy of aspects, which is caused by the domain gap among different data sources for different aspects.
Inspired by the feature alignment \cite{10.1145/1772690.1772767}, we minimize the distances between distributional centers of each two aspects.
\end{enumerate*}
The second step aims to search for an intersection area of the desired attributes. If the intersection area exists, a point in it satisfies the condition that neighboring points in a tiny surrounding region cover all required attributes. Inspired by this neighborhood ideology, we design an algorithm that iteratively approaches an area where these attributes bind more tightly.
The third step maps our searched intersection to a Prefix that activates the language model to generate attribute-relevant sentences. To make the language model less sensitive to slight variations, we sample a perturbation vector from a multivariate gaussian distribution.
\subsection{Estimating Attribute Space}
Given $|\mathbf{A}|$ aspects $\mathbf{A} = \left\{A_1,\cdots,A_{|\mathbf{A}|}\right\}$ with each comprising $|A_t|$ attributes $\left\{a_1^t,\cdots,a_{|A_t|}^t\right\}$, $I_\tau^t$ is an index set representing the identifiers of all sentences with attribute $a^t_\tau$ in the training data. We have $I^t = \bigcup\limits_{\tau=1}^{|A_t|} I_\tau^t, I = \bigcup\limits_{t=1}^{|\mathbf{A}|} I^t$, where $I^t$ is the indices of all sentences with any attribute in aspect $A_t$ and $I$ is the indices of the entire training data.
We encode sentences $\{X_i\}$ from all aspects $\mathbf{A}$ to representations $\{\mathcal{H}_i\}$ with unified mapping parameters $\phi$: $\mathcal{H}_i = \text{Encode}_\phi(X_i)$, where $i \in I$.
\paragraph{Reconstruction Loss $\mathcal{L}_R$}
As in the top of Figure \ref{fig:2}, $\mathcal{L}_R$ is computed in the same way as the autoregressive loss of pre-trained language model $p_{\text{LM}}$:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\begin{aligned}
\label{eq:2}
\mathcal{L}_R &= -\sum\limits_{i\in I} \log p_\text{LM}(X_i|\text{Prefix}_i)\\
\text{Prefix}_i &= \text{MLP}_\theta(\mathcal{H}_i + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
\end{aligned}
\end{equation}
where $X_i$ here is a sample sentence from the entire training set, i.e., $i \in I$.
Besides, $\varepsilon_i$, with a scaling factor $\lambda$, is a perturbation vector sampled from a multivariate gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$ for robustness when reconstructing. The multi-layer perceptron $\text{MLP}_{\theta}$ will map perturbed $\mathcal{H}_i$ to $\text{Prefix}_i$ that can activate the language model to generate text with desired attributes.
It is worth noting that our primary goal is to recover attributes, which means $\mathcal{L}_R$ need not, and preferably should not, converge too well, as long as text fluency is maintained.
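A schematic PyTorch fragment of this perturb-and-map step is given below (illustrative only; the layer sizes, prefix shape, and module names are assumptions rather than the released implementation). The resulting prefix is then fed to the frozen language model, whose autoregressive negative log-likelihood of $X_i$ gives $\mathcal{L}_R$.
\begin{verbatim}
import torch
import torch.nn as nn

class LatentToPrefix(nn.Module):
    """Maps a latent representation H to prefix activations; shapes illustrative."""
    def __init__(self, latent_dim, prefix_len, n_layers, n_heads, head_dim, lam=0.1):
        super().__init__()
        self.lam = lam
        self.shape = (prefix_len, n_layers, 2, n_heads, head_dim)  # keys and values
        out_dim = prefix_len * n_layers * 2 * n_heads * head_dim
        self.mlp = nn.Sequential(nn.Linear(latent_dim, latent_dim),
                                 nn.Tanh(),
                                 nn.Linear(latent_dim, out_dim))

    def forward(self, h):                       # h: (batch, latent_dim)
        eps = torch.randn_like(h)               # epsilon ~ N(0, I)
        prefix = self.mlp(h + self.lam * eps)   # MLP_theta(H + lambda * eps)
        return prefix.view(h.size(0), *self.shape)
\end{verbatim}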
\paragraph{Attribute Classification Loss $\mathcal{L}_C$}
We force the encoder to focus on attributes by $\mathcal{L}_C$ in the way:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:3}
\begin{aligned}
\mathcal{L}_C = -\sum\limits_{t=1}^{|\mathbf{A}|}\sum\limits_{\tau=1}^{|A_t|} \sum\limits_{i \in I_\tau^t} \log p_{\pi_{t}}(a_\tau^t|\mathcal{H}_i).
\end{aligned}
\end{equation}
Given a sentence representation, $p_{\pi_t}$ is a classifier with parameters $\pi_{t}$ that distinguishes the attributes $\left\{a_\tau^t\right\}$ of aspect $A_t$.
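In implementation terms this amounts to a cross-entropy over one lightweight classifier head per aspect, applied to the latent representations (a minimal sketch with illustrative dimensions and names):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 768
attrs_per_aspect = [2, 4, 2]     # illustrative: sentiment, topic, toxicity
heads = nn.ModuleList(nn.Linear(latent_dim, n) for n in attrs_per_aspect)

def attribute_classification_loss(H, aspect_ids, attr_ids):
    """H: (batch, latent_dim); aspect_ids, attr_ids: (batch,) integer labels."""
    loss = H.new_zeros(())
    for t, head in enumerate(heads):
        mask = aspect_ids == t
        if mask.any():
            loss = loss + F.cross_entropy(head(H[mask]), attr_ids[mask],
                                          reduction='sum')
    return loss
\end{verbatim}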
\paragraph{Aspect Gap Loss $\mathcal{L}_G$}
We penalize the discrepancy between distributional centers by:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:4}
\mathcal{L}_G = \sum\limits_{\mathclap{1\leq t_1<t_2\leq |\mathbf{A}|}}\quad\;\,\,\left\|\sum\limits_{i \in I^{t_1}} \frac{\mathcal{H}_i}{|I^{t_1}|} - \sum\limits_{j \in I^{t_2}} \frac{\mathcal{H}_j}{|I^{t_2}|}\right\|_2,
\end{equation}
which are Euler distances between every two distinct distributional centers.
When generalizing to a larger scale of aspects, it is relatively expensive to calculate averages over the entire dataset each time the model is updated.
We calculate this loss in practice using a batch-level approximation. We assign each aspect a memory unit to store the latest representation of the aspect's estimated center.
Each time we process a batch of sentences from one aspect, we take the average of their representations as the center and sum up the Euler distances to the centers of the other aspects stored in memory, which gives the estimated $\mathcal{L}_G$. Then, we update the memory unit of this aspect to the latest center.
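A sketch of this batch-level approximation is shown below (illustrative only; one memory slot per aspect holds the most recently seen batch center):
\begin{verbatim}
import torch

class AspectGapLoss:
    """Batch-level approximation of L_G with one memory unit per aspect."""
    def __init__(self, n_aspects):
        self.centers = [None] * n_aspects            # memory units

    def __call__(self, H_batch, aspect_id):
        center = H_batch.mean(dim=0)                 # center of the current batch
        loss = center.new_zeros(())
        for t, other in enumerate(self.centers):
            if t != aspect_id and other is not None:
                loss = loss + torch.norm(center - other, p=2)
        self.centers[aspect_id] = center.detach()    # update this aspect's memory
        return loss
\end{verbatim}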
\begin{algorithm}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Intersection Searching}
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{H}_i, i \in \bigcup\limits_{t=1}^N I^t_{\alpha_t}$ from $N$ attributes\\
\quad \; $\omega_{\alpha_t}$ weight of each attribute
\ENSURE Intersection of $N$ attributes: $\mathcal{\tilde{H}}^*$
\STATE Initialize $M$ candidates:$\{\mathcal{\tilde{H}}^0_m\}$
\STATE Iterate $S$ times
\FOR{$s$ in $[0, S-1]$}
\FOR{$m$ in $[1, M]$}
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathbf{0}$
\FOR{$t$ in $[1, N]$}
\STATE $\mathbf{H}\leftarrow\mathop{\text{Nearest}}\limits_{top K}(\mathcal{\tilde{H}}_m^s,\left\{\mathcal{H}_i, i \in I^t_{\alpha_t}\right\})$
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} +\omega_{\alpha_t} \mathop{\text{mean}}(\mathbf{H})$
\ENDFOR
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} / \sum\limits_{t=1}^N \omega_{\alpha_t}$
\ENDFOR
\ENDFOR
\STATE $\mathcal{\tilde{H}}^*\leftarrow \text{Select}(\{\mathcal{\tilde{H}}^{S}_m\})$
\end{algorithmic}
\end{algorithm}
During the training stage, our loss function is:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:5}
\mathcal{L} = w_1\mathcal{L}_R + w_2\mathcal{L}_C + w_3\mathcal{L}_G.
\end{equation}
It's worth noting that we only update parameters $\phi$, $\theta$, and $\left\{\pi_t\right\}$ for the encoder, the MLP layer, and the classifier heads, respectively.
\subsection{Intersection of Attributes}
Suppose there is an intersection point, denoted as $\mathcal{\tilde{H}}^*$, located within the intersection region of attributes $\left\{a^1_{\alpha_1},a^2_{\alpha_2},\cdots,a^N_{\alpha_N}\right\}$ from $N$ different aspects, where $a^t_{\alpha_t}$ is the $\alpha_t$th attribute in aspect $A_t$.
Our Algorithm \ref{alg1} approximates $\mathcal{\tilde{H}}^*$ by iteratively approaching the most balanced point among nearest neighbors from different attributes.
First, we initialize the candidates $\{\mathcal{\tilde{H}}^0_m\}$ by randomly sampling points in the attribute space, calculating their distance to the closest point of each attribute $a^t_{\alpha_t}$, and selecting the top $M$ samples with the smallest average distance to all attributes.
At each iteration $s$, we choose the top-K\footnote{We study the practical meaning and impact of $K$ in \S \ref{sec:effectofk}.} nearest points to $\mathcal{\tilde{H}}^s_m$ for each attribute and update $\mathcal{\tilde{H}}^{s+1}_m$ using the weighted average of these points.
It is worth mentioning that $\omega_{\alpha_t}$ is the weight used to balance attributes or favor some specifically, and a negative value of $\omega_{\alpha_t}$ can even move away from a particular one.
Finally, we select the best candidate from the last iteration $S$, which is expected to be in the intersection region, i.e., a representation related to multiple attributes.
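A compact realisation of Algorithm \ref{alg1} with brute-force nearest-neighbour search is sketched below (illustrative; the random initialisation over the bounding box and the final selection by average nearest-attribute distance are our reading of the procedure rather than the released code):
\begin{verbatim}
import numpy as np

def search_intersection(attr_points, weights, M=8, S=20, K=16, seed=0):
    """attr_points: list of (n_i, d) arrays, one per desired attribute.
    weights: list of per-attribute weights omega.  Returns H*."""
    rng = np.random.default_rng(seed)
    d = attr_points[0].shape[1]
    lo = np.min([P.min(0) for P in attr_points], axis=0)
    hi = np.max([P.max(0) for P in attr_points], axis=0)

    def mean_dist(c):   # average distance from c to the nearest point of each attribute
        return np.mean([np.linalg.norm(P - c, axis=1).min() for P in attr_points])

    cand = rng.uniform(lo, hi, size=(10 * M, d))                 # random candidates
    cand = cand[np.argsort([mean_dist(c) for c in cand])[:M]]    # keep the best M

    for _ in range(S):
        new = []
        for c in cand:
            acc = np.zeros(d)
            for P, w in zip(attr_points, weights):
                idx = np.argsort(np.linalg.norm(P - c, axis=1))[:K]   # top-K nearest
                acc += w * P[idx].mean(axis=0)
            new.append(acc / np.sum(weights))
        cand = np.array(new)

    return cand[int(np.argmin([mean_dist(c) for c in cand]))]    # final selection
\end{verbatim}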
\subsection{Generation with Intersections}
As illustrated in the right bottom of Figure \ref{fig:2}, we convert the representation $\mathcal{\tilde{H}}^*$ obtained from the intersection area directly to the $\text{Prefix}$ with $\text{MLP}_\theta$ and let the language model generate multi-attributed sentence $Y$ from input $\mathcal{X}$ as:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:1}
\begin{aligned}
Y &= \mathrm{arg}\max\limits_y \; p_\text{LM}(y|\text{Prefix}^*;\mathcal{X})\\
\text{Prefix}^* &=\text{MLP}_{\theta}(\mathcal{\tilde{H}}^* + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).
\end{aligned}
\end{equation}
When generating several attribute-relevant sentences for one attribute combination, we only need to calculate the intersection for it once.
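Schematically, inference then reduces to mapping the searched intersection to a prefix and reusing it for all generations (an illustrative sketch; \texttt{latent\_to\_prefix} and \texttt{lm\_generate\_with\_prefix} are hypothetical stand-ins for the mapping module above and for a prefix-conditioned decoder):
\begin{verbatim}
import torch

def generate_from_intersection(H_star, latent_to_prefix, lm_generate_with_prefix,
                               prompts, lam=0.1, n_samples=5):
    """H_star: (latent_dim,) intersection representation; returns generated texts."""
    outputs = []
    for prompt in prompts:
        for _ in range(n_samples):
            eps = torch.randn_like(H_star)                   # epsilon ~ N(0, I)
            prefix = latent_to_prefix(H_star + lam * eps)    # MLP_theta(H* + lambda*eps)
            outputs.append(lm_generate_with_prefix(prefix, prompt))
    return outputs
\end{verbatim}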
\section{Experiment}
In this section, we demonstrate the effectiveness of our method on three-aspect control, including sentiment, topic, and detoxification.
\subsection{Multi-Aspect Control Task}
The datasets we use are the same as GeDi \cite{krause-etal-2021-gedi-generative} and Contrastive Prefix \cite{qian-etal-2022-controllable}.
To balance the data scale across all aspects, we randomly sample 10k sentences from each dataset, which is fewer than the number of samples GeDi uses, with each attribute equally dividing this amount.
We use the IMDb movie reviews \cite{maas-etal-2011-learning}, the AGNews dataset \cite{NIPS2015_250cf8b5}, and the Jigsaw Toxic Comment Classification Challenge Dataset\footnote{\url{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/}} for sentiment, topic and detoxification aspects, respectively.
The prompts used for text generation are the same as those used in PPLM \cite{Dathathri2020Plug}, with 20 from its bag-of-words experiment and 15 from its discriminator experiment. We experiment with $8$ combinations of the $3$ aspects with $2$ sentiments $\times$ $4$ topics $\times$ $1$ detoxification and generate $5$ completions for each combination and each prompt. In total, each model generates $35 \times 2 \times 4 \times 1 \times 5 = 1400$ sentences.
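The size of the generated evaluation set follows directly from this grid; for clarity (attribute names shown are the usual IMDb/AGNews labels and are illustrative):
\begin{verbatim}
from itertools import product

prompts = 35                     # 20 bag-of-words + 15 discriminator prompts (PPLM)
sentiments = ["positive", "negative"]
topics = ["world", "sports", "business", "sci/tech"]
detox = ["non-toxic"]
completions = 5

combinations = list(product(sentiments, topics, detox))
total = prompts * len(combinations) * completions
assert len(combinations) == 8 and total == 1400
\end{verbatim}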
It is worth noting that we do not specifically use prompts that induce the language model to generate toxic text, making detoxification easier to improve.
To measure the performance on different aspects, we compute the attribute relevance. We finetune a DeBERTa \cite{he2021deberta, he2021debertav3} classifier on the Yelp dataset \cite{NIPS2015_250cf8b5} for the sentiment aspect and a classifier for the topic aspect utilizing all the remaining data not used during training. We evaluate the non-toxicity with the Google Perspective API\footnote{\url{https://www.perspectiveapi.com}}.
The final performance of a model is determined by the average of these three attribute relevance scores introduced above.
We also use two auxiliary metrics to measure text quality.
One is perplexity, calculated with GPT2-large following Contrastive Prefix \cite{qian-etal-2022-controllable}. To check that models do not become insensitive to changes in the prefixes, we calculate the Distinctness \cite{li-etal-2016-diversity} of sentences generated from different prefixes and, for simplicity, average the 1-gram, 2-gram, and 3-gram distinct scores.
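For completeness, the averaged distinctness score can be computed as follows (a standard sketch; whitespace tokenisation and pooling of all generations before averaging the n-gram scores are assumptions of this illustration):
\begin{verbatim}
def distinct_n(sentences, n):
    """dist-n: number of unique n-grams divided by the total number of n-grams."""
    total, unique = 0, set()
    for s in sentences:
        tokens = s.split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        unique.update(grams)
    return len(unique) / max(total, 1)

def distinctness(sentences):
    """Average of dist-1, dist-2 and dist-3."""
    return sum(distinct_n(sentences, n) for n in (1, 2, 3)) / 3.0
\end{verbatim}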
Moreover, we conduct a human evaluation in which the sentences generated by different models are shuffled. Each sentence is rated by three professional evaluators on the three attribute relevance dimensions and on text fluency. Evaluators rate each item on a scale of 1 to 5, with 5 representing text that is highly related to the desired attribute or very fluent.
\subsection{Baselines}
\begin{table*}[ht]
\small
\vspace{-0.3cm}
\centering
\begin{tabular}{l|c|ccc|c|c}
\hline
\hline
\textbf{Methods} & \textbf{Average}↑ (\%) & \textbf{Sentiment}↑ (\%) & \textbf{Topic}↑ (\%) & \textbf{Detoxification}↑ (\%) & \textbf{PPL.}↓ &\textbf{Dist.}↑\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{PPLM} & 71.0 $\pm$ 21.4 & 64.7 $\pm$ 24.8 & 63.5 $\pm$ 22.7 & 84.9 $\pm$\; 6.5 & 62.6 & 62.0\\
\hline
\textbf{GeDi} & 81.4 $\pm$ 14.7 & 76.1 $\pm$ 17.2 & 73.8 $\pm$ 11.3 & 94.2 $\pm$\; 1.9 & 116.6 & 75.1\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Multi-Objective Optimization Based Methods}}\\
\hline
\textbf{MUCOCO} & 73.9 $\pm$ 24.1 & 65.0 $\pm$ 33.7 & 67.2 $\pm$ 18.3 & 89.5 $\pm$\; 3.5 & 405.6 & 49.7\\
\hline
\textbf{Mix\&Match} & 79.7 $\pm$ 21.8 & 73.5 $\pm$ 25.9 & 69.9 $\pm$ 21.1 & 95.8 $\pm$\; 1.9 & 63.0 & 61.8\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Contrastive Prefix} &&&&&\\
\quad concatenation & 77.2 $\pm$ 18.5 & 67.3 $\pm$ 20.7 & 71.8 $\pm$ 16.5 & 92.6 $\pm$\; 2.9 & 54.6 & 39.9\\
\quad semi-supervised & 81.3 $\pm$ 16.5 & 74.4 $\pm$ 19.6 & 76.9 $\pm$ 16.7 & 92.7 $\pm$\; 3.5 & 31.9 & 43.3\\
\hline
\textbf{Ours} & \textbf{87.4} $\pm$ 10.9 & \textbf{86.7} $\pm$ 10.5 & \textbf{84.8} $\pm$ 14.2
& 90.7 $\pm$\; 7.4 & 28.4 & 49.5\\
\cline{2-7}
\quad w/o $\mathcal{L}_G$ & 80.9 $\pm$ 16.2 & 71.6 $\pm$ 11.7 & 75.9 $\pm$ 18.9 & 95.3 $\pm$\; 2.6 & 71.5 & 58.9\\
\quad w/o $\mathcal{L}_C$ & 62.3 $\pm$ 41.8 & 49.1 $\pm$ 49.8 & 41.7 $\pm$ 36.0 & \textbf{96.0} $\pm$\; 0.1 & 473.0 & 37.0\\
\hline
\hline
\end{tabular}
\caption{Automatic Results on Multi-Aspect Control. Hyperparameters and details are in \S \ref{sec:appendix3}.}
\label{tab:1}
\vspace{-0.2cm}
\end{table*}
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Weighted Decoding}:
\textbf{PPLM} \cite{Dathathri2020Plug} biases the language model with gradients back-propagated from trained classifiers. \textbf{GeDi} \cite{krause-etal-2021-gedi-generative} influences the decoding process with token probabilities conditioned on attributes.
\item \textbf{Multi-objective Optimization}:
\textbf{MUCOCO} \cite{kumar2021controlled} regards the decoding process as a constrained optimization problem, where the language model is the objective function and attributes are constraints.
\textbf{Mix\&Match} \cite{mireshghallah-etal-2022-mix} controls attributes with energy-based models and generates sentences by masking, sampling, and correcting.
\item \textbf{Prefix-Tuning}:
\textbf{Contrastive Prefix} \cite{qian-etal-2022-controllable} utilizes prefixes to activate the language model to generate attribute-relevant sentences by concatenation or semi-supervision.
\end{enumerate*}
\subsection{Results}
Table \ref{tab:1} reports the automatic evaluation results under the multi-aspect setting, where models are grouped by the type of method in chronological order.
In addition, we report standard deviations, which reflect the stability of each model across different attribute combinations.
For weighted decoding, GeDi uses more powerful classifiers than PPLM and performs better on attribute relevance, stability across combinations, and distinctness, though correspondingly worse on perplexity.
Multi-objective optimization methods achieve favorable attribute relevance, while MUCOCO's perplexity explodes because its non-autoregressive paradigm is ill-suited to generating from scratch.
The semi-supervised Contrastive Prefix performs similarly to GeDi, except for its lower diversity.
Our method performs best on the average attribute relevance, with a relative improvement of at least $7.3\%$ over existing baselines. The gains mainly come from the sentiment and topic aspects, with relative improvements of no less than $13.9\%$ and $10.3\%$, respectively.
Although our model is not the best on detoxification, it is the most balanced and stable, as indicated by the lowest standard deviation ($10.9$) on the average relevance.
As a prefix-tuning-based method that steers the language model without modifying it directly, our model naturally preserves text fluency; it thus performs well on perplexity and maintains reasonable diversity.
Furthermore, we conduct ablations on the aspect gap loss $\mathcal{L}_G$ and the attribute classification loss $\mathcal{L}_C$ separately.
On the one hand, without $\mathcal{L}_G$, we cannot alleviate the bias among the different training datasets, making it hard to search for intersection areas. Since the training sentences of the sentiment and topic aspects are mostly non-toxic, our model focuses more on detoxification rather than struggling for the other two aspects, leading to considerable declines in their relevance and slight improvements in detoxification. Besides, as the distance among sample points from different aspects in the attribute space increases, our model generates sentences mapped from far sparser areas, leading to a small decrease in fluency and a subtle increase in diversity.
On the other hand, without $\mathcal{L}_C$, our attribute space collapses entirely. The relevance of sentiment and topic drops drastically while non-toxicity rises, because the model can hardly distinguish representations of different attributes within the same aspect and instead focuses on the relatively easier detoxification.
Worse still, without distinct representations, our model is required to recover different sentences from similar representations, leading to oscillation during training and rarely producing complete text at inference time.
Results of human evaluation are in Table \ref{tab:2}, with inter-annotator agreement being $0.36$ in Fleiss’ $\kappa$.
We evaluate GeDi, Contrastive Prefix, and our method and observe that the results are consistent with the automatic ones on sentiment and topic relevance.
All models score high and similarly on detoxification, so the automatic results differ from the manual ones, where the annotators judge that our model does a better job than the baselines.
Although GeDi's perplexity is much higher, its manually measured fluency is clearly better than that of Contrastive Prefix, suggesting that perplexity is a relatively unreliable indicator; our method achieves the best fluency.
\begin{table}[t]
\small
\centering
\begin{tabular}{l|cccc}
\hline
\hline
\textbf{Methods} & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{Detox.}↑ &\textbf{Fluency}↑\\
\hline
\hline
\textbf{GeDi} & 2.96 & 2.72 & 4.59 & 3.08\\
\hline
\textbf{Con. Prefix} & 2.84 & 2.90 & 4.40 & 2.26\\
\hline
\textbf{Ours} & \textbf{3.47} & \textbf{3.39} & \textbf{4.71} & \textbf{3.69}\\
\hline
\hline
\end{tabular}
\caption{Human Evaluation on Multi-Aspect Control. }
\label{tab:2}
\end{table}
\begin{table*}[ht]
\small
\centering
\vspace{-0.3cm}
\resizebox{2\columnwidth}{!}{
\begin{tabular}{l|cc|cccc|c}
\hline
\hline
\multirow{2}{*}{\textbf{Methods}} & \multicolumn{2}{c|}{\textbf{Sentiment} (\%)} &\multicolumn{4}{c|}{\textbf{Topic} (\%)} & \multirow{2}{*}{\textbf{Detox.} (\%)}\\
& \textbf{Neg.}& \textbf{Pos.}& \textbf{World}&\textbf{Sports}& \textbf{Business}&\textbf{Sci./Tech.}&\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{GeDi} \textit{single-aspect} & 93.9& 70.7& 73.4 & 85.7 & 75.7 & 98.0 & 94.9 \\
\hline
\multirow{8}{*}{\textbf{GeDi}}
& 94.7 & - & 80.0 & - & - & - & 90.6\\
& 84.2 & - & - & 74.8 & - & - & 93.9\\
& 94.9 & - & - & - & 75.7 & - & 96.6\\
& 90.6 & - & - & - & - & 80.1 & 92.8\\
& - & 53.7 & 61.4 & - & - & - & 94.4\\
& - & 60.5 & - & 74.3 & - & - & 95.2\\
& - & 57.6 & - & - & 54.3 & - & 95.7\\
& - & 72.3 & - & - & - & 90.2 & 94.2\\
\cline{2-8}
\qquad \textit{average} & \textbf{91.1} $(- \textbf{2.8})$ & 61.0 $(-9.7)$ & 70.7 $(-2.7)$ & 74.6 $(-11.1)$ & 65.0 $(-10.7)$ & 85.2 $(-12.8)$ & \textbf{94.2} $(-\textbf{0.7})$\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Prefix} \textit{single-aspect} & 88.4 & 90.6 & 74.5 & 85.3 & 93.5 & 93.6 & 93.8\\
\hline
\multirow{8}{*}{\tabincell{l}{\textbf{Contrastive Prefix}\\ \quad semi-supervised}}
& 65.5 & - & \textbf{80.6} & - & - & - & 91.8\\
& 67.2 & - & - & \textbf{90.3} & - & - & 92.5\\
& 56.0 & - & - & - & 79.2 & - & 92.2\\
& 90.0 & - & - & - & - & 93.3 & 84.8\\
& - & 93.5 & 64.8 & - & - & - & 95.1\\
& - & 41.8 & - & 78.5 & - & - & 94.8\\
& - & 87.4 & - & - & 41.7 & - & 95.2\\
& - & 93.6 & - & - & - & 86.7 & 95.3\\
\cline{2-8}
\qquad \textit{average} & 69.7 $(-18.7)$ & 79.1 $(-11.5)$ & \textbf{72.7} $(-\textbf{1.8})$ & \textbf{84.4} $(-\textbf{0.9})$ & 60.5 $(-33.0)$ & 90.0 $(-3.6)$ & 92.7 $(-1.1)$\\
\hline
\multirow{8}{*}{\textbf{Ours}}
& 69.7 & - & 71.7 & - & - & - & 84.1\\
& 78.6 & - & - & 80.0 & - & - & 80.2\\
& \textbf{99.9} & - & - & - & \textbf{96.7} & - & 96.8\\
& 92.8 & - & - & - & - & \textbf{98.0} & 81.7\\
& - & 80.5 & 58.0 & - & - & - & 95.1\\
& - & 84.7 & - & 86.6 & - & - & 94.5\\
& - & 87.6 & - & - & 91.7 & - & \textbf{98.1}\\
& - & \textbf{99.7} & - & - & - & 96.1 & 95.4\\
\cline{2-8}
\qquad \textit{average} & 85.3 $(-3.1)$ & \textbf{88.1} $(-\textbf{2.5})$ & 64.9 $(-9.6)$ & 83.3 $(-2.0)$ & \textbf{94.2} $(+\textbf{0.7})$ & \textbf{96.8} $(+\textbf{3.2})$ & 90.7 $(-3.1)$\\
\hline
\hline
\end{tabular}
}
\caption{Detailed Results on Single-Aspect and Multi-Aspect Control. We demonstrate results on \textit{single-aspect} and \textit{average} results on multi-aspect control with their difference to \textit{single-aspect}, where other rows each represent an attribute combination. Cases are in \S \ref{sec:appendix4}. Detailed results for other baseline models and our ablations are in \S \ref{sec:appendix5}.}
\label{tab:3}
\vspace{-0.2cm}
\end{table*}
\section{Analysis}
\subsection{Effect of Different Attributes and their Combinations}
We illustrate the detailed results of each attribute and their combinations in Table \ref{tab:3}.
GeDi and Prefix-tuning perform differently in \textit{single-aspect} control, each with its own advantages. For example, GeDi excels at \textit{negative} with $93.9\%$ relevance, while Prefix-tuning is good at \textit{positive} with $90.6\%$ relevance.
When dealing with multi-aspect control, they inherit these imbalanced characteristics, with \textit{average} relevance of $91.1\%$ and $79.1\%$ on those attributes, respectively. In addition, the baselines drop correspondingly in the \textit{average} relevance of each attribute compared to \textit{single-aspect} control, by margins ranging from $0.7$ to $33.0$.
On average, our model outperforms the other baselines on attribute metrics (Table \ref{tab:1}). In detail, it performs competitively on most attributes compared to the other prefix-tuning-based model, Contrastive Prefix. In particular, on attributes such as \textit{business} and \textit{sci/tech}, our model improves significantly over it on multi-aspect control and can even surpass its \textit{single-aspect} performance.
In addition, correlations between attributes vary widely, as in Table \ref{tab:3}.
For example, generally, \textit{positive} fits well with \textit{non-toxic} while \textit{negative} leads to a massive drop in non-toxicity, which is consistent with the intuition that one can hardly praise people and offend them simultaneously.
Besides, \textit{world} and \textit{business} news are often reported negatively, such as war, famine, inflation, etc., making it challenging to combine them with \textit{positive}.
When attributes are not closely correlated, meaning that few natural sentences possess these attributes together, our method is more likely to capture such rare co-occurrences and magnify their frequency.
Take \textit{business} as an example. It is relatively easy to achieve good attribute relevance when performing single-aspect control on \textit{business}, with GeDi achieving $75.7$ and Prefix obtaining $93.5$. After attaching \textit{positive} to \textit{business}, the baseline models suffer from a decline due to the weak correlation between the two attributes, with GeDi and Contrastive Prefix dropping to $54.3$ and $41.7$, respectively. In contrast, our method can alleviate this problem by retrieving this unusual co-occurrence among the training sentences and recovering it from the attribute space, achieving a performance of $91.7$, close to single-aspect control.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/attribute_space.pdf}
\caption{Projection of 4 attributes from attribute space.}
\label{fig:3}
\end{figure}
When combining \textit{business} with \textit{negative}, a relatively common combination, there is still some decrease for the baseline models. In contrast, our method even obtains a performance of $96.7$, surpassing single-aspect control.
\subsection{Estimated Attribute Space}
We demonstrate part of our estimated attribute space in Figure \ref{fig:3} with four attributes: \textit{\textcolor{sred}{positive}}, \textit{\textcolor{sblue}{negative}}, \textit{\textcolor{syellow}{sports}}, and \textit{\textcolor{sgreen}{sci/tech}} from sentiment and topic aspects.
We project the high-dimensional space to 2D with Principal Component Analysis (PCA).
Consistent with our hypothesis, the distributions of \textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sgreen}{sci/tech}} are asymmetric, and the intersections lie at the sparse edges of the attributes' distributions.
In addition, we project the intersections searched by the \textcolor{smediumvioletred}{baseline}'s strategy and \textcolor{darkred}{ours}, respectively. For \textit{\textcolor{sred}{positive}}-\textit{\textcolor{sgreen}{sci/tech}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{sgreen}{sci/tech}} pairs, the combinations are relatively tight, making it easy to find intersections. However, intersection areas for \textit{\textcolor{sred}{positive}}-\textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{syellow}{sports}} pairs are considerably sparse.
As shown in the enlarged area, the intersection searched by the \textcolor{smediumvioletred}{baseline} lies at the midpoint of the two distributional centers, but this location is not where the attributes intersect. In contrast, \textcolor{darkred}{our} method finds an intersection in such a sparse region, where points from the two different attributes appear simultaneously within its tiny surrounding area.
It is worth noting that \textit{positive} and \textit{negative} appear to intersect in this projection because they are close in the high-dimensional space; there is actually no intersection when only these two attributes are projected, as shown in \S \ref{sec:pos_neg}.
\subsection{Effect of $K$}
\label{sec:effectofk}
\begin{table}[t]
\small
\setlength{\abovecaptionskip}{0.2cm}
\vspace{-0.3cm}
\centering
\begin{tabular}{r|c|ccc}
\hline
\hline
$\textbf{K}$ &\textbf{Avg.}↑ & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{DeTox.}↑\\
\hline
5000 & \textcolor{sblue}{75.5} & 70.5 & 67.9 & 88.2\\
4000 & \textcolor{sblue}{77.6} & 72.9 & 71.4 & 88.4\\
3000 & \textcolor{sblue}{78.7} & 72.4 & 74.7 & 88.9\\
2000 & \textcolor{sblue}{79.1} & 72.6 & 75.9 & 88.7\\
1500 & \textcolor{sblue}{79.9} & 73.6 & 77.1 & 89.0\\
1000 & \textcolor{sblue}{80.7} & 75.7 & 77.2 & 89.1\\
800 & \textcolor{sblue}{82.9} & 79.3 & 79.2 & 90.3\\
500 & \textcolor{sblue}{85.2} & 83.5 & 81.5 & 90.5\\
300 & \textcolor{sblue}{85.7} & 84.1 & 83.2 & 89.7\\
200 & \textcolor{sred}{\textbf{87.4}} & \textbf{86.7} & \textbf{84.8} & 90.7\\
150 & \textcolor{sgreen}{84.0} & 79.2 & 84.3 & 88.4\\
100 & \textcolor{sgreen}{83.9} & 78.7 & 83.6 & 89.5\\
50 & \textcolor{sgreen}{82.2} & 78.4 & 78.5 & 89.6\\
20 & \textcolor{sgreen}{80.9} & 77.8 & 73.1 & 91.7\\
10 & \textcolor{sgreen}{80.8} & 79.6 & 71.5 & 91.2\\
5 & \textcolor{sblue}{81.4} & 82.9 & 69.3 & 92.1\\
3 & \textcolor{lred}{85.0} & 86.1 & 77.7 & 91.1\\
1 & \textcolor{sgreen}{78.8} & 63.1 & 80.9 & \textbf{92.4}\\
\hline
\hline
\end{tabular}
\caption{Results that vary with $K$.}
\vspace{-0.2cm}
\label{tab:k_analysis}
\end{table}
We analyze the variation of $K$ in the intersection searching algorithm and demonstrate the results in Table \ref{tab:k_analysis}.
Our model reaches a critical point at $K=200$, where the performance is optimal.
On the one hand, as the value of $K$ gradually increases, our method pays less attention to regions where samples are fewer but attributes combine more tightly, and the performance decreases accordingly.
When $K$ reaches 5k, our method degenerates into a plain prefix-tuning model that treats the intersection as the midpoint of the distributional centers. Its performance is similar to, and slightly below, that of the concatenation version of Contrastive Prefix in Table \ref{tab:1}.
On the other hand, a smaller $K$ leads to suboptimal performance, since the effect of noise in the training data becomes non-negligible.
When $K$ is less than $10$, our model becomes very unstable.
\subsection{Distribution of Attributes}
\label{attributes}
\begin{figure}[t]
\centering
\vspace{-0.3cm}
\includegraphics[width=\columnwidth]{pic/single_2.pdf}
\caption{Distribution of attribute World from Topic.}
\label{fig:4}
\vspace{-0.2cm}
\end{figure}
We project sample points to 2D by PCA, with each attribute projected independently. As shown in Figure \ref{fig:4}, we display a scatterplot of World and apply Gaussian kernel density estimation to visualize its probability distribution. Darker areas denote higher probability, where more representation points of oracle sentences gather, and the region annotated by a red ellipse is the estimated distributional center. As the plot shows, the distribution of World is significantly asymmetric: the center lies in the top part, while the bottom forms a sparse long tail. In addition, the distribution is even non-convex, with an isolated cluster in the lower right corner. This observation supports our hypothesis that the practical distributions of attributes are far more complex than symmetric distributions such as the Gaussian. We plot the distributions of the other attributes in \S \ref{sec:appendix1}.
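A minimal sketch of how such a plot can be produced, assuming the representations of one attribute are available as a NumPy array of shape (number of sentences, hidden size), is:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from scipy.stats import gaussian_kde

def plot_attribute_density(H, out_path="world_density.pdf"):
    # project one attribute's representations to 2D and overlay a Gaussian KDE
    pts = PCA(n_components=2).fit_transform(H)
    kde = gaussian_kde(pts.T)
    xs, ys = np.mgrid[pts[:, 0].min():pts[:, 0].max():100j,
                      pts[:, 1].min():pts[:, 1].max():100j]
    dens = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
    plt.contourf(xs, ys, dens, levels=20)   # darker = higher estimated probability
    plt.scatter(pts[:, 0], pts[:, 1], s=2, alpha=0.3)
    plt.savefig(out_path)
\end{verbatim}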
\section{Discussion on Distributional Lens}
Pilot work such as DGC \cite{khalifa2020distributional} estimates the language distribution with an energy-based model and optimizes this distribution to satisfy constraints by approaching the constraint manifold. Recent distributional approaches like COLD Decoding \cite{qin2022cold} and MuCoLa \cite{kumar2022constrained} place the language and attribute distributions in the same space so as to sample attribute-related sentences with Langevin dynamics. Concurrent work on the image side, PromptGen \cite{wu2022generative}, simulates the complex distribution of images relevant to target attributes using a deep generative model. However, a common hypothesis in manifold learning holds that the pre-trained language model estimates a low-dimensional manifold of language in a high-dimensional embedding space, which means most points in the embedding space are not probabilistically modeled by the language model. We therefore believe that placing too much trust in the distributional modeling ability of language models is unwise. Our method instead attempts to depict the attribute space with discrete sample points of attributed sentences and lets these discrete points, along with their coverage areas, compose the support set of our estimated distribution.
\section{Conclusion}
In this work, we present a distributional perspective on multi-aspect controllable text generation, with experimental results confirming the superiority of our model. Further observations on the 2D projection of the estimated attribute space show that our hypothesis about the attribute space is more realistic. In the future, we plan to explore the correlations between different attribute combinations for more fine-grained control and to capture the bias in datasets in order to eliminate or exploit it.
\section*{Limitations}
Our method has a certain dependence on data, since we need to estimate an attribute space. It is therefore difficult for our method to perform well in a few-shot setting. This disadvantage is not that severe, however, because we only need single-aspect data, which is relatively plentiful in style transfer tasks. Our method is also somewhat sensitive to biases in the data. When the semantic divergence between different aspects in the training data is too large, our aspect gap loss, which aims to reduce the distance among the distributions of the aspects, conflicts with the sentence reconstruction loss. As a result, it may be hard to obtain a reliable intersection in the attribute space.
Computational resources also affect our approach, as our aspect gap loss relies on a batch-level estimation for each aspect. A larger batch size therefore means a more accurate approximation, leaving fewer biases in the attribute space. An alternative strategy for smaller batches is to backpropagate the loss only after accumulating enough distributional samples, which requires more training epochs.
\section*{Ethics Statement}
We are fully aware that text generation technology has the potential to be used maliciously to generate fake, toxic, or offensive content. However, after training on the Detoxification aspect, controllable text generation technology is a powerful tool for combating hate speech and eliminating harmful information in pre-trained language models. In addition, our multi-aspect controllable text generation technology can take Detoxification as a default aspect when controlling other aspects. We believe it is meaningful and beneficial to advance research on controllable text generation.
\section*{Acknowledgements}
Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R\&D Program of China via grant 2020AAA0106502, National Natural Science Foundation of China (NSFC) via grant 62276078 and the Major Key Project of PCL, PCL2021A06.
\normalem
\section{Introduction}
Controllable text generation is a challenging task in natural language generation, which aims to generate fluent text with desired attributes.
Pilot studies attempt single-aspect control either by directly finetuning a conditional model \cite{ziegler2019finetuning, keskarCTRL2019} or, due to the high cost of finetuning large-scale pre-trained language models \cite{NEURIPS2020_1457c0d6, zhang2022opt}, by methods that keep the language model fixed \cite{Dathathri2020Plug}.
Recent works focus on a more practical setting, multi-aspect\footnote{For example, \textit{positive} is an attribute from the sentiment aspect, while \textit{sports} is an attribute from the topic aspect.} controllable text generation. Existing approaches mainly fall into three technical routes: weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative}, multi-objective optimization \cite{kumar2021controlled, mireshghallah-etal-2022-mix}, and prefix-tuning \cite{qian-etal-2022-controllable}. They explore ways to combine controllers learned for single aspects and apply them to a fixed language model, yet they suffer from attribute degeneration caused by the mutual interference of these controllers.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/intersection.pdf}
\vspace{-0.8cm}
\caption{
Probability space of attributes. The \textcolor{HoneyOrange}{orange} background denotes the estimated distribution over natural language. \textcolor{blue}{Blue} and \textcolor{lgreen}{green} areas represent distributions over sentences containing attributes from two different aspects, respectively. Darker regions indicate higher probability. The shaded regions are distributional centers, i.e., the areas with the highest probability density.
}
\label{fig:1}
\end{figure}
We provide a distributional perspective to observe and alleviate this problem.
In the current text generation paradigm, a language model forms an estimated distribution over sentences, with the training data regarded as samples from the natural language distribution \cite{NEURIPS2021_260c2432}.
For single-aspect control, these methods train a classifier or a prefix for each attribute independently, which can be regarded as estimating the center of the distribution over attribute-relevant sentences and then biasing the language model's distribution toward this center.
Correspondingly, when generalizing to multi-aspect control, their fusion strategy directly takes the interpolation or average of these centers, which may be too simplistic.
As shown in Figure \ref{fig:1}, the \textbf{\textcolor{Optimal}{interpolation}} point denotes the position acquired after combining multiple centers in the probability space, and the \textbf{\textcolor{Intersection}{intersection}} represents where oracle sentences that simultaneously satisfy multiple attributes lie.
In the left part of Figure \ref{fig:1}, when the distributions of attributes are symmetric\footnote{We plot distributions of attributes in \S \ref{attributes}.}, the interpolation point indeed falls within the intersection area.
However, there can be a mismatch between the interpolation point and the intersection. For example, as illustrated in the right part of Figure \ref{fig:1}, two skewed distributions intersect only on their tails, leaving the interpolation point outside the intersection area and thus unable to express all desired attributes together.
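A toy two-dimensional illustration (with synthetic points rather than learned representations) makes this concrete: when two skewed point clouds meet only at their tails, the midpoint of their centers can lie in a region containing almost no points of either attribute, whereas the tail region contains points of both.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def arc(theta_lo, theta_hi, n=500, noise=0.02):
    # toy "attribute": points along a circular arc plus small noise
    theta = rng.uniform(theta_lo, theta_hi, n)
    pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return pts + noise * rng.standard_normal(pts.shape)

attr1 = arc(np.pi / 2, np.pi)        # upper-left arc
attr2 = arc(np.pi, 3 * np.pi / 2)    # lower-left arc; tails meet near (-1, 0)

interpolation = 0.5 * (attr1.mean(axis=0) + attr2.mean(axis=0))
tail_point = np.array([-1.0, 0.0])   # where the two clouds actually overlap

def nearest(p, pts):
    return np.linalg.norm(pts - p, axis=1).min()

for name, p in [("interpolation", interpolation), ("tail point", tail_point)]:
    print(name, "-> nearest attr1:", round(nearest(p, attr1), 2),
          "nearest attr2:", round(nearest(p, attr2), 2))
\end{verbatim}
Running this gives a distance of roughly $0.3$--$0.4$ from the interpolated point to either cloud, but only a few hundredths from the tail point, which is exactly the situation our intersection search targets.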
In this paper, different from approximating the intersection area with the interpolation point, we propose a strategy for directly acquiring the intersection.
We first deploy an autoencoder structure to map attribute-relevant sentences to latent representations constituting an estimated attribute space.
With our specially designed constraints, this space can model relationships among attributes.
Afterward, we provide an effective intersection searching algorithm that explores the long-tail regions of the distributions of all desired attributes and iteratively finds where they combine more tightly.
Finally, we utilize a prefix-tuning-based decoder to construct sentences from the searched intersection.
We experiment on three-aspect control with two attributes from the sentiment aspect, four from the topic, and one from detoxification, with datasets IMDb movie reviews \cite{maas-etal-2011-learning}, AGNews \cite{NIPS2015_250cf8b5}, and Jigsaw Toxic Comment Classification Challenge Dataset, respectively. We evaluate the relevance of each attribute independently and calculate their average as the final relevance metric. Besides, we assess the text quality with perplexity and distinctness concerning fluency and diversity.
Results show that our method can significantly outperform strong baseline models on multi-aspect control. Furthermore, our analytical experiments show that our observations fit well with our intuitive assumptions. The main contributions are as follows:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt]
\item We propose a distributional perspective that models multi-aspect control more practically.
\item We provide a method that directly searches for intersections in the attribute space and generates sentences with desired attributes.
\item We experimentally demonstrate the effectiveness of our method on multi-aspect control compared to strong baselines, achieving state-of-the-art performance.
\end{itemize}
\section{Related Work}
Variational autoencoders were often used for controllable text generation in early work \cite{10.5555/3305381.3305545, duan-etal-2020-pre, mai-etal-2020-plug}, where much effort was spent on improving text fluency.
The prosperity of large-scale pre-trained language models \cite{radford2019language} provides more exploration directions for attribute control, such as fine-tuning \cite{ficler-goldberg-2017-controlling, ziegler2019finetuning, keskarCTRL2019}. Recent work has made gratifying progress on single-aspect control \cite{krause-etal-2021-gedi-generative}, leading studies to gradually turn to a more difficult task, multi-aspect control, which includes the following three main approaches.
\paragraph{Weighted Decoding}
As the scale of language models increases rapidly, weighted decoding \cite{Dathathri2020Plug, krause-etal-2021-gedi-generative, yang-klein-2021-fudge, liu-etal-2021-dexperts, gu-etal-2022-improving} becomes a simple and practical choice.
It is a framework that decomposes the probability of a sentence conditioned on attributes into a language model term and a classifier term via Bayes' rule, applied directly at decoding time.
When handling multi-aspect control, it can be easily generalized by interpolating classifiers \cite{lin-riedl-2021-plug}.
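As a generic sketch (not the exact PPLM or GeDi procedure), a single decoding step of such classifier-guided fusion can be written as follows, where the weights correspond to the interpolation of classifiers mentioned above:
\begin{verbatim}
import numpy as np

def weighted_decoding_step(lm_logprobs, attr_logprobs, weights):
    # log p(x_t | ctx, attrs) ~ log p_LM(x_t | ctx) + sum_i w_i log p(a_i | ctx, x_t)
    scores = lm_logprobs.copy()
    for w, cls in zip(weights, attr_logprobs):
        scores += w * cls
    return scores - np.logaddexp.reduce(scores)   # renormalize over the vocabulary

# toy usage: 5-token vocabulary, two attribute classifiers, equal weights
lm = np.log(np.full(5, 0.2))
cls_a = np.log(np.array([0.9, 0.1, 0.3, 0.2, 0.1]))
cls_b = np.log(np.array([0.2, 0.8, 0.3, 0.6, 0.1]))
next_token = int(np.argmax(weighted_decoding_step(lm, [cls_a, cls_b], [1.0, 1.0])))
\end{verbatim}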
\paragraph{Multi-Objective Optimization}
The controllable text generation task is naturally a multi-objective optimization problem when its decoding process is regarded as an optimization objective.
Some approaches, such as DGC \cite{khalifa2020distributional}, Mix\&Match \cite{mireshghallah-etal-2022-mix}, and COLD Decoding \cite{qin2022cold}, adopt Energy-based Models \cite{lecun2006tutorial} to blend multiple objectives.
Others, like MUCOCO \cite{kumar2021controlled}, convert the optimization objectives of multi-aspect control into inequality constraints and then apply the Lagrange multiplier method to this constrained optimization problem.
\paragraph{Prefix-Tuning}
GPT-3 \cite{brown2020language} provides a new paradigm named prompt-based learning \cite{liu2021pre}, which is able to perform few-shot learning on downstream tasks. Prefix-Tuning \cite{li-liang-2021-prefix} leverages the learned lightweight prompts to trigger the conditional generation capability of the language model.
Applying Prefix-Tuning to multi-aspect controllable text generation \cite{yu-etal-2021-attribute-alignment, qian-etal-2022-controllable, carlsson-etal-2022-fine, yang2022tailor} can be regarded as implicitly optimizing multiple objectives.
\begin{figure*}[ht]
\centering
\vspace{-0.5cm}
\resizebox{2\columnwidth}{!}{
\includegraphics[scale=0.55]{pic/model.pdf}
}
\vspace{-0.5cm}
\caption{An overview of our method. \textbf{Top}: Illustration of our autoencoder structure with prefix-tuning deployed on the fixed decoder, where latent representations $\mathcal{H}_i$ constitute an estimated attribute space. \textbf{Bottom Left}: Illustration of attribute classification loss $\mathcal{L}_C$ and aspect gap loss $\mathcal{L}_G$ attached to the attribute space. \textbf{Bottom Right}: Inferencing stage with prefix mapped from the intersection of attributes.}
\label{fig:2}
\end{figure*}
\section{Methodology}
In this section, we first introduce the motivation and overall process of our method, after which we describe each module in detail.
\subsection{Overview}
As illustrated in Figure \ref{fig:2}, our method mainly revolves around the attribute space including estimating the attribute space, searching for intersections, and mapping intersections to sentences.
Firstly, we aim to construct an attribute space from sampled sentences that estimates the real space as accurately as possible. We employ an autoencoder structure whose latent representations denote the points constituting our estimated attribute space. To ensure that this space reliably models the attributes, including their probability distributions and the relationships between different attributes, we further attach three constraints to the representations.
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Reconstruction Loss} $\mathcal{L}_R$ aims to bridge the gap between points in the attribute space and natural attribute-relevant sentences, i.e., to recover the attributes reflected in the content.
\item \textbf{Attribute Classification Loss} $\mathcal{L}_C$ forces the encoder to focus more on capturing attributes by distinguishing points of different attributes from the same aspect.
\item \textbf{Aspect Gap Loss} $\mathcal{L}_G$ %
penalizes the discrepancy between aspects, which is caused by the domain gap among the different data sources used for different aspects.
Inspired by feature alignment \cite{10.1145/1772690.1772767}, we minimize the distances between the distributional centers of every two aspects.
\end{enumerate*}
The second step searches for an intersection area of the desired attributes. If such an area exists, a point inside it has the property that the neighboring points within a tiny surrounding region cover all required attributes. Inspired by this neighborhood view, we design an algorithm that iteratively approaches an area where these attributes bind more tightly.
The third step maps the searched intersection to a Prefix that activates the language model to generate attribute-relevant sentences. To make the language model less sensitive to slight variations, we sample a perturbation vector from a multivariate Gaussian distribution.
\subsection{Estimating Attribute Space}
Given $|\mathbf{A}|$ aspects $\mathbf{A} = \left\{A_1,\cdots,A_{|\mathbf{A}|}\right\}$ with each comprising $|A_t|$ attributes $\left\{a_1^t,\cdots,a_{|A_t|}^t\right\}$, $I_\tau^t$ is an index set representing the identifiers of all sentences with attribute $a^t_\tau$ in the training data. We have $I^t = \bigcup\limits_{\tau=1}^{|A_t|} I_\tau^t, I = \bigcup\limits_{t=1}^{|\mathbf{A}|} I^t$, where $I^t$ is the indices of all sentences with any attribute in aspect $A_t$ and $I$ is the indices of the entire training data.
We encode sentences $\{X_i\}$ from all aspects $\mathbf{A}$ to representations $\{\mathcal{H}_i\}$ with unified mapping parameters $\phi$: $\mathcal{H}_i = \text{Encode}_\phi(X_i)$, where $i \in I$.
\paragraph{Reconstruction Loss $\mathcal{L}_R$}
As in the top of Figure \ref{fig:2}, $\mathcal{L}_R$ is computed in the same way as the autoregressive loss of pre-trained language model $p_{\text{LM}}$:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\begin{aligned}
\label{eq:2}
\mathcal{L}_R &= -\sum\limits_{i\in I} \log p_\text{LM}(X_i|\text{Prefix}_i)\\
\text{Prefix}_i &= \text{MLP}_\theta(\mathcal{H}_i + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
\end{aligned}
\end{equation}
where $X_i$ here is a sample sentence from the entire training set, i.e., $i \in I$.
Besides, $\varepsilon_i$, with a scaling factor $\lambda$, is a perturbation vector sampled from a multivariate gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$ for robustness when reconstructing. The multi-layer perceptron $\text{MLP}_{\theta}$ will map perturbed $\mathcal{H}_i$ to $\text{Prefix}_i$ that can activate the language model to generate text with desired attributes.
It is worth noting that our primary goal is to recover attributes, which means $\mathcal{L}_R$ need not, and preferably should not, converge too well, as long as text fluency is maintained.
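A minimal PyTorch sketch of the mapping in Eq. (\ref{eq:2}) is given below; the two-layer MLP, the layer sizes, and the prefix length are illustrative assumptions rather than a prescribed configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class PrefixMapper(nn.Module):
    # maps a latent representation H_i (plus Gaussian perturbation) to a prefix
    def __init__(self, latent_dim=768, prefix_len=10, model_dim=1024, lam=0.1):
        super().__init__()
        self.lam, self.prefix_len, self.model_dim = lam, prefix_len, model_dim
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 2 * latent_dim), nn.Tanh(),
            nn.Linear(2 * latent_dim, prefix_len * model_dim))

    def forward(self, h):                  # h: (batch, latent_dim)
        eps = torch.randn_like(h)          # perturbation for robustness
        prefix = self.mlp(h + self.lam * eps)
        return prefix.view(-1, self.prefix_len, self.model_dim)
\end{verbatim}
The resulting prefix is fed to the frozen language model, and $\mathcal{L}_R$ is the usual token-level cross-entropy over $X_i$ conditioned on it.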
\paragraph{Attribute Classification Loss $\mathcal{L}_C$}
We force the encoder to focus on attributes by $\mathcal{L}_C$ in the way:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:3}
\begin{aligned}
\mathcal{L}_C = -\sum\limits_{t=1}^{|\mathbf{A}|}\sum\limits_{\tau=1}^{|A_t|} \sum\limits_{i \in I_\tau^t} \log p_{\pi_{t}}(a_\tau^t|\mathcal{H}_i).
\end{aligned}
\end{equation}
Given a sentence representation, $p_{\pi_{t}}$ is a classifier with parameters $\pi_{t}$ that distinguishes among the attributes $\left\{a_\tau^t\right\}$ of aspect $A_t$.
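In code, $\mathcal{L}_C$ amounts to one classification head per aspect with a cross-entropy loss over that aspect's attributes; the linear heads and the attribute counts below are illustrative.
\begin{verbatim}
import torch.nn as nn

class AspectClassifiers(nn.Module):
    # one linear head per aspect; L_C sums each head's cross-entropy over its aspect
    def __init__(self, latent_dim=768, attrs_per_aspect=(2, 4, 2)):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(latent_dim, k)
                                   for k in attrs_per_aspect)
        self.ce = nn.CrossEntropyLoss(reduction="sum")

    def loss(self, h_by_aspect, labels_by_aspect):
        # h_by_aspect[t]: (n_t, latent_dim) representations from aspect A_t
        # labels_by_aspect[t]: (n_t,) attribute indices within aspect A_t
        return sum(self.ce(head(h), y)
                   for head, h, y in zip(self.heads, h_by_aspect, labels_by_aspect))
\end{verbatim}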
\paragraph{Aspect Gap Loss $\mathcal{L}_G$}
We penalize the discrepancy between distributional centers by:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:4}
\mathcal{L}_G = \sum\limits_{\mathclap{1\leq t_1<t_2\leq |\mathbf{A}|}}\quad\;\,\,\left\|\sum\limits_{i \in I^{t_1}} \frac{\mathcal{H}_i}{|I^{t_1}|} - \sum\limits_{j \in I^{t_2}} \frac{\mathcal{H}_j}{|I^{t_2}|}\right\|_2,
\end{equation}
which are the Euclidean distances between every two distinct distributional centers.
When generalizing to a larger scale of aspects, it is relatively expensive to calculate averages over the entire dataset each time the model is updated.
We calculate this loss in practice using a batch-level approximation. We assign each aspect a memory unit to store the latest representation of the aspect's estimated center.
Each time we process a batch of sentences from one aspect, we take the average of their representations as that aspect's center and sum the Euclidean distances to the other aspects' centers stored in memory, which gives the estimated $\mathcal{L}_G$. We then update the memory unit of this aspect with the latest center.
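A sketch of this batch-level approximation with per-aspect memory units is given below; how the memory is handled before every aspect has been seen once is an implementation detail chosen freely here.
\begin{verbatim}
import torch

class AspectGapLoss:
    # batch-level estimate of L_G with one memory unit (stored center) per aspect
    def __init__(self, num_aspects):
        self.centers = [None] * num_aspects

    def __call__(self, h_batch, aspect_id):
        center = h_batch.mean(dim=0)       # center of this aspect's current batch
        others = [c for t, c in enumerate(self.centers)
                  if t != aspect_id and c is not None]
        loss = sum((torch.norm(center - c, p=2) for c in others), torch.zeros(()))
        self.centers[aspect_id] = center.detach()   # refresh this aspect's memory
        return loss
\end{verbatim}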
\begin{algorithm}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Intersection Searching}
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{H}_i, i \in \bigcup\limits_{t=1}^N I^t_{\alpha_t}$ from $N$ attributes\\
\quad \; $\omega_{\alpha_t}$ weight of each attribute
\ENSURE Intersection of $N$ attributes: $\mathcal{\tilde{H}}^*$
\STATE Initialize $M$ candidates $\{\mathcal{\tilde{H}}^0_m\}$
\STATE Iterate $S$ times
\FOR{$s$ in $[0, S-1]$}
\FOR{$m$ in $[1, M]$}
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathbf{0}$
\FOR{$t$ in $[1, N]$}
\STATE $\mathbf{H}\leftarrow\mathop{\text{Nearest}}\limits_{top K}(\mathcal{\tilde{H}}_m^s,\left\{\mathcal{H}_i, i \in I^t_{\alpha_t}\right\})$
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} +\omega_{\alpha_t} \mathop{\text{mean}}(\mathbf{H})$
\ENDFOR
\STATE $\mathcal{\tilde{H}}_m^{s+1}\leftarrow \mathcal{\tilde{H}}_m^{s+1} / \sum\limits_{t=1}^N \omega_{\alpha_t}$
\ENDFOR
\ENDFOR
\STATE $\mathcal{\tilde{H}}^*\leftarrow \text{Select}(\{\mathcal{\tilde{H}}^{S}_m\})$
\end{algorithmic}
\end{algorithm}
During the training stage, our loss function is:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:5}
\mathcal{L} = w_1\mathcal{L}_R + w_2\mathcal{L}_C + w_3\mathcal{L}_G.
\end{equation}
It is worth noting that we only update the parameters $\phi$, $\theta$, and $\left\{\pi_t\right\}$ of the encoder, the MLP layer, and the classifier heads, respectively.
\subsection{Intersection of Attributes}
Suppose there is an intersection point, denoted as $\mathcal{\tilde{H}}^*$, located within the intersection region of attributes $\left\{a^1_{\alpha_1},a^2_{\alpha_2},\cdots,a^N_{\alpha_N}\right\}$ from $N$ different aspects, where $a^t_{\alpha_t}$ is the $\alpha_t$th attribute in aspect $A_t$.
Algorithm \ref{alg1} approximates $\mathcal{\tilde{H}}^*$ by iteratively moving toward a maximally balanced point among the nearest neighbors of the different attributes.
First, we initialize the candidates $\{\mathcal{\tilde{H}}^0_m\}$ by randomly sampling points in the attribute space, calculating their distance to the closest point of each attribute $a^t_{\alpha_t}$, and selecting the top $M$ samples with the smallest average distance to all attributes.
At each iteration $s$, we choose the top-K\footnote{We study the practical meaning and impact of $K$ in \S \ref{sec:effectofk}.} nearest points to $\mathcal{\tilde{H}}^s_m$ for each attribute and update $\mathcal{\tilde{H}}^{s+1}_m$ using the weighted average of these points.
It is worth mentioning that $\omega_{\alpha_t}$ is a weight used to balance the attributes or to favor some of them specifically; a negative $\omega_{\alpha_t}$ can even push the candidate away from a particular attribute.
Finally, we select the best candidate from the last iteration $S$, which is expected to lie in the intersection region, i.e., to be a representation related to all the desired attributes.
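A NumPy sketch of the search loop is given below; the initialization from existing sample points, the final selection criterion, and the values of $M$, $S$, and $K$ are illustrative choices of our own.
\begin{verbatim}
import numpy as np

def search_intersection(points_per_attr, weights, M=10, S=20, K=200, seed=0):
    # points_per_attr: list of (n_t, dim) arrays, one per desired attribute
    rng = np.random.default_rng(seed)

    def avg_nearest(c):
        return np.mean([np.linalg.norm(p - c, axis=1).min()
                        for p in points_per_attr])

    # initialization: keep the M sampled points closest, on average, to all attributes
    pool = np.concatenate(points_per_attr, axis=0)
    pool = pool[rng.choice(len(pool), min(200, len(pool)), replace=False)]
    cands = sorted(pool, key=avg_nearest)[:M]

    for _ in range(S):
        new_cands = []
        for c in cands:
            acc = np.zeros_like(c)
            for w, pts in zip(weights, points_per_attr):
                idx = np.argsort(np.linalg.norm(pts - c, axis=1))[:K]  # top-K nearest
                acc += w * pts[idx].mean(axis=0)
            new_cands.append(acc / sum(weights))
        cands = new_cands

    return min(cands, key=avg_nearest)     # Select: best candidate of the last round
\end{verbatim}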
\subsection{Generation with Intersections}
As illustrated in the bottom right of Figure \ref{fig:2}, we convert the representation $\mathcal{\tilde{H}}^*$ obtained from the intersection area directly into the $\text{Prefix}$ with $\text{MLP}_\theta$ and let the language model generate a multi-attribute sentence $Y$ from input $\mathcal{X}$ as:
\begin{equation}
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{6pt}
\label{eq:1}
\begin{aligned}
Y &= \mathrm{arg}\max\limits_y \; p_\text{LM}(y|\text{Prefix}^*;\mathcal{X})\\
\text{Prefix}^* &=\text{MLP}_{\theta}(\mathcal{\tilde{H}}^* + \lambda \varepsilon_i),\; \varepsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).
\end{aligned}
\end{equation}
When generating several attribute-relevant sentences for one attribute combination, we only need to calculate the intersection for it once.
\section{Experiment}
In this section, we demonstrate the effectiveness of our method on three-aspect control, including sentiment, topic, and detoxification.
\subsection{Multi-Aspect Control Task}
The datasets we use are the same as GeDi \cite{krause-etal-2021-gedi-generative} and Contrastive Prefix \cite{qian-etal-2022-controllable}.
To balance the data scale across all aspects, we randomly sample 10k sentences from each dataset, which is fewer than the number of samples GeDi uses, with the amount divided equally among the attributes of each aspect.
We use the IMDb movie reviews \cite{maas-etal-2011-learning}, the AGNews dataset \cite{NIPS2015_250cf8b5}, and the Jigsaw Toxic Comment Classification Challenge Dataset\footnote{\url{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/}} for sentiment, topic and detoxification aspects, respectively.
The prompts used for text generation are the same as those used in PPLM \cite{Dathathri2020Plug}, with 20 from its bag-of-words experiments and 15 from its discriminator experiments. We experiment with the $8$ combinations of the $3$ aspects, i.e., $2$ sentiments $\times$ $4$ topics $\times$ $1$ detoxification, and generate $5$ completions for each combination and each prompt. In total, each model generates $35 \times 2 \times 4 \times 1 \times 5 = 1400$ sentences.
It is worth noting that we do not specifically use prompts that induce the language model to generate toxic text, making detoxification easier to improve.
To measure the performance on different aspects, we compute the attribute relevance. We finetune a DeBERTa \cite{he2021deberta, he2021debertav3} classifier on the Yelp dataset \cite{NIPS2015_250cf8b5} for sentiment aspect and a classifier for topic utilizing all its remaining data not used during training. We evaluate the non-toxicity with the Google Perspective API\footnote{\url{https://www.perspectiveapi.com}}.
The final performance of a model is determined by the average of these three attribute relevance scores introduced above.
We also use two auxiliary metrics to measure text quality.
One is perplexity calculated by GPT2-large following Contrastive Prefix \cite{qian-etal-2022-controllable}. To ensure that models are not insensitive to changes in different prefixes, we calculate the Distinctness \cite{li-etal-2016-diversity} of sentences generated from different prefixes and average the 1-gram, 2-grams, and 3-grams distinct scores for simplicity.
Moreover, we conduct human evaluation with sentences generated by different models shuffled. Each sentence is rated by three professional evaluators for 3 attribute relevance and text fluency. Evaluators rate each item on a scale of 1 to 5, with 5 representing text highly related to the desired attribute or very fluent.
\subsection{Baselines}
\begin{table*}[ht]
\small
\vspace{-0.3cm}
\centering
\begin{tabular}{l|c|ccc|c|c}
\hline
\hline
\textbf{Methods} & \textbf{Average}↑ (\%) & \textbf{Sentiment}↑ (\%) & \textbf{Topic}↑ (\%) & \textbf{Detoxification}↑ (\%) & \textbf{PPL.}↓ &\textbf{Dist.}↑\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{PPLM} & 71.0 $\pm$ 21.4 & 64.7 $\pm$ 24.8 & 63.5 $\pm$ 22.7 & 84.9 $\pm$\; 6.5 & 62.6 & 62.0\\
\hline
\textbf{GeDi} & 81.4 $\pm$ 14.7 & 76.1 $\pm$ 17.2 & 73.8 $\pm$ 11.3 & 94.2 $\pm$\; 1.9 & 116.6 & 75.1\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Multi-Objective Optimization Based Methods}}\\
\hline
\textbf{MUCOCO} & 73.9 $\pm$ 24.1 & 65.0 $\pm$ 33.7 & 67.2 $\pm$ 18.3 & 89.5 $\pm$\; 3.5 & 405.6 & 49.7\\
\hline
\textbf{Mix\&Match} & 79.7 $\pm$ 21.8 & 73.5 $\pm$ 25.9 & 69.9 $\pm$ 21.1 & 95.8 $\pm$\; 1.9 & 63.0 & 61.8\\
\hline
\hline
\multicolumn{6}{l}{\quad\textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Contrastive Prefix} &&&&&\\
\quad concatenation & 77.2 $\pm$ 18.5 & 67.3 $\pm$ 20.7 & 71.8 $\pm$ 16.5 & 92.6 $\pm$\; 2.9 & 54.6 & 39.9\\
\quad semi-supervised & 81.3 $\pm$ 16.5 & 74.4 $\pm$ 19.6 & 76.9 $\pm$ 16.7 & 92.7 $\pm$\; 3.5 & 31.9 & 43.3\\
\hline
\textbf{Ours} & \textbf{87.4} $\pm$ 10.9 & \textbf{86.7} $\pm$ 10.5 & \textbf{84.8} $\pm$ 14.2
& 90.7 $\pm$\; 7.4 & 28.4 & 49.5\\
\cline{2-7}
\quad w/o $\mathcal{L}_G$ & 80.9 $\pm$ 16.2 & 71.6 $\pm$ 11.7 & 75.9 $\pm$ 18.9 & 95.3 $\pm$\; 2.6 & 71.5 & 58.9\\
\quad w/o $\mathcal{L}_C$ & 62.3 $\pm$ 41.8 & 49.1 $\pm$ 49.8 & 41.7 $\pm$ 36.0 & \textbf{96.0} $\pm$\; 0.1 & 473.0 & 37.0\\
\hline
\hline
\end{tabular}
\caption{Automatic Results on Multi-Aspect Control. Hyperparameters and details are in \S \ref{sec:appendix3}.}
\label{tab:1}
\vspace{-0.2cm}
\end{table*}
\begin{enumerate*}[label=(\Roman*)]
\item \textbf{Weighted Decoding}:
\textbf{PPLM} \cite{Dathathri2020Plug} biases the language model with gradients back-propagated from trained classifiers. \textbf{GeDi} \cite{krause-etal-2021-gedi-generative} influences the decoding process with token probabilities conditioned on attributes.
\item \textbf{Multi-objective Optimization}:
\textbf{MUCOCO} \cite{kumar2021controlled} regards the decoding process as a constrained optimization problem, where the language model is the objective function and attributes are constraints.
\textbf{Mix\&Match} \cite{mireshghallah-etal-2022-mix} controls attributes with energy-based models and generates sentences by masking, sampling, and correcting.
\item \textbf{Prefix-Tuning}:
\textbf{Contrastive Prefix} \cite{qian-etal-2022-controllable} utilizes prefixes to activate the language model to generate attribute-relevant sentences by concatenation or semi-supervision.
\end{enumerate*}
\subsection{Results}
According to the automatic evaluation results in Table \ref{tab:1}, under the multi-aspect setting, we group models based on their type of methods in chronological order.
In addition, we demonstrate their standard deviations, which reflect the stability of models among different attribute combinations.
For weighted decoding, GeDi uses more powerful classifiers than PPLM and performs better on attribute relevance, stability to different combinations, and distinctness while correspondingly worse on perplexity.
Multi-objective optimization methods achieve a favorable performance on attribute relevance while MUCOCO explodes on perplexity due to its non-autoregressive paradigm not being suitable for generating from scratch.
Performance of semi-supervised Contrastive Prefix is similar to GeDi, except for lack of diversity.
Our method performs best on average attribute-related metrics, with at least a $7.3\%$ significant improvement over existing baselines. Our advances mainly come from sentiment and topic aspects, with no less than $13.9\%$ and $10.3\%$ each.
Although our model is not the best on detoxification, it is the most balanced and stable according to the lowest standard deviation on average, $10.9$.
As a prefix-tuning-based method inducing the language model without direct modification, which is naturally good at text fluency, we perform well on perplexity and inherit the performance on diversity.
Furthermore, we conduct ablation on aspect gap loss $\mathcal{L}_G$ and attribute classification loss $\mathcal{L}_C$ separately.
On the one hand, without $\mathcal{L}_G$, we can not alleviate the bias in different training datasets, making it hard to search for the intersection areas. Since training sentences of sentiment and topic aspects are mainly non-toxic, our model focuses more on detoxification rather than struggling for the other two, leading to considerable declines on their relevance while slight improvements on detoxification. Besides, as the distance among sample points from different aspects in the attribute space increases, our model will generate sentences mapped from far more sparse areas, leading to a small decrease on fluency and a subtle increase on diversity.
On the other hand, without $\mathcal{L}_C$, our attribute space will totally collapse. The relevance of sentiment and topic drops drastically while the non-toxicity boosts because model can hardly distinguish representations of different attributes in the same aspect and focus on relatively more effortless detoxification.
Worse still, without distinct representations, our model is required to recover different sentences from similar ones, leading to oscillation in training and hardly generating complete text when inferencing.
Results of human evaluation are in Table \ref{tab:2}, with inter-annotator agreement being $0.36$ in Fleiss’ $\kappa$.
We evaluate GeDi, Contrastive Prefix, and our method and observe that the results are consistent with the automatic ones on sentiment and topic relevance.
The performance of models on detoxification is high and relatively similar, making the automatic results different from the manual ones where the annotators believe that our model does a better job than baselines.
Since perplexity is relatively unreliable, the manually measured fluency of GeDi is much better than that of the Contrastive Prefix. And our method achieves the best fluency.
\begin{table}[t]
\small
\centering
\begin{tabular}{l|cccc}
\hline
\hline
\textbf{Methods} & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{Detox.}↑ &\textbf{Fluency}↑\\
\hline
\hline
\textbf{GeDi} & 2.96 & 2.72 & 4.59 & 3.08\\
\hline
\textbf{Con. Prefix} & 2.84 & 2.90 & 4.40 & 2.26\\
\hline
\textbf{Ours} & \textbf{3.47} & \textbf{3.39} & \textbf{4.71} & \textbf{3.69}\\
\hline
\hline
\end{tabular}
\caption{Human Evaluation on Multi-Aspect Control. }
\label{tab:2}
\end{table}
\begin{table*}[ht]
\small
\centering
\vspace{-0.3cm}
\resizebox{2\columnwidth}{!}{
\begin{tabular}{l|cc|cccc|c}
\hline
\hline
\multirow{2}{*}{\textbf{Methods}} & \multicolumn{2}{c|}{\textbf{Sentiment} (\%)} &\multicolumn{4}{c|}{\textbf{Topic} (\%)} & \multirow{2}{*}{\textbf{Detox.} (\%)}\\
& \textbf{Neg.}& \textbf{Pos.}& \textbf{World}&\textbf{Sports}& \textbf{Business}&\textbf{Sci./Tech.}&\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Weighted Decoding Based Methods}}\\
\hline
\textbf{GeDi} \textit{single-aspect} & 93.9& 70.7& 73.4 & 85.7 & 75.7 & 98.0 & 94.9 \\
\hline
\multirow{8}{*}{\textbf{GeDi}}
& 94.7 & - & 80.0 & - & - & - & 90.6\\
& 84.2 & - & - & 74.8 & - & - & 93.9\\
& 94.9 & - & - & - & 75.7 & - & 96.6\\
& 90.6 & - & - & - & - & 80.1 & 92.8\\
& - & 53.7 & 61.4 & - & - & - & 94.4\\
& - & 60.5 & - & 74.3 & - & - & 95.2\\
& - & 57.6 & - & - & 54.3 & - & 95.7\\
& - & 72.3 & - & - & - & 90.2 & 94.2\\
\cline{2-8}
\qquad \textit{average} & \textbf{91.1} $(- \textbf{2.8})$ & 61.0 $(-9.7)$ & 70.7 $(-2.7)$ & 74.6 $(-11.1)$ & 65.0 $(-10.7)$ & 85.2 $(-12.8)$ & \textbf{94.2} $(-\textbf{0.7})$\\
\hline
\hline
\multicolumn{6}{l}{\quad \textit{Prefix-Tuning Based Methods}}\\
\hline
\textbf{Prefix} \textit{single-aspect} & 88.4 & 90.6 & 74.5 & 85.3 & 93.5 & 93.6 & 93.8\\
\hline
\multirow{8}{*}{\tabincell{l}{\textbf{Contrastive Prefix}\\ \quad semi-supervised}}
& 65.5 & - & \textbf{80.6} & - & - & - & 91.8\\
& 67.2 & - & - & \textbf{90.3} & - & - & 92.5\\
& 56.0 & - & - & - & 79.2 & - & 92.2\\
& 90.0 & - & - & - & - & 93.3 & 84.8\\
& - & 93.5 & 64.8 & - & - & - & 95.1\\
& - & 41.8 & - & 78.5 & - & - & 94.8\\
& - & 87.4 & - & - & 41.7 & - & 95.2\\
& - & 93.6 & - & - & - & 86.7 & 95.3\\
\cline{2-8}
\qquad \textit{average} & 69.7 $(-18.7)$ & 79.1 $(-11.5)$ & \textbf{72.7} $(-\textbf{1.8})$ & \textbf{84.4} $(-\textbf{0.9})$ & 60.5 $(-33.0)$ & 90.0 $(-3.6)$ & 92.7 $(-1.1)$\\
\hline
\multirow{8}{*}{\textbf{Ours}}
& 69.7 & - & 71.7 & - & - & - & 84.1\\
& 78.6 & - & - & 80.0 & - & - & 80.2\\
& \textbf{99.9} & - & - & - & \textbf{96.7} & - & 96.8\\
& 92.8 & - & - & - & - & \textbf{98.0} & 81.7\\
& - & 80.5 & 58.0 & - & - & - & 95.1\\
& - & 84.7 & - & 86.6 & - & - & 94.5\\
& - & 87.6 & - & - & 91.7 & - & \textbf{98.1}\\
& - & \textbf{99.7} & - & - & - & 96.1 & 95.4\\
\cline{2-8}
\qquad \textit{average} & 85.3 $(-3.1)$ & \textbf{88.1} $(-\textbf{2.5})$ & 64.9 $(-9.6)$ & 83.3 $(-2.0)$ & \textbf{94.2} $(+\textbf{0.7})$ & \textbf{96.8} $(+\textbf{3.2})$ & 90.7 $(-3.1)$\\
\hline
\hline
\end{tabular}
}
\caption{Detailed Results on Single-Aspect and Multi-Aspect Control. We demonstrate results on \textit{single-aspect} and \textit{average} results on multi-aspect control with their difference to \textit{single-aspect}, where other rows each represent an attribute combination. Cases are in \S \ref{sec:appendix4}. Detailed results for other baseline models and our ablations are in \S \ref{sec:appendix5}.}
\label{tab:3}
\vspace{-0.2cm}
\end{table*}
\section{Analysis}
\subsection{Effect of Different Attributes and their Combinations}
We illustrate the detailed results of each attribute and their combinations in Table \ref{tab:3}.
GeDi and Prefix-tuning perform differently in \textit{single-aspect} control, each with its advantages. For example, GeDi is dedicated to \textit{negative} with $93.9\%$ relevance, while Prefix-tuning is good at \textit{positive} with $90.6\%$ relevance.
When dealing with multi-aspect control, they inherit such imbalanced characteristics, with \textit{average} relevance of $91.1\%$ and $79.1\%$, respectively. In addition, the baselines decrease correspondingly in the \textit{average} relevance of each attribute compared to \textit{single-aspect}, ranging from $0.7$ to $33.0$.
On average, our model outperforms other baselines on attribute metrics (Table \ref{tab:1}). In detail, our model performs competitively for most attributes compared to another prefix-tuning-based model, Contrastive Prefix. Especially, on attributes like \textit{business} and \textit{sci/tech}, our model significantly improves over another prefix-tuning-based method on multi-aspect control and can even surpass it under \textit{single-aspect} control.
In addition, correlations between attributes vary widely, as in Table \ref{tab:3}.
For example, generally, \textit{positive} fits well with \textit{non-toxic} while \textit{negative} leads to a massive drop in non-toxicity, which is consistent with the intuition that one can hardly praise people and offend them simultaneously.
Besides, \textit{world} and \textit{business} news are often reported negatively, such as war, famine, inflation, etc., making it challenging to combine them with \textit{positive}.
When attributes are not closely correlated, which means that few natural sentences possess these attributes together, our method is more likely to capture such a rarely occurred incident and magnify their frequency.
Take \textit{business} as an example. It is effortless to achieve a fine attribute relevance when performing single-aspect control on \textit{business}, with GeDi achieving $75.7$ and Prefix obtaining $93.5$. After attaching \textit{positive} to \textit{business}, baseline models will suffer from a decline due to their weak correlation, where GeDi and Contrastive Prefix drop to $54.3$ and $41.7$, respectively. In contrast, our method can alleviate this problem by retrieving this unusual co-occurrence in the training sentences and recovering it from the attribute space, achieving a performance of $91.7$, which is close to single-aspect control.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{pic/attribute_space.pdf}
\caption{Projection of 4 attributes from attribute space.}
\label{fig:3}
\end{figure}
When combining business with negative, which is a relatively common combination, there is still some decrease for baseline models. On the contrary, our method can even obtain the performance of $96.7$ that surpasses single-aspect control.
\subsection{Estimated Attribute Space}
We demonstrate part of our estimated attribute space in Figure \ref{fig:3} with four attributes: \textit{\textcolor{sred}{positive}}, \textit{\textcolor{sblue}{negative}}, \textit{\textcolor{syellow}{sports}}, and \textit{\textcolor{sgreen}{sci/tech}} from sentiment and topic aspects.
We project the high-dimensional space to 2D with Principal Component Analysis (PCA).
Consistent with our hypothesis, distributions of \textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sgreen}{sci/tech}} are asymmetric and the intersections lie in the sparse edges of attributes' distribution.
In addition, we project the intersections searched by the \textcolor{smediumvioletred}{baseline}'s strategy and \textcolor{darkred}{ours}, respectively. For \textit{\textcolor{sred}{positive}}-\textit{\textcolor{sgreen}{sci/tech}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{sgreen}{sci/tech}} pairs, the combinations are relatively tight, making it easy to find intersections. However, intersection areas for \textit{\textcolor{sred}{positive}}-\textit{\textcolor{syellow}{sports}} and \textit{\textcolor{sblue}{negative}}-\textit{\textcolor{syellow}{sports}} pairs are considerably sparse.
As shown in enlarged area, the \textcolor{smediumvioletred}{baseline} searched intersection is at the midpoint of the two distributional centers, but this location is not where the attributes intersect. On the contrary, \textcolor{darkred}{our} method can find an intersection in such a sparse region, making various points from the two different attributes appear simultaneously in its tiny surrounding area.
It worth noting that \textit{positive} and \textit{negative} appear to intersect in this projection because they are close in the high-dimensional space. But there is actually no intersection if only projecting these two attributes in \S \ref{sec:pos_neg}.
\subsection{Effect of $K$}
\label{sec:effectofk}
\begin{table}[t]
\small
\setlength{\abovecaptionskip}{0.2cm}
\vspace{-0.3cm}
\centering
\begin{tabular}{r|c|ccc}
\hline
\hline
$\textbf{K}$ &\textbf{Avg.}↑ & \textbf{Sent.}↑ & \textbf{Topic}↑ & \textbf{DeTox.}↑\\
\hline
5000 & \textcolor{sblue}{75.5} & 70.5 & 67.9 & 88.2\\
4000 & \textcolor{sblue}{77.6} & 72.9 & 71.4 & 88.4\\
3000 & \textcolor{sblue}{78.7} & 72.4 & 74.7 & 88.9\\
2000 & \textcolor{sblue}{79.1} & 72.6 & 75.9 & 88.7\\
1500 & \textcolor{sblue}{79.9} & 73.6 & 77.1 & 89.0\\
1000 & \textcolor{sblue}{80.7} & 75.7 & 77.2 & 89.1\\
800 & \textcolor{sblue}{82.9} & 79.3 & 79.2 & 90.3\\
500 & \textcolor{sblue}{85.2} & 83.5 & 81.5 & 90.5\\
300 & \textcolor{sblue}{85.7} & 84.1 & 83.2 & 89.7\\
200 & \textcolor{sred}{\textbf{87.4}} & \textbf{86.7} & \textbf{84.8} & 90.7\\
150 & \textcolor{sgreen}{84.0} & 79.2 & 84.3 & 88.4\\
100 & \textcolor{sgreen}{83.9} & 78.7 & 83.6 & 89.5\\
50 & \textcolor{sgreen}{82.2} & 78.4 & 78.5 & 89.6\\
20 & \textcolor{sgreen}{80.9} & 77.8 & 73.1 & 91.7\\
10 & \textcolor{sgreen}{80.8} & 79.6 & 71.5 & 91.2\\
5 & \textcolor{sblue}{81.4} & 82.9 & 69.3 & 92.1\\
3 & \textcolor{lred}{85.0} & 86.1 & 77.7 & 91.1\\
1 & \textcolor{sgreen}{78.8} & 63.1 & 80.9 & \textbf{92.4}\\
\hline
\hline
\end{tabular}
\caption{Results that vary with $K$.}
\vspace{-0.2cm}
\label{tab:k_analysis}
\end{table}
We analyze the variation of $K$ in the intersection searching algorithm and demonstrate the results in Table \ref{tab:k_analysis}.
Our model reaches a critical point at $K=200$, where the performance is optimal.
On the one hand, as the value of $K$ gradually increases, our method pays less attention to regions where samples are fewer but attributes combine more tightly, and the performance decreases accordingly.
When $K$ reaches 5k, our method degenerates into a plain prefix-tuning model, which treats the intersection as the midpoint of the distributional centers. Its performance is similar to, and slightly inferior to, the concatenation version of Contrastive Prefix in Table \ref{tab:1}.
On the other hand, a smaller $K$ leads to suboptimal performance since the effect of noise becomes non-negligible in the training data.
When $K$ is less than $10$, our model becomes very unstable.
\subsection{Distribution of Attributes}
\label{attributes}
\begin{figure}[t]
\centering
\vspace{-0.3cm}
\includegraphics[width=\columnwidth]{pic/single_2.pdf}
\caption{Distribution of attribute World from Topic.}
\label{fig:4}
\vspace{-0.2cm}
\end{figure}
We project sample points to 2D by PCA, with each attribute projected independently. As shown in Figure \ref{fig:4}, we display a scatterplot of World and conduct a Gaussian kernel density estimation to visualize its probability distribution. The darker area denotes a higher probability, where more representation points of oracle sentences gather, and the region annotated by a red ellipse is the estimated distributional center. As the plot shows, the distribution of World is significantly asymmetric: the center lies in the top part, with the bottom being a sparse long tail. In addition, the distribution is even non-convex, with an isolated cluster in the lower right corner. This observation supports our hypothesis that the practical distributions of attributes are far more complex than symmetric distributions such as the Gaussian distribution. We plot the distributions of other attributes in \S \ref{sec:appendix1}.
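The density visualization described above can be reproduced with a short script; the sketch below assumes \texttt{points} holds the 2D PCA projection of one attribute's sample points and uses a standard Gaussian kernel density estimate.
\begin{verbatim}
# Sketch: scatterplot of one attribute plus a Gaussian KDE of its density.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def plot_attribute_density(points):          # points: (n, 2) projection
    kde = gaussian_kde(points.T)             # fit on the 2D samples
    xs, ys = np.mgrid[points[:, 0].min():points[:, 0].max():200j,
                      points[:, 1].min():points[:, 1].max():200j]
    dens = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
    plt.contourf(xs, ys, dens, cmap="Blues")  # darker = higher probability
    plt.scatter(points[:, 0], points[:, 1], s=2, c="k", alpha=0.3)
    plt.show()
\end{verbatim}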
\section{Discussion on Distributional Lens}
Pilot work such as DGC \cite{khalifa2020distributional} estimates the language distribution with an energy-based model and optimizes this distribution to satisfy constraints by approaching the constraint manifold. Recent distributional approaches like COLD Decoding \cite{qin2022cold} and MuCoLa \cite{kumar2022constrained} model the language and attribute distributions in the same space so as to sample attribute-related sentences with Langevin Dynamics. Concurrent work on the image side, PromptGen \cite{wu2022generative}, simulates the complex distribution of images relevant to target attributes using a deep generative model. However, as a consensual hypothesis in manifold learning, the pre-trained language model estimates a low-dimensional manifold of language in a high-dimensional embedding space, which means most points in the embedding space are not probabilistically modeled by the language model. We believe that placing too much trust in the distributional modeling ability of language models is not a good choice. Our method instead depicts the attribute space with discrete sample points of attributed sentences and makes these discrete points, along with their coverage areas, compose the support set of our estimated distribution.
\section{Conclusion}
In this work, we present a distributional perspective on multi-aspect controllable text generation, with experimental results confirming the superiority of our model. Further observations on the 2D projection of the estimated attribute space show that our hypothesis about the attribute space is well supported. In the future, we plan to explore the correlation between different attribute combinations for more fine-grained control and to capture the bias in datasets in order to eliminate or utilize it.
\section*{Limitations}
Our method has a certain dependence on the data since we need to estimate an attribute space. Therefore, it is difficult for our method to perform well in a few-shot setting. This disadvantage is not that severe, however, because we only need single-aspect data, which is relatively plentiful in style transfer tasks. Another data dependence is that our method is somewhat sensitive to biases in the data. When the semantic divergence of different aspects in the training data is too large, our aspect gap loss, which aims to reduce the distance among the distributions of each aspect, will conflict with the sentence reconstruction loss. As a result, it may be hard to obtain a reliable intersection in the attribute space.
Computational resources also have an impact on our approach, as our aspect gap loss leverages a batch-level estimation for each aspect. Therefore, a larger batch size means a more accurate approximation, leaving fewer biases in the attribute space. An alternative strategy for smaller batches is to backpropagate the loss only after accumulating enough distributional samples, as sketched below, which requires more training epochs.
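The following sketch illustrates this alternative with plain gradient accumulation; \texttt{model}, \texttt{loader}, and the two loss functions are placeholders rather than our actual implementation.
\begin{verbatim}
# Sketch: accumulate gradients over several small batches so that each
# update of the aspect gap loss is based on more distributional samples.
def train_epoch(model, loader, optimizer, accum_steps=8):
    optimizer.zero_grad()
    for step, batch in enumerate(loader):
        loss = model.reconstruction_loss(batch) + model.aspect_gap_loss(batch)
        (loss / accum_steps).backward()      # scale so gradients average
        if (step + 1) % accum_steps == 0:    # one step per accum_steps batches
            optimizer.step()
            optimizer.zero_grad()
\end{verbatim}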
\section*{Ethics Statement}
We are fully aware that text generation technology has the potential to be used maliciously to generate fake, toxic, or offensive content. However, after training on the Detoxification aspect, controllable text generation technology is a powerful weapon for combating hate speech and eliminating harmful information in pre-trained language models. In addition, our multi-aspect controllable text generation technology can take Detoxification as a default aspect when controlling other aspects. We believe it is meaningful and beneficial to advance research on controllable text generation.
\section*{Acknowledgements}
Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R\&D Program of China via grant 2020AAA0106502, National Natural Science Foundation of China (NSFC) via grant 62276078 and the Major Key Project of PCL, PCL2021A06.
\normalem
\section{Introduction}
With the availability of high-intensity radioactive beams at many
facilities as well as a number of next generation beam facilities
being constructed or being planned \cite{rib1,rib2}, the studies
on the role of isospin degree of freedom have recently attracted a
lot of attention in both nuclear physics and astrophysics. The
ultimate goal of such studies is to extract information on the
isospin dependence of in-medium nuclear effective interactions as
well as the equation of state (EOS) of isospin-asymmetric nuclear
matter. The latter quantity, especially the symmetry energy term, is
important not only to the nuclear physics community, as it sheds light
on the structure of radioactive nuclei and on reaction dynamics induced
by rare isotopes, but also to the astrophysics community, as it acts as
a probe for understanding the evolution of massive stars and
supernova explosions \cite{marr10}. It is worth mentioning that the
equation of state of symmetric nuclear matter has been constrained
up to densities 5 times the normal nuclear matter density through
the measurements of transverse flow as well as its disappearance
along with other collective flows (like radial flow, elliptic
flow) \cite{daniel02} and of subthreshold kaon production in
relativistic nucleus-nucleus collisions \cite{fuch06}.
\par
Although the nuclear symmetry energy at normal nuclear matter
density is known to be around 30 MeV \cite{li02}, its values at
other densities are poorly known. Heavy-ion collisions induced by
radioactive beams provide unique opportunities to investigate the
isospin-dependent properties of asymmetric nuclear matter,
particularly the density dependence of symmetry energy
\cite{li98}. Experimentally symmetry energy is not a directly
measurable quantity and has to be extracted from observables
related to symmetry energy. Over the last decade a large number of
experimental observables have been proposed like neutron/proton
ratio of emitted nucleons \cite{li97}, the neutron-proton
differential flow \cite{li00}, the t/$^{3}$He \cite{chen303},
$\pi^{-}/\pi^{+}$ \cite{li02,gait04}, $\Sigma^{-}/\Sigma^{+}$
\cite{qli205}, and K$^{0}/K^{+}$ \cite{ferini06} ratios and so on.
A recent analysis of data has led to a symmetry energy term of the
form
E$_{sym}$ $\simeq$
31.6$(\frac{\rho}{\rho_{0}})^{\gamma}$ MeV with $\gamma$ =
0.4-1.05 for densities between 0.1$\rho_{0}$ and 1.2$\rho_{0}$
\cite{shetty10}. However, for all the above-mentioned observables
the Coulomb force of charged particles plays an important role and
competes strongly with the symmetry energy. Recently Gautam and Sood
\cite{gaum10} studied the relative contribution of Coulomb force
and symmetry energy in isospin effects on the collective
transverse flow as well as its disappearance for isobaric systems
throughout the mass range and colliding geometry. They clearly
demonstrated the dominance of Coulomb repulsion over the symmetry
energy. The collective transverse in-plane flow disappears at a
particular energy called as balance energy \cite{gaum10,krof89}.
In recent communication, Gautam \emph{et al}. \cite{gaum210} has
studied the transverse momentum for a neutron rich system
$^{60}Ca$+$^{60}Ca$ in the Fermi energy as well as at high
energies. There they find that transverse momentum is sensitive to
the symmetry energy as well as its density dependence in the Fermi
energy region. Motivated by those results we here study the
E$_{bal}$ as a function of N/Z and N/A of the system for an isotopic
series. We here choose the isotopes so that the Coulomb repulsion
is same for all the systems, since as mentioned previously that
Coulomb plays much dominant role as compared to symmetry energy in
isospin effects. Here we will demonstrate that the N/Z (N/A) dependence
of E$_{bal}$ for the isotopes of same element is a sensitive probe
for the symmetry energy as well as its density dependence. To
check the sensitivity of N/Z (N/A) dependence of E$_{bal}$ towards
density dependence of symmetry energy, we have calculated the
E$_{bal}$ throughout the isotopic series for different forms of
symmetry energy F$_{1}(u)$, F$_{2}(u)$, and F$_{3}(u)$,
where\emph{ u} = $\frac{\rho}{\rho_{0}}$. The different forms are
described later. The present study is carried out within the
framework of Isospin-dependent Quantum Molecular Dynamics
(IQMD)\cite{hart98} Model. Section 2 describes the model in brief.
Section 3 explains the results and discussion and section 4
summarizes the results.
\section{The model}
The IQMD model treats different charge states of nucleons, deltas
and pions explicitly, as inherited from the
Vlasov-Uehling-Uhlenbeck (VUU) model. The IQMD model has been used
successfully for the analysis of a large number of observables
from low to relativistic energies. One of its versions, the QMD model,
has been quite successful in explaining various phenomena such as
multifragmentation \cite{kumar}, collective flow \cite{sood1}, and
hot and dense nuclear matter \cite{leh1} as well as particle
production \cite{huang}. The isospin degree of freedom enters into
the calculations via symmetry potential, cross sections and
Coulomb interaction.
\par
In this model, baryons are represented by Gaussian-shaped density distributions
\begin{equation}
f_{i}(\vec{r},\vec{p},t) =
\frac{1}{\pi^{2}\hbar^{2}}\exp(-[\vec{r}-\vec{r_{i}}(t)]^{2}\frac{1}{2L})
\times \exp(-[\vec{p}- \vec{p_{i}}(t)]^{2}\frac{2L}{\hbar^{2}})
\end{equation}
Nucleons are initialized in a sphere with radius R = 1.12 A$^{1/3}$ fm, in accordance with the liquid-drop model.
Each nucleon occupies a volume of \emph{h$^{3}$}, so that phase space is uniformly filled.
The initial momenta are randomly chosen between 0 and Fermi momentum ($\vec{p}$$_{F}$).
The nucleons of the target and projectile interact by two- and three-body Skyrme forces, Yukawa potential, Coulomb interactions,
and momentum-dependent interactions (MDI). In addition to the use of explicit charge states of all baryons and mesons, a symmetry potential between protons and neutrons
corresponding to the Bethe-Weizsacker mass formula has been included. The hadrons propagate using Hamilton equations of motion:
\begin {eqnarray}
\frac{d\vec{{r_{i}}}}{dt} = \frac{d\langle H
\rangle}{d\vec{p_{i}}};& & \frac{d\vec{p_{i}}}{dt} = -
\frac{d\langle H \rangle}{d\vec{r_{i}}}
\end {eqnarray}
with
\begin {eqnarray}
\langle H\rangle& =&\langle T\rangle+\langle V \rangle
\nonumber\\
& =& \sum_{i}\frac{p^{2}_{i}}{2m_{i}} + \sum_{i}\sum_{j>i}\int
f_{i}(\vec{r},\vec{p},t)V^{ij}(\vec{r}~',\vec{r})
\nonumber\\
& & \times f_{j}(\vec{r}~',\vec{p}~',t) d\vec{r}~ d\vec{r}~'~
d\vec{p}~ d\vec{p}~'.
\end {eqnarray}
The baryon potential\emph{ V$^{ij}$}, in the above relation, reads as
\begin {eqnarray}
\nonumber V^{ij}(\vec{r}~'-\vec{r})& =&V^{ij}_{Skyrme} + V^{ij}_{Yukawa} +
V^{ij}_{Coul} + V^{ij}_{mdi} + V^{ij}_{sym}
\nonumber\\
& =& [t_{1}\delta(\vec{r}~'-\vec{r})+t_{2}\delta(\vec{r}~'-\vec{r})\rho^{\gamma-1}(\frac{\vec{r}~'+\vec{r}}{2})]
\nonumber\\
& & +t_{3}\frac{\exp(|(\vec{r}~'-\vec{r})|/\mu)}{(|(\vec{r}~'-\vec{r})|/\mu)}+
\frac{Z_{i}Z_{j}e^{2}}{|(\vec{r}~'-\vec{r})|}
\nonumber \\
& & +t_{4}\ln^{2}[t_{5}(\vec{p}~'-\vec{p})^{2} +
1]\delta(\vec{r}~'-\vec{r})
\nonumber\\
& & +t_{6}\frac{1}{\varrho_{0}}T_{3i}T_{3j}\delta(\vec{r}~'-\vec{r}).
\end {eqnarray}
Here \emph{Z$_{i}$} and \emph{Z$_{j}$} denote the charges of
\emph{ith} and \emph{jth} baryon, and \emph{T$_{3i}$} and
\emph{T$_{3j}$} are their respective \emph{T$_{3}$} components
(i.e., $1/2$ for protons and $-1/2$ for neutrons). The
parameters\emph{ $\mu$} and \emph{t$_{1}$,....,t$_{6}$} are
adjusted to the real part of the nucleonic optical potential.
For the density dependence of the nucleon optical potential, a standard Skyrme-type parametrization is employed.
We also use the isospin- and energy-dependent cross
section $\sigma$ = 0.8 $\sigma_{nn}^{free}$.
The details about the elastic and inelastic cross sections for
proton-proton and proton-neutron collisions can be found in
\cite{hart98,cug}. The cross sections for neutron-neutron
collisions are assumed to be equal to the proton-proton cross
sections. Explicit Pauli blocking is also included; i.e. Pauli
blocking of the neutrons and protons is treated separately. We
assume that each nucleon occupies a sphere in coordinate and
momentum space. This trick yields the same Pauli blocking ratio as
an exact calculation of the overlap of the Gaussians would yield.
We calculate the fractions P$_{1}$ and P$_{2}$ of final phase
space for each of the two scattering partners that are already
occupied by other nucleons with the same isospin as that of
scattered ones. The collision is blocked with the probability
\begin {equation}
P_{block} = 1-[1 - min(P_{1},1)][1 - min(P_{2},1)],
\end {equation}
and, correspondingly is allowed with the probability 1 -
P$_{block}$. For a nucleus in its ground state, we obtain an
averaged blocking probability $\langle P_{block}\rangle$ = 0.96.
Whenever an attempted collision is blocked, the scattering
partners maintain the original momenta prior to scattering.
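For clarity, the blocking test of eq. (5) can be written as the following short sketch (the function names are illustrative):
\begin{verbatim}
# Sketch of the Pauli blocking test, eq. (5): an attempted collision is
# blocked with probability P_block and allowed otherwise.
import random

def collision_allowed(p1, p2):
    # p1, p2: occupied fractions of the final phase-space cells
    p_block = 1.0 - (1.0 - min(p1, 1.0)) * (1.0 - min(p2, 1.0))
    return random.random() > p_block         # True: scattering accepted
\end{verbatim}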
The different forms of symmetry energy are obtained by changing
the density dependence of the potential part of the symmetry
energy (last term in eq. (4)). The various forms are F$_{1}(u)
\propto u$, F$_{2}(u) \propto u^{0.4}$, F$_{3}(u) \propto u^{2}$
(where \emph{ u} = $\frac{\rho}{\rho_{0}}$). F$_{4}$ represents
calculations without symmetry potential.
\par
\section{Results and discussion}
We have simulated several thousand events at incident energies around
the balance energy, in small steps of 10 MeV/nucleon, for each
isotopic system of Ca+Ca having N/Z (N/A) varying from 1.0 to 2.0 (0.5-0.67), i.e.,
$^{40}$Ca+$^{40}$Ca, $^{44}$Ca+$^{44}$Ca,
$^{48}$Ca+$^{48}$Ca, $^{52}$Ca+$^{52}$Ca,
$^{56}$Ca+$^{56}$Ca, and $^{60}$Ca+$^{60}$Ca, for
the semicentral colliding geometry range of b/b$_{max}$ = 0.2 - 0.4. Such
systematic studies performed at low incident energies using
various fusion models have shown a linear enhancement in the
fusion probabilities with neutron content \cite{puri1}. We use
a soft equation of state without and with MDI, labeled as Soft and SMD,
respectively. The calculations with this choice of equation of
state and cross section were in good agreement with the data
throughout the colliding geometry \cite{gaum310}. The IQMD model
has also been able to reproduce other data (e.g., high-energy
proton spectra, gamma production) at incident energies
relevant in this paper \cite{ger98,gamm}. The reactions are
followed till the transverse flow saturates. The saturation time
is around 100 fm/c for the systems in the present study. For the
transverse flow, we use the quantity "\textit{directed transverse
momentum $\langle p_{x}^{dir}\rangle$}" which is defined as
\cite{sood,leh}
\begin {equation}
\langle{p_{x}^{dir}}\rangle = \frac{1} {A}\sum_{i=1}^{A}{sign\{
{y(i)}\} p_{x}(i)},
\end {equation}
where $y(i)$ is the rapidity and $p_{x}$(i) is the momentum of
$i^{th}$ particle. The rapidity is defined as
\begin {equation}
y(i)= \frac{1}{2}\ln\frac{{\textbf{{E}}}(i)+{{\textbf{p}}}_{z}(i)}
{{{\textbf{E}}}(i)-{{\textbf{p}}}_{z}(i)},
\end {equation}
where ${\textbf{E}}(i)$ and ${\textbf{p}_{z}}(i)$ are,
respectively, the energy and longitudinal momentum of $i^{th}$
particle. In this definition, all the rapidity bins are taken into
account. A straight line interpolation is used to calculate the
E$_{bal}$. It is worth mentioning that the E$_{bal}$ has the same
value for all fragment types \cite{west93,pak97,west98,cuss}.
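Since E$_{bal}$ is obtained from a straight-line interpolation of $\langle p_{x}^{dir}\rangle$ versus incident energy, the extraction can be sketched as follows (a simple illustration, not the IQMD code itself):
\begin{verbatim}
# Sketch: directed transverse momentum, eq. (6), and the balance energy
# from a linear fit of <p_x^dir> versus incident energy.
import numpy as np

def directed_px(y, px):                 # y: rapidities, px: x-momenta
    return np.mean(np.sign(y) * px)

def balance_energy(energies, px_dir):   # values over the scanned energies
    slope, intercept = np.polyfit(energies, px_dir, 1)
    return -intercept / slope           # energy where <p_x^dir> vanishes
\end{verbatim}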
\begin{figure}[!t] \centering
\vskip 0.5cm
\includegraphics[angle=0,width=10cm]{fig1.eps}
\vskip 0.5cm
\caption{(Color online) E$_{bal}$ as a function of N/Z (upper panel) and N/A (lower panel) of system
for E$_{sym} \propto F_{1} (u)$ and F$_{4}$. Lines are linear fit proportional to m. Various symbols are
explained in the text.}\label{fig3}
\end{figure}
In fig. 1(a) we display the E$_{bal}$ as a function of
N/Z of the system. Solid green circles represent the calculated
E$_{bal}$. Lines are the linear fit to E$_{bal}$. We see that
E$_{bal}$ follows a linear behavior $\propto$ m$\ast$N/Z. As the
N/Z of the system increases, the mass of the system increases due
to addition of neutron content. In addition, the effect of
symmetry energy also increases with increase in N/Z. To check the
relative contribution of increase in mass with N/Z and symmetry
energy towards the N/Z dependence of E$_{bal}$, we make the
strength of symmetry energy zero and calculate E$_{bal}$. The
results are displayed by open circles in fig. 1(a). E$_{bal}$
again follows a linear behavior $\propto$ m$\ast$N/Z. However, E$_{bal}$ decreases only very slightly with increasing
N/Z, whereas when we also include the symmetry energy
in our calculations, $\mid m \mid$ increases by a factor of 3, which
shows that the N/Z dependence of E$_{bal}$ is highly sensitive
to the symmetry energy. The slight decrease in E$_{bal}$ with
N/Z (for calculations without symmetry energy) is due to the increase
in the number of nucleon-nucleon collisions. To further explore this
point, we switch off the symmetry energy and also make the cross
section isospin independent (i.e. $\sigma_{np}$ = $\sigma_{nn}$)
and calculate E$_{bal}$ for the two extreme N/Z values. The results are displayed in
fig. 1(a) by open squares. Again E$_{bal}$ follows a linear
behavior. We see that the E$_{bal}$ for both $^{40}$Ca +
$^{40}$Ca and $^{60}$Ca + $^{60}$Ca increases as expected.
However, the increase in E$_{bal}$ for the system with N/Z = 1 is
larger than that for the system with N/Z = 2. This is because
with increasing N/Z the neutron number increases, due to which
the numbers of neutron-neutron and neutron-proton collision pairs increase.
However, the increase in the number of neutron-neutron collision
pairs is much larger than that of neutron-proton collision
pairs. Therefore, the relative probability of a neutron-proton collision is
much smaller in the system with N/Z = 2. That is why the effect of
the isospin dependence of the cross section decreases with increasing
N/Z.
\par
In fig. 1(b), we display E$_{bal}$ as a function of N/A of the system. Symbols have the same meaning as in fig. 1(a). Again E$_{bal}$ follows
a linear behavior, with m = -191 and -68 for F$_{1}(u)$ and F$_{4}$, respectively. However, the percentage difference $\Delta E_{bal}$\% (where $\Delta E_{bal}$\% =
$\frac{E_{bal}^{F_{1}(u)}-E_{bal}^{F_{4}}}{E_{bal}^{F_{1}(u)}}$) is the same (about 65\%) in both figs. 1(a) and 1(b), which
shows that the effect of the symmetry energy is the same whether we discuss it in terms of N/Z or N/A.
\begin{figure}[!t] \centering
\vskip 0.5cm
\includegraphics[angle=0,width=16cm]{fig2.eps}
\vskip 0.5cm
\caption{(Color online) E$_{bal}$ as a function of N/Z (left panels) and N/A (right panels) of system
for isospin independent cross section for (a) and (c) N/Z = 1.0 to 2.0 (b) and (d)
N/Z = 1.2 to 2.0. Lines are linear fit proportional to m. Various symbols are explained in the
text.}\label{fig3}
\end{figure}
As stated in the literature, the isospin dependence
of collective flow and its disappearance has been explained as the
competition among various reaction mechanisms, such as nn
collisions, the symmetry energy, the surface properties of the colliding
nuclei, and the Coulomb force \cite{li}. Here we aim to show that the
N/Z and N/A dependence of E$_{bal}$ is sensitive to the symmetry energy
only. Since we are taking an isotopic
series, the effect of the Coulomb force will be the same for all the reactions.
We have also checked that the N/Z dependence of E$_{bal}$ is insensitive to the
EOS of symmetric nuclear matter. Moreover, as mentioned previously, the
equation of state of symmetric nuclear matter has been constrained up to
densities five times the normal matter density. In the present case, as the N/Z of the system increases, the number of neutrons also increases.
Since we are using an isospin-dependent nn cross section, to check the sensitivity of the N/Z and N/A
dependence of E$_{bal}$ to the isospin dependence of the cross section, we calculate E$_{bal}$ throughout
the isotopic series by making the cross section isospin independent (fig. 2(a), open orange triangles, left panels).
Again E$_{bal}$ follows a linear behavior, with m = -40. We find
that although E$_{bal}$ for an individual system
is very sensitive to the isospin dependence of the cross section,
the N/Z dependence of E$_{bal}$ (for the isotopic series) is much less sensitive to it.
\begin{figure}[!t] \centering
\vskip 0.5cm
\includegraphics[angle=0,width=10cm]{fig3.eps}
\vskip 0.5cm
\caption{(Color online) E$_{bal}$ as a function of N/Z (upper panel) and N/A (lower panel) of system
for E$_{sym} \propto F_{2} (u)$ and F$_{3} (u)$. Lines are linear fit proportional to m. Various symbols
are explained in the text.}\label{fig3}
\end{figure}
\par
In fig. 2(b) (left panels), we show E$_{bal}$ as a function of N/Z of the
system for the N/Z range from 1.2 to 2.0. We find that the sensitivity of the N/Z dependence of E$_{bal}$
towards the isospin dependence of the
cross section decreases further: now m is -34 (-37) for
calculations with (without) the isospin dependence of the cross section.
Thus the N/Z dependence of E$_{bal}$ for neutron-rich isotopes is
sensitive only to the symmetry energy. In figs. 2(c) and 2(d) (right panels) we display plots similar to those in
the corresponding left
panels, but now with E$_{bal}$ as a function of N/A of the system. Again the percentage difference between the two curves is the same
in both upper panels (about 18\%) and the same in the lower panels as well (about 8\%).
\par
In figs. 3(a) and 3(b) we display the N/Z (N/A) dependence of
E$_{bal}$ for different forms of the symmetry energy: F$_{1}$(u)
(solid circles), F$_{2}$(u) (diamonds), and F$_{3}$(u)
(pentagons). In all the cases E$_{bal}$ follows a linear
behavior. Clearly, the N/Z (N/A) dependence of E$_{bal}$ is sensitive to the
density dependence of the symmetry energy as well. For a fixed N/Z (N/A), the stiff symmetry energy F$_{1}$(u) yields
a lower E$_{bal}$ than the soft F$_{2}$(u), whereas the super-stiff
symmetry energy F$_{3}$(u) yields a higher E$_{bal}$ than
F$_{2}$(u).
\begin{figure}[!t] \centering
\vskip -1cm
\includegraphics[width=10cm]{fig4.eps}
\caption{(Color online) The time evolution of $<p_{x}^{dir}>$ for
different forms of symmetry energy for different bins at
b/b$_{max}$=0.2-0.4 . Lines are explained in the
text.}\label{fig2}
\end{figure}
To explain the above mentioned feature, we calculate for $^{60}$Ca
+ $^{60}$Ca the transverse flow of particles having $\rho/\rho_{0}
\leq 1$ (denoted as BIN 1) and particles with $\rho/\rho_{0} > 1$
(denoted as BIN 2), separately at all time steps for symmetry
energy F$_{1}$(u), F$_{2}$(u), and F$_{3}$(u). The incident energy
is taken to be 100 MeV/nucleon. The results are displayed in fig.
4. Solid (dashed) lines represent the p$_{x}^{dir}$ of particles
lying in BIN 1 (BIN 2). The dotted line represents the total
$<p_{x}^{dir}>$. We see that the total $<p_{x}^{dir} >$ is maximum
for stiff symmetry energy and minimum for super stiff symmetry
energy. During the initial stages of the reaction, $<p_{x}^{dir}
>$ due to particles lying in BIN 1 remains positive for F$_{1}$(u)
and F$_{2}$(u) because in the spectator region, repulsive symmetry
energy will accelerate the particles away from the overlap zone.
The effect is more pronounced for F$_{1}$(u) as compared to
F$_{2}$(u). Moreover, for F$_{1}$(u) and F$_{2}$(u), this interval
is about 5-25 fm/c and 5-20 fm/c, respectively. Although for
F$_{2}$(u) the effective strength of the symmetry energy is larger for
low-density particles than for F$_{1}$(u), in the central dense zone
the effective strength of F$_{2}$(u) is smaller, i.e. F$_{2}$(u) is
less repulsive there. Therefore, for F$_{2}$(u) the particles lying in
the spectator region feel a stronger pull towards the central dense
zone than in the case of F$_{1}$(u). That is why during the initial stages the peak
value of $<p_{x}^{dir}
>$ as well as the duration for which it remains positive is less
for F$_{2}$(u) as compared to F$_{1}$(u) (compare the shaded areas in
figs. 4(a) and 4(b)).
This decides the value of $<p_{x}^{dir} >$ at saturation, which is more for F$_{1}$(u) as compared to F$_{2}$(u). In case of F$_{3}$(u) (fig. 4(c)) for particles lying in BIN 1,
i.e. ($\rho/\rho_{0} \leq 1$ ), the strength of symmetry energy
will be much smaller which is not sufficient to push the particles
away from the overlap zone. Therefore, the $<p_{x}^{dir} >$ of BIN
1 particles remains zero during the initial stages. This leads to
least value of final state $<p_{x}^{dir} >$ for super stiff
symmetry energy as compared to stiff and soft symmetry energy. The
$<p_{x}^{dir}>$ due to particles in BIN 2 (dashed line) decreases
in a very similar
manner for all the different symmetry energies between 0-10 fm/c. Between 10-25 fm/c, $<p_{x}^{dir}>$ for
F$_{3} (u)$ decreases more sharply than in the case of F$_{1} (u)$ and
F$_{2} (u)$. This is because in this time interval particles
from BIN 1 enter BIN 2, and the $<p_{x}^{dir}>$ of particles
entering BIN 2 from BIN 1 in the case of F$_{1} (u)$ and F$_{2} (u)$
will be less negative, due to the stronger repulsive symmetry energy,
than in the case of F$_{3}$ (u) (see also Ref. \cite{gaum210}).
During the expansion phase, i.e. after 30 fm/c, the total
$<p_{x}^{dir} >$ and the $<p_{x}^{dir} >$ of BIN 1 particles overlap
as expected. Therefore, the effect of the symmetry energy on the low-density
particles during the initial stages decides the fate of the
final $<p_{x}^{dir} >$ and hence E$_{bal}$.
\begin{figure}[!t] \centering
\vskip 0.5cm
\includegraphics[angle=0,width=16cm]{fig5.eps}
\vskip 0.5cm
\caption{ (a) and (b) (upper panels) E$_{bal}$ as a function of N/Z (left panel) and N/A (right panel) of system for E$_{sym}
\propto F_{1} (u)$ with $^{40}$Ca as
target (stars). (c) and (d) (lower panels) E$_{bal}$ as a function of N/Z (left panel) and N/A (right panel) of system for E$_{sym}
\propto F_{1} (u)$ with SMD EOS (left triangles). Circles represent the values of E$_{bal}$ as in fig. 1. Lines are linear fit proportional to m.}\label{fig3}
\end{figure}
\par
Since one cannot use radioactive isotopes as targets,
as a next step we fix the target as the stable isotope
$^{40}$Ca, vary the projectile from $^{40}$Ca to $^{60}$Ca, and
calculate E$_{bal}$. In this case the N/Z (N/A) of the reaction varies
between 1 and 1.5 (0.5 and 0.6) and the asymmetry $\delta = \frac {A_{1}-A_{2}}{A_{1}+A_{2}}$
of the reaction varies from 0
to 0.2. The results are displayed by solid green stars in figs. 5(a) and (b) (upper panels).
The solid green circles represent the calculations for symmetric
reactions with N/Z (N/A) varying from 1 to 2 (0.5 to 0.67), i.e. $^{40}$Ca+$^{40}$Ca
to $^{60}$Ca+$^{60}$Ca. Lines represent the linear fit $\propto$
m. We see that N/Z (N/A) dependence of E$_{bal}$ is same for both the
cases. We also find that when we use
the stable target $^{40}$Ca with the radioactive projectile $^{60}$Ca,
the N/Z (N/A) decreases from 2 (0.67) in the case of $^{60}$Ca+$^{60}$Ca to 1.5 (0.6) for
$^{60}$Ca+$^{40}$Ca, so E$_{bal}$ also decreases. The
E$_{bal}$ for
$^{60}$Ca+$^{40}$Ca has the same value as for the symmetric reaction with
N/Z (N/A) = 1.5 (0.6), i.e. the value of E$_{bal}$ is decided by the N/Z (N/A) of the system and is independent of the asymmetry
of the reaction, in agreement with \cite{supriya}.
\par
It has also been reported in the literature that the MDI drastically affects
the collective flow as well as its disappearance \cite{soodmdi}. To
check the influence of the MDI on the N/Z (N/A) dependence of E$_{bal}$, we
calculate E$_{bal}$ for the whole N/Z (N/A) range from 1 to 2 (0.5 to 0.67) for the symmetric reactions with
the SMD equation of state and the symmetry potential F$_{1}(u)$. The
results are shown in figs. 5(c) and (d) (lower panels) by solid left triangles. We find that although the MDI
drastically changes the absolute value of E$_{bal}$ (by about
30\%), the N/Z (N/A) dependence of E$_{bal}$ remains unchanged
on inclusion of the MDI. Therefore, the dependence of E$_{bal}$ as a
function of N/Z (N/A) on the other forms of the symmetry energy
(F$_{2}(u)$ and F$_{3}(u)$) is also expected to be preserved on
inclusion of the MDI.
\section{Summary}
We have shown that the N/Z (N/A) dependence of E$_{bal}$ for
the isotopic series of Ca+Ca is a sensitive probe of
the symmetry energy as well as of its density dependence at densities
higher than the saturation density, and is insensitive to other isospin
effects like the Coulomb repulsion and the isospin dependence of the nucleon-nucleon cross
section. We have also studied the effect of the MDI on the N/Z (N/A) dependence of E$_{bal}$. We find
that although the MDI influences E$_{bal}$ drastically, the N/Z (N/A) dependence of E$_{bal}$ remains unchanged on inclusion of the MDI.
\par
This work has been supported by a grant from Indo-French Centre
For The Promotion Of Advanced Research (IFCPAR) under project no.
4104-1.
\section{Introduction}\label{introduction}
\IEEEPARstart{D}{eep} neural networks (DNNs) have grown into the mainstream tools in many fields, and thus their vulnerability has attracted much attention in recent years. An obvious example is the existence of adversarial samples \cite{akhtar2018threat}, which are quite similar to the clean ones but are able to cheat the DNNs into producing incorrect predictions with high confidence. Various attack methods to craft adversarial samples have been proposed, such as FGSM \cite{goodfellow2014explaining}, C\&W \cite{carlini2017towards}, PGD \cite{madry2017towards}, Type I \cite{tang2019adversarial} and so on. Generally speaking, when the victim network is exposed to the attacker, one can easily achieve an efficient attack with a very high success rate.
Although white-box attacks can easily cheat DNNs, current users do not actually worry much about them, since it is almost impossible to obtain the complete information, including the structure and the parameters, of the victim DNNs. If this information is kept well, one has to use black-box attacks, which can be roughly categorized into query-based approaches \cite{cheng2019improving, ilyas2018prior, guo2019subspace} and transfer-based approaches \cite{papernot2017practical, moosavi2017universal, dong2019evading}. The former estimates the gradient by querying the victim DNN. However, until now, the existing query-based attacks still need massive numbers of queries, which can be easily detected by defense systems. Transfer-based attacks rely on the similarity between the victim DNN and a DNN in the attacker's hands, which serves as the \emph{surrogate model} in a black-box attack. It is expected that white-box attacks on the surrogate model can also invade the victim DNN. Although there have been some promising studies recently \cite{dong2018boosting, xie2019improving, lin2019nesterov}, the transfer performance is not satisfactory, and a high attack rate could be reached only when the two DNNs have similar structures \cite{su2018robustness}, which however conflicts with the aim of black-box attacks.
Black-box adversarial samples that are applicable to vast numbers of DNNs need to attack their common vulnerability. Since DNNs imitate human intelligence, although they have different structures and weights, they may share similar semantic features. In this paper, we focus on the attention heat maps, on which different DNNs produce similar results. By attacking the heat maps of one white-box DNN, we can make its attention lose focus and therefore fail in judgement. In fact, some works have been aware of the importance of attention and take the change of the heat map as evidence of successful attacks, see, e.g. \cite{dong2019evading, zhang2019interpreting}. But none of them includes the attention in the loss. In our study, we develop an \emph{Attack on Attention} (AoA). AoA has a good white-box attack performance. More importantly, there is high similarity in attention across different DNNs, making AoA highly transferable: replacing the cross-entropy loss by the AoA loss increases the transferability by 10\% to 15\%. Combined with some existing transferability-enhancement methods, AoA achieves a state-of-the-art performance, e.g. over 85\% transfer rate on all 12 popular black-box DNNs in numerical experiments.
Here, we first illustrate one example in Fig. \ref{intro}. The original image is a "salamander" in ImageNet \cite{deng2009imagenet}. By attacking the attention, we generate an adversarial sample, which looks very similar to the original one but with a scattered heat map (in the lower left corner), leading to misclassification. The attack is carried out on VGG19 \cite{simonyan2014very} but other well-trained DNNs on ImageNet also make wrong predictions.
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize]{imgs/intro.png}
\caption{AoA adversarial sample and its attention heat map (calculated by DenseNet121). The original sample (in ImageNet: image n01629819\_15314.JPEG, class No.25) is shown on the left. All well-trained DNNs (listed in the first row) correctly recognize this image as a salamander. The right image is the generated adversarial sample by AoA. The difference between the two images is slight, however, the heat map shown in lower left corner changes a lot, which fools all the listed DNNs to incorrect predictions, as shown in the bottom row.}
\label{intro}
\end{figure}
Since AoA attacks common vulnerabilities of DNNs, we successfully generate 50000 adversarial samples that can cheat many DNNs, whose error rates increase to over $85\%$. We provide these samples in the dataset named \emph{DAmageNet}. DAmageNet is the first dataset that provides black-box adversarial samples. Those images \emph{DAmage} many neural networks without any knowledge of them or any query. The aim, however, is not to really damage them, but to point out the weak parts of neural networks; those samples are thus valuable for improving the neural networks by adversarial training \cite{ganin2016domain, shrivastava2017learning}, robustness certification \cite{sinha2017certifiable}, and so on.
The rest of this paper is organized as follows. In Section \ref{related}, we will briefly introduce adversarial attack, especially black-box attack, attention heat map, and several variants of ImageNet. The Attack on Attention is described in detail in Section \ref{method}. Section \ref{experiment} evaluates the proposed AoA along with other attacks and defenses and presents the DAmageNet. In Section \ref{conclusion}, a conclusion is given to end this paper.
\section{Related Work}\label{related}
\subsection{Adversarial attack and its defense}\label{related-defense}
Adversarial attacks \cite{szegedy2013intriguing} reveal the weakness of DNNs by cheating them with adversarial samples, which differ from the original ones by only a slight perturbation. To human eyes, the adversarial samples do not differ from the original ones, but well-trained networks make false predictions on them with high confidence. The adversarial attack can be expressed as below,
\begin{eqnarray*}
\begin{split}
{\text { find }} & {\Delta x} \\ {\text { s.t. }} & {f(x) \neq f(x+\Delta x)} \\ {} & {\|\Delta x\| \leq \varepsilon},
\end{split}
\end{eqnarray*}
\lchanged{M01}{\link{R0.0} \link{R3.2}}{where a neural network $f$ predicts differently on the clean sample and the adversarial sample, even though their difference is imperceptible, i.e., $\Delta x$ is restricted by $||\cdot||$, which could be the $\ell_1$-, $\ell_2$- or $\ell_\infty$-norm.}
When training a DNN, one updates the weights of the network by the gradients to minimize a training loss. In adversarial attacks, by contrast, one alters the image to increase the training loss. Based on this basic idea, there have been many variants of attacking spaces and crafting methods.
For the space to be attacked, most of the existing methods directly conduct the attack in the image space \cite{goodfellow2014explaining, moosavi2016deepfool, su2019one}. It is also reasonable to attack the feature vector in the latent space \cite{song2018constructing, tang2019adversarial} or the encoder/decoder \cite{baluja2017adversarial, han2019once}. Attacks on the feature space may produce unique perturbations, unlike random noise.
Adversarial attacks could be roughly categorized as gradient-based \cite{goodfellow2014explaining, madry2017towards} and optimization-based methods \cite{szegedy2013intriguing, carlini2017towards}. Gradient-based methods search in the gradient direction and the magnitude of perturbation is restricted to avoid a big distortion. Optimization-based methods usually consider the magnitude restriction in the objective function. For both, the magnitude could be measured by the $\ell_1$, $\ell_2$, $\ell_\infty$-norm or other metrics.
To secure the DNN, many defense methods have been proposed to inhibit the adversarial attack. Defense can be achieved by adding adversarial samples to the training set, which is called adversarial training \cite{miyato2016adversarial, sankaranarayanan2018regularizing, zhang2019you}. It is very effective, but consumes several-fold time. Another technique is to design certain blocks in the network structure to prevent attacks or detect adversarial samples \cite{liao2018defense, xie2019feature}. Attack can also be mitigated by preprocessing images before input to the DNN \cite{liu2018feature, prakash2018deflecting, mustafa2019image}, which does not require modification on the pre-trained network.
\subsection{Black-box attack}
When the victim DNNs are totally known, the attacks mentioned above have high success rates. However, it is almost impossible to have access to the victim model in real-world scenarios and thus black-box attacks are required \cite{papernot2016transferability, brendel2017decision, ilyas2018black}. Black-box attacks rely on either query \cite{cheng2019improving, ilyas2018prior} or transferability \cite{papernot2016transferability, papernot2017practical}.
For the query-based approach, the attacker adds a slight perturbation to the input image and observes the reaction of the victim model. By a series of queries, the gradients can be roughly estimated, and then one can conduct the attack in a way similar to white-box cases. To decide on the attack direction, attackers adopt methods including Bayesian optimization \cite{ru2020bayesopt}, evolutionary algorithms \cite{laurent2019yet}, meta learning \cite{du2019query}, etc. Since practical DNNs are generally very complicated, good estimation of the gradients needs a massive number of queries, leading to easy detection by the model owner.
For the transfer-based approach, one conducts a white-box attack on a well-designed surrogate model and expects that the adversarial samples remain aggressive to other models. The underlying assumption is that the distance between decision boundaries across different classes is significantly shorter than that across different models \cite{papernot2016transferability}. Although a good transfer rate has been recently reported in \cite{xie2019improving, dong2018boosting, lin2019nesterov, wu2019skip}, it is mainly for models in the same family, e.g., InceptionV3 and InceptionV4, or models with the same blocks, e.g., residual blocks \cite{su2018robustness}. Until now, cross-family transferability of adversarial samples with small perturbations is limited, and there is no publicly available dataset of such samples.
\subsection{Attention heat map}
In making judgements, humans tend to concentrate on certain parts of an object and allocate attention efficiently. This attention mechanism in human intelligence has been exploited by researchers. In recent studies, methods in natural language processing have benefited a lot from the attention mechanism \cite{vaswani2017attention}. In computer vision, the same idea has been applied and has become an important component of DNNs, especially in industrial applications \cite{samek2019explainable}.
To attack on attention, we need to calculate the pixel-wise attention heat map, for which network visualization methods \cite{zhou2016learning, lin2013network} are applicable. Forward visualization adopts the intuitive idea to obtain the attention by observing the changes in the output caused by changes in the input. The input can be modified by noise \cite{zhou2014object}, masking \cite{zeiler2014visualizing}, or perturbation \cite{zhou2015predicting}. However, these methods consume much time and may introduce randomness.
In contrast, backward visualization \cite{simonyan2013deep, zeiler2014visualizing, springenberg2014striving} obtains the heat map by calculating the relevance between adjacent layers from the output to the input. The layer-wise attention is obtained by the attention in the next layer and the network weights in this layer. Significant works include Layer-wise Relevance Propagation (LRP) \cite{bach2015pixel}, Contrastive LRP (CLRP) \cite{gu2018understanding} and Softmax Gradient LRP (SGLRP) \cite{iwana2019explaining}. These methods extract the high-level semantic attention features for the images from the perspective of the network and make DNNs more interpretable and explainable.
\subsection{ImageNet and its variants}
To demonstrate and evaluate our attack, we will modify images from ImageNet, as in other transfer attacks \cite{xie2019improving, dong2018boosting, lin2019nesterov, wu2019skip}. ImageNet is a large-scale dataset \cite{deng2009imagenet}, which contains images of 1000 classes, each with 1300 well-chosen samples. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has encouraged a lot of milestone works \cite{simonyan2014very, he2016deep, huang2017densely}. Recently, many interesting variants of ImageNet have been developed, including ImageNet-A \cite{hendrycks2019natural}, ObjectNet \cite{barbu2019objectnet}, ImageNet-C, and ImageNet-P \cite{hendrycks2019benchmarking}.
ImageNet-A contains real-world images in ImageNet classes, and they are able to mislead many classifiers into false predictions. ObjectNet also includes natural images that models well-trained on ImageNet cannot distinguish. Objects in ObjectNet have random backgrounds, rotations, and viewpoints. ImageNet-C is produced by adding 15 diverse corruptions. Each type of corruption has 5 levels, from the lightest to the severest. ImageNet-P is designed from ImageNet-C and differs from it in possessing additional perturbation sequences, which are generated not by attack but by image transformations.
The datasets mentioned above are valuable for testing and improving the network generalization capability, but DAmageNet is for robustness. In other words, samples in the above datasets differ from the samples in ImageNet, and the low accuracy is due to poor generalization. In DAmageNet, the samples are quite similar to the original ones in ImageNet, and the low accuracy is due to the over-sensitivity of DNNs.
\section{Attack on Attention (AoA)}\label{method}
To pursue high transferability for black-box attacks, we need to find common vulnerabilities and attack semantic features shared by different DNNs. Attention heat maps for three images are illustrated in Fig. \ref{heatmaps}, where the pixel-wise heat maps show how the input contributes to the prediction. Even with different architectures, the models have similar attention. Inspired by this similarity across different DNNs, we propose the Attack on Attention (AoA). Different from the existing methods that focus on attacking the output, AoA aims to change the attention heat map.
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize]{imgs/heatmaps.png}
\caption{Attention heat maps for VGG19 \cite{simonyan2014very}, InceptionV3 \cite{szegedy2016rethinking}, DenseNet121 \cite{huang2017densely}, which are similar even the architectures are different.}
\label{heatmaps}
\end{figure}
Let $h(x, y)$ stand for the attention heat map for the input $x$ and a specified class $y$. $h(x, y_\mathrm{ori})$ is a tensor with dimensions consistent with $x$. The basic idea of AoA is to shift the attention away from the original class, e.g. to decrease the heat map for the correct class $y_\mathrm{ori}$, as illustrated in Fig. \ref{loss}. In this paper, we utilize SGLRP \cite{iwana2019explaining} to calculate the attention heat map $h(x, y)$, which is good at distinguishing the attention for the target class from the others. There exist of course many other techniques for obtaining the heat map to attack, as long as $h(x, y)$ and its gradient with respect to $x$ can be effectively calculated.
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize]{imgs/loss.png}
\caption{The design of AoA. AoA calculates the attention heat map by SGLRP after inference. The gradient from the heat map back-propagates to the input and updates the sample iteratively. By suppressing the attention heat map value, one can change the network decision by fooling its focus. Constantly doing this, the produced adversarial sample could beat several black-box models.}
\label{loss}
\end{figure}
\begin{table*}[htbp]
\caption{Transfer Rate from ResNet50 to Other Neural Networks}
\centering
\begin{tabular}{c|ccccccc}
\toprule
Loss/Method & DN121 \cite{huang2017densely} & VGG19 \cite{simonyan2014very} & RN152 \cite{he2016deep} & IncV3 \cite{szegedy2016rethinking} & IncRNV2 \cite{szegedy2017inception} & Xception \cite{chollet2017xception} & NASNetL \cite{zoph2018learning} \\ \midrule
CW \cite{carlini2017towards} & 66.6$\pm$1.24\% & 54.2$\pm$4.27\% & 47.3$\pm$4.69\% & 39.6$\pm$2.92\% & 37.9$\pm$4.77\% & 37.4$\pm$2.67\% & 28.8$\pm$2.58\% \\
PGD \cite{madry2017towards} & 67.8$\pm$1.83\% & 54.2$\pm$2.56\% & 46.8$\pm$3.71\% & 38.7$\pm$2.25\% & 35.6$\pm$4.21\% & 37.4$\pm$4.08\% & 28.4$\pm$3.17\% \\ \midrule
$L_\mathrm{supp}(x)$ & 66.8$\pm$3.37\% & 57.2$\pm$3.96\% & 54.8$\pm$2.50\% & 43.9$\pm$2.78\% & 41.6$\pm$1.66\% & 40.9$\pm$2.60\% & 33.0$\pm$2.53\% \\
$L_\mathrm{dstc}(x)$ & 67.1$\pm$4.04\% & 56.5$\pm$2.28\% & 55.5$\pm$4.15\% & 45.4$\pm$3.77\% & 40.0$\pm$1.82\% & 41.6$\pm$4.07\% & 31.0$\pm$2.17\% \\
$L_\mathrm{bdry}(x)$ & 50.2$\pm$5.26\% & 49.8$\pm$4.39\% & 44.0$\pm$4.05\% & 34.1$\pm$3.34\% & 32.9$\pm$3.22\% & 31.7$\pm$1.86\% & 21.7$\pm$1.29\% \\
$L_\mathrm{log}(x)$ & 74.9$\pm$3.48\% & 64.2$\pm$4.13\% & 59.2$\pm$4.71\% & 50.1$\pm$2.69\% & 46.2$\pm$3.39\% & 48.0$\pm$4.87\% & 36.3$\pm$3.74\% \\
\midrule
$L_\mathrm{AoA}(x)$ & \textbf{78.7$\pm$2.54\%} & \textbf{64.9$\pm$2.01\%} & \textbf{63.9$\pm$1.98\%} & \textbf{53.3$\pm$2.27\%} & \textbf{48.9$\pm$2.65\%} & \textbf{50.9$\pm$3.01\%} & \textbf{41.0$\pm$2.00\%} \\
\bottomrule
\end{tabular}
\label{toy}
\end{table*}
There are several potential ways to change the attention heat maps.
\begin{enumerate}\label{scheme}
\item Suppress the magnitude of the attention heat map for the correct class $h(x, y_\mathrm{ori})$: When the network's attention to the correct class decreases, the attention for other classes would increase and finally exceed the correct one, which leads the model to seek information on other classes rather than the correct one and thus make an incorrect prediction. We call this design the \emph{suppress loss},
\begin{eqnarray*}
L_\mathrm{supp}(x) = \|h(x, y_\mathrm{ori})\|_1,
\end{eqnarray*}
where $\|\cdot\|_1$ stands for the componentwise $\ell_1$-norm.
\item Distract the focus of $h(x, y_\mathrm{ori})$: It could be expected that when the attention is distracted from the original regions of interest, the model may lose its capability for prediction. In this case, we do not require the network to focus on information of any incorrect class, but lead it to concentrate on irrelevant regions of the image. The loss could be expressed as the following \emph{distract loss},
\begin{eqnarray*}
L_\mathrm{dstc}(x) = -\left\|\frac{h(x, y_\mathrm{ori})}{max(h(x, y_\mathrm{ori}))} - \frac{h(x_\mathrm{ori}, y_\mathrm{ori})}{max(h(x_\mathrm{ori}, y_\mathrm{ori}))}\right\|_1.\\
\end{eqnarray*}
Here, self-normalization is conducted to eliminate the influence of attention magnitude.
\item Decrease the gap between $h(x, y_\mathrm{ori})$ and $h(x, y_\mathrm{sec}(x))$, the heat map for the second largest probability: If the attention magnitude for the second class exceeds that for the correct class, the network would focus more on information about the false prediction, which is inspired by CW attack \cite{carlini2017towards}. We call it \emph{boundary loss} and take the following formulation,
\begin{eqnarray*}
L_\mathrm{bdry}(x) = \|h(x, y_\mathrm{ori})\|_1-\|h(x, y_\mathrm{sec}(x))\|_1.
\end{eqnarray*}
The values of attention heat maps vary a lot for different models, so self-normalization may improve the transferability of adversarial samples. Therefore, rather than $L_\mathrm{bdry}$, we can also consider the ratio between $h(x, y_\mathrm{ori})$ and $h(x, y_\mathrm{sec}(x))$, resulting in the following \emph{logarithmic boundary loss}
\begin{eqnarray*}
\begin{split}
L_\mathrm{log}(x) = \log(\|h(x, y_\mathrm{ori})\|_1) - \log(\|h(x, y_\mathrm{sec}(x))\|_1).
\end{split}
\end{eqnarray*}
\end{enumerate}
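As a concrete illustration of the losses defined above, the following sketch computes the suppress loss and the logarithmic boundary loss from a heat-map function; \texttt{h\_fn} stands for any attention method (e.g. an SGLRP implementation) that returns a heat map of the same shape as the input, and is an assumption of this sketch rather than a specific library call.
\begin{verbatim}
# Sketch: attention-based losses computed from a heat map h(x, y).
import numpy as np

def suppress_loss(h_fn, x, y_ori):
    return np.abs(h_fn(x, y_ori)).sum()                    # L_supp

def log_boundary_loss(h_fn, x, y_ori, y_sec):
    return (np.log(np.abs(h_fn(x, y_ori)).sum())
            - np.log(np.abs(h_fn(x, y_sec)).sum()))        # L_log
\end{verbatim}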
Now let us illustrate the attack result on the attention heat map by distract loss. In Fig. \ref{dstc}, a clean sample is drawn together with its heat maps away from its original class.
Aiming at ResNet50 \cite{he2016deep}, we minimize $L_\mathrm{dstc}$ and successfully change the heat map such that the attention is distracted to irrelevant regions (the second right column at the bottom). This common property shared by the attention in different DNNs makes the attack transferable, which is the motivation of attack on attention. The generated adversarial sample
is shown in the leftmost in the bottom, which is incorrectly
recognized by all the DNNs in Fig. \ref{dstc}. Additionally, we could see that the heat map for VGG19 is much clearer, which might explain the high transferability of its adversarial samples as shown later and in \cite{su2018robustness}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\hsize]{imgs/dstc.png}
\caption{Minimizing $L_\mathrm{dstc}$ distracts the attention from the correct ROI to irrelevant regions and similar distraction could be observed for different networks.}
\label{dstc}
\end{figure}
The transferability across different DNNs can be observed not only for $L_\mathrm{dstc}$ but also for the other attention-related losses. To compare the attack performance of the above losses, we attack ResNet50 \cite{he2016deep} and feed the adversarial samples to other DNNs (see the setting in Section \ref{experiment} for details). Two attacks on classification losses, namely CW and PGD, are also compared as baselines.
The white-box attack success rates, i.e., the error rates of ResNet50, are all near $100\%$, but attacks by different losses have different transferability performance, which is reported in Table \ref{toy}. The suppress loss and the distract loss have a better transferability than PGD and CW. The logarithmic boundary loss is the best and is hence chosen as the attack target. Moreover, attack on attention can be readily combined with the existing attack on prediction (the cross entropy loss attacked in PGD, denoted by $L_\mathrm{ce}$), resulting in the following \emph{AoA loss},
\begin{eqnarray}\label{aoaloss}
L_\mathrm{AoA}(x) = L_\mathrm{log}(x) - \lambda L_\mathrm{ce}(x, y_\mathrm{ori}),
\end{eqnarray}
where $\lambda$ is a trade-off between the attack on attention and the cross entropy. In this paper, $\lambda=1000$ is suggested such that the two terms have similar variance for different inputs. The combination further increases the transferability, as shown in Table \ref{toy}.
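In code, the combined loss of eq. (\ref{aoaloss}) is a one-liner on top of the previous sketch; \texttt{ce\_fn} denotes a cross-entropy evaluated on the surrogate model and is again a placeholder.
\begin{verbatim}
# Sketch of the AoA loss in eq. (1); lam = 1000 as suggested above.
def aoa_loss(h_fn, ce_fn, x, y_ori, y_sec, lam=1000.0):
    return log_boundary_loss(h_fn, x, y_ori, y_sec) - lam * ce_fn(x, y_ori)
\end{verbatim}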
Basically, the adversarial samples are generated in an update process by minimizing the AoA loss $L_\mathrm{AoA}$. Specifically, set $x^0_\mathrm{adv} = x_\mathrm{ori}$ and the update procedure could be generally described as the following
\begin{eqnarray}\label{update}
\begin{split}
x^{k+1}_\mathrm{adv} & = \text{clip}_\varepsilon\left(x^{k}_\mathrm{adv} - \alpha \frac{g(x^{k}_\mathrm{adv})}{||g(x^{k}_\mathrm{adv})||_1/N} \right), \\
g(x) &= \frac{\partial L_\mathrm{AoA}(x)}{\partial x}.
\end{split}
\end{eqnarray}
The gradient $g$ is normalized by its average $\ell_1$-norm, i.e., $||g(x_k)||_1/N$, where $N$ is the size of the image. Further, to keep the perturbations invisible, we restrict our attack by the distance from the original clean sample
such that the $\ell_\infty$ distance does not exceed $\varepsilon$. AoA differs from other attacks only in the loss. Therefore, transferability-enhancement techniques developed for directly attacking the prediction are also applicable to AoA. In fact, with optimization modification \cite{dong2018boosting} or input modification \cite{xie2019improving, dong2019evading, lin2019nesterov}, the transfer performance of AoA is further improved, as numerically verified in Section \ref{experiment-trans}. \lchanged{M02}{\link{R0.0} \link{R3.3}}{The procedure of AoA is summarized in Algorithm \ref{alg}.}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}[htbp]
\caption{{Attack on Attention}}
\label{alg}
\begin{algorithmic}[1]
\REQUIRE{AoA loss $L_\mathrm{AoA}(x)$, origin sample $x_{\mathrm{ori}}$, $\ell_{\infty}$-norm bound $\epsilon$, RMSE threshold $\eta$, attack step length $\alpha$.}
\ENSURE{adversarial sample $x_{\mathrm{adv}}$}
\STATE $x_{\mathrm{adv}}^{0} \gets x_{\mathrm{ori}}$
\STATE $N \gets height \times width \times channel$ of $x_{\mathrm{ori}}$
\STATE $k \gets 0$
\WHILE {$RMSE(x_{\mathrm{ori}}, x_{\mathrm{adv}}^{k}) < \eta $}
\STATE $g = \frac{\partial L_{AoA}(x_{\mathrm{adv}}^{k})}{\partial x_{\mathrm{adv}}^{k}} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad ~ ~ ~ \quad \quad \quad \star $
\STATE $x_{\mathrm{adv}}^{k+1}= \mathrm{clip}_{\epsilon}(x_{\mathrm{adv}}^{k} - \alpha \cdot \frac{g}{||g||_{1}/N}) \quad \quad ~ \quad\quad \quad \quad \quad \star \star $
\STATE $k = k+1$
\ENDWHILE
\RETURN ${x_{\mathrm{adv}}^{k}}$
\STATE
\STATE $\star ~~$: could be modified for DI \cite{xie2019improving},SI \cite{lin2019nesterov} enhancement.
\STATE $\star \star$: could be modified for MI \cite{dong2018boosting},TI \cite{dong2019evading} enhancement.
\end{algorithmic}
\end{algorithm}
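For reference, one iteration of eq. (\ref{update}) can be sketched in NumPy as below; the gradient of the AoA loss is assumed to be supplied by the surrogate model, and pixel values are assumed to lie in [0, 255].
\begin{verbatim}
# Sketch of one AoA update step, eq. (2): normalized-gradient descent on
# the AoA loss, followed by clipping to the epsilon-ball of the clean image.
import numpy as np

def aoa_step(x_adv, x_ori, grad, alpha=2.0, eps=0.1 * 255):
    step = alpha * grad / (np.abs(grad).sum() / grad.size)  # average-l1 norm
    x_new = x_adv - step
    x_new = np.clip(x_new, x_ori - eps, x_ori + eps)        # l_inf constraint
    return np.clip(x_new, 0.0, 255.0)                       # keep valid pixels
\end{verbatim}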
Because of its good transferability on attention heat maps, AoA could be used for the black-box attack. The basic scheme is to choose a white-box DNN, which serves as the surrogate model for black-box attacks, to attack
by updating (\ref{update}). The generated adversarial samples tend to be aggressive to other black-box victim models.
\section{Experiments}\label{experiment}
In this section, we will evaluate the performance of our Attack on Attention, especially its black-box attack capability compared to other SOTA methods. Since AoA is a very good black-box attack, it provides adversarial samples that can defeat many DNNs in a zero-query manner. These samples are collected in the dataset DAmageNet. This section will also introduce DAmageNet and report the performance of different DNNs on it. We further test the AoA performance under several defenses and find that AoA is the most aggressive method in almost all the cases.
\subsection{Setup}
The experiments for AoA are conducted on the ImageNet \cite{deng2009imagenet} validation set. For attack and test, several well-trained models in Keras Applications \cite{chollet2015keras} are used, including VGG19 \cite{simonyan2014very}, ResNet50 \cite{he2016deep}, DenseNet121 \cite{huang2017densely}, InceptionV3 \cite{szegedy2016rethinking}, and so on. We also use adversarially trained models (not trained on AoA samples; indicated by underlines in the tables). For preprocessing, the Keras preprocessing function, central cropping, and resizing (to $224 \times 224$) are used. The experiments are implemented in TensorFlow \cite{tensorflow2015-whitepaper} and Keras \cite{chollet2015keras} with 4 NVIDIA GeForce RTX 2080Ti GPUs.
For the attack performance, we care about two aspects: the success/transfer rate of the attack and how much the image is changed. Denote the generated adversarial sample as $x_\mathrm{adv}$. The change from its corresponding original image $x_\mathrm{ori}$ is measured by the Root Mean Squared Error (RMSE) per pixel: $d\left(x_\mathrm{adv}, x_\mathrm{ori}\right)=\sqrt{\|x_{\mathrm{adv}}-x_{\mathrm{ori}}\|_2^{2} / N}$. In the experiments, 200 images are randomly selected
from the ImageNet validation set, and samples incorrectly predicted by the victim model are skipped, following the setting in \cite{su2018robustness}. Experiments are repeated 5 times and the overall performance on 1000 samples is reported. For a fair comparison, all attacks are stopped when the RMSE exceeds $\eta=7$, and the perturbation is bounded by $\varepsilon=0.1 \times 255$. In this way, the number of iterations is about 10 with step size $\alpha=2$, as in \cite{wu2019skip} and other literature. We use $\alpha=0.5$ for MI \cite{dong2018boosting} based on numerical experiments.
\subsection{Transferability of AoA}\label{experiment-trans}
We first compare AoA with the popular attacks CW \cite{carlini2017towards} and PGD \cite{madry2017towards}, which target classification losses. Specifically, CW uses the hinge loss and PGD uses the cross-entropy loss. For CW, a gradient-based update is applied to keep the perturbation small. We carefully tune their parameters, resulting in better transferability than reported in \cite{su2018robustness}.
We use AoA, CW, and PGD to attack different neural networks and then feed the generated adversarial samples to different models. The average error rates are reported in Table \ref{aoaresult}. AoA, CW, and PGD all have a high white-box attack success rate, but the transfer performance varies considerably, depending on both the surrogate model and the victim model. In all tested situations, AoA achieves the best black-box attack performance.
The essential difference between AoA and CW/PGD is the attack target. Existing efforts to improve the transferability of CW/PGD mainly modify the optimization process. For example, DI randomly transforms the input (4 times in our setting) with a given probability when calculating gradients \cite{xie2019improving}. TI translates the image to obtain more transferable attack gradients \cite{dong2019evading}.
MI introduces a momentum term to stabilize the attack direction \cite{dong2018boosting}. SI averages the gradient over 4 scaled copies of the input, obtained by repeatedly dividing the image by 2 \cite{lin2019nesterov}. These state-of-the-art transferability-enhancement methods improve the performance of CW/PGD and are also applicable to AoA.
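Since these enhancements only change how the attack gradient is computed or accumulated, they can be plugged directly into the AoA update. As an illustration, a minimal sketch of the SI gradient is given below; \texttt{loss\_grad} is assumed to return the gradient of the chosen attack loss, and replacing the raw gradient in the update of Eq.~\eqref{update} with this averaged gradient yields SI-AoA (further details follow \cite{lin2019nesterov}).
\begin{verbatim}
import numpy as np

def si_gradient(x, loss_grad, m=4):
    # Scale-invariant (SI) gradient: average the attack gradient over m
    # scaled copies of the input, x / 2**i.
    grads = [loss_grad(x / 2.0 ** i) for i in range(m)]
    return np.mean(grads, axis=0)
\end{verbatim}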
In Table \ref{aoaresult2}, we report the black-box attack performance when attacking ResNet50 with MI-DI, {MI-TI}, and SI (all with the hyperparameters suggested by their inventors). We find that SI is very helpful and can prominently increase the error rate for PGD and CW. Applying SI in AoA, denoted as SI-AoA, achieves the highest transfer rate, which is significantly better than other state-of-the-art methods.
\begin{table*}[htbp]
\caption{Error Rate (Top-1) of Different Attack Baselines}
\centering
\setlength{\tabcolsep}{3pt}{
\begin{tabular}{l|c|cccccccc}
\toprule
Surrogate & Method & DN121 \cite{huang2017densely} & IncRNV2 \cite{szegedy2017inception} & IncV3 \cite{szegedy2016rethinking} & NASNetL \cite{zoph2018learning} & RN152 \cite{he2016deep} & RN50 \cite{he2016deep} & VGG19 \cite{simonyan2014very} & Xception \cite{chollet2017xception}\\ \midrule
& CW & 66.6$\pm$1.24\% & 37.9$\pm$4.77\% & 39.6$\pm$2.92\% & 28.8$\pm$2.58\% & 47.3$\pm$4.69\% & 100.0$\pm$0.00\% & 54.2$\pm$4.27\% & 37.4$\pm$2.67\% \\
RN50 \cite{he2016deep} & PGD & 67.8$\pm$1.83\% & 35.6$\pm$4.21\% & 38.7$\pm$2.25\% & 28.4$\pm$3.17\% & 46.8$\pm$3.71\% & \textbf{100.0$\pm$0.00\%} & 54.2$\pm$2.56\% & 37.4$\pm$4.08\% \\
& AoA & \textbf{78.4$\pm$2.44\%} & \textbf{49.0$\pm$1.87\%} & \textbf{52.2$\pm$2.66\%} & \textbf{39.6$\pm$3.61\%} & \textbf{63.4$\pm$2.63\%} & 99.9$\pm$0.20\% & \textbf{65.6$\pm$2.82\%} & \textbf{51.1$\pm$2.18\%} \\ \midrule
& CW & 100.0$\pm$0.00\% & 33.5$\pm$2.55\% & 39.5$\pm$1.67\% & 31.9$\pm$2.87\% & 39.6$\pm$2.85\% & 64.6$\pm$3.76\% & 53.2$\pm$3.93\% & 39.4$\pm$1.16\% \\
DN121 \cite{huang2017densely} & PGD & 100.0$\pm$0.00\% & 34.0$\pm$3.49\% & 41.7$\pm$2.38\% & 31.9$\pm$2.87\% & 41.5$\pm$3.21\% & 68.9$\pm$4.76\% & 55.5$\pm$2.28\% & 41.5$\pm$2.30\% \\
& AoA & \textbf{100.0$\pm$0.00\%} & \textbf{46.1$\pm$2.91\%} & \textbf{53.5$\pm$3.46\%} & \textbf{46.1$\pm$2.44\%} & \textbf{55.0$\pm$2.77\%} & \textbf{76.7$\pm$2.29\%} & \textbf{64.6$\pm$2.18\%} & \textbf{52.1$\pm$2.15\%} \\ \midrule
& CW & 31.0$\pm$1.95\% & 22.7$\pm$3.01\% & 100.0$\pm$0.00\% & 21.3$\pm$0.60\% & 26.1$\pm$3.62\% & 42.3$\pm$2.01\% & 40.7$\pm$3.34\% & 33.4$\pm$1.56\% \\
IncV3 \cite{szegedy2016rethinking} & PGD & 32.7$\pm$2.50\% & 24.2$\pm$2.89\% & 100.0$\pm$0.00\% & 21.3$\pm$1.91\% & 27.3$\pm$2.29\% & 45.3$\pm$1.17\% & 40.7$\pm$3.39\% & 33.7$\pm$3.22\% \\
& AoA & \textbf{39.0$\pm$1.79\%} & \textbf{30.2$\pm$2.77\%} & \textbf{100.0$\pm$0.00\%} & \textbf{32.7$\pm$1.81\%} & \textbf{34.0$\pm$2.93\%} & \textbf{52.8$\pm$1.69\%} & \textbf{45.9$\pm$3.98\%} & \textbf{45.1$\pm$2.08\%} \\ \midrule
& CW & 85.5$\pm$0.84\% & 62.0$\pm$1.67\% & 69.8$\pm$1.60\% & 62.7$\pm$1.21\% & 60.0$\pm$1.61\% & 77.8$\pm$2.04\% & 100.0$\pm$0.00\% & 68.0$\pm$2.39\% \\
VGG19 \cite{simonyan2014very} & PGD & 87.1$\pm$1.20\% & 64.1$\pm$2.03\% & 71.8$\pm$1.63\% & 63.9$\pm$1.77\% & 63.1$\pm$4.14\% & 82.5$\pm$2.63\% & 100.0$\pm$0.00\% & 71.9$\pm$0.97\% \\
& AoA & \textbf{91.4$\pm$2.65\%} & \textbf{73.7$\pm$1.29\%} & \textbf{79.8$\pm$1.08\%} & \textbf{74.2$\pm$1.63\%} & \textbf{73.5$\pm$1.05\%} & \textbf{86.6$\pm$1.77\%} & \textbf{100.0$\pm$0.00\%} & \textbf{81.0$\pm$1.30\%} \\ \midrule
& CW & 42.4$\pm$2.52\% & 36.2$\pm$2.32\% & 35.3$\pm$1.66\% & 25.6$\pm$2.24\% & 100.0$\pm$0.00\% & 57.7$\pm$0.81\% & 46.0$\pm$4.06\% & 31.9$\pm$1.77\% \\
RN152 \cite{he2016deep} & PGD & 42.7$\pm$3.19\% & 35.0$\pm$2.47\% & 34.9$\pm$2.96\% & 24.5$\pm$3.05\% & 98.1$\pm$0.97\% & 55.3$\pm$2.71\% & 43.6$\pm$3.61\% & 30.5$\pm$4.87\% \\
& AoA & \textbf{55.9$\pm$2.35\%} & \textbf{54.2$\pm$2.36\%} & \textbf{49.6$\pm$4.21\%} & \textbf{36.4$\pm$2.60\%} & \textbf{100.0$\pm$0.00\%} & \textbf{71.5$\pm$2.57\%} & \textbf{57.2$\pm$3.79\%} & \textbf{45.6$\pm$1.93\%} \\
\bottomrule
\end{tabular}}
\label{aoaresult}
\end{table*}
\begin{table*}[htbp]
\caption{{Error Rate (Top-1) of Transfer Attacks on ResNet50}}
\centering
\begin{tabular}{r|cccccccc}
\toprule
Method & DN121 \cite{huang2017densely} & IncRNV2 \cite{szegedy2017inception} & IncV3 \cite{szegedy2016rethinking} & NASNetL \cite{zoph2018learning} & RN152 \cite{he2016deep} & RN50 \cite{he2016deep} & VGG19 \cite{simonyan2014very} & Xception \cite{chollet2017xception}\\ \midrule
CW & 66.6$\pm$1.24\% & 37.9$\pm$4.77\% & 39.6$\pm$2.92\% & 28.8$\pm$2.58\% & 47.3$\pm$4.69\% & 100.0$\pm$0.00\% & 54.2$\pm$4.27\% & 37.4$\pm$2.67\% \\
MI-DI-CW & 66.9$\pm$1.91\% & 39.4$\pm$4.03\% & 42.9$\pm$1.59\% & 32.3$\pm$3.83\% & 50.2$\pm$4.74\% & 99.8$\pm$0.24\% & 57.9$\pm$3.40\% & 39.9$\pm$2.92\% \\%$65.3$\pm$3.17\% & 41.0$\pm$2.05\% & 43.9$\pm$3.23\% & 37.2$\pm$2.27\% & 51.6$\pm$1.80\% & 99.8$\pm$0.24\% & 57.5$\pm$2.79\% & 41.7$\pm$2.23\% \\
MI-TI-CW & 63.4$\pm$3.35\% & 42.0$\pm$3.33\% & 44.6$\pm$1.02\% & 33.7$\pm$1.96\% & 51.6$\pm$3.77\% & 99.7$\pm$0.24\% & 60.2$\pm$2.80\% & 40.6$\pm$2.40\% \\
SI-CW & 80.3$\pm$1.86\% & 46.4$\pm$2.22\% & 51.6$\pm$2.60\% & 38.3$\pm$3.53\% & 63.9$\pm$1.50\% & 99.9$\pm$0.20\% & 66.5$\pm$1.67\% & 48.8$\pm$3.70\% \\ \midrule
PGD & 67.8$\pm$1.83\% & 35.6$\pm$4.21\% & 38.7$\pm$2.25\% & 28.4$\pm$3.17\% & 46.8$\pm$3.71\% & 100.0$\pm$0.00\% & 54.2$\pm$2.56\% & 37.4$\pm$4.08\% \\
MI-DI-PGD & 70.5$\pm$1.30\% & 43.3$\pm$3.33\% & 45.8$\pm$2.58\% & 35.7$\pm$3.53\% & 55.9$\pm$3.68\% & 99.5$\pm$0.00\% & 62.1$\pm$1.93\% & 43.3$\pm$2.42\% \\%69.4$\pm$0.80\% & 46.4$\pm$2.71\% & 48.9$\pm$2.42\% & 38.3$\pm$1.08\% & 55.3$\pm$2.96\% & 99.6$\pm$0.37\% & 60.5$\pm$1.22\% & 44.8$\pm$0.98\% \\
MI-TI-PGD & 68.6$\pm$0.97\% & 44.6$\pm$2.18\% & 49.5$\pm$1.30\% & 38.0$\pm$1.00\% & 54.2$\pm$1.99\% & 99.3$\pm$0.51\% & 64.2$\pm$2.29\% & 45.3$\pm$1.72\% \\
SI-PGD & 81.2$\pm$1.63\% & 48.7$\pm$1.91\% & 53.0$\pm$0.95\% & 38.6$\pm$2.06\% & 66.1$\pm$2.46\% & 100.0$\pm$0.00\% & 69.5$\pm$2.10\% & 49.1$\pm$1.59\% \\ \midrule
AoA & 78.4$\pm$2.44\% & 49.0$\pm$1.87\% & 52.2$\pm$2.66\% & 39.6$\pm$3.61\% & 63.4$\pm$2.63\% & 99.9$\pm$0.20\% & 65.6$\pm$2.82\% & 51.1$\pm$2.18\% \\
MI-DI-AoA & 74.1$\pm$1.02\% & 50.4$\pm$2.92\% & 52.0$\pm$3.32\% & 44.2$\pm$3.39\% & 58.7$\pm$3.59\% & 99.8$\pm$0.24\% & 66.4$\pm$4.20\% & 50.6$\pm$3.01\% \\%73.0$\pm$3.56\% & 51.3$\pm$1.94\% & 56.0$\pm$3.25\% & 46.0$\pm$2.42\% & 61.7$\pm$1.41\% & 99.6$\pm$0.46\% & 65.6$\pm$2.50\% & 49.3$\pm$2.10\% \\
MI-TI-AoA & 79.2$\pm$1.21\% & 58.7$\pm$4.27\% & 62.5$\pm$3.52\% & 52.2$\pm$3.23\% & 67.5$\pm$2.76\% & 99.8$\pm$0.40\% & 75.3$\pm$2.89\% & 58.9$\pm$1.56\% \\
SI-AoA & \textbf{90.5$\pm$0.89\%} & \textbf{64.6$\pm$2.71\%} & \textbf{66.1$\pm$3.89\%} & \textbf{57.9$\pm$2.20\%} & \textbf{78.8$\pm$1.75\%} & \textbf{100.0$\pm$0.00\%} & \textbf{80.4$\pm$2.73\%} & \textbf{64.6$\pm$3.07\%} \\
\bottomrule
\end{tabular}
\label{aoaresult2}
\end{table*}
\subsection{AoA under Defenses}
Our main contribution is improving transferability for black-box attacks; AoA is not designed to break defenses. Nevertheless, it is interesting to evaluate its attack performance under several defenses. In this experiment, we apply PGD, CW, and AoA, all enhanced by SI,
to attack ResNet50.
We consider defenses that have been verified to be effective on ImageNet \cite{carlini2017adversarial}. These defense methods can be roughly categorized as preprocessing-based and adversarial-training-based, and the two categories can be used together.
Preprocessing-based defenses aim to eliminate the adversarial perturbation. We use JPEG Compression \cite{liu2018feature}, Pixel Deflection \cite{prakash2018deflecting}, and Total Variance Minimization (TVM) \cite{guo2017countering} with the parameters provided by their authors, as well as the randomization defense of \cite{xie2017mitigating}, which randomly resizes and pads the input. Another idea is to add randomness and aggregate the resulting outputs. For example, Random Smoothing \cite{cohen2019certified} makes a prediction from $m$ intermediate images crafted by adding Gaussian noise to the input image. We choose $m=100$ and the Gaussian noise scale $\sigma=0.25 \times 255$ here.
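For illustration, one way to implement the smoothed prediction is sketched below; \texttt{model\_predict} is assumed to return class indices for a batch of images, and the full certification procedure of \cite{cohen2019certified}, which includes an abstention test, is omitted.
\begin{verbatim}
import numpy as np

def smoothed_predict(model_predict, x, m=100, sigma=0.25 * 255,
                     num_classes=1000):
    # Classify m Gaussian-noised copies of x and return the majority vote.
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(m):
        noisy = x + np.random.normal(0.0, sigma, size=x.shape)
        votes[model_predict(noisy[None])[0]] += 1
    return int(votes.argmax())
\end{verbatim}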
Adversarial training re-trains the neural networks on adversarial samples. InceptionV3adv and InceptionResNetV2adv are obtained in this way in \cite{kurakin2016adversarial}, and \cite{xie2019feature} proposes ResNeXt101denoise, which adds denoising blocks to the architecture to secure the model.
Table \ref{defenses} gives the comprehensive black-box attack performance under defenses. Generally speaking, the preprocessing-based defenses decrease the error rate by about 5\% to 10\%, and SI-AoA maintains the highest transfer rate. Adversarially trained models (indicated by underlines in the tables) exhibit strong robustness to all attacks, including SI-AoA (which nevertheless remains better than SI-PGD and SI-CW). This means that although the samples generated by SI-AoA differ from those of other attacks, their distribution can still be captured by adversarial training. Developing adversarial attacks that can defeat adversarial training is interesting but out of our scope. Random smoothing generally yields a low error rate, but its inference time is much longer than that of the other defenses, roughly $m$ times longer, so the comparison is not entirely fair. In our experiment, random smoothing does not seem to work well on adversarially trained models, sometimes even increasing the error rate, which is also interesting but belongs to the field of defenses.
\begin{table*}[htbp]
\caption{{Error Rate (Top-1) under Defenses (ResNet50 as the surrogate model)}}
\centering
\begin{tabular}{r|l|cccccc}
\toprule
Victim & Method & None & JPEG \cite{liu2018feature} & Pixel \cite{prakash2018deflecting} & Random \cite{xie2017mitigating} & TVM \cite{guo2017countering} & Smooth \cite{cohen2019certified} \\ \midrule
& SI-CW & 80.3$\pm$1.86\% & 64.9$\pm$2.40\% & 67.2$\pm$2.20\% & 64.5$\pm$3.99\% & 70.2$\pm$1.63\% & 60.0$\pm$2.26\% \\
DN121 \cite{huang2017densely} & SI-PGD & 81.2$\pm$1.63\% & 65.1$\pm$1.24\% & 66.4$\pm$0.58\% & 64.0$\pm$3.44\% & 69.7$\pm$1.29\% & 60.0$\pm$2.26\% \\
& SI-AoA & \textbf{90.5$\pm$0.89\%} & \textbf{81.0$\pm$3.32\%} & \textbf{82.1$\pm$2.85\%} & \textbf{78.0$\pm$3.70\%} & \textbf{83.7$\pm$3.14\%} & \textbf{63.4$\pm$2.35\%} \\ \midrule
& SI-CW & 46.4$\pm$2.22\% & 38.0$\pm$2.17\% & 38.3$\pm$0.93\% & 40.3$\pm$3.04\% & 41.0$\pm$1.64\% & 31.7$\pm$2.19\% \\
IncRNV2 \cite{szegedy2017inception} & SI-PGD & 48.7$\pm$1.91\% & 39.8$\pm$0.93\% & 39.3$\pm$0.75\% & 40.0$\pm$3.11\% & 42.1$\pm$0.86\% & 31.8$\pm$1.70\% \\
& SI-AoA & \textbf{64.6$\pm$2.71\%} & \textbf{56.7$\pm$1.72\%} & \textbf{58.2$\pm$3.91\%} & \textbf{57.8$\pm$4.37\%} & \textbf{59.5$\pm$2.63\%} & \textbf{34.6$\pm$3.24\%} \\ \midrule
& SI-CW & 51.6$\pm$2.60\% & 43.2$\pm$3.39\% & 42.7$\pm$2.98\% & 46.2$\pm$2.34\% & 46.1$\pm$3.47\% & 33.5$\pm$4.73\% \\
IncV3 \cite{szegedy2016rethinking} & SI-PGD & 53.0$\pm$0.95\% & 44.8$\pm$3.33\% & 45.0$\pm$2.98\% & 47.9$\pm$3.09\% & 48.3$\pm$3.23\% & 32.6$\pm$5.66\% \\
& SI-AoA & \textbf{66.1$\pm$3.89\%} & \textbf{62.3$\pm$3.87\%} & \textbf{62.4$\pm$4.12\%} & \textbf{62.9$\pm$2.67\%} & \textbf{64.1$\pm$3.79\%} & \textbf{37.5$\pm$6.18\%} \\ \midrule
& SI-CW & 38.3$\pm$3.53\% & 31.3$\pm$3.09\% & 32.4$\pm$4.12\% & 35.2$\pm$2.93\% & 34.0$\pm$4.57\% & 23.7$\pm$3.68\% \\
NASNetL \cite{zoph2018learning} & SI-PGD & 38.6$\pm$2.06\% & 30.8$\pm$3.59\% & 31.5$\pm$2.92\% & 34.3$\pm$4.07\% & 34.6$\pm$2.96\% & 23.5$\pm$3.35\% \\
& SI-AoA & \textbf{57.9$\pm$2.20\%} & \textbf{49.2$\pm$3.71\%} & \textbf{53.0$\pm$4.01\%} & \textbf{52.7$\pm$3.93\%} & \textbf{53.0$\pm$3.32\%} & \textbf{29.3$\pm$2.80\%} \\ \midrule
& SI-CW & 63.9$\pm$1.50\% & 51.4$\pm$1.91\% & 51.6$\pm$1.85\% & 48.9$\pm$3.85\% & 56.6$\pm$1.56\% & 41.2$\pm$5.28\% \\
RN152 \cite{he2016deep} & SI-PGD & 66.1$\pm$2.46\% & 52.8$\pm$2.56\% & 54.1$\pm$1.53\% & 51.5$\pm$3.39\% & 58.4$\pm$1.83\% & 40.2$\pm$4.81\% \\
& SI-AoA & \textbf{78.8$\pm$1.75\%} & \textbf{70.3$\pm$3.56\%} & \textbf{72.8$\pm$4.49\%} & \textbf{67.1$\pm$2.82\%} & \textbf{75.6$\pm$3.93\%} & \textbf{44.2$\pm$5.07\%} \\ \midrule
& SI-CW & 99.9$\pm$0.20\% & 98.5$\pm$0.84\% & 98.7$\pm$0.81\% & 89.5$\pm$2.59\% & 99.6$\pm$0.49\% & 93.4$\pm$0.94\% \\
RN50 \cite{he2016deep} & SI-PGD & 100.0$\pm$0.00\% & 99.1$\pm$0.49\% & 99.4$\pm$0.58\% & 90.8$\pm$1.33\% & 99.6$\pm$0.37\% & 92.4$\pm$1.71\% \\
& SI-AoA & \textbf{100.0$\pm$0.00\%} & \textbf{99.9$\pm$0.20\%} & \textbf{99.8$\pm$0.40\%} & \textbf{95.6$\pm$2.13\%} & \textbf{99.9$\pm$0.20\%} & \textbf{94.1$\pm$1.20\%} \\ \midrule
& SI-CW & 66.5$\pm$1.67\% & 60.7$\pm$4.27\% & 60.6$\pm$3.20\% & 62.9$\pm$4.07\% & 63.3$\pm$5.09\% & 89.8$\pm$1.89\% \\
VGG19 \cite{simonyan2014very} & SI-PGD & 69.5$\pm$2.10\% & 62.8$\pm$3.54\% & 61.4$\pm$4.92\% & 65.7$\pm$3.80\% & 65.2$\pm$4.25\% & 89.6$\pm$1.73\% \\
& SI-AoA & \textbf{80.4$\pm$2.73\%} & \textbf{77.7$\pm$4.43\%} & \textbf{78.5$\pm$3.77\%} & \textbf{77.1$\pm$4.52\%} & \textbf{79.8$\pm$4.04\%} & \textbf{89.9$\pm$2.18\%} \\ \midrule
& SI-CW & 48.8$\pm$3.70\% & 40.6$\pm$3.81\% & 40.9$\pm$3.71\% & 44.0$\pm$2.92\% & 44.7$\pm$3.23\% & 36.5$\pm$4.38\% \\
Xception \cite{chollet2017xception} & SI-PGD & 49.1$\pm$1.59\% & 40.8$\pm$3.59\% & 43.0$\pm$4.02\% & 43.5$\pm$3.89\% & 44.7$\pm$3.37\% & 37.1$\pm$3.35\% \\
& SI-AoA & \textbf{64.6$\pm$3.07\%} & \textbf{57.6$\pm$3.26\%} & \textbf{58.4$\pm$1.80\%} & \textbf{61.1$\pm$3.89\%} & \textbf{59.0$\pm$2.65\%} & \textbf{40.9$\pm$4.52\%} \\ \midrule
& SI-CW & 31.2$\pm$1.29\% & 33.8$\pm$2.50\% & 35.0$\pm$3.35\% & 38.1$\pm$3.73\% & 37.0$\pm$4.27\% & 96.5$\pm$1.44\% \\
\underline{IncV3adv} \cite{kurakin2016adversarial} & SI-PGD & 31.5$\pm$3.08\% & 34.3$\pm$3.44\% & 35.8$\pm$2.99\% & 39.2$\pm$3.14\% & 38.4$\pm$2.85\% & 96.2$\pm$1.13\% \\
& SI-AoA & \textbf{53.7$\pm$2.25\%} & \textbf{52.7$\pm$2.20\%} & \textbf{54.9$\pm$3.15\%} & \textbf{55.1$\pm$2.78\%} & \textbf{56.2$\pm$2.71\%} & \textbf{96.2$\pm$1.16\%} \\ \midrule
& SI-CW & 26.4$\pm$1.59\% & 27.4$\pm$2.03\% & 27.6$\pm$2.63\% & 30.1$\pm$4.78\% & 28.2$\pm$3.66\% & 81.7$\pm$3.74\% \\
\underline{IncRNV2adv} \cite{kurakin2016adversarial} & SI-PGD & 26.1$\pm$1.98\% & 27.9$\pm$0.86\% & 28.5$\pm$2.51\% & 29.7$\pm$3.64\% & 29.8$\pm$0.93\% & 81.5$\pm$3.47\% \\
& SI-AoA & \textbf{44.0$\pm$1.52\%} & \textbf{44.2$\pm$3.23\%} & \textbf{46.2$\pm$3.71\%} & \textbf{48.0$\pm$4.55\%} & \textbf{47.0$\pm$2.30\%} & \textbf{82.3$\pm$3.16\%} \\ \midrule
& SI-CW & 18.0$\pm$3.13\% & 18.2$\pm$3.11\% & 18.2$\pm$3.33\% & 44.4$\pm$3.69\% & 18.1$\pm$3.22\% & 70.4$\pm$2.26\% \\
\underline{RNXt101den} \cite{xie2019feature} & SI-PGD & 18.2$\pm$2.87\% & 18.5$\pm$2.88\% & 18.9$\pm$3.17\% & 44.6$\pm$3.46\% & 18.4$\pm$3.31\% & 70.5$\pm$2.09\% \\
& SI-AoA & \textbf{18.7$\pm$3.01\%} & \textbf{19.2$\pm$2.71\%} & \textbf{19.1$\pm$2.97\%} & \textbf{44.6$\pm$3.48\%} & \textbf{19.0$\pm$2.88\%} & \textbf{70.5$\pm$2.26\%} \\
\bottomrule
\end{tabular}
\label{defenses}
\end{table*}
\subsection{DAmageNet}
The above experiments verify that AoA has promising transferability, which makes it possible to generate adversarial samples that defeat many well-trained DNNs. An adversarial dataset is very useful for evaluating robustness and defense methods. To establish such a dataset, we use SI-AoA to attack VGG19 and generate adversarial samples from all 50000 images of the ImageNet validation set. Since the original images come from the ImageNet validation set and the adversarial samples are crafted to damage the performance of neural networks, we name this dataset DAmageNet.
DAmageNet contains 50000 adversarial samples and can be downloaded from \url{http://www.pami.sjtu.edu.cn/Show/56/122}. The samples are named the same as the original images in the ImageNet validation set, so users can easily find the corresponding samples as well as their labels. The average RMSE between samples in DAmageNet and those in ImageNet is 7.23. In Fig. \ref{samples}, we show several image pairs from ImageNet and DAmageNet.
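As a hypothetical usage sketch (the directory name and the label dictionary below are placeholders, not part of the released dataset), the top-1 error of a pretrained Keras model on DAmageNet can be measured as follows.
\begin{verbatim}
import os
import numpy as np
import tensorflow as tf

def top1_error(model, preprocess, image_dir, labels, size=224):
    # `labels` maps an image file name to its ImageNet class index.
    errors = 0
    for name, y_true in labels.items():
        img = tf.keras.preprocessing.image.load_img(
            os.path.join(image_dir, name), target_size=(size, size))
        x = preprocess(np.array(img, dtype=np.float32)[None])
        y_pred = int(np.argmax(model.predict(x, verbose=0)))
        errors += int(y_pred != y_true)
    return errors / len(labels)

# Example (paths and the label dictionary are placeholders):
# model = tf.keras.applications.ResNet50(weights="imagenet")
# err = top1_error(model, tf.keras.applications.resnet50.preprocess_input,
#                  "DAmageNet", labels)
\end{verbatim}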
\begin{figure*}[htbp]
\centering
\includegraphics[width=\hsize]{imgs/samples.png}
\caption{Samples in ImageNet and DAmageNet. The images on the left are original samples from ImageNet. The images on the right are adversarial samples from DAmageNet. One can observe that the paired images look similar, and human beings have no difficulty recognizing them as the same class.}
\label{samples}
\end{figure*}
\begin{table*}[htpb]
\caption{Error Rate (Top-1) on ImageNet and DAmageNet}
\centering
\begin{tabular}{r|cc|cccc}
\toprule
& \multicolumn{2}{c}{No defense} & \multicolumn{4}{c}{Defenses on DAmageNet} \\
Victim & ImageNet \cite{deng2009imagenet}& DAmageNet & JPEG \cite{liu2018feature}& Pixel \cite{prakash2018deflecting}& Random \cite{xie2017mitigating}& TVM \cite{guo2017countering}\\ \midrule
VGG16 \cite{simonyan2014very} & 38.51 & 99.85 & 99.67 & 99.70 & 99.19 & 99.76 \\
VGG19 \cite{simonyan2014very} & 38.60 & 99.99 & 99.99 & 99.99 & 99.96 & 99.99\\
RN50 \cite{he2016deep}& 36.65 & 93.94 & 91.88 & 92.48 & 92.52 & 93.08\\
RN101 \cite{he2016deep}& 29.38 & 88.13 & 85.44 & 86.23 & 86.12 & 87.06 \\
RN152 \cite{he2016deep}& 28.65 & 86.78 & 83.93 & 84.83 & 84.71 & 85.68\\
NASNetM \cite{zoph2018learning}& 27.03 & 92.81 & 90.42 & 91.43 & 90.31 & 91.86 \\
NASNetL \cite{zoph2018learning}& 17.77 & 86.32 & 83.31 & 84.87 & 84.91 & 85.53 \\
IncV3 \cite{szegedy2016rethinking} & 22.52 & 89.84 & 87.82 & 89.01 & 88.49 & 89.59 \\
IncRNV2 \cite{szegedy2017inception}& 24.60 & 88.09 & 85.01 & 85.95 & 89.04 & 86.79 \\
Xception \cite{chollet2017xception} & 21.38 & 90.57 & 88.53 & 89.77 & 86.03 & 90.32 \\
DN121 \cite{huang2017densely}& 26.85 & 96.14 & 93.96 & 94.85 & 93.82 & 95.30\\
DN169 \cite{huang2017densely}& 25.16 & 94.09 & 91.72 & 92.78 & 91.78 & 93.36 \\
DN201 \cite{huang2017densely}& 24.36 & 93.44 & 90.52 & 91.71 & 90.86 & 92.45 \\
\underline{IncV3adv} \cite{kurakin2016adversarial}& 22.86 & 82.23 & 82.03 & 83.35 & 82.88 & 83.95 \\
\underline{IncV3advens3} \cite{tramer2017ensemble}& 24.12 & 80.72 & 80.35 & 81.68 & 81.57 & 82.36 \\
\underline{IncV3advens4} \cite{tramer2017ensemble}& 24.45 & 79.26 & 78.86 & 79.96 & 79.76 & 80.8 \\
\underline{IncRNV2adv} \cite{kurakin2016adversarial}& 20.03 & 76.42 & 75.71 & 76.85 & 76.86 & 77.73 \\
\underline{IncRNV2advens} \cite{tramer2017ensemble}& 20.35 & 70.70 & 71.09 & 72.32 & 73.32 & 73.04 \\
\underline{RNXt101den} \cite{xie2019feature}& 32.20 & 35.40 & 36.27 & 36.65 & 55.53 & 36.21\\
\bottomrule
\end{tabular}
\label{rate}
\end{table*}
To the best of our knowledge, DAmageNet is the first adversarial dataset, and it can be used to evaluate model robustness and defenses. As an example, we use several well-trained models to recognize the images in DAmageNet. Several neural networks strengthened by adversarial training are considered as well. The top-1 error rate is reported in Table \ref{rate}. The models are from Keras Applications, and the test error may differ from the original references. One can observe that i) all 13 listed undefended models are not robust: DAmageNet increases their error rates to over 85\%; ii) the 5 listed adversarially trained models perform slightly better, but their error rates are still over $70\%$; iii) DAmageNet resists the 4 tested defenses, with almost no drop in the error rate; iv) the feature denoising model shows promising robustness, but simply combining it with preprocessing-based defenses does not work well.
\section{Conclusion}\label{conclusion}
To improve the transferability of adversarial attacks, we are the first to attack attention, achieving strong black-box attack performance. The high transferability of AoA relies on the semantic features shared by different DNNs. AoA enjoys a significant increase in transferability when the traditional cross-entropy loss is replaced with the attention loss. Since AoA alters only the loss, it can easily be combined with other transferability-enhancement methods, e.g., SI \cite{lin2019nesterov}, to achieve state-of-the-art performance.
By SI-AoA, we generate DAmageNet, the first dataset containing samples with small perturbations and a high transfer rate (an error rate over 85\% for undefended models and over 70\% for adversarially trained models). DAmageNet provides a benchmark to evaluate the robustness of DNNs against elaborately crafted adversarial samples.
AoA has exposed a common vulnerability of DNNs in their attention. Attention is only one semantic feature; attacking other semantic features shared by DNNs is also promising for achieving good transferability.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work was partially supported by National Key Research Development Project (No. 2018AAA0100702, 2019YFB1311503) and National Natural Science Foundation of China (No. 61977046, 61876107, U1803261).
The authors are grateful to the anonymous reviewers for their insightful comments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction}
Microstructure significantly influences the physical properties of metallic and ceramic materials \cite{dillon2016importance}. Many efforts have been made to develop mathematical descriptions of the spatial and temporal evolution of polycrystalline microstructures at elevated temperatures used in processing or application. During grain growth in isotropic systems, the velocity of a migrating grain boundary $v_b$ is defined by a driving force $F_b$ and a grain boundary mobility $M_b$ \cite{humphreys1997unified}:
\begin{equation}
v_b = M_b F_b. \label{v_GB}
\end{equation}
The primary driving force is the grain boundary energy $\gamma_b$, such that $F_b = \gamma_b/R_b$, where $R_b$ is the radius of curvature of the boundary. The simplest case of grain growth is the shrinking of a circular grain embedded in a matrix for which the rate of change of the grain area $A$ is defined as
\begin{equation}
\frac{\partial A}{\partial t} = 2 \pi r \frac{\partial r}{\partial t} = - 2 \pi M_b \gamma_b,
\label{eq:dAdt_circle}
\end{equation}
where $r$ is the radius of the grain and $\partial r/ \partial t = - v_b$ from Eq.~\eqref{v_GB}. This expression can be integrated with time to define the grain area as a function of time $t$:
\begin{equation}
A(t) = A_0 - 2 \pi M_b \gamma_b t.
\label{eq:A(t)}
\end{equation}
Following the analysis of Burke and Turnbull \cite{Burke1952}, the evolution of the mean grain size $ \langle R \rangle$ of a polycrystalline grain structure is given as
\begin{equation}
\langle R \rangle^2 - \langle R_o \rangle^2 = K t, \label{eq:av_gr_size_w_time}
\end{equation}
where $ \langle R_o \rangle$ is the initial mean grain size (the spherical equivalent radius of an arbitrarily shaped grain) and $K$ is the kinetic coefficient. Another important aspect of two-dimensional polycrystalline grain structure is the relationship between the number of sides of a grain $F$ and its area $A$, which is also known as the von Neumann-Mullins relation \cite{mullins1956two} and is given as
\begin{equation}
\frac{dA}{dt} = -\frac{\pi}{3} M_b \gamma_b \left(6-F\right).\label{eq:vonNeumannMullins}
\end{equation}
Accordingly, grains with more than six sides will grow and those with less than six will shrink, while grains with six sides will remain stable.
Computational modeling is another approach to model polycrystalline grain growth and it has been carried out for nearly forty years. The earliest model of grain growth used the Monte Carlo Potts (MCP) method \cite{wu1982potts} to represent the local changes throughout a grain structure \cite{anderson1984computer,srolovitz1984computer}. In this stochastic approach, the grain structure is divided into discrete grain sites and the grain assigned to each site randomly changes to neighboring grains based on a probability that is a function of the grain boundary energy. In addition, various deterministic methods have been developed to model grain growth. In phase field grain growth methods, first used by Fan and Chen \cite{fan1997computer}, the grain structure is represented by continuous variable fields that have constant values within grains and smoothly transition between values at grain boundaries. The variables evolve with time to minimize a functional that defines the overall free energy of the system \cite{kim2014phase,miyoshi2017ultra,Miyoshi2021M,Chadwick2021,Moelans2022}. Cellular automata \cite{liu1996simulation,he2006computer,ding2006cellular,Xiong2021,Baumard2021}, front tracking \cite{frost1988two,lazar2010more,lazar2011more}, and level set methods \cite{elsey2009diffusion,Fausty2021} have also been used to model grain growth.
Analytical models assume that the material properties are isotropic. In reality, however, the kinetics of atomic movement depends on the anisotropic grain boundary energy ($\gamma_b$) and mobility ($M_b$) \cite{rollett1989simulation}. Similarly, computational models for grain growth are developed using assumptions and approximations. These assumptions and approximations result in errors when compared against experimental data \cite{mckenna2014grain}. A recent experimental finding has challenged the most accepted grain growth theories by revealing that there is no observed relationship between grain boundary velocity and curvature in polycrystalline Ni \cite{bhattacharya2021grain}. In addition, simulations have to be solved numerically and can be computationally expensive for large numbers of grains. Therefore, an alternative and efficient computational approach that accurately mimics grain growth experiments is needed.
Recently, machine learning modeling approaches have been tremendously successful in many scientific computational tasks by implementing statistical inference on a very large {scale \cite{Carou_undated-go,Datta2021-un}.} In particular, deep learning has proven to be a powerful tool for analyzing dynamic systems \cite{qian2020lift}, including complex material microstructures \cite{bostanabad2018computational}. However, most of this work has focused on microstructure recognition \cite{chowdhury2016image} and reconstruction \cite{bostanabad2016stochastic}. There are limited studies that take advantage of state-of-the-art machine learning methods for modeling the dynamic evolution of microstructures. In prior work, de Oca Zapiain et al.\ implemented a surrogate model to facilitate phase field model predictions \cite{de2021accelerating}. However, in their approach, the application of machine learning is heavily dependent on the high-fidelity phase field model. Yang et al.\ applied convolutional recurrent neural networks to predict microstructural evolution of various complexities \cite{yang2021self}, but the black-box machine learning model simply mimics the simulator without providing meaningful insights about the underlying physics. Moreover, machine learning models trained on large, clean simulation data alone cannot be used for real-world experimental prediction, and it is extremely expensive to collect sufficient experimental microstructure data for proper training. Hence, a generalized machine learning framework that can be trained with simulations, guided by physics, and remain flexible enough to be tuned with new experimental knowledge is of high priority.
The objective of the current work is to train a deep neural network model to predict two-dimensional isotropic grain growth {(assuming all grain boundaries have the same energy and mobility)} and validate its results with analytical models of normal grain growth. This is a first step in a long-term vision: to develop an interpretable, physics-guided deep learning model for predicting spatio-temporal grain boundary migration in two and three dimensions. The remainder of this paper is structured as follows: First, the Physics-Regularized Interpretable Machine Learning Microstructure Evolution (PRIMME) architecture is presented. This PRIMME model is trained with simulated data (MCP simulations using the Stochastic Parallel PARticle Kinetic Simulator (SPPARKS)). PRIMME results are then compared with analytical models and MCP and { phase field (PF) } simulations. Finally, various aspects of PRIMME are discussed in detail. In future work, this two-dimensional isotropic PRIMME model, which now contains interpretability and integrated physics, will be extended to analyze complex three-dimensional experimental grain growth with anisotropic grain boundary energy and mobility.
\section{Model Development}
\label{sec:model}
Designing a robust machine learning architecture for grain growth is not immediately straightforward. Ideally, the machine learning will directly predict the next grain structure using the previous grain structure. The input would be an array containing the grain number at each site and the output would be another array of grain numbers, representing the evolved grain structure after some period of time. However, this goal does not cleanly fit into the traditional classification or regression machine learning archetypes \cite{james2013introduction}. As a result, a new architecture is necessary for grain growth.
Our new architecture is inspired by the dynamic processes in the Monte Carlo Potts model and concepts from deep reinforcement learning \cite{mnih2013playing}. Rather than learn grain numbers directly, it learns how they act given their local grain neighborhood. We assume that each site has a finite number of actions at each time step. An action is defined as a site adopting the grain number of a neighbor (i.e., the site ``flips''). This new architecture first learns the most likely action for each site and then applies this action, which may or may not change the assigned grain.
Furthermore, rather than learn the action directly, this new architecture learns the likelihood for each action (similar to a regression problem). It then chooses the action with the maximal likelihood (similar to a classification problem). This approach provides flexibility because particular actions can be constrained based on known physics or knowledge. It also increases interpretability, since the outcome is a mapping of the most likely actions for each site. To extend this architecture for more complex microstructure in the future, these two properties are highly desirable.
Figure~\ref{FIG:flowchart} illustrates the complete framework of the proposed new architecture. In the framework, the microstructure is first divided into local neighborhoods and physically relevant data features are extracted from each neighborhood. These features are taken as input into a neural network, which outputs the likelihood of taking each action. Subsequently, the action with maximum likelihood is applied to obtain the evolved grain structure. A fully connected deep neural network is employed to learn actions due to its advantage in learning complex, non-linear relationships within data. The following subsections detail each of these steps.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig2_modRev.png}
\caption{Summary of the PRIMME deep learning approach for grain growth simulation. (a). Boundary pixels are selected to create pixel-wise features. Kronecker delta of Moore’s neighbors are used as the deep neural network input. (b). The neural network outputs the action likelihood for a state around the center site. The maximum action likelihood is selected to flip the center site to corresponding grain number.}
\label{FIG:flowchart}
\end{figure*}
\subsection{Training Data Generation from a Monte Carlo Potts Model}
\label{ssec:sppark}
An MCP algorithm was used to model grain growth using the SPPARKS code \cite{cardona2009crossing}, assuming isotropic grain boundary properties. Each SPPARKS simulation was initialized with a $257 \times 257$ pixelated domain with 256 initial grains generated using a Voronoi diagram to assign the grain number to each site $s_i$. The initial grain structure was then evolved using the MCP method, simulating grain growth. Within one simulation step, each site in the image can evolve to one of its allowable neighbors based on a probability linked to the energy state given by the Hamiltonian
\begin{equation}
\mathcal{H} = \sum_{i=1}^N \left[ \frac{1}{2} \sum_{j \in \mathcal{N}_i} \overline{\gamma}_b (1-\delta(s_i, s_j)) \right],\label{eq:SPPARKS_hamiltonian}
\end{equation}
where $N$ is total number of sites (i.e., pixels) in the image, $\mathcal{N}_i$ is the Moore's neighborhood of sites surrounding site $i$, $\overline{\gamma}_b$ is the isotropic average grain boundary energy, and $\delta(s_i, s_j)$ is the Kronecker delta
\begin{equation}
\delta(s_i, s_j) = \begin{cases}
1, & s_i = s_j \\
0, & s_i \neq s_j \\
\end{cases}. \label{eq:delta}
\end{equation}
Hence, the term $1-\delta(s_i, s_j)$ is $0$ when two sites are from the same grain and $1$ when they are from different grains.
The Hamiltonian represents the total interfacial energy contributed by the grain boundaries. The {probability $P$ of} flipping to an allowable neighbor {that results in an energy change $\Delta E$, if we assume all boundaries have the same mobility, is defined as}
\begin{equation}
P(\Delta E) = \begin{cases}
1, & \Delta E \leq 0 \\
e^{\frac{-\Delta E}{kT}}, & \Delta E > 0 \\
\end{cases},
\end{equation}
{where the computational temperature $kT$ is a hyperparameter that defines the intrinsic randomness of the Monte Carlo model \cite{anderson1984computer}}. In the training data, $kT=0.5$ was found {to roughen the boundaries enough} to avoid lattice pinning and {still result in a general decrease in the global energy to }provide normal growth behavior. We assume that the energy of all grain boundaries is equal with value $\overline{\gamma}_b=1$. Therefore, the SPPARKS simulations only consider local curvature and hence simulate normal curvature-driven parabolic grain growth.
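A serial, simplified sketch of one Metropolis trial of this model is given below, assuming periodic boundaries and proposals drawn from the Moore neighborhood; the actual SPPARKS implementation differs in its parallel sweep schedule and bookkeeping.
\begin{verbatim}
import numpy as np

def unlike_neighbors(spins, i, j, s):
    # Energy contributed by site (i, j) if it held grain number s:
    # the number of unlike Moore neighbors (periodic boundaries).
    n, m = spins.shape
    return sum(int(s != spins[(i + di) % n, (j + dj) % m])
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0))

def attempt_flip(spins, i, j, kT=0.5, rng=np.random.default_rng()):
    # Propose flipping site (i, j) to the grain number of a random
    # neighbor and accept with probability P(dE).
    n, m = spins.shape
    di, dj = rng.integers(-1, 2, size=2)
    proposal = spins[(i + di) % n, (j + dj) % m]
    if proposal == spins[i, j]:
        return
    dE = (unlike_neighbors(spins, i, j, proposal)
          - unlike_neighbors(spins, i, j, spins[i, j]))
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        spins[i, j] = proposal
\end{verbatim}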
\subsection{Neural Network Inputs and Outputs}
\label{ssec:action}
The input into the neural network is derived from images of grain number and is based on a modified Moore's neighbors criteria, where the observation can be extended to more than one immediate neighbor. Specifically at site $s_i$, the number of neighbors with a different grain number is calculated as
\begin{align}
\mathcal{I}(s_i) = \sum_{j\in \mathcal{N}_n} 1-\delta(s_i, s_j),
\label{eq:number_diff_neighbors}
\end{align}
where $\mathcal{N}_n$ is the set containing indices for the $n \times n$ neighborhood of pixels around site $s_i$. For this paper, we use $n=7$ for the network input. Hence, if the original microstructure is a $257 \times 257$ pixelated image of grain numbers, then $\mathcal{I}_n(s_i)$ represents an image of the same dimensions (with spatial locations indexed by $i$), where each value is equal to the number of neighboring sites with a different grain number (Fig.~\ref{FIG:flowchart}a), resembling the Hamiltonian in Eq.~\eqref{eq:SPPARKS_hamiltonian}. The new image $\mathcal{I}(s_i)$ is then divided into overlapping $\mathcal{O} \times \mathcal{O}$ patches $\mathcal{P}(s_i, s_j)$ around site $s_i$. The index $s_j$ corresponds to the $\mathcal{O} \times \mathcal{O}$ neighboring sites. If the patch is at the edge of the image, we assume periodic boundary conditions. Each patch is vectorized into a $\mathcal{O}^2 \times 1$ vector to become a single input to the neural network. The number of neural network inputs from one image is equal to the number of pixels in the image, $N$. When training, we randomize the order of patches but we train with a single step of the entire simulated microstructure before progressing to the next simulation.
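A minimal NumPy sketch of this feature construction is given below; periodic boundaries are assumed, and the actual implementation may differ in how the patches are vectorized.
\begin{verbatim}
import numpy as np

def different_neighbor_counts(grain_ids, n=7):
    # I(s_i): for every site, the number of sites in an n x n window
    # whose grain number differs from the center site.
    counts = np.zeros(grain_ids.shape, dtype=np.float32)
    r = n // 2
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            if (di, dj) != (0, 0):
                shifted = np.roll(grain_ids, shift=(di, dj), axis=(0, 1))
                counts += (shifted != grain_ids)
    return counts

def extract_patches(counts, O=17):
    # One O x O patch of I(s_i) around every site; each row is a single
    # neural-network input of length O**2.
    r = O // 2
    padded = np.pad(counts, r, mode="wrap")
    H, W = counts.shape
    return np.stack([padded[i:i + O, j:j + O].ravel()
                     for i in range(H) for j in range(W)])
\end{verbatim}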
The output of the neural network is defined by the predicted action likelihoods around each site $s_i$. Therefore, the output is an $\mathcal{A}^2 \times 1$ vector that can form a $\mathcal{A} \times \mathcal{A}$ image $Y(s_i, s_j)$ of a neighborhood around $s_i$. The index $s_j$ corresponds to the $\mathcal{A} \times \mathcal{A}$ neighboring sites. Each value defines the action likelihood that site $s_i$ will flip to the grain number associated with index $s_j$ (Figure~\ref{FIG:flowchart}b). We flip $s_i$ according to the highest action likelihood. After simultaneously applying an action to each site (rather than in a sequence), the grain structure is considered ``evolved'' to its next state.
\subsection{Neural Network Loss Function and Regularization}
\label{ssec:loss}
A custom loss function is used to robustly link the inputs to the action likelihood outputs. The loss for site $s_i^{(t)}$ at time step $t$ is then defined by a squared error term
\begin{equation}
\label{eq:loss}
L\left(s_i^{(t)} \right) = \frac{1}{|\mathcal{N}_{\mathcal{A}}|} \sum_{j \in \mathcal{N}_{\mathcal{A}}} \Bigg| Y\left(s_i^{(t)}, s_j^{(t)}\right) - \sum_{\tau=t}^{t+{N_t}} \left(\frac{1}{2} \right)^{\tau} \left[ \delta\left( s_i^{(\tau+1)},s_j^{(t)} \right) + \lambda \, \Gamma\left( s_i^{(\tau)},s_j^{(\tau)} \right) \right] \Bigg|^2
\end{equation}
where $\mathcal{N}_{\mathcal{A}}$ is set of neighbors around $s_i^{(t)}$ in the action space, $|\mathcal{N}_{\mathcal{A}}|$ is the size of the set, $Y(s_i^{(t)}, s_j^{(t)})$ is the neural network's predicted action likelihood, and $\Gamma(s_i^{(t)},s_j^{(t)})$ is the physics-guided regularization with weight $\lambda$. {$\tau$ is the index used in the summation over the number of future time steps.} Inspired by the cumulative future rewards used in deep reinforcement learning \cite{mnih2015human}, we consider {a number of }future time steps {$N_t$} in our learning framework. If {$N_t=1$}, then the loss is only computed for the immediate next step. As {$N_t$} increases, we consider more future time steps. In this paper, we use {$N_t = 4$} and $\lambda=1$.
The first term within this sum $\delta( s_i^{(\tau+1)},s_j^{(t)}) $ is the direct action label. It trains the network to choose the true future grain number, according to the MCP model. The second term is a physics-guided regularization mechanism that encourages actions that decrease the number of neighbors with different grain numbers $\mathcal{I}(s_i)$ at each pixel site of the system. This regularization is defined by
\begin{align}
\label{eq:regularization_label}
\Gamma(s_i^{(t)}, s_j^{(t)}) &=
\frac{1}{8} \left[ \sum_{k \in \mathcal{N}_3}
\delta \left( s_j^{(t)}, s_k^{(t+1)} \right) -
\delta \left( s_i^{(t)}, s_k^{(t)} \right) \right] \;
\end{align}
where $\mathcal{N}_3$ represents a $3 \times 3$ neighborhood. The first term in $\Gamma(s_i^{(t)}, s_j^{(t)})$ is the number of neighbors that match each of the possible new grain numbers $s_j^{(t)}$ in the next time step $s_k^{(t+1)}$. Hence, this value is high at site $s_j^{(t)}$ when more of the sites in $s_k^{(t+1)}$ are the same grain number. The second term gives the initial number of sites that match the center site before taking actions. The maximum value of our regularization is $1$, which occurs when site $s_i$ matches none of its neighbors (second term is $0$) and every neighbor has the same grain number (first term is $8$). The minimum value of our regularization is $-1$ when all sites in the neighborhood have the same grain number. Therefore, the regularization strongly discourages sites from flipping when most of its neighbors already have the same grain number.
{Note that the regularization only influences how the neural network is trained. It does not place hard constraints on the output nor is it directly computed as part of the neural network's output in testing. As a result, given sufficient and consistent training data, the neural network could learn irregular grain growth behavior where, for example, a small grain surrounded by larger ones could grow.}
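For clarity, Eq.~\eqref{eq:regularization_label} can be written out directly; the sketch below evaluates $\Gamma$ for a single site and candidate grain number, assuming periodic boundaries.
\begin{verbatim}
def gamma(grain_now, grain_next, i, j, candidate):
    # Regularization in [-1, 1] for flipping site (i, j) to `candidate`,
    # based on its 3 x 3 neighborhood (center excluded).
    n, m = grain_now.shape
    matches_next, matches_now = 0, 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            ii, jj = (i + di) % n, (j + dj) % m
            matches_next += int(grain_next[ii, jj] == candidate)
            matches_now += int(grain_now[ii, jj] == grain_now[i, j])
    return (matches_next - matches_now) / 8.0
\end{verbatim}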
\subsection{Neural Network Design}
The neural network architecture is shown in Table~\ref{tbl: model architecture}. Three hidden layers with Rectified Linear Unit (ReLU) activation are fully connected to take the $\mathcal{O}^2$ inputs from the observation space. Batch normalization layers are added before the activation layers to improve learning speed and stability. A 25\% random dropout is performed to prevent the co-adaptation of neurons \cite{hinton2012improving}. The output layer contains $\mathcal{A}^2$ neurons with a sigmoid transformation to represent the likelihood of flipping actions within the action space. The input and output sizes, $\mathcal{O}^2$ and $\mathcal{A}^2$, respectively, are tunable. We chose an observation space and action space of $\mathcal{O} = \mathcal{A} = 17$ due to their balance of speed and performance. A minimal sketch of one possible implementation is given after Table~\ref{tbl: model architecture}.
\begin{table}[t!]
\centering
\caption{Neural Network Architecture}
\begin{tabular}{ p{2cm} p{4cm} p{1.3cm} }
Layer &Layer description &Activation\\
\hline
Flatten & Input Size = $\mathcal{O}^2$ & -\\[1mm]
BatchNorm & Batch Normalization & - \\[1mm]
Dense & Size = 1764 & ReLU\\[1mm]
Dropout & Drop Rate = 0.25 & -\\[1mm]
BatchNorm & Batch Normalization & - \\[1mm]
Dense & Size = 882 & ReLU\\[1mm]
Dropout & Drop Rate = 0.25 & -\\[1mm]
BatchNorm & Batch Normalization & - \\[1mm]
Dense & Size = 441 & ReLU\\[1mm]
BatchNorm & Batch Normalization & - \\[1mm]
Dense& Output Size = $\mathcal{A}^2$ & Sigmoid \\[1mm]
\hline
\end{tabular}
\label{tbl: model architecture}
\end{table}
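As an illustration, one possible Keras realization of Table~\ref{tbl: model architecture} is sketched below; the optimizer, the training loss shown here (a plain mean squared error standing in for Eq.~\eqref{eq:loss}), and the exact placement of batch normalization relative to the activation are implementation choices not fixed by the table.
\begin{verbatim}
import tensorflow as tf

O, A = 17, 17  # observation and action window sizes

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(O * O,)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1764, activation="relu"),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(882, activation="relu"),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(441, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(A * A, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
\end{verbatim}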
\subsection{Training the Neural Network}
\begin{figure}[b!]
\centering
\includegraphics[width=0.5\textwidth]{training_size_distribution.png}
\caption{Training data grain sizes. Histogram of the grain sizes in the MCP training data.} \label{FIG:training_size_distribution}
\end{figure}
The neural network was trained on results from the MCP simulations evolved for up to a maximum of 100 Monte Carlo steps (MCS). A total of $200$ simulations were run for training, each with unique initial conditions (resulting in $66,049$ neural network inputs per simulation, one for each site). For each simulation, we only considered a single time point $t$, to prevent overfitting to a single initial condition. Generating the MCP training data and training PRIMME required approximately 5 hours. Figure~\ref{FIG:training_size_distribution} illustrates the distribution of grain sizes in the MCP training data. Most of the mass of the distribution lies at small grain sizes, and it approximately follows a log-normal distribution. Note that the training data only contains grains with radii ranging from 0 to 40 pixels. This enables us to observe how well PRIMME can generalize the grain growth behavior to grains with sizes larger than those contained in the training data.
After the training is complete, PRIMME is ready to predict isotropic grain growth for any two-dimensional domain and for any initial grain structure. It can use both zero-flux or periodic boundary conditions. Like the MCP model, PRIMME is non-dimensional. However, unlike the MCP model, PRIMME is deterministic not stochastic. {Thus, the assumptions of the model are that all grain boundaries have the same energy and mobility (grain boundary migration is only driven by curvature), the domain is two-dimensional, and that the behavior is non-dimensional in space and time.}
\section{Results}
\label{sec:result_discussion}
In this section, a systematic microstructural analysis is carried out with the trained two-dimensional isotropic PRIMME model. First, we analyze the simplest case - evolution of a circular grain. Next, we qualitatively compare polycrystalline microstructure evolution from PRIMME to the MCP and phase field models. Finally, we investigate geometric and topological properties \cite{Barmak2013} of large-scale grain growth. We also compare PRIMME simulation results with analytical models for grain growth. The MCP simulations are carried out using SPPARKS. The phase field simulations are carried out using the Multiphysics Object-Oriented Simulation Environment (MOOSE) \cite{permann2020moose}, a parallel finite element framework developed by Idaho National Laboratory with a computationally efficient implementation \cite{permann2016order} of the grain growth model from Moelans et al. \cite{moelans2008quantitative}. The parameters for the PF simulation are as follows: element size =$1\ \mu$m, a grain boundary mobility $M_b = 3.24\times10^{-11}$ m$^4$/(Js), energy $\gamma_b = 0.74$ J/m$^2$, interface width $=6\ \mu$m and time-step $=0.1$ sec.
\subsection{Evolution of a circular grain}
The evolution of circular grains embedded in a matrix simulated by PRIMME, MCP and PF models are shown in Fig.~\ref{FIG:vanish_images}. The initial radius of the circular grain for the PRIMME and MCP models is 30 pixels and the size of the matrix is $256 \times 256$ pixels. Similarly, the initial radius of the circular grain in the PF model is 30 $\mu$m and the size of the matrix is $256\ \mu\mathrm{m} \times 256\ \mu$m. PRIMME is nondimensional in time and space, similar to the MCP model. However, since it evolves the grain structure differently than is done in the MCP implementation in SPPARKS, its evolution in time will be different. In order to compare all three simulations, PRIMME and MCP results are scaled using the analytical solution from Eq.~\eqref{eq:A(t)}. The scaling is determined by assuming 1 pixel = 1 $\mu$m for PRIMME and MCP and the grain boundary mobility and energy are the same as for the PF model. By fitting the circular grain results to Eq.~\eqref{eq:A(t)}, one PRIMME step is equal to $0.50$~s and on{e} MC step is equal to $0.35$~s. This scaling procedure, where PRIMME and MCP results are scaled by fitting to analytical models or PF simulations, will be used in the following examples.
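A small sketch of this calibration is given below, assuming an array of the circular-grain areas (in $\mu$m$^2$) recorded at each non-dimensional step; the fitted slope is matched to the analytical rate of Eq.~\eqref{eq:dAdt_circle}.
\begin{verbatim}
import numpy as np

def seconds_per_step(areas_um2, Mb=3.24e-11, gamma_b=0.74):
    # Fit the slope dA/d(step) of the shrinking circular grain and equate
    # it to the analytical rate dA/dt = -2*pi*Mb*gamma_b (in um^2/s).
    steps = np.arange(len(areas_um2))
    slope_per_step, _ = np.polyfit(steps, areas_um2, 1)  # um^2 per step
    analytic_rate = -2.0 * np.pi * Mb * gamma_b * 1e12   # m^2/s -> um^2/s
    return slope_per_step / analytic_rate                # seconds per step
\end{verbatim}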
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{case1_circle30.png}
\caption{\label{FIG:vanish_images}}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{case1_circlearea30Rev.png}
\caption{\label{FIG:vanish_plot}}
\end{subfigure}
\caption{Evolution of a 30 $\mu$m radius circular grain in a $256\times256$ pixel matrix. (a) Images of the shrinking grain from PRIMME, MCP and PF simulations. The time in PRIMME and MCP is scaled to real time by fitting to the analytical solution from Eq.~\eqref{eq:A(t)}.(b) The change in area of the circular grain with time from PRIMME, MCP, PF. Eq.~\eqref{eq:A(t)} is also plotted for reference.}
\label{FIG:vanish}
\end{figure}
The scaled grain area versus time plot for the PRIMME, MCP, and PF models is shown in Fig.~\ref{FIG:vanish_plot}, and the analytical solution from Eq.~\eqref{eq:A(t)} is included for reference. All three models follow a linear relationship between area and time, consistent with the analytical model and other isotropic grain growth models from the literature \cite{anderson1984computer,fan1997computer,liu1996simulation,frost1988two,elsey2009diffusion}. PRIMME and MCP do not maintain a circular shape of the grain with time; the shape fluctuates in the MCP result, while the grain takes on a consistent oblong shape in the PRIMME result. This behavior disagrees with other deterministic models from the literature, which maintain a circular shape like the phase field result shown here \cite{fan1997computer,liu1996simulation,frost1988two,elsey2009diffusion}. When the circle becomes very small, the PRIMME results deviate from the analytical solution due to the oblong shape of the grain, which takes longer to disappear than a circular grain. Note that the PRIMME model was never trained on this initial condition, yet the shrinking behavior was reasonably described (though the shape was not), demonstrating that PRIMME can generalize beyond a Voronoi initial condition.
\subsection{Evolution of polycrystalline microstructure}
\begin{figure*}[b!]
\centering
\includegraphics[width=\textwidth]{case3_microstructuresRev.png}
\caption{Comparison of the grain structure evolution predicted by PRIMME, MCP and PF in a $512\times512$ pixel domain with 512 initial grains. Grain structures with similar numbers of grains are compared from the three methods.}
\label{FIG:growth}
\end{figure*}
Next, we compare the PRIMME simulation results for a polycrystalline grain structure with those from the MCP and PF models. We use a $512 \times 512$ pixel domain with $512$~initial grains generated by a Voronoi diagram, which has both more grains and a larger domain than used in the training set. The initial condition is identical for the three methods. Other simulation details (like grain boundary energy, grain boundary mobility, etc.) are identical to the circular grain case for all three models. The PRIMME simulation is performed for $1000$ steps, more than the maximum of $100$ steps used in the training data.
As shown in Fig.~\ref{FIG:growth}, the PRIMME simulation has grain structures typical for normal grain growth, with triple junctions converging towards three stable $120^{\circ}$ angles. PRIMME and MCP have similar grain structures when the number of grains $\approx300$. This is after 60 MCP steps, which is still within the range of the 100 steps used in the training data. The PF structure is somewhat less similar, consistent with previous comparisons between MCP and PF grain growth model comparisons \cite{tikare1998comparison,suwa2005computer}. As grain growth continues, the local grain structures predicted by the three methods diverge, though they still have similar microstructural characteristics. This behavior is expected as PRIMME is now extrapolating outside of the training data and MCP is stochastic. {The grain growth behavior predicted by PRIMME is in agreement with other isotropic grain growth models from the literature \cite{anderson1984computer,fan1997computer,liu1996simulation,frost1988two,elsey2009diffusion}.}
\subsection{Microstructural analysis of large-scale grain structure}
To quantify the performance of PRIMME in a case with sufficient number of grains to allow for an accurate statistical comparison with MCP and PF models, we simulate a $2400 \times 2400$ pixel ($\mu$m) domain with 20,000 initial grains generated using a Voronoi diagram. The grain growth is simulated using the PF method for $300$~s, which is equivalent to 1000 MCP steps. Thus, these large scale simulations have a domain size, number of grains, and number of steps that is far outside of the training data. We carry out a geometric analysis, comparing the change in the mean grain size and the grain size distribution, and a topological analysis, comparing the mean number of sides and the distribution of the number of sides. We compare the PRIMME results with our MCP and PF simulations and to previous simulation results.
However, we first compare the computational cost of the three simulations. It is difficult to directly compare the computational cost of PRIMME, MCP using SPPARKS, and PF using MOOSE since they use different approaches, different hardware, and different levels of parallelization. However, since both SPPARKS and MOOSE are widely-utilized and highly optimized engines for grain growth simulation, a rough comparison of wall clock time is valuable. Table~\ref{tbl: performance} shows the number and type of processor used for the simulations and the total wall time required. SPPARKS and MOOSE both run on CPUs, while PRIMME is GPU-enabled. The total wall time required for PRIMME is shorter than the times for SPPARKS and MOOSE: its time using two graphics cards is 63\% of the wall time required for SPPARKS using four 64-core processors and is 12\% of the wall time required for MOOSE using 40 32-core processors.
\begin{table*}[b!]
\centering
\caption{Computational cost of PRIMME, SPPARKS and MOOSE for the 20,000 grain simulation.}
\begin{tabular}{ p{3.5cm} p{3.5cm} p{3.5cm} p{3.5cm} }
Model &PRIMME &SPPARKS &MOOSE\\
\hline
Processors & \textbf{2} Nvidia Quadro RTX 8000 & \textbf{4} AMD EPYC 7702 64-Core & \textbf{40} AMD EPYC 75F3 32-Core \\[1mm]
Total wall time (s) & 4,839 & 7,776 & 40,590 \\[1mm]
\hline
\end{tabular}
\label{tbl: performance}
\end{table*}
\subsubsection{Geometric analysis}
The square of the average grain size with time from the PRIMME, MCP, and PF simulations is shown in Fig.~\ref{fig:av_gr_area_vs_time}. The PRIMME and MCP results were scaled to real time by fitting their slopes to the slope of the PF results. PRIMME accurately predicts the linear relationship between grain size and time (Eq.~\eqref{eq:av_gr_size_w_time}), and its prediction is very similar to those of MCP and PF. The evolution of the number of grains with time is shown in Fig.~\ref{fig:numg_rain_vs_time}.
\begin{figure}[b!]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{case4_area_over_timePlot.png}
\caption{\label{fig:av_gr_area_vs_time}}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{Revisedcase4_NumGrain_over_timePlot.png}
\caption{\label{fig:numg_rain_vs_time}}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.32\textwidth]{case4_size_distribution4KRev.png}
\includegraphics[width=0.32\textwidth]{case4_size_distribution3KRev.png} \includegraphics[width=0.32\textwidth]{case4_size_distribution2KRev.png} \caption{\label{fig:gr_size_distr}}
\end{subfigure}
\caption{Geometric analysis of the $2400\times2400$ pixel domain with 20,000 initial grains. (a) Evolution of the square of the mean grain size with time from PRIMME, MCP, and PF. The PRIMME and MCP results were scaled to real time by fitting their slope to the slope of the PF results. {(b) Evolution of number of grains with time.} (c) Comparison of the grain size distribution for approximately 4000, 3000, and 2000 grains from PRIMME, MCP, and PF. The results from Yadav 2018 \cite{Yadav2018} and Zollner 2016 \cite{Zollner2016} are also included, for reference.}
\end{figure}
The distribution of the grain sizes for grain structures with approximately 4000, 3000, and 2000 grains from PRIMME, MCP, and PF are shown in Fig.~\ref{fig:gr_size_distr}. In normal isotropic grain growth, the distribution of the grain sizes normalized by the average grain size is self similar, meaning that it is constant over time. The grain size distribution predicted by PRIMME does not significantly change as the grain structure evolves. The shape of the grain size distribution predicted by PRIMME is very similar to that predicted by MCP and PF. In addition, it is {in good agreement with} the grain size distribution from large-scale two-dimensional MCP simulations \cite{Zollner2016} and PF simulations \cite{Yadav2018} from the literature.
\subsubsection{Topological analysis}
\begin{figure}[b!]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{case4_sides_over_timePlot.png}
\caption{\label{fig:av_num_sides_vs_time}}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.32\textwidth]{case4_sides_distribution4KRev.png}
\includegraphics[width=0.32\textwidth]{case4_sides_distribution3KRev.png} \includegraphics[width=0.32\textwidth]{case4_sides_distribution2KRev.png} \caption{\label{fig:num_sides_distr}}
\end{subfigure}
\caption{Topological analysis of the $2400\times2400$ pixel domain with 20,000 initial grains. (a) Evolution of the mean number of sides with time from PRIMME, MCP, and PF. The PRIMME and MCP results used the scaling from the geometric analysis. (b) Comparison of the distribution of the number of sides for approximately 4000, 3000, and 2000 grains from PRIMME, MCP, and PF. The results from Yadav 2018 \cite{Yadav2018} and Mason 2015 \cite{Mason2015} are also included, for reference.}
\end{figure}
The mean number of sides $\langle F \rangle$ versus time from PRIMME, MCP, and PF is shown in Fig.~\ref{fig:av_num_sides_vs_time}. According to Euler's theorem, the average number of sides during steady-state growth should be six \cite{THOMPSON2001}. PRIMME correctly predicts this behavior, and its results are very similar to those from MCP and PF.
The distribution of the number of sides for grain structures with approximately 4000, 3000, and 2000 grains from PRIMME, MCP, and PF are shown in Fig.~\ref{fig:num_sides_distr}. Like the grain size distribution, the distribution of the number of sides is also self similar during normal grain growth. PRIMME accurately predicts an unchanging distribution of the number of sides even as the grain structure evolves, and it predicts distributions that are very similar to those from MCP and PF. The PRIMME distributions are {in good agreement with} those from front tracking simulations \cite{Mason2015} and PF simulations \cite{Yadav2018} from the literature.
\subsubsection{von Neumann-Mullins relationship}
Finally, to test the von Neumann-Mullins relation using PRIMME, we simulate the grain growth of a $443 \times 512$ domain with $64$~hexagonal grains using periodic boundary conditions. The structures after 1 step and after $500$~steps are shown in Fig.~\ref{FIG:hexagon}. PRIMME accurately predicts no evolution of the hexagonal grain structure. Note that PRIMME was not trained on any structures with hexagonal grains, nor on any rectangular domains.
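This outcome is what the von Neumann-Mullins relation requires. Writing it in a common notation (our symbols, with $M_{gb}$ and $\gamma_{gb}$ the isotropic boundary mobility and energy),
\[
\frac{dA}{dt} = \frac{\pi M_{gb} \gamma_{gb}}{3}\,\big(F - 6\big),
\]
a grain with $F=6$ sides has $dA/dt=0$, so an ideal hexagonal tiling should not evolve at all.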
\begin{figure}[b!]
\centering
\includegraphics{case2_hex.png}
\caption{Testing von Neumann-Mullins relationship with hexagonal grain structure. PRIMME accurately simulates no evolution of a hexagonal grain structure. }
\label{FIG:hexagon}
\end{figure}
This systematic investigation shows that the PRIMME simulation results match analytical models. In addition, the steady-state growth characteristic of normal grain growth is observed in the large-scale two-dimensional simulations. Thus, the PRIMME model is ready to simulate isotropic two-dimensional grain growth accurately.
\section{Discussion}
We have demonstrated that the newly developed PRIMME model is capable of simulating isotropic two-dimensional grain growth. In this section, we discuss the model interpretability, the effects of regularization, error due to extrapolation, and the impact of overfitting. We also show PRIMME's ability to learn irregular grain growth and perform simulations accordingly.
\subsection{Interpretability}
A major benefit of PRIMME is that it is not just a black box. The architecture of PRIMME is built specifically to increase interpretability by predicting the action likelihood rather than the action itself. Figure~\ref{FIG:interpret} illustrates how we can interpret what microstructural features are critical to flip a site A from grain $i$ to grain $j$. The neural network calculates the action likelihood for each site to flip to the same grain as other pixels. Figure~\ref{FIG:interpret}(b) shows the action likelihood map calculated for site A. In this example, site A will flip to the grain that contains site B because that is the action with the greatest likelihood, according to the neural network. Interestingly, site B is not a neighbor of site A but is near a neighboring triple junction, suggesting that PRIMME is using information from triple junctions to determine grain boundary motion. This type of information can be assessed for each site and evaluated to determine statistically what features are critical to grain growth.
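A minimal sketch of how such an action-likelihood map is turned into a site update is given below; the window size, array layout, and function names are illustrative assumptions on our part rather than PRIMME's actual interface.
\begin{verbatim}
import numpy as np

def flip_site(grain_ids, likelihood, i, j, half_window=2):
    """Give site (i, j) the grain ID of the in-window site whose predicted
    action likelihood is largest (the lightest pixel in the figure)."""
    window = grain_ids[i - half_window:i + half_window + 1,
                       j - half_window:j + half_window + 1]
    best = np.unravel_index(np.argmax(likelihood), likelihood.shape)
    return window[best]

# toy usage: a 5x5 likelihood map around an interior site of a label map
rng = np.random.default_rng(0)
ids = rng.integers(0, 4, size=(9, 9))
like = rng.random((5, 5))
new_id = flip_site(ids, like, 4, 4)
\end{verbatim}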
\begin{figure}[t!]
\centering
\includegraphics[width=3.33in]{Figure_Interpret.png}\\
(a) \hspace{3cm} (b)
\caption{Illustration of the interpretability of the PRIMME architecture. (a) shows the input grain structure and (b) shows the output action likelihood of the PRIMME neural network. Colors represent the action likelihood for site A to obtain the same grain number as that site after one step. The lightest color represents the largest action likelihood. In this example, site A flips to the site B grain number in the next step.}
\label{FIG:interpret}
\end{figure}
\subsection{The Effects of Regularization}
The physics-informed regularization is one of the key components that enables PRIMME to predict accurate grain growth. To illustrate its importance, we simulate the grain growth in the $512\times512$ pixel domain with 512 initial grains with three changes to the regularization, as shown in Fig.~\ref{FIG:regularization}, to demonstrate how it affects the predicted growth. Fig.~\ref{FIG:regularization} also contains the standard PRIMME results from Fig.~\ref{FIG:growth}, for reference.
\begin{figure*}[p]
\centering
\includegraphics[width=5.5in]{case3_microstructures_fringe.png}
\caption{Effects of regularization on PRIMME microstructure growth in the $512\times512$ pixel domain with $512$~initial grains. The first row shows the standard PRIMME results from Fig.~\ref{FIG:growth}, for reference. The predicted behaviors with no regularization, only regularization, and strong regularization are shown in the second, third, and fourth rows, respectively.}
\label{FIG:regularization}
\end{figure*}
When regularization is removed (i.e., $\lambda=0$ in Eq.~\eqref{eq:loss}), sites in the middle of grains will occasionally flip to some other grain number, as shown in the second row of Fig.~\ref{FIG:regularization}. As a result, the grains evolve into non-convex shapes and divide into discontiguous sections. Such behaviors are not consistent with normal isotropic grain growth.
When only regularization is applied (i.e., effectively $\lambda=\infty$ in Eq.~\eqref{eq:loss}), nothing is learned from the training data. Under this condition, the microstructure fails to evolve significantly, as shown in the third row of Fig.~\ref{FIG:regularization}, since a single step does not reduce the regularization. The smallest grains evolve slightly but never successfully disappear.
When the regularization loss is weighted approximately $32$~times more than the training loss (i.e., $\lambda=32$ in Eq.~\eqref{eq:loss}), the grain structure initially evolves, but the grain boundary migration slows and eventually stops, as shown in the last row of Fig.~\ref{FIG:regularization}. This demonstrates that the regularization and data-driven training are both equally necessary to achieve accurate grain growth behavior.
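Schematically, training therefore balances a data term against a regularization term weighted by $\lambda$, as in the sketch below. The specific data loss (mean-squared error) and the placeholder regularizer used here are illustrative stand-ins only; the actual terms entering Eq.~\eqref{eq:loss} are defined earlier in the paper.
\begin{verbatim}
import torch

def total_loss(pred, target, reg_term, lam):
    """Weighted sum of a data-driven loss and a regularization loss.

    lam = 0     -> purely data-driven training (second row of the figure)
    lam -> inf  -> effectively regularization only (third row)
    lam = 32    -> regularization strongly dominates (fourth row)
    """
    data_loss = torch.nn.functional.mse_loss(pred, target)
    return data_loss + lam * reg_term

# toy usage with stand-in tensors and a placeholder regularizer
pred = torch.rand(8, 25, requires_grad=True)
target = torch.rand(8, 25)
reg = pred.var()   # placeholder only, NOT the physics-informed term
loss = total_loss(pred, target, reg, lam=1.0)
loss.backward()
\end{verbatim}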
\subsection{Error from Extrapolating Outside the Training Data Set}
\begin{figure*}[b!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{case1_circle200.png}
\caption{\label{FIG:vanish200_images}}
\end{subfigure} \hspace{0.1in}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{case1_circlearea200Rev.png}
\caption{\label{FIG:vanish200_plot}}
\end{subfigure}
\caption{Effects of extrapolation on PRIMME modeling the shrinking of a 200 pixel radius circular grain embedded in a $512\times512$ pixel matrix. (a) Images of the shrinking grain using MCP and PRIMME. (b) The area of the circular grain over time predicted by PRIMME and MCP, with the analytical solution from Eq.~\eqref{eq:A(t)} shown for reference.}
\label{FIG:vanish200}
\end{figure*}
Machine learning models learn from training data. This enables PRIMME to learn grain growth behavior without a definition of the underlying physics. Yet, PRIMME is limited by the behavior it observes. Due to computational constraints, PRIMME was trained with relatively small grain sizes, as shown in Fig.~\ref{FIG:training_size_distribution}. To understand what occurs as grain sizes increase, we apply PRIMME to simulate the behavior of a $R=200$ pixel radius circular grain embedded in a $512\times 512$ pixel matrix. This circular grain is ten times larger in size than any of the grains in the training data. Its evolution is shown in Fig.~\ref{FIG:vanish200_images}. As the simulation progresses, the circular shape of the grain is not maintained as occurred with the smaller circular grain from Fig.~\ref{FIG:vanish}; instead, it develops more distinct faceted boundary segments. Also, the relationship between its area and time is no longer linear (Fig.~\ref{FIG:vanish200_plot}). This result likely demonstrates error introduced into the prediction due to extrapolation to cases far outside of the training set.
\subsection{Overtraining and Extrapolation}
As the neural network is trained, it improves its performance when replicating behavior similar to its training data, i.e. interpolation. However, this occurs at the cost of a less effective prediction of behavior outside the training data, i.e.\ extrapolation. This is effectively caused by the neural network overfitting to the training data. Figure~\ref{FIG:overtraining} compares the grain growth behavior in a 20,000 grain polycrystal predicted by PRIMME trained with $200$~simulations to PRIMME trained with $1000$~simulations. PRIMME trained with 200 simulations predicts the correct behavior, while PRIMME trained with 1000 simulations predicts deviation from parabolic growth after 200 s. We hypothesize that the extrapolated behavior at longer times, once the grain size gets outside of the range in the training data, becomes more dependent on the regularization than the data as we overfit. Hence, large grains begin to stop growing, as observed in Fig.~\ref{FIG:regularization}.
\begin{figure}[b!]
\centering
\includegraphics[width=0.45\textwidth]{case4_area_over_time_overtrainingPlot.png}
\caption{Effects of overfitting on the grain growth predicted by PRIMME for the $2400\times2400$ pixel domain with 20,000 initial grains. The square of the average grain size with time is shown for PRIMME trained with 200 simulations and PRIMME trained with 1000 simulations. }
\label{FIG:overtraining}
\end{figure}
\subsection{Modeling of Irregular Grain Growth}
A major benefit of PRIMME is that it is trained by data without governing assumptions about grain growth, suggesting it could directly learn from contiguous experimental data. As such, PRIMME has the potential to predict currently unexplained behavior by training with experimental datasets that exhibit irregular grain growth. We demonstrate how PRIMME can be trained to predict irregular grain growth by training using MCP model data similar to that discussed in Section \ref{ssec:sppark}, but using $kT=0$ rather than 0.5.
It is well-established in the literature that using $kT=0$ in MCP simulations results in pinning of the grain boundaries to lattice sites \cite{zollner2014new}, which causes grain boundary faceting that is not representative of normal grain growth. While the predicted grain growth behavior with $kT=0$ is non-physical, it does provide a convenient source of irregular grain growth data.
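The origin of this pinning is visible directly in the Monte Carlo acceptance rule. The sketch below shows the generic Metropolis criterion for a trial grain-ID flip (a textbook form, not SPPARKS source code): at $kT=0$, any flip that raises the energy is rejected, so boundaries that can only advance through energy-raising intermediate states lock onto the lattice and facet.
\begin{verbatim}
import math
import random

def accept_flip(delta_E, kT):
    """Metropolis acceptance for a trial flip with energy change delta_E."""
    if delta_E <= 0.0:
        return True              # energy never increases: always accepted
    if kT == 0.0:
        return False             # kT = 0: uphill moves impossible -> pinning
    return random.random() < math.exp(-delta_E / kT)

# toy usage
print(accept_flip(1.0, 0.0))     # always False
print(accept_flip(1.0, 0.5))     # True with probability exp(-2)
\end{verbatim}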
\begin{figure}[b!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{case3_microstructures_abnormal.png}
\caption{\label{FIG:growthIr_images}}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Case4Irregular2k.png}
\caption{\label{FIG:growthIr_plot}}
\end{subfigure}
\caption{Comparison of irregular grain growth with normal grain growth using results from the $2400\times2400$ pixel polycrystal with 20,000 initial grains. (a) Zoomed in $512\times512$ subdomain once the structure has evolved to 1000 grains from regular PRIMME (trained with $kT=0.5$ MCP data) compared with irregular PRIMME (trained with $kT=0$ MCP data). (b) Comparison of the normalized grain size distribution once the structure has reached $N_G=2000$ grains. Regular PRIMME is compared with irregular PRIMME and with an MCP simulation from the literature with $kT=0$ \cite{zollner2014new}.}
\label{FIG:growthIr}
\end{figure}
Figure~\ref{FIG:growthIr_images} shows a zoomed in view of PRIMME results for the $2400\times2400$ pixel polycrystal with 20,000 initial grains trained using the MCP model data with $kT=0$ that results in irregular grain growth. The $kT=0$ PRIMME predicts more rectangular grains, with some triple junction angles near 90$^\circ$. It also predicts a few large grains and a cluster of small grains.
Figure~\ref{FIG:growthIr_plot} compares the normalized grain size distribution of irregular PRIMME with regular PRIMME and with MCP results with $kT=0.0$ from the literature. The shape of the normalized grain size distribution is asymmetrical for irregular PRIMME, and its peak is shifted towards small grains compared to regular grain growth. The grain size distribution of irregular PRIMME is in good agreement with the previous MCP simulation with $kT=0$ \cite{zollner2014new}, suggesting that just by changing the training data, PRIMME can predict distinct grain boundary migration behavior.
While the irregular grain growth behavior generated using the MCP model with $kT = 0$ is artificial, it does demonstrate the potential for PRIMME to be trained to predict irregular grain growth. It could be trained using experimental data or results from simulations that account for grain boundary anisotropy.
\section{Conclusions}
\label{sec:conclusions}
We have successfully developed the PRIMME model, a machine learning neural network with regularization to make site-wise action predictions for grain evolution. PRIMME, which was trained on simulated grain structures generated using the MCP model in SPPARKS, is capable of simulating isotropic two-dimensional grain growth in any rectangular domain with any initial grain structure. Its predictions are in good agreement with analytical models and with the results from physics-based isotropic grain growth models from the literature. We demonstrated this agreement for grain structures with 512 initial grains in a $512\times512$ pixel domain and 20,000 initial grains in a $2400\times2400$ pixel domain. Predictions of a shrinking grain showed good agreement with respect to the change in area with time, but the shrinking grain took on an oblong shape that is not in agreement with other deterministic methods such as the phase field method. Results produced by PRIMME are interpretable, using the action likelihood for each site, and it can be taught to predict irregular grain growth using MCP training data with $kT=0$. Variations in regularization and training data significantly affect the results, and need to be investigated further in the future. The computational resource requirements for large-scale simulations are comparable to or less than those of the PF and MCP models.
Our results are among the first demonstrations of a machine learning model replicating isotropic grain growth from training with physics-based simulation data. However, future work is necessary to address challenges with extrapolation without overfitting. PRIMME may also be more effectively trained using data from deterministic simulation methods, such as from phase field models. PRIMME could be extended to reproduce realistic three-dimensional complex grain growth, where grain boundary energy and mobility are anisotropic, after training with experimental data. Furthermore, PRIMME could be extended to model grain growth and densification during sintering, though new capability would need to be added to ensure conservation of mass. Thus, our results demonstrate the significant potential for machine learning to be used to predict microstructure evolution for various materials and applications, which is currently very difficult to do with analytical or computational models.
\section*{Acknowledgements}
The authors would like to acknowledge financial support by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award \#DE-SC0020384. This material is also based upon work supported by the U.S. Department of Defense through a Science, Mathematics, and Research for Transformation (SMART) scholarship.
\section{Point modules over the $W$-algebra}
\setcounter{equation}{0} \vspace*{1pt}\par We first recall a main
result on weight Virasoro-modules from [MZ]:
\begin{theo} Let $V$ be an irreducible weight
Virasoro-module. Assume that there exists $\lambda\in \mathbb{C}$, such
that ${\rm dim\,}V_\lambda=\infty$. Then ${\rm
Supp}(V)=\lambda+\mathbb{Z}$, and for every $k\in\mathbb{Z}$, we have ${\rm
dim\,}V_{\lambda+k}=\infty$.
\end{theo}
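Throughout the computations below we freely use the following commutation relations, in the convention of this paper; these are the instances invoked repeatedly in the proofs:
$$[L_1,L_k]=(k-1)L_{k+1},\quad [L_1,I_k]=(k-1)I_{k+1},\quad
[L_{-1},L_l]=(l+1)L_{l-1},\quad [L_{-1},I_l]=(l+1)I_{l-1}.$$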
\begin{lemm}\label{HH} Assume that there exists $\mu\in\mathbb{C}$ and
a non-zero element $v\in M_\mu$, such that
$$I_1v=L_1v=L_{-1}I_2v=L_2v=0\mbox{ \ \ or \ \ }I_{-1}v=L_{-1}v=L_1I_{-2}v=L_{-2}v=0.$$ Then $M$ is a
Harish-Chandra module.
\end{lemm}
\vspace*{4pt}\par{\it Proof.~} Suppose first that $I_1v=L_1v=L_{-1}I_2v=L_2v=0$ for a nonzero
$v\in M_\mu$. It is clear that $L_{>0}v=0$ and $I_mv=0$ for $m\ge
3$. Moreover, $L_{>0}I_2v=0$ and $I_mI_2v=0$ for $m\ge 3$ or $m=1$.
Since $L_{-1}I_2v=0$, we have $L_1L_{-1}I_2v=[L_{1},
L_{-1}]I_2v+L_{-1}L_{1}I_2v=-{1\over2}L_0I_2v=0$. So $I_2v=0$ if
$\mu\ne -2$. Then ${\cal L}_{>0}v=0$. Hence $v$ is a highest
weight vector, and hence $M$ is a Harish-Chandra module.
If $\mu=-2$ and $w=I_2v\ne 0$, then $L_nw=L_nI_2v=[L_n,
I_2]v+I_2L_nv=0$ for any $n\in\mathbb{N}$. Moreover, $I_1w=0$ and
$L_{-1}I_2w=[L_{-1}, I_2]I_2v+I_2L_{-1}I_2v=0$. Then $I_2w=0$,
since $I_2w\notin M_{0}$. So ${\cal L}_{>0}w=0$. Hence $w$ is
a highest weight vector, and hence $M$ is a Harish-Chandra
module. The lowest weight case is similar.$\hfill\Box$
Assume now that $M$ is an irreducible weight $\Lambda$-module such that
there exists $\lambda\in\mathbb{C}$ satisfying ${\rm
dim\,}M_\lambda=\infty$.
\begin{lemm}\label{l1} There exists at most one $i\in\mathbb{Z}$ such
that ${\rm dim\,}M_{\lambda+i}<\infty$.
\end{lemm}
\noindent{\bf Proof.~}~Assume that $${\rm
dim\,}M_{\lambda+i}<\infty\mbox{ \ \ and \ }{\rm
dim\,}M_{\lambda+j}<\infty\mbox{ \ \ \ for some different \ \
}i,j\in\mathbb{Z}.$$ Without loss of generality, we may assume $i=1$ and
$j>1$. Set
$$
\begin{array}{ll}
V:=&\mbox{Ker}(I_1:M_\lambda\rightarrow
M_{\lambda+1})\cap\mbox{Ker} (L_1, L_{-1}I_2:M_\lambda\rightarrow
M_{\lambda+1})\cap\mbox{Ker}(I_j:M_\lambda\rightarrow
M_{\lambda+j})\\[7pt]&
\cap\,\mbox{Ker}(L_j:M_\lambda\rightarrow
M_{\lambda+j}),\end{array}$$ which is a subspace of $M_\lambda$. Since
$${\rm dim\,}M_\lambda=\infty,\ \ \ {\rm dim\,}M_{\lambda+1}<\infty\mbox{ \ \
and \ \ }{\rm dim\,}M_{\lambda+j}<\infty,$$ we have, ${\rm
dim\,}V=\infty.$ Since $$[L_1,L_k]=(k-1)L_{k+1}\neq 0\mbox{ \ \
and \ }[I_1,L_l]=(l-1)I_{l+1}\neq 0\mbox{ \ \ for \ \ }k, l\in\mathbb{Z},\
\ k, l\ge 2,$$ we get
$$
\begin{array}{ll}
L_kV=0,& k=1,j,j+1,j+2,\cdots,\ \mbox{\ \ and \ \ }\\[7pt]
I_lV=0,& l=1,j,j+1, j+2,\cdots.\end{array}\eqno(2.1)$$If there
existed $0\neq v\in V$ such that $L_2v=0$, then
$I_1v=L_1v=L_{-1}I_2v=L_2v=0$ and $M$ would be a Harish-Chandra
module by Lemma 2.2, a contradiction. Hence $L_2v\neq 0$ for
all nonzero $v\in V$. In particular,
$${\rm dim\,}L_2V=\infty.$$ Since ${\rm dim\,}M_{\lambda+1}<\infty$,
and the actions of $I_{-1}$ and $L_{-1}$ on $L_2V$ map $L_2V$
(which is an infinite dimensional subspace of $M_{\lambda+2}$) to
$M_{\lambda+1}$ (which is finite dimensional), there exists $0\neq w\in
L_2V$ such that $I_{-1}w=L_{-1}w=0$. Let $w=L_2v$ for some $v\in
V$. For all $k\geq j$, using (2.1), we have
$$L_kw=L_kL_2v=L_2L_kv+(2-k)L_{k+2}v=0+0=0.$$ Hence $L_kw=0$ for
all $k=1,j,j+1,j+2,\cdots.$ Since
$$[L_{-1},L_l]=(l+1)L_{l-1}\neq 0\ \mbox{ \ and \ } [L_{-1},I_l]=(l+1)I_{l-1}\neq 0\ \mbox{\ for
all \ }l>1,$$ we get inductively $L_kw=I_kw=0$ for all
$k=1,2,\cdots.$ Hence $M$ is a Harish-Chandra module by Lemma 2.2, a
contradiction. The lemma follows.$\hfill\Box$
Because of Lemma 2.3, we can now fix the following notation: $M$ is
an irreducible weight $\Lambda$-module, $\mu\in\mathbb{C}$ is such that ${\rm
dim\,}M_\mu<\infty$ and ${\rm dim\,}M_{\mu+i}=\infty$ for every
$i\in\mathbb{Z}\setminus\{0\}$.
\begin{lemm}\label{ll2}
Let $0\neq v\in M_{\mu-1}$ with $\mu\ne -1$ be such that
$I_1v=L_1v=L_{-1}I_2v=0$. Then

(1) $L_1v=I_mv=0$ for all $m\ge1$.
(2) $I_mL_2v=0$ for all $m\ge1$.
\end{lemm}
\noindent{\bf Proof.~} Since $L_{-1}I_2v=0$, we have
$L_1L_{-1}I_2v=[L_{1},
L_{-1}]I_2v+L_{-1}L_{1}I_2v=-{1\over2}L_0I_2v=0$. So $I_2v=0$
since $\mu\ne -1$. By $[L_1, I_k]=(k-1)I_{k+1}$ we have $I_kv=0$
for all $k\ge 2$. Moreover $I_mL_2v=[I_m,
L_2]v+L_2I_mv=(2-m)I_{m+2}v+L_2I_mv=0$. $\hfill\Box$
\begin{lemm}\label{ll3}
Let $0\neq w\in M_{\mu+1}$ with $\mu\ne 1$ be such that
$I_{-1}w=L_{-1}w=L_{1}I_{-2}w=0$. Then

(1) $L_{-1}w=I_{-m}w=0$ for all $m\ge1$.
(2) $I_{-m}L_{-2}w=0$ for all $m\ge1$.
\end{lemm}
\noindent{\bf Proof.~} The proof is similar to that of Lemma 2.4.
$\hfill\Box$
\section{\bf Proof of Theorem \ref{tmain}}
\noindent{\bf Proof of Theorem \ref{tmain}.} Due to Lemma
\ref{l1}, we can suppose that $\hbox{\rm dim}\, M_\mu<+\infty$ and $\hbox{\rm dim}\,
M_{\mu+i}=+\infty$ for all $i\in\mathbb{Z}, i\ne 0$.
Set
\begin{multline*}
V:=\mbox{Ker}\{L_1:M_{\mu-1}\rightarrow M_\mu\}\cap
\mbox{Ker}\{I_1:M_{\mu-1}\rightarrow M_\mu\}\\ \cap
\mbox{Ker}\{L_{-1}I_2 :M_{\mu-1}\rightarrow
M_\mu\}\cap\mbox{Ker}\{L_{-1}L_2 :M_{\mu-1}\rightarrow
M_\mu\}\subset M_{\mu-1}. \end{multline*}
For any $v\in V$, we have $L_1v=I_1v=L_{-1}I_2v=L_{-1}L_2v=0$.
Since ${\rm dim\,}M_{\mu-1}=\infty$ and ${\rm dim\,}M_\mu<\infty$,
we have ${\rm dim\,}V=\infty$. For any $v\in V$, consider the
element $L_2v$. By Lemma 2.2, $L_2v=0$ would imply that $M$ is a
Harish-Chandra module, a contradiction. Hence $L_2v\neq 0$, in
particular, ${\rm dim\,}L_2V=\infty$.
Since the actions of $I_{-1}$, $L_{-1}$,$L_{1}L_{-2}$ and
$L_{1}I_{-2}$ on $L_2V$ map $L_2V$ (which is an infinite
dimensional subspace of $M_{\mu+1}$) to $M_\mu$ (which is finite
dimensional), there exists $w=L_2v\in L_2V$ for some $v\in V$,
such that $0\ne w\in M_{\mu+1}$ and
$I_{-1}w=L_{-1}w=L_{1}I_{-2}w=L_{1}L_{-2}w=0$.
(1) If $\mu\ne \pm1$, then $$I_kw=0,\ k=1,2,\cdots\eqno(3.1)$$
from Lemma \ref{ll2} and
$$I_{-k}w=0,\ k=1,2,\cdots\eqno(3.2)$$ from Lemma \ref{ll3}.
This means that $I_k$ act trivially on $M$ for all $k\in\mathbb{Z}$, and
so $M$ is simply an irreducible module over the Virasoro algebra.
Thus, Theorem 1.3 follows from Theorem 2.1 in the case
$\mu\ne\pm1$.
(2) If $\mu=\pm1$, we only show that $\mu=1$ is not possible and
for $\mu=-1$ the statement will follow by applying the canonical
involution on ${\cal L}$.
In fact, if $\mu=1$, then for $v\in V$,
$L_1v=I_1v=L_{-1}I_2v=L_{-1}L_2v=L_0v=0$. By Lemma \ref{ll2}, we
have $I_kv=0, k=1, 2, \cdots$.
For any $v\in V$, consider the element $L_2v$. By Lemma 2.2,
$L_2v=0$ would imply that $M$ is a Harish-Chandra module, a
contradiction. Hence $L_2v\neq 0$, in particular, ${\rm
dim\,}L_2V=\infty$.
Since the actions of $I_{-1}$, $L_{-1}$ and $L_{1}I_{-2}$ on
$L_2V$ map $L_2V$ (which is an infinite dimensional subspace of
$M_{2}$) to $M_1$ (which is finite dimensional), there exists
$w=L_2v\in L_2V$ for some $v\in V$, such that $w\neq 0$ and
$I_{-1}w=L_{-1}w=L_{1}I_{-2}w=0$. Moreover we have $$I_kw=0, k=1,
2,\cdots\eqno(3.3)$$ from Lemma \ref{ll2}. So $$I_0w=0.
\eqno(3.4)$$
If $L_1w=L_1L_2v=0$, then from $L_{-1}L_2v=0$ we have
$L_{1}L_{-1}L_2v=[L_1, L_{-1}]L_2v+L_{-1}L_1L_2v=0$. So
$L_0L_2v=2L_2v=0$ since $L_2v\in M_2$. Hence $L_2v=0$ and then $M$
is a highest weight module.
Then we can suppose that $L_1w\ne 0$ for any $w\in L_2V$.
For any $w\in L_2V$, consider the element $L_{-2}w$. If
$L_{-2}w=0$, then $L_{-k}w=I_{-k}w=0,\ k=1,2,\cdots$. Then $M$ is a
Harish-Chandra module, a contradiction. Hence $L_{-2}L_2V\ne 0$; in particular,
${\rm dim\,}L_{-2}L_2V=\infty$. Let $W=L_2V$; then $L_1$, which maps
$L_{-2}W$ into $M_1$, has an infinite-dimensional kernel $K$ in $L_{-2}W$. Let $0\ne
L_{-2}w\in K$; then $L_1L_{-2}w=0$. But $L_1L_{-2}=[L_1,
L_{-2}]+L_{-2}L_1$ and $[L_1, L_{-2}]w=(-3)L_{-1}w=0$, hence
$L_{-2}L_1w=0$. Setting $u=L_1w\ne 0$, we have $L_{-2}u=0$,
$I_{-1}u=I_{-1}L_1w=[I_{-1}, L_1]w+L_{1}I_{-1}w=0$. Moreover by
induction we have $I_{-m}u=0$ for all $m\ge 3$.
$I_mu=[I_m,
L_1]w+L_1I_mw=(1-m)I_{m+1}w+L_1I_mw=0$ for all $m\ge 0$ by (3.3)
and (3.4). So $I_{-2}u={1\over2}[L_{-2}, I_0]u=0$. Then $I_ku=0$ for
all $k\in\mathbb{Z}$.
By $L_{-2}u=0$, we have $I_2L_{-2}u=0$. Therefore $c_1=0$.
This means that $I_k, k\in\mathbb{Z}, C_1$ act trivially on the
irreducible $M$ for all $k\in\mathbb{Z}$, and so $M$ is simply a module
over the Virasoro algebra. Thus, Theorem 1.3 follows from Theorem
2.1. $\hfill\Box$
Theorem \ref{tmain} also implies the following classification of
all irreducible weight $\Lambda$-modules which admit a nontrivial
finite dimensional weight space:
\begin{coro} Let $M$ be an irreducible weight
$\Lambda$-module. Assume that there exists $\lambda\in\mathbb{C}$ such that
$0<{\rm dim\,}M_\lambda<\infty$. Then $M$ is a Harish-Chandra
module. Consequently, $M$ is either an irreducible highest or
lowest weight module or an irreducible module from the
intermediate series. \end{coro}
\noindent{\bf Proof.}~Assume that $M$ is not a Harish-Chandra
module. Then there would exist $i\in\mathbb{Z}$ such that ${\rm
dim\,}M_{\lambda+i}=\infty$. In this case, Theorem 1.3 implies
${\rm dim\,}M_\lambda=\infty$, a contradiction. Hence $M$ is a
Harish-Chandra module, and the rest of the statement follows from
Theorem \ref{T2}.$\hfill\Box$
\par
\vskip30pt \centerline{\bf ACKNOWLEDGMENTS}
\vskip15pt This project is supported by the NNSF (Grants 10671027,
10701019, 10571119), the ZJZSF (Grant Y607136), and the Qianjiang
Excellence Project of Zhejiang Province (No. 2007R10031). The authors
thank Prof. Congying Dong for his useful comments.
\section{Introduction}
The asymptotic symmetry group of asymptotically-flat spacetimes,
the BMS group and its associated charges, has encountered somewhat of a
resurgence in interest recently, whether in the context of flat space holography \cite{BarnichAspects, Barnich:2013axa}, its relation to the Weinberg soft theorems \cite{Strominger:2013jfa, He:2014laa} or to black holes physics \cite{Hawking:2016msc,Hawking:2016sgy,Sheikh-Jabbari:2016lzm}.
The novel feature of asymptotically-flat spacetimes is that their asymptotic
symmetry group \cite{bondi, sachs} as one asymptotically approaches null
infinity is much larger than the na\"ively expected Poincar\'e group,
the symmetry group of Minkowski spacetime. It is the existence of an
infinite number of supertranslations that distinguishes the BMS group from the
Poincar\'e group. More precisely, the BMS group is the semi-direct
product of conformal isometries on the round 2-sphere with the
supertranslations, i.e.~angle-dependent translations along future null
infinity (see equation (\ref{BMSgen})):
\begin{equation}
\textup{BMS} = \textup{SL$(2,\mathbb{C})$} \ltimes \textup{ST}.
\end{equation}
Whether viewed from a
phase-space \cite{Ashtekar:1981bq, LW, IW, WZ} or covariant \cite{BB,BarTro}
point of view, the existence of an enhanced (infinite) asymptotic symmetry
group implies the existence of an infinite number of charges; the BMS charges.
Roughly speaking, the BMS charges are constructed by integrating a
BMS transformation parameter multiplied by a BMS invariant quantity over
the sphere at null infinity. Of course, in the non-linear theory there is
the subtle issue that charges will generally not be integrable due to
the existence of flux at infinity, associated with gravitational
radiation (measured by the Bondi flux, or Bondi news) \cite{bondi, WZ, BarTro}.
A short time after the BMS group and its associated charges were discovered,
another set of (conserved) charges at null infinity was also discovered,
known as Newman-Penrose (NP) charges \cite{NP}. Newman and Penrose
constructed their charges in the framework of the Newman-Penrose
formalism \cite{NP61}. These charges \emph{are} conserved along null
infinity, and are given by the integral over the sphere at infinity of a
particular spherical harmonic of a Weyl scalar. In the linearised theory there
is an infinite tower of such charges, while in the non-linear theory the
tower collapses to ten such NP charges. Despite the fact that the existence of NP charges requires a leading analytic expansion for the fields around null infinity, which is in general not satisfied \cite{Damour:1985cm, christ}, NP charges have also been of interest recently in relation to the existence of conserved charges on the horizon of extremal back holes \cite{Aretakis:RN1, Aretakis:extremal, BF, LuciettiERN, us}. In Ref.\ \cite{us}, it has been shown that there is a 1-1 correspondence between Aretakis charges on the extremal horizon and NP charges at null infinity of so-called weakly asymptotically-flat spacetimes.
The question that we would like to address here is the relation between
BMS and NP charges.
At first glance there is no obvious relation between these two sets of
charges, but, given that they are both defined in the asymptotic region of
asymptotically-flat spacetimes, it would seem natural that there
should exist some connection between them. For simplicity, we shall
restrict our attention henceforth to the supertranslations.
Generalising to the full BMS group
should not be too difficult. However, since the most interesting part is
the supertranslations, it makes sense to focus our attention on these
transformations.
Recently, it was shown by Conde and Mao in Ref.\ \cite{conde} that in the
linearised theory the infinite tower of NP charges may be reinterpreted as
subleading BMS charges. The standard BMS charge associated with
supertranslations is given by the integral over the sphere at infinity of
the Bondi mass aspect, which is supertranslation invariant in the
linearised
theory, multiplied by a supertranslation parameter. What Conde and Mao
realised is that the Bondi mass aspect is but the leading $1/r^{0}$ term in a
$1/r$-expansion of the $uu$-component of the linearised metric
perturbation $\delta g_{ab}$. Furthermore, $\delta g_{uu}$ is invariant
under supertranslations. This led them to define a new BMS charge at each
order in the $1/r$-expansion, finding that the subleading BMS charges
include the infinite tower of NP charges that exist in the linear
theory.~\footnote{In fact they only identify the real part of the NP
charges, because their expansion for the BMS charge is real. We shall
encounter the same feature in the non-linear case.}
Our aim in this paper is to generalise the above result to the full
non-linear theory. As pointed out before, this is non-trivial given the
existence of flux in the non-linear theory.
In particular, $\delta g_{uu}$ is no longer supertranslation invariant. Moreover, generally, in the non-linear theory the objects of interest are not supertranslation invariant. Hence, the same method as Conde-Mao cannot be used to find the non-linear charges.
Our idea is very simple:
we take as our starting point the general expression for asymptotic charges
derived by Barnich and Brandt \cite{BB}.~\footnote{There is an ambiguity in the definition of the asymptotic charges in general relativity (see Ref.~\cite{compere} for a discussion of this point). However, this ambiguity will not affect the results in this paper (see section \ref{sec:dis} for more details).} As defined, the Barnich-Brandt
expression can be considered as a $1/r$-expansion, the leading $1/r^{0}$ term being
the standard BMS charge. Thus, each subsequent term in this $1/r$-expansion
may be viewed as a subleading BMS charge. We find that at
order $r^{-3}$, the subleading BMS charges are associated with
the non-linearly conserved NP charges.
We begin in section \ref{sec:AF} by reviewing properties of asymptotically-flat spacetimes, as defined by Bondi \cite{bondi}. We explain the fall-off conditions that will be assumed in this paper, the canonical complex null frame for the general metric, the form of the Einstein equations at each order and, most importantly, the BMS group and how it acts on the fields.
In section \ref{sec:BMSsub}, we consider a $1/r$-expansion of the
Barnich-Brandt definition of the asymptotic charge adapted to
asymptotically-flat spacetimes, defining these to be subleading BMS charges.
We analyse the expansion up to order $r^{-3}$. In general, the structure
of the subleading BMS charges is similar to that of the leading charges;
there exist both integrable and non-integrable pieces. At each order,
we consider whether the non-integrable pieces can be made to vanish by
making particular choices for the supertranslation parameter,
finding that this can only be done non-trivially at order $r^{-3}$.
The relation of the subleading BMS charges to the Newman-Penrose formalism
is clarified in section \ref{sec:BMSNP}. In particular, we show that the
integrable BMS charges at order $r^{-3}$ correspond to NP charges.
We conclude with some comments in section \ref{sec:dis}.
\section{Asymptotically-flat metrics} \label{sec:AF}
Here, we work with the Bondi definition of asymptotic flatness \cite{bondi, sachs}. We introduce Bondi coordinates $(u,r,x^I=\{\theta,\phi\})$, such that the metric takes the form
\begin{equation} \label{AF}
d s^2 = - F e^{2 \beta} du^2 - 2 e^{2 \beta} du dr +
r^2 h_{IJ} \, (dx^I - C^I du) (dx^J - C^J du)
\end{equation}
with the metric functions satisfying the following fall-off conditions at large $r$
\begin{align}
F(u,r,x^I) &= 1 + \frac{F_0(u,x^I)}{r} + \frac{F_1(u,x^I)}{r^2} + \frac{F_2(u,x^I)}{r^3} + \frac{F_3(u,x^I)}{r^4} + o(r^{-4}), \notag \\[2mm]
\beta(u,r,x^I) &= \frac{\beta_0(u,x^I)}{r^2} + \frac{\beta_1(u,x^I)}{r^3} + \frac{\beta_2(u,x^I)}{r^4} + o(r^{-4}), \notag \\[2mm]
C^I(u,r,x^I) &= \frac{C_0^I(u,x^I)}{r^2} + \frac{C_1^I(u,x^I)}{r^3} + \frac{C_2^I(u,x^I)}{r^4} + \frac{C_3^I(u,x^I)}{r^5} + o(r^{-5}), \notag \\[2mm] \label{met:falloff}
h_{IJ}(u,r,x^I) &= \omega_{IJ} + \frac{C_{IJ}(u,x^I)}{r} + \frac{C^2 \omega_{IJ}}{4 r^2} + \frac{D_{IJ}(u,x^I)}{r^3} + \frac{E_{IJ}(u,x^I)}{r^4} + o(r^{-4}),
\end{align}
where $\omega_{IJ}$ is the standard metric on the round 2-sphere with coordinates $x^I=\{\theta, \phi\}$ and $C^2 \equiv C_{IJ} C^{IJ}$. Moreover, residual gauge freedom allows us to require that
\begin{equation} \label{det:h}
h =\omega,
\end{equation}
where $h \equiv \textup{det}(h_{IJ})$ and $\omega
\equiv \textup{det}(\omega_{IJ}) =\sin^2\theta$.
A parameterisation of $h_{IJ}$ that makes this gauge choice obvious is one for which \cite{sachs}
\begin{equation}
2 h_{IJ} dx^I dx^J = (e^{2f} + e^{2g}) d\theta^2 + 4 \sin{\theta} \sinh(f-g) d\theta d\phi + \sin^2\theta (e^{-2f} + e^{-2g}) d\phi^2
\end{equation}
with
\begin{align}
f(u,r,x^I) &= \frac{f_0(u,x^I )}{r}+\frac{f_2(u,x^I)}{r^3} +\frac{f_3(u,x^I)}{r^4} + o(r^{-4}), \notag \\[1mm]
g(u,r,x^I) &= \frac{g_0(u,x^I)}{r}+\frac{g_2(u,x^I)}{r^3} +\frac{g_3(u,x^I)}{r^4} + o(r^{-4}). \label{def:fg}
\end{align}
Note that there are no terms above for $f$ and $g$ at order $r^{-2}$ because of regularity conditions on the metric \cite{sachs}.
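Indeed, with this parameterisation the determinant condition \eqref{det:h} holds identically: a short computation gives
\begin{equation*}
\det (h_{IJ}) = \frac{\sin^2\theta}{4}\Big[(e^{2f}+e^{2g})(e^{-2f}+e^{-2g}) - 4\sinh^2(f-g)\Big]
= \frac{\sin^2\theta}{4}\Big[2 + 2\cosh 2(f-g) - 4\sinh^2(f-g)\Big] = \sin^2\theta
\end{equation*}
for any $f$ and $g$, which coincides with the determinant of the round-sphere metric $\omega_{IJ}$.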
As will become clear later, both parameterisations for $h_{IJ}$ are useful
and, clearly, there is a relation between the two. In particular, we have
\begin{gather}
C_{IJ} = \text {{\footnotesize $ \begin{pmatrix}
f_0 + g_0 & (f_0 - g_0) \sin \theta \\
(f_0 - g_0) \sin \theta & -(f_0 + g_0) \sin^2 \theta
\end{pmatrix}$} }, \quad
D_{IJ} = \text {{\footnotesize $ \begin{pmatrix}
f_2 + g_2 + \ldots & (f_2 - g_2 + \ldots) \sin \theta \\
(f_2 - g_2 + \ldots) \sin \theta & -(f_2 + g_2 + \ldots) \sin^2 \theta
\end{pmatrix}$} }, \notag \\[2mm]
E_{IJ} = \text {{\footnotesize $ \begin{pmatrix}
f_3 + g_3 + \ldots & (f_3 - g_3 + \ldots) \sin \theta \\
(f_3 - g_3 + \ldots) \sin \theta & -(f_3 + g_3 + \ldots) \sin^2 \theta
\end{pmatrix}$} },
\end{gather}
where the ellipses indicate lower order terms in $f$ and $g$, such as $f_0$ and $g_0$.
Since we are using the gauge \eqref{det:h} in which the determinant of
$h_{IJ}$ is equal to the determinant of the round metric on the
2-sphere, this implies that $C_{IJ}$ and $D_{IJ}$ are both trace-free, while
\begin{equation} \label{trE}
\textup{tr}\, E \equiv \omega^{IJ} E_{IJ} = D^{IJ} C_{IJ} - \frac{1}{16} \left(C^2 \right)^2,
\end{equation}
where
\begin{equation}
C^2 \equiv C_{IJ} C^{IJ} = 4 (f^2_0 + g^2_0).
\end{equation}
\subsection{Null frame} \label{sec:frame}
A complex null frame $e_\mu{}^a=(\ell^a,n^a,m^a,\bar{m}^a)$ with inverse $E^\mu{}_a$,
\begin{equation}
g_{ab} = E^\mu{}_a E^\nu{}_b \ \eta_{\mu \nu}, \qquad \eta_{\mu \nu} = \text {{\footnotesize $ \begin{pmatrix}
\begin{matrix} 0 & -1 \\ -1 & 0 \end{matrix} & \mathbf{0} \\
\mathbf{0} & \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}
\end{pmatrix}$ }}
\end{equation}
may be introduced, where
\begin{align}
\ell &= \frac{\partial}{\partial r}, \qquad n = e^{- 2 \beta} \Bigg[ \frac{\partial}{\partial u} - {\textstyle{\frac{1}{2}}} F \frac{\partial}{\partial r} + C^I \frac{\partial}{\partial x^I} \Bigg], \qquad m = \frac{\hat{m}^I}{r} \frac{\partial}{\partial x^I}, \notag\\
\ell^\flat &= - e^{2\beta} du, \qquad n^{\flat} = - \Big( dr + \frac{1}{2} F du \Big), \qquad m^{\flat} = r\, \hat{m}_I\, (dx^I - C^I du),
\label{AF:frame}
\end{align}
where
\begin{equation}
2 \hat{m}^{(I} \bar{\hat{m}}^{J)} = h^{IJ}
\end{equation}
with $h^{IJ}$ the matrix inverse of $h_{IJ}$. Equivalently,
\begin{equation}
m = \frac{1}{2r} \left[ (e^{-f} + i e^{-g}) \partial_\theta -\frac{i}{\sin\theta} (e^{f} + i e^{g}) \partial_\phi \right].
\end{equation}
Given some arbitrary vector $V_a$, we denote the components in the null basis as follows
\begin{equation}
\ell^a V_a \equiv V_0 = - V^1,\qquad n^a V_a \equiv V_1 = -V^0,\qquad m^a V_a \equiv V_m=V^{\bar{m}},
\end{equation}
with the obvious generalisation also to tensors.
\subsection{Einstein equations}
As well as the fall-off conditions \eqref{met:falloff} and the gauge condition \eqref{det:h}, following Ref.\ \cite{sachs}, we assume that the components
$T_{00}$ and $T_{0m}$ of the energy-momentum tensor in the null frame
fall off as
\begin{equation} \label{falloff:matter}
T_{00} = o(r^{-5}), \qquad T_{0m} = o(r^{-3}).
\end{equation}
The Einstein equation then implies that
\begin{align}
G_{00} = o(r^{-5}) &\quad \implies \quad \beta_0 = -\frac{1}{32}\, C^2, \quad \beta_1 = 0, \\
G_{0m} = o(r^{-3}) &\quad \implies \quad C_0^I = -{\textstyle{\frac{1}{2}}} D_J C^{IJ},
\end{align}
where $D_I$ is the standard covariant derivative associated
with the round-sphere metric $\omega_{IJ}$.
Furthermore, at higher orders, given appropriate fall-off for energy-momentum tensor components, the Einstein equation would imply the following equations
\begin{align}
G_{00} = o(r^{-6}) &\ \implies \ \beta_2 = -\frac{3}{32} D_{IJ} C^{IJ} + \frac{1}{128}\, (C^2)^2, \label{b2} \\[2mm]
G_{0m} = o(r^{-5}) &\ \implies \ C_2^I = \frac{3}{4} \left( D_J D^{IJ} - C^{IJ} C_{1\, J} \right) +\frac{1}{64} C^2 D_J C^{IJ} -\frac{1}{16} C^{IJ} D_{J}C^2, \label{C2} \\[2mm]
G_{0m} = o(r^{-6}) &\ \implies \ C_3^I = \frac{2}{5} D_{J} E^{IJ} + \frac{9}{80} C^2 C_1^I -\frac{19}{80} C_{KL} D^K D^{LI} -\frac{51}{80} C^{IL} D^K D_{KL} \notag \\
& \hspace{22mm} -\frac{11}{80} D^{KL} D^I C_{KL} + \frac{7}{160} C^2 D^I C^2, \label{C3}
\end{align}
\begin{align}
G_{mm} = o(r^{-4}) &\ \implies \ \partial_u D_{IJ} = \frac{1}{8} C_{IJ} \partial_u C^2 - \frac{1}{4} F_{0} C_{IJ} - \frac{1}{2} D_{(I} C_{1\, J)} - \frac{1}{8} C_{IJ} D_K D_L C^{KL} \notag \\
& \hspace{8mm} + \frac{1}{32} D_I D_J C^2 + \frac{1}{2} D_{(I}(C_{J)K} D_L C^{KL}) - \frac{1}{8} D_I C^{KL} D_J C_{KL} \notag \\
& \hspace{8mm} + \frac{1}{4} \omega_{IJ} \Big[ D_K C_1^K -\frac{5}{16} \Box C^2 + D^M C^{KL} \big( D_{K} C_{LM}- \frac{1}{4} D_{M} C_{KL}\big) + C^2 \Big], \label{uD} \\[2mm]
%
%
%
G_{mm} = o(r^{-5}) &\ \implies \ \partial_u E_{IJ} = \frac{1}{2} D^K(C_{1\, (I} C_{J)K}) - \frac{1}{2} D^K D_{(I} D_{J)K} + \frac{5}{32} D^K(C^2 D_{(I} C_{J)K}) \notag \\
& \hspace{8mm} - \frac{1}{8} D^K (C_{K(I} D_{J)} C^2) + \frac{1}{2} \omega_{IJ} \Big[ D^{KL} \partial_u C_{KL} - \frac{1}{4} C^2 F_0 - \frac{1}{2} C_{1}^{K} D^L C_{KL} \notag \\
& \hspace{8mm} - C^{KL} D_K C_{1\, L}+ \frac{1}{2} D^K D^L D_{KL} - \frac{1}{32} C^2 D^K D^L C_{KL} + \frac{5}{32} C^{KL} D_K D_L C^2 \notag \\
& \hspace{8mm} - \frac{1}{16} C_{KL} D_M C^{MK} D_N C^{NL} + \frac{3}{32} C^{KL} D_K C^{MN} D_L C_{MN} \Big], \label{uE}
\end{align}
\begin{align}
G_{01} = o(r^{-4}) &\ \implies \ F_1 = -\frac{1}{2} D_I C_1^I + \frac{3}{32} (\Box - 2) C^2 \notag \\
& \hspace{45mm} + \frac{1}{2} D_{I} C^{IK} D^J C_{JK} -\frac{1}{8} D^{I} C^{JK} D_{I}C_{JK}, \label{F1} \\[2mm]
G_{01} = o(r^{-5}) &\ \implies \ F_2 = -\frac{1}{4} D_I D_J D^{IJ} -\frac{3}{4} C_1^I D^J C_{IJ} + \frac{1}{32} C^{IJ} C^{KL} \, D_I D_J C_{KL} \notag \\
& \hspace{20mm} + \frac{1}{64} C^2 \, D_I D_J C^{IJ} - \frac{1}{32} C^{IJ} D_I C^{KL} D_J C_{KL} + \frac{5}{64} D_I C^{IJ} D_J C^{2}, \label{F2}
\end{align}
\begin{align}
G_{01} = o(r^{-6}) &\ \implies \ F_3 = -\frac{1}{10} D_I D_J E^{IJ} + \frac{3}{4} C_1^I C_{1\, I} + \frac{3}{160} D_I(C^2 C_1^I) +\frac{5}{512} (C^2)^2 \notag \\
& \hspace{20mm} +\frac{1}{16} C^{IJ} \Box D_{IJ} + \frac{9}{80} D^{IJ} \Box C_{IJ} -\frac{11}{40} D^I C^{JK} D_I D_{JK} \notag \\
& \hspace{20mm} + \frac{2}{5} D^I C^{JK} D_J D_{IK} - \frac{3}{80} D^{IJ} C_{IJ}-\frac{33}{5120} \Box (C^2)^2 \notag \\
& \hspace{20mm} + \frac{13}{1024} D^I C^2 D_I C^2 + \frac{3}{128} C^2 D^{I} C^{JK} D_I C_{JK} \notag \\
& \hspace{20mm} - \frac{1}{32} C^2 D^{I} C^{JK} D_{J} C_{IK}, \label{F3}
\end{align}
\begin{align}
G_{11} = o(r^{-2}) &\ \implies \ \partial_u F_0 = -\frac{1}{2} D_I D_J \partial_u C^{IJ} + \frac{1}{4} \partial_u C^{IJ} \partial_u C_{IJ}, \label{uF0} \\[2mm]
G_{1m} = o(r^{-3}) &\ \implies \ \partial_u C_1^I = \frac{1}{3} D^I F_0 +\frac{1}{6} \Box D_J C^{IJ} - \frac{1}{6} D^I D^J D^K C_{JK} + \frac{1}{8} C_{JK} \partial_u D^I C^{JK} \notag \\
& \hspace{25mm} + \frac{5}{8} \partial_u C_{JK} D^{I} C^{JK} -\frac{2}{3} \partial_u C_{JK} D^{J} C^{KI} -\frac{1}{6} D_J C^{IJ}, \label{uC1}
\end{align}
where $\Box \equiv D^I D_I$ is the covariant Laplacian on the unit 2-sphere.
\subsection{BMS group} \label{sec:BMS}
The asymptotic BMS symmetry is determined by imposing that the variation of the metric under the generators of the asymptotic symmetry group respects the form of the metric and the gauge choices.
These conditions imply that~\footnote{As explained in the introduction, for simplicity, we neglect the SL$(2,\mathbb{C})$ part of the BMS group.}
\begin{equation} \label{BMSgen}
\xi = s\, \partial_u + \int dr \frac{e^{2\beta}}{r^2} h^{IJ} D_{J} s \ \partial_I - \frac{r}{2} \left( D_I \xi^I - C^I D
_I s \right) \partial_r.
\end{equation}
The $u$ and $r$-independent function $s(x^I)$ parameterises supertranslations.
We list below the variation of some of the metric components under supertranslations that will be useful later. Some of these variations can also be found in Ref. \cite{BarTro}.
\begin{align}
\delta F_0 &= s \partial_u F_0 - \frac{1}{2} \partial_u C^{IJ} D_I D_J s - D_I \partial_uC^{IJ} D_J s, \label{var:F0} \\[2mm]
\delta C_1^I &= s \partial_u C_1^I + \frac{1}{16} \partial_u C^2 D^I s + F_0 D^I s - \frac{1}{4} C^{JK} D^I D_J D_K s - \frac{1}{2} C^{IJ} D_J \Box s
\notag \\
&+ \frac{1}{2} D^J C^{IK} D_J D_K s - \frac{3}{4} D^I C^{JK} D_J D_K s - \frac{1}{2} D_J C^{JK} D_K D^I s - \frac{1}{2} D^I D^J C_{JK} D^K s \notag \\
&+ \frac{1}{2} D^J D_K C^{KI} D_J s - C^{IJ} D_J s, \label{var:C1}
\end{align}
\begin{align}
\delta C_{IJ} &= s \partial_u C_{IJ} + \Box s\ \omega_{IJ} - 2 D_{(I} D_{J)} s, \label{var:C} \\[2mm]
\delta C^2 &= s \partial_u C^2 - 4 C^{IJ} D_I D_J s, \label{var:C2} \\[2mm]
\delta D_{IJ} &= s \partial_u D_{IJ} + \Big[ \frac{1}{16} C^2 \Box s - \frac{1}{16} D^K C^2 D_K s - \frac{1}{2} C^{LM} D^K C_{KL} D_Ms + C_1^K D_K s \Big] \omega_{IJ} \notag \\
& - 2 C_{1 \, (I} D_{J)}s- \frac{1}{4} C_{IJ} C^{KL} D_K D_L s - \frac{1}{8}C^2 D_{I} D_{J}s + \frac{1}{8} D_{(I} C^2 D_{J)}s + D_K C^{KL} C_{L(I} D_{J)}s, \label{var:D} \\[2mm]
\delta E_{IJ} & = s \partial_u E_{IJ} + \Big[ \frac{1}{4} D^{KL} D_K D_L s + \frac{3}{2} D_K D^{KL} D_L s - \frac{5}{4} C^{KL} C_{1\, K} D_L s - \frac{1}{64} C^2 C^{KL} D_K D_L s \notag \\
& + \frac{3}{64} \Big( C^{KL} D_K C^2+ 2 C^2 D_K C^{KL} \Big) D_L s\Big] \omega_{IJ} + \frac{1}{2} C_{1\, (I} C_{J)K} D^K s - \frac{5}{2} D^K( D_{K(I} D_{J)} s) \notag \\
& - \frac{1}{2} D^K s D_{(I} D_{J)K} + \frac{5}{32} D^K (C^2 C_{K(I} D_{J)} s) + \frac{5}{32} C^2 D^K s D_{(I} C_{J)K} - \frac{1}{8} C_{K(I} D_{J)}C^2 D^K s. \label{var:E}
\end{align}
As explained above, the form of the Bondi metric \eqref{AF} is preserved under the action of the BMS group. However, assuming a particular fall-off for the energy-momentum tensor components implies, via the Einstein equations, additional constraints on the metric. Of course, one must be sure that these extra conditions are also preserved under the action of the symmetry group. They will be preserved as long as a particular set of energy-momentum tensor components satisfy particular fall-off conditions. More precisely, consider the variation of a particular component
\begin{equation} \label{var:Einstein}
\delta_{\xi} T_{\alpha \beta} = (\mathcal L_{\xi} T)_{\alpha \beta} = \xi^c \partial_c T_{\alpha \beta} + T_{c \beta} \partial_\alpha \xi^c + T_{\alpha c} \partial_\beta \xi^c,
\end{equation}
where $\alpha$ and $\beta$ denote a fixed component of $T_{ab}$ in the null frame, i.e.\ they are each chosen from the set $\{0,1,m,\bar{m}\}$.
Now, assuming that
\begin{equation}
T_{\alpha \beta} = o(r^{-n}),
\end{equation}
for some integer $n$, equation \eqref{var:Einstein} at $O(r^{-n})$ equals
\begin{equation}
\delta_{\xi} T_{\alpha \beta} = T_{c \beta} \partial_\alpha \xi^c + T_{\alpha c} \partial_\beta \xi^c.
\end{equation}
Therefore, a necessary condition that the fall-off condition for $T_{\alpha \beta}$ be preserved is that $T_{c \alpha}$ and $T_{c \beta}$ also satisfy appropriate fall-off conditions. Here, when assuming a particular fall-off condition for a particular component of $T_{ab}$, we will always assume that the relevant components of $T_{ab}$ also satisfy appropriate fall-off conditions such that the fall-off condition for $T_{\alpha \beta}$ is preserved by the action of the BMS group. This can always be done.
\section{BMS charges at subleading order} \label{sec:BMSsub}
An expression for the variation of an asymptotic charge in general relativity is given by Barnich and Brandt \cite{BB} (see also Ref.~\cite{Abbott:1981ff})
\begin{gather}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{Q}_\xi[\delta g, g]= \frac{1}{8 \pi G} \int_{S}\,(d^2x)_{ab}\, \sqrt{-g}\ \Big\{ \xi^b g^{cd} \nabla^a \delta g_{cd} -\xi^b g^{ac} \nabla^d \delta g_{cd} +\xi^c g^{ad} \nabla^b \delta g_{cd} \hspace{20mm} \notag \\[2mm]
\hspace{70mm} + \frac{1}{2} g^{cd} \delta g_{cd} \nabla^b \xi^a + \frac{1}{2} g^{bd} \delta g_{cd} (\nabla^a \xi^c - \nabla^c \xi^a) \Big\} , \label{AsympCharge}
\end{gather}
where
\begin{equation}
(d^2x)_{ab} = \frac{1}{4} \eta_{abIJ}\ d x^I \wedge d x^J,
\end{equation}
and $\eta$ is the alternating symbol with $\eta_{u r \theta \phi}=1$. The
slash on the variational symbol $\delta$ signifies the fact that the
variation is not, in general, integrable.
As is explained in section \ref{sec:dis}, the above definition is not unique. For example, it differs from the expression given by Iyer and Wald by an ambiguity, which vanishes for $\xi$ an exact Killing vector, as opposed to an asymptotic one. We find that the ambiguity vanishes also in this case, rendering all such charges equal.
The background of interest here, with metric $g_{ab}$, is the class of asymptotically-flat spacetimes, as defined in section \ref{sec:AF}, which gives all the necessary ingredients to compute the charges, namely, the background metric $g_{ab}$, given by equation \eqref{AF} and the symmetry generators $\xi^a$, given by equation \eqref{BMSgen}. In this case,
\begin{equation} \label{measure}
(d^2x)_{ab}\, \sqrt{-g} = d\Omega\ r^2 e^{2\beta} \delta_{[a}^{u} \delta^{r}_{b]}.
\end{equation}
Plugging in the above expressions into equation \eqref{AsympCharge} leads to a rather complicated expression of the form
\begin{equation} \label{BMScharge:gen}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{Q}_\xi[\delta g, g]= \frac{1}{8 \pi G} \int_{S}\, d\Omega\ \Big\{ \delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0 + \frac{\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_1}{r} + \frac{\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2}{r^2} + \frac{\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3}{r^3} + o(r^{-3}) \Big\}.
\end{equation}
The first term $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0$ in the expansion above has been derived in Ref.\ \cite{BarTro}, as we shall review below. Strictly, only this first term is defined at null infinity. Therefore, a definition of asymptotic flatness along the lines of Geroch \cite{geroch} would simply not identify any further terms beyond the leading one, $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0$. However, there is no reason why one should not consider the subleading terms and, as we shall find below, this provides a direct relation between subleading ``BMS charges'' and the non-linear NP charges.
\subsection{BMS charge at $O(r^{0})$} \label{sec:I0}
Barnich and Troessaert \cite{BarTro} found that
\begin{equation} \label{I0}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0 = \delta \big( -2 s F_{0} \big) + \frac{s}{2} \partial_u C_{IJ} \delta C^{IJ}.
\end{equation}
Significantly, the BMS charge is not integrable. This non-integrability is directly related to the flux of gravitational radiation, or ``Bondi news,'' at null infinity \cite{BB}. The first term on the right-hand side, $-2 s F_{0}$, would
be a conserved charge if there were no flux at infinity.
$-2 F_0$ is generally known as the Bondi mass aspect, and if $s$ is chosen from the $\ell=0$ or $\ell=1$
spherical harmonics, the charge corresponds to the Bondi-Sachs 4-momentum vector.
It should be emphasised that the above separation into an integrable and non-integrable part is not unique. One could simply rearrange the terms differently, by moving some portion of the integrable part into the non-integrable part. However, the most significant aspect of the above exercise is that the BMS charge at leading order is non-integrable, and that this is related to the news at null infinity. In fact, one could ask whether the non-integrable part in equation \eqref{I0} can ever be set to zero for non-trivial parameter $s$. Clearly, this is possible if and only if
\begin{equation}
\partial_u C_{IJ} = 0.
\end{equation}
This corresponds precisely to the absence of Bondi news at null infinity.
\subsection{BMS charge at $O(r^{-1})$} \label{sec:I1}
At the next order, a rather long but straightforward calculation gives that\footnote{Given equation \eqref{BMScharge:gen}, i.e.\ the fact that we always regard these quantities as being integrated over a round 2-sphere, we freely use integration by parts, ignoring total derivative terms.}$^,$\footnote{We note that there exist many Schouten identities that allow the terms to be written in different forms, see appendix \ref{app:iden}. For example, it can be shown that (see appendix \ref{app:iden}) $$ D^{I}C^{JK} D_{I} C_{JK} - D^{I}C^{JK} D_{K} C_{IJ} - D^I C_{IK} D_J C^{JK} = 0.$$}
\begin{equation} \label{I1}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_1 = s \delta \left(-2 F_1 - D_I C_1^I + \frac{3}{16} (\Box - 2) C^2 + D^{I} C_{IK} D_J C^{JK} -\frac{1}{4} D^{I}C^{JK} D_{I} C_{JK} \right).
\end{equation}
Thus, at this order the BMS charge is integrable. Moreover, from equation \eqref{F1}, we find that if the energy-momentum tensor component $T_{01} = o(r^{-4}),$ the Einstein equation implies that
\begin{equation}
\mathcal{I}_1 = 0.
\end{equation}
If, on the other hand, $T_{01}$ is non-vanishing at this order, we have a new non-linear BMS charge
\begin{equation}
\mathcal{Q}_1 = \int_{S}\, d\Omega\ \big(-s\, T_{01}|_{r^{-4}}\big).
\end{equation}
\subsection{BMS charge at $O(r^{-2})$} \label{sec:I2}
Similarly, at the next order, we find that
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2 = s\ \delta \Big( & -2 F_2 -2 D_I C_2^{I} -3 D^I C_{IJ} C_1^J -\frac{3}{2} C_{IJ} D^I C_1^J + \frac{1}{8} C^2 \, D_I D_J C^{IJ} \notag\\
& - \frac{1}{32} C^{IJ}\, D_I D_J C^2 - \frac{1}{8} C^{IJ} D_I C^{KL} D_J C_{KL} + \frac{3}{16} D_I C^{IJ} D_J C^{2} \Big) \notag \\[2mm]
& \hspace{-10mm} + s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{16} \partial_u C^2 \delta C^2 + \frac{1}{8} F_0 \delta C^2 - \frac{1}{2} D^I C_1^J \delta C_{IJ} \notag \\
& - C_{1}^I D^J \delta C_{IJ} + \frac{1}{16} D_I D_J C^{IJ} \delta C^2 + \frac{1}{32} D_I D_J C^2 \delta C^{IJ} +\frac{1}{16} D_I C^2 D_J \delta C^{IJ} \notag \\
& \hspace{30mm} + \frac{1}{2}C_{KL} D_I C^{IK} D_{J} \delta C^{JL}+ \frac{1}{8} \delta C^{IJ} D_I C^{KL} D_J C_{KL} \Bigg). \label{BMScharge:I2}
\end{align}
Assuming that
\begin{equation}
T_{0m} = o(r^{-5}), \qquad T_{01} = o(r^{-4}), \qquad T_{mm} = o(r^{-4}),
\end{equation}
which give equations for $C_2^I$ (equation \eqref{C2}), $F_2$ (equation \eqref{F2}) and $\partial_u D_{IJ}$ (equation \eqref{uD}), respectively, the expression for $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2$ reduces to\footnote{For brevity, we have not directly substituted equation \eqref{uD} for ${\partial}_u D_{IJ}$ into the expression below.}
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2 =& s\ D_I D_J \delta \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big) \notag \\[2mm]
& + s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{16} \partial_u C^2 \delta C^2 + \frac{1}{8} F_0 \delta C^2 - \frac{1}{2} D^I C_1^J \delta C_{IJ} \notag \\
& \hspace{10mm} - C_{1}^I D^J \delta C_{IJ} + \frac{1}{16} D_I D_J C^{IJ} \delta C^2 + \frac{1}{32} D_I D_J C^2 \delta C^{IJ} +\frac{1}{16} D_I C^2 D_J \delta C^{IJ} \notag \\
& \hspace{40mm} + \frac{1}{2}C_{KL} D_I C^{IK} D_{J} \delta C^{JL}+ \frac{1}{8} \delta C^{IJ} D_I C^{KL} D_J C_{KL} \Bigg). \label{BMScharge:I2Einstein}
\end{align}
Thus, at order $r^{-2}$, we have a situation that is analogous to the leading BMS charge. That is, for a general parameter $s$ there is a non-zero integrable piece as well as a non-zero non-integrable piece, presumably again related to a flux. However, given that the expressions above do not exist \textit{at} null infinity as the boundary of the conformally compactified spacetime, the relation to quantities at null infinity is lost. Physically, the best way to think about these quantities is perhaps that they are defined ``close'' to null infinity. For this reason we say that the non-integrable part is related to \textit{fake news} at null infinity. While the physical interpretation of the leading-order BMS charge is clear, this is not the case here. Of course, there is also the issue of the non-uniqueness of the split between the integrable and non-integrable terms, as explained before. It will become clear later why we have chosen the above splitting.
We have established that at $O(r^{-2})$, we have a subleading BMS charge that is non-integrable for a general parameter $s$. It is reasonable to consider whether there exists an integrable BMS charge at this order for some special parameter(s). Given that there are no Einstein equations for $F_{0}$, $C_1^I$ and $C_{IJ}$, terms in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}$ involving these quantities would then have to vanish independently. Consider first the terms involving $F_0$ in the non-integrable part in equation \eqref{BMScharge:I2}. Using the equations for the supertranslation variations of the metric components listed in section \ref{sec:BMS} and equations \eqref{var:D} and \eqref{uD}, we find that the only terms in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}$ that contribute to terms involving $F_0$ are
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{F_0 \; \textrm{terms}} = s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{8} F_0 \delta C^2 \Bigg)\Bigg|_{F_0 \; \textrm{terms}} .
\end{equation}
Thus, using equations \eqref{var:C}, \eqref{var:D} and \eqref{uD}
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{F_0 \;\textrm{terms}} = - \frac{1}{4} s F_0 C^{IJ} D_I D_J s.
\end{equation}
In order for the above term to be zero for an arbitrary symmetric, trace-free matrix $C_{IJ}$, we conclude that
\begin{equation} \label{I2:s}
D_I D_J s = \frac{1}{2} \omega_{IJ} \Box s,
\end{equation}
i.e.\ $s$ is an $\ell=0$ or $\ell=1$ spherical harmonic, with
\begin{equation}
\Box s = - \ell (\ell+1) s, \qquad \ell \in \{0,1\}.
\end{equation}
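As an explicit illustration (our choice of representative, not part of the derivation above), take the $\ell=1$ harmonic $s=\cos\theta$ on the unit round sphere $\omega_{IJ}dx^I dx^J = d\theta^2 + \sin^2\theta\, d\phi^2$. Then
\begin{equation}
D_\theta D_\theta s = -\cos\theta, \qquad D_\phi D_\phi s = -\Gamma^{\theta}{}_{\phi\phi}\, \partial_\theta s = -\sin^2\theta\, \cos\theta, \qquad D_\theta D_\phi s = 0,
\end{equation}
so that $D_I D_J s = -\cos\theta\, \omega_{IJ} = \frac{1}{2}\, \omega_{IJ}\, \Box s$ with $\Box s = -2s$, verifying equation \eqref{I2:s} in this case.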
Next, consider the terms involving $C_1^I$. Analogously, we find here that the only relevant terms that can contribute are
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{C_1^I \; \textrm{terms}} = s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] -\frac{1}{2} D^I C_1^J \delta C_{IJ} - C_{1}^I D^J \delta C_{IJ} \Bigg)\Bigg|_{C_1^I \; \textrm{terms}}.
\end{equation}
Note that substituting equation \eqref{I2:s} in the variation of $C_{IJ}$ \eqref{var:C} gives that
\begin{equation} \label{s01dC}
\delta C_{IJ} = s \partial_u C_{IJ}.
\end{equation}
Furthermore, using equations \eqref{var:D} and \eqref{uD}, we find that the terms involving $C_1^I$ then simplify to
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{C_1^I \; \textrm{terms}} = - D_I (s\, C_{1\, J} \delta C^{IJ}),
\end{equation}
which is a total derivative term and can thus be ignored.
Lastly, the only terms left to consider are those involving only $C_{IJ}$. Using equation \eqref{s01dC}, the only contributing terms are
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2&^{(non-int)} = \frac{1}{16} s D_I C^2 D_J \delta C^{IJ} + \frac{1}{2} s C_{KL} D_I C^{IK} D_{J} \delta C^{JL} + \big(\delta D_{IJ} - s \partial_u D_{IJ} \big)\delta C^{IJ}
\notag \\[1mm]
&+s \Big[ \partial_u D_{IJ} - \frac{1}{8} C_{IJ} \partial_u C^2 + \frac{1}{8} C_{IJ} D_K D_L C^{KL} + \frac{1}{32} D_I D_J C^2+ \frac{1}{8} D_I C^{KL} D_J C_{KL} \Big] \delta C^{IJ}.
\end{align}
Substituting the $C_{IJ}$ terms in $\delta D_{IJ}$ and $\partial_u D_{IJ}$ from equations \eqref{var:D} and \eqref{uD}, respectively, and using equation \eqref{I2:s}, gives
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)} = D_I \Big( \Big[ \frac{1}{16} s D_J C^2 + \frac{1}{2} s C_{JK} D_L C^{KL} \Big] \delta C^{IJ} \Big),
\end{equation}
i.e.\ it reduces to a total derivative, which vanishes when integrated over the
2-sphere. Hence, we conclude that for $s$ an $\ell=0$ or $\ell=1$ spherical harmonic,
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)} = 0.
\end{equation}
Therefore, $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2$ is now integrable and hence we can read off the (unintegrated) charge from equation \eqref{BMScharge:I2Einstein}
\begin{equation} \label{I2:int}
\mathcal{I}_2 = s\ D_I D_J \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big).
\end{equation}
Up to total derivatives, the charge at this order is equivalently obtained by integrating
\begin{equation}
\mathcal{I}_2 = D_I D_J s\ \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big).
\end{equation}
Equation \eqref{I2:s} and the trace-free property of $C_{IJ}$ and $D_{IJ}$ then imply that in fact
\begin{equation}
\mathcal{I}_2 = 0.
\end{equation}
In conclusion, there is no non-trivial integrable charge at this order. This result is similar in spirit to that obtained at the previous order, where we found that, while integrable, $\mathcal{I}_1 = 0$ if we assume strong enough fall-off conditions for the matter fields.
\subsection{BMS charge at $O(r^{-3})$} \label{sec:I3}
Finally, we consider the next subleading term, which we shall later relate to the NP charges in section \ref{sec:BMSNP}. A long but straightforward calculation gives that
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3 = s\ \delta \Big( & -2 F_3 -3 D_I C_3^I + 2\Box \beta_2 + 4\beta_2 +\frac{3}{2} C_1^I C_{1\, I} +\frac{3}{8} D_I (C^2 C_1^I) - \frac{3}{256} (C^2)^2 \notag \\
& - \frac{1}{2} C^{IJ} \, \Box D_{IJ} + \frac{1}{2} D^{IJ} \Box C_{IJ} + \frac{1}{2} D^I D^{JK} (4 D_K C_{IJ} - 3 D_I C_{JK}) +\frac{3}{2} D^{IJ} C_{IJ} \notag\\
& + \frac{3}{512} \Box (C^2)^2 +\frac{13}{512} D^I C^2 D_I C^2 + \frac{1}{64} C^2 D^IC^{JK} (3 D_I C_{JK} - 4 D_K C_{IJ}) \Big) \notag\\[2mm]
& \hspace{-10mm} + s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{8} F_1 \delta C^2 - 2 C^{IJ} D_I \delta C_{2\, J} - 4 \delta C_2^I D^J C_{IJ} \notag \\
& - 3 \delta C^{IJ} D_I C_{2\, J}- 5 C_2^I D^J \delta C_{IJ} - \frac{3}{4} C^2 D_I \delta C_1^I - \frac{17}{16} \delta C^2 D_I C_1^I - \frac{3}{2} \delta C_1^I D_I C^2 \notag \\
& - \frac{15}{8} C_1^I D_I \delta C^2 + \frac{1}{2} C^{IJ} \delta C_{JK} D_I C_1^K + \frac{5}{2} C_1^K C^{IJ} D_I \delta C_{JK} + C_1^K \delta C^{IJ} D_I C_{JK} \notag \\
& + \frac{1}{2} C_1^K \delta C^{IJ} D_K C_{IJ} + \frac{3}{2} \delta C_1^K C^{IJ} D_I C_{JK} + \frac{5}{4} \delta C^{IJ} \Box D_{IJ} + \frac{3}{4} C^{IJ} \Box \delta D_{IJ} \notag \\
& + \frac{5}{8} D^{IJ} \Box \delta C_{IJ} - \frac{11}{4} D^I D^{JK} D_K \delta C_{IJ} + \frac{15}{4} D^I D^{JK} D_I \delta C_{JK} + 3 D^J C_{IJ} D_K \delta D^{IK} \notag \\
& - \frac{3}{4} D^{IJ} \delta C_{IJ} - \frac{3}{2} \delta D^{IJ} C_{IJ} - \frac{1}{16} C^2 \Box \delta C^2 - \frac{3}{256} \delta C^2 \Box C^2 - \frac{1}{4} \delta C^{IJ} C_{JK} D^K D_I C^2 \notag \\
& - \frac{1}{32} C^2 \delta C^{IJ} D^K D_I C_{JK} - \frac{3}{64} C^2 C^{IJ} D^K D_I \delta C_{JK} + \frac{1}{32} \delta C^2 C^{IJ} D_I D^K C_{JK} \notag \\
& + \frac{9}{64} C^2 D_I C^{IK} D^J \delta C_{JK} - \frac{7}{32} C^{IJ} D^K C_{JK} D_I \delta C^2 - \frac{1}{8} C^{IJ} D_I C_{JK} D^K \delta C^2 \notag \\
& + \frac{1}{64} \delta C^2 D^I C^{JK} D_I \delta C_{JK} - \frac{9}{64} \delta C^{IJ} D_{I} C^2 D^K C_{JK} - \frac{17}{64} \delta C^{IJ} D^K C^2 D_{I} C_{JK} \notag \\
& - \frac{9}{32} C^{IJ} D_I C^2 D^{K} \delta C_{JK} - \frac{17}{64} C^{IJ} D^K C^2 D_{I} \delta C_{JK} - \frac{7}{128} C^2 \delta C^2 \Bigg). \label{BMScharge:I3}
\end{align}
Assuming that
\begin{equation}
T_{00} = o(r^{-6}), \qquad T_{0m} = o(r^{-6}), \qquad T_{01} = o(r^{-6}), \qquad T_{mm} = o(r^{-5}),
\end{equation}
we obtain equations for $\beta_2$ \eqref{b2}, $C_2^I$ \eqref{C2}, $C_3^I$ \eqref{C3}, $F_3$ \eqref{F3} and $\partial_u E_{IJ}$ \eqref{uE}, respectively. Inserting these equations into \eqref{BMScharge:I3} gives the much simpler expression
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3 = s\ \delta \Big(& - D_I D_J E^{IJ} +\frac{1}{2} \Box (D^{IJ} C_{IJ}) - \frac{1}{32} \Box (C^2)^2 \Big) \notag\\[2mm]
& \hspace{-10mm} + s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{4} D_I (C_1^K C^{IJ}) \delta C_{JK} + \frac{1}{4} C_1^K C^{IJ} D_I \delta C_{JK} \notag \\
& + \frac{1}{4} \delta C^{IJ} D^K D_I D_{JK} + \frac{5}{4} D_{JK} D_I D^K \delta C^{IJ} + D_I D_{JK} D^K \delta C^{IJ} \notag \\
& + \frac{1}{16} \delta C^{IJ} D^K (C_{JK} D_I C^2) -\frac{5}{64}\Big[ \delta C^{IJ} D^K (C^2 D_I C_{JK}) + C_{JK} D_I (C^2 D^K \delta C^{IJ}) \Big] \notag \\
& - \frac{1}{16} C_{JK} D_I C^2 D^K \delta C^{IJ} \Bigg). \label{BMScharge:I3Einstein}
\end{align}
In deriving this equation from \eqref{BMScharge:I3}, simple applications of the identity \eqref{iden:hadi} are required, as well as the fact that the covariant derivatives in the round 2-sphere metric satisfy
\begin{equation} \label{Riemann:S2}
[D_I, D_J] V_K = R_{IJK}{}^L V_{L}, \qquad R_{IJKL} = \omega_{IK}\, \omega_{JL} - \omega_{IL}\, \omega_{JK}.
\end{equation}
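As a quick check of conventions, contracting \eqref{Riemann:S2} reproduces the Ricci tensor and Ricci scalar of the unit round 2-sphere,
\begin{equation}
R_{JL} = \omega^{IK} R_{IJKL} = 2\,\omega_{JL} - \omega_{JL} = \omega_{JL}, \qquad R = \omega^{JL} R_{JL} = 2 .
\end{equation}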
As with $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2$ in section \ref{sec:I2}, we find that in general there exist non-integrable terms. As before, one may consider whether there exists some choice or choices of the parameter $s$ such that the non-integrable part of $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3$ vanishes. We note that there are no Einstein equations for $F_0$, $C_1^I$, $D_{IJ}$ or $C_{IJ}$, and therefore we can consider terms involving each one of these fields in isolation, without loss of generality.
First, consider terms involving $F_0$. Inspecting equation \eqref{BMScharge:I3Einstein} and equations \eqref{var:E} and \eqref{uE}, we find that the only terms containing $F_0$ are
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}|_{F_0 \; \textrm{terms}} &= \frac{1}{2} s \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big]\Big|_{F_0 \; \textrm{terms}} \notag \\
&= - \frac{1}{16} s C^2 F_0\; \omega_{IJ} \Big[ \delta C^{IJ} + s \partial_u C^{IJ} \Big].
\end{align}
Since $C_{IJ}$ is trace-free, it follows that the terms involving $F_0$
vanish.
Next, we consider terms involving $C_1^I$. These come from
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}|_{C_1^I \; \textrm{terms}} &= s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{4} D_I (C_1^K C^{IJ}) \delta C_{JK} \notag \\
& \hspace{70mm}+ \frac{1}{4} C_1^K C^{IJ} D_I \delta C_{JK} \Bigg)\Bigg|_{C_1^I \; \textrm{terms}} \notag \\[2mm]
&= \frac{1}{4} D_K \Big( s C_1^I C^{JK} \delta C_{IJ} \Big) \notag \\
& \hspace{10mm} + \frac{1}{2} \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) \Big[ D_{K} \big(s C_1^K C_{IJ} \big) - D_{I} \big(s C_1^K C_{JK} \big) \Big],
\end{align}
where we have used equations \eqref{var:E}, \eqref{uE} and \eqref{var:C}. Notice that the first term in the final equation above is a total derivative and can therefore be ignored. Furthermore, up to total derivatives, the second set of
terms is equivalent to
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = - \frac{1}{2} s C_1^K C_{IJ} \Big( D_K D^I D^J s - \frac{1}{2} \delta^J_K D^I \Box s - \delta^J_K D^I s \Big),
\end{equation}
where we have made use of equation \eqref{Riemann:S2}. Now, if this expression is to vanish for arbitrary $C_1^K$ and symmetric trace-free $C_{IJ}$, the symmetrisation on $(IJ)$ of the terms in the bracket would need to be proportional to the round 2-sphere metric $\omega_{IJ}$. Contracting over the $IJ$ indices determines the function of proportionality. In summary, we find that $s$ must satisfy
\begin{equation} \label{I3:s}
D_K D_{(I} D_{J)} s - \frac{1}{2} \omega_{K(I} D_{J)} \Box s - \frac{1}{4} \omega_{IJ} D_{K} \Box s - \omega_{K(I} D_{J)} s + \frac{1}{2} \omega_{IJ} D_{K} s = 0.
\end{equation}
As discussed in appendix C, this equation is satisfied if $s$ is any $\ell=2$ spherical harmonic (see equation \eqref{l2:1}). In particular,
\begin{equation} \label{boxs}
\Box s = - 6 s,
\end{equation}
and equation \eqref{I3:s} reduces to the simpler equation \eqref{l2id2}
\begin{equation}
D_K D_I D_J s = -2 \, \omega_{IJ}\, D_K\,s -
2 \, \omega_{K(I}\, D_{J)}\, s .\,\label{l2id2text}
\end{equation}
Assuming henceforth that $s$ is an $\ell = 2$ spherical harmonic, we proceed to investigate the terms featuring $D_{IJ}$, which appear in the following terms
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}|_{D_{IJ} \; \textrm{terms}} &= s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{4} D^K D_I D_{JK} \delta C^{IJ} \notag \\
& \hspace{28mm} + \frac{5}{4} D_{JK} D_I D^K \delta C^{IJ}+ D_I D_{JK} D^K \delta C^{IJ} \Bigg)\Bigg|_{D_{IJ} \; \textrm{terms}} \notag \\[2mm]
&= \frac{5}{4} D_I \Big( s D_{JK} D^K \delta C_{IJ} \Big) + D^K \Big(s \delta C^{IJ} D_I D_{JK} - \frac{5}{4} \delta C^{IJ} D_I \big( s D_{JK} \big) \Big) \notag \\
& \hspace{25mm} - \frac{1}{2} \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) D^K \Big[ s D_I D_{JK} +5 D_{JK} D_I s \Big],
\end{align}
where, as before, we have used equations \eqref{var:E}, \eqref{uE} and \eqref{var:C}. The first two terms in the final equation here are total derivatives, and
so when integrated over the sphere they will give zero. Up to total derivatives, the remaining terms then give
\begin{equation} \label{I3:Dterms}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = \frac{1}{2} D^K \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) \Big[ s D_I D_{JK} +5 D_{JK} D_I s \Big].
\end{equation}
Using equations \eqref{boxs} and \eqref{l2id2text}, one can show that
\begin{equation} \label{s:iden}
D^K \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) = 2 \omega^{I[J} D^{K]} s - \omega^{JK} D^I s.
\end{equation}
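For completeness, the short computation behind \eqref{s:iden} is as follows: raising the indices in \eqref{l2id2text} and using \eqref{boxs},
\begin{equation}
D^K \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) = -2\,\omega^{IJ} D^{K} s - \omega^{KI} D^{J} s - \omega^{KJ} D^{I} s + 3\,\omega^{IJ} D^{K} s,
\end{equation}
which rearranges to the right-hand side of \eqref{s:iden}.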
Given that the above combination is contracted with terms that are symmetric and trace-free in $(JK)$ in equation \eqref{I3:Dterms}, this implies that the terms involving $D_{IJ}$ vanish in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}$.
Finally, we are left with terms involving only $C_{IJ}$
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} &= s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{16} \delta C^{IJ} D^K (C_{JK} D_I C^2) \notag \\
& \hspace{2mm} -\frac{5}{64}\Big[ \delta C^{IJ} D^K (C^2 D_I C_{JK}) + C_{JK} D_I (C^2 D^K \delta C^{IJ}) \Big] - \frac{1}{16} C_{JK} D_I C^2 D^K \delta C^{IJ} \Bigg) \notag \\[2mm]
&= \frac{5}{64} D^K \Big(C^2 D_I(s C_{JK}) \delta C^{IJ} \Big) -\frac{5}{64} D_I \Big(s C^2 C_{JK} D^K \delta C^{IJ} \Big) \notag \\
& \hspace{83mm} -\frac{1}{16} D^K \Big(s C_{JK} D_I C^2 \delta C^{IJ} \Big) \notag \\
& \quad - \frac{1}{8} \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) D^K \Big[ s C_{JK} D_I C^2 - \frac{5}{4} C^2 D_{I}(s C_{JK}) \Big],
\end{align}
where we have used equations \eqref{var:E}, \eqref{uE} and \eqref{var:C}. Up to total derivatives,
\begin{equation} \label{I3:Cterms}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = \frac{1}{8} D^K \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) \Big[ s C_{JK} D_I C^2 - \frac{5}{4} C^2 D_{I}(s C_{JK}) \Big].
\end{equation}
Equation \eqref{s:iden}, and the fact that $C_{JK}$ is symmetric and trace-free, then imply that
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = 0.
\end{equation}
In summary, we find that the non-integrable terms in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3$ vanish if and only if $s$ is an $\ell=2$ spherical harmonic. Thus, we have an integrable charge, whose integrand can be read off from equation \eqref{BMScharge:I3Einstein}. Using equation \eqref{trE}, this gives, for any $\ell=2$ spherical
harmonic $s$,
\begin{equation} \label{I3:int}
\mathcal{I}_3 = s\, D_I D_J \Bigg( -E^{IJ} + \frac{1}{2}\, \textup{tr}E\ \omega^{IJ} \Bigg),
\end{equation}
which, up to total derivatives, is equivalent to
\begin{equation}
\mathcal{I}_3 = - \left( D_I D_J s + 3 s\, \omega_{IJ} \right) E^{IJ},
\end{equation}
where we have used equation \eqref{boxs}.
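Explicitly, integrating \eqref{I3:int} by parts twice over the 2-sphere gives
\begin{equation}
\int d\Omega\ s\, D_I D_J \Big( -E^{IJ} + \frac{1}{2}\, \textup{tr}E\ \omega^{IJ} \Big)
= \int d\Omega\ \Big( - E^{IJ} D_I D_J s + \frac{1}{2}\, \textup{tr}E\, \Box s \Big)
= - \int d\Omega\ \big( D_I D_J s + 3 s\, \omega_{IJ} \big) E^{IJ},
\end{equation}
where the last step uses $\Box s = -6s$ and $\textup{tr}E = \omega_{IJ} E^{IJ}$.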
Hence, we have found a new integrable charge that is generally non-vanishing for arbitrary field $E_{IJ}$. In the next section, we shall demonstrate that this charge has a precise correspondence with the NP charges.
\section{Relating the BMS charges to the NP formalism} \label{sec:BMSNP}
In this section, we relate the tower of BMS charges found in section \ref{sec:BMSsub} to the formalism developed by Newman and Penrose in Ref.\ \cite{NP61, NP}. In particular, we show that the BMS charges at order $r^{-3}$ are the non-linear NP charges discovered in Ref.\ \cite{NP}. Throughout this section, we use the notation of the Newman-Penrose formalism, which can be found in Ref.\ \cite{NP61}.\footnote{In Ref.\ \cite{NP61}, the negative signature convention is used, whereas we use the positive signature convention. This simply means that the scalar products of the null frame vectors and the definitions of the Newman-Penrose scalars differ by a minus sign. }
The Newman-Penrose formalism begins with a choice of complex null frame $\{\ell,n,m,\bar{m}\}$. We choose the null frame defined in equation \eqref{AF:frame}. Once a null frame has been chosen, we can form scalars by contracting tensors onto null frame components. In particular, 12 complex spin coefficients are formed by contracting covariant derivatives of the null frame vectors onto null frame components. The spin coefficients encode information about the connection. For example,
\begin{equation}
\kappa = m^a \ell^b \nabla_b \ell_a, \qquad \sigma = -m^a m^b \nabla_b \ell_a
\end{equation}
parameterise geodesicity and shear, respectively, of the null vector congruence associated with $\ell$. Moreover, we have scalars representing the ten degrees of freedom in the Ricci tensor, and the five complex Weyl scalars
\begin{gather}
\Psi_0 = \ell^a m^b \ell^c m^d C_{abcd}, \quad \Psi_1 = \ell^a n^b \ell^c m^d C_{abcd}, \quad \Psi_2 = \ell^a m^b \bar{m}^c n^d C_{abcd}, \notag \\
\Psi_3 = \ell^a n^b \bar{m}^c n^d C_{abcd}, \quad \Psi_4 = n^a \bar{m}^b n^c \bar{m}^d C_{abcd}.
\end{gather}
With the fall-off conditions \eqref{met:falloff} and \eqref{falloff:matter}, we find that
\begin{gather}
\Psi_0 = \psi_0^0\; \frac{1}{r^5} + \psi_0^1\; \frac{1}{r^6} + o(r^{-6}), \quad \Psi_1 = \psi_1^0\; \frac{1}{r^4} + o(r^{-4}), \quad \Psi_2 = \psi_2^0\; \frac{1}{r^3} + \psi_2^1\; \frac{1}{r^4} + o(r^{-4}), \notag \\[2mm]
\Psi_3 = \psi_3^0\; \frac{1}{r^2} + o(r^{-2}), \quad \Psi_4 = \psi_4^0\; \frac{1}{r} + o(r^{-1}).
\end{gather}
The above fall-off property of the Weyl scalars is known as peeling \cite{NP61, bondi, sachs}. Moreover,
\begin{equation}
\sigma = \sigma^0\; \frac{1}{r^2} + o(r^{-2}).
\end{equation}
In terms of the functions that define the metric components \eqref{met:falloff} and \eqref{def:fg},
\begin{equation}
\sigma^0 = \frac{(1+i)}{2} (f_0+ig_0).
\end{equation}
Defining the differential operators $\eth$ and $\bar{\eth}$ acting on a scalar of spin $n$ \cite{Goldberg:1966uu, NP61}\footnote{The spins $n$ of the Weyl scalars $\Psi_0$, $\Psi_1$, $\Psi_2$, $\Psi_3$, $\Psi_4$ are 2, 1, 0, -1 and -2, respectively, while $\sigma$ has spin 2. Complex conjugation reverses the sign of the spin: $n \rightarrow -n$. }
\begin{align}
\eth \eta &= - \frac{(1+i)}{2}\sin^n \theta \left( \frac{\partial }{\partial \theta }-\frac{i}{\sin\theta} \frac{\partial}{\partial \phi }\right)\Big(\frac{\eta}{\sin ^n\theta}\Big), \notag \\[2mm]
\bar{\eth} \eta &= - \frac{(1-i)}{2} \frac{1}{\sin^n \theta} \left( \frac{\partial }{\partial \theta }+\frac{i}{\sin\theta} \frac{\partial}{\partial \phi }\right)\big(\sin ^n\theta\, \eta\big),
\end{align}
we find the relations
\begin{equation} \label{NPeqns}
\psi_4^0 = - \partial_u^2 \bar{\sigma}^0, \quad \psi_3^0 = \eth \partial_u\bar{\sigma}^0, \quad \psi_2^0 - \bar{\psi}_2^0 = \bar{\sigma}^0 \partial_u \sigma^0 - \sigma^0 \partial_u \bar{\sigma}^0 + \bar{\eth}^2 \sigma^0 - \eth^2 \bar{\sigma}^0.
\end{equation}
Furthermore,
\begin{equation} \label{psi20}
\psi_2^0 + \bar{\psi}_2^0 = F_0 - \partial_u |\sigma^0|^2
\end{equation}
and\footnote{Note that $(C_1^{\theta} - i \sin \theta\, C_1^{\phi})$ is a spin 1 quantity.}
\begin{align}
\psi_2^1 &= F_1 + \frac{(1+i)}{2}\, \bar{\eth}(C_1^{\theta} - i \sin \theta\, C_1^{\phi}) - \frac{(1-i)}{4}\, \eth(C_1^{\theta} + i \sin \theta\, C_1^{\phi}) \notag \\
& \quad - \frac{3}{4} \eth(\bar{\sigma}^0 \bar{\eth} \sigma^0) + \frac{9}{4} \sigma^0 \bar{\eth} \eth \bar{\sigma}^0 + \frac{1}{4} \bar{\eth} \bar{\sigma}^0 \eth \sigma^0, \\
\psi_1^0 &= \frac{3(1+i)}{4} (C_1^{\theta} - i \sin \theta\, C_1^{\phi}) + \frac{3}{4} \eth |\sigma^0|^2 + 3 \sigma^0 \eth \bar{\sigma}^0, \\
\psi_0^0 &= -3 (1+i) (f_2 + i g_2) - i (f_0^3 + g_0^3 ) + \frac{(1-i)}{4} (f_0+ig_0)^3, \\
\psi_0^1 &= -6 (1+i) (f_3 + i g_3).
\end{align}
Now that we have defined all the quantities in the language of Newman and Penrose we are ready to compare to the tower of BMS charges derived in section \ref{sec:BMSsub}.
\subsection{$\mathcal{I}_0$ and BMS charges}
The standard BMS charge is defined by
\begin{equation} \label{BMS:charge}
P_{\ell,m} = - \frac{1}{2\pi G} \int d\Omega\ Y_{\ell m}\; (\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0),
\end{equation}
where $Y_{\ell m}$ are the usual spherical harmonics.
Setting $0 \leq |m| \leq \ell \leq 1$ gives the usual Bondi-Sachs 4-momentum vector. In fact, in this case, from the last equation in \eqref{NPeqns}
\begin{equation}
\Im(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0) = \Im(\bar{\eth}^2 \sigma^0)
\end{equation}
is a total derivative. Thus,
\begin{equation} \label{BMS:charge01}
P_{\ell,m} = - \frac{1}{2\pi G} \int d\Omega\ Y_{\ell m}\ \Re(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0), \qquad \ell\in \{0,1\}.
\end{equation}
Defining the integrable part of equation \eqref{I0} to be
\begin{equation}
\mathcal{Q}_0 = \frac{1}{8\pi G} \int d\Omega\ Y_{\ell m} (-2 F_0)
\end{equation}
with $s=Y_{\ell m}$ and rewriting the above expression in terms of Newman-Penrose quantities gives
\begin{equation} \label{Q0}
\mathcal{Q}_0 = - \frac{1}{2\pi G} \int d\Omega\ Y_{\ell m}\ \Re(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0)
\end{equation}
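The rewriting goes through equation \eqref{psi20}: since $\partial_u |\sigma^0|^2 = 2\, \Re(\sigma^0 \partial_u \bar{\sigma}^0)$, we have
\begin{equation}
-2 F_0 = -2 \big( \psi_2^0 + \bar{\psi}_2^0 \big) - 2\, \partial_u |\sigma^0|^2 = -4\, \Re \big( \psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0 \big),
\end{equation}
and the prefactor $1/(8\pi G)$ becomes the $-1/(2\pi G)$ appearing in \eqref{Q0}.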
Comparing with equation \eqref{BMS:charge} we find that the charge above is the real part of the BMS charge as defined by Newman-Penrose (see equation (4.15) of Ref.\ \cite{NP}). However, for $\ell=0,1$, they are equal as can be seen from equation \eqref{BMS:charge01}.
The integrability property of $\mathcal{Q}_0$ in the language of Barnich-Brandt translates to its conservation along null infinity in the language of Newman-Penrose. The Bianchi identities, which are non-trivial in the Newman-Penrose formalism, imply that
\begin{equation}
\partial_u \psi_2^0 = - \eth^2\partial_u \bar{\sigma}^0 - \sigma^0 \partial_u^2 \bar{\sigma}^0.
\end{equation}
Using this equation
\begin{equation}
\partial_u(-2 F_0) = - 4 \partial_u \Re(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0) = \Re(\eth^2\partial_u \bar{\sigma}^0) -4 |\partial_u \sigma^0|^2.
\end{equation}
Note that for $\ell \leq 1$, the first term is a total derivative since\footnote{This result comes from standard properties of spin-weighted spherical harmonics (see e.g.\ Ref.\ \cite{NP}) \label{ft:spin}
\begin{equation*}
\eth(_{s}Y_{lm}) = \sqrt{(l-s)(l+s+1)}\ _{s+1}Y_{lm}, \qquad
\bar{\eth}(_{s}Y_{lm}) = - \sqrt{(l+s)(l-s+1)}\ _{s-1}Y_{lm}.
\end{equation*}
}
\begin{equation} \label{eth20}
\bar{\eth}^2 Y_{\ell m} = \eth^2 Y_{\ell m} = 0,
\end{equation}
i.e.\ it is a soft graviton term \cite{Strominger:2013jfa
}, while in terms of functions of the metric components
\begin{equation}
|\partial_u \sigma^0|^2 = \frac{1}{8} \partial_u C_{IJ} \partial_u C^{IJ},
\end{equation}
i.e.\ the obstacle to the conservation of $\mathcal{Q}_0$ is
\begin{equation}
\frac{1}{2} \partial_u C_{IJ} \partial_u C^{IJ},
\end{equation}
which matches precisely with the non-integrable term in equation \eqref{I0}.
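As a quick check of \eqref{eth20}, the spin-weighted harmonic relations in footnote \ref{ft:spin} give
\begin{equation}
\eth^2 Y_{1 m} = \sqrt{2}\; \eth \big( {}_{1}Y_{1 m} \big) = \sqrt{2}\, \sqrt{(1-1)(1+2)}\ {}_{2}Y_{1 m} = 0,
\end{equation}
with $\eth^2 Y_{0 0} = 0$ following immediately, and similarly for $\bar{\eth}^2$.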
\subsection{$\mathcal{I}_1$ and $\psi_1^0$}
Writing $\mathcal{I}_1$ from equation \eqref{I1} in terms of Newman-Penrose quantities gives
\begin{equation}
\mathcal{I}_1 = 2\, \Re (\bar{\eth} \psi_1^0 - \psi_2^1).
\end{equation}
The Bianchi identities imply that
\begin{equation}
\psi_2^1 = \bar{\eth} \psi_1^0.
\end{equation}
Hence,
\begin{equation}
\mathcal{I}_1 = 0.
\end{equation}
\subsection{$\mathcal{I}_2$ and $\psi_0^0$}
In section \ref{sec:I2}, we found that choosing $s$ to be an $\ell=0$ or $\ell=1$ mode, the non-integrable part vanishes and we are left with a candidate charge of the form \eqref{I2:int}. In terms of Newman-Penrose quantities,
\begin{equation}
D_I D_J \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big) = \frac{2}{3} \Re \big( \bar{\eth}^2 \psi_0^0 \big).
\end{equation}
Hence,
\begin{equation}
\mathcal{I}_2 = \frac{2}{3}\ Y_{\ell m}\ \Re \big( \bar{\eth}^2 \psi_0^0 \big)
\end{equation}
with $\ell=0,1$. Using equation \eqref{eth20}, we reproduce the result in section \ref{sec:I2} that the integrable charge is in fact zero.
\subsection{$\mathcal{I}_3$ and NP charges}
In section \ref{sec:I3}, we found an integrable charge at order $r^{-3}$ as long as $s$ is chosen to be an $\ell=2$ spherical harmonic. Translating the main result of that section, equation \eqref{I3:int}, into Newman-Penrose language, and using the fact that
\begin{equation} \label{Epsi01}
D_I D_J \Bigg( -E^{IJ} + \frac{1}{2} \omega^{IJ} \Big[ D^{KL} C_{KL} - \frac{1}{16} (C^2)^2 \Big] \Bigg) = \frac{1}{3} \Re \big( \bar{\eth}^2 \psi_0^1 \big),
\end{equation}
gives
\begin{equation}
\mathcal{Q}_3 = \frac{1}{24\pi G} \int d\Omega\ \bar{Y}_{2,m} \Re \big( \bar{\eth}^2 \psi_0^1 \big).
\end{equation}
Integrating by parts gives
\begin{equation}
\mathcal{Q}_3 = \frac{1}{4\sqrt{6}\, \pi G } \int d\Omega\ \Big[ {}_{2}\bar{Y}_{2,m}\ \psi_0^1 + (-1)^m\ {}_{2}Y_{2,-m}\ \bar{\psi_0^1} \Big].
\end{equation}
Notice that the first term in the integrand above corresponds to the NP charges (see equation (4.19) of Ref.\ \cite{NP}). The second term is not quite the complex conjugate of the first. However, the combination means that we only have half the number of NP charges. Perhaps an easier way to see this is that in equation \eqref{Epsi01}, only the real part of $\bar{\eth}^2 \psi_0^1$ appears on the right-hand side.
\section{Discussion} \label{sec:dis}
In this paper, we have established concretely the relation of the NP charges to the BMS group of asymptotic symmetries at null infinity and its associated
charges. While the relation of the NP charges to the BMS group was argued for in Ref.\ \cite{NP}, even an explicit demonstration of the supertranslation invariance of the non-linear NP charges has been missing (see, however, Ref.\ \cite{goldberg}). In particular, interestingly, we find that the NP charges appear
at subleading $1/r^3$ order in a $1/r$-expansion of the Barnich-Brandt charge, which defines the standard BMS charge at leading order.
We have used the Barnich-Brandt definition of asymptotic charges, but this is not unique. For example, the Iyer-Wald definition \cite{IW} differs by a term of the form
\begin{equation}
\frac{1}{16 \pi G} \int_{S}\,(d^2x)_{ab}\, \sqrt{-g}\ \big( \nabla^a \xi^c + \nabla^c \xi^a \big) g^{bd} \delta g_{cd}.
\end{equation}
In fact, as discussed in Ref.~\cite{compere}, the above expression, with an arbitrary coefficient, represents a one parameter family of ambiguities. Our results in this paper are not affected by the inclusion of this term.
Curiously, we only obtain half the number of NP charges, owing to the fact that the Barnich-Brandt charge is real. It would be interesting to understand whether the Barnich-Brandt integral could ever give all ten NP charges and, if so, how.
It seems unlikely that the SL$(2,\mathbb{C})$ part, or indeed its
generalisation involving superrotations, could account for the remaining
five charges.
Another slightly puzzling feature of the Barnich-Brandt charge definition is
that in it $s$ plays the role both of the supertranslation parameter and of a function used to define the charge. Thus, for example, in section \ref{sec:I3}, when we show that $\mathcal{I}_3$ is integrable if $s$ is
an $\ell=2$ harmonic, showing that the variation of $\mathcal{I}_3$ with such a parameter
$s$ vanishes clearly does not prove that the integrable charge is invariant under the action of the full supertranslation group. Rather, it only demonstrates that $\mathcal{I}_3$ is invariant under the action of those supertranslations for which the supertranslation parameter $s$ is an $\ell=2$ harmonic. We do,
however, prove the complete invariance of the NP charges under the full action of the supertranslation group in appendix \ref{app:STNP}.
At the linearised level, at each order in the $1/r$ expansion, there are conserved charges associated with the tower of linearised Newman-Penrose charges. Conde and Mao~\cite{conde} also find only half of these charges, \emph{viz.}\ the real parts. Linearising our extended BMS charges, at each order we get the same form as Conde-Mao's charges. At sufficiently low orders, the Conde-Mao charges come from expanding $F(u,r,\theta,\phi)$, which we also have. Therefore, at leading order our charges agree; see equation \eqref{I0}. However, at subleading orders, we also get contributions from the expansion of $D_{I} C^{I}(u,r,\theta,\phi)$; see equations \eqref{I1}, \eqref{BMScharge:I2} and \eqref{BMScharge:I3}. Using the equations of motion \eqref{F1}, \eqref{F2}, \eqref{C2}, \eqref{F3} and \eqref{C3}, the Taylor coefficients in the $1/r$ expansion of $F(u,r,\theta,\phi)$ and $D_{I} C^{I}(u,r,\theta,\phi)$ are proportional to each other, hence the form of our linearised charges at each order is equal to that of the Conde-Mao charges. However, the coefficients are different. In particular, at the subleading order the relative constant of proportionality between $F_1(u,\theta,\phi)$ and $D_{I} C_1^{I}(u,\theta,\phi)$ is such that they cancel upon use of equation \eqref{F1}. The difference between our and Conde-Mao's linearised charges reflects the fact that at the linearised level there are a number of independent supertranslation invariant quantities. However, at the non-linear level this degeneracy is lifted and there is a unique combination that is supertranslation invariant, which is what is found in this paper.
The fact that there are only ten non-linearly conserved NP charges has not been fully understood in the context of the Newman-Penrose formalism. It remains an open question whether the reframing of the charges in terms of the Barnich-Brandt formalism could help with resolving this puzzle. Of course, a prerequisite to understanding this is first to understand why half the NP charges are missing in this formalism.
In a future work, we will also investigate the tower of subleading BMS charges for the more realistic fall-off conditions at infinity~\cite{Angelopoulos:2016wcv, Angelopoulos:2017iop,Angelopoulos:2018uwb} that do not preclude some physical processes, such as compact data close to spacelike infinity. These fall-off conditions are most relevant for current gravitational wave observations and the hope would be that this leads to the discovery of a quantity that is useful for gravitational wave analysis.
It would also be interesting to investigate the charge algebra at subleading order. In particular, there will be a hierarchy of BMS algebras at each order with different modified brackets, corresponding to the different fake news at each order, and field-dependent central extensions. At the leading order, the algebra has no central extension for supertranslation generators \cite{BarTro} and this is expected to be the case at subleading orders as well. However, extending our charges to include rotations should give rise to new central extensions at subleading orders. Furthermore, at $O(1/r^3),$ there ought to be a subalgebra, given by the generators corresponding to the Newman-Penrose charges, for which the modified bracket is just given by the ordinary Dirac bracket. We will investigate the charge algebra hierarchy in a future work.
\section*{Acknowledgements}
We would like to thank Gary Gibbons, Pujian Mao, Blagoje Oblak,
Malcolm Perry, Shahin Sheikh-Jabbari and C\'edric Troessaert for useful discussions. We would like to thank the Mitchell Family Foundation for hospitality at the Brinsop Court workshop.
Moreover, M.G.\ and C.N.P.\ would like to thank the Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Insitut), Potsdam, where this work was initiated, H.G.\ would like to thank the Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University and H.G.\ and M.G.\ would like to thank the ICTP, Trieste for hospitality during the course of this work.
M.G.\ is partially supported by grant no.\ 615203 from the European Research Council under the FP7. C.N.P.\ is partially supported by DOE grant DE-FG02-13ER42020.
A Ranque-Hilsch tube is a mechanical device that, without any moving components, separates a stream of gas into a hot and a cold components.
Air at $T_{in} = 17.5^{\circ} C$ and a high pressure of 5.72 atm enters tangentially at a cross section of the tube (Figure 1).
In our configuration the hot stream exits at $T_{H} = 30.4^{\circ} C $ and the cold stream at $T_{C} = 14.9^{\circ} C$.
Our tube was constructed following the patent of Ranque [1]. The physical phenomenon is still not completely understood.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\textwidth]{Figura1.png}
\caption{Ranque-Hilsch tube}
\label{fig:Figura 1}
\end{figure}
Ranque-Hilsch tubes (vortex tubes) are now used commercially for low-temperature applications.
In order to increase efficiency, they have been proposed as replacements for the conventional expansion nozzle in refrigeration systems.
Before we seeded the flow, we were not aware of the existence of a swirling helicoidal motion, which can be observed in the slow-motion videos (1200 fps, seeded with baby powder and water). We did expect the helicoidal mode.
To introduce the baby powder or the water, we made an atomizer as shown in Figure 2.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\textwidth]{Figura2.png}
\caption{Side A seeding particles and cold side of the Ranque-Hilsch tube}
\end{figure}
\section{Acknowledgements}
Thanks to DGAPA-UNAM through Proyecto PAPIIT No.IN117712 "Propagaci\'on de ondas a trav\'es de superficies" and Professors Marcos Ley Koo and Andr\'es Porta.
\section{References}
\noindent [1] G.\ J.\ Ranque, Method and Apparatus for Obtaining from Fluid under Pressure Two Currents of Fluids at Different Temperatures, US Patent No.\ 1,952,281, March 1934. \\
\noindent [2] B.\ Ahlborn, J.\ Camire and J.\ U.\ Keller, Low-pressure vortex tubes, 1996 J.\ Phys.\ D: Appl.\ Phys.\ 29 1469. \\
\noindent [3] A.\ F.\ Gutsol, The Ranque effect, Uspekhi Fizicheskikh Nauk, Russian Academy of Sciences; 1997 Phys.-Usp.\ 40 639.
\end{document}
Several years ago, multiple M2 brane actions were proposed
\cite{Bagger:2006sk}\cite{Gustavsson:2007vu}\cite{Aharony:2008ug}.
A system of multiple M2 branes can be polarized into a single M5 brane through
the dielectric-type effect \cite{Emparan:1997rt}\cite{Myers:1999ps}\cite{Constable:1999ac} as shown by the authors
of \cite{Ho:2008nn}\cite{Ho:2008ve}\cite{Ho:2011ni} who
constructed an M5 action starting with the BLG action with a ``bottom-up'' approach.
We present in this paper what could be viewed as a ``top-down'' approach -- the Kaluza-Klein
induced mechanism
of obtaining multiple M2 branes starting from a single
M5 brane.\footnote{See \cite{Park:2008qe} and \cite{Bandos:2008fr} for a related discussion. An interesting role played by a Ramond-Ramond C-field
background was discussed in
\cite{Chu:2010eb} and \cite{Ho:2011yr} in a related context.} We take the M5/M2 case for
an illustration but the procedure should be valid in general.
On an intuitive level, the mechanism is simply to ``cut''
an M5 brane into M2 brane strips.
In this paper we propose that there is a mathematical procedure that
corresponds to the ``cutting'':
the cutting may be realized through a certain regularized discretization of part of the M5
worldvolume as depicted in the figure below.
Roughly, the idea is to compactify the worldvolume theory of a single M5 brane
on a manifold of real dimension three.
The physics of the resulting M2 branes is
correlated with properties of the internal manifold. The choice of the internal manifold determines, among other things, the gauge group of the reduced theory in the case where the harmonics of the internal
manifold can be associated with an infinite extension of a certain gauge algebra.
It is well known that the fuzzy spherical harmonics of $S^2$ admit
such infinite extension \cite{hoppe}\cite{de Wit:1988ig}.
As established in those papers, the algebra of the $S^2$ spherical harmonics
can be associated with that of the infinite extension of $SU(N)$.
We choose $S^2\times S^1$
as the internal manifold, and carry out reduction along $S^1$ followed by truncation; we then consider expansion of fields in
terms of the $S^2$ spherical harmonics.
The main reason to choose $S^2$ is for its association with infinite $SU(N)$; the relevance of $S^2$
was discussed in \cite{Nastase:2009ny} and \cite{Gustavsson:2010ep} from different angles.
The worldvolume diffeomorphism turns into the gauge invariance once the fuzzy spherical
harmonics $Y_{lm}$ are manually replaced by $SU(N)$ group generators ${\cal T}^a$.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{M2M5two}
\caption{From an M5 to M2's} \label{discretization}
\end{figure}
Part of our motivation comes from \cite{Park:2011bg} where certain non-abelianization of the Green-Schwarz open
string action was proposed. It was anticipated that recent progress on the non-abelian M2 brane physics
would shed light on such non-abelianization. We will have more on this in the conclusion.
To some extent, Kaluza-Klein related techniques already appeared in related literature, especially in the bottom-up
approach.
We believe that it is in the top-down approach that the techniques provide
a more enlightening perspective. While the relevance of the Kaluza-Klein procedure in
the top-down approach can be seen relatively easily, precise
implementation requires new ideas and components.
Put differently, there are some subtleties in the top-down approach that need to be understood.
For example, an M2 brane theory should have eight
scalars that correspond to the transverse directions of the brane worldvolume. The M5 brane has only
five scalars which correspond to its transverse directions. One may anticipate that the extra scalars would
come, upon reduction, from the self-dual gauge field. On the contrary, the scalars from the self-dual
field should be removed, as we will show. It turns out that
it is a certain gauge fixing -- which may be called ``partial static gauge'' -- that supplies the missing scalars.
More specifically, one gauge-fixes only the M2 brane worldvolume coordinates among the eleven $X$-coordinates.
Another example of subtleties concerns field ordering. After the M5 brane action
is appropriately reduced, the fields in the action will be Kaluza-Klein expanded in terms of $S^2$ spherical
harmonics, $Y_{lm}$. $Y_{lm}$ can be mapped \cite{hoppe}\cite{de Wit:1988ig} to its
fuzzy version that was denoted therein by $T_{lm}$.
Note that $T_{lm}$'s are functions of $SO(3)$ generators that satisfy the well-known commutation
relations, therefore, {\em non-commuting}. Due to this, one must face ordering issues. As we will discuss, it is likely
that the ordering will be determined uniquely.
The Kaluza-Klein algorithm of this paper offers several new insights on the physics under consideration.
One of them is an alternative understanding of the appearance of the product $U(N)$ group, namely $U(N)\times U(N)$,
in the ABJM model. In the ABJM construction, the product $U(N)$ group arose through a brane construction.
In the current set-up, it appears on a more elementary level: it arises through enforcing reality of the action
after replacement of $Y_{lm}$ (or $T_{lm}$) by ${\cal T}^a$, the group generator.
Another revealing aspect of the current approach is that it clearly sets out the limitations of the
reduced theory.
Arriving at the BLG or ABJM type models requires an elaborate reduction
procedure. The fact that such an elaborate procedure is required
reflects that the BLG or ABJM model describes certain aspects of a particular ``sector'' of the physical states of M5.
The starting action of an M5 brane is a Nambu-Goto type that is supposed to describe a single M5 brane
at its {\em full} energy scale.\footnote{Note, however, that the M5 brane action of \cite{Bandos:1997ui}\cite{Aganagic:1997zq}
needs to be modified by higher derivative corrections to M-theory in ultra-high energy region.}
As we will explicitly show, cutting out the higher derivative terms is
essential for arriving at the BLG or ABJM action (or their variations). This implies that the BLG or
ABJM action is only capable of describing the low-energy aspects of the M2 brane physics but not the full
energy scale physics \cite{Bagger:2006sk}. (See also \cite{Park:1999xz}.)
\vspace{.3in}
\noindent The rest of the paper is organized as follows.
In Section 2, we start with an action for a single M5 brane.
In the derivative expansion, we keep two leading order terms that we name
the DBI term and the $H^2$ term, respectively. Using the technique of \cite{Park:2008qe},
3+3 splitting of dimensions is implemented on the DBI term. After that, 3+3 splitting of the $H^2$ term
is carried out following the steps that could be viewed as
the reverse procedure of \cite{Ho:2008nn}. By generalizing the results of \cite{Ho:2008nn} to a nonlinear level
and employing certain complexification of the action, we
show in Section 3 that there is a series of reduction ans\"atze that lead to ABJM action.
In the conclusion,
we end with comments on future directions.
\section{Partial reduction of M5 action}
Kaluza-Klein techniques have been employed in the literature, especially in the bottom-up approach.
However, once one gets to the specifics of obtaining multiple M2's from a single M5, certain things
are not so obvious. For example, the single M5 brane action has five scalars after the usual static
gauge-fixing whereas BLG or ABJM action has eight scalars.
One possible way of getting the eight scalars would be the self-dual field's yielding the required scalars
upon reduction. This would be in line with the way in which Kaluza-Klein reduction usually works. It has turned out
that the usual approach does not apply here. The approach that does work is an unusual Kaluza-Klein program
in which a novel gauge-fixing is required. In this section, we deal with this
and a few other related issues, partially carrying out the reduction. The reduction will be completed in the next section.
We start with the worldvolume theory of a single M5 brane formulated in \cite{Bandos:1997ui}\cite{Aganagic:1997zq}.
Several leading terms will be kept in the low energy derivative expansion.
Following \cite{Park:2008qe}\cite{Lee:2010ey},
we consider $(d,n)$ ($d+n=6$) splitting of the dimensions with $d$ denoting
the dimension of the ``external" manifold and $n$ the dimension of the ``internal" manifold.
While doing so, the Nambu bracket naturally appears.
The fact that the Nambu bracket structure is relevant for an M2 brane action is long
known \cite{de Wit:1988ig}\cite{Bergshoeff:1988hw}\cite{Awata:1999dz}. The works of \cite{de Wit:1988ig} and \cite{Bergshoeff:1988hw} were carried out
in a lightcone framework. It was recently shown in \cite{Park:2008qe} that the Nambu
bracket can be introduced in a more covariant
manner.
In this section, we partially carry out the reduction to $R^{1,2}$ which we take as the external manifold; the
goal is to re-cast the action in (\ref{2apprM5q2}) into a form that is suitable
for the harmonic expansion presented in Section 3.
Let us start with the derivative-expanded form of the covariant action of a single M5 brane \cite{Pasti:1997gx}\cite{Bandos:2000az}
\begin{eqnarray}
S&=&-\int d^6\xi \, \sqrt{-g}\left[ 1-\fr{1}{24}\hat{H}_{mnl}\hat{H}^{mnl}-\fr{1}{8(\pa a)^2}\pa_m a (\hat{H}-\tilde{\hat H})^{mnl}(H-\tilde{\hat H})_{nlp}\pa^p a+\dots \right]\nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&&+\int \Big[ \hat{C}^{\sst{(6)}}+\fr12 dB^{\sst{(2)}}\wedge \hat{C}^{\sst{(3)}} \Big].
\label{2apprM5q}
\end{eqnarray}
Here $g_{mn}$ is the induced metric on 6D worldvolume, $\hat{H}^{\sst{(3)}}=dB^{\sst{(2)}}-\hat{C}^{\sst{(3)}}$ is the field strength of the worldvolume self-dual gauge field $B^{\sst{(2)}}$, i.e.
\[
\hat{H}_{mnl}=\tilde{\hat H}_{mnl},\quad \tilde{\hat H}_{mnp}=\frac{1}{3!\sqrt{-g}}\epsilon_{mnprst}\hat{H}^{rst}.
\]
$\hat{C}^{\sst{(6)}}$ and $\hat{C}^{\sst{(3)}}$ are the pullbacks of 11D target-space gauge fields.
One may fix, e.g., the gauge \cite{PST}
\[
\fr{\pa_m a(x)}{\sqrt{-(\pa a)^2}}=\d^0_m .
\]
In this gauge, the action becomes (cf. \cite{Pasti:2009xc})
\begin{eqnarray}
S&=&-\int d^6\xi \, \sqrt{-g}\left[ 1-\fr{1}{24}\hat{H}_{mnl}\hat{H}^{mnl}-\fr{1}{8}(\hat{H}-\tilde{\hat H})_0^{~\bar{m}\bar{n}}(\hat{H}-\tilde{\hat H})_{0\bar{m}\bar{n}}+\dots \right]\nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&&+\int \Big[ \hat{C}^{\sst{(6)}}+\fr12 dB^{\sst{(2)}} \wedge \hat{C}^{\sst{(3)}} \Big],
\label{2apprM5qfixed}
\end{eqnarray}
where $\bar{m}=1,2,...,5$.
The gauge fixed version of the M5 action \rf{2apprM5qfixed} is equivalent (modulo contribution of
the target-space gauge fields) to that of \cite{Ho:2008nn}
(see \cite{Pasti:2009xc} for details).
We go to flat space and set $\hat{C}^{\sst{(6)}}=0,\hat{C}^{\sst{(3)}}=0$. Substituting the $a(\xi)$-equation of motion
into \rf{2apprM5q}, the expanded covariant action
reduces to (cf. \cite{Bergshoeff:1996ev})
\begin{eqnarray}
S
&=&-\int d^6\xi \,\Big[ \sqrt{-g}-\fr{1}{24}\sqrt{-g}H_{mnl}H^{mnl}+\dots \Big] .
\label{2apprM5q2}
\end{eqnarray}
Now the self-dual field strength is related to its gauge potential in the usual way
\begin{eqnarray}
H_{mnl} \equiv (\pa_l B_{mn}+\pa_m B_{nl}+\pa_n B_{lm})
\label{H}\equiv 3\pa_{[m} B_{nl]}.
\end{eqnarray}
The self-duality condition reads
\begin{eqnarray}
H_{mnp}-\tilde{H}_{mnp}=0,\quad
\tilde{H}^{mnp}=\frac{1}{3!\sqrt{-g}}\epsilon^{mnprst}H_{rst}
\label{Hsfd6}
\end{eqnarray}
Let us split $m=(\m,i)$,
\begin{eqnarray}
m=(\m,i)\quad \m=0,1,2 \quad i=1,2,3 \label{is}
\end{eqnarray}
and denote $\xi^m=(x^\m,y^i)$.
In the next two subsections, we implement 3+3 splittings of the $\sqrt{-g}$-part and the $H^2$-part,
arriving at (\ref{6Dstartpoint2mod2}) as a result.
\subsection{3+3 splitting of $ \sqrt{-\det (g_{mn})}$ part}
The analysis of the Nambu-Goto part, i.e., the first term of (\ref{2apprM5q2}) requires use of two ingredients:
the covariant splitting procedure of \cite{Park:2008qe}\cite{Lee:2010ey} and novel gauge fixing. The gauge fixing procedure
has a certain similarity to that of \cite{Bandos:2008fr}.
Define
\begin{eqnarray}
S_{NG}&\equiv &-\int d^6 \xi \sqrt{-\det (g_{mn})}=-\int d^6 \xi \sqrt{-\det (\pa_m X^M \pa_n X_M)} \label{sqrtg}
\end{eqnarray}
where $M=0,\dots,10$ are the target space indices; $m,n=0,\dots,5$ are the indices on the M5-brane worldvolume.
Let us impose partial gauge fixing
\begin{eqnarray}
X^\m=\xi^m \d_m^\m, \quad \m=0,1,2 \label{psg}
\end{eqnarray}
This action can be recast to the form \cite{Park:2008qe}
\begin{eqnarray}
S_{NG}
=\int d^3x\, d^3y \;\sqrt{-h}\Big[-h^{\m\n}D_\m X^M D_\n X_M-\fr14 w^{2}\det V+2w
\Big] \label{3p3z}
\end{eqnarray}
where
\begin{eqnarray}
D_\m X^M \equiv \pa_\m X^M-A_\m^{i} \pa_{i} X^M,\quad i=1,2,3
\label{D0cov}
\end{eqnarray}
and the $V$-term is given by
\begin{eqnarray}
\det V=\fr1{3!}\Big[\epsilon^{i_1,i_2,i_3}\pa_{i_1}X^{M_1}\pa_{i_2}X^{M_2} \pa_{i_3}X^{M_3}\Big]^2 \label{V}
\end{eqnarray}
The field $w$ is auxiliary, and will play an interesting role later.
Substituting $X^\m=\xi^m \d_m^\m$ explicitly, the equations (\ref{3p3z}), (\ref{D0cov}) and (\ref{V}) become respectively
\begin{eqnarray}
S_{NG}
=\int d^3x\, d^3y \;\sqrt{-h}\Big[-h^{\m\n}\eta_{\m\n}-h^{\m\n}D_\m X^{I} D_\n X_{I}-\fr14 w^{2}\det V+2w
\Big] \label{Y3p3z2}
\end{eqnarray}
\begin{eqnarray}
D_\m X^{I} &\equiv & \pa_\m X^{I}- A_\m^{i} \pa_{i} X^{I}, \quad I=1,...,8
\label{YD0cov2}
\end{eqnarray}
and\footnote{The action \rf{3p3z} implies the following field equation for $A_\m^i$,
\begin{eqnarray}
{A}_\n^k= (V^{-1})^{ki}(B^T)_{i \n},\qquad V_{ij}=\pa_i X^M \pa_j X_M,~B_{\m i}=\pa_\m X^M \pa_i X_M
\end{eqnarray}
The field equations for $(w, h_{\m\n})$ that follow from \rf{3p3z} (or, equivalently, from \rf{Y3p3z2}) are
\begin{eqnarray}
w^{-1}=\fr14 \det V, \qquad h_{\m \n}=w^{-1}D_\m X^M D_\n X_M=w^{-1}(\eta_{\m\n}+D_\m X^I D_\n X_I)
\label{tKmnc}
\end{eqnarray}
Upon substituting these into the Lagrangian, one gets the Nambu-Goto form back, as can easily be
checked by noting $-\det(g_{mn})=-\det(D_\m X^M D_\n X_M)\cdot \det (V_{ij})$ \cite{Park:2008qe}.
}
\begin{eqnarray}
\det V=\fr1{3!}\Big[\epsilon^{i_1,i_2,i_3}\pa_{i_1}X^{I_1}\pa_{i_2}X^{I_2} \pa_{i_3}X^{I_3}\Big]^2 \label{YV2}
\end{eqnarray}
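Equivalently, in Nambu-bracket notation (a shorthand we introduce here), defining $\{X^{I_1}, X^{I_2}, X^{I_3}\} \equiv \epsilon^{i_1,i_2,i_3}\,\pa_{i_1}X^{I_1}\pa_{i_2}X^{I_2} \pa_{i_3}X^{I_3}$, one has
\begin{eqnarray}
\det V=\fr1{3!}\, \{X^{I_1}, X^{I_2}, X^{I_3}\}\{X_{I_1}, X_{I_2}, X_{I_3}\},
\end{eqnarray}
which makes manifest the Nambu-bracket structure mentioned in the introduction.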
The field strength of $A_{\m i}$ is defined \cite{Park:2008qe} by
\begin{eqnarray}
\Phi_{\m\n}^i \equiv \pa _\m A_\n^i-\pa _\n A_\m^i
+(\pa _j A_\m^i) A_\n^j-(\pa _j A_\n^i) A_\m^j
\label{phimunu}
\end{eqnarray}
The nonlinear part of $\Phi_{\m\n}^i$
will be responsible for the generation of the $A^3$ term of the Chern-Simons action
in the analysis of $H^2$ in the next subsection.
As it stands, the action possesses such rich dynamics - which must be a general aspect of M-brane dynamics - that
the scalar potential $V$ has a spacetime-dependent coupling ``constant''. We
narrow down to a special sector of the M5 dynamics by choosing $w=const$. This choice may be taken as
part of the reduction procedure.
Let us expand the action \rf{Y3p3z2} around the classical solution to the equations of motion
\begin{eqnarray}
h_{\m\n}=w^{-1}(\eta_{\m\n}+D_\m X^I D_\n X^I)
\end{eqnarray}
from which one obtains
\[
h^{\m\n}=w(\eta^{\m\n}-D^\m X^I D^\n X^I+\dots),
\]
and
\[
-h^{\m\n}(\eta_{\m\n}+D_\m X^I D_\n X^I)=-3w+\dots
\]
The action \rf{Y3p3z2} now takes
\[
S_{NG}=\int \, d^3x\, d^3 y~ w^{-3/2}(1-\fr12 D^\m X^I D_\m X^I+\dots)(-w-\fr14 w^2 \det V+\dots)
\]
\[
=\int \, d^3x\, d^3 y~\fr12 w^{-1/2}(\eta^{\m\n}D_{\m}X^I D_{\n}X^I-\fr12 w\det V-2+\dots)
\]
The constant part, -2, can be omitted from the NG part of the action.
Once $X^{I}$
are re-scaled according to
\begin{eqnarray}
X^{I}\rightarrow w^{-\fr14}X^{I} \label{Xscale}
\end{eqnarray}
the auxiliary field $w$ becomes an overall factor,
\begin{eqnarray}
S_{NG}
&=& w_1\int \, d^3x\, d^3 y~ (\fr12\eta^{\m\n}D_{\m}X^I D_{\n}X^I-\fr14 \det V)
\end{eqnarray}
where we have defined
\begin{eqnarray}
w_1\equiv \fr1{w}
\end{eqnarray}
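As a quick check of the rescaling \rf{Xscale}: the kinetic term carries two powers of $X^I$ while $\det V$ carries six, so
\begin{eqnarray}
\fr12 w^{-1/2}\, D_\m X^I D^\m X_I \;\rightarrow\; \fr12 w^{-1}\, D_\m X^I D^\m X_I, \qquad
-\fr14 w^{1/2} \det V \;\rightarrow\; -\fr14 w^{-1} \det V,
\end{eqnarray}
so that both terms indeed acquire the common prefactor $w^{-1}=w_1$.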
\subsection{3+3 splitting of $H^2$ part}
With the analysis in the previous section, the action \rf{2apprM5q2}, now takes the form
\begin{eqnarray}
S &=&w_1\int \;\Big[\fr12 \eta^{\m\n}D_\m X^I {D}_\n {X}_I-\fr14 \det V-1
\Big] +\int \Big[H_{mnl}H^{mnl}+\dots \Big]
\label{2apprM5q3}\nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
\end{eqnarray}
where the second $\sqrt{-g}$ in \rf{2apprM5q2} has been replaced by one.
(Also the factor $\fr{1}{24}$ has been re-scaled away.)
The reason for this is twofold. Firstly, we are considering a minimal coupling between
the fields $X$ and $H$. Secondly, it ensures that the action after the reduction has at most
two derivatives. Ultimately, we arrive at
ABJM, and ABJM has up to two derivatives. Put differently, if one keeps $\sqrt{-g}H^2$,
one will get ABJM+higher derivative terms after the reduction
(assuming a consistent procedure still exists).
But then one can drop those higher derivatives terms by going to a low energy limit.
Let us carry out 3+3 splitting of the $H^2$ term. The outcome of the analysis is
given below in \rf{LFe3}. Many of the necessary steps
can be found in \cite{Ho:2008nn} but the analysis should be run in reverse, i.e., from an M5 to M2's.
There are two crucial new steps that we have taken.
The first is to identify the field $A_\m^i$ (that has appeared in the ``covariant derivative'' of \rf{D0cov})
with some components of $B_{mn}$. This is in the usual spirit of Kaluza-Klein ans\"atze: the resulting
theory comes to have a reduced solution space, a particular ``sector'' of the original theory.
The second critical step - which is crucial
for the ``non-abelian'' case - is the adoption of the definition
of $\Phi_{\m\n}$ eq.\rf{phimunu} in the present context.
In \cite{Ho:2008nn}, only the first
two terms of $\Phi_{\m\n}$
appeared because obtaining the quadratic part of
the action was the goal. The last two terms are essential
to produce the $A^3$ terms of the Chern-Simons part (hence the expression ``non-abelian''), as will be
discussed in Section 3.
For convenience, we quote here the definition of the field strength $H$ and the self-duality constraint,
\begin{eqnarray}
H_{mnl} \equiv (\pa_l B_{mn}+\pa_m B_{nl}+\pa_n B_{lm}),\quad
\tilde{H}^{mnp}=\frac{1}{3!}\epsilon^{mnprst}H_{rst}
\label{Hq}
\end{eqnarray}
\begin{eqnarray}
H_{mnp}-\tilde{H}_{mnp}=0,\quad
\label{Hsfd6q}
\end{eqnarray}
Define
\begin{eqnarray}
&& B_{\m\n}=-\epsilon_{\m\n\rho}B^\rho,\quad B_{\m j}=A_{\m j},\quad B_{ij}=A_{ij}
\label{BAna}
\end{eqnarray}
$A_{ij}$ is a two-form whereas $A_{\m i}$ is a one-form.
Let us take the following reduction ans\"atze that are ``non-abelian'' generalizations of the
corresponding equations of \cite{Ho:2008nn},
\begin{eqnarray}
&& H_{\m\n\lambda}= -\epsilon_{\m\n\lambda} \pa_\rho B^\rho,\quad
H_{\m\n i}= \Phi_{\m\n i}-\epsilon_{\m\n\lambda} \pa_i B_\lambda \nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&& H_{\m ij}=F_{\m ij},\quad H_{ijk}=F_{ijk}
\label{hphi}
\end{eqnarray}
where
\begin{eqnarray}
\Phi_{\m\n i} \equiv \pa _\m A_{\n i}-\pa _\n A_{\m i}
+(\pa _j A_{\m i}) A_{\n j}-(\pa _j A_{\n i}) A_{\m j}
\label{phimunu_q}
\end{eqnarray}
and
\[
F_{\m jk}\equiv \pa_{\m}A_{jk}-\pa_j A_{\m k}+\pa_k A_{\m j},\;
F_{ijk}=(\pa_k A_{ij}+\pa_i A_{jk}+\pa_j A_{ki}), \; A_{ij}\equiv \epsilon_{ijk} A^k
\]
One can re-express the self-duality constraint and the field equation of $H_{mnp}$;
the complete list of the self-duality and field equation of $H$ in terms of $A$'s is as follows.
The self-duality condition $H_{mnk}=\tilde{H}_{mnk}$ yields
\begin{eqnarray}
\pa^j (F_{\m ij}-\tilde{\Phi}_{\m ij})=0 \label{FmPhit}
\end{eqnarray}
\[
\pa_\m B^\m = -\fr16 \epsilon^{ijk}F_{ijk},\quad
\Phi_{\m\n i}= \epsilon_{\m\n\lambda} \pa_i B^\lambda -\fr12 \epsilon_{\m\n\lambda} \epsilon_{ijk}F^{\lambda jk}
\]
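For instance, the first of these relations follows from the purely external components of the self-duality condition: assuming the decomposition $\epsilon_{\m\n\lambda ijk} = \epsilon_{\m\n\lambda}\, \epsilon_{ijk}$ (up to the overall sign convention chosen for the 6D epsilon), \rf{hphi} and \rf{Hsfd6q} give
\[
-\epsilon_{\m\n\lambda}\, \pa_\rho B^\rho = H_{\m\n\lambda} = \tilde{H}_{\m\n\lambda} = \fr1{3!}\, \epsilon_{\m\n\lambda}\, \epsilon_{ijk}\, F^{ijk}
\quad \Longrightarrow \quad \pa_\m B^\m = -\fr16\, \epsilon^{ijk} F_{ijk}.
\]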
Using these, $\pa_m (H^{mnk})=0$ can be put in the form\footnote{
For the moment we consider the free lagrangian $H^2$. What has been achieved by \rf{HAEOM2} is that
the field equations are obtained in terms of fields with the self-duality integrated.
}
\begin{eqnarray}
\pa_\m \tilde{F}^{\m\n\lambda}+\pa_i \tilde{F}^{\n\lambda i}=0,\quad \pa_\m \tilde{F}^{\m\n k}-\pa_i F^{\n ik}=0,\quad
\pa_\m F^{\m jk}+\pa_i F^{ijk}=0 \label{HAEOM2}
\end{eqnarray}
These equations should enjoy the gauge invariance inherited from the original action (\ref{2apprM5q2}),
namely the 6D diffeomorphisms and the $B$-field gauge transformation. Using part of the gauge transformations
(i.e., the part associated with $B_{ij}$ transformation),
let us set
\begin{eqnarray}
A_{ij}=0 \label{Aijgauge}
\end{eqnarray}
Eq.\rf{HAEOM2} will be supplemented by the
constraint (\ref{FmPhit}).
With the gauge fixing (\ref{Aijgauge}), the first equation of \rf{HAEOM2} becomes $\pa_i \tilde{F}^{\n\lambda i}=0$.
As can be seen by writing it out explicitly, the left-hand side vanishes identically.
The third equation of \rf{HAEOM2} simplifies to
\begin{eqnarray}
\pa_\m F^{\m jk}=0
\end{eqnarray}
yielding a Lorentz type gauge after the reduction.
The second equation of \rf{HAEOM2} can be reproduced by
an action. To see that, let us consider
\begin{eqnarray}
\mathcal{L}_{A}&=& w_2 \Big[- F_{\m ij} F^{\m ij}-2 \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_\m A_{\n i} \pa_j A_{\lambda k}
+\fr{1}2 \epsilon^{\m\n\lambda} \epsilon^{ijk} F_{\m ij}\Phi_{\n\lambda k}\nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&& + \epsilon^{\m\n\lambda} \epsilon^{ijk} A_{\m j}\pa_k (A_\lambda^s \pa_s A_{\n i}-A_\n^s \pa_s A_{\lambda i}) \Big]
\label{LFe3b}
\end{eqnarray}
and compute $A_\m^i$ variation. $w_2$ is a numerical constant to
be determined later. This action should be supplemented by the constraint \rf{FmPhit}.
The $X$-part of the action contains $A_\m^i$ inside the covariant derivative \rf{D0cov}. Here we
focus only on the gauge part of the action, ${\cal L}_A$. An advantage of \rf{LFe3b} over the original $H^2$ form
is that the self-duality condition can be explicitly implemented in \rf{LFe3b}, as we will see below.
Also by going from $B_{\m i}$ base to $A_{\m i}$ base, the gauge field equation of the coupled
system of \rf{2apprM5q3} is obtained by $A_{\m i}$-variation as a fully legitimate procedure.
The variations of the first three terms yield, respectively,
\begin{eqnarray}
\d(- F_{\m ij} F^{\m ij})=-4 \d A_{\m j}\pa_i F^{\m ij} \label{1stterm}
\end{eqnarray}
\begin{eqnarray}
&& \d \left(-2 \epsilon^{\m\n\lambda}\epsilon^{ijk}\pa_\m A_{\n i}\pa_j A_{\lambda k}\right)=\epsilon^{\m\n\lambda}
\epsilon^{ijk}\d A_{\m j} \pa_\n F_{\lambda ki}
+ \epsilon^{\m\n\lambda} \epsilon^{ijk} \d A_{\m j}\pa_k \Phi_{\n\lambda i} \nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&&\hspace{2in}- \epsilon^{\m\n\lambda}
\epsilon^{ijk} \d A_{\m j}\pa_k (A_\lambda^s \pa_s A_{\n i}-A_\n^s \pa_s A_{\lambda i})
\label{2ndterm}
\end{eqnarray}
\begin{eqnarray}
&& \d \left( \fr{1}2 F_{\m ij} \epsilon^{\m\n\lambda} \epsilon^{ijk}\Phi_{\n\lambda k} \right)
=- \d A_{\m j} \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_k \Phi_{\n\lambda i} - A_{\m j}
\epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_k \d \Phi_{\n\lambda i} \label{3rdterm}
\end{eqnarray}
Noting that
\begin{eqnarray}
&& - A_{\m j} \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_k \d \Phi_{\n\lambda i}\nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
=&& \d A_{\m j} \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_\n F_{\lambda ki}- A_{\m j}
\epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_k \d (A_\lambda^s \pa_s A_{\n i}-A_\n^s \pa_s A_{\lambda i})
\end{eqnarray}
one gets
\begin{eqnarray}
&& \d \left( \fr{1}2 F_{\m ij} \epsilon^{\m\n\lambda} \epsilon^{ijk}\Phi_{\n\lambda k} \right)
=- \d A_{\m j} \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_k \Phi_{\n\lambda i}
+ \d A_{\m j} \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_\n F_{\lambda ki}\nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&& \hspace{2in} - A_{\m j} \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_k \d (A_\lambda^s \pa_s A_{\n i}-A_\n^s \pa_s A_{\lambda i})
\label{2ndterm2}
\end{eqnarray}
Variation of the last term of \rf{LFe3b} cancels against terms in \rf{2ndterm2} and \rf{2ndterm}.
Gathering the results above, one gets
\begin{eqnarray}
\d {\cal L}_A= 4w_2\, \d A_{\m j}\left(\pa_\n \tilde{F}^{\n\m j}-\pa_i F^{\m ij}\right)
\end{eqnarray}
This completes the proof of the claim.
The action given in \rf{LFe3b} can be re-expressed as
\begin{eqnarray}
\mathcal{L}_{A}&=& w_2 \Big[- F_{\m ij}\left( F^{\m ij}
-\tilde\Phi^{\m ij}\right)-2\epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_\m A_{\n i} \pa_j A_{\lambda k} \nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&& + \epsilon^{\m\n\lambda} \epsilon^{ijk} A_{\m j}\pa_k (A_\lambda^s \pa_s A_{\n i}-A_\n^s \pa_s A_{\lambda i}) \Big]
\label{LFe3a}
\end{eqnarray}
or, on account of \rf{FmPhit},
\begin{eqnarray}
\mathcal{L}_{A}&=& w_2\Big[-2 \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_\m A_{\n i} \pa_j A_{\lambda k}+2 \epsilon^{\m\n\lambda} \epsilon^{ijk} A_{\m j}
\pa_k (A_\lambda^s \pa_s A_{\n i} )\Big]
\label{LFe3}
\end{eqnarray}
\section{Fuzzy Kaluza-Klein compactification }
The analysis of the previous section puts the action (\ref{2apprM5q3}) in the form
\begin{eqnarray}
&& \int d^3 x d^3 y\; \left\{
w_1\Big[\fr12 D_{\mu}X^I D^{\mu}X_I-\fr{1}4 \Big(\epsilon^{i_1,i_2,i_3}\pa_{i_1}X^{I_1}\pa_{i_2}X^{I_2} \pa_{i_3}X^{I_3}\Big)^2
-1 \Big] \right. \nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&& \left. +
w_2\Big[-2 \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_\m A_{\n i} \pa_j A_{\lambda k}
+2 \epsilon^{\m\n\lambda} \epsilon^{ijk} A_{\m j}\pa_k A_\lambda^s \pa_s A_{\n i} \Big] \right\}
\label{6Dstartpoint2mod2}
\end{eqnarray}
The action \rf{6Dstartpoint2mod2} is a {\em partially} reduced form in the sense that the fields still
depend on all 6D coordinates. The worldvolume is taken as $R^{1,2}\times (S^2\times S^1)$.
The reduction will be completed in this section and it will be shown
that a mechanism or an algorithm exists whereby the action (\ref{6Dstartpoint2mod2}) can be converted
into a BLG or an ABJM-type action.
\subsection{generalities}
One of the central parts of the remaining reduction procedure
is the expansion in terms of the harmonic functions of the internal manifold.
Before we get to the details of the expansion, let us address several
issues that merit discussion.
In Kaluza-Klein compactification,
one considers expansion in the modes of the internal manifold,
\begin{eqnarray}
\phi(x,y)=\sum \phi^a(x)H_a(y) \label{KK}
\end{eqnarray}
where $\phi$ is a generic field in (\ref{6Dstartpoint2mod2}), and $H_a(y)$
is a schematic notation for the spherical harmonics.
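
As a toy numerical illustration of \rf{KK} (our own example, using SciPy's spherical harmonics; it is not part of the reduction itself), a field configuration on $S^2$ at a fixed worldvolume point can be expanded in the $Y_{lm}$, and the mode coefficients recovered by orthogonality:
\begin{verbatim}
# Toy illustration of the mode expansion: a field on S^2 is projected onto
# spherical harmonics and the coefficients are recovered by orthogonality.
import numpy as np
from scipy.special import sph_harm   # Y_l^m(theta, phi); theta = azimuth, phi = polar

n_pol, n_az = 200, 400
pol = (np.arange(n_pol) + 0.5) * np.pi / n_pol     # polar angle, midpoints in (0, pi)
az = np.arange(n_az) * 2 * np.pi / n_az            # azimuth in [0, 2*pi)
AZ, POL = np.meshgrid(az, pol)
dOmega = np.sin(POL) * (np.pi / n_pol) * (2 * np.pi / n_az)

# a test "field" built from two known modes
field = 0.7 * sph_harm(0, 1, AZ, POL) + 0.3 * sph_harm(2, 3, AZ, POL)

# project back onto individual harmonics: coeff_{lm} = \int dOmega Y_{lm}^* field
for l, m in [(1, 0), (3, 2), (2, 1)]:
    coeff = np.sum(np.conj(sph_harm(m, l, AZ, POL)) * field * dOmega)
    print(l, m, np.round(coeff.real, 3))           # ~0.7, ~0.3, ~0.0
\end{verbatim}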
Necessarily, the physics of the M2 branes will be correlated with the properties of the internal manifold, and
the final form of the M2 action depends on the choice of the internal manifold.
A question then arises as to which internal manifold to choose.
The choice largely determines the physics that the reduced action is to describe.
Conversely, if one has a certain M2 brane physics to describe, the internal manifold must be chosen accordingly.
Let us illustrate the point with the current case.
Suppose that we intend the M2 brane system to have a gauge group of $SU(N)$
nature (such as $SU(N)\times SU(N)$) or its infinite extension. Given the relation between $S^2$ and $SU(\infty)$
\cite{de Wit:1988ig}, it is not difficult to infer that the internal manifold should involve $S^2$: we
are naturally led to some type of a fibration over $S^2$.
The simplest fibration is $S^2\times S^1$.\footnote{
One may be concerned that the involvement of $S^1$ - which is one of the worldvolume directions
of the M5 brane - may imply that the resulting non-abelian action may be that of D-branes instead
of fundamental M-branes. This will not be the case. To see that, one may start with an NS5 brane action,
and repeat the analysis of the present paper; most of the analysis would carry over with only
minor modifications. Since an NS5 brane is not related to a D-brane via T-duality, the nonabelian
action should be that of fundamental branes such as M2's or fundamental strings upon further reduction.
}
This choice was also influenced by the work of \cite{Gustavsson:2011mg}. A more complicated choice
such as (Hopf-fibrated) $S^3$ may be possible. It is just that the resulting M2 brane physics would
be more exotic. (For example, unlike $S^2$, $S^3$ is not associated with an
infinite extension of any
Lie algebra to our knowledge.)
Our proposal is to replace the harmonics $Y_{lm}$ by their fuzzy counterparts $T_{lm}$, and subsequently
$T_{lm}$ by the $SU(N)$ gauge generator ${\cal T}^a$. The replacement would lead to ordering
ambiguities in the action since $T_{lm}$'s do not commute. The ambiguity
is not present for the quadratic terms because $\int d^3y$ will be replaced by ${\rm Tr} $, the trace over the gauge generators.
We require (and expect) as part of our proposal that the ambiguity should be (mostly) resolved
by symmetries - such as gauge symmetry and supersymmetry - that are expected for an M2-brane system.
These symmetries, together with the index structures of the fields, will greatly reduce the number of
possibilities. We believe that they will
determine the final theory uniquely, at least, for the current case: it seems implausible
to have two different theories that share the kinetic terms and the symmetries but differ
in the orders of the fields that appear in the
interactions.
\subsection{complexification leading to ABJM type theories}
We now turn to specific cases:
reduction to ABJM type actions. First, we take a reduction procedure that leads to ABJM theory.
Evidently there are some manipulations required to proceed
from (\ref{6Dstartpoint2mod2}) to ABJM action.
There exist two main differences between (\ref{6Dstartpoint2mod2}) and
ABJM action. The first is that the action
of (\ref{6Dstartpoint2mod2}) is in terms of real fields whereas ABJM action is in terms of
complex fields. This implies that some kind of
complexification is required.
(Indeed, complexification is a very important step; it will bring in the product $U(N)$ gauge group, as will be discussed shortly.)
The second difference lies in the gauge symmetries.
The action (\ref{6Dstartpoint2mod2}) has gauge symmetry that is inherited from the original
action (\ref{2apprM5q2}). (Part of the gauge symmetry has been used for partial gauge
fixing.) Therefore, what seems
required is replacement of $Y_{lm}$ by the group generator ${\cal T}^a$.
In fact, the first and the second requirements feed off of each other.
(The action (\ref{6Dstartpoint2mod2}) differs from ABJM in that it has derivatives along the internal
directions. This difference is related to the second difference, and will be dealt with by further reduction.)
The real action (\ref{6Dstartpoint2mod2}) can be expanded in terms of the spherical harmonics, which are complex.
The replacement is not without subtleties associated with the reality of the action. When the action is ``Fourier'' expanded,
the reality is ensured
by imposing certain constraints on the ``Fourier coefficients''. However, the same constraints no longer
work after the replacement by the group generators. To remedy the problem, we propose viewing the fields in (\ref{6Dstartpoint2mod2})
as complex and explicitly adding complex conjugate terms to the action.
With the proposed ``complexification", the fields in (\ref{6Dstartpoint2mod2}) can be put into the form of \rf{KK}.
Since the matrix valued fields in ABJM action do not commute, the regular spherical
harmonics $Y_{lm}$ should be made non-commuting somehow. Given the work of \cite{hoppe}\cite{de Wit:1988ig}, the
natural step for this would be to replace $Y_{lm}$
by its fuzzy version, $T_{lm}$, and then to replace $T_{lm}$ by the gauge group generator ${\cal T}^a$ afterwards.
The use of $T_{lm}$ means that the two-sphere $S^2$ is actually fuzzy.
We will discuss ordering ambiguities as they arise.
\vspace{.3in}
There is a relatively simple complexification procedure.
In this procedure, we replace the gauge field
according to
\begin{eqnarray}
A_\m^i \rightarrow -i{\cal A}_\m^i+i\hat{{\cal A}}_\m^i
\label{AtocA}
\end{eqnarray}
and treat ${\cal A}_\m^i$ and $\hat{{\cal A}}_\m^i$ independently.
Let us consider complexification of
the scalar part first and the gauge part afterwards.
Due to $SO(8)$ triality, one can regard $X^I$ in (\ref{6Dstartpoint2mod2})
as complex. (See the discussion towards the end for more details.)
The next step is to replace $Y_{lm}$ with $T_{lm}$. As soon as one
considers the replacement, one faces a few subtleties, some of which have been pointed out above.
One subtlety is that not only does the ``bare" $Y_{lm}$ appear but so does $Y_{lm}$ with
derivatives along the internal directions. Recalling the
works of \cite{hoppe} and \cite{de Wit:1988ig} and utilizing some of the results of \cite{Gustavsson:2011mg}
provide a hint as to what should be done to the derivatives.
The authors of \cite{hoppe} and \cite{de Wit:1988ig} considered the fuzzy version
of $Y_{lm}$, denoting it by $T_{lm}$, and defined the
corresponding structure constants by considering $[T_{lm}, T_{l'm'}]$ with $l\leq N-1$. The structure constants defined
this way approach the structure constants
of the Nambu bracket $\{Y_{lm}, Y_{l'm'}\}$ when $N\rightarrow \infty$.
This result naturally suggests that the Nambu bracket of the $Y_{lm}$'s should be replaced by a commutator
of the gauge generators.
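
This statement is easy to illustrate concretely at the lowest level. The following sketch (our own illustration; conventions and normalizations are schematic) constructs the spin-$j$ generators of $su(2)$ in the $N=2j+1$ dimensional representation -- the $l=1$ fuzzy harmonics $T_{1m}$ are, up to normalization, linear in these generators, while the higher $T_{lm}$ are symmetrized polynomials in them \cite{hoppe}\cite{de Wit:1988ig} -- and verifies numerically that their commutators close on $su(2)$ for any $N$:
\begin{verbatim}
# Spin-j generators in the N = 2j+1 dimensional irrep of su(2); the l=1 fuzzy
# harmonics are proportional to these, higher T_{lm} are symmetrized polynomials.
import numpy as np

def su2_generators(N):
    j = (N - 1) / 2
    m = np.arange(j, -j - 1, -1)                     # m = j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((N, N), dtype=complex)             # raising operator J_+
    for k in range(1, N):
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz   # Jx, Jy, Jz

for N in (2, 4, 16, 64):
    Jx, Jy, Jz = su2_generators(N)
    err = np.linalg.norm(Jx @ Jy - Jy @ Jx - 1j * Jz)
    print(N, err)             # ~1e-13: [Jx, Jy] = i Jz holds for every N
\end{verbatim}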
Replacement of $Y_{lm}$ by $T_{lm}$ is an intermediate step: $T_{lm}$ should be replaced by the gauge
generator ${\cal T}^a$. More specifically, we propose
\begin{eqnarray}
X_{lm}^{I} T^{lm} &\rightarrow & X_a^I ({\cal T}^a)_{\b\g}\rightarrow X^I_{\b\g}
\end{eqnarray}
where the adjoint generator ${\cal T}^a$ is that of the first factor of $SU(N)\times SU(N)$; there are analogous relations
for the complex conjugate fields with the gauge generators belonging to the second $SU(N)$.
Note that with the intermediate step $ X_a^I ({\cal T}^a)_{\b\g}$, the $(\b,\g)$ indices are associated with
a single $SU(N)$. However, the redefined field $X^I_{\b\g}$ may be viewed as a bi-fundamental representation
of $SU(N)\times SU(N)$, with each of $(\b,\g)$ corresponding to one factor of $SU(N)$.
The proposed reduction for the $V$-term can be based on the result just stated. The $V$-term is of the three-bracket form with internal
derivatives acting on $X^I$'s. Assuming $S^2\times S^1$ as the internal manifold, one can
show \cite{Gustavsson:2011mg} that the three-bracket of the potential $V$ can be re-expressed in terms
of the two-bracket.
Based on this, we replace $Y_{lm}$ (or its fuzzy version $T_{lm}$) by the gauge generator ${\cal T}^a$,
\begin{eqnarray}
\Big(\epsilon^{i_1,i_2,i_3}\pa_{i_1}X^{I_1}\pa_{i_2}X^{I_2} \pa_{i_3}X^{I_3}\Big)^2
\rightarrow {\rm Tr} ([X^{I_1},X^{I_2},X^{I_3}][\bar{X}_{I_1},\bar{X}_{I_2},\bar{X}_{I_3}])
\label{Vinga}
\end{eqnarray}
with an appropriate numerical constant in front. This is the same as the potential term of ABJM.
Therefore, one arrives at a lagrangian that is the same as the corresponding part of
ABJM theory.
Now let us turn to the gauge part.
What directly came out of the Kaluza-Klein analysis in Section 2 was the real-field based lagrangian.
Let us view $A_\m^i$ as a complex field
renamed ${\cal A}_\m^i$, and explicitly add a complex conjugate part to enforce the reality:
\begin{eqnarray}
{{\cal L}}_{Ac}&=& -2w_2 \epsilon^{\m\n\lambda} \epsilon^{ijk} \pa_\m {\cal A}_{\n i} \pa_j {\cal A}_{\lambda k}
+2w_2 \epsilon^{\m\n\lambda} \epsilon^{ijk} {\cal A}_{\m j}\pa_k ({\cal A}_\lambda^s \pa_s {\cal A}_{\n i} ) \nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&& \hspace{4.4cm}
+c.c.
\label{LFe3c}
\end{eqnarray}
Here, in the gauge (\ref{Aijgauge}), the complexified analogue of $F_{\m ij}$ is
\begin{eqnarray}
{\cal F}_{\m ij}\equiv -\pa_i {\cal A}_{\m j}+\pa_j {\cal A}_{\m i}
\end{eqnarray}
As a matter of fact, the replacement \rf{AtocA} in the gauge part of the action produces cross terms.
Since the integration $\int d^3 y$ will eventually be replaced by the trace over the product gauge group, those
terms will contain a trace over a single gauge generator and therefore vanish. For this reason, they have
been omitted from \rf{LFe3c}.
As mentioned above, the ordering ambiguities should be resolved based on the
anticipated symmetries. More specifically, for the quadratic terms in (\ref{6Dstartpoint2mod2}), there is
no ambiguity since there is trace over the internal space. For the ${\cal A}$-cubic terms, the ordering causes
overall sign ambiguities. It should be possible to fix the sign based on various symmetries such as gauge
symmetry and/or supersymmetry.
For the sextic potential, again, it would be an overall sign issue which would be resolved in a manner similar to
that of the cubic terms.
\noindent For further reduction, consider
\begin{eqnarray}
{\cal A}_\m^i(x^\m,y^j) = {\cal A}_\m(x^\m,y^{\hat{j}})\; e^{i\b_i y^3},\quad \hat{j}=1,2
\end{eqnarray}
where the $\b_i$'s are constants to be determined. With this ansatz, one can show that the quadratic term reduces as
\begin{eqnarray}
-2 \epsilon^{\m\n\lambda}\epsilon^{ijk}\pa_\m {\cal A}_{\n i}\pa_j {\cal A}_{\lambda k}
& = & -2i(\b_1-\b_2)e^{i(\b_1+\b_2)y^3} \,\epsilon^{\m\n\lambda} \Big[ \pa_\m{\cal A}_{\n } {\cal A}_{\lambda } \Big]
\label{CS1}
\end{eqnarray}
Let us take
\begin{eqnarray}
\b_1+\b_2=0
\end{eqnarray}
The cubic term requires more lengthy, although straightforward, algebra; one can show
\begin{eqnarray}
&& 2 \epsilon^{\m\n\lambda} \epsilon^{ijk} {\cal A}_{\m j}\pa_k ({\cal A}_\lambda^s \pa_s {\cal A}_{\n i})\nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
=&&
\;\; 2i\Big[\b_1 e^{i\b_1y^3}-(\b_1+\b_3)e^{-i(\b_1-2\b_3)y^3}\Big]\epsilon^{\m\n\lambda} {\cal A}_\m {\cal A}_\lambda \pa_1{\cal A}_\n \nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&&
+2i\Big[\b_1 e^{-i\b_1y^3}-(\b_1-\b_3)e^{i(\b_1+2\b_3)y^3}\Big] \epsilon^{\m\n\lambda} {\cal A}_\m {\cal A}_\lambda \pa_2{\cal A}_\n \nonumber} \def\bd{\begin{document}} \def\ed{\end{document}\\
&& +\;\; 2\b_1\b_3 e^{i\b_3 y^3}\, \epsilon^{\m\n\lambda} {\cal A}_\m\; {\cal A}_\n {\cal A}_\lambda
\label{rct}
\end{eqnarray}
By making, e.g., the following choices
\begin{eqnarray}
\b_1=1=-\b_2,\quad \b_3=\fr32
\end{eqnarray}
the first two terms in \rf{rct} are removed by $\int_0^{2\pi} dy^3$ whereas the third term becomes
\begin{eqnarray}
4i w_2 \epsilon^{\m\n\lambda} {\cal A}_\m\; {\cal A}_\n {\cal A}_\lambda .
\end{eqnarray}
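These $y^3$ integrals are elementary and can be checked directly. The following short sympy sketch (our own check, with the phase factors read off from \rf{CS1} and \rf{rct}) reproduces the $2\pi$ of the quadratic term and the $4i$ above, up to the overall factor of $w_2$:
\begin{verbatim}
# Check of the y^3 phase integrals with b1 = 1 = -b2, b3 = 3/2.
import sympy as sp

y = sp.symbols('y', real=True)
b1, b2, b3 = sp.Integer(1), sp.Integer(-1), sp.Rational(3, 2)

# phase of the quadratic term: -2i(b1 - b2) e^{i(b1 + b2) y}
quad = -2*sp.I*(b1 - b2)*sp.exp(sp.I*(b1 + b2)*y)
# phases of the three independent structures in the cubic term
cub1 = b1*sp.exp(sp.I*b1*y) - (b1 + b3)*sp.exp(-sp.I*(b1 - 2*b3)*y)
cub2 = b1*sp.exp(-sp.I*b1*y) - (b1 - b3)*sp.exp(sp.I*(b1 + 2*b3)*y)
cub3 = 2*b1*b3*sp.exp(sp.I*b3*y)

for phase in (quad, cub1, cub2, cub3):
    print(sp.simplify(sp.integrate(phase, (y, 0, 2*sp.pi))))
# -8*I*pi, 0, 0, 4*I: the first two cubic structures drop out, the last survives
\end{verbatim}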
Putting the results for the quadratic and cubic terms together, the action \rf{LFe3c}
becomes, after reduction,
\begin{eqnarray}
{\cal L}_{Ac} =-4 i w_2 \,\epsilon^{\m\n\lambda} \left(2\pi \pa_\m{\cal A}_{\n } {\cal A}_{\lambda }
-{\cal A}_\m\; {\cal A}_\n {\cal A}_\lambda \right)+c.c. \label{abjmCSp}
\end{eqnarray}
Each field ${\cal A}^\m$ above can be expanded in terms of the fuzzy spherical harmonics,
\begin{eqnarray}
{\cal A}_\m= {\cal A}_\m^{lm}T_{lm}
\end{eqnarray}
The final step is to replace ${\cal A}_\m^{lm}T_{lm}$ by ${\cal A}_\m^a {\cal T}^a$
\begin{eqnarray}
{\cal A}_{lm}^\m T_{lm}&\rightarrow & {\cal A}_{a}^\m {\cal T}^a
\end{eqnarray}
arriving at the corresponding part of
ABJM action up to the numerical coefficients. The numerical
coefficients can be adjusted by noting that \rf{abjmCSp} can be rewritten as
\begin{eqnarray}
{\cal L}_{Ac} =-16\pi i w_2 \,\epsilon^{\m\n\lambda} \left(\fr12 \pa_\m{\cal A}_{\n } {\cal A}_{\lambda }
-\fr{g}{4\pi} {\cal A}_\m\; {\cal A}_\n {\cal A}_\lambda \right)+c.c.
\end{eqnarray}
where $g$ is a constant. This is possible by making a re-scaled reduction ansatz in
Section 2.2; therefore, it does not affect the ${\cal A}$ field in the covariant derivative.
Now set
\begin{eqnarray}
g= -\fr{4\pi i}{3},\quad w_1=k,\quad w_2=i\fr{k}{16 \pi }
\end{eqnarray}
where $k$ is the Chern-Simons level parameter.
Finally, we briefly note that there is a slightly different complexification that one can take.
The final outcome should be equivalent to ABJM theory up to a trivial gauge field redefinition.
Starting with \rf{LFe3c}, the complexification of the scalar part may be done differently, with the gauge part of the analysis unmodified.
One can derive a complex version of the scalar part of (\ref{6Dstartpoint2mod2}) using some of the
ingredients in \cite{Gustavsson:2011mg}. (The {\em complexified} version will then be the resulting complex action plus
its complex conjugate.)
Let us introduce
\begin{eqnarray}
Y^M=\left(
\begin{array}{c}
Y^\m \\
Y^{\a}
\end{array}
\right) \quad \mbox{with} \quad Y^\m=X^\m, \quad Y^\a=\left(
\begin{array}{c}
Z^A \,e^{i\s}\\
Z_A \,e^{-i\s}
\end{array}
\right) \label{xz}
\end{eqnarray}
where $\a$ is an $SO(8)$ spinor index and $A$ is an $SU(4)$ index.
Due to the $SO(8)$ triality, \rf{sqrtg} can be re-expressed \cite{Gustavsson:2011mg}
as\footnote{This can be seen by the definition \rf{xz} and
using $Y_\a=(Y^\a)^*$ which was stated in \cite{Gustavsson:2010ep}.}
\begin{eqnarray}
S_{NG}&\equiv &-\int d^6 \xi \sqrt{-\det (\pa_m Y^M \pa_n Y_M)} \label{Yaction}
\end{eqnarray}
Following steps analogous to those of the real case above, one gets
\begin{eqnarray}
S_{NG}
=w^{-1}\int d^3x\, d^3y \;\Big[\eta^{\m\n}D_\m Z^A D_\n \bar{Z}_A-\fr14 \det V-1
\Big] \label{3p3resc}
\end{eqnarray}
where
\begin{eqnarray}
D_\m Z^A \equiv \pa_\m Z^A-{\cal A}_\m^{i} \pa_{i} Z^A
\label{ZD0cov}
\end{eqnarray}
and
\begin{eqnarray}
V=\fr1{3!}\Big[\epsilon^{i_1,i_2,i_3}\pa_{i_1}Z^{A_1}\pa_{i_2}Z^{A_2} \pa_{i_3}Z^{A_3}\Big]^2 \label{ZV}
\end{eqnarray}
\section{Conclusion}
So far in the literature, the popular approach (see e.g., \cite{Ho:2008nn}) to relating
the action of a single M5 brane and that of multiple M2 branes has been through a Myers-type
effect.\footnote{The only exception that we are aware of is \cite{Bandos:2008fr} where
the starting point is an M5 action. }
In that approach, one builds up an action of a single M5 brane starting from a non-abelian
M2 brane action. The Kaluza-Klein procedure of this paper - which can be viewed
as the reverse procedure of a generalized Myers' effect - offers an alternative perspective.
What we have established is a mathematical procedure that corresponds to
``cutting" the M5 brane into pieces with each piece being an M2 brane.
The mathematical procedure is to compactify part of the M5 brane
worldvolume and discretize it by introducing ``non-commutativity" in the compactified worldvolume.
To this end, we have started with the covariant M5 action constructed in \cite{Bandos:1997ui}\cite{Aganagic:1997zq}, and have
carried out an elaborate Kaluza-Klein reduction taking $S^2\times S^1$ as the internal manifold. To
make a connection with the (infinite) $SU(N)$ algebra, the $S^2$ part has been taken to be fuzzy.
The fact
that it takes an elaborate reduction procedure should reflect that the resulting
theory describes a narrow and special sector of the M2 brane dynamics, and even more so of the original M5 brane dynamics.
The choice of $S^2\times S^1$ was for simplicity.
One may consider an internal manifold with different $S^1$ fibering over $S^2$.
In general, a different fibering would lead to a different theory once the $S^1$ direction is reduced.
In the case where only the {\em lowest} Kaluza-Klein mode
is kept (i.e., reduction followed by truncation), there might be a corresponding reduction
for each fibering that yields the same reduced theory. Settling this issue completely will require more work.
Let us pause to contemplate a potentially interesting issue.
As we have shown in this letter, ABJM theory can be constructed by reducing a single M5 action
on $S^2\times S^1$.
There are characteristics of the {\em reduced theory}, i.e., ABJM theory, that provide hints as to its origin.
It has a fuzzy $S^2$ solution that displays features
of $S^3$ (which can be constructed through Hopf fibration over $S^2$) as opposed to the simpler $S^2 \times S^1$.
With regard to this difference (i.e., $S^3$ vs $S^2\times S^1$), there seem to be two possibilities.
The first possibility is that one should actually consider $S^3$ reduction of a single M5; there may be
a different, more complicated reduction procedure that leads to ABJM theory.
The second possibility is that the appearance of $S^1$ Hopf fibration over $S^2$ may be associated with
one of the limitations of ABJM theory.
In other words, the appearance of the Hopf fibration may be a peculiarity, potentially due to the inability
of the reduced theory to describe the full physics of the original theory.
Given the profound relation between gauge degrees of freedom and
gravity degrees of freedom, study of 11D supergravity
solutions would be useful to settle this issue.
A stable $S^3$ compactification of 7D gauged N=2 Supergravity does not exist (see e.g. \cite{Pernici:1984nw}),
while 6D gauged N=2 Supergravity admits a stable $S^2$ compactification \cite{Salam:1984cj}.
Since those 7D and 6D supergravities are related by $S^1$ reduction, it follows that the 7D supergravity
admits a stable compactification on $S^2 \times S^1$. This seems consistent with our result, and points towards the second possibility above.
As briefly mentioned in the introduction, part of our motivation comes from \cite{Park:2011bg}, where a certain
non-abelianization of the Green-Schwarz open string action was proposed. Some of the ingredients of the current
work will be useful for providing foundations for (or even a first-principle derivation of) the proposal.
To that end, two things must be done. First, in the current work we kept only a few leading terms
in the derivative expansion; that truncation must be avoided in order to obtain a nonabelian string action starting from an M5.
Secondly, the current work should be generalized to a curved background.
These are among the near-future directions that we are taking.
\vspace{1in}
\noindent {\bf Acknowledgements}\\
Work of AN is supported in part by the Joint DFFD-RFBR Grant \# F40.2/040.
He thanks I. Bandos and D. Sorokin for valuable discussions.
IP thanks the members of CQUeST for their hospitality during his stay.
This work greatly benefited from
discussions with J.H. Park. He also thanks P.M. Ho for his hospitality during the visit to
National Center for Theoretical Sciences, Taipei, Taiwan.
\newpage
\section{Introduction}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{fig1-eps-converted-to.pdf}
\caption{An example of P-A INS. The topic (query) is ``Max is holding phone''. Person INS branch searches video shots concerning the target person, and action INS branch searches video shots about the target action. The number under each shot (represented by a keyframe) gives the relevant INS score. The ranking list of shots is obtained by combining person and action INS results. The red bounding boxes mark target shots with specific person doing specific action\protect\footnotemark[1].}
\label{fig1}
\end{figure}
\footnotetext[1]{All figures in this paper are best viewed in the color version.}
\IEEEPARstart{W}{ith} the rapid development of multimedia technology in recent years, various forms of videos have flooded into our lives. Finding specific targets in massive video collections, i.e., video instance search (INS), is becoming increasingly important. For example, in surveillance videos \cite{10.1109/TCSVT.2020.2977427,10.1109/TCSVT.2019.2909549,9336268}, police officers need to locate key video clips of specific suspects; in sports videos \cite{4469885,10.1109/TCSVT.2009.2035833,tsagkatakis2017goal}, fans want to browse shots of their favorite players or events; in movies \cite{Huang_2018_ECCV,8604143,10.1007/978-981-15-7345-3_10,8556097,4409105,10.1109/TCSVT.2015.2475835,Bojanowski_2013_ICCV}, audiences are interested in watching shots of their idols or romantic actions. Compared with surveillance and sports videos, movie videos have a wider audience, more complex scenarios, and more diverse filming techniques. Therefore, INS for movies is particularly important and challenging.
Early INS research in movies mainly focused on a single target, i.e., mono-semantic INS, such as finding a specific object \cite{10.1109/TCSVT.2020.2966541,10.1109/TCSVT.2017.2667710,nguyen2019video}, location \cite{tahara2010nicoscene,7399428}, person \cite{Huang_2018_ECCV,8604143,10.1007/978-981-15-7345-3_10}, or action \cite{4409105,10.1109/TCSVT.2015.2475835,Bojanowski_2013_ICCV}. Recently, researchers have started to investigate the more challenging combinatorial-semantic INS, which aims at retrieving specific instances with multiple attributes simultaneously. Representative works in this field include Person-Scene (P-S) INS and Person-Action (P-A) INS. The former aims at finding shots of a specific person in a specific scene, while the latter aims at finding shots of a specific person performing a specific action. Compared with P-S INS, P-A INS pays additional attention to the identity consistency between person and action, making it a more challenging combinatorial-semantic INS problem.
In movie videos, persons and actions can be combined freely,
making the content of P-A INS pairs ever-changing. Hence, a P-A instance pair cannot be treated as a single entity for search.
Existing methods \cite{2019nii,2019whu,yangwhu,2019pku,pengpku_wict} often adopt two separate technical branches for person INS and action INS (as shown in Fig. \ref{fig1}). Specifically, in the person INS branch, face\footnote[2]{In movies or TV shows, person INS is usually achieved by face detection and recognition, because faces keep a robust appearance across different scenes. In this paper, we use the word ``face'' instead of ``person'' when describing the details of person INS, including face detection, face identification, face score, and face bounding box.} detection and identification are conducted to compute ranking scores of video shots concerning the target person. In the action INS branch, action recognition is conducted to compute ranking scores of video shots about the target action. Thereafter, the two-branch INS scores are directly fused to generate the final ranking result. However, direct aggregation of scores cannot guarantee the identity consistency between person and action. For example, in Fig. \subref*{fig2a}, given ``Bradley is standing'' and ``Danielle is carrying bag'', the system \cite{2019whu} mistakes the shot for ``Bradley is carrying bag'' since the person ``Bradley'' and the action ``carrying bag'' appear simultaneously; a similar case occurs in Fig. \subref*{fig2b}, where ``Pat is standing'' and ``Ian is sitting on couch'' are misunderstood as ``Pat is sitting on couch''. We call this the \emph{identity inconsistency problem} (IIP). According to the statistics in Section \ref{section:III-A}, erroneous shots with inconsistent identities account for 23.44\% and 22.35\% of all erroneous shots in the TRECVID 2019 and 2020 INS tasks, respectively, showing the seriousness of the IIP in P-A INS in movies.
\begin{figure}[!t]
\centering
\subfloat[``Bradley is carrying bag'']{\includegraphics[width=1.7in]{fig2_a_-eps-converted-to.pdf}%
\label{fig2a}}
\hfil
\subfloat[``Pat is sitting on couch'']{\includegraphics[width=1.7in]{fig2_b_-eps-converted-to.pdf}%
\label{fig2b}}
\caption{Examples of IIP in P-A INS. The blue and green boxes mark target person and action, respectively.}
\label{fig2}
\end{figure}
To address the above problem, we propose a spatio-temporal identity verification method. The core idea stems from an intuitive observation that an identity-consistent face and action usually share an overlapping spatial region of their respective detection bounding boxes. Therefore, we propose an identity consistency verification (ICV) scheme to compute the spatial consistency degree between face and action detection results in the spatial dimension. A higher spatial consistency degree means a larger overlapping area between the face and action bounding boxes, and thus a higher likelihood that the face and action belong to the same person.
Furthermore, we find many face and action detection failures due to complex scenarios, such as non-frontal filming or object occlusion, which hinder ICV from obtaining the basic detection information. For example, the face detector fails to find ``Billy'' in the middle clips in Fig. \subref*{fig3a} because of non-frontal or occluded faces; similar issues appear in detecting actions like ``Holding glass'' and ``Holding phone'' in Fig. \subref*{fig3b}. Luckily, missing shots in this situation can be salvaged. Considering the continuity of video frames in a shot, if the same face/action is detected in two frames separated by an interval, it should also appear in the intermediate frames. Thus, in the temporal dimension, we propose an inter-frame detection extension (IDE) operation to share detection information across frames, and thus improve the performance of the ICV step.
\begin{figure}[!t]
\centering
\subfloat[Non-frontal faces (top row) and occluded faces (bottom row)]{
\begin{minipage}[t]{1.0\linewidth}
\centering
\includegraphics[width=\linewidth]{fig3_a-1_-eps-converted-to.pdf}
\includegraphics[width=\linewidth]{fig3_a-2_-eps-converted-to.pdf}
\end{minipage}%
\label{fig3a}
}
\hfil
\subfloat[Non-frontal actions (top row) and occluded actions (bottom row)]{
\begin{minipage}[t]{1.0\linewidth}
\centering
\includegraphics[width=\linewidth]{fig3_b-1_-eps-converted-to.pdf}
\includegraphics[width=\linewidth]{fig3_b-2_-eps-converted-to.pdf}
\end{minipage}%
\label{fig3b}
}
\caption{Examples of detection failures in (a) person INS and (b) action INS. The blue and green solid bounding boxes mark successful face and action detection results, and the blue and green dotted bounding boxes mark failed face and action detection results, respectively.}
\label{fig3}
\end{figure}
\textbf{Contributions.} The main contributions of this paper are as follows:
\begin{itemize}
\item We study the IIP in the combinatorial P-A INS, and quantitatively investigate its severity with mainstream person and action INS techniques.
\item We propose a spatio-temporal identity verification method to address the IIP. In the temporal dimension, IDE shares the detection information in successive frames to remedy face and action detection failures. In the spatial dimension, ICV checks identity consistency between person and action by computing their spatial consistency degree.
\item We verify the effectiveness of the proposed method on the large-scale TRECVID INS dataset. Its performance surpasses that of the second-place results in both the TRECVID 2019 and 2020 INS tasks.
\end{itemize}
\section{RELATED WORKS}
Since this paper focuses on the combinatorial P-A INS, we review the existing works from three aspects, i.e., person INS, action INS and fusion strategy.
\subsection{Person INS}
Person INS in videos aims to find shots containing a specific person in a gallery video corpus, which is also termed person re-identification. Most previous research works on person re-identification focus on surveillance videos, where clothing, rather than faces, is the more robust cue for identity discrimination \cite{10.1109/TCSVT.2020.2977427,10.1109/TCSVT.2019.2909549,9336268}. But in movies, due to the large number of close-up shots and frequent clothing changes, faces are more stable than clothing for person re-identification. Therefore, most existing works in movies mainly use face detection
and face recognition algorithms for person INS \cite{Huang_2018_ECCV,8604143,10.1007/978-981-15-7345-3_10}.
In this paper, we choose RetinaFace \cite{deng2019retinaface} and ArcFace \cite{deng2019arcface} to achieve person INS in movies. RetinaFace is a single-stage face detection algorithm with high efficiency, and it has proved robust in detecting faces at large angles. ArcFace is a face recognition algorithm with low complexity and high training efficiency, whose additive angular margin corresponds exactly to the geodesic distance on the hypersphere.
\subsection{Action INS}
Existing research on action INS mainly relies on action recognition or detection technology \cite{2019nii,2019whu,yangwhu,2019pku,pengpku_wict}.
The difference between them is that the former only recognizes the category of the action, whereas the latter also provides bounding boxes locating the action. This paper focuses on action detection.
According to the implementation strategy, action detection can be broadly divided into image-based and video-based methods. The former is mainly designed for actions with obvious interactive objects but without rigorous temporal causality, for example, ``holding glass'', ``carrying bag'', and ``riding bicycle''. This corresponds to a specialized action detection task, i.e., human-object interaction (HOI) detection \cite{Qi_2018_ECCV,gao2018ican,Li_2019_CVPR,liao2020ppdm}, which aims to recognize the action (interaction) category and, meanwhile, locate the human and object bounding boxes in images.
The latter targets actions with rigorous temporal causality, for example, ``open door and enter'', ``open door and leave'', and ``go up or down stairs''. Hence, it usually works on multiple successive video frames. Representative action detection algorithms in videos are \cite{Wu_2019_CVPR,Li_2020_ECCV,ulutan2020actor,Tang_2020_ECCV}.
In this paper, we choose parallel point detection and matching for real-time human-object interaction detection (PPDM) \cite{liao2020ppdm} and actor conditioned attention maps for video action detection (ACAM) \cite{ulutan2020actor} for action detection on the image level and video level, respectively.
PPDM realizes action detection through parallel point detection and matching branches, based on the idea of anchor-free detection. It is a single-stage, real-time framework, which makes it suitable for large-scale datasets. ACAM adopts an attention module to rank each spatio-temporal region's relevance to a detected actor, which is suitable for observing complex events with multiple actors, and it is a near real-time algorithm.
\subsection{Fusion strategy}
For combinatorial-semantic P-A INS, the difficulty lies in how to combine the results of the different branches. Most existing studies adopt a strategy of retrieving the two instances separately and then aggregating the individual scores in some way \cite{2019nii,2019whu,yangwhu,2019pku,pengpku_wict}. For example, NII fuses the scores of person INS and action INS by direct weighted summation \cite{2019nii}. Instead, WHU proposes a stepwise strategy of searching for the action based on a candidate person list: it first builds an initial candidate person shot list with person INS scores, and then sorts the list according to action INS scores \cite{2019whu,yangwhu}. PKU adopts a strategy of searching for the person based on a candidate action list \cite{2019pku,pengpku_wict}.
However, as discussed in the previous section, direct aggregation of individual person INS and action INS results without checking their identity consistency may incur serious IIP.
To solve this problem, Le \textit{et al.} \cite{2020nii} propose a heuristic method. They calculate the distance between the target face and the desired object, and assume that a shorter distance indicates a stronger association between the person and the action involving that object. The method judges identity consistency only indirectly, through the distance between the related object and the target face, which cannot sufficiently establish the identity consistency of the target face and the specific action. Moreover, the method relies on object detection, which means that it does not work for actions without an obvious interactive object, e.g., ``walking'', ``standing'', and ``talking''.
In this paper, we propose a spatio-temporal identity verification method for P-A INS. Different from \cite{2020nii}, we observe that an identity-consistent face and action usually share an overlapping spatial region of their respective detection bounding boxes. Based on this finding, the identity consistency of a P-A pair can be determined directly, without additional dependence on objects. Hence, the method can be applied to both HOI and object-free actions. Moreover, it is compatible with the existing two-branch framework and can flexibly enhance existing methods as a plug-and-play module.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\linewidth]{fig4-eps-converted-to.pdf}
\caption{The overlap area over the face bounding box area. The blue and green solid bounding boxes mark the detected faces and actions. The red dotted bounding boxes mark the overlap area of face and action bounding boxes. The overlapping degree is equal to the area of red bounding boxes divided by the corresponding blue face area.}
\label{fig4}
\end{figure}
\begin{table*}[!t]
\centering
\caption{Percentage of Erroneous Shots with Inconsistent Identities in TRECVID 2019 Ins Task}
\label{tab:1}%
\resizebox{\textwidth}{!}{
\setlength{\tabcolsep}{0.6mm}{
\renewcommand\arraystretch{1.2}
\begin{tabular}{c|ccccccccccccccccccccccc|c}
\hline
\textbf{Topic ID} & \textbf{9249} & \textbf{9250} & \textbf{9251} & \textbf{9252} & \textbf{9253} & \textbf{9254} & \textbf{9255} & \textbf{9256} & \textbf{9257} & \textbf{9258} & \textbf{9259} & \textbf{9260} & \textbf{9263} & \textbf{9264} & \textbf{9267} & \textbf{9268} & \textbf{9269} & \textbf{9270} & \textbf{9271} & \textbf{9272} & \textbf{9273} & \textbf{9277} & \textbf{9278} & \textbf{AVG} \\
\hline
\textbf{Percentage (\%)} & 18.62 & 23.36 & 22.41 & 21.18 & 35.82 & 15.79 & 22.62 & 22.78 & 32.75 & 20.41 & 18.25 & 17.77 & 28.78 & 20.46 & 17.19 & 29.35 & 22.05 & 19.20 & 36.56 & 23.01 & 27.64 & 16.85 & 26.22 & 23.44 \\
\hline
\end{tabular}%
}
}
\end{table*}%
\begin{table*}[!t]
\centering
\caption{Percentage of Erroneous Shots with Inconsistent Identities in TRECVID 2020 Ins Task}
\label{tab:2}%
\resizebox{0.8\textwidth}{!}{
\setlength{\tabcolsep}{1.2mm}{
\renewcommand\arraystretch{1.3}
\begin{tabular}{c|cccccccccccccccc|c}
\hline
\textbf{Topic ID} & \textbf{9299} & \textbf{9300} & \textbf{9301} & \textbf{9302} & \textbf{9303} & \textbf{9304} & \textbf{9305} & \textbf{9306} & \textbf{9307} & \textbf{9310} & \textbf{9311} & \textbf{9312} & \textbf{9315} & \textbf{9316} & \textbf{9317} & \textbf{9318} & \textbf{AVG} \\
\hline
\textbf{Percentage (\%)} & 16.50 & 10.20 & 18.74 & 18.61 & 31.61 & 22.45 & 21.82 & 25.35 & 18.69 & 23.72 & 29.44 & 25.15 & 27.47 & 28.07 & 19.95 & 19.86 & 22.35 \\
\hline
\end{tabular}%
}
}
\end{table*}%
\section{Motivation}
In this section, we first investigate the severity of IIP in existing P-A INS research.
Then, we observe and study two typical phenomena in IIP shots, i.e., location mismatch and detection failure, which motivate the proposed ICV and IDE methods. We focus the discussion on statistical analysis and comparison, and report implementation details in the later experimental section.
\subsection{Identity Inconsistency Problem (IIP)}
\label{section:III-A}
In order to investigate the IIP in P-A INS, we calculate the percentage of IIP shots among all erroneous shots, i.e., retrieved shots that are not included in the officially released ground-truth. The statistics are based on the proven effective strategy of searching for the action within a candidate person list \cite{2019whu,yangwhu}. Specifically, for each topic, we first take 0.4 as the threshold for face scores, to filter out shots with irrelevant faces. Then the ranking list is generated by sorting the remaining video shots by their action scores. Next, based on the officially released ground-truth, we count the number of erroneous shots for each topic. Finally, among all erroneous shots, we count the number of IIP shots. The percentage of IIP shots, $P_{\rm IIP\_error}$, for each topic can be computed as:
\begin{equation}
P_{\rm IIP\_error}=\frac{N_{\rm IIP\_error}}{N_{\rm total\_error}},
\end{equation}
where $N_{\rm IIP\_error}$ is the number of erroneous IIP shots, and $N_{\rm total\_error}$ is the total number of erroneous shots.
Table \ref{tab:1} and Table \ref{tab:2} show the percentages of IIP shots among all erroneous shots for the topics of the TRECVID 2019 and 2020 INS tasks, respectively. As shown in both tables, all topics suffer from IIP to different degrees. The average percentages of IIP shots among erroneous shots are 23.44\% and 22.35\% in 2019 and 2020, respectively. In particular, the percentages for topics \textit{9271}, \textit{9253}, \textit{9257}, and \textit{9303} all exceed 30\%. This indicates that the corresponding topics, i.e., ``Bradley is carrying bag'', ``Pat is sitting on couch'', ``Jane is holding phone'', and ``Billy is holding paper'', are more prone to IIP (as shown in Fig. \ref{fig2}).
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{fig5-eps-converted-to.pdf}
\caption{Percentage of overlapping degree for identity-inconsistent shots.}
\label{fig5}
\end{figure}
\subsection{Location Mismatch}
Through preliminary observation, we find that an identity-consistent face and action usually share an overlapping spatial region of their respective detection bounding boxes, whereas an identity-inconsistent face and action usually do not.
In this section, based on the above statistics on IIP, we quantitatively explore the relationship between the overlapping degree of detection bounding boxes and identity consistency. Specifically, for all IIP shots, we calculate the overlapping degree, i.e., the ratio of the overlap area (the red dotted bounding boxes in Fig. \ref{fig4}) to the face bounding box area (the blue solid bounding boxes in Fig. \ref{fig4}).
As shown in Fig. \ref{fig5},
in 89.94\% of identity-inconsistent shots, the overlapping degree of face and action is lower than 20\%. As the overlapping degree increases, the number of identity-inconsistent shots gradually decreases. This shows that the overlapping degree is an important indicator for detecting IIP, and it lays the foundation for the proposed ICV method.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{fig6-eps-converted-to.pdf}
\caption{Percentage of shots with face and action detection failures in IIP shots. Each value in the figure is the cumulative percentage of shots whose number of failed detection keyframes is less than or equal to the corresponding horizontal coordinate value.}
\label{fig6}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{fig7-eps-converted-to.pdf}
\caption{Overall scheme of the spatio-temporal method for P-A INS. Given the topic ``Jane is holding phone'' and the gallery videos to search, person INS and action INS are first conducted. Then IDE is applied to supply detection information for failed detection keyframes. Finally, ICV is adopted to check the identity consistency of person and action, which filters out IIP shots.}
\label{fig7}
\end{figure*}
\subsection{Detection Failures}
Besides location mismatch, another common phenomenon in IIP shots is detection failure. Similarly, we quantitatively investigate the severity of the detection failure problem.
Taking Fig. \subref*{fig3a} as an example, ``Billy'' is not detected in the $36$th-$57$th keyframes, but is detected in the $35$th and $58$th keyframes. The $36$th-$57$th keyframes are then defined as \emph{failed detection keyframes}, and the corresponding shot is called a \emph{failed detection shot} containing $22$ failed detection keyframes.
We calculate the percentage of failed detection shots among all IIP shots. Specifically, we first count the number of failed detection keyframes in each IIP shot.
Then, for a given number of failed detection keyframes $n$, the percentage of failed detection shots $P_{\rm failure}(n)$ is calculated as follows:
\begin{equation}
P_{\rm failure}(n)=\frac{N_{\rm failure}(n)}{N_{\rm IIP\_error}},
\end{equation}
where $N_{\rm failure}(n)$ is the number of shots which include no more than $n$ failed detection keyframes, $N_{\rm IIP\_error}$ is the total number of erroneous IIP shots.
Fig. \ref{fig6} records the percentages of failed detection shots for faces and actions that are included in the official predefined person and action sets, respectively. According to the statistics, 63.04\% and 61.83\% of shots have at least one failed detection keyframe in person INS and action INS, respectively, which shows the severity of the detection failure problem.
\section{Method}
The overall scheme of our method is shown in Fig. \ref{fig7}. Given a topic and a gallery video corpus, representative keyframes are first extracted from each shot of the video corpus. Then, person INS and action INS are conducted. It is worth noting that we apply detection rather than recognition in the action INS branch. This means that we can obtain initial face/action detection scores, as well as their corresponding bounding boxes. Next, in the temporal dimension, the IDE operation is conducted on failed detection shots, providing more detection information for ICV. Thereafter, in the spatial dimension, the ICV method is applied to check identity consistency between person and action, which filters out erroneous IIP shots. Finally, the maximum fusion score of all keyframes in a shot is taken as the INS score of the shot, and the ranking list is obtained by sorting the INS scores of all shots.
\subsection{Preliminary}
Assume that there are $L$ shots in the gallery videos. For the $l$-th shot, $K$ keyframes can be extracted. We denote the $k$-th keyframe in shot $l$ as $P^{(l,k)}$, where $l\in[1,L]$ and $k \in [1,K]$.
For the convenience of the following discussion, the indices $k$ and $l$ are temporarily omitted from all variables when this does not cause confusion.
For a keyframe $P$, assume that there are $m$ faces and $n$ actions detected in the person INS and action INS branches. The detection and identification results of $i$-th face can be expressed as $\left \langle ID_i, Conf_i, Box_i\right \rangle_{i=1}^m$, where $ID_i$ represents the face id, $Conf_i$ records the confidence score of face identification, $Box_i=\left \langle x_{min_i}, y_{min_i}, x_{max_i}, y_{max_i}\right \rangle$ contains the horizontal and vertical coordinates of upper-left and lower-right corners of the face bounding box.
Similarly, the result of $j$-th action can be expressed as $\left \langle ID_j, Conf_j, Box_j \right \rangle_{j=1}^n$, with similar notation definitions.
\subsection{Inter-frame Detection Extension (IDE)}
\label{section:IV-B}
To address the detection failure problem caused by complex filming conditions, we propose IDE to share detection information among keyframes in the temporal dimension. For simplicity, we take the face IDE as an example; similar operations also apply to the action detection results. In particular, because IDE works in the temporal dimension, superscripts are used to indicate the temporal relationship among keyframes in the $l$-th shot.
Assume that we successfully detect the $i$-th face in the $\left(k-\gamma_1\right)$-th and $\left(k+\gamma_2\right)$-th keyframes, and meanwhile, fail to detect the face in the keyframes between them. Considering the continuity of the video shots, we conduct IDE to recover face detection information in these keyframes.
For example, the confidence score of $i$-th face in $P^{(l, k)}$ can be recovered by a simple linear interpolation:
\begin{equation}
\begin{aligned}
Conf_i^{(l, k)} & = \frac{\gamma_{1}}{\left(\gamma_{1}+\gamma_{2}\right)} \times Conf_i^{(l, k+\gamma_{2})} \\
& + \frac{\gamma_{2}}{\left(\gamma_{1}+\gamma_{2}\right)} \times Conf_i^{(l, k-\gamma_{1})},
\end{aligned}
\label{eq3}
\end{equation}
where $Conf_i^{\left(l, k-\gamma_{1}\right)}$ and $Conf_i^{\left(l, k+\gamma_{2}\right)}$ are the face confidence scores of the target person in $P^{(l, k-\gamma_1)}$ and $P^{(l, k+\gamma_2)}$, respectively. The same interpolation is used to extend the coordinates of the face bounding boxes, and the face id is inherited in the process of the extension.
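A minimal sketch of this interpolation step is given below (Eq. \ref{eq3}; the function and variable names are ours):
\begin{verbatim}
# A sketch of the IDE interpolation in Eq. (3): detections at keyframes
# k - gamma1 and k + gamma2 are linearly propagated to the failed keyframes
# in between; the same rule applies to the box coordinates, and the face id
# is simply inherited. (Function and variable names are ours.)
import numpy as np

def ide_interpolate(conf_a, box_a, conf_b, box_b, gamma1, gamma2):
    """(conf_a, box_a): detection at keyframe k - gamma1;
    (conf_b, box_b): detection at keyframe k + gamma2;
    returns the extended detection at keyframe k."""
    w_a = gamma2 / (gamma1 + gamma2)    # weight of the earlier keyframe
    w_b = gamma1 / (gamma1 + gamma2)    # weight of the later keyframe
    conf_k = w_a * conf_a + w_b * conf_b
    box_k = (w_a * np.asarray(box_a, dtype=float)
             + w_b * np.asarray(box_b, dtype=float))
    return conf_k, box_k
\end{verbatim}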
\subsection{Identity Consistency Verification (ICV)}
\label{section:IV-C}
As discussed in the motivation, many erroneous shots exhibit the IIP. In order to address the problem, we propose ICV to verify the identity consistency between person and action. Different from IDE, which deals with the same type of detection results in the temporal dimension, ICV deals with two different types of detection results, i.e., face and action detection results, in the spatial dimension. Therefore, for the convenience of the following discussion, the superscripts are now used to distinguish the face and action results.
Specifically, for a keyframe $P$, we calculate spatial consistency degree matrix $\mathbf{C}=\left[c_{i,j}\right] \in \mathbb{R}^{m \times n}$ based on face and action bounding boxes obtained from person and action INS branches, in which $c_{i,j}$ is defined as:
\begin{equation}
c_{i,j}=\frac{\mathbf{Intersection}\left(Box_i^{\rm face},Box_j^{\rm action}\right)}{\mathbf{Area}\left(Box_i^{\rm face}\right)},
\label{eq4}
\end{equation}
where $\mathbf{Intersection(\cdot,\cdot)}$ is the function of computing the intersection area of two bounding boxes, $\mathbf{Area}(\cdot)$ is the function of computing the area of a bounding box. Their formulas are as follows:
\begin{align}
&\mathbf{Intersection}\left(Box_i^{\rm face},Box_j^{\rm action}\right)= \\ \nonumber
&\mathbf{max}\left[\mathbf{min}\left(x_{max_i}^{\rm face}, x_{max_j}^{\rm action}\right) - \mathbf{max}\left(x_{min_i}^{\rm face}, x_{min_j}^{\rm action}\right), 0\right] \times \\ \nonumber
& \mathbf{max}\left[\mathbf{min}\left(y_{max_i}^{\rm face}, y_{max_j}^{\rm action}\right) - \mathbf{max}\left(y_{min_i}^{\rm face}, y_{min_j}^{\rm action}\right), 0\right],
\end{align}
\begin{align}
\mathbf{Area}\left(Box_i^{\rm face}\right) =
\left(x_{max_i}^{\rm face} - x_{min_i}^{\rm face}\right) &\\ \nonumber
& \times \left(y_{max_i}^{\rm face} - y_{min_i}^{\rm face}\right).
\end{align}
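A compact sketch of the computation of $c_{i,j}$ in Eq. \ref{eq4} is given below (boxes are in the $\left \langle x_{min}, y_{min}, x_{max}, y_{max}\right \rangle$ format; the function name is ours):
\begin{verbatim}
# Spatial consistency degree c_{ij}: intersection of the face and action
# boxes, normalized by the face box area. Boxes are <xmin, ymin, xmax, ymax>.
def spatial_consistency(face_box, action_box):
    fx0, fy0, fx1, fy1 = face_box
    ax0, ay0, ax1, ay1 = action_box
    inter_w = max(min(fx1, ax1) - max(fx0, ax0), 0.0)
    inter_h = max(min(fy1, ay1) - max(fy0, ay0), 0.0)
    face_area = (fx1 - fx0) * (fy1 - fy0)
    return inter_w * inter_h / face_area
\end{verbatim}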
Next, the proposed spatial consistency degree is applied to optimize the fusion score. Two representative fusion strategies are adopted.
\begin{itemize}
\item One simple strategy is the weighted fusion method (\textit{$Fusion_{wet}$}) \cite{2019nii,2019whu,yangwhu}, which can be optimized as:
\begin{small}
\begin{equation}
s_{i,j}=c_{i,j} \times \left[\alpha \times Conf_i^{\rm face} + \left(1-\alpha\right) \times Conf_j^{\rm action}\right],
\label{eq7}
\end{equation}
\end{small}
where $s_{i,j}$ stands for the fusion score of the $i$-th person and the $j$-th action, and $\alpha \in [0,1]$ is the fusion coefficient. We test the dynamic performance of different values of $\alpha$ in the experiment section.
\item The other effective fusion strategy, i.e., searching for a specific action based on a candidate person list (\textit{$Fusion_{thd}$}), is widely used \cite{2019whu,yangwhu,2019pku,pengpku_wict}. It first obtains a candidate person list by setting a threshold on the face confidence scores obtained by person INS, and then ranks the list according to the action confidence scores obtained by action INS. It can be improved by the proposed spatial consistency degree as:
\begin{equation}
s_{i,j}=c_{i,j} \times \left[\mathbf{F_{\delta}}\left(Conf_i^{\rm face}\right) \times Conf_j^{\rm action}\right],
\label{eq8}
\end{equation}
\begin{equation}
\mathbf{F_{\delta}}\left(x\right) =
\begin{cases}
1,& x \text{ $\geq$ } \delta\\
0,& x \text{ $\textless$ } \delta
\end{cases}
\end{equation}
where $\mathbf{F_{\delta}}(\cdot)$ is a threshold function, and $\delta$ is the threshold on face scores that determines whether the target person exists in the keyframe.
We test the dynamic performance of different values of $\delta$ in the experiment section.
\end{itemize}
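As an illustration of how $c_{i,j}$ enters the two fusion strategies above (Eq. \ref{eq7} and Eq. \ref{eq8}), a short sketch is given below; the default parameter values are illustrative only:
\begin{verbatim}
# The two score fusion strategies, optimized by the spatial consistency degree.
def fuse_weighted(c_ij, conf_face, conf_action, alpha=0.5):
    # Fusion_wet: weighted sum of face and action confidences, scaled by c_ij
    return c_ij * (alpha * conf_face + (1 - alpha) * conf_action)

def fuse_threshold(c_ij, conf_face, conf_action, delta=0.4):
    # Fusion_thd: keep the action score only if the face score passes delta
    return c_ij * conf_action if conf_face >= delta else 0.0
\end{verbatim}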
\begin{figure}[!t]
\begin{algorithm}[H]
\caption{The spatio-temporal identity verification method for P-A INS.}
\label{alg:Framework}
\begin{algorithmic}[1]
\REQUIRE
A topic concerning $i$-th person and $j$-th action;
A gallery video corpus with $L$ shots;\\
\ENSURE
The ranking list of gallery shots, $R_{list}$;
\FOR{each $l \in [1,L]$}
\STATE Conduct person INS and action INS;
\label{code:fram:INS}
\STATE Conduct IDE for failed detection shots (Eq. \ref{eq3});
\label{code:fram:IDE}
\FOR{each $k \in [1,K]$}
\STATE Compute spatial consistency degree $c_{i,j}^{(l,k)}$ (Eq. \ref{eq4});
\STATE Compute fusion score $s_{i,j}^{(l,k)}$ (Eq. \ref{eq7} or Eq. \ref{eq8});
\label{code:fram:ICV}
\ENDFOR
\STATE Compute fusion score $s^l_{i,j}$ (Eq. \ref{eq10});
\ENDFOR
\STATE Generate $R_{list}$ by sorting $\{s^l_{i,j}\}_{l=1}^L$ in descending order;
\label{code:fram:Rank}
\RETURN $R_{list}$;
\end{algorithmic}
\end{algorithm}
\end{figure}
\subsection{Generating Ranking List}
After obtaining the fusion scores of all keyframes, the fusion score of the $i$-th person performing the $j$-th action in the $l$-th shot is the maximum score over the keyframes in the shot:
\begin{equation}
s_{i,j}^l=\max_{k=1,\cdots,K} {s^{(l,k)}_{i,j}},
\label{eq10}
\end{equation}
then the ranking list concerning the topic of the $i$-th person performing the $j$-th action is obtained by sorting the fusion scores of all shots. The complete flowchart of the proposed spatio-temporal identity verification method is presented in Algorithm \ref{alg:Framework}. The technical details of person INS and action INS are elaborated in Section \ref{section:V-B}; here we mainly emphasize the temporal and spatial characteristics of IDE and ICV.
\section{Experiments}
\subsection{Dataset and Experiment Settings}
\textbf{Dataset.} We use the TRECVID INS dataset to carry out the experiments \cite{awad2019trecvid}. It comes from the 464-hour British Broadcasting Corporation (BBC) soap opera ``EastEnders''. The 244 weekly ``omnibus'' video files from 5 years of broadcasts are divided into 471,526 shots, containing about 7.84 million keyframes, which serve as the basic units of P-A INS. The detailed dataset parameters are shown in Table \ref{tab:addlabel3}. Based on the dataset, NIST selects a number of topics as representative samples for the TRECVID 2019 and 2020 INS tasks. As shown in Table \ref{tab:addlabel1} and Table \ref{tab:addlabel2}, in the 2019 INS task, 10 persons and 12 actions are combined into 30 unique topics \cite{awad2019trecvid}; in the 2020 INS task, 8 persons and 9 actions are combined into 20 unique topics \cite{awad2020trecvid}. Each topic includes example images or videos for the target person and action.
\begin{table}[!t]
\centering
\caption{Parameters of TRECVID dataset}
\begin{tabular}{ll}
\toprule
\textbf{Parameters} & \textbf{Values} \\
\midrule
Total video size & 286 GB \\
Total video duration & 464 h \\
Number of videos & 244 \\
Number of shots & 471,526 \\
Number of keyframes & 7,837,120 \\
Number of characters & 191 \\
Number of actions & 25 \\
\bottomrule
\end{tabular}%
\label{tab:addlabel3}%
\end{table}%
\begin{table}[!t]
\centering
\caption{Topics in TRECVID 2019 INS task}
\begin{tabular}{cll}
\toprule
\textbf{Topics} & \multicolumn{1}{l}{\textbf{Person}} & \multicolumn{1}{l}{\textbf{Action}} \\
\midrule
\textbf{9249} & Max & holding glass \\
\textbf{9250} & Ian & holding glass \\
\textbf{9251} & Pat & holding glass \\
\textbf{9252} & Denise & holding glass \\
\textbf{9253} & Pat & sit on couch \\
\textbf{9254} & Denise & sit on couch \\
\textbf{9255} & Ian & holding phone \\
\textbf{9256} & Phil & holding phone \\
\textbf{9257} & Jane & holding phone \\
\textbf{9258} & Pat & drinking \\
\textbf{9259} & Ian & open door enter \\
\textbf{9260} & Dot & open door enter \\
\textbf{9261} & Max & shouting \\
\textbf{9262} & Phil & shouting \\
\textbf{9263} & Ian & eating \\
\textbf{9264} & Dot & eating \\
\textbf{9265} & Max & crying \\
\textbf{9266} & Jane & laughing \\
\textbf{9267} & Dot & open door leave \\
\textbf{9268} & Phil & go up down stairs \\
\textbf{9269} & Jack & sit on couch \\
\textbf{9270} & Stacey & carrying bag \\
\textbf{9271} & Bradley & carrying bag \\
\textbf{9272} & Stacey & drinking \\
\textbf{9273} & Jack & drinking \\
\textbf{9274} & Jack & shouting \\
\textbf{9275} & Stacey & crying \\
\textbf{9276} & Bradley & laughing \\
\textbf{9277} & Jack & open door leave \\
\textbf{9278} & Stacey & go up down stairs \\
\bottomrule
\end{tabular}%
\label{tab:addlabel1}%
\end{table}%
\begin{table}[!t]
\centering
\caption{Topics in TRECVID 2020 INS task}
\begin{tabular}{cll}
\toprule
\textbf{Topics} & \multicolumn{1}{l}{\textbf{Person}} & \multicolumn{1}{l}{\textbf{Action}} \\
\midrule
\textbf{9299} & Ian & sit on couch \\
\textbf{9300} & Billy & sit on couch \\
\textbf{9301} & Ian & holding paper \\
\textbf{9302} & Bradley & holding paper \\
\textbf{9303} & Billy & holding paper \\
\textbf{9304} & Max & drinking \\
\textbf{9305} & Dot & drinking \\
\textbf{9306} & Pat & holding cloth \\
\textbf{9307} & Heather & holding cloth \\
\textbf{9308} & Ian & crying \\
\textbf{9309} & Heather & crying \\
\textbf{9310} & Max & smoking cigarette \\
\textbf{9311} & Dot & smoking cigarette \\
\textbf{9312} & Pat & smoking cigarette \\
\textbf{9313} & Stacey & laughing \\
\textbf{9314} & Pat & laughing \\
\textbf{9315} & Max & go up down stairs \\
\textbf{9316} & Bradley & go up down stairs \\
\textbf{9317} & Max & holding phone \\
\textbf{9318} & Stacey & holding phone \\
\bottomrule
\end{tabular}%
\label{tab:addlabel2}%
\end{table}%
\textbf{Evaluation criteria.} Following the standard evaluation criteria, Average Precision (AP) is adopted to evaluate the retrieval quality of each topic, and mean AP (mAP) is used to describe the overall performance over the given set of P-A INS topics. According to the official evaluation requirements of TRECVID, at most 1,000 shots per topic can be submitted for evaluation. It is worth noting that some actions in some topics belong to the emotion category. Since emotion actions naturally correspond to faces, they do not suffer from the identity inconsistency problem. Therefore, these topics are excluded from our experiments; this exclusion does not modify the dataset itself.
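For reference, a minimal sketch of the (non-interpolated) AP computation with the 1,000-shot submission cap is given below; the official NIST scoring tool may differ in details such as tie handling, so this is only an illustration:
\begin{verbatim}
# Sketch of Average Precision for one topic with the 1,000-shot cap.
def average_precision(ranked_shots, relevant_shots, cap=1000):
    """ranked_shots: shot ids, best first; relevant_shots: set of ids."""
    hits, precision_sum = 0, 0.0
    for rank, shot in enumerate(ranked_shots[:cap], start=1):
        if shot in relevant_shots:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(relevant_shots), 1)

def mean_ap(per_topic_aps):
    """mAP over the evaluated set of topics."""
    return sum(per_topic_aps) / len(per_topic_aps)
\end{verbatim}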
\begin{figure*}[!t]
\centering
\subfloat[Parameter $\alpha$]{\includegraphics[width=3.5in]{fig8_a_-eps-converted-to.pdf}%
\label{fig8a}}
\hfil
\subfloat[Parameter $\delta$]{\includegraphics[width=3.5in]{fig8_b_-eps-converted-to.pdf}%
\label{fig8b}}
\caption{Study on the dynamic performance of parameters. The dotted lines mark the best mAP values in TRECVID 2019 and 2020 INS tasks.}
\label{fig8}
\end{figure*}
\subsection{Implementation Details}
\label{section:V-B}
\textbf{Person INS branch.}
Different from the target persons' clothes, which are always changing, faces provide robust and discriminative features for identity recognition. Hence, in the person INS branch we find the target person based on faces. It should be noted that we use a face reference set containing 815 face images as the query set, including the sample images of the target person provided by the task as well as face images collected through \textsl{Bing} \cite{lan2017ps, wang2019salient}. The person INS branch mainly contains two steps, i.e., face detection and face identification. For face detection, we adopt the RetinaFace detector \cite{deng2019retinaface}, trained on the WIDER FACE dataset \cite{yang2016wider}, to obtain the face bounding boxes in each keyframe. For face identification, we utilize ArcFace \cite{deng2019arcface}, trained on the face recognition dataset MS1Mv2 \cite{deng2019arcface}. Based on the detected face bounding boxes, 512-dimensional features are extracted from each normalized face image to represent a face. Finally, a similarity measure based on cosine distance is used to calculate the face identification score.
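The identification scoring step can be sketched as follows, assuming that the score of a detected face is its maximum cosine similarity to the reference embeddings (the detectors and feature extractors themselves are off-the-shelf models and are not reproduced here):
\begin{verbatim}
import numpy as np

def face_scores(face_embeddings, reference_embeddings):
    """Cosine-similarity identification scores.

    face_embeddings:      (N, 512) features of detected faces
    reference_embeddings: (M, 512) features of the face reference set
    Returns an (N,) array: each face's best similarity to the set.
    """
    f = face_embeddings / np.linalg.norm(face_embeddings,
                                         axis=1, keepdims=True)
    r = reference_embeddings / np.linalg.norm(reference_embeddings,
                                              axis=1, keepdims=True)
    return (f @ r.T).max(axis=1)
\end{verbatim}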
\textbf{Action INS branch.}
In the action INS branch, we specially apply two different action detection methods, i.e., human-object interaction detection on images and action detection on videos, for different topics of 2019 and 2020 INS tasks.
\begin{itemize}
\item Human-object interaction detection on images. Some actions with obvious objects, e.g., ``holding paper'', ``holding glass'', and ``carrying bag'', can be well detected by image-based human-object interaction detection. For topics with such actions, we adopt PPDM \cite{liao2020ppdm}, which is pre-trained on the large human-object interaction detection dataset HICO-DET \cite{chao2018learning}. First, the heatmap prediction network DLA-34 \cite{yu2018deep} is adopted as the feature extractor; then the point detection and matching branches are applied to conduct action detection at the image level.
\item Action detection on videos. Some actions that last for a long time, e.g., ``open door and enter'', ``open door and leave'', and ``go up or down stairs'', need to be identified from videos. For topics with such actions, we adopt ACAM \cite{ulutan2020actor}, which is trained on the AVA dataset \cite{gu2018ava}. We perform action INS on keyframe sequences. First, the persons in all keyframes of a shot are detected by an object detection network. Then every 8 keyframes are grouped into a sequence and fed into the action detection network. Finally, the detected action information is allocated to the corresponding keyframes (see the sketch after this list).
\end{itemize}
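The grouping of keyframes into sequences for the video-based branch can be sketched as follows; this is a simplified illustration in which the detector call is a placeholder assumed to return one detection list per keyframe of the sequence:
\begin{verbatim}
def detect_actions_on_shot(keyframes, action_detector, seq_len=8):
    """Group keyframes into sequences of seq_len and assign detections back."""
    per_keyframe = [[] for _ in keyframes]
    for start in range(0, len(keyframes), seq_len):
        seq = keyframes[start:start + seq_len]
        # placeholder: assumed to return one detection list per keyframe
        detections = action_detector(seq)
        for offset, dets in enumerate(detections):
            per_keyframe[start + offset] = dets
    return per_keyframe
\end{verbatim}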
\par\textbf{Inter-frame Detection Extension.} Based on the above person and action INS branches, we obtain the initial face and action INS results, including the ids, confidence scores, and bounding boxes of target faces and actions. For each shot, we check the detection results of all keyframes. If a shot contains keyframes with failed detections, we apply the IDE operation to that shot.
\textbf{Identity Consistency Verification.}
After the IDE, we conduct ICV. As mentioned in Section \ref{section:IV-C}, for each keyframe in each shot, the spatial consistency degree score matrix is calculated, and the optimizations of the two basic fusion methods \textit{$Fusion_{wet}$} and \textit{$Fusion_{thd}$} are conducted. We test the dynamic performance of the parameters $\alpha$ and $\delta$ of the two fusion methods in the following experiments.
\begin{itemize}
\item Investigation on the impact of $\alpha$. For the method \textit{$Fusion_{wet}$}, we investigate the impact of different values of the fusion coefficient $\alpha$. Specifically, based on the basic weighted fusion model without IDE and ICV, we observe the effect of setting $\alpha$ to values in the range $[0,1]$ with a spacing of 0.1. Fig. \subref*{fig8a} shows the mAP values of the topics in the TRECVID 2019 and 2020 INS tasks, respectively. The mAP is highest when $\alpha$ is 0.3 in both the 2019 and 2020 INS tasks, which indicates that the action score needs a higher weight during fusion, since the face score is generally higher than the action score.
\item Investigation on the impact of $\delta$. For the method \textit{$Fusion_{thd}$}, we investigate the impact of different values of the face score threshold $\delta$. Specifically, based on the basic face filter fusion model without IDE and ICV, we observe the effect of setting $\delta$ to values in the range $[0,1]$ with a spacing of 0.1. Fig. \subref*{fig8b} shows the mAP values of the topics in the TRECVID 2019 and 2020 INS tasks, respectively. The mAP is highest when $\delta$ is 0.4 in both the 2019 and 2020 INS tasks, which suggests that a shot is likely to contain the target person when the corresponding face score is greater than or equal to 0.4. In the following comparison experiments, we set $\delta$ to 0.4.
\end{itemize}
Comparing the best performance of the two basic fusion methods in the above experiments, \textit{$Fusion_{thd}$} outperforms \textit{$Fusion_{wet}$}. Therefore, in the following comparison experiments, we choose \textit{$Fusion_{thd}$} as the baseline model.
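For completeness, the two basic fusion rules can be sketched as below. The formal definitions are given earlier in the paper; here we assume that \textit{$Fusion_{wet}$} is a convex combination of the face and action scores with weight $\alpha$ on the face score, and that \textit{$Fusion_{thd}$} keeps the action score only when the face score reaches the threshold $\delta$:
\begin{verbatim}
# Hedged sketch of the two basic fusion rules (our reading of the
# definitions; see the method section for the exact formulation).

def fusion_wet(face_score, action_score, alpha=0.3):
    """Weighted fusion: alpha on the face score, (1-alpha) on the action score."""
    return alpha * face_score + (1.0 - alpha) * action_score

def fusion_thd(face_score, action_score, delta=0.4):
    """Face-filter fusion: keep the action score only if the face score passes delta."""
    return action_score if face_score >= delta else 0.0
\end{verbatim}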
\begin{table*}[!t]
\centering
\caption{Ablation Study Results on TRECVID 2019 INS Task. The Bold Values Mark the Best Value for Each Column (\%)}
\label{tab:tab1}%
\resizebox{\textwidth}{!}{
\setlength{\tabcolsep}{0.7mm}{
\vspace{-10pt}
\renewcommand\arraystretch{1.3}
\begin{tabular}{p{7.5em}|ccccccccccccccc|c|cccccccc|c||c}
\hline
\multirow{2}[4]{*}{\textbf{Method}} & \multicolumn{16}{c|}{$\rm \bf P$-$\rm \bf A_{i}$} & \multicolumn{9}{c||}{$\rm \bf P$-$\rm \bf A_{v}$} & $\rm \bf P$-$\rm \bf A$ \\
\cline{2-27}
\multicolumn{1}{c|}{} & \textbf{9249} & \textbf{9250} & \textbf{9251} & \textbf{9252} & \textbf{9253} & \textbf{9254} & \textbf{9255} & \textbf{9256} & \textbf{9257} & \textbf{9258} & \textbf{9269} & \textbf{9270} & \textbf{9271} & \textbf{9272} & \textbf{9273} & \textbf{mAP} & \textbf{9259} & \textbf{9260} & \textbf{9263} & \textbf{9264} & \textbf{9267} & \textbf{9268} & \textbf{9277} & \textbf{9278} & \textbf{mAP} & \textbf{mAP} \\
\hline
Base & 18.06 & 18.49 & 18.11 & 25.71 & 21.55 & 19.94 & 56.76 & 71.88 & 48.95 & 18.36 & 20.11 & 6.62 & 3.73 & 19.68 & 24.66 & 26.17 & 4.62 & 6.26 & 8.71 & 6.69 & 1.94 & 0.33 & 1.73 & 2.46 & 4.09 & 18.49 \\
Base+IDE & 17.44 & 18.00 & 18.12 & 26.08 & \textbf{22.06} & \textbf{20.14} & 55.64 & 71.63 & 46.85 & 18.46 & 20.89 & \textbf{7.91} & 5.07 & 20.19 & 26.13 & 26.31 & 4.39 & 7.00 & 8.59 & \textbf{6.76} & 2.07 & 0.32 & 1.67 & 2.43 & 4.15 & 18.60 \\
Base+ICV & \textbf{19.15} & \textbf{20.55} & 19.82 & 25.07 & 21.06 & 18.77 & \textbf{61.04} & \textbf{73.30} & \textbf{53.62} & 23.81 & 21.21 & 5.61 & 4.53 & 23.17 & 28.67 & 27.96 & \textbf{5.20} & 6.82 & 10.41 & 5.83 & 2.20 & \textbf{0.46} & \textbf{2.13} & \textbf{2.87} & 4.49 & 19.80 \\
Base+IDE+ICV & 18.54 & 19.85 & \textbf{20.52} & \textbf{26.54} & 21.70 & 19.27 & 60.26 & 73.27 & 51.82 & \textbf{24.78} & \textbf{22.25} & 6.72 & \textbf{7.01} & \textbf{24.13} & \textbf{31.99} & \textbf{28.58} & 4.95 & \textbf{7.73} & \textbf{10.44} & 5.75 & \textbf{2.30} & 0.43 & 2.05 & 2.86 & \textbf{4.56} & \textbf{20.22} \\
\hline
\end{tabular}%
}
}
\end{table*}%
\begin{table*}[!t]
\centering
\caption{Ablation Study Results on TRECVID 2020 INS Task. The Bold Values Mark the Best Value for Each Column (\%)}
\label{tab:tab2}%
\resizebox{\textwidth}{!}{
\setlength{\tabcolsep}{1.5mm}{
\vspace{10pt}
\renewcommand\arraystretch{1.2}
\begin{tabular}{p{7em}|ccccccccc|c|ccccccc|c||c}
\hline
\multirow{2}[4]{*}{\textbf{Method}} & \multicolumn{10}{c|}{$\rm \bf P$-$\rm \bf A_{i}$} & \multicolumn{8}{c||}{$\rm \bf P$-$\rm \bf A_{v}$} & $\rm \bf P$-$\rm \bf A$ \\
\cline{2-20}
\multicolumn{1}{c|}{} & \textbf{9299} & \textbf{9300} & \textbf{9301} & \textbf{9302} & \textbf{9303} & \textbf{9304} & \textbf{9305} & \textbf{9317} & \textbf{9318} & \textbf{mAP} & \textbf{9306} & \textbf{9307} & \textbf{9310} & \textbf{9311} & \textbf{9312} & \textbf{9315} & \textbf{9316} & \textbf{mAP} & \textbf{mAP} \\
\hline
Base & \textbf{23.74} & \textbf{23.02} & 35.11 & 17.04 & 26.69 & 17.42 & 28.81 & 61.22 & 53.40 & 31.83 & 0.58 & 4.81 & 1.97 & 14.49 & 9.02 & 0.65 & 10.88 & 6.06 & 20.55 \\
Base+IDE & 23.58 & 22.98 & 35.04 & \textbf{17.05} & 26.88 & 21.11 & 28.53 & 60.59 & 51.67 & 31.94 & 0.56 & 4.92 & 1.81 & 14.72 & 9.59 & 0.71 & 10.70 & 6.15 & 20.65 \\
Base+ICV & 21.48 & 21.54 & \textbf{37.45} & 16.86 & 30.77 & 19.03 & 31.68 & 63.86 & \textbf{57.44} & 33.34 & \textbf{0.81} & 5.54 & \textbf{2.05} & 15.88 & 12.34 & 1.00 & \textbf{11.53} & 7.02 & 21.83 \\
Base+IDE+ICV & 21.14 & 21.75 & 37.20 & 16.86 & \textbf{31.35} & \textbf{23.54} & \textbf{31.89} & \textbf{63.94} & 56.28 & \textbf{33.77} & 0.79 & \textbf{5.58} & 2.03 & \textbf{16.10} & \textbf{13.09} & \textbf{1.08} & 11.37 & \textbf{7.15} & \textbf{22.12} \\
\hline
\end{tabular}%
}
}
\end{table*}%
\subsection{Ablation Study}
We evaluate the effectiveness of IDE and ICV on the TRECVID INS dataset. We construct a baseline model, referred to as Base, by removing all proposed components. Specifically, in the Base model, the initial face scores are used to obtain a candidate person list, and the action scores are taken as the scores of the keyframes. Thereafter, the maximum score over the keyframes is taken as the shot score. Finally, the ranking list is obtained by sorting the shot scores for each topic. The proposed methods are then added gradually to Base, including IDE discussed in Section \ref{section:IV-B} and ICV elaborated in Section \ref{section:IV-C}.
It should be noted that we have two P-A INS combination settings, since we adopt two action detection methods in the action INS branch, i.e., image-based P-A INS (P-$\rm A_{i}$ INS) and video-based P-A INS (P-$\rm A_{v}$ INS).
Table \ref{tab:tab1} and Table \ref{tab:tab2} respectively show ablation study results in 2019 and 2020 INS tasks. According to the evaluation criteria, we usually focus on the overall performance improvement, i.e., mAP of all topics. We evaluate mAP values of topics corresponding to P-$\rm A_{i}$ and P-$\rm A_{v}$ columns respectively, and mAP of all topics in the final P-A column.
\textbf{Evaluation of IDE.}
We add the method IDE to Base, referred to as Base+IDE. The experimental results are shown in the first two rows of Table \ref{tab:tab1} and Table \ref{tab:tab2}.
In the 2019 INS task, the Base+IDE method gains a 0.11\% (0.59\% relative growth) improvement in mAP over the Base method. Similarly, in the 2020 INS task, the improvement is 0.10\% (0.49\% relative growth), which confirms the effectiveness of IDE.
\textbf{Evaluation of ICV.}
We add the method ICV to Base, referred to as Base+ICV. According to the evaluation results in Table \ref{tab:tab1} and Table \ref{tab:tab2}, the Base+ICV method gains 1.31\% (7.08\% relative growth) and 1.28\% (6.23\% relative growth) improvements over the Base method in the 2019 and 2020 INS tasks, respectively, which confirms the effectiveness of ICV.
Furthermore, the complete model Base+IDE+ICV achieves the best performance in both experiments, gaining 1.73\% (9.36\% relative growth) and 1.57\% (7.64\% relative growth) improvements over the Base method in the 2019 and 2020 INS tasks, respectively. As can be seen from the experimental results, with the proposed method the mAPs of both P-$\rm A_{i}$ INS and P-$\rm A_{v}$ INS improve, which shows that the effectiveness of the proposed method is consistent and that it works for both the image-based and the video-based P-A INS branches. It is worth mentioning that, since the dataset is very large, performance improvement is not easy. According to the evaluation criteria, for a topic with only one matched shot, a performance improvement of 1\% is equivalent to raising that shot in the ranking list from the $500$th to the $45$th position. Such a performance improvement is therefore very meaningful.
\begin{figure}[!t]
\centering
\subfloat[``Billy is sitting on couch'' (\textit{9300})]{
\includegraphics[width=\linewidth]{fig12_a_-eps-converted-to.pdf}
\label{fig12a}
}
\\
\subfloat[``Bradley is carrying bag'' (\textit{9271})]{
\includegraphics[width=\linewidth]{fig12_b_-eps-converted-to.pdf}
\label{fig12b}
}
\caption{Examples of correct shots saved by IDE. The blue and green solid bounding boxes mark searched target person and action, and the blue and green dotted boxes mark failed search of target person and action.}
\label{fig12}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=\linewidth]{fig9-eps-converted-to.pdf}
\caption{Examples of IIP shots filtered out by ICV. The blue and green bounding boxes mark target person and action, respectively. The rank numbers show video shots' ranking positions in the initial ranking list generated by the baseline model.}
\label{fig9}
\end{figure*}
\begin{figure}[!t]
\centering
\subfloat[``Ian is sitting on couch'' (\textit{9299})]{
\includegraphics[width=1.6in]{fig10_a_-eps-converted-to.pdf}
\label{fig10a}
}
\subfloat[``Pat is sitting on couch'' (\textit{9253})]{
\includegraphics[width=1.6in]{fig10_b_-eps-converted-to.pdf}
\label{fig10b}
}
\\
\subfloat[``Dot is eating'' (\textit{9264})]{
\includegraphics[width=1.6in]{fig10_c_-eps-converted-to.pdf}
\label{fig10c}
}
\subfloat[``Stacey is carrying bag'' (\textit{9270})]{
\includegraphics[width=1.6in]{fig10_d_-eps-converted-to.pdf}
\label{fig10d}
}
\caption{Examples of correct shots filtered out by ICV. The blue solid bounding boxes mark searched target person, the green solid bounding boxes mark searched target action, and the green dotted boxes mark failed search of target action.}
\label{fig10}
\end{figure}
\begin{figure*}[!t]
\centering
\subfloat[AP values of topics in TRECVID 2019 INS task]{
\includegraphics[width=\linewidth]{fig11_a_-eps-converted-to.pdf}
\label{fig11a}
}
\hfil
\subfloat[AP values of topics in TRECVID 2020 INS task]{
\includegraphics[width=\linewidth]{fig11_b_-eps-converted-to.pdf}
\label{fig11b}
}
\caption{Comparisons with other P-A INS methods. In the figures, the horizontal axis represents the topic ID, and the vertical axis denotes the AP value of each corresponding topic. The mAP values of these methods are given in the legend.}
\label{fig11}
\end{figure*}
\subsection{Result Visualization}
To show the positive effect of the proposed method on the results, we visualize some of them. Fig. \ref{fig12} shows some shots that benefit from IDE: they are absent from the initial ranking list generated by the baseline method in the ablation study and are added to the ranking list after IDE. In Fig. \ref{fig12a}, the topic \emph{9300} is ``Billy is sitting on couch''; ``Billy'' is detected in the $12$th keyframe, but ``sitting on couch'' is not, while in the $8$th and $15$th keyframes ``sitting on couch'' is detected but ``Billy'' is not. Hence the fusion scores of all keyframes of the shot are zero. After IDE, the face and action results are extended to the keyframes with failed detections, so that the fusion score is no longer zero. A similar case appears in Fig. \ref{fig12b}. It can be seen from the figure that IDE successfully saves some shots with failed detections.
Fig. \ref{fig9} shows the role of ICV. The topics are ``Jane is holding phone'', ``Billy is holding paper'', and ``Max is drinking'' from top to bottom in the figure\footnote[3]{The topics in the TRECVID INS task actually contain a set of person example images together with their mask images, and a set of action example shots. In order to keep the illustration simple, we only use one face image and one action image to represent a topic; this simplification does not affect the following INS process or results.}. Some IIP shots filtered out by ICV are shown; they exist in the initial ranking list generated by the baseline method, and their original rankings are marked in the figure. It can be seen that ICV effectively filters out IIP shots for each topic.
\subsection{Error Analysis}
By observing AP values for each topic in Table \ref{tab:tab1} and Table \ref{tab:tab2}, we find that for very few topics, the AP values decrease slightly after adding the proposed method.
To be specific, the ground-truth in TRECVID INS task is used to check the shots filtered out from the original ranking list by the proposed method. We find that some correct shots are filtered out by mistake.
Through observation, we find that this is not a fault of the proposed method. In detail, although a shot does contain the target person performing the target action, and the target person is found successfully by the person INS branch, the target action performed by the target person is not found by the action INS branch, whereas the same action performed by someone else is found. The shot is therefore only coincidentally considered a correct shot.
Fig. \ref{fig10} gives four examples. In Fig. \subref*{fig10a}, the topic \textit{9299} is ``Ian is sitting on couch'', which is shown in the image. Unfortunately, the action INS branch does not find the action ``sitting on couch'' belonging to ``Ian'' due to errors of action INS; coincidentally, it finds the same action belonging to ``Jane''. Before conducting ICV, the person ``Ian'' and the action ``sitting on couch'' belonging to ``Jane'' make the system mistakenly believe that the shot matches the topic ``Ian is sitting on couch''. After conducting ICV, such shots coincidentally included in the original result list are eliminated, resulting in a slight decrease of performance.
A similar issue appears in topics \textit{9253}, \textit{9264}, and \textit{9270}, which correspond to ``Pat is sitting on couch'', ``Dot is eating'', and ``Stacey is carrying bag'' in Fig. \subref*{fig10b}-Fig. \subref*{fig10d}. Therefore, we can conclude that the fundamental reason for the decrease of the mAP values in some specific topics is errors of the person or action INS branches, rather than the proposed method.
\subsection{Comparison with other Methods}
We compare the proposed method with publicly reported representative methods on the TRECVID INS dataset, following the official evaluation settings. For the automatic INS task in TRECVID, each team is allowed to submit several runs for evaluation. We select some representative runs from each year for comparison. It is worth noting that the ID of each team's run does not represent the order of performance; that is, the run labeled 1 may not be the best one among all runs submitted by the team. Fig. \ref{fig11} demonstrates the comparative results of our method and previous evaluation runs.
For TRECVID 2019 INS task, we select the following representative runs:
\begin{itemize}
\item F\_M\_A\_E\_PKU\_ICST\_6 \cite{2019pku}. It is the penultimate run of the PKU\_ICST team, which took first place in 2019. In this run, face detection and recognition models are adopted for person INS, and image-level and video-level action recognition are adopted for action INS. The methods of searching for the person based on candidate action shots and searching for the action based on candidate person shots are implemented simultaneously for fusion. To obtain better ranking lists, some tricks are used, e.g., super-resolution, a top-N query expansion strategy, and score adjustment based on video transcripts.
\item F\_M\_E\_E\_BUPT\_MCPRL\_1 \cite{2019bupt}. It is the best run of BUPT\_MCPRL team, which is the second place in 2019. In this run, face detection and recognition are adopted for person INS. Human-object interaction action recognition based on object detection and pose estimation, and general action recognition are adopted for action INS. The method of searching action based on candidate person shots is implemented for fusion. Moreover, query expansion is used in person INS.
\item F\_M\_A\_E\_NII\_Hitachi\_UIT\_2 \cite{2019nii}. It is the best run of NII\_Hitachi\_UIT team, which is the third place in 2019. In this run, face detection and recognition are adopted for person INS. An audio-type action and two visual-type action recognition models are adopted for action INS. The weighted summation method is implemented for fusion. In addition, super-resolution is used in person INS.
\item F\_M\_E\_B\_WHU\_NERCMS\_3 \cite{2019whu}. It is the best run of WHU\_NERCMS team, which is the fourth place in 2019. In this run, face recognition, object detection, and object tracking are adopted for person INS. An action recognition model is adopted for action INS. The method of searching action based on candidate person shots is implemented for fusion.
\end{itemize}
As shown in Fig. \subref*{fig11a}, our method achieves the best performance on 10 topics and competitive performance on 10 topics. The performance is relatively poor on other 3 topics, i.e., topic \textit{9268}, \textit{9277}, and \textit{9278}.
For the 2020 INS task, we select the following representative runs:
\begin{itemize}
\item F\_M\_E\_E\_PKU\_WICT.20\_6 \cite{pengpku_wict}. It is the penultimate run of the PKU\_WICT team, which took first place in 2020. The technical scheme is the same as that of F\_M\_A\_E\_PKU\_ICST\_6 in 2019.
\item F\_M\_E\_A\_WHU\_NERCMS.20\_1 \cite{yangwhu}. It is the best run of WHU\_NERCMS team, which is the second place in 2020. In this run, face detection and face recognition models are adopted for person INS. And human-object interaction and general action recognition models are adopted for action INS. The method of searching action based on candidate person shots is implemented for fusion.
\item F\_M\_A\_E\_BUPT\_MCPRL.20\_3 \cite{2020bupt}. It is the best run of BUPT\_MCPRL team, which is the third place in 2020. Different from F\_M\_E\_E\_BUPT\_MCPRL\_1 in 2019, face detection, face recognition, and person tracking are adopted for person INS.
\item F\_M\_A\_E\_NII\_UIT.20\_3 \cite{2020nii}. It is the best run of NII\_UIT team, which is the fourth place in 2020. Different from F\_M\_A\_E\_NII\_Hitachi\_UIT\_2 in 2019, in action INS, face detection and object detection are adopted for human-object interaction action recognition. And in the fusion stage, the method of searching action based on candidate person shots is implemented.
\end{itemize}
As shown in Fig. \subref*{fig11b}, our method achieves the best performance on 6 topics and competitive performance on 5 topics. The performance is relatively poor on other 5 topics, i.e., topic \textit{9306}, \textit{9307}, \textit{9310}, \textit{9311}, and \textit{9315}.
Observing the results of both 2019 and 2020, we find that the relatively poor performance on those topics is caused by detection errors on some difficult actions. For example, the actions in topics \textit{9268, 9278}, and \textit{9315} are all ``go up or down stairs'', the actions in topics \textit{9267} and \textit{9277} are both ``open door and leave'', the actions in topics \textit{9306} and \textit{9307} are both ``holding cloth'', and the actions in topics \textit{9310} and \textit{9311} are both ``smoking cigarette''. In general, we propose a simple INS method; compared with other methods that rely on many tricks, it still achieves considerable performance. Its mAP surpasses that of the best runs of the second-place teams in both the TRECVID 2019 and 2020 INS tasks.
\section{CONCLUSION}
We study the IIP problem between person and action in P-A INS in movies. To address it, we propose a simple and effective spatio-temporal identity verification method. Our idea stems from the intuitive observation that an identity-consistent face and action usually share an overlapping spatial region between their respective detection bounding boxes.
The proposed method is evaluated on the large-scale TRECVID INS dataset. The experimental results verify the effectiveness and robustness of the proposed method, and its performance surpasses that of the second-place teams in both the TRECVID 2019 and 2020 INS tasks.
In the future, we will further improve our work from the following aspects:
Firstly, since current research on this problem is relatively rare, we can only infer identity consistency through location information; in the future, we will concentrate on improving the accuracy of identity verification by exploring more accurate verification methods, e.g., using other appearance-based features within the bounding boxes to infer identity consistency, or using human posture information to locate the face position within the action bounding boxes. Secondly, the observation and analysis of the experimental results show that, in P-A INS, the accuracy of action INS is far lower than that of person INS, so that P-A INS is mainly constrained by action INS; in the future, we will strive to improve the action INS algorithms. Thirdly, the current framework is designed for the simple case in which a single action is performed by a person; we will extend our method to more complex cases in which composite actions are performed by a person. Furthermore, we will extend our method to more combinatorial-semantic INS tasks, e.g., Person-Action-Scene INS.
\section*{Acknowledgment}
The authors would like to thank the BBC for providing the EastEnders dataset: Programme material \copyright~BBC.
\bibliographystyle{IEEEtran}
|
1,116,691,498,569 | arxiv | \section{Introduction}
The control of quantum mechanical systems is continuously gaining
interest in recent years \cite{Ladd2010,DiVincenzo2011,Simon2011,Aspuru-Guzik2012,Blatt2012,Bloch2012,Houck2012,Schulz2012},
mainly triggered by the pursuit of quantum information processing
and quantum simulations, which both have the potential of solving
computational problems more efficiently than classical computers \cite{Shor1994,DiVincenzo1995,Nielsen00}.
The realization of this potential requires precise control of large
quantum systems \cite{Hauke2012}. Controlling small quantum systems
has been explored extensively over the last years \cite{Ladd2010,DiVincenzo2011,Monz2011,Simon2011,Aspuru-Guzik2012,Blatt2012,Bloch2012,Houck2012,Schulz2012},
but control of large quantum systems is still very challenging. The
simulation of large quantum systems on classical computers is limited
to about 20 qubits if only pure states are considered \cite{Raedt04,zhang_modelling_2007}.
The typical classical algorithms can be also extended to calculate
the dynamics of mixed states if the initial state or the observables
are localized in a region smaller than the complete system. This approach
uses quantum parallelism of a single pure state evolution \cite{popescu_entanglement_2006,alvarez_quantum_2008}.
Present quantum technologies do not allow complete control of large
quantum states. So far, large quantum systems have only been addressed by using ensemble quantum simulations with nuclear magnetic resonance (NMR) experiments on solid-state systems \cite{Suter04,Suter06,Lovric2007}.
A main difficulty for controlling large quantum systems is the lack of individual addressing of the qubits; important efforts to overcome it are in progress with samples of ultracold atoms \cite{Bakr2009,Endres2011}.
For large systems decoherence is known to degrade the information
contained in the quantum state \cite{Zurek03}. Its rate increases
with the size of the quantum system, making the largest systems the
most susceptible to perturbations \cite{PhysicaA,JalPas01,Suter04,Suter06,Cory06,Lovric2007,sanchez_time_2007,Monz2011,Zwick2012}.
While these effects are known to affect the survival time of quantum
information, they also affect the distance over which quantum states
can be transmitted \cite{Pomeransky2004,Chiara2005,Keating2007,Burrell2007,Apollaro2007,Allcock2009,alvarez_nmr_2010,alvarez_localization_2011,alvarez_decoherence_2010,Zwick2011a,Zwick2012}.
Imperfections or disorder of the spin-spin couplings that drive the
state transfer can induce localization of the quantum information
\cite{Pomeransky2004,Burrell2007,Keating2007,Allcock2009} in a process
related to Anderson localization \cite{Anderson1958,anderson_local_1978}.
Whereas disorder-induced inhibition of the transport of non-interacting waves has been studied in various physical systems \cite{Hu2008,Chabe2008,Kondov2011,Jendrzejewski2012,Sperling2013}, the role of dipolar interactions is under theoretical investigation \cite{Aleiner2011}. Here we study a 3D spin network and demonstrate
experimentally a similar behavior by studying the localization effects
induced by the finite precision of quantum gate operations used for
transferring quantum states \cite{alvarez_nmr_2010,alvarez_localization_2011}.
Reducing decoherence is a main step towards implementing large scale
quantum computers. Several techniques have been proposed for this
purpose, including dynamical decoupling \cite{5916}, decoherence-free
subspaces \cite{3045}, and quantum error correction \cite{3921,6581}.
These methods perform very well for small quantum systems \cite{6013,monz:200503,biercuk_optimized_2009,Du2009,Alvarez2010c,Souza2011,Souza2011a},
but they can be very challenging to implement in large quantum systems.
However, based only on global manipulations of spins, some of these
methods were successfully implemented in large quantum systems with
thousands of qubits \cite{Suter06,Lovric2007,sanchez_time_2007}.
The decoherence times were extended by almost two orders of magnitude.
Therefore, understanding the decoherence effects and their sources
on large quantum systems would help to optimize the control techniques
for fighting decoherence.
In this paper, we focus on understanding the impact of perturbations
with dipolar disorder on large quantum systems by quantum simulations
with solid state nuclear spin systems. These interactions depend on
the distance $r$ between the spins as $1/r^{3}$. In particular we
study the length scale of localization induced by perturbing the Hamiltonian
that drives the spreading of the information. Based on our previous
results and methods developed in Refs. \cite{alvarez_nmr_2010,alvarez_localization_2011},
we prepared a system of nuclear spins 1/2. Starting with uncorrelated
spins we let them evolve into clusters of correlated spins with increasing
size. By introducing a controlled perturbation to the Hamiltonian
that generates these clusters, we find that the size of the system
tends towards a limiting value determined by a dynamic equilibrium
\cite{alvarez_nmr_2010,alvarez_localization_2011}: if the cluster
size is initially larger than this equilibrium value, it decreases
under the effect of the perturbed Hamiltonian, and it increases while
its size is below the stationary value. The equilibrium size decreases
with increasing strength of the perturbation.
The paper is organized as follows. Section II describes the quantum
simulator, the system and the initial state preparation. Section III
shows the quantum simulations. It is divided in two parts: III.A.
contains the unperturbed evolutions that drives the growth of the
clusters, and it desribes the technique for measuring the size of
the clusters. In section III.B., we discuss the perturbed evolutions,
we describe the perturbations and how we create them. In section IV,
we discuss the dynamical equilibrium with stationary cluster-size,
which is independent of the initial states with different cluster-sizes.
Lastly, section V gives the conclusions.
\section{The quantum simulator}
\subsection{System}
We consider a system of equivalent spins $I=1/2$ in the presence
of a strong magnetic field and subject to mutual dipole-dipole interaction.
The Hamiltonian of the system is
\begin{align}
\widehat{\mathcal{H}} & =\widehat{\mathcal{H}}_{z}+\widehat{\mathcal{H}}_{dip},
\end{align}
where $\widehat{\mathcal{H}}_{z}=\omega_{z}\sum_{i}\hat{I}_{z}^{i}$
represents the Zeeman interaction with $\omega_{z}=\hbar\gamma B_{0}$
as the Larmor frequency, and
\begin{align}
\widehat{\mathcal{H}}_{dip} & =\frac{1}{2}\sum_{i<j}\left[\frac{\vec{\mu}_{i}\cdot\vec{\mu}_{j}}{r_{ij}^{3}}-\frac{3\left(\vec{\mu}_{i}\cdot\vec{r}_{ij}\right)\left(\vec{\mu}_{j}\cdot\vec{r}_{ij}\right)}{r_{ij}^{5}}\right]
\end{align}
is the dipolar interaction \cite{Slichter}, typically found also
in dipolar quantum gases \cite{Lahaye2009} and Rydberg atoms \cite{Saffman2010}
of growing interest in the context of quantum information science.
The dipoles are $\vec{\mu}_{i}=\hbar\gamma(\hat{I}_{x}^{i},\hat{I}_{y}^{i},\hat{I}_{z}^{i})$
with $\hat{I}_{x}^{i},\hat{I}_{y}^{i}\mbox{ and }\hat{I}_{z}^{i}$
the spin operators and $\vec{r}_{ij}$ is the distance vector between
$\vec{\mu}_{i}$ and $\vec{\mu}_{j}$. In the presence of a strong
magnetic field, ($\omega_{z}\gg d_{ij}$), it is possible to truncate
$\widehat{\mathcal{H}}_{dip}$ with respect to $\widehat{\mathcal{H}}_{z}$.
The part that does not commute has negligible effect on the evolution
of the system \cite{Slichter}, while the secular part can be written
as
\begin{align}
\widehat{\mathcal{H}}_{dd} & =\sum_{i<j}d_{ij}\left[2\hat{I}_{z}^{i}\hat{I}_{z}^{j}-(\hat{I}_{x}^{i}\hat{I}_{x}^{j}+\hat{I}_{y}^{i}\hat{I}_{y}^{j})\right].
\end{align}
The coupling constants are
\begin{equation}
d_{ij}=\frac{1}{2}\frac{\gamma^{2}\hslash^{2}}{r_{ij}^{3}}\left(1-3\cos^{2}\theta_{ij}\right),\label{eq:dip_coupling}
\end{equation}
with $\theta{}_{ij}$ the angle between the vector $\vec{r}_{ij}$
and the magnetic field direction \cite{Slichter}. In a frame of reference
rotating at the Larmor frequency $\omega_{z}$ \cite{Slichter}, the
Hamiltonian of the spin system reduces to $\widehat{\mathcal{H}}_{dd}$.
This kind of Hamiltonians can be also simulated with quantum gases
\cite{Lahaye2009}.
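As a purely numerical illustration of Eq. (\ref{eq:dip_coupling}), the secular coupling constants can be evaluated directly from the spin (or molecule) positions. The short Python sketch below implements the equation literally; prefactors and unit conventions are left exactly as written in the text, and the constants are quoted only for illustration:
\begin{verbatim}
import numpy as np

GAMMA_H = 2.675e8    # proton gyromagnetic ratio (rad s^-1 T^-1)
HBAR = 1.055e-34     # reduced Planck constant (J s)

def dipolar_coupling(r_i, r_j, b_axis=np.array([0.0, 0.0, 1.0])):
    """Secular coupling d_ij of Eq. (4) for spins at positions r_i, r_j."""
    r_vec = np.asarray(r_j, float) - np.asarray(r_i, float)
    r = np.linalg.norm(r_vec)
    cos_theta = np.dot(r_vec, b_axis) / r   # angle to the magnetic field
    return 0.5 * GAMMA_H**2 * HBAR**2 / r**3 * (1.0 - 3.0 * cos_theta**2)
\end{verbatim}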
In our system, the spins are the protons of polycrystalline adamantane
and we performed all experiments on a home-built solid state NMR spectrometer
with a $^{\text{1}}$H resonance frequency of $\omega_{z}=300$ MHz
in Dortmund. As shown in Fig.\ \ref{fig:systemscheme}, the adamantane
molecule is nearly spherical and contains 16 protons. The molecules
tumble rapidly and isotropically in the solid phase. This fast tumbling
averages the intramolecular couplings to zero, but the interaction
between the molecules remains. However, the couplings between molecules
are averaged to a nonzero value that depends only on the relative
position of the molecules to which the spins belong. They are not
isotropic, and they have the normal orientation dependence of dipolar
couplings of Eq. (\ref{eq:dip_coupling}), but the distance is between
the positions of the molecules, not of the nuclei. ``Position'' would
in fact not be the center of mass of the molecules, but an effective
position that is the result of the averaging process. Figure \ref{fig:systemscheme}b
shows a scheme of the interaction between two molecules, where the spins do not interact with spins within the same molecule but interact with all spins of the neighboring molecule. All coupling strengths are averaged to the same value. In Fig. \ref{fig:systemscheme}c, however, the coupling strength between molecules depends on their separation, as shown by arrows with different color tones, and on the polar angle $\theta_{ij}$ with respect to the magnetic field. The randomness in the distribution of the distance vectors $\vec{r}$ between molecules is the source of disorder in the $d_{ij}$. The molecules are arranged in a face-centered-cubic lattice, where each adamantane molecule has $12$ first neighbors at a distance of $6.6\textrm{\AA}$, $6$ second neighbors at a distance of $9.34\textrm{\AA}$, $16$ neighbors at a distance of $11.4\textrm{\AA}$, and so on. The width of the NMR resonance line is $7.9$ kHz.
\begin{figure}
\includegraphics[width=1\columnwidth,height=0.8\columnwidth]{molecular_scheme}
\caption{\label{fig:systemscheme}(Color online) Spin system. (a) Adamantane
molecule with 16 protons (small gray spheres). The big green spheres
are carbon atoms, mostly $^{12}$C, with $^{13}$C in natural abundance (1.1\%). (b) The intramolecular interactions are averaged to zero due to very fast molecular tumbling, but the intermolecular interactions average to a non-zero value that depends on the distance between the molecules. The spins interact with spins of the other molecule with
the same averaged coupling strength, as shown with arrows. (c) Schematic
representation of the interactions between the molecules. The color
tones of the arrows represent the variation of the coupling strength
with the intermolecular distance, which varies as $1/r_{ij}^{3}$.}
\end{figure}
\subsection{Initial state preparation}
We perform quantum simulations starting from the high-temperature
thermal equilibrium \cite{Slichter}. Using the notation $\hat{I}_{z}=\sum_{i}\hat{I}_{z}^{i}$,
we can write the thermal equilibrium state as
\begin{eqnarray}
\rho_{0} & = & \exp\left\{ -\frac{\hbar\omega_{z}}{k_{\mathrm{B}}T}\hat{I}_{z}\right\} /\mathrm{Tr}\left\{ \exp\left\{ -\frac{\hbar\omega_{z}}{k_{\mathrm{B}}T}\hat{I}_{z}\right\} \right\} \\
& \approx & \left(\hat{1}+\frac{\hbar\omega_{z}}{k_{\mathrm{B}}T}\hat{I}_{z}\right)/\mathrm{Tr}\left\{ \hat{1}\right\} .
\end{eqnarray}
It is convenient to exclude the unity operator $\hat{1}$ since it
does not evolve in time and does not contribute to the observable
signal. The resulting state is $\hat{\rho}_{0}\propto\hat{I}_{z}$.
In this state, the spins are uncorrelated and the density operator
commutes with the Hamiltonian $\widehat{\mathcal{H}}_{dd}$. In order
to prepare a new initial state, we wait a time longer than $T_{1}$
to reinitialize the system state to $\rho_{0}$.
\section{Quantum simulations}
\subsection{Unperturbed evolution}
\subsubsection{Generating clusters}
The initial state $\rho_{0}$ of the uncorrelated spins commutes with
$\widehat{\mathcal{H}}_{dd}$. Therefore, to generate spin clusters
we use an NMR method developed by Pines and coworkers \cite{5105,Baum1985}.
It is based on generating an average Hamiltonian $\widehat{\mathcal{H}}_{0}$
that does not commute with the thermal equilibrium state
\begin{eqnarray}
\widehat{\mathcal{H}}_{0} & = & -\sum_{i<j}d_{ij}\left[\hat{I}_{x}^{i}\hat{I}_{x}^{j}-\hat{I}_{y}^{i}\hat{I}_{y}^{j}\right]\label{flip-flip}\\
 & = & -\frac{1}{2}\sum_{i<j}d_{ij}\left[\hat{I}_{+}^{i}\hat{I}_{+}^{j}+\hat{I}_{-}^{i}\hat{I}_{-}^{j}\right].
\end{eqnarray}
This Hamiltonian drives an evolution that converts the thermal initial
state into clusters of correlated spins whose density operator contains
terms of the form $\hat{I}_{u}^{i}...\hat{I}_{v}^{j}\hat{I}_{w}^{k}\left(u,v,w=x,y,z\right)$,
where the indexes $i,j,k$ identify the spins involved in the given
cluster. The cluster-size $K$ corresponds to the number of terms
in this product, which is equal to the number of spins. The cluster
size is related to the volume occupied by those spins. Experimentally,
the Hamiltonian $\widehat{\mathcal{H}}_{0}$ is generated with the
pulse sequence \cite{5105,Baum1985} shown in the upper part of Fig.
\ref{Flo:NMRseqH0-1}.
\begin{figure}
\includegraphics[bb=0bp 0bp 316bp 171bp,width=0.7\columnwidth]{Sequencesbasic}
\caption{(Color online) Pulse sequence for the quantum simulations. (a) The
effective Hamiltonian $\widehat{\mathcal{H}}_{0}$ is generated by
the periodic sequence of $\text{\ensuremath{\pi}/2}$ pulses. The
upper part of the figure shows the basic cycle, where $\Delta^{\prime}=2\Delta+\tau_{p}$,
$\Delta=2\mu$s and $\tau_{p}=2.8\mu$s is the $\text{\ensuremath{\pi}/2}$
pulse duration \cite{Baum1985}. The cycle time is then $\tau_{0}=57.6\mu$s. }
\label{Flo:NMRseqH0-1}
\end{figure}
In the usual computational or Zeeman basis $\left|\alpha_{1},\alpha_{2},...,\alpha_{K}\right\rangle $
($\alpha_{i}=\uparrow,\downarrow$) for a system of $K$ spins, we
write the states as $\left|M_{z},n_{M}\right\rangle $ where $M_{z}$
is the total magnetic quantum number, \emph{i.e.}, $\hat{I}_{z}\left|M_{z},n_{M}\right\rangle =M_{z}\left|M_{z},n_{M}\right\rangle $,
and $n_{M}$ distinguishes different states with the same $M_{z}$.
Figure \ref{fig:levelschemesAm}a shows a summary of these states,
whose degeneracy is $\max\left\{ n_{M}\right\} =K!/\left[\left(K/2-M_{z}\right)!\left(K/2+M_{z}\right)!\right]$.
\begin{figure}
\includegraphics[width=1\columnwidth]{levels_and_MQC}
\caption{\label{fig:levelschemesAm}(Color online) Energy level scheme for
a cluster of $K$ spins. (a) The different rows correspond to Zeeman
states $\left|\alpha_{1},\alpha_{2},...,\alpha_{K}\right\rangle $
($\alpha_{i}=\uparrow,\downarrow$) with different energy determined
by the quantum number $M_{z}$. The degeneracy of the levels in a
row is given on the rhs of each row. The solid curved arrows show
those transitions induced by the Hamiltonian $\mathcal{H}_{0}$ that
do not conserve $M_{z}$. The dotted curved arrows show the effect
of the $M_{z}$-conserving dipolar Hamiltonian $\mathcal{H}_{dd}$.
The straight colored arrows show the possible coherences generated
by $\mathcal{H}_{0}$. (b) The number of coherences of a cluster of
size $K$ are plotted as a function of $\Delta M$. The colored bars
gives those numbers and their color code corresponds to that of panel
(a).}
\end{figure}
The Hamiltonian $\widehat{\mathcal{H}}_{0}$ flips simultaneously
two spins, which are separated in space and have the same orientation.
Accordingly, the $z$-component of the magnetization $M_{z}$ changes
by $\Delta M_{z}=\pm2.$ This is shown with a curved solid arrow in
Fig. \ref{fig:levelschemesAm}a. At the same time, the number $K$
of correlated spins changes by $\Delta K=\pm1$. Therefore, starting
from the thermal equilibrium state, the evolution generates a density
operator where only elements $\rho_{ij}$ with $\Delta M=M_{z}(i)-M_{z}(j)=2n,\, n=0,1,2\dots$
are populated. Such elements $\rho_{ij}$ are called $\Delta M$ quantum
coherences and they are represented by colored straight arrows in
Fig. \ref{fig:levelschemesAm}a. The different colors represent different
multiple-quantum coherence (MQC) orders $\Delta M$. Off-diagonal
elements of the density matrix with $\Delta M=0$ represent zero-quantum
coherences and diagonal elements correspond to populations. Then,
an MQC spectrum $A(\Delta M)$ can be described by the number of coherences of the density matrix for a given $\Delta M$. A typical MQC spectrum is shown in Fig. \ref{fig:levelschemesAm}b. The initial state $\rho_{0}$ is diagonal and thus $A(\Delta M)\neq0$ only for $\Delta M=0$. However,
as time evolves, different spins interact with each other and other
coherence orders are excited. Then $A(\Delta M)$ starts to spread
as a manifestation of the increasing cluster-size. If we measure the
evolution of the operator $I_{z}$ as a function of time, $\left\langle I_{z}(t)\right\rangle =\mathrm{Tr}\left\{ I_{z}\rho(t)\right\} $,
its expectation value decays as a consequence of the excitation of
the coherences of the density matrix that do not contribute to the
observable $\left\langle I_{z}(t)\right\rangle $. The black squares
in Fig. \ref{fig:Fwddynamics} show the evolution of $\left\langle I_{z}(t)\right\rangle $
driven by the Hamiltonian $\widehat{\mathcal{H}}_{0}$.
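The qualitative behavior described above can be reproduced for a small spin system by direct numerical evolution under the Hamiltonian (\ref{flip-flip}). The following self-contained Python sketch uses a toy cluster of four spins with arbitrary couplings (it is not a simulation of adamantane) and computes $\left\langle I_{z}(t)\right\rangle$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]], complex) / 2
sz = np.array([[1, 0], [0, -1]], complex) / 2

def embed(op, site, n):
    """Single-spin operator acting on `site` in an n-spin Hilbert space."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4                                     # toy cluster of 4 spins
rng = np.random.default_rng(0)
d = rng.normal(size=(n, n))               # arbitrary couplings d_ij
Iz = sum(embed(sz, i, n) for i in range(n))
H0 = sum(-d[i, j] * (embed(sx, i, n) @ embed(sx, j, n)
                     - embed(sy, i, n) @ embed(sy, j, n))
         for i in range(n) for j in range(i + 1, n))

rho0 = Iz.copy()                          # deviation density operator
for t in np.linspace(0.0, 3.0, 7):
    U = expm(-1j * H0 * t)
    rho_t = U @ rho0 @ U.conj().T
    print(f"t = {t:4.1f}   <Iz> = {np.trace(Iz @ rho_t).real:8.3f}")
\end{verbatim}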
\begin{figure}
\includegraphics[width=1\columnwidth,height=0.6\columnwidth]{fwdalone}
\caption{\label{fig:Fwddynamics}Time evolution driven by $\mathcal{H}_{0}$
of the total magnetization $\left\langle I_{z}(t)\right\rangle $
of the system as a function of time. The black squares are the unperturbed
evolution, while the remaining symbols are perturbed evolutions according
to the legend.}
\end{figure}
We can see a fast decay on a time scale of $\approx100\,\mu$s, which
is followed by a quantum beat at about 400$\mu$s. Then the signal
saturates.
\subsubsection{Measuring cluster sizes}
\label{sub:MeasureClusterSize}
To determine the average number of correlated spins in the generated
clusters, we use the NMR technique developed by Baum \emph{et al.}
\cite{Baum1985}. In a system of $K$ spins, the number of transitions
with a given $\Delta M$ follows a binomial distribution
\begin{equation}
n\left(\Delta M,K\right)=\frac{\left(2K\right)!}{\left(K+\Delta M\right)!\left(K\text{\textminus}\Delta M\right)!}.
\end{equation}
For $K\gg1,$ the binomial distribution can be approximated with a
Gaussian
\begin{equation}
n\left(\Delta M,K\right)\propto\exp\left(\text{\textminus}\frac{\Delta M^{2}}{K}\right),
\end{equation}
whose half width at $e^{-1}$ is $\sigma=\sqrt{K}$ . Thus, the width
of the MQC spectrum reflects the cluster-size. We can determine the
effective size of the spin clusters in a given state by measuring
the distribution of the MQCs of its density operator $\rho$ as a
function of the coherence order $\Delta M$. They can be distinguished
experimentally by rotating the system around the $z-$axis: a rotation
$\hat{\phi}_{z}=e^{-i\phi\hat{I}_{z}}$ by $\phi$ changes the density
operator to
\begin{equation}
\hat{\rho}\left(\phi\right)=\hat{\phi}_{z}\hat{\rho}\hat{\phi}_{z}^{\dagger}=\sum_{\Delta M}\hat{\rho}_{\Delta M}^{{}}e^{i\Delta M\phi},\label{eq:rhophi}
\end{equation}
where $\hat{\rho}_{\Delta M}$ contains all the elements of the density
operator involving coherences of order $\Delta M$.
By following the sequence of Fig. \ref{Flo:NMRseqH0-1}, the system
evolution is first described by an evolution period of duration $N\tau_{0}$
under the Hamiltonian $\left(\mathcal{\widehat{H}}_{0}\right)_{\phi}=\hat{\phi}_{z}\mathcal{\widehat{H}}_{0}\hat{\phi}_{z}^{\dagger}$,
\emph{i.e.},
\begin{align}
\hat{\rho}_{0}\xrightarrow{\left(\mathcal{\widehat{H}}_{0}\right)_{\phi}N\tau_{0}}\hat{\rho}_{\phi}\left(N\tau_{0}\right) & =\hat{\phi}_{z}\hat{\rho}\left(N\tau_{0}\right)\hat{\phi}_{z}^{\dagger}\nonumber \\
& =\hat{\phi}_{z}e^{-i\widehat{\mathcal{H}}_{0}N\tau_{0}}\hat{\rho}_{0}e^{i\mathcal{\widehat{H}}_{0}N\tau_{0}}\hat{\phi}_{z}^{\dagger}\nonumber \\
& =\sum_{\Delta M}\hat{\phi}_{z}\hat{\rho}_{\Delta M}^{{}}\left(N\tau_{0}\right)\hat{\phi}_{z}^{\dagger}\nonumber \\
& =\sum_{\Delta M}\hat{\rho}_{\Delta M}^{{}}\left(N\tau_{0}\right)e^{i\Delta M\phi}.
\end{align}
The next part of the sequence of Fig. \ref{Flo:NMRseqH0-1} is an
evolution of the same duration $N\tau_{0}$ under $-\mathcal{\widehat{H}}_{0}$.
This causes an evolution backward in time that gives the following
density operator at the end of the sequence
\begin{multline}
\xrightarrow{\left(-\mathcal{\widehat{H}}_{0}\right)N\tau_{0}}\hat{\rho}_{f}\left(2N\tau_{0}\right)=e^{i\widehat{\mathcal{H}}_{0}N\tau_{0}}\hat{\rho}_{\phi}\left(N\tau_{0}\right)e^{-i\widehat{\mathcal{H}}_{0}N\tau_{0}}\\
=\sum_{\Delta M}\left[e^{i\mathcal{\widehat{H}}_{0}N\tau_{0}}\hat{\rho}_{\Delta M}^{{}}\left(N\tau_{0}\right)e^{-i\mathcal{\widehat{H}}_{0}N\tau_{0}}\right]e^{i\Delta M\phi}.
\end{multline}
If $\hat{I}_{z}$ is the NMR observable, then the signal becomes
\begin{align}
\left\langle \hat{I}_{z}\right\rangle \left(\phi,N\tau_{0}\right) & =\mbox{Tr}\left\{ \hat{I}_{z}\hat{\rho}_{f}\left(2N\tau_{0}\right)\right\} \nonumber \\
& =\mathrm{Tr}\left\{ e^{-i\mathcal{\widehat{H}}_{0}N\tau_{0}}\hat{\rho}_{0}e^{i\mathcal{\widehat{H}}_{0}N\tau_{0}}\hat{\rho}_{\phi}\left(N\tau_{0}\right)\right\} \nonumber \\
& =\mathrm{Tr}\left\{ \hat{\rho}\left(N\tau_{0}\right)\hat{\rho}_{\phi}\left(N\tau_{0}\right)\right\} \nonumber \\
& =\sum_{\Delta M}\mbox{\ensuremath{e^{i\phi\Delta M}}Tr}\left\{ \hat{\rho}_{\Delta M}^{2}\left(N\tau_{0}\right)\right\} \label{eq:unperturbed_signal-1}\\
& =\sum_{\Delta M}\mbox{\ensuremath{e^{i\phi\Delta M}}}A\left(\Delta M\right),
\end{align}
where $A(\Delta M)$ are the amplitudes of the MQ spectrum shown in
Fig. \ref{fig:levelschemesAm}b. To extract these amplitudes from
the experimental data, we measure the signal $\left\langle \hat{I}_{z}\right\rangle \left(\phi,N\tau_{0}\right)$
as a function of $\phi$ at a fixed time $N\tau_{0}$ and then perform
a Fourier transform with respect to $\phi$ as shown schematically
in Fig. \ref{fig:signalvsphi} (black squares). The cluster size is
then determined by the half-width at $e^{-1}$, $\sigma=\sqrt{K}$,
of $A(\Delta M)$.
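The same protocol can be mimicked numerically: rotate the evolved density operator about $z$ by a set of phases $\phi$, correlate it with the unrotated operator, and Fourier transform with respect to $\phi$. The following sketch (toy operators, not the experimental data pipeline) extracts $A(\Delta M)$ and a crude moment-based estimate of $K$:
\begin{verbatim}
import numpy as np

def mqc_spectrum(rho, Iz, n_phi=32):
    """A(Delta M) from phase encoding: S(phi)=Tr{rho rho(phi)}, FFT over phi."""
    eigvals, V = np.linalg.eigh(Iz)          # work in the Iz eigenbasis
    rho_d = V.conj().T @ rho @ V
    phis = 2 * np.pi * np.arange(n_phi) / n_phi
    signal = []
    for phi in phis:
        phase = np.exp(-1j * phi * eigvals)   # diagonal of exp(-i phi Iz)
        rho_phi = phase[:, None] * rho_d * phase.conj()[None, :]
        signal.append(np.trace(rho_d @ rho_phi))
    # index m corresponds to the coherence order Delta M (modulo n_phi)
    return np.abs(np.fft.fft(np.array(signal))) / n_phi

def cluster_size(A):
    """K from the width of A: a Gaussian exp(-DeltaM^2/K) has variance K/2."""
    orders = np.fft.fftfreq(len(A), d=1.0 / len(A))   # signed Delta M values
    variance = np.sum(A * orders**2) / np.sum(A)
    return 2 * variance
\end{verbatim}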
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{signalvsphip0-am}
\par\end{centering}
\caption{\label{fig:signalvsphi}(Color online) Scheme for determining the
MQC spectrum based on the sequence of Fig. \ref{Flo:NMRseqH0-1}.
The left panel is the measured $\left\langle I_{z}\right\rangle $
signal as a function of the encoding phase $\phi$ for $N\tau_{0}=230.4\mu$s.
After doing a Fourier transform the MQC spectrum $A(\Delta M)$ is
obtained (right panel). In the left panel, the black squares are the
unperturbed signal ($p=0)$, while the other colored symbols are the
signal observed for a given perturbation strength $p$.}
\end{figure}
\subsubsection{Growth of the clusters}
Figure \ref{fig:Growth} shows the time evolution of the measured
cluster size $K\left(N\tau_{0}\right)$ as a function of the total
evolution time $N\tau_{0}$. For the unperturbed Hamiltonian, the
cluster size appears to grow indefinitely \cite{alvarez_nmr_2010,alvarez_localization_2011}.
The figure also shows two examples of the $A(\Delta M)$ distributions
at different times. We can see that for long times $K\propto\left(N\tau_{0}\right)^{4.3}$.
Assuming that the cluster size $K$ is associated with the number of spins inside a volume, the associated length grows faster than under normal diffusion, for which the growth curve would be expected to be $K\propto\left(N\tau_{0}\right)^{3/2}$.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{Klustergrowlogog}
\par\end{centering}
\caption{(Color online) Time evolution of the cluster size of correlated spins
with the unperturbed Hamiltonian $\widehat{\mathcal{H}}_{0}$ (black
squares). Distributions of the squared amplitudes $A_{\Delta M}$
of density operator components as a function of the coherence order
$\Delta M$ are shown for two different cluster sizes. }
\label{fig:Growth}
\end{figure}
This evolution can be reversed completely by changing the Hamiltonian
from $\widehat{\mathcal{H}}_{0}$ to $-\widehat{\mathcal{H}}_{0}$.
Experimentally, this is achieved by shifting the phase of all RF pulses
by $\pm\pi/2$ \cite{5105}. The signal $\left\langle I_{z}\right\rangle \left(\phi,N\tau_{0}\right)$
at the end of the sequence of Fig. \ref{Flo:NMRseqH0-1} is a time
reversal echo for $\phi=0$. This means that under ideal conditions
$\sum_{M}A(\Delta M,N\tau_{0})=\mathrm{const}$ and we will write
$E\left(N\tau_{0}\right)$ for this quantity. The indefinite growth
of the cluster size, as well as the complete reversibility of the
time evolution are no longer possible if the effective Hamiltonian
deviates from the ideal form (\ref{flip-flip}).
\subsection{Effect of perturbations}
\subsubsection{Intrinsic perturbations}
Experimentally, the Hamiltonian $\widehat{\mathcal{H}}_{0}$ is generated
as an effective Hamiltonian by the pulse sequence of Fig. \ref{Flo:NMRseqH0-1}.
Because of experimental imperfections, it always deviates from the
ideal Hamiltonian (\ref{flip-flip}). As a result, the actual dynamics
deviates from the ideal one and, in particular, we cannot invert exactly
the perturbed Hamiltonian and thus reverse the time evolution perfectly.
The quantity $\sum_{\Delta M}A(\Delta M)$ is no longer conserved,
but decays with increasing evolution time.
The ideal form of the effective Hamiltonian (\ref{flip-flip}) can
only be created if the dipolar couplings $d_{i,j}$ are time independent
and the pulses are ideal and rotate globally all the spins. However,
if these couplings are time dependent, or the pulses are not ideal,
the effective Hamiltonian (averaged over the pulse cycle) contains
additional terms. We have partly characterized the spectral density
of local spin-spin fluctuations driven by $\widehat{\mathcal{H}}_{dd}$
in Refs. \cite{Alvarez2011,Alvarez2010c}; its correlation time was about $\tau_{d}=110\mu$s. Since the imperfections of $\widehat{\mathcal{H}}_{0}$
are correlated with the fluctuations driven by $\widehat{\mathcal{H}}_{dd}$,
we can use $\tau_{d}$ to estimate the correlation time of the fluctuations
of the $\widehat{\mathcal{H}}_{0}$ imperfections.
In Fig. \ref{fig:Growth} the cluster size grows fastest for times shorter than $\tau_{d}$. After $\tau_{d}$, the cluster-size growth slows down and seems to become exponential \cite{alvarez_nmr_2010,alvarez_localization_2011}. After 1 ms, the growth law changes its behavior again and the cluster size grows as a power law \cite{alvarez_nmr_2010,alvarez_localization_2011}. We cannot contrast this regime with the spectral density determined in Ref. \cite{Alvarez2011}, because it was not determined for the corresponding range of frequency fluctuations. While all these regimes cannot be rigorously delimited, the fact remains that the cluster size keeps growing for long times and faster than under normal diffusion. This `super-diffusion' may be a result of the long-range nature of the dipolar interaction \cite{Metzler2000,Mercadier2009}.
In previous works \cite{alvarez_nmr_2010,alvarez_localization_2011}
and in Fig. \ref{fig:Growth}, we determined the cluster growth by isolating it from the intrinsic decay generated by an imperfect effective Hamiltonian (\ref{flip-flip}). The imperfection effects are manifested in the overall decrease of the echo signal $E\left(N\tau_{0}\right)$; we removed them by normalizing the MQ spectra such that the total signal $\sum_{\Delta M}A(\Delta M)$ for $\phi=0$ is constant in time. Now, we measure instead the decay of $E\left(N\tau_{0}\right)=\sum_{\Delta M}A(\Delta M,N\tau_{0})$.
The results are shown in Fig. \ref{fig:echodecay}. If we consider
the imperfections on $\widehat{\mathcal{H}}_{0}$, the effective Hamiltonian
during the first part of the sequence of Fig. \ref{Flo:NMRseqH0-1}a
will be $\widehat{\mathcal{H}}_{fwd}=\widehat{\mathcal{H}}_{0}+\widehat{\mathcal{H}}_{e}$,
and $\widehat{\mathcal{H}}_{bwd}=-\widehat{\mathcal{H}}_{0}+\widehat{\mathcal{H}}_{e}^{\prime}$
during the time reversed part, where $\widehat{\mathcal{H}}_{e}$
is an average error Hamiltonian. The echo decay is then $E(N_{0}\tau_{0})=\mathrm{Tr}\left\{ \hat{\rho}^{\mathcal{H}_{fwd}}\left(N\tau_{0}\right)\hat{\rho}^{\mathcal{H}_{bwd}}\left(N\tau_{0}\right)\right\} $
and it quantifies the time reversal probability as a kind of Loschmidt
echo \cite{PhysicaA,JalPas01,Rhim1970}.
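This imperfect time reversal can be illustrated with the same kind of toy model: evolve forward under a perturbed Hamiltonian, invert only the sign of $\widehat{\mathcal{H}}_{0}$ for the backward period, and correlate the result with the initial state. A minimal sketch (arbitrary Hermitian matrices standing in for $\widehat{\mathcal{H}}_{0}$, the error term and the initial state) reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def loschmidt_echo(H0, H_err, rho0, t, p=0.1):
    """E(t) = Tr{rho_final rho0}/Tr{rho0^2} for an imperfect time reversal.

    Forward evolution under H0 + p*H_err, 'backward' under -H0 + p*H_err;
    a perfect reversal (p = 0) gives E(t) = 1 for all t.
    """
    U_fwd = expm(-1j * (H0 + p * H_err) * t)
    U_bwd = expm(-1j * (-H0 + p * H_err) * t)
    rho_final = U_bwd @ U_fwd @ rho0 @ U_fwd.conj().T @ U_bwd.conj().T
    return np.trace(rho_final @ rho0).real / np.trace(rho0 @ rho0).real

# toy usage: random Hermitian operators on a small Hilbert space
rng = np.random.default_rng(1)
def rand_herm(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2
H0, H_err, rho0 = rand_herm(8), rand_herm(8), rand_herm(8)
print([round(loschmidt_echo(H0, H_err, rho0, t), 3) for t in (0.5, 1.0, 2.0)])
\end{verbatim}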
\begin{figure}
\includegraphics[width=1\columnwidth,height=0.5\columnwidth]{echo}
\caption{\label{fig:echodecay}Time-reversal echo probability. The black squares
are the unperturbed ($p=0$) echo decay $E(N_{0}\tau_{0})$, measured
with the sequence of Fig. \ref{Flo:NMRseqH0-1}, as a function of
the evolution time $N_{0}\tau_{0}$. The left axis gives the time-reversal
probability normalized with respect to the signal at $N=0$. }
\end{figure}
The echo decay $E(N_{0}\tau_{0})$ shown in Fig. \ref{fig:echodecay}
starts as a Gaussian for times shorter than $N_{0}\tau_{0}\lesssim288\mu$s$\approx2.6\,\tau_{d}$.
For longer times it decays exponentially until $\approx920\mu$s where
a different decay law arises. These transitions between different
decay laws seem to be correlated with the growth-law transitions
discussed in the previous paragraphs. This resembles the typical
behavior of nuclear spins of a solution diffusing in an inhomogeneous
magnetic field with a given standard deviation \cite{Klauder1962},
or in restricted spaces in the presence of a magnetic field gradient
\cite{Kennan1994}. If the spins only interact with the magnetic field,
then, due to the diffusion process, they feel a different magnetic field
at different times, causing dephasing of the spin signal. The frequency
fluctuations have a correlation time given by the time needed to explore
the standard deviation of the changes of the magnetic field, which is
related to the inhomogeneity of the magnetic field or to the restriction
length. By applying a spin-echo sequence (a time reversal of the
spin precession by an inversion pulse that changes the sign of the
magnetic field interaction) one can partly reverse the effects of
the diffusion-driven dephasing \cite{Hahn1950,Carr1954}. The echo
sequence is analogous to the one of Fig. \ref{Flo:NMRseqH0-1}. If
the inversion pulse inducing the time reversal is applied at times
shorter than the correlation time of the frequency fluctuations of
the spins, the signal decay depends on the spatial displacement of
the spins, and the signal decays faster as time passes because
the dephasing rate increases with the displacement length. However,
for times longer than the correlation time of the frequency fluctuations,
the decay rate becomes independent of the displacement and becomes
exponential, similar to the time-reversal echo behavior shown in Fig.
\ref{fig:echodecay}.
\subsubsection{Controlled perturbation}
The echo decay in Fig. \ref{fig:echodecay} depends on the perturbation
$\widehat{\mathcal{H}}_{e}$. In order to study the sensitivity to
the perturbation strength, we introduced a perturbation $\widehat{\Sigma}$,
whose strength we can control experimentally and study the behavior
of the system as a function of the perturbation strength. We choose
the raw dipole-dipole coupling for this perturbation, $\widehat{\Sigma}=\mathcal{\widehat{H}}_{dd}$,
which has long range interactions with coupling strengths decaying
as $1/r^{3}$. We add this Hamiltonian to the ideal Hamiltonian $\widehat{\mathcal{H}}_{0}$
by concatenating short evolution periods under $\widehat{\mathcal{H}}_{dd}$
with evolution periods under $\widehat{\mathcal{H}}_{0}$. We label
the durations of the two time periods $\tau_{\Sigma}$ and $\tau_{0}$,
as shown in Fig. \ref{Flo:NMRseqH0-perturbed}.
\begin{figure}
\includegraphics[bb=30bp 235bp 570bp 330bp,clip,width=1\columnwidth]{Sequences-all}
\caption{(Color online) Sequence for generating a perturbed evolution. It is
achieved when $\tau_{\Sigma}\ne0$, where $\widehat{\Sigma}=\widehat{\mathcal{H}}_{dd}$
is the free evolution Hamiltonian.}
\label{Flo:NMRseqH0-perturbed}
\end{figure}
When the duration $\tau_{\mathrm{c}}=\tau_{0}+\tau_{\Sigma}$ of
each cycle is short compared to the inverse of the dipolar couplings
$d_{ij}$, the resulting evolution can be described by the effective
Hamiltonian
\begin{equation}
\widehat{\mathcal{H}}_{\mathrm{eff}}=(1-p)\widehat{\mathcal{H}}_{0}+p\widehat{\Sigma},\label{Heff-1}
\end{equation}
where the relative strength $p=\tau_{\Sigma}/\tau_{\mathrm{c}}$ of
the perturbation $\widehat{\Sigma}=\widehat{\mathcal{H}}_{dd}$ can
be controlled by adjusting the duration $\tau_{\Sigma}$. For the
quantum simulations, we compared the artificially perturbed evolution
of $\widehat{\mathcal{H}}_{\mathrm{eff}}$ with the $\widehat{\mathcal{H}}_{0}$
evolution with its intrinsic errors. While the intrinsic errors reduce
the signal or the overall fidelity, they do not cause localization
on the time scale of our experiments (see Fig. \ref{fig:Growth}).
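For concreteness, the construction of Eq. (\ref{Heff-1}) from the cycle
timings can be summarized by the following short sketch (illustrative only;
variable names are ours):
\begin{verbatim}
def effective_hamiltonian(H0, Sigma, tau_0, tau_sigma):
    # zeroth-order average Hamiltonian over one cycle of length tau_c,
    # valid when tau_c is short compared to the inverse dipolar couplings
    tau_c = tau_0 + tau_sigma
    p = tau_sigma / tau_c          # relative perturbation strength
    return (1 - p) * H0 + p * Sigma, p
\end{verbatim}
For instance, the value $p=0.108$ used below corresponds to a perturbation
period $\tau_{\Sigma}$ that occupies roughly one tenth of each cycle.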
Starting from thermal equilibrium, the state of the system at
the end of $N$ cycles is $\hat{\rho}^{\mathcal{H}_{\mathrm{eff}}}\left(N\tau_{\mathrm{c}}\right)=e^{-i\widehat{\mathcal{H}}_{\mathrm{eff}}N\tau_{\mathrm{c}}}\hat{\rho}_{0}e^{i\widehat{\mathcal{H}}_{\mathrm{eff}}N\tau_{\mathrm{c}}}.$
Taking into account now the complete sequence of evolutions given
by Fig. \ref{Flo:NMRseqH0-1}b, the experiment is thus a perturbed
forward evolution and an unperturbed backward evolution. The density
matrix at the end of the sequence is then $\sum_{M}\left[e^{i\mathcal{\widehat{H}}_{0}N\tau_{0}}\hat{\rho}_{M}^{\mathcal{H}_{\mathrm{eff}}}\left(N\tau_{\mathrm{c}}\right)e^{-i\mathcal{\widehat{H}}_{0}N\tau_{0}}\right]e^{iM\phi}$
as derived in \cite{alvarez_nmr_2010,alvarez_localization_2011}.
Thus the NMR echo signal, which is measured after the last backward
evolution $\exp\left\{ i\widehat{\mathcal{H}}_{0}N\tau_{0}\right\} $,
can be written as
\begin{multline}
\left\langle I_{z}\right\rangle \left(\phi,N\tau_{\mathrm{c}}\right)=\mbox{Tr}\left\{ \hat{I}_{z}\hat{\rho}_{f}\left(N\tau_{\mathrm{c}}+N\tau_{0}\right)\right\} \\
=\mathrm{Tr}\left\{ \hat{\rho}^{\mathcal{H}_{0}}\left(N\tau_{0}\right)\hat{\rho}_{\phi}^{\mathcal{H}_{\mathrm{eff}}}\left(N\tau_{\mathrm{c}}\right)\right\} .\label{eq:M-fidelities}
\end{multline}
In terms of the individual MQ coherences, this may be written as
\begin{equation}
\left\langle I_{z}\right\rangle \left(\phi,N\tau_{\mathrm{c}}\right)=\sum_{\Delta M}\mbox{\ensuremath{e^{i\phi\Delta M}}Tr}\left\{ \hat{\rho}_{\Delta M}^{\mathcal{H}_{0}}\left(N\tau_{0}\right)\hat{\rho}_{\Delta M}^{\mathcal{H}_{\mathrm{eff}}}\left(N\tau_{\mathrm{c}}\right)\right\}
\end{equation}
with the MQ coherence amplitudes $A(\Delta M)=\mbox{Tr}\left\{ \hat{\rho}_{\Delta M}^{\mathcal{H}_{0}}\left(N\tau_{0}\right)\hat{\rho}_{\Delta M}^{\mathcal{H}_{\mathrm{eff}}}\left(N\tau_{\mathrm{c}}\right)\right\} $.
For the ideal evolution ($p=0$), Eq. (\ref{eq:unperturbed_signal-1})
is recovered, where $A(\Delta M)$ correspond to the squared amplitudes
of the density operator elements $\hat{\rho}_{\Delta M}^{\mathcal{H}_{0}}\left(N\tau_{0}\right)$
with coherence order $\Delta M$. For the perturbed evolution, $(p\ne0)$,
they are reduced by the overlap of the actual density operator elements
$\hat{\rho}_{\Delta M}^{\mathcal{H}_{\mathrm{eff}}}\left(N\tau_{\mathrm{c}}\right)$
with the ideal ones. We extract these amplitudes by performing a Fourier
transformation with respect to $\phi$. Figure \ref{fig:Comparison of AM}
shows a comparison between the distributions $A(\Delta M)$ for different
evolution times for an unperturbed evolution (panel a) and a perturbed
evolution with $p=0.108$ (panel b). The main difference is that the
MQC spectrum of the perturbed evolution does not spread indefinitely
but its width reaches a limiting value \cite{alvarez_nmr_2010,alvarez_localization_2011}.
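The Fourier extraction of the coherence amplitudes can be sketched as follows
(assuming the phase $\phi$ is incremented over a full period in equidistant
steps; function and variable names are ours):
\begin{verbatim}
import numpy as np

def mqc_spectrum(signal_vs_phi):
    # signal_vs_phi[k] = <I_z>(phi_k) with phi_k = 2*pi*k/N_phi.
    # Since <I_z>(phi) = sum_M A(M) exp(i*phi*M), the DFT over phi returns A(M).
    N_phi = len(signal_vs_phi)
    A = np.fft.fft(signal_vs_phi) / N_phi
    orders = np.rint(np.fft.fftfreq(N_phi, d=1.0 / N_phi)).astype(int)
    return orders, np.real(A)
\end{verbatim}
The number of phase increments limits the highest coherence order that can be
distinguished to $N_{\phi}/2$, which in practice dictates how many increments
are needed as the clusters grow.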
\begin{figure}
\includegraphics[width=1\columnwidth]{Ams}
\caption{\label{fig:Comparison of AM}(Color online) MQC spectra for different
evolution times and perturbation strengths. Panel (a) shows the $A(\Delta M)$
spectrum for the unperturbed evolution ($p=0$) at different times.
Panel (b) shows the analogous spectra for $p=0.108$. Panel (b) shows
a manifestation of the localization effects, evidenced by the saturation
of the spreading of the MQC spectrum.}
\end{figure}
As discussed in section \ref{sub:MeasureClusterSize}, we determine
the cluster size for different evolution times and perturbation strengths
from the width of the measured MQC distributions. Figure \ref{fig:cluster_sizesvsp}
shows the cluster size (the number of correlated spins) as a function
of the evolution time $N\tau_{c}$. The main difference between the perturbed
time evolutions (colored symbols in Fig. \ref{fig:cluster_sizesvsp})
and the unperturbed evolution (black squares) is that the
cluster size does not grow indefinitely \cite{alvarez_nmr_2010,alvarez_localization_2011},
but saturates. It remains unclear if the cluster growth for the weakest
perturbation $p=0.009$ also saturates. We consider this saturation
as evidence of localization due to the perturbation; the localization
size decreases with increasing perturbation strength $p$.
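For reference, the corresponding cluster-size estimate can be obtained from
the width of $A(\Delta M)$; the sketch below assumes the common Gaussian
ansatz $A(\Delta M)\propto e^{-\Delta M^{2}/K}$, whereas the exact convention
used in this work is the one defined in Sec. \ref{sub:MeasureClusterSize}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def cluster_size_from_mqc(orders, amplitudes):
    # fit A(dM) = a * exp(-dM**2 / K) and return the fitted K
    gaussian = lambda dM, a, K: a * np.exp(-dM**2 / K)
    (a_fit, K_fit), _ = curve_fit(gaussian, orders, amplitudes,
                                  p0=(np.max(amplitudes), 10.0))
    return K_fit
\end{verbatim}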
\begin{figure}
\includegraphics[width=1\columnwidth]{Kvstloglog}
\caption{\label{fig:cluster_sizesvsp}(Color online) Time evolution of the
cluster size $K$ for different perturbation strengths. The cluster
size is related to the volume occupied by the $K$ spins. By increasing
the perturbation, the localization size is reduced. }
\end{figure}
\subsection{Quantum dynamics from different initial cluster sizes}
We have shown that the time evolution of the cluster size under perturbations
reaches a dynamical equilibrium state \cite{alvarez_nmr_2010,alvarez_localization_2011},
i.e., for a given perturbation strength, the size of the spin clusters
tends toward the same limiting value, independent of the initial condition.
In order to show this, we prepared a series of initial conditions
corresponding to different clusters sizes. Figure \ref{Flo:NMRseqdiffinitialstates}
shows the corresponding pulse sequence: The initial state preparation,
consisting of an evolution of duration $N_{0}\tau_{0}$ under the
unperturbed Hamiltonian $\widehat{\mathcal{H}}_{0}$, generates clusters
of size $K_{0}$.
\begin{figure}
\includegraphics[bb=0bp 570bp 605bp 770bp,clip,width=1\columnwidth]{fig9}
\caption{(Color online) Sequence for preparing different initial clusters sizes
by controlling $N_{0}$ and subsequently evolving them in the presence
of a perturbation. }
\label{Flo:NMRseqdiffinitialstates}
\end{figure}
During the subsequent perturbed evolution of duration $N\tau_{\mathrm{c}}$,
these initial clusters evolve and Eq. (\ref{eq:M-fidelities}) becomes
\begin{multline}
\left\langle I_{z}\right\rangle \left(\phi,N_{0}\tau_{0}+N\tau_{\mathrm{c}}\right)=\\
\mbox{Tr}\left\{ \hat{\rho}^{\mathcal{H}_{0}}\left(N\tau_{0},N_{0}\tau_{0}\right)\hat{\rho}_{\phi}^{\mathcal{H}_{\mathrm{eff}}}\left(N\tau_{c},N_{0}\tau_{0}\right)\right\} .\label{eq:Izdiffinitialstates}
\end{multline}
This method allows us to study the growth of the clusters by starting
from different sizes $K_{0}=K(N_{0}\tau_{0})$ and following the evolution
as a function of time and perturbation strength. Based on Eq. (\ref{eq:Izdiffinitialstates}),
we determined the MQC spectra $A(\Delta M)$ for different evolution
times. The insets of Fig. \ref{fig:dyneq-ams} show two examples
of them. From such curves, we determined the evolution of the cluster
size shown in Fig. \ref{fig:dyneq-ams}. The figure shows the evolution
of the cluster size for two perturbation strengths, starting from
different initial sizes. The dynamical equilibrium is clearly manifested
in the figure. The two insets show the $A(\Delta M)$ spectra starting
from $K_{0}=141$, for different evolution times. We can see that
if $K_{0}$ is smaller than the localization size, the MQC spectrum
spreads until it localizes (manifested by the parallel slopes), whereas
if $K_{0}$ is larger than the localization size, it shrinks until
saturation. We found that the localization size, as a function of the perturbation
strength, is roughly proportional to $1/p^{2}$ \cite{alvarez_nmr_2010,alvarez_localization_2011}.
During our experiment, the magnetization is uniform throughout the
sample, so the process does not lead to a spatial redistribution of
magnetization. Note, however, that here we measure the cluster size
of correlated spins, which is associated with a coherence length. Therefore,
this technique allows one to investigate the localization size, even when
the density profile of the excited clusters would exceed the localization
length.
\begin{figure}
\includegraphics[width=1\columnwidth]{dyneq}
\caption{\label{fig:dyneq-ams}(Color online) Time evolution of the cluster size
of correlated spins starting from different initial states. The experimental
data are shown for two different perturbation strengths given in the
legend. The solid black squares, red triangles and green rhombuses
are evolutions from an uncorrelated initial state. Empty symbols start
from an initial state with $K_{0}$ correlated spins (see legend).
The solid blue rhombuses and brown triangles start from an initial
$K_{0}=141$. The insets show the MQC spectra starting from $K_{0}=141$
as a function of time for the two perturbation strengths.}
\end{figure}
\section{Conclusions}
As a step toward the understanding of the quantum evolution of large
quantum systems, we have studied the spreading of information in a
system of nuclear spins. Decoherence has long been recognized to limit
the time for which quantum information can be used. Spatial disorder
also limits the distance over which quantum information can be transferred.
We have studied the role of a disordered dipolar interaction Hamiltonian
and shown that for larger values of the perturbation, the coherence
length of the cluster reaches a limiting value. Even though we do
not directly measure the spatial extent of the cluster, one might
speculate that the spatial extent and the number of correlated spins
are related via Volume$\sim K$. A connection to Anderson localization
of spins with dipolar $1/r^{3}$ interactions could thus be explored
\cite{Anderson1958,anderson_local_1978,Pomeransky2004,Burrell2007,Keating2007,Allcock2009}.
We also note that for lower values of the perturbation, the size of
the cluster grows faster than expected for a diffusive model. We will
investigate possible connections to Levy flights and Levy walks induced
by the long-range dipole-dipole couplings of our Hamiltonians. We developed
a method that allows one to quantify the time evolution of the cluster
size of correlated spins starting with single qubits. As we have shown,
the information can spread to clusters of several thousand qubits.
We have observed that the combination of an information spreading
Hamiltonian and a perturbation to it results in a quantum state that
becomes localized. The localization size decreases with increasing
strength of the perturbation and the resulting size appears to be
determined by a dynamic equilibrium \cite{alvarez_nmr_2010,alvarez_localization_2011},
a feature which might be adapted to other communities studying Anderson
localization.
The results presented here provide information about the spatial bounds
for transferring quantum information in large spin networks and indicate
how precise manipulations of large quantum systems have to be. The
sample used in this study is also an interesting system for studying
fundamental aspects of Anderson localization in the presence of long
range interactions.
\begin{acknowledgments}
This work was supported by the DFG through Su 192/24-1. G.A.A. acknowledges
financial support from the Alexander von Humboldt Foundation and from
the European Commission under the Marie Curie Intra-European Fellowship.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
As the simplest model for cooperative communications, relay channel
has attracted plenty of attention since 1971, when it was first
introduced by van der Meulen \cite{Van_Der_Meulen:1971}. In 1979,
Cover and El Gamal proposed two major coding schemes for the relay
channel \cite{Cover:1979}. These two schemes are widely known as
Decode-And-Forward (DAF) and Compress-And-Forward (CAF) today; see
\cite{Kramer:2005} for a recent review. These two coding schemes
represent two different types of cooperation. In DAF, the cooperation
is relatively obvious, where the relay decodes the message from the
transmitter, and the transmitter and the relay cooperatively transmit
the constructed common information to the receiver in the next
block. In CAF, the cooperation spirit is less easy to recognize, as
the message is sent by the transmitter only once. However, the relay
cooperates with the transmitter by compressing and sending its signal
to the receiver. The rate gains in these achievable schemes are due to
the fact that, through the channel from the transmitter to the relay,
{\it correlation} is created between the transmitter and the relay,
and this correlation is utilized to improve the rates.
In the DAF scheme, correlation is created and then utilized in a block
Markov coding structure. More specifically, a {\it full} correlation
is created by decoding the message fully at the relay, which enables
the transmitter and the relay to create any kind of joint distribution
for the channel inputs in the next block.
The shortcoming of the DAF scheme is that by forcing the
relay to decode the message in its entirety, it limits the overall
achievable rate by the rate from the transmitter to the relay. In
contrast, by not forcing a full decoding at the relay, the CAF scheme
does not limit the overall rate by the rate from the transmitter to
the relay, and may yield higher overall rates. The shortcoming of the
CAF scheme, on the other hand, is that the correlation offered by the
block coding structure is not utilized effectively, since in each
block the channel inputs $X$ and $X_1$ from the transmitter and the
relay are independent, as the transmitter sends the message only once.
However, the essence of good coding schemes in multi-user systems with
correlated sources (e.g., \cite{Cover:1980, Ahlswede:1983}) is to
preserve the correlation of the sources in the channel
inputs. Motivated by this basic observation, in this paper, we propose
a new coding scheme for the relay channel, that is based on the idea
of preserving the correlation in the channel inputs from the
transmitter and the relay. We will show that our new coding scheme may
be viewed as a more general version of the CAF scheme, and therefore,
our new coding scheme may potentially yield larger rates than the CAF
scheme. Our proposed scheme can be further combined with the DAF
scheme to yield rates that are potentially larger than those offered
by both DAF and CAF schemes, similar in spirit to \cite[Theorem
7]{Cover:1979}.
Our new achievability scheme for the relay channel may be viewed as a
variation of the coding scheme of Ahlswede and Han
\cite{Ahlswede:1983} for the multiple access channel with a correlated
helper. In our work, we view the relay as the helper because the
receiver does not need to decode the information sent by the
relay. Also, we note that the relay is a {\it correlated helper} as
the communication channel from the transmitter to the relay provides relay
for free a correlated version of the signal sent by the
transmitter. The key aspects of the Ahlswede-Han \cite{Ahlswede:1983}
scheme are: to preserve the correlation between the channel inputs of
the transmitter and the helper (relay), and for the receiver to decode
a ``virtual'' source, a compressed version of the helper, but not the
entire signal of the helper.
Our new coding scheme is in the form of block Markov coding. The
transmitter uses a superposition Markov code, similar to the one used
in the DAF scheme \cite{Cover:1979}, except in the random codebook
generation stage, a method similar to the one in \cite{Cover:1980} is
used in order to preserve the correlation between the blocks. Thus, in
each block, the fresh information message is mapped into a codeword
conditioned on the codeword of the previous block. Therefore, the overall codebook at the transmitter has a tree structure, where the codewords in block $l$ emanate from the codewords in block $l-1$. The depth of the tree is $B-1$. A similar strategy
is applied at the relay side where the compressed version of the
received signal is mapped into a two-block-long codeword conditioned
on the codeword of the previous block. Therefore, the overall codebook at the relay has a tree structure as well. As a result of this coding strategy, we successfully
preserve the correlation between the channel inputs of the transmitter
and the relay. However, unlike the DAF scheme where a {\it full}
correlation is acquired through decoding at the relay, our scheme
provides only a {\it partially} correlated helper at the relay by not
trying to decode the transmitter's signal fully. From
\cite{Cover:1980, Ahlswede:1983}, we note that the channel inputs are
correlated through the virtual sources in our case, and therefore, the
channel inputs between the consecutive blocks are correlated. This
correlation between the blocks will surely hurt the achievable
rate. The correlation between the blocks is the price we pay for
preserving the correlation between the channel inputs of the
transmitter and the relay within any given block.
At the decoding stage, we perform joint decoding for the entire $B$
blocks after all of the $B$ blocks have been received, which is
different compared with the DAF and CAF schemes. The reason for
performing joint decoding at the receiver is that due to the
correlation between the blocks, decoding at any time before the end of
all the $B$ blocks would decrease the achievable rate. We note that
joint decoding increases the decoding complexity and the delay as
compared to DAF and CAF, though neither of these is a major concern in
an information theoretic context. The only problem with the joint
decoding strategy is that it makes the analysis difficult as it
requires the evaluation of some mutual information expressions
involving the joint probability distributions of up to $B$ blocks of
codes, where $B$ is very large.
The analysis of the error events provides us three conditions
containing mutual information expressions involving infinite letters
of the underlying random process. Evaluation of these mutual
information expressions is very difficult, if not impossible. To
obtain a computable result, we lower bound these mutual informations
by noting some Markov structure in the underlying random process. This
operation gives us three conditions to be satisfied by the achievable
rates. These conditions involve eleven variables, the two channel
inputs from the transmitter and the relay, the two channel outputs at
the relay and the receiver and the compressed version of the channel
output at the relay, in two consecutive blocks, and the channel input
from the transmitter in the previous block.
We finish our analysis by revisiting the CAF scheme. We develop an
equivalent representation for the achievable rates given in
\cite{Cover:1979} for the CAF scheme. We then show that this
equivalent representation for the achievable rates for the CAF scheme
is a special case of the achievable rates in our new coding scheme,
which is obtained by a special selection of the eleven variables
mentioned above. We therefore conclude that our proposed coding scheme
yields potentially larger rates than the CAF scheme. More importantly,
our new coding scheme creates more possibilities, and therefore a
spectrum of new achievable schemes for the relay channel through the
selection of the underlying probability distribution, and yields the
well-known CAF scheme as a special case, corresponding to a particular
selection of the underlying probability distribution.
\section{The Relay Channel}
Consider a relay channel with finite input alphabets $\mathcal{X}$,
$\mathcal{X}_1$ and finite output alphabets $\mathcal{Y}$,
$\mathcal{Y}_1$, characterized by the transition probability
$p(y,y_1|x,x_1)$. An $n$-length block code for the relay channel
$p(y,y_1|x,x_1)$ consists of encoders $f, f_i$, $i=1,\dots,n$ and a
decoder $g$
\begin{align}
f&: \mathcal{M}\longrightarrow \mathcal{X}^n\nonumber\\
f_i&: \mathcal{Y}_1^{i-1}\longrightarrow \mathcal{X}_1, \qquad i=1,
\dots,n\nonumber\\
g&: \mathcal{Y}^n\longrightarrow \mathcal{M}\nonumber
\end{align}
where the encoder at the transmitter sends $x^n=f(m)$ into the
channel, where $m \in \mathcal{M}\triangleq \{1,2,\dots, M\}$; the encoder at the relay at the
$i$th channel instance sends $x_{1i}=f_i(y_1^{i-1})$ into the channel;
the decoder outputs $\hat{m}=g(y^n)$. The average probability of
error is defined as
\begin{equation}
P_e=\frac{1}{M}\sum_{m\in \mathcal{M}}Pr(\hat{m}\neq m|m
~\text{is transmitted})
\end{equation}
A rate $R$ is achievable for the relay channel $p(y,y_1|x,x_1)$ if for
every $0<\epsilon<1$, $\eta>0$, and every sufficiently large $n$,
there exists an $n$-length block code $(f,f_i, g)$ with $P_e\le
\epsilon$ and $\frac{1}{n}\ln M\ge R-\eta$.
\section{A New Achievability Scheme for the Relay Channel}\label{SL1}
We adopt a block Markov coding scheme, similar to the DAF and CAF
schemes. We have overall $B$ blocks. In each block, we transmit
codewords of length $n$. We denote the variables in the $l$th block
with a subscript of $[l]$. We denote $n$-letter codewords transmitted
in each block with a superscript of $n$. Following the standard relay
channel literature, we denote the (random) signals transmitted by the
transmitter and the relay by $X$ and $X_1$, the signals received at
the receiver and the relay by $Y$ and $Y_1$, and the compressed
version of $Y_1$ at the relay by $\hat{Y}_1$. The realizations of
these random signals will be denoted by lower-case letters. For
example, the $n$-letter signals transmitted by the transmitter and the
relay in the $l$th block will be represented by $x_{[l]}^n$ and
$x_{1[l]}^n$.
Consider the following discrete time stationary Markov process
$G_{[l]}\triangleq(X,\hat{Y}_1,X_1,Y, Y_1)_{[l]}$ for $l=0,1,\dots,B$,
with the transition probability distribution
\begin{align} \label{distr}
p\left((x,\hat{y}_1,x_1,y,y_1)_{[l]}|(x,\hat{y}_1,x_1,y,y_1)_{[l-1]}\right)&
\nonumber\\
=p(x_{[l]}|x_{[l-1]})p&(y_{1[l]},y_{[l]}|x_{[l]},x_{1[l]})p(x_{1[l]}|
\hat{y}_{1[l-1]})p(\hat{y}_{1[l]}|y_{1[l]}, x_{1[l]})
\end{align}
The codebook generation and the encoding scheme for the $l$th block,
$l=1,\dots, B-1$, are as follows.
\vspace*{0.1in}
\noindent
{\bf Random codebook generation:} Let $(x_{[l-1]}^n(m_{[l-1]}),
x_{1[l-1]}^n, y_{1[l-1]}^n, y_{[l-1]}^n)$ denote the
transmitted and the received signals in the $(l-1)$st block, where
$m_{[l-1]}$ is the message sent by the transmitter in the $(l-1)$st
block. An illustration of the codebook structure is shown in
Figure~\ref{codeS}.
\begin{enumerate}
\item For each $x_{[l-1]}^n(m_{[l-1]})$ sequence, generate $M$
sequences, where $x^n_{[l]}(m_{[l]})$, the $m_{[l]}$th sequence, is
generated independently according to $\prod_{i=1}^n
p(x_{i[l]}|x_{i[l-1]})$. Here, every codeword in the $(l-1)$st block
expands into a codebook in the $l$th block. This expansion is
indicated by a directed cone from $x_{[l-1]}^n$ to $x_{[l]}^n$ in
Figure~\ref{codeS}.
\item For each $x_{1[l-1]}^n$ sequence, generate $L$
$\hat{Y}_{1[l-1]}^n$ sequences independently uniformly distributed in
the conditional strong typical set\footnote{Strong typical set and
conditional strong typical set are defined in \cite[Definition 1.2.8,
1.2.9]{Csiszar:1981}. For the sake of simplicity, we omit the
subscript which is used to indicate the underlying distribution in
\cite{Csiszar:1981}.} $\mathcal{T}_{\delta}(x_{1[l-1]}^n)$ with
respect to the distribution $p(\hat{y}_{1[l-1]}|x_{1[l-1]})$. If
$\frac{1}{n}\ln L>I(Y_{1[l-1]};\hat{Y}_{1[l-1]}|X_{1[l-1]})$, for any
given $y_{1[l-1]}^n$ sequence, there exists one $\hat{y}_{1[l-1]}^n$
sequence with high probability when $n$ is sufficiently large such
that $(y_{1[l-1]}^n, \hat{y}_{1[l-1]}^n, x_{1[l-1]}^n)$ are jointly
typical according to the probability distribution $p(y_{1[l-1]},
\hat{y}_{1[l-1]}, x_{1[l-1]})$. Denote this $\hat{y}_{1[l-1]}^n$ as
$\hat{y}_{1[l-1]}^n(y_{1[l-1]}^n, x_{1[l-1]}^n)$. Here, the
quantization from $y_{1[l-1]}^n$ to $\hat{y}_{1[l-1]}^n$, parameterized
by $x_{1[l-1]}^n$, is indicated in Figure~\ref{codeS} by a directed
cone from $y_{1[l-1]}^n$ to $\hat{y}_{1[l-1]}^n$, with a straight line
from $x_{1[l-1]}^n$ for the parameterization.
\item For each $\hat{y}_{1[l-1]}^n$, generate one $x_{1[l]}^n$
sequence according to $\prod_{i=1}^n
p(x_{1i[l]}|\hat{y}_{1i[l-1]})$. This one-to-one mapping is indicated
by a straight line between $\hat{y}_{1[l-1]}^n$ and $x_{1[l]}^n$ in
Figure~\ref{codeS}.
\end{enumerate}
\begin{figure*}
\centering
\includegraphics[width=6.3in]{coding.eps}
\caption{Codebook structure.}
\label{codeS}
\end{figure*}
\vspace*{0.1in}
\noindent
{\bf Encoding:} Let $m_{[l]}$ be the message to be sent in this
block. If $(x_{[l-1]}^n(m_{[l-1]}), x_{1[l-1]}^n)$ are sent and $y_{1[l-1]}^n$ is received in the previous block,
we choose $(x^n_{[l]}(m_{[l]}), \hat{y}_{1[l-1]}^n(y_{1[l-1]}^n,
x_{1[l-1]}^n), x_{1[l]}^n)$ according to the code generation method
described above and transmit $(x^n_{[l]}(m_{[l]}), x_{1[l]}^n)$. In the
first block, we assume a virtual $0$th block, where $(x^n_{[0]},
x^{n}_{1[0]}, \hat{y}^n_{1[0]})$, as well as $x^n_{1[1]}$, are known by the transmitter, the
relay and the receiver. In the $B$th block, the transmitter randomly
generates one $x_{[B]}^n$ sequence according to $\prod_{i=1}^n
p(x_{i[B]}|x_{i[B-1]})$ and sends it into the channel. The relay,
after receiving $y_{1[B]}^n$, randomly generates one
$\hat{y}_{1[B]}^n$ sequence according to $\prod_{i=1}^n
p(\hat{y}_{1i[B]}|y_{1i[B]},x_{1i[B]})$. We assume that the
transmitter and the relay reliably transmit $x_{[B]}^n$ and
$\hat{y}_{1[B]}^n$ to the receiver using the next $b$ blocks, where
$b$ is some finite positive integer. We note that $B+b$ blocks are
used in our scheme, while only the first $B-1$ blocks carry the
message. Thus, the final achievable rate is
$\frac{B-1}{B+b}\frac{1}{n}\ln M$ which converges to $\frac{1}{n}\ln
M$ for sufficiently large $B$ since $b$ is finite.
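For instance, with $B=1000$ message-carrying blocks and $b=5$ auxiliary
blocks, the prefactor is $\frac{B-1}{B+b}=\frac{999}{1005}\approx0.994$,
i.e., the rate loss relative to $\frac{1}{n}\ln M$ is less than one percent
and vanishes as $B\rightarrow\infty$.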
\vspace*{0.1in}
\noindent
{\bf Decoding:} After receiving $B$ blocks of $y^n$ sequences, i.e.,
$y^n_{[1]},\dots, y^n_{[B]}$, and assuming $x^n_{1[1]}$, $x^n_{[B]}$ and
$\hat{y}_{1[B]}^n$ are known at the receiver, we seek
$x^n_{[1]},\dots, x^n_{[B-1]}$, $\hat{y}^n_{1[1]},\dots,
\hat{y}^n_{1[B-1]}, x^n_{1[2]},\dots, x^n_{1[B]}$, such that
\begin{equation}
\left(x^n_{[1]},\dots, x^n_{[B]},
\hat{y}^n_{1[1]},\dots, \hat{y}^n_{1[B]}, x^n_{1[1]},\dots, x^n_{1[B]},
y^n_{[1]},\dots,y^n_{[B]}\right)\in \mathcal{T}_{\delta}\nonumber
\end{equation}
according to the stationary distribution of the Markov process
$G_{[l]}$ in (\ref{distr}).
The differences between our scheme and the CAF scheme are as
follows. At the transmitter side, in our scheme, the fresh message
$m_{[l]}$ is mapped into the codeword $x_{[l]}^n$ conditioned on the
codeword of the previous block $x_{[l-1]}^n$, while in the CAF scheme,
$m_{[l]}$ is mapped into $x_{[l]}^n$, which is generated independent
of $x_{[l-1]}^n$. At the relay side, in our scheme, the compressed
received signal $\hat{y}_{1[l-1]}^n$ is mapped into the codeword
$x_{1[l]}^n$, which is generated according to
$p(x_{1[l]}|\hat{y}_{1[l-1]})$, while in the CAF scheme, $x_{1[l]}^n$
is generated independent of $\hat{y}_{1[l-1]}^n$. The aim of our
design is to preserve the correlation built in the $(l-1)$st block in
the channel inputs of the $l$th block. At the decoding stage, we
perform joint decoding for the entire $B$ blocks after all of the $B$
blocks have been received, while in the CAF scheme, the decoding of
the message of the $(l-1)$st block is performed at the end of the
$l$th block.
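To make the conditional codebook construction concrete, the following
schematic snippet (an illustration only; the finite alphabet, the names and
the interface are ours) draws the $M$ codewords of block $l$ letter-by-letter
from $p(x_{[l]}|x_{[l-1]})$, conditioned on the codeword of block $l-1$:
\begin{verbatim}
import numpy as np

def markov_codebook(M, n, prev_codeword, p_cond, rng=None):
    # p_cond[a][b] = p(x_[l] = b | x_[l-1] = a), a row-stochastic |X| x |X| matrix;
    # prev_codeword is the length-n codeword of block l-1
    rng = rng if rng is not None else np.random.default_rng()
    alphabet = np.arange(p_cond.shape[1])
    codebook = np.empty((M, n), dtype=int)
    for m in range(M):
        for i in range(n):
            codebook[m, i] = rng.choice(alphabet, p=p_cond[prev_codeword[i]])
    return codebook
\end{verbatim}
The independent codebooks of the CAF scheme correspond to the special case in
which every row of the conditional distribution equals the same marginal $p(x)$.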
\vspace*{0.1in}
\noindent
{\bf Probability of error:} When $n$ is sufficiently large, the
probability of error can be made arbitrarily small when the following
conditions are satisfied.
\begin{enumerate}
\item For all $j$ such that $1\le j\le B-1$,
\begin{align}
\frac{1}{n}(B-j)\ln M+(B-j)&I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]},
X_{[l]})\nonumber\\
&< I(X_{[j]}^{[B-1]}, \hat{Y}_{1[j]}^{[B-1]}, X_{1[j+1]}^{[B]};Y_{[j]}^{[B]},
\hat{Y}_{1[B]},X_{[B]}| X_{[j-1]}, X_{1[j]})\label{cond4a}
\end{align}
\item For all $j,k$ such that $1\le j<k\le B-1$,
\begin{align}
\frac{1}{n}(B-j) &\ln M+(B-k)I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]},
X_{[l]})\nonumber\\
&< I(X_{[j]}^{[B-1]}, \hat{Y}_{1[k]}^{[B-1]}, X_{1[k+1]}^{[B]};Y_{[j]}^{[B]},
\hat{Y}_{1[B]},X_{1[B]},\hat{Y}_{1[j]}^{[k-1]},X_{1[j+1]}^{[k]}| X_{[j-1]},
X_{[j]})\label{cond4b}
\end{align}
\item For all $j,k$ such that $1\le k<j\le B-1$,
\begin{align}
(j-k)I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]}&,X_{[l]})+\frac{1}{n}(B-j)\ln M
+(B-j)I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]}, X_{[l]})\nonumber\\
&<I(X_{[j]}^{[B-1]}, \hat{Y}_{1[k]}^{[B-1]}, X_{1[k+1]}^{[B]};Y_{[k]}^{[B]},
\hat{Y}_{1[B]},X_{[B]}|X_{[k]}^{[j-1]}, X_{1[k]})\label{cond4c}
\end{align}
\end{enumerate}
where the subscript $[l]$ on the left hand sides of (\ref{cond4a}), (\ref{cond4b}) and (\ref{cond4c}) indicates that the corresponding random variables belong to a generic sample $g_{[l]}$ of the underlying random process in (\ref{distr}).
The details of the calculation of the probability of error where these
conditions are obtained can be found in Appendix \ref{POE}. The derivation uses standard techniques from information theory, such as counting error events, etc.
In the above conditions, we used the notation $A_{[j]}^{[B]}$ as a
shorthand to denote the sequence of random variables $A_{[j]},
A_{[j+1]}, \dots, A_{[B]}$. Consequently, we note that the mutual
informations on the right hand sides of (\ref{cond4a}), (\ref{cond4b})
and (\ref{cond4c}) contain vectors of random variables whose lengths
go up to $B$, where $B$ is very large. In order to simplify the
conditions in (\ref{cond4a}), (\ref{cond4b}) and (\ref{cond4c}), we
lower bound the mutual information expressions on the right hand sides
of (\ref{cond4a}), (\ref{cond4b}) and (\ref{cond4c}) by those that
involve random variables that belong to up to three blocks. The detailed derivation of the
following lower bounding operation can be found in Appendix
\ref{LowB}.
The derivation uses standard
techniques from information theory, such as the chain rule of mutual
information, and exploiting the Markov structure of the involved
random variables.
\begin{enumerate}
\item For all $j$ such that $1\le j\le B-1$,
\begin{align}
(B-j) \left( \frac{1}{n}\ln M+I( \hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]}, X_{[l]})\right)&
\nonumber\\
< (B-j)I(Y_{[l]}&; X_{[l]},\hat{Y}_{1[l]}, X_{1[l]}|X_{[l-2]}, X_{1[l-1]},
Y_{[l-1]})\label{cond5a}
\end{align}
\item For all $j,k$ such that $1\le j<k\le B-1$,
\begin{align}
(k-j)\frac{1}{n}\ln M+(B-k)&\left(\frac{1}{n}\ln M+I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]}, X_{[l]})\right)
\nonumber\\
&< (k-j)I(X_{[l]}; Y_{[l]}, \hat{Y}_{1[l]}|X_{1[l]}, Y_{[l-1]},
\hat{Y}_{1[l-1]}, X_{1[l-1]}, X_{[l-2]})\nonumber\\
&\quad+(B-k)I(Y_{[l]};X_{[l]},\hat{Y}_{1[l]}, X_{1[l]}|X_{[l-2]}, X_{1[l-1]},
Y_{[l-1]})\label{cond5b}
\end{align}
\item For all $j,k$ such that $1\le k<j\le B-1$,
\begin{align}
(j-k)I(\hat{Y}_{1[l]};&Y_{1[l]}|X_{1[l]},X_{[l]})+(B-j)
\left(\frac{1}{n}\ln M +I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]}, X_{[l]})\right)\nonumber\\
&<(j-k)I(Y_{[l]};\hat{Y}_{1[l]}, X_{1[l]}|X_{[l]}, X_{[l-1]}, X_{1[l-1]},
Y_{[l-1]})\nonumber\\
&\quad +(B-j) I(Y_{[l]};X_{[l]},\hat{Y}_{1[l]}, X_{1[l]}|X_{[l-2]},
X_{1[l-1]}, Y_{[l-1]})\label{cond5c}
\end{align}
\end{enumerate}
We can further derive sufficient conditions for the above three
conditions in (\ref{cond5a}), (\ref{cond5b}) and (\ref{cond5c}) as
follows. We define the following quantities:
\begin{align}
C_1&\triangleq\frac{1}{n}\ln M+I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]}, X_{[l]})\\
C_2&\triangleq\frac{1}{n}\ln M\\
C_3&\triangleq I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]},X_{[l]})\\
D_1&\triangleq I(Y_{[l]};X_{[l]},\hat{Y}_{1[l]}, X_{1[l]}|X_{[l-2]},
X_{1[l-1]}, Y_{[l-1]})\\
D_2&\triangleq I(X_{[l]}; Y_{[l]}, \hat{Y}_{1[l]}|X_{1[l]}, Y_{[l-1]},
\hat{Y}_{1[l-1]}, X_{1[l-1]}, X_{[l-2]})\\
D_3&\triangleq I(Y_{[l]};\hat{Y}_{1[l]}, X_{1[l]}|X_{[l]}, X_{[l-1]},
X_{1[l-1]}, Y_{[l-1]})
\end{align}
Then, the sufficient conditions in (\ref{cond5a}), (\ref{cond5b}) and
(\ref{cond5c}) can also be written as,
\begin{enumerate}
\item For all $j$ such that $1\le j\le B-1$,
\begin{align}
(B-j)C_1 < (B-j)D_1\label{cond5d}
\end{align}
\item For all $j,k$ such that $1\le j<k\le B-1$,
\begin{align}
(k-j)C_2+(B-k)C_1 < (k-j)D_2+(B-k)D_1\label{cond5e}
\end{align}
\item For all $j,k$ such that $1\le k<j\le B-1$,
\begin{align}
(j-k)C_3+(B-j)C_1<(j-k)D_3+(B-j)D_1\label{cond5f}
\end{align}
\end{enumerate}
We note that, since the coefficients $(B-j)$, $(B-k)$, $(k-j)$ and $(j-k)$
appearing above are all non-negative, the above conditions are implied by the following three
conditions,
\begin{align}
C_1&< D_1\\
C_2&< D_2\\
C_3&< D_3
\end{align}
or in other words, by,
\begin{align}
R-\eta\le\frac{1}{n}\ln M&< I(X_{[l]}; Y_{[l]}, \hat{Y}_{1[l]}|X_{1[l]}, Y_{[l-1]},
\hat{Y}_{1[l-1]}, X_{1[l-1]}, X_{[l-2]}) \label{cond5g}\\
I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]},X_{[l]})&<I(Y_{[l]};\hat{Y}_{1[l]},
X_{1[l]}|X_{[l]}, X_{[l-1]}, X_{1[l-1]}, Y_{[l-1]})\label{cond5h}\\
R-\eta+I(\hat{Y}_{1[l]};Y_{1[l]}|X_{1[l]}, X_{[l]})&< I(Y_{[l]};X_{[l]},
\hat{Y}_{1[l]}, X_{1[l]}|X_{[l-2]}, X_{1[l-1]}, Y_{[l-1]}) \label{cond5i}
\end{align}
The expressions in (\ref{cond5g}), (\ref{cond5h}) and (\ref{cond5i})
give sufficient conditions to be satisfied by the rate in order for
the probability of error to become arbitrarily close to zero. We note
that these conditions depend on variables used in three consecutive
blocks, $l$, $l-1$ and $l-2$. With this development, we obtain the
main result of our paper which is stated in the following theorem.
\begin{Theo}\label{Achrate}
The rate $R$ is achievable for the relay channel, if the following
conditions are satisfied
\begin{align}
R\le& I(Y,\hat{Y}_{1};X| X_{1},\tilde{\hat{Y}}_{1},\tilde{Y}, \tilde{X}_{1},
\tilde{\tilde{X}})\label{cond6a}\\
I(\hat{Y}_1;Y_1|X_1,X)<& I(Y;\hat{Y}_{1},X_{1}|X, \tilde{Y},\tilde{X},
\tilde{X}_{1})\label{cond6b}\\
R+I(\hat{Y}_1;Y_1|X_1,X)\le& I(Y;\hat{Y}_{1}, X_{1}, X_{}|\tilde{Y},
\tilde{X}_{1},\tilde{\tilde{X}})\label{cond6c}
\end{align}
where
\begin{align}
\tilde{\tilde{X}}\longrightarrow(\tilde{X},\tilde{\hat{Y}}_1,&\tilde{X}_1,
\tilde{Y}, \tilde{Y}_1)\longrightarrow(X,\hat{Y}_1,X_1,Y, Y_1)
\label{cond6d}\\
p(x,\hat{y}_1,x_1,y,y_1,\tilde{x})&=p(\tilde{x},\tilde{\hat{y}}_1,
\tilde{x}_1, \tilde{y},\tilde{y}_1, \tilde{\tilde{x}}) \label{cond6e}\\
p(x,\hat{y}_1,x_1,y,y_1|\tilde{x},\tilde{\hat{y}}_1,\tilde{x}_1,
\tilde{y},\tilde{y}_1)&=p(x|\tilde{x})p(x_{1}|\tilde{\hat{y}}_{1})p(y_{1},y|x,x_{1})
p(\hat{y}_1|y_1,x_1)
\label{cond6f}
\end{align}
\end{Theo}
In the above theorem, the notations $\tilde{}$ and $\tilde{\tilde{}}$
are used to denote the signals belonging to the previous block and the
block before the previous block, respectively, with respect to a
reference block. Therefore, we see that the achievable rate in the
relay channel, using our proposed coding scheme, needs to satisfy
three conditions that involve mutual information expressions
calculated using eleven variables which satisfy the Markov chain
constraint in (\ref{cond6d}), the marginal distribution constraint in
(\ref{cond6e}), and the additional inter-block probability distribution constraint in (\ref{cond6f}).
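When the conditions of Theorem \ref{Achrate} are evaluated numerically for a
specific finite-alphabet distribution, every term reduces to a conditional
mutual information of (possibly grouped) variables. A minimal sketch of such
an evaluation, with the joint distribution supplied as a probability array, is:
\begin{verbatim}
import numpy as np

def cond_mutual_information(p_xyz):
    # I(X;Y|Z) in nats for a joint pmf p_xyz[x, y, z]; grouped variables such as
    # (X, Y1hat) are handled by flattening the corresponding axes into one
    p_xyz = p_xyz / p_xyz.sum()
    p_z = p_xyz.sum(axis=(0, 1))
    p_xz = p_xyz.sum(axis=1)
    p_yz = p_xyz.sum(axis=0)
    I = 0.0
    for x, y, z in np.ndindex(*p_xyz.shape):
        if p_xyz[x, y, z] > 0:
            I += p_xyz[x, y, z] * np.log(
                p_xyz[x, y, z] * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return I
\end{verbatim}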
In the next section, we will revisit the well-known CAF scheme
proposed in \cite{Cover:1979}. First, we will develop an equivalent
representation for the well-known representation of the achievable
rate in the CAF scheme. We will then show that the rates achievable by
the CAF scheme can be achieved with our proposed scheme by choosing a
certain special structure for the joint probability distribution of
the eleven random variables in Theorem~\ref{Achrate} while still
satisfying the three conditions in (\ref{cond6d}), (\ref{cond6e}) and
(\ref{cond6f}).
\section{Revisiting the Compress-And-Forward (CAF) Scheme}\label{revisit}
In \cite{Cover:1979}, the achievable rates for the CAF are
characterized as in the following theorem.
\begin{Theo}[\cite{Cover:1979}]\label{CAF}
The rate $R$ is achievable for the relay channel, if the following
conditions are satisfied
\begin{align}
R&\le I(X;Y,\hat{Y}_1|X_1)\label{cond1a}\\
I(Y_1;\hat{Y}_1|X_1,Y)&< I(X_1;Y) \label{cond1b}
\end{align}
where
\begin{equation}
p(x,x_1,y,y_1,\hat{y}_1)=p(x)p(x_1)p(y,y_1|x,x_1)p(\hat{y}_1|y_1,x_1)
\end{equation}
\end{Theo}
In the following theorem, we present three equivalent forms for
the rate achievable by the CAF scheme.
\begin{Theo} \label{CAF-equiv}
The following three conditions are equivalent.
\begin{enumerate}
\item For some $p(x,x_1,y,y_1,\hat{y}_1)=p(x)p(x_1)p(y,y_1|x,x_1)p(\hat{y}_1|y_1,x_1)$
\begin{align}
R-I(X;\hat{Y}_1|X_1)&\le I(X;Y|\hat{Y}_1,X_1)\label{cond1c}\\
I(Y_1;\hat{Y}_1|X_1)&< I(\hat{Y}_1;Y|X_1)+I(X_1;Y)\label{cond1d}
\end{align}
\item For some $p(x,x_1,y,y_1,\hat{y}_1)=p(x)p(x_1)p(y,y_1|x,x_1)p(\hat{y}_1|y_1,x_1)$
\begin{align}
R-I(X;\hat{Y}_1|X_1)&\le I(X;Y|\hat{Y}_1,X_1)\label{cond2a}\\
R-I(X;\hat{Y}_1|X_1)+I(Y_1;\hat{Y}_1|X_1)
&\le I(X, \hat{Y}_1;Y|X_1)+I(X_1;Y)\label{cond2b}
\end{align}
\item For some $p(x,x_1,y,y_1,\hat{y}_1)=p(x)p(x_1)p(y,y_1|x,x_1)p(\hat{y}_1|y_1,x_1)$
\begin{align}
R-I(X;\hat{Y}_1|X_1)&\le I(X;Y|\hat{Y}_1,X_1)\label{cond3a}\\
I(\hat{Y}_1;Y_1|X_1,X)&<
I(\hat{Y}_1;Y|X_1,X)+I(X_1;Y|X)\label{cond3b}\\
R-I(X;\hat{Y}_1|X_1)+I(Y_1;\hat{Y}_1|X_1)
&\le I(X, \hat{Y}_1;Y|X_1)+I(X_1;Y)\label{cond3c}
\end{align}
\end{enumerate}
\end{Theo}
The proof of the above theorem is given in Appendix
\ref{proof_CAF_equiv}.
We rewrite the final equivalent representation in (\ref{cond3a}),
(\ref{cond3b}) and (\ref{cond3c}) in the following more compact form
in order to compare the rates achievable with our proposed scheme and
the rates achievable with the CAF scheme in the next section.
\begin{align}
R&\le I(X;Y,\hat{Y}_1|X_1)\label{cond3d}\\
I(\hat{Y}_1;Y_1|X_1,X)&< I(\hat{Y}_1, X_1;Y|X)\label{cond3e}\\
R+I(Y_1;\hat{Y}_1|X_1,X)&\le I(X, \hat{Y}_1, X_1;Y)\label{cond3f}
\end{align}
\section{Comparison of the Achievable Rates with Our Scheme and
with the CAF Scheme}
We note that the conditions on the achievable rates with our scheme
given in Theorem~\ref{Achrate}, i.e., (\ref{cond6a}), (\ref{cond6b}),
(\ref{cond6c}), are very similar to the final equivalent form for the
conditions on the achievable rates with the CAF scheme, i.e.,
(\ref{cond3d}), (\ref{cond3e}), (\ref{cond3f}), except for two
differences. First, the channel inputs of the transmitter and the
relay, i.e., $X$ and $X_1$, in our proposed scheme can be correlated,
while in the CAF scheme they are independent, and second, in our
scheme there are some extra random variables on which the mutual information
expressions are conditioned, e.g., $\tilde{X},\tilde{X}_{1},
\tilde{Y}, \tilde{\hat{Y}}_{1}, \tilde{\tilde{X}}$. These two
differences come from our coding scheme where we introduced
correlation between the channel inputs of the transmitter and the
relay in a block, and between the variables across the blocks. The
correlation between the channel inputs from the transmitter and the
relay in any block is an advantage, as for channels which favor
correlation, this translates into higher rates. However, the
correlation across the blocks is a disadvantage as it decreases the
efficiency of transmission, and therefore the achievable rates. In
fact, the price we pay for the correlation between the channel inputs
in any given block is precisely the correlation we have created across
the blocks. For a given correlation structure, it is not clear which
of these two opposite effects will overcome the other. That is, the
rate of our scheme for a certain correlated distribution may be lower
or higher than the rate of the CAF scheme. However, we note that the
CAF scheme can be viewed as a special case of our proposed scheme by
choosing an independent distribution, i.e., by choosing the following
conditional distribution in (\ref{cond6f})
\begin{equation}
p(x,\hat{y}_1,x_1,y, y_1|\tilde{x},\tilde{\hat{y}}_1,\tilde{x}_1,\tilde{y},\tilde{y}_1)
= p(x)p(x_{1})p(y_{1},y|x,x_{1})p(\hat{y}_1|x_1,y_1)
\end{equation}
In this case, the expressions in Theorem~\ref{Achrate}, i.e.,
(\ref{cond6a}), (\ref{cond6b}), (\ref{cond6c}), degenerate into the
third equivalent form for the CAF scheme in Theorem~\ref{CAF-equiv},
i.e., (\ref{cond3d}), (\ref{cond3e}), (\ref{cond3f}). The above
observation implies that the maximum achievable rate with our proposed
scheme over all possible distributions is not less than the achievable
rate of the CAF scheme. Thus, we can claim that this paper offers more
choices in the achievability scheme than the CAF scheme, and that
these choices may potentially yield larger achievable rates than those
offered by the CAF scheme.
\section{Introduction}
Decentralized Finance (DeFi) refers to a composable and trust-minimized protocol stack that is built on public Blockchain networks and uses smart contracts to create a large variety of publicly accessible and interoperable financial services. In contrast to traditional financial infrastructure, these services are mostly non-custodial and can mitigate counterparty risk without the need for a centralized third party. Funds are locked in smart contracts and handled in accordance with predefined rules, as specified by the contract code.
Some examples of DeFi protocols include constant function market makers, lending-platforms, prediction markets, on-chain investment funds, and synthetic assets, \cite{schar2020defi}.
Most of these protocols issue corresponding tokens that represent some form of partial protocol ownership. Although the exact implementations, the feature sets, and the token holder rights vary greatly among these tokens, the reason for their existence can usually be traced back to two motives: \emph{Protocol Governance} and \emph{Protocol Economics}.
\begin{enumerate}
\item[] \textbf{Governance:} Tokens may entitle the holder to vote on contract upgrades or parameter changes. A token-based governance system allows for the implementation of new features. Moreover, the protocol can react to exogenous developments, upcoming interface changes, and potential bugs. \vspace{0.5em}
\item[] \textbf{Economics:} Most tokens have some form of implicit or explicit value-capture that allows the token holder to participate economically in the growth of the protocol. Value is usually distributed through a utility and burn mechanism (deflationary pressure) or some form of dividend-like payments. In many cases, initial token sales are used to fund protocol development and continuous release schedules to incentivize protocol usage.
\end{enumerate} \vspace{0.5em}
Considering the two main reasons for the existence of these tokens, it becomes apparent that token distribution is a critical factor in the protocols' decentralization efforts. Heavily centralized token allocations may result in situations where a small set of super-users can unilaterally change the protocol -- potentially at the expense of everyone else. Moreover, a heavily concentrated distribution may create an ecosystem where much of the value is captured by a small number of actors.
The authors are unaware of previous academic research on this subject. In August 2020, an analysis was circulated on social media, \cite{conti2020}. Simone Conti analyzed token contracts for their top holders and used this data to compute ownership concentration measures. However, the study was based on questionable assumptions and failed to account for the large variety of contract accounts. In particular, liquidity, lending, and staking pools, as well as token wrappers, were counted as individual entities. As these contract accounts are mere custodians and usually hold significant token amounts on behalf of a large set of economic agents, this approach clearly leads to spurious results.
There are previous studies that tackle similar research questions in the context of the Bitcoin network, \cite{gupta2017gini}, \cite{chohan2019cryptocurrencies}, \cite{kondor2014rich}. However, due to Bitcoin's relatively static nature and the separation of token ownership and protocol voting rights, the question is less pressing. Moreover, the fact that Bitcoin's standard client discourages address reuse makes these analyses much harder to perform. In a similar vein, a recent working paper conducted an analysis for the evolution of shares in proof-of-stake based cryptocurrencies, \cite{rosu2020evolution}.
The remainder of this paper is structured as follows: In Section \ref{sec:sampleSelection} we describe how the token and snapshot samples have been selected. Sections \ref{sec:dataPreparation} and \ref{sec:dataAnalysis} explore the data preparation and analysis respectively. In Section \ref{sec:discussion} we discuss the results, limitations and further research. In Section \ref{sec:conclusion} we briefly summarize our findings and the contribution of this paper.
\section{Sample Selection}
\label{sec:sampleSelection}
In this section, we describe the scope of our analysis. In particular, we discuss how tokens and snapshots have been selected.
The token selection determines which assets we observe. The snapshot selection determines at which point in time the blockchain state is observed.
\subsection{Token Selection}
To qualify for selection, tokens had to fulfill the following criteria:
\begin{enumerate}
\item The token must be a protocol token. It must incorporate some form of governance and/or utility mechanism. Pure stablecoins, token wrappers, or token baskets have not been considered.\footnote{Although wrappers and baskets will be considered for fund reallocation, as described in Section \ref{sec:dataPreparation}.}
\item The token must be \texttt{ERC-20} compliant and contribute towards decentralized financial infrastructure.
\item As of September 15th, 2020, the token must fulfill at least one of the following three conditions:
\begin{itemize}
\item[a)] Relevant supply with market cap $\geq$ 200 mm (MC).
\item[b)] Total value locked in the protocol's contracts (vesting not included) $\geq$ 300 mm (VL).
\item[c)] Inclusion in Simone Conti's table (SC).
\end{itemize}
\end{enumerate}
Market cap and value locked serve as objective and quantitative inclusion criteria. Tokens from Simone Conti's table have mainly been included to allow for comparisons.
Applying these criteria, we get a sample of 18 DeFi tokens. The tokens and the reason for their selection are summarized in Table \ref{tbl:tokens}. Please note that we have decided to exclude SNX since some of its features are not in line with standard conventions and make it particularly difficult to analyze.
\begin{table}[h!]
\caption{Token Selection}
\label{tbl:tokens}
\center
\begin{tabular}{lcccr}
\toprule
Token & MC & VL & SC & Deployment \\ \midrule
BAL & \xmark & \cmark & \cmark & 2020-06-20 \\
BNT & \xmark & \xmark & \cmark & 2017-06-10 \\
COMP & \cmark & \cmark & \cmark & 2020-03-04 \\
CREAM & \xmark & \cmark & \xmark & 2020-08-04 \\
CRV & \xmark & \cmark & \xmark & 2020-08-13 \\
KNC & \cmark & NA & \cmark & 2017-09-12 \\
LEND & \cmark & \cmark & \cmark & 2017-09-17 \\
LINK & \cmark & NA & \xmark & 2017-09-16 \\
LRC & \cmark & \xmark & \xmark & 2019-04-11 \\
MKR & \cmark & \cmark & \cmark & 2017-11-25 \\
MTA & \xmark & \xmark & \cmark & 2020-07-13 \\
NXM & \cmark & \xmark & \xmark & 2019-05-23 \\
REN & \cmark & \xmark & \cmark & 2017-12-31 \\
SUSHI & \cmark & \cmark & \xmark & 2020-08-26 \\
UMA & \cmark & \xmark & \xmark & 2020-01-09 \\
YFI & \cmark & \cmark & \cmark & 2020-07-17 \\
YFII & \cmark & \xmark & \xmark & 2020-07-26 \\
ZRX & \cmark & NA & \xmark & 2017-08-11 \\ \bottomrule
\end{tabular}
\end{table}
\subsection{Snapshot Selection}
To analyze how the allocation metrics change over time, we decided to conduct the analysis for various snapshots. The first snapshot is from June 15th, 2019. We then took monthly snapshots. The snapshots' block heights and timestamps are listed in Table \ref{tbl:snapshots}.
\begin{table}[h!]
\caption{Snapshot Selection}
\label{tbl:snapshots}
\center
\begin{tabular}{lll}
\toprule
Nr. & Block Height & Date \\ \midrule
1 & 7962629 & 2019-06-15 \\
2 & 8155117 & 2019-07-15 \\
3 & 8354625 & 2019-08-15 \\
4 & 8553607 & 2019-09-15 \\
5 & 8745378 & 2019-10-15 \\
6 & 8938208 & 2019-11-15 \\
7 & 9110216 & 2019-12-15 \\
8 & 9285458 & 2020-01-15 \\
9 & 9487426 & 2020-02-15 \\
10& 9676110 & 2020-03-15 \\
11& 9877036 & 2020-04-15 \\
12& 10070789 & 2020-05-15 \\
13& 10270349 & 2020-06-15 \\
14& 10467362 & 2020-07-15 \\
15& 10664157 & 2020-08-15 \\
16& 10866666 & 2020-09-15 \\ \bottomrule
\end{tabular}
\end{table}
\section{Data Preparation}
\label{sec:dataPreparation}
We use our token and snapshot selection from Section \ref{sec:sampleSelection} to analyze the allocation characteristics and observe how they change over time. All the necessary transaction and event data was directly extracted from a Go-Ethereum node using Ethereum-ETL, \cite{ethereumetl}. To construct accurate snapshots of token ownership, we must map each token holding to the address that actually owns and may ultimately claim the funds.
A simple example is the YFI/wETH Uniswap V2 liquidity pool: A naïve analysis would lead to the conclusion that the tokens are owned by the Uniswap exchange contract. However, this contract is just a liquidity pool with very limited control over the tokens it holds. Full control, and thus ownership of the tokens, remains with the liquidity providers. To account for this and to correctly reflect the state of token ownership, the tokens must be mapped proportionally from the exchange contract to the liquidity providers.
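The proportional reallocation itself is straightforward; a simplified sketch (function and variable names are ours) looks as follows:
\begin{verbatim}
def remap_pool_balance(pool_token_balance, lp_balances, total_lp_supply):
    # reassign the tokens held by a pool contract to the holders of its LP token,
    # proportionally to each holder's share of the LP-token supply
    return {
        holder: pool_token_balance * lp_amount / total_lp_supply
        for holder, lp_amount in lp_balances.items()
        if lp_amount > 0
    }
\end{verbatim}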
A more complex example illustrates the need for an iterative mapping process: YFI is deposited into a Cream lending pool, minting crYFI for the owner. This crYFI together with crCREAM is then deposited in a crYFI/crCREAM Balancer-like liquidity pool, minting CRPT (Cream pool tokens) for the depositor. Finally, these CRPT are staked in a Cream staking pool, which periodically rewards the staker with CREAM tokens but does not mint any ownership tokens. The actual YFI tokens, in this case, are held by the Cream lending pool. Trying to map them to their owners via the lending pool tokens (crYFI) will lead us to the liquidity pool and finally to the staking pool, where we can map the YFI to the accounts that staked the CRPT tokens. Each of these steps needs to be approached differently, as the underlying contracts have distinct forms of tracking token ownership. And further, these steps must also be performed in the correct order.
\subsection{Identifying and Categorizing Addresses}
\label{sub:identifyCategorize}
Addresses that do not have bytecode deployed on them - also called externally owned accounts or EOAs - cannot be analyzed further with on-chain data. To determine whether to include or exclude an EOA from our analysis, we use a combination of tags from etherscan.io, nansen.ai, and coingecko.com, \cite{etherscan}, \cite{nansen}, \cite{coingecko}. An EOA qualifies for exclusion if it is a known burner address, if it is owned by a centralized, off-chain exchange (CEX), or if the tokens on the account are disclosed by the developer team as FTIA (foundation, team, investor, and advisor) vesting. Every other EOA is assumed to be a single actor and is included in the analysis.
Addresses with deployed bytecode are smart contracts or contract accounts. These contracts are analyzed and categorized based on their ABI, bytecode, return values, and manual code review. Most implementations of multisig wallets are detected and treated equivalent to EOAs. Mappable smart contracts are described by the following categories:
\begin{enumerate}
\item[] \textbf{Liquidity Pools:} Decentralized exchanges, converters, token baskets, or similar contracts that implement one or more \texttt{ERC-20} liquidity pool tokens. The funds are mapped proportionally to the relevant liquidity pool tokens. \vspace{0.5em}
\item[] \textbf{Lending Pools:} Aave, Compound, and Cream offer lending and borrowing of tokens. Both the debts and deposits are mapped to their owners using protocol-specific events and archival calls to the contracts. \vspace{0.5em}
\item[] \textbf{Staking Contracts:} Staking contracts differ from liquidity pools in the sense that they usually do not implement an \texttt{ERC-20} token to track the stakes of the owners. We further differentiate if the token in question is used as a reward, as a stake, or both. Future staking rewards are excluded as they cannot be reliably mapped to future owners. The remaining tokens are mapped using contract-specific events for depositing and withdrawing stakes and rewards. For Sushi-like staking pools, we also account for a possible migration of staked liquidity pool tokens. \vspace{0.5em}
\item[] \textbf{Unique Contracts:} These contracts do not fit any of the above categories, but the tokens can still be mapped to their owners. Each contract is treated individually, using contract-specific events and archival calls where needed. A few examples include MKR governance voting, REN darknode staking, or LRC long-term holdings.
\end{enumerate} \vspace{0.5em}
Smart contracts which hold funds that are not owned by individual actors or where no on-chain mapping exists are excluded from the analysis. Most commonly, this applies to contracts that hold and manage funds directly owned by a protocol with no obvious distribution mechanism.
\subsection{Iterative Mapping Process for Tokens}
For each token and snapshot, we construct a token holder table listing the initial token endowments per address. We then proceed with an iterative mapping process as follows:
\begin{algorithm}[H]
\caption{Iterative Mapping Process}
\begin{algorithmic}[1]
\State $H \gets$ initial token holder table
\Repeat
\State sort $H$ by token value, descending
\ForAll{$h \in$ top 1,000 rows of $H$}
\State identify and categorize $h$
\State apply inclusion logic to $h$
\If{$h$ is mappable}
\State map $h$ according to its category
\EndIf
\EndFor
\Until{no mappable rows found in last iteration}
\State \textbf{assert} every row with more than 0.1\% of the total relevant supply is properly identified and categorized
\end{algorithmic}
\end{algorithm}
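A compact Python rendering of this loop is sketched below; \texttt{categorize()} and \texttt{split()} are trivial stand-ins for the contract-specific identification and mapping logic described in section \ref{sub:identifyCategorize}, and the example balances are made up.
\begin{verbatim}
# Sketch of the iterative mapping loop (Algorithm 1).
# categorize() and split() are trivial stand-ins for the
# contract-specific logic described above.
def categorize(addr):
    # hypothetical rule: "pool:" prefix marks mappable contracts
    return "pool" if addr.startswith("pool:") else "eoa"

def split(amount, owners):
    total = sum(owners.values())
    return {o: amount * s / total for o, s in owners.items()}

def iterative_mapping(holders, pool_owners, top_n=1000):
    holders = dict(holders)
    while True:
        top = sorted(holders, key=holders.get,
                     reverse=True)[:top_n]
        mapped = False
        for addr in top:
            if categorize(addr) == "eoa":
                continue
            for o, v in split(holders.pop(addr),
                              pool_owners[addr]).items():
                holders[o] = holders.get(o, 0) + v
            mapped = True
        if not mapped:
            return holders

holders = {"pool:amm": 700.0, "0xabc": 200.0, "0xdef": 100.0}
pools = {"pool:amm": {"0xabc": 1.0, "0xdef": 3.0}}
print(iterative_mapping(holders, pools))
# {'0xabc': 375.0, '0xdef': 625.0}
\end{verbatim}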
It is possible that tokens must be mapped from an address onto themselves. For most mappable contracts, these tokens are permanently lost\footnote{For example, if Uniswap liquidity pool tokens are directly sent to their liquidity pool address, they can never be retrieved.} and are thus treated as burned and are excluded from the analysis. For contracts where the tokens are not lost in this way, we implemented contract-specific solutions to avoid potential infinite recursion.
Every instance of a remapping from one address to another, called an adjustment, is tracked and assigned to one of five adjustment categories. There is no distinction between situations where the protocol token or a wrapped version thereof is remapped. The five adjustment categories are:
\begin{enumerate}
\item[] \textbf{Internal Staking:} Depositing the token into a contract that is part of the same protocol. This includes liquidity provision incentives, protocol stability staking, and some forms of governance voting.\vspace{0.5em}
\item[] \textbf{External Staking:} Depositing the token into a contract that is not part of the same protocol. This is most prominent for Sushi-like liquidity pool token staking with the intention of migrating the liquidity pool tokens, but it also includes a variety of other, external incentive programs.\vspace{0.5em}
\item[] \textbf{AMM Liquidity:} Depositing the token into a liquidity pool run by a decentralized exchange with some form of an automated market maker.\vspace{0.5em}
\item[] \textbf{Lending / Borrowing:} Depositing the token into a liquidity pool run by a decentralized lending platform or borrowing tokens from such a pool. \vspace{0.5em}
\item[] \textbf{Other:} Derivatives, 1:1 token wrappers with no added functionality, token migrations, and investment fund-like token baskets.
\end{enumerate} \vspace{0.5em}
\begin{table*}
\caption{Token Ownership Structure}
\label{tbl:holdings}
\begin{tabularx}{\textwidth}{@{}lr*{8}{C}c@{}}
\toprule
Token & & Owner \# & Top 5 & Top 10 & Top 50 & Top 100 & Top 500 & Top 50\% & Top 99\% & Gini 500 \\
\midrule
BAL$\dagger$ & Sep 20 & 16,661 & 27.6\% & 36.71\% & 77.3\% & 85.01\% & 94.86\% & 18 & 2,157 & 83.77\% \\
\midrule
\multirow{3}{*}{BNT} & Sep 20 & 49,294 & 15.69\% & 24.71\% & 49.5\% & 61.77\% & 80.95\% & 52 & 10,010 & 69.82\% \\
& Trend & +1.64\% & -5.43\% & -4.43\% & -2.94\% & -2.14\% & -1.06\% & +49.45\% & +7.52\% & -1.5\% \\
& $\sigma$ 12m & 2,882.0 & 0.0712 & 0.0764 & 0.0827 & 0.0669 & 0.0378 & 15.7 & 1,481.9 & 0.0487 \\
\midrule
COMP$\dagger$ & Sep 20 & 36,033 & 31.23\% & 43.79\% & 86.75\% & 96.15\% & 98.91\% & 14 & 564 & 90.36\% \\
\midrule
CREAM$\dagger$ & Sep 20 & 4,426 & 48.44\% & 57.11\% & 74.32\% & 81.77\% & 94.16\% & 6 & 1,549 & 83.04\% \\
\midrule
CRV$\dagger$ & Sep 20 & 11,076 & 56.92\% & 61.09\% & 73.23\% & 79.07\% & 90.27\% & 2 & 3,549 & 84.64\% \\
\midrule
\multirow{3}{*}{KNC} & Sep 20 & 92,780 & 24.93\% & 35.63\% & 57.73\% & 64.62\% & 77.99\% & 26 & 19,922 & 77.6\% \\
& Trend & +6.51\% & +3.36\% & +5.01\% & +2.14\% & +0.98\% & +0.04\% & -5.39\% & +15.74\% & +1.21\% \\
& $\sigma$ 12m & 12,589.4 & 0.0302 & 0.0594 & 0.0489 & 0.0336 & 0.0171 & 13.9 & 3,971.3 & 0.0374 \\
\midrule
\multirow{3}{*}{LEND} & Sep 20 & 174,861 & 36.67\% & 43.64\% & 61.44\% & 67.42\% & 80.05\% & 16 & 57,534 & 79.97\% \\
& Trend & +0.23\% & +33.26\% & +22.23\% & +11.35\% & +8.26\% & +3.74\% & -9.77\% & -4.7\% & +3.98\% \\
& $\sigma$ 12m & 3,066.9 & 0.1294 & 0.1389 & 0.1358 & 0.1258 & 0.0878 & 82.2 & 21,962.9 & 0.0933 \\
\midrule
\multirow{3}{*}{LINK} & Sep 20 & 233,128 & 7.18\% & 13.46\% & 37.0\% & 44.99\% & 61.23\% & 166 & 61,910 & 65.27\% \\
& Trend & +31.34\% & -0.5\% & -0.62\% & +1.72\% & +1.24\% & +0.08\% & -2.73\% & +16.99\% & +1.24\% \\
& $\sigma$ 12m & 52,004.9 & 0.0029 & 0.004 & 0.0221 & 0.0204 & 0.0067 & 25.0 & 12,158.7 & 0.0279 \\
\midrule
\multirow{3}{*}{LRC} & Sep 20 & 66,382 & 13.75\% & 20.06\% & 43.44\% & 62.11\% & 87.9\% & 66 & 5,251 & 66.36\% \\
& Trend & +1.49\% & -2.3\% & -1.68\% & -1.26\% & -1.14\% & -0.41\% & +3.23\% & +7.95\% & -0.74\% \\
& $\sigma$ 12m & 3,392.5 & 0.0236 & 0.0232 & 0.0261 & 0.0313 & 0.0163 & 6.1 & 811.7 & 0.0205 \\
\midrule
\multirow{3}{*}{MKR} & Sep 20 & 29,765 & 24.43\% & 36.49\% & 67.71\% & 79.49\% & 93.72\% & 20 & 3,918 & 79.26\% \\
& Trend & +8.31\% & -3.45\% & -2.12\% & -0.45\% & -0.19\% & -0.12\% & +4.5\% & +7.17\% & -0.22\% \\
& $\sigma$ 12m & 4,511.7 & 0.0503 & 0.0405 & 0.0175 & 0.0107 & 0.0057 & 3.0 & 587.0 & 0.01 \\
\midrule
MTA$\dagger$ & Sep 20 & 5,595 & 13.81\% & 22.97\% & 51.18\% & 63.51\% & 88.27\% & 47 & 2,090 & 65.93\% \\
\midrule
\multirow{3}{*}{NXM} & Sep 20 & 7,355 & 32.17\% & 44.3\% & 70.42\% & 78.51\% & 91.29\% & 14 & 2,817 & 81.14\% \\
& Trend & -36.69\% & -2.87\% & -2.71\% & -1.65\% & -1.12\% & -0.37\% & +18.09\% & -33.11\% & -0.24\% \\
& $\sigma$ 12m & 1,918.2 & 0.0704 & 0.0992 & 0.0869 & 0.0619 & 0.0238 & 2.7 & 747.1 & 0.0434 \\
\midrule
\multirow{3}{*}{REN} & Sep 20 & 22,770 & 10.45\% & 15.29\% & 32.81\% & 41.79\% & 67.85\% & 166 & 8,500 & 55.31\% \\
& Trend & +26.0\% & -3.12\% & -2.97\% & -2.98\% & -2.64\% & -1.5\% & +42.78\% & +25.39\% & -1.56\% \\
& $\sigma$ 12m & 4,673.4 & 0.0232 & 0.0313 & 0.0671 & 0.072 & 0.0579 & 38.4 & 1,718.0 & 0.0437 \\
\midrule
SUSHI$\dagger$ & Sep 20 & 22,740 & 25.64\% & 35.26\% & 58.31\% & 66.28\% & 83.78\% & 28 & 7,300 & 74.11\% \\
\midrule
UMA$\dagger$ & Sep 20 & 5,634 & 56.21\% & 75.64\% & 96.87\% & 98.21\% & 99.43\% & 5 & 240 & 95.61\% \\
\midrule
YFI$\dagger$ & Sep 20 & 14,296 & 11.52\% & 16.98\% & 37.32\% & 48.1\% & 73.75\% & 114 & 5,145 & 57.6\% \\
\midrule
YFII$\dagger$ & Sep 20 & 8,513 & 20.8\% & 27.78\% & 53.93\% & 66.23\% & 85.15\% & 40 & 3,278 & 72.18\% \\
\midrule
\multirow{3}{*}{ZRX} & Sep 20 & 161,285 & 23.71\% & 38.4\% & 59.39\% & 63.87\% & 72.91\% & 21 & 38,404 & 82.63\% \\
& Trend & +4.05\% & -1.15\% & -0.02\% & +0.76\% & +0.64\% & +0.22\% & -2.96\% & +6.28\% & +0.43\% \\
& $\sigma$ 12m & 16,372.0 & 0.0133 & 0.0056 & 0.0158 & 0.0147 & 0.0082 & 3.6 & 5,233.6 & 0.0132 \\
\midrule
\multicolumn{10}{l}{$\dagger$\footnotesize{Insufficient historical data.}} \\
\bottomrule
\end{tabularx}
\end{table*}
\section{Data Analysis}
\label{sec:dataAnalysis}
In this section, we will use our data set to analyze two questions: \emph{First}, we study the token ownership concentration and use our remapping approach to compute more accurate ownership tables and introduce new allocation metrics. These metrics are of particular interest, as highly concentrated token allocations could potentially undermine any decentralization efforts. \emph{Second}, we use our remapping and protocol usage data to introduce wrapping complexity, shortage, and token interaction measures. These measures essentially serve as a proxy and indicate the degree of integration into the DeFi ecosystem. Moreover, they may serve as an important measure for potential dependencies and the general stability of the system.
\subsection{Concentration of Token Ownership}
Table \ref{tbl:holdings} shows key metrics to illustrate the concentration of adjusted token ownership for the most recent snapshot, September 15th, 2020. The table is described below. Please note that \emph{relevant supply} refers to the sum of all adjusted and included token holdings, taking into account outstanding debts. Excluded token holdings are described in detail in section \ref{sub:identifyCategorize}.
\begin{enumerate}
\item[] \textbf{Owner \#:} Total number of addresses owning a positive amount or fraction of the token. \vspace{0.5em}
\item[] \textbf{Top n:} Percentage of the relevant supply held by the top $n$ addresses. \vspace{0.5em}
\item[] \textbf{Top n\%:} Minimum number of addresses owning a combined $n\%$ of the relevant supply. \vspace{0.5em}
\item[] \textbf{Gini 500:} The Gini coefficient, \cite{gini1912variabilita}, is used to show the wealth distribution inequality among the top 500 holders of each token. It can be formalized as \eqref{eq:gini}.
\begin{equation}
\label{eq:gini}
G_{500} = \frac{\sum_{i=1}^{500}\sum_{j=1}^{500}\lvert x_i-x_j \rvert}{2 \cdot 500 ^2\bar{x}}
\end{equation}
\end{enumerate} \vspace{0.5em}
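As a concrete illustration of \eqref{eq:gini}, the coefficient can be evaluated directly from the sorted top-500 balances; the short sketch below uses made-up balances.
\begin{verbatim}
# Direct evaluation of the Gini formula above (top-500 balances).
def gini_500(balances):
    x = sorted(balances, reverse=True)[:500]
    n = len(x)                       # 500 if enough holders
    mean = sum(x) / n
    num = sum(abs(a - b) for a in x for b in x)
    return num / (2 * n * n * mean)

# toy data: one large holder, 499 equal small holders
print(round(gini_500([1000.0] + [1.0] * 499), 4))  # 0.6651
\end{verbatim}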
For tokens with historical data of at least 12 months, we include the trend and standard deviation over this period. The trend represents the monthly change in percent according to an OLS regression line; the standard deviation shows the volatility of the trend.
\subsection{Ecosystem Integration}
Table \ref{tbl:wrap} presents key metrics of the tokens' integration into the DeFi ecosystem. The table is described below.
\begin{enumerate}
\item[] \textbf{Inclusion \%:} Relevant token supply divided by total token supply, excluding burned tokens. \vspace{0.5em}
\item[] \textbf{Wrapping Complexity:} Relevant adjustments divided by relevant supply. This includes only adjustments to non-excluded addresses\footnote{Some of the excluded addresses still deposit their tokens in mappable contracts; e.g. a CEX depositing their users' tokens in a staking pool. To prevent distortion, we exclude these funds from both the relevant supply and the relevant adjustments.} and may (in extreme cases) reach values above 1. The Wrapping complexity is formalized in \eqref{eq:wrap}, where $\pmb{\omega}:=(\omega_1,\dots, \omega_N)$ represents the vector of all relevant adjustments for a given token and $\bar{S}$ represents relevant supply. \vspace{0.5em}
\begin{equation}
\label{eq:wrap}
\frac{\sum_{i = 1}^{N} \left|\omega_{i} \right| }{\bar{S}}
\end{equation}
\item[] \textbf{Multi-Token Holdings:} Number of addresses with a minimum allocation of 0.1\% of this token and 0.1\% for at least $n\in\{1,2,3,4\}$ other tokens from our sample. \vspace{0.5em}
\item[] \textbf{Shorted:} Negative token balances in relation to relevant supply; i.e. value on addresses that used lending markets to borrow and resell the token, to obtain a short exposure, divided by $\bar{S}$.
\end{enumerate} \vspace{0.5em}
It is important to note that the inclusion ratio is predominantly dictated by the tokens' emission schemes. In some cases, the total supply is created with the \texttt{ERC-20} token deployment but held in escrow and only released over the following years. Consequently, we excluded this non-circulating supply.
\begin{table*}
\caption{Token Wrapping Complexity}
\label{tbl:wrap}
\begin{tabularx}{\textwidth}{llccccccccccccccc}
\toprule
\multirow{2}{*}{Token} & {} & \multirow{2}{*}{Inclusion \%} & {} &
\multicolumn{6}{c}{Wrapping Complexity} & {} & \multicolumn{4}{c}{Multi-Token Holdings} & {} &
\multirow{2}{*}{Shorted} \\
& & & & Jun-19 & Sep-19 & Dec-19 & Mar-20 & Jun-20 & Sep-20 & & 1+ & 2+ & 3+ & 4+ \\
\cline{1-1} \cline{3-3} \cline{5-10} \cline{12-15} \cline{17-17} \\
BAL & {} & 19.6\% & {} & - & - & - & - & - & 51.7\%& {} & 17.6\% & 5.5\% & 1.1\% & -& {} & 0.026\% \\
BNT & {} & 56.8\% & {} & 11.9\% & 11.9\% & 10.3\% & 20.8\% & 9.6\% & 10.2\%& {} & 8.7\% & 1.4\% & 0.7\% & 0.7\%& {} & - \\
COMP & {} & 36.0\% & {} & - & - & - & 0.0\% & 0.0\% & 7.5\%& {} & 8.4\% & 3.6\% & 2.4\% & -& {} & 0.004\% \\
CREAM & {} & 3.6\% & {} & - & - & - & - & - & 455.0\%& {} & 30.1\% & 11.8\% & 5.4\% & -& {} & 11.971\% \\
CRV & {} & 2.2\% & {} & - & - & - & - & - & 43.1\%& {} & 20.9\% & 9.9\% & 4.4\% & 2.2\%& {} & 0.761\% \\
KNC & {} & 70.7\% & {} & 0.2\% & 0.2\% & 0.4\% & 2.9\% & 1.8\% & 48.4\%& {} & 17.7\% & 9.4\% & 4.2\% & 2.1\%& {} & 0.123\% \\
LEND & {} & 69.3\% & {} & 0.0\% & 0.0\% & 0.1\% & 28.9\% & 50.7\% & 63.1\%& {} & 38.6\% & 19.3\% & 6.8\% & 2.3\%& {} & 0.039\% \\
LINK & {} & 31.3\% & {} & 0.0\% & 0.0\% & 0.0\% & 1.8\% & 2.2\% & 13.6\%& {} & 12.9\% & 5.9\% & 4.0\% & 2.0\%& {} & 0.383\% \\
LRC & {} & 58.8\% & {} & 5.3\% & 4.7\% & 7.4\% & 19.0\% & 21.4\% & 23.1\%& {} & 1.8\% & 0.6\% & - & -& {} & - \\
MKR & {} & 81.5\% & {} & 33.6\% & 23.2\% & 31.5\% & 28.6\% & 37.3\% & 41.5\%& {} & 7.2\% & 2.4\% & 0.8\% & -& {} & 0.036\% \\
MTA & {} & 3.1\% & {} & - & - & - & - & - & 73.8\%& {} & 15.1\% & 4.8\% & 1.8\% & -& {} & 2.631\% \\
NXM & {} & 95.1\% & {} & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 0.0\% & 66.7\%& {} & 17.0\% & 8.0\% & 2.0\% & -& {} & - \\
REN & {} & 61.3\% & {} & 0.0\% & 0.0\% & 0.0\% & 0.2\% & 12.1\% & 59.9\%& {} & 11.4\% & 4.4\% & 3.2\% & 1.3\%& {} & 0.035\% \\
SUSHI & {} & 48.2\% & {} & - & - & - & - & - & 109.9\%& {} & 28.9\% & 9.9\% & 1.7\% & -& {} & 0.844\% \\
UMA & {} & 53.8\% & {} & - & - & - & 0.0\% & 0.4\% & 3.0\%& {} & 4.3\% & - & - & -& {} & - \\
YFI & {} & 94.8\% & {} & - & - & - & - & - & 70.5\%& {} & 41.0\% & 14.1\% & 2.6\% & -& {} & 0.307\% \\
YFII & {} & 40.1\% & {} & - & - & - & - & - & 54.2\%& {} & 8.6\% & 4.3\% & 1.4\% & -& {} & - \\
ZRX & {} & 57.9\% & {} & 0.7\% & 1.9\% & 1.7\% & 4.5\% & 6.8\% & 32.8\%& {} & 19.0\% & 6.3\% & 4.8\% & 3.2\%& {} & 0.052\% \\
\bottomrule
\end{tabularx}
\end{table*}
Figure \ref{fig:wrapTime} shows the development of the tokens' wrapping complexities by adjustment category in a stacked time series. Note that the limits of the $y$-axis for the CREAM graph are adjusted to accommodate the higher total wrapping complexity. We have not included a graph for the SUSHI token, as there is only one snapshot available since its launch\footnote{On September 15th, 2020, the 109.9\% wrapping complexity of SUSHI is composed of 28.2\% internal staking, 49.3\% external staking, 30.1\% AMM liquidity, and 2.2\% lending/borrowing.}.
A wrapping complexity $>1$ means that the same tokens are wrapped several times. If, for example, a token is added to a lending pool, borrowed by another person, subsequently added to an AMM liquidity pool, and the resulting LP tokens staked in a staking pool, the wrapping complexity would amount to 4. Similarly, a single token could be used multiple times in a lending pool and thereby significantly increase the wrapping complexity.
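The worked example follows directly from \eqref{eq:wrap}: taking, for illustration, a relevant supply of a single token and four adjustments of one token each, the ratio evaluates to 4, as in the following sketch.
\begin{verbatim}
# Wrapping complexity: sum of |adjustments| / relevant supply.
def wrapping_complexity(adjustments, relevant_supply):
    return sum(abs(a) for a in adjustments) / relevant_supply

# one token deposited to lending, borrowed, added to an AMM
# pool, and the LP tokens staked -> four adjustments of 1
print(wrapping_complexity([1.0, 1.0, 1.0, 1.0], 1.0))  # 4.0
\end{verbatim}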
Note that most tokens have experienced a sharp increase in wrapping complexity in mid-2020. The extent to which each category is used depends on the characteristics of each token; internal staking, in particular, can take very different forms.
The ``other'' category is mainly driven by token migrations, where new tokens are held in redemption contracts, and 1:1 token wrappers.
\vspace{1em}
\begin{figure*}
\centering
\caption{Adjustment Graphs}
\label{fig:wrapTime}
\includegraphics[width=\textwidth ,height=\textheight,keepaspectratio]{adjustments.pdf}
\end{figure*}
\hspace{1em}
\section{Discussion}
\label{sec:discussion}
In this section, we discuss the results of our data analysis. We revisit Tables \ref{tbl:holdings} and \ref{tbl:wrap} as well as Figure \ref{fig:wrapTime} and discuss some noteworthy findings.
What seems to be true across the board is that DeFi tokens have a somewhat concentrated ownership structure. This is certainly an issue that merits monitoring, as it may potentially undermine many of the advantages this new financial infrastructure may provide.
For protocols with token-based governance models, the lower bound number of addresses needed to reach a majority, i.e., >50\%, may be of special interest. A relatively low threshold can indicate a higher likelihood of collusion and centralized decision making. In extreme cases, a few individuals could jointly enact protocol changes. However, since governance rules, the implementations of voting schemes, and security modules (e.g., timelocks) vary greatly between protocols, direct comparisons should only be made with great care.
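This lower bound corresponds to the ``Top 50\%'' column of Table \ref{tbl:holdings} and can be computed by sorting the adjusted balances and accumulating until the threshold is crossed, as in the sketch below (with made-up balances).
\begin{verbatim}
# Minimum number of addresses whose combined adjusted
# holdings exceed a given share of the relevant supply.
def min_addresses_for_share(balances, share=0.5):
    total, running, count = sum(balances), 0.0, 0
    for b in sorted(balances, reverse=True):
        running += b
        count += 1
        if running > share * total:
            return count
    return count

# made-up balances
print(min_addresses_for_share([40, 25, 15, 10, 5, 5]))  # 2
\end{verbatim}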
In addition to the decentralization and governance concerns, the study also shows DeFi's limitations with regard to transparency. While it is true that the DeFi space is extremely transparent in the sense that almost all data is available on-chain, it is very cumbersome to collect the data and prepare it in a digestible form. High nesting levels with multiple protocols and token wrappers involved will overwhelm most users and analysts and create the need for sophisticated analysis tools. The computation of accurate token ownership statistics and reliable dependency statistics is extremely challenging.
The problem becomes apparent when we compare our results to those of Simone Conti's analysis, \cite{conti2020}. Recall that Conti's analysis did not control for any account-specific properties. Our analysis shows that for most tokens, the token holdings of the top 5 addresses have thereby been overestimated by approximately 100\% and in some extreme cases by up to 700\%. The main source of these errors is the inclusion of token holdings from custodial and escrow contracts, such as liquidity, lending, and staking pools, as well as token wrappers, vesting contracts, migrations, burner addresses, and decentralized exchange addresses. We control for these accounts, split their holdings among the actual beneficiary addresses where possible, and exclude them where this is not possible. A closer comparison of the two tables reveals that the differences remain high for lower holder thresholds (i.e., top 10, top 50, and top 100). At the top 500 threshold, the differences are still significant, although to a much lesser degree.
In addition to the computation of more accurate holder tables, transparency is a precondition for the analysis of protocol interconnections and dependencies. For this purpose, we introduce the wrapping complexity and multi-token holding metrics. Wrapping complexity essentially shows how the token is used in the ecosystem. On the one hand, high wrapping complexities can be interpreted as an indicator for a token that is deeply integrated into the DeFi ecosystem.
On the other hand, high wrapping complexities may also be an indicator for convoluted and unnecessarily complex wrapping schemes that may introduce additional risks.
A potential indicator for how the market perceives this complexity is the shortage percentage, i.e., the value of all decentralized short positions in relation to the relevant supply. Interestingly, there is a high positive correlation between the two measures, which may at first glance suggest that wrapping complexity is interpreted as a negative signal. However, this would be a problematic interpretation since wrapping complexity is, in fact, at least partially driven by the shortage activity. Once we exclude the lending and borrowing as well as the ``other'' categories, the effect becomes less pronounced.
The DeFi space is developing very rapidly and constantly increases in complexity. Many new and exciting protocols have emerged in 2020. Novel concepts such as complex staking schemes started to play a role in most protocols. We see staking, or more specifically staking rewards, as a catalyst for the immense growth in the DeFi space. However, it is somewhat questionable whether this growth will be sustainable. Treasury pools will eventually run out of tokens, and uncontrolled token growth leads to an increase of the relevant token supply, which may create inflationary pressure.
While we are confident that our study provides interesting contributions with new metrics and processes to compute token ownership tables with unprecedented accuracy, we would still like to mention some of the limitations of our study and point out room for further extensions.
First, we perform no network analysis to potentially link multiple addresses of the same actor. This approach has likely led to an overestimation of decentralization. In a further research project, one could combine our data set and remapping method with address clustering.
Second, while the automated process may remap tokens for all contract accounts, our manual analysis was limited to contract accounts with a significant amount. We decided to set the threshold value at 0.1\% of relevant supply.
Third, we used various data sources to verify the labeling of addresses. In some unclear cases, we approached the teams directly for more information. However, this information cannot be verified on-chain. Consequently, this is the only part of the study for which we had to rely on information provided by third parties.
Further research may adopt the methods of this paper to analyze token characteristics in the context of governance models. The data could be used as a parameter for more realistic simulations and game-theoretical governance models. Novel metrics, such as the wrapping complexity, may be useful for studies concerned with the interdependencies and risk assessment of the DeFi landscape. Finally, the proposed readjustment categories may provide a good base for further research on how DeFi tokens are being used and the reasons for their spectacular growth.
\section{Conclusion}
\label{sec:conclusion}
\balance
In this paper, we analyze the holder distribution and ecosystem integration for the most popular DeFi tokens. The paper introduces a novel method that allows us to split and iteratively reallocate contract account holdings over multiple wrapping levels.
Our data indicate that previous analyses severely overestimated ownership concentration. However, in most cases, the majority of the tokens are still held by a handful of individuals. This finding may raise important questions regarding protocol decentralization and build a foundation for DeFi governance research.
We further investigated dependencies and ecosystem integration. Our analysis suggests that the complexity of the ecosystem has drastically increased. This increase seems to be consistent among most tokens. However, the main drivers vary significantly, depending on the nature of the token.
To conclude, DeFi is an exciting and rapidly growing new financial infrastructure. However, there is a risk that high ownership concentration and complex wrapping structures introduce governance issues, undermine transparency, and create extreme interdependencies that affect protocol robustness.
\bibliographystyle{IEEEtran}
\section*{Methods}
\subsection*{Experimental Methods}
The nanodevices were illuminated by a few-cycle, supercontinuum-based \supercite{Sell2009}, CEP-stabilized fiber laser source \supercite{Putnam2019}.
The source has a central wavelength of $\sim$1170~nm, with a pulse duration of $\sim$10~fs FWHM ($\sim$2.5 cycles), and repetition rate of 78~MHz.
The supercontinuum was generated from a highly non-linear germanosilicate fiber pumped by an Er:fiber-based laser oscillator and Er-doped fiber amplifier (EDFA) system and compressed with a SF10 prism compressor.
The CEP was locked to a fixed value for all measurements taken.
Pulse characterization of the laser source was performed by 2DSI~\supercite{Birge2006} and can be found in Supplementary Information section 6.
The spectrum of the laser source was measured with a fiber-coupled optical spectrum analyzer (Ando Electric Co., Ltd.).
More details about the supercontinuum source can be found in Ref.~\cite{Putnam2019}.
A dispersion-balanced Mach-Zehnder interferometer was used to generate the pulse pairs for the experiment.
An Inconel reflective neutral density (ND) filter of optical density (OD) 4 on a 2~mm thick BK7 substrate (Thorlabs) was placed in one arm and used to generate a weak signal pulse with pulse energy of $\sim$5~fJ.
An optical chopper was placed in this weak arm for lock-in amplification and detection.
The strong, driver arm had a pulse energy of $\sim$50~pJ.
A corresponding 2~mm thick BK7 window was placed in the driver arm to balance the dispersion between arms.
The added chirp from the glass was precompensated using the prism compressor.
The delay between the two pulses was controlled with a home built \SI{15}{\micro\meter} piezo stage.
A schematic of the experimental setup can be found in Supplementary Information Sec.~1.
The pulses were focused onto the chip using a Cassegrain reflector to a spot-size of \SI{2.25}{\micro\meter}~$\times$~\SI{4.1}{\micro\meter}~FWHM.
This spot-size allowed for illumination of 10-15 nanoantennas at a time.
The polarization of the pulses was parallel to the nanoantenna height axis.
A bias voltage of \SI{3}{V} was applied across the \SI{50}{nm} device gap.
The emitted current was collected and amplified by a transimpedance amplifier (FEMTO Messtechnik GmbH) in conjunction with a lock-in amplifier (Stanford Research Systems), with the optical chopper providing a 200~Hz modulation.
For each data set, 60 scans of 10 second acquisition time over the \SI{100}{fs} time window were performed. Post-processing was done in Matlab.
Each data set was Fourier transformed and windowed from \SI{150}{THz} to \SI{350}{THz} with a Tukey-window steepness of $\alpha=0.2$. The resulting output was averaged in the time domain.
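For reference, a minimal Python/NumPy sketch of the equivalent windowing and averaging steps is given below; the array of scans is synthetic and merely stands in for the measured delay scans, the actual processing having been performed in Matlab.
\begin{verbatim}
# Sketch of the post-processing: FFT each scan, apply a Tukey
# window between 150 and 350 THz, inverse FFT, average in the
# time domain.  "scans" is synthetic, standing in for the data.
import numpy as np
from scipy.signal import windows

dt = 0.25e-15                     # 0.25 fs steps, 100 fs window
t = np.arange(0, 100e-15, dt)
scans = np.random.randn(60, t.size)

freqs = np.fft.rfftfreq(t.size, d=dt)
band = (freqs >= 150e12) & (freqs <= 350e12)
win = np.zeros_like(freqs)
win[band] = windows.tukey(band.sum(), alpha=0.2)

spec = np.fft.rfft(scans, axis=1) * win
filt = np.fft.irfft(spec, n=t.size, axis=1)
mean_trace = filt.mean(axis=0)    # time-domain average
sigma = filt.std(axis=0, ddof=1)  # per-delay standard deviation
\end{verbatim}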
\subsection*{Device Fabrication}
We used a fabrication process based on that described in Ref.~\cite{Yang2019}.
The data presented in this work come from devices fabricated on two different chips. The devices were fabricated on BK7 substrates. The patterning was performed using an electron beam lithography process with PMMA A2 resist (Microchem), a writing current of 2~nA, a dose of \SI{5000}{\micro\coulomb\per\centi\meter^2}, and an electron beam energy of \SI{125}{\kilo\electronvolt}. To avoid charging, an Electra92 layer was spin-coated on top of the PMMA at 2 krpm and baked for 2 min at 90~$^{\circ}$C. Since these are large arrays, a proximity effect correction step was also included when designing the layout. After exposure, the resist was cold-developed in a 3:1 isopropyl alcohol to methyl isobutyl ketone solution for 60~s at 0~$^{\circ}$C. Then, a 2~nm adhesion layer followed by 20~nm of Au were deposited using electron beam evaporation. Ti was used as the adhesion layer for the 240~nm antenna chip and Cr for the 200~nm antenna chip. Subsequently, a liftoff process in a 65~$^{\circ}$C bath of n-methylpyrrolidone (NMP) (Microchem) was used to release the structures. Finally, we used a photolithography procedure to fabricate the contact pads for external electrical connections.
\subsection*{Electromagnetic Simulations}
The optical response of the plasmonic nanoantennas was simulated in a finite-element-method electromagnetic solver (COMSOL Multiphysics). The nanoantenna geometry was extracted from SEM images. The refractive index of gold was taken from Ref.~\cite{johnson1972optical}, and the refractive index of the glass substrate was fixed at 1.5 with negligible dispersion in the simulation spectral range. To simulate nanoantenna arrays, periodic boundary conditions were used. The normally incident plane wave was polarized along the nanotriangle axis (perpendicular to the nanowire). Perfectly matched layers were used to avoid spurious reflections at the simulation domain boundaries. The complex field response $\tilde{H}_{\mathrm{Pl.}}(\omega)= \tilde{E}^{(L)}(\omega)/\tilde{E}(\omega)$ was evaluated as a function of frequency. The field enhancement was defined as the ratio of the near-field at the nanotriangle tip to the incident optical field.
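For illustration, the sketch below applies a complex field response to an incident pulse spectrum to obtain the local near-field, $\tilde{E}^{(L)}(\omega)= \tilde{H}_{\mathrm{Pl.}}(\omega)\tilde{E}(\omega)$; the single-Lorentzian response and all numerical values used are hypothetical stand-ins for the tabulated simulation output.
\begin{verbatim}
# Apply a simulated field response to the incident spectrum:
# E_local(w) = H_pl(w) * E_in(w).  The Lorentzian H_pl below is
# a hypothetical stand-in for the tabulated COMSOL response.
import numpy as np

dt = 0.25e-15
t = np.arange(-50e-15, 50e-15, dt)
f0, tau = 250e12, 10e-15 / 1.763      # sech^2 driver (assumed)
E_in = np.cos(2 * np.pi * f0 * t) / np.cosh(t / tau)

w = 2 * np.pi * np.fft.rfftfreq(t.size, d=dt)
w0, gam, F = 2*np.pi*255e12, 2*np.pi*40e12, 10.0  # assumed
H_pl = F * gam * w0 / (w0**2 - w**2 - 1j * gam * w)

E_local = np.fft.irfft(H_pl * np.fft.rfft(E_in), n=t.size)
\end{verbatim}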
\section*{Data and Code Availability}
The data and code that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.
\printbibliography
\section*{Acknowledgements}
This material is based upon work supported by the Air Force Office of Scientific Research under award numbers FA9550-19-1-0065 and FA9550-18-1-0436. F.X.K. acknowledges support by the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013) through the Synergy Grant ‘Frontiers in Attosecond X-ray Science: Imaging and Spectroscopy’ (AXSIS) (609920) and by the Cluster of Excellence ‘CUI: Advanced Imaging of Matter’ of the Deutsche Forschungsgemeinschaft (DFG) - EXC 2056 - project ID 390715994. This work was also partially supported by a seed grant provided by SENSE.nano, a center of excellence powered by MIT.nano, as well as the PIER Hamburg – MIT Program. We thank Marco Colangelo and John Simonaitis for their scientific discussion and edits to the manuscript. We thank Navid Abedzadeh for taking photos of the chip.
\section*{Author Contributions}
F.R., M.R.B., and P.D.K. conceived the experiments. Y.Y. and D.C.M. simulated the optical response of the devices. M.T. fabricated the devices. M.R.B., F.R., and M.T. performed the experiments with assistance from P.D.K. F.R. derived the theory and simulated the results with input from P.D.K., M.R.B., and W.P.P. F.R. and M.R.B. analyzed the data with input from P.D.K., W.P.P., M.T., and Y.Y. M.R.B. and F.R. wrote the first draft of the manuscript and Supplementary Information with significant contributions from M.T., Y.Y., P.D.K., and W.P.P. K.K.B. and F.X.K. provided input and feedback throughout the process. All authors contributed to the writing and editing of the manuscript.
\section*{Competing Interests}
The authors declare no competing interests.
\clearpage
\setcounter{figure}{0}
\renewcommand{\figurename}{Fig.}
\renewcommand{\thefigure}{S\arabic{figure}}
\title{Supplementary Information for:\\
On-chip sampling of optical fields with attosecond resolution}
\author{\small{Mina R. Bionta$^{1,\dagger,*}$, Felix Ritzkowsky$^{2,\dagger,*}$, Marco Turchetti$^{1,\dagger}$, Yujia Yang$^1$, Dario Cattozzo Mor$^1$, William P. Putnam$^3$, Franz X. Kärtner$^2$, Karl K. Berggren$^1$, and Phillip~D.~Keathley$^{1,*}$}}
\address{$^1$Research Laboratory of Electronics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA\\
$^2$Deutsches Elektronen Synchrotron (DESY) \& Center for Free-Electron Laser Science, Notkestra\ss e 85, 22607 Hamburg, Germany\\
$^3$Department of Electrical and Computer Engineering, University of California, Davis, 1 Shields Ave, Davis, CA 95616, USA\\
$^\dagger$These authors contributed equally to this work.
}
\email{$^*$e-mail: [email protected]; [email protected]; [email protected]}
\maketitle
\section{Experimental Setup\label{sec:exp_setup}}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{SetupSketch.pdf}
\caption{\small\textbf{Experimental Setup}
Overview of the optical layout and signal detection chain of our experiments. Abbreviations: BS: beamsplitter, ND: neutral density filter, DAQ: data acquisition.
\label{fig:exp_setup}}
\end{figure}
A CEP-stable, 78 MHz Er:fiber-based supercontinuum laser source was used, with a central wavelength of $\sim$1170~nm and pulse duration of $\sim$10~fs FWHM. A dispersion-balanced Mach-Zehnder interferometer was used to generate the pulse pairs for the experiment (Fig.~\ref{fig:exp_setup}). An Inconel reflective neutral density (ND) filter of optical density (OD) 4 on a 2~mm thick BK7 substrate (Thorlabs) was placed in one arm and used to generate a weak signal pulse with pulse energy of $\sim$5~fJ. An optical chopper was placed in this weak arm for lock-in amplification and detection. The strong, driver arm had a pulse energy of $\sim$50~pJ. A corresponding 2~mm thick BK7 window was placed in the driver arm to balance the dispersion between arms. The added chirp from the glass was precompensated using the prism compressor. The delay between the two pulses was controlled with a home built \SI{15}{\micro\meter} piezo stage. The generated electron emission is collected and amplified by a transimpedance amplifier (FEMTO Messtechnik GmbH). The resulting voltage signal is demodulated by the Lock-In amplifier with the 200~Hz frequency of the chopper wheel and subsequently low-pass filtered.
\section{Discussion of Sampling Bandwidth\label{sec:BW}}
A strong local electric-field transient (driver) drives the electron emission at the metallic nanoantenna \supercite{Rybka2016,Putnam2017,Keathley2019,Ludwig2020}. For simplicity in this section we will be discussing the field driving the emission at a surface, $E_\mathrm{D}(t)$. When a weak electric-field waveform (signal) perturbs the emission process, the detected time-averaged current is proportional to the electric field of the small signal. The small-signal gain, as defined by $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$, is therefore dictated by the strong driving electric field waveform. To demonstrate the influence of the FWHM of the driving pulse duration on the sampling bandwidth, we calculated $\tilde{H}_\text{Det}(\omega)$ for 1-, 3-, 5-, 7-, and 9-cycle sech$^2$ driver pulses each with a central frequency of $\SI{250}{THz}$ and a peak field strength at the antenna surface of \SI{15}{GV m^{-1}} (see Fig.~\ref{fig:SamplingBW1}a).
The small-signal gain $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$ was calculated by assuming Fowler-Nordheim tunneling emission with a characteristic tunneling field of $F_t = \SI{78.7}{\volt\per\nano\meter}$. Fig.~\ref{fig:SamplingBW1}b shows the effective gate signal $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$ for the sampling process for each pulse duration. Only the single-cycle pulse (blue) exhibits an isolated peak. However, for driver pulses with an increasing number of cycles, satellite pulses start to emerge. For the 9-cycle case (green traces) the height of satellite pulses at $\SI{-4}{fs}$ and $\SI{4}{fs}$ approach the height of the center peak. Fig.~\ref{fig:SamplingBW1}c shows the Fourier transform of $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$.
The sampling bandwidth generated by a single-cycle field transient shows a smooth response from DC to \SI{1.8}{PHz} and corresponds to the Fourier transform of the isolated peak in Fig.~\ref{fig:SamplingBW1}b (blue traces). With increasing pulse duration, the bandwidth becomes increasingly modulated due to the destructive interference of the additional peaks in the gate signal. The modulation is periodic with the frequency $f_0$ of the driving electric field at $\SI{250}{THz}$ and exhibits maxima at the higher harmonics $n\cdot f_0$ for $n \in \mathbb{N}$. We highlight that although a 5-cycle driver waveform results in strong modulation of the sampling response $\tilde{H}_\text{Det}(\omega)$, the sampling response does not completely vanish at the minima (yellow traces). However, for driver pulses having a FWHM duration greater than five cycles, we find that the sampling response completely vanishes at the minima. This sampling technique allows for detection of higher harmonics of the driving signal regardless of the pulse duration, which originates from the fact that the individual peaks are deeply sub-cycle in duration \supercite{Keathley2019}.
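The gate and bandwidth curves discussed here can be sketched in a few lines, assuming the quasi-static Fowler--Nordheim form $\Gamma(E)\propto E^2\exp(-F_t/E)$ for fields pointing out of the emitter (and zero otherwise); the script below evaluates the small-signal gain along a sech$^2$ driver and Fourier transforms it to obtain the sampling response.
\begin{verbatim}
# Small-signal gain dGamma/dE along a sech^2 driver and the
# resulting sampling response, assuming quasi-static
# Fowler-Nordheim emission Gamma ~ E^2 exp(-F_t/E) for E > 0.
import numpy as np

F_t, E0, f0 = 78.7e9, 15e9, 250e12     # V/m, V/m, Hz
dt = 0.05e-15
t = np.arange(-60e-15, 60e-15, dt)
tau = 10e-15 / 1.763                   # 10 fs FWHM sech^2
E_drv = E0 * np.cos(2*np.pi*f0*t) / np.cosh(t / tau)  # CEP = 0

def rate(E):
    Ep = np.clip(E, 1e-3, None)        # avoid division by zero
    return np.where(E > 0, Ep**2 * np.exp(-F_t / Ep), 0.0)

dE = 1e6                               # small probe field, V/m
gain = (rate(E_drv + dE) - rate(E_drv - dE)) / (2 * dE)

H_det = np.fft.rfft(gain)              # sampling response
freqs = np.fft.rfftfreq(t.size, d=dt)
\end{verbatim}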
\begin{figure}[h]
\centering
\includegraphics{BandwidthDiscussion2.pdf}
\caption{\small\textbf{Sampling bandwidth as a function of pulse duration.}
\textbf{a,} Electric-field transients for near-infrared pulses with a FWHM duration of 1-, 3-, 5-, 7-, and 9-cycles and a central frequency of $\SI{250}{THz}$.
\textbf{b,} Calculation of $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$ for the field transients shown in \textbf{a} and assuming $F_t = \SI{78.7}{\volt\per\nano\meter}$ as the characteristic tunneling field.
\textbf{c,} Fourier transform of $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$ showing the accessible sampling bandwidth provided by the field transients shown in \textbf{a}.
\label{fig:SamplingBW1}}
\end{figure}
\section{Carrier-Envelope Phase Discussion\label{sec:CEP}}
\begin{figure}[h]
\centering
\includegraphics{SamplingPhase3.pdf}
\caption{\small\textbf{Sampling response as a function of CEP.}
\textbf{a,} Calculated sech$^2$ pulse centered at \SI{250}{THz} with a pulse duration of \SI{10}{fs} (2.5 cycle), a peak electric field of \SI{15}{GV m^{-1}}, and a $\Phi_{\mathrm{CEP}} = 0,~\frac{\pi}{2},~\pi$. \textbf{b,} The small signal gain $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$ is calculated by assuming Fowler-Nordheim tunneling emission with a characteristic tunneling field of $F_t = \SI{78.7}{\volt\per\nano\meter}$. The electric-field transients used here correspond to \textbf{a}. \textbf{c,} The spectral amplitude and phase of the complex sampling response of $\tilde{H}_{\mathrm{Det}}(\omega)$ as a function of frequency. Calculated for $\Phi_{\mathrm{CEP}} = 0,~\frac{\pi}{2},\pi$.
\label{fig:SamplingCEP}}
\end{figure}
The carrier-envelope phase (CEP) of a few cycle pulse plays a significant role in strong-field physics and heavily influences the electron emission characteristics from resonant nanoantenna devices. In this section we discuss the role of the driving waveform's CEP in the sampling process. For simplicity in this section we will be discussing the field driving the emission at a surface, $E_\mathrm{D}(t)$.
For our analysis, we calculated the complex sampling response $\tilde{H}_{\mathrm{Det}}(\omega)$ assuming a sech$^2$ driving pulse with a central frequency of \SI{250}{THz} and a pulse duration of \SI{10}{\femto\second} ($\sim$2.5 cycle), as given by the output of the laser used to experimentally verify device performance. As in Sec.~\ref{sec:BW}, the incident electric field was taken to be \SI{15}{GV/m}. The results are plotted in Fig.~\ref{fig:SamplingCEP}a for various CEP values of the driving pulse. The small signal gain $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$ was calculated by assuming Fowler-Nordheim tunnel emission with a characteristic tunneling field of $F_t = \SI{78.7}{\volt\per\nano\meter}$ and is plotted in Fig.~\ref{fig:SamplingCEP}b. In Fig.~\ref{fig:SamplingCEP}c the complex sampling response $\tilde{H}_{\mathrm{Det}}(\omega)$ derived from $\left.\dv{\Gamma}{E}\right\vert_{E_\mathrm{D}(t)}$ is shown.
The CEP, $\Phi_{\mathrm{CEP}}$, of the driving pulse dictates the amplitude of the modulation of $\tilde{H}_\text{Det}(\omega)$.
For the driver pulse duration modeled in Fig.~\ref{fig:SamplingCEP}a, a cosine shaped pulse ($\Phi_{\mathrm{CEP}}$ = 0) exhibits minimal modulation of the sampling bandwidth, which corresponds to an isolated electron burst with small satellites in the time-domain if the pulse is sufficiently short (see Fig.~\ref{fig:SamplingBW1}b).
A CEP of $\Phi_{\mathrm{CEP}}=\pi$ corresponds to a negative cosine shaped pulse, which corresponds to two electron bursts of equal height, resulting in the sharp minima in the sampling bandwidth as shown in Fig.~\ref{fig:SamplingCEP}c (dotted traces).
More importantly, with an adequately short driving pulse, it is possible to choose an appropriate $\Phi_{\mathrm{CEP}}$ value such that only one electron burst dominates the field emission process, resulting in a smooth, unmodulated $\tilde{H}_{\mathrm{Det}}(\omega)$ from DC to \SI{1}{\peta\hertz}, as shown in Fig.~\ref{fig:SamplingBW1}c. Nevertheless, independently of $\Phi_{\mathrm{CEP}}$, a full octave of spectrum can still be sampled, albeit with distortion due to $\tilde{H}_\mathrm{Det}$.
Another important characteristic of the sampling process to consider is the absolute phase of the sampled output. When $\Phi_\mathrm{CEP} = 0$, a dominant electron burst exists in the time domain and the absolute phase of the signal pulse will be transferred to the sampled output, as $\tilde{H}_\mathrm{Det}(\omega)$ will be a purely real function (see Fig.~\ref{fig:SamplingCEP}c). For comparison, if $\Phi_\mathrm{CEP} \neq 0$ the spectral phase of $\tilde{H}_\mathrm{Det}(\omega)$ is not flat. As shown in Fig. \ref{fig:SamplingCEP}, this phase resembles a stair function with plateaus of flat phase around the central frequency $\omega_0$ and its harmonics. Looking closely at Fig. \ref{fig:SamplingCEP}, we see that we can write the spectral phase at the nth harmonic as $\angle \tilde{H}_\mathrm{Det}(n\omega)=n\cdot \Phi_\mathrm{CEP}$ for $n \in \mathbb{N}$. With these spectral phase behaviors, we then see that the constant phase component of the sampled output becomes the difference between that of the sampling pulse, $n\cdot \Phi_\mathrm{CEP}$, and that of the signal, $\Phi_\mathrm{S}$. Therefore, the constant, or absolute, phase of the sampled output can be written $\Phi_\mathrm{S}-n\cdot \Phi_\mathrm{CEP}$. In the case where the driving pulse, $E_\mathrm{D}$, and the signal pulse, $E_\mathrm{S}$, originate from the same laser source, they will share a common $\Phi_\mathrm{CEP}$, and in this case, the absolute phase of the sampled pulse will therefore be zero. Importantly, we should note that this result is independent of $\Phi_\mathrm{CEP}$, and even laser sources with a carrier envelope offset $f_\mathrm{CEO}\neq 0$ can be used for sampling. Lastly, we should additionally note that in stark contrast to other phase-sensitive techniques, like homo- and hetero-dyne detection, the absolute phase of $E_\mathrm{D}$ can be derived unambiguously \textit{in-situ} from the field emission current generated by $E_\mathrm{D}$ in our devices, as demonstrated in \cite{Rybka2016,Yang2019}.
\section{Field-Sampling Measurements with 200~nm Devices\label{sec:200nm}}
\begin{figure}[h!]
\centering
\includegraphics{ConfidencePlot200nm.pdf}
\caption{\small\textbf{Experimental field sampling results using 200 nm devices.} Time-domain results for 200~nm devices comparing measured (blue) and simulated (red) near-fields to the calculated incident laser signal (yellow).
Here, negative delays indicate the driver pulse arrives before the signal pulse.
The 200~nm device is designed to be off-resonant with the laser pulse and the measured trace yields good agreement to the calculated laser output.
The 1$\sigma$-confidence interval is shown as a blue shaded ribbon centered at the average value (blue solid line) retrieved from 47 scans.
\label{fig:Data_200nm}}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics{Spectrum200nm.pdf}
\caption{\small\textbf{Frequency-domain analysis of 200 nm device results.} Frequency-domain analysis comparing measured (solid) and simulated (dashed) near-fields for 200~nm devices to the calculated incident laser signal (dotted).
The 200~nm device is designed to be off-resonant with the laser pulse; thus, the measured and simulated spectra show only a single spectral peak corresponding to that of the laser spectrum at $\approx220~\mathrm{THz}$.
\label{fig:Data_200nm_spec}}
\end{figure}
Our technique was also tested using devices consisting of triangular antennas with a 200~nm height.
These devices were designed to be off-resonant with the laser pulse and were fabricated on a separate chip from the 240~nm antenna.
Fig.~\ref{fig:Data_200nm} presents the acquired cross-correlation trace (blue) for these devices.
For each data set, 47 scans of 5 seconds acquisition time over the \SI{100}{fs} time window were performed. Post-processing was done in Matlab.
Each data set was Fourier transformed and windowed from \SI{150}{THz} to \SI{350}{THz} with a Tukey window (steepness of $\alpha=0.2$). The resulting output was averaged in the time domain.
We find good agreement between the measured trace (blue) and the simulated local signal field, $E_\mathrm{S}^\mathrm{(L)}(t)$ (red). We note that both the measured and simulated local signal fields are slightly shorter than the calculated laser output (yellow). The reason for this is apparent when examining the pulses in the frequency domain as shown in Fig.~\ref{fig:Data_200nm_spec}.
While the main spectral peak at $\approx 220~\mathrm{THz}$ agrees with the measured laser spectrum (gray dotted curve) and the expected antenna response (light blue dashed curve), both the simulated and experimental local signal field spectra exhibit an enhanced shoulder out to \SI{300}{\tera\hertz} relative to the measured laser output spectrum (solid blue curve). This is due to the plasmonic resonance which enhances these higher frequency components, resulting in a shorter time domain response of the local fields relative to the incident fields after interaction with the antenna.
\section{Data Processing and Error Analysis}
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{ConfidencePlot3.pdf}
\caption{\small\textbf{Mean value and 1$\sigma$-confidence interval}
Time-domain measurement and simulation for \textbf{a,} 240~nm devices (Fig.~3, main text) and \textbf{b}, 200~nm devices (Fig.~\ref{fig:Data_200nm}).
The blue curves shows the mean value for every electric field/time coordinate over all individual scans.
The grey ribbon shows the 1$\sigma$-confidence interval for the respective coordinate.
For comparison, the simulated electric field is shown in purple.
\label{fig:conf}}
\end{figure}
To determine the error in our measurement, we took the Fourier transform of each of the $\sim$50 individual data sets and applied a Tukey window in the frequency domain with a steepness of $\alpha = 0.2$ from \SI{150}{THz} to \SI{350}{THz}.
The windowed data sets were then back transformed into the time-domain and averaged for each time coordinate over all data sets.
To determine the $1\sigma$-confidence interval the standard deviation was calculated for each time coordinate over all data sets.
The result is shown in Fig. \ref{fig:conf} and compared to the respective simulation shown in Fig.~3 (main text) for the 240~nm devices and Fig. \ref{fig:Data_200nm} for the 200~nm devices.
\section{Source Spectral Phase Measurements\label{sec:2DSI}}
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{2DSI_figure.pdf}
\caption{\small\textbf{Source spectral phase characterization using 2DSI.} \textbf{a,}
Raw 2DSI spectrogram of the source in the experiment conditions. \textbf{b,} Retrieved group delay (red) and laser spectrum (blue). The optimized values of shear frequency and upconversion wavelength are $f_\text{shear} = 5.5$~THz and $\lambda_\text{up} = 1050$~nm.
\label{fig:2DSI}}
\end{figure}
In order to characterize the spectral phase of our supercontinuum source we performed two-dimensional spectral shearing interferometry (2DSI) measurements~\supercite{Birge2006}.
Two spectrograms were obtained for the measurement: the first with the laser under conditions similar to those of the experiment, and the second with an additional 1.5~mm fused silica window placed in the beam path.
The spectrogram of the source in the experimental conditions is shown in Fig. \ref{fig:2DSI}a.
The second spectrogram taken with an additional propagation through 1.5~mm fused silica was used to calibrate the shear frequency $f_\text{shear}$ and upconversion wavelength $\lambda_\text{up}$ needed for group delay retrieval from the 2DSI measurement. Using an optimization routine, we found the values for $f_\text{shear}$ and $\lambda_\text{up}$ that resulted in the minimum error between the group delay difference measured with and without the fused silica using 2DSI and that predicted using the known optical properties of fused silica.
The resulting retrieved group delay and the spectrum of our laser source are reported in Fig. \ref{fig:2DSI}b.
\end{document}
\section{Introduction}
Higgsless models of electroweak symmetry breaking \cite{Csaki:2003dt}
may be viewed
as ``dual" to more conventional technicolor models
\cite{Weinberg:1979bn,Susskind:1978ms} and, as such, provide a basis for constructing low-energy effective theories
to investigate the phenomenology of a strongly interacting symmetry breaking sector
\cite{He:2007ge,Hirn:2007we}. One approach to constructing such an
effective theory, the three-site model \cite{SekharChivukula:2006cg}, includes only
the lightest of the extra vector mesons typically present in such theories -- the meson analogous to the $\rho$ in
QCD. An alternative approach is given by ``holographic technicolor"
\cite{Hirn:2006nt}, which potentially provides a description of the first two extra
vector mesons -- including, in addition to the $\rho$, the analog of the $a_1$ meson in QCD.
In this note we consider a four-site ``Higgsless" model \cite{Accomando:2008jh}
illustrated, using ``moose notation" \cite{Georgi:1985hf}, in fig. \ref{fig:one}.
We show how, once an $L_{10}$-like
``wavefunction" mixing term for the two strongly-coupled $SU(2)$ groups in the
center of the moose is included, we can reproduce the features of
the holographic model -- including the vanishing of the parameter $\alpha S$
for brane-localized fermions and the existence (whether or not $\alpha S = 0$) of the potentially interesting
decay $a_1 \to W \gamma$.
\section{The Model}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{foursite-moose.eps}
\end{center}
\caption{The ``moose" diagram \protect\cite{Georgi:1985hf} for the $SU(2)^3 \times U(1)$ model considered in
this note. The solid circles represent $SU(2)$ groups; the dashed circle, a $U(1)$ group; the
``links", $SU(2) \times SU(2)/SU(2)$ non-linear sigma models.
In order to
be phenomenologically realistic \protect\cite{SekharChivukula:2004mu}, we work in the limit $g,g' \ll \tilde{g}$; in this limit the model also
has an approximate parity symmetry. We consider brane-localized fermions, which
couple only to the $SU(2) \times U(1)$ at the ends of the moose, and add an
$L_{10}$-like ``wavefunction mixing" term to mix the two strongly-coupled $SU(2)$ groups
in the middle two sites.
\label{fig:one}}
\end{figure}
The Lagrangian for the model consists of several parts. First, the usual nonlinear
sigma model link terms
\begin{align}
{\cal L}_\pi = & \frac{f^2_1}{4}\left[{\rm Tr} D^\mu \Sigma_1 D_\mu \Sigma^\dagger_1
+ {\rm Tr} D^\mu \Sigma_3 D_\mu \Sigma^\dagger_3\right] \nonumber\\
& + \frac{f^2_2}{4}
{\rm Tr} D^\mu \Sigma_2 D_\mu \Sigma^\dagger_2~.
\label{eq:nlsm}
\end{align}
Next, the gauge-boson kinetic energies
\begin{equation}
{\cal L}_{gauge} = -\,\frac{1}{4}\left(\vec{W}^2_{0\mu\nu}
+ \vec{W}^2_{1\mu\nu}+\vec{W}^2_{2\mu\nu}+ \vec{W}^2_{3\mu \nu}\right)~,
\label{eq:gaugekinetic}
\end{equation}
where we denote the weakly-coupled $SU(2)\times U(1)$ fields by
$\vec{W}_0$ and $\vec{W}_3 \equiv B$ (by convention, $i=3$ vanishes for the charged sector), and the strongly coupled $SU(2)$ fields by $\vec{W}_{1,2}$. And finally, there is an $L_{10}$-like mixing between the middle two sites
\begin{equation}
{\cal L}_\varepsilon = -\,\frac{\varepsilon}{2} {\rm Tr}\left[
\vec{W}_{1\mu\nu} \Sigma_2 \vec{W}^{\mu\nu}_2 \Sigma^\dagger_2\right]~,
\label{eq:l10term}
\end{equation}
where in this calculation we treat $\varepsilon$ as an ${\cal O}(1)$ parameter.
This model has a ``parity" (more precisely, a $G$-parity)
symmetry in the $g=g'=0$ limit, under which
$\vec{W}^\mu_1 \to \vec{W}^\mu_2$, $\Sigma_1 \to \Sigma_3^\dagger$, and
$\Sigma_2 \to \Sigma_2^\dagger$. In the limit $f_2 \to \infty$,\footnote{For fixed
values of $2/f^2_1 + 1/f^2_2$, see eqn. (\protect\ref{eq:GF}).} this model
reduces to the three-site model considered in \cite{SekharChivukula:2006cg}.
In unitary gauge (with $\Sigma_1=\Sigma_2=\Sigma_3 \equiv {\cal I}$), the
${\cal L}_\varepsilon$ term above corresponds to wavefunction-mixing of
the fields $\vec{W}_i$,
\begin{equation}
{\cal L} = -\,\frac{1}{4} \vec{W}_{i\mu\nu} \tilde{Z}_{ij} \vec{W}_j^{\mu\nu}
-\frac{1}{2} \vec{W}_{i\mu} M^2_{ij} \vec{W}^\mu_j~,
\label{eq:nsite}
\end{equation}
with
\begin{equation}
\tilde{Z} = \begin{pmatrix}
1 & & & \\
& 1 & \varepsilon & \\
& \varepsilon & 1 & \\
& & & 1
\end{pmatrix}~.
\end{equation}
To avoid ghosts, we require $\tilde{Z}$ to
be positive-definite, and hence $|\varepsilon| < 1$.
\section{Masses and Mixing Angles}
The eigenstates corresponding to the quadratic part of Lagrangian in eqn. (\ref{eq:nsite})
satisfy the generalized eigenvalue equation
\begin{equation}
M^2 \vec{v}_n = m^2_n \tilde{Z} \vec{v}_n~,
\label{eq:eigeneqn}
\end{equation}
where $\vec{v}_n$ is a vector in site-space with components $v_n^i$. The superscript
$i$ labels the sites, running from 0 to 2 for charged bosons ($n = W^\pm, \rho^\pm, a_1^\pm$), and 0 to 3 for neutral
ones ($n = Z^0, \rho^0, a_1^0, \gamma$).
If we choose eigenvectors normalized by $\vec{v}^T_n \tilde{Z} \vec{v}_m = \delta_{nm}$,
the gauge-eigenstate ($W_\mu^i$) and mass-eigenstate ($W'_{n\mu}$)
fields are related by
\begin{equation}
W^i_\mu = \sum_n v^i_n W_{n\mu}' ~.
\label{eq:gauge-states}
\end{equation}
\subsection{The $g=g'=0$ Limit}
Consider first the $g=g'=0$ limit, in which we can determine the leading
contributions to the heavy gauge-boson masses. Due to the parity symmetry in this
limit, we expect the eigenvectors to be proportional to $\vec{W}^\mu_1 \pm \vec{W}^\mu_2$.
Applying the normalization condition $\vec{v}^T_n \tilde{Z} \vec{v}_m = \delta_{nm}$, we find a parity-even
eigenvector (the ``$\rho$")
\begin{equation}
\vec{\rho}^\mu = \frac{1}{\sqrt{2(1+\varepsilon)}}\left(
\vec{W}_1^\mu + \vec{W}_2^\mu \right)~,
\label{eq:rhovector}
\end{equation}
with mass
\begin{equation}
m^2_\rho = \frac{\tilde{g}^2}{4} \frac{f^2_1}{1+\varepsilon}~,
\label{eq:rhomass}
\end{equation}
and a parity-odd eigenvector (the ``$a_1$")
\begin{equation}
\vec{a}_1^\mu = \frac{1}{\sqrt{2(1-\varepsilon)}} \left(
\vec{W}_1^\mu - \vec{W}_2^\mu \right)~,
\label{eq:a_1vector}
\end{equation}
with mass
\begin{equation}
m^2_{a_1} = \frac{\tilde{g}^2}{4} \frac{f^2_1 + 2 f^2_2}{1-\varepsilon}~.
\label{eq:a_1mass}
\end{equation}
We note that the $\rho$ and $a_1$ are degenerate for
\begin{equation}
\varepsilon = -\, \frac{f^2_2}{f^2_1 + f^2_2}~,
\label{eq:degeneracy}
\end{equation}
a value satisfying
the constraint $|\varepsilon| < 1$. As $\varepsilon$ becomes
more negative, the $a_1$ becomes lighter than the $\rho$.
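These closed-form masses are easy to verify numerically: in this limit only the strongly-coupled middle sites are gauged, and the mass-squared matrix reduces to a $2\times2$ block that can be diagonalized directly as a generalized eigenproblem. A short sketch, with purely illustrative parameter values, is:
\begin{verbatim}
# Numerical check of the rho/a_1 masses and the degeneracy
# condition: 2x2 block of the middle sites at g = g' = 0.
# Parameter values are illustrative.
import numpy as np
from scipy.linalg import eigh

gt, f1, f2 = 6.5, 500.0, 400.0
eps = -f2**2 / (f1**2 + f2**2)      # degeneracy condition

M2 = (gt**2 / 4) * np.array(
    [[f1**2 + f2**2, -f2**2],
     [-f2**2,         f1**2 + f2**2]])
Zt = np.array([[1.0, eps], [eps, 1.0]])

m2 = eigh(M2, Zt, eigvals_only=True)
m_rho = gt * f1 / (2 * np.sqrt(1 + eps))
m_a1 = gt * np.sqrt(f1**2 + 2*f2**2) / (2 * np.sqrt(1 - eps))
print(np.sqrt(m2), m_rho, m_a1)     # degenerate at this eps
\end{verbatim}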
\subsection{The Photon}
Examining the eigenvalue eqn. (\ref{eq:eigeneqn}) we see that the wavefunction factor
$\tilde{Z}$ affects the normalization of a massless eigenvector, but not the orientation. We see,
therefore, that the photon must be of the form
\begin{equation}
A_\mu = \frac{e}{g} W^3_{0\mu} + \frac{e}{\tilde{g}} W^3_{1\mu} +
\frac{e}{\tilde{g}} W^3_{2\mu} + \frac{e}{g'} B_\mu~,
\label{eq:photonvector}
\end{equation}
or
\begin{equation}
(v_\gamma)^T = \left(\frac{e}{g}\ ,\ \ \frac{e}{\tilde{g}} \ , \ \ \frac{e}{\tilde{g}} \ , \ \ \frac{e}{g'}\right)~.
\label{eq:photon}
\end{equation}
The electric charge $e$ is, then, determined from the normalization condition to be
\begin{equation}
\frac{1}{e^2} = \frac{1}{g^2} + \frac{1}{{g'}^2} + \frac{2(1+\varepsilon)}{\tilde{g}^2}~.
\label{eq:charge}
\end{equation}
Examining the photon-couplings, we see that the unbroken gauge-generator
has the expected form $Q= T^3+T^3_1 + T^3_2 + Y$.
\subsection{The $W$-boson}
Next, we consider a perturbative evaluation of the electroweak boson eigenvectors and eigenvalues,
computed in powers of $x = g/\tilde{g}$. We start with the $W$-boson; the
charged-boson mass matrix is given by
\begin{equation}
M^2_W = \frac{\tilde{g}^2}{4}
\begin{pmatrix}
x^2 f^2_1 & -x f^2_1 & 0 \\
-x f^2_1 & f^2_1 + f^2_2 & - f^2_2 \\
0 & -f^2_2 & f^2_1+f^2_2
\end{pmatrix}~.
\label{eq:mwsq}
\end{equation}
To ${\cal O}(x^2)$ we find
\begin{align}
v^0_W & = 1-\frac{f^4_1 + 2(1+\varepsilon) f^2_1 f^2_2 + 2(1+\varepsilon) f^4_2}{2(f^2_1+2 f^2_2)^2}\,x^2~,
\nonumber \\
v^1_W & = x\,\frac{f^2_1+f^2_2}{f^2_1+2 f^2_2}~, \\
v^2_W &= x\,\frac{f^2_2}{f^2_1+2 f^2_2}~,\nonumber
\end{align}
where we have computed, but do not display, the corrections of ${\cal O}(x^3)$ to the
last two components.
For the corresponding eigenvalue we find
\begin{equation}
m^2_W = \frac{g^2}{4} \frac{f^2_1 f^2_2}{f^2_1+2f^2_2}
\left[1-\frac{f^4_1 + 2(1+\varepsilon) f^2_1 f^2_2 + 2(1+\varepsilon) f^4_2}{(f^2_1+2 f^2_2)^2}\,x^2
\right]~.
\end{equation}
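As a cross-check, the perturbative mass can be compared with the exact lowest generalized eigenvalue of eqn. (\ref{eq:mwsq}); a short numerical sketch with illustrative parameter values is given below.
\begin{verbatim}
# Check of the O(x^2) W mass: exact generalized eigenvalue of
# the charged mass matrix above versus the perturbative
# expression.  Parameter values are illustrative.
import numpy as np
from scipy.linalg import eigh

g, gt, eps, f1, f2 = 0.65, 6.5, -0.3, 500.0, 400.0
x = g / gt

M2 = (gt**2 / 4) * np.array(
    [[x**2 * f1**2, -x * f1**2,     0.0],
     [-x * f1**2,    f1**2 + f2**2, -f2**2],
     [0.0,          -f2**2,          f1**2 + f2**2]])
Zt = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, eps],
               [0.0, eps, 1.0]])

m2_exact = eigh(M2, Zt, eigvals_only=True)[0]
S = f1**2 + 2 * f2**2
corr = (f1**4 + 2*(1+eps)*f1**2*f2**2
        + 2*(1+eps)*f2**4) / S**2
m2_pert = (g**2 / 4) * f1**2 * f2**2 / S * (1 - corr * x**2)
print(m2_exact, m2_pert)   # agree up to O(x^4) corrections
\end{verbatim}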
\subsection{The $Z$-boson}
The neutral gauge-boson mass matrix is
\begin{equation}
M^2_Z = \frac{\tilde{g}^2}{4}
\begin{pmatrix}
x^2 f^2_1 & -x f^2_1 & 0 & 0\\
-x f^2_1 & f^2_1 + f^2_2 & - f^2_2 & 0 \\
0 & -f^2_2 & f^2_1+f^2_2 & -x\tan\theta f^2_1 \\
0 & 0 & -x \tan\theta f^2_1 & x^2 \tan^2\theta f^2_1
\end{pmatrix}~.
\label{eq:mzsq}
\end{equation}
where we have defined the angle $\theta$ by
$g'/g \equiv \tan \theta$.
Note that $\theta$ is the {\it leading order} weak mixing angle; we will later define a weak mixing angle $\theta_Z$ that is better suited to comparison with experiment.
We have computed the $Z$-boson eigenvector to ${\cal O}(x^3)$ -- as the
result is complicated, and the algebra unilluminating, we do not reproduce it here. For the $Z$-boson
mass, we find
\begin{widetext}
\begin{equation}
m^2_Z = \frac{g^2}{4\cos^2\theta} \frac{f^2_1 f^2_2}{f^2_1+2f^2_2}
\left[
1-\frac{(3-\varepsilon) f^4_1+4(1+\varepsilon)(f^2_1 f^2_2 + f^4_2) +
(1+\varepsilon)(f^2_1+2 f^2_2)^2 \cos4\theta}{4(f^2_1+2f^2_2)^2}\,x^2 \sec^2\theta
\right]~.
\label{eq:m2z}
\end{equation}
\end{widetext}
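Note that at leading order in $x$, eqn. (\ref{eq:m2z}) and the $W$-boson mass of the previous subsection satisfy the tree-level standard-model relation $m^2_Z = m^2_W/\cos^2\theta$; this relation is corrected only at ${\cal O}(x^2)$.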
\section{The Electroweak Parameters}
From eqn. (\ref{eq:gauge-states}), we can compute the couplings of the mass-eigenstate
electroweak gauge-bosons to fermions. For
brane-localized fermion couplings of the form
\begin{equation}
{\cal L}_f = g_0 \vec{J}^\mu_L \cdot \vec{W}^0_\mu
+ g' J^\mu_Y B_\mu~,
\end{equation}
we find the mass-eigenstate $W$-boson couplings
$g_W^f = g_0 v^0_W$ and the $Z$-boson couplings
\begin{equation}
g^f_Z = g v^0_Z I_3 + g' v^3_Z Y = g I_3(v^0_Z - \tan\theta v^3_Z) + g' v^3_Z {\cal Q}~.
\label{eq:zcouplings}
\end{equation}
We may then compute the on-shell precision electroweak parameters at tree-level to
${\cal O}(x^2)$, using the definitions and procedures outlined in \cite{Chivukula:2004af,SekharChivukula:2004mu}. The values of electric charge, eqn. (\ref{eq:charge}),
and $m^2_Z$, eqn. (\ref{eq:m2z}), are given above, and we find the Fermi constant
\begin{equation}
\sqrt{2} G_F = \frac{1}{v^2} = \frac{2}{f^2_1} + \frac{1}{f^2_2}~,
\label{eq:GF}
\end{equation}
where $v\approx 246$ GeV.
The only non-zero precision electroweak parameter is
$\alpha S$ \cite{Peskin:1991sw}, for which we find
\begin{equation}
\frac{\alpha S}{4 s^2} = \frac{\varepsilon f^4_1 + 2(1+\varepsilon) f^2_1 f^2_2 + 2 f^4_2 (1+\varepsilon)}
{(f^2_1 + 2f^2_2)^2}\, x^2~.
\end{equation}
As expected \cite{Hirn:2006nt,Hirn:2007we}, we can choose $\varepsilon$ so that $\alpha S$ vanishes for any given value of $f_1/f_2$
\begin{equation}
\varepsilon \to -\, \frac{2(f^4_2 + f^2_1 f^2_2)}{f^4_1 + 2 f^2_1 f^2_2 + 2 f^4_2}~,
\label{eq:zeroS}
\end{equation}
while satisfying $\vert\varepsilon\vert < 1$.
Note, however, that the value of
the low-energy parameter $\vert\varepsilon\vert$ that makes $\alpha S$ vanish
is of order one, larger than would be expected by naive dimensional
analysis \cite{Georgi:1992dw}. This result is consistent with investigations
of continuum 5d effective theories \cite{Hong:2006si,Agashe:2007mc}, and
with investigations of plausible conformal technicolor ``high-energy completions"
of this model using Bethe-Salpeter methods \cite{Harada:2005ru,Kurachi:2006mu},
both of which suggest that $\alpha S >0$
and that it may not be possible
to achieve very small values of $\alpha S$. We note also that the result is consistent with
the expectation of \cite{Appelquist:1998xf,Appelquist:1999dq}, since
the value of $\varepsilon$ required for $\alpha S$ to vanish results in
axial-vector mesons which are lighter than the vector mesons.\footnote{An alternative
approach, Degenerate BESS \protect\cite{Casalbuoni:1995yb,Casalbuoni:1995qt}, produces
degenerate vector and axial mesons and $\alpha S=0$ using a different theory without unitarity delay
\cite{SekharChivukula:2004mu}
-- see ``case I" described in \protect\cite{Chivukula:2003wj}. }
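As a concrete illustration, for $f_1 = f_2$ eqn. (\ref{eq:zeroS}) gives $\varepsilon = -4/5$, for which eqns. (\ref{eq:rhomass}) and (\ref{eq:a_1mass}) yield
\begin{equation*}
m^2_\rho = \frac{5\,\tilde{g}^2 f^2_1}{4}~, \qquad m^2_{a_1} = \frac{5\,\tilde{g}^2 f^2_1}{12}~,
\end{equation*}
so that $m_{a_1}/m_\rho = 1/\sqrt{3}$ at the point where $\alpha S$ vanishes.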
\section{Triple Boson Vertices}
\subsection{Electroweak Vertices}
Consider the electroweak vertices $\gamma WW$ and $ZWW$.
To leading order, in the absence of CP-violation, the triple gauge boson vertices
may be written \cite{Hagiwara:1986vm}
\begin{eqnarray}
{\cal L}_{TGV} & = & - ie\frac{c_Z }{s_Z}\left[1+\Delta\kappa_Z\right] W^+_\mu W^-_\nu Z^{\mu\nu} \nonumber \\
& -& ie \left[1+\Delta \kappa_\gamma\right] W^+_\mu W^-_\nu A^{\mu\nu} \nonumber \\
&-& i e \frac{c_Z}{s_Z} \left[ 1+\Delta g^Z_1\right](W^{+\mu\nu}W^-_\mu - W^{-\mu\nu} W^+_\mu)Z_\nu \nonumber \\
& -& ie (W^{+\mu\nu}W^-_\mu - W^{-\mu\nu} W^+_\mu) A_\nu~,
\label{eq:3point}
\end{eqnarray}
where the two-index tensors denote the Lorentz field-strength
tensor of the corresponding field. In the standard model,
$\Delta\kappa_Z = \Delta\kappa_\gamma = \Delta g^Z_1 \equiv 0$.
Note that the expressions for $\kappa_Z$ and $g^Z_1$ involve
$c_Z \equiv \cos \theta_Z$ and $s_Z \equiv \sin \theta_Z$, as defined by
\begin{equation}
c^2_Z s^2_Z = \frac{e^2}{4 \sqrt{2} G_F M_Z^2},
\end{equation}
rather than the leading order mixing angle $\theta$.
Let us begin with the coupling of the photon of the form
$(W^{+\mu\nu}W^-_\mu - W^{-\mu\nu} W^+_\mu) A_\nu$. In terms of the wavefunctions
$v_{\gamma, W}$, this coupling is proportional to
\begin{equation}
g_\gamma = \sum_{i,j} g_i v^i_\gamma v^i_W \tilde{Z}_{ij} v_W^j~.
\label{eq:ggamma}
\end{equation}
From eqn. (\ref{eq:photon}), we have $g_i v^i_\gamma \equiv
e$ and therefore, by applying the normalization condition $\vec{v}^T_W \tilde{Z} \vec{v}_W = 1$, we obtain
$g_\gamma \equiv e$ independent of any choice of the four-site parameters
--- as required by gauge-invariance and consistent with the form
of eqn. (\ref{eq:3point}).
Next, we evaluate $\Delta \kappa_\gamma$, with
\begin{equation}
e\,[1+\Delta\kappa_\gamma] = \sum_{i,j} g_i (v^i_W)^2 \tilde{Z}_{ij} v^j_\gamma = e\, \sum_{i,j} \frac{g_i}{g_j} (v^i_W)^2 \tilde{Z}_{ij}~,
\end{equation}
for which we calculate
\begin{equation}
\Delta \kappa_\gamma = \frac{\varepsilon\, f^4_1}{(f^2_1+2 f^2_2)^2}\, x^2
= \frac{\varepsilon\, v^4}{f^4_2}\, x^2~.
\end{equation}
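The second equality follows from eqn. (\ref{eq:GF}): since $v^2 = f^2_1 f^2_2/(f^2_1+2 f^2_2)$, one has $v^4/f^4_2 = f^4_1/(f^2_1+2 f^2_2)^2$.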
Note that this vanishes in the absence of wavefunction mixing ($\varepsilon \to 0$), and also in the
``three-site" limit ($v/f_2 \to 0$), as consistent with \cite{SekharChivukula:2006cg}.
Similarly we may compute $\Delta g^Z_1$ and $\Delta \kappa_Z$,
and we find
\begin{eqnarray}
\Delta g^Z_1 &=& \Delta \kappa_Z + \frac{\varepsilon f^4_1 \tan^2\theta_Z\, x^2}{(f^2_1+2f^2_2)^2}~,\\
&=& - \, \frac{ \varepsilon s^2_Z f^4_1 + (1+\varepsilon)f^2_1 f^2_2+(1+\varepsilon)f^4_2 }{(f^2_1+2 f^2_2)^2 \cos(2\theta_Z)}\, \frac{x^2}{c^2_Z} ~,\nonumber
\end{eqnarray}
where the difference between $\theta$ and $\theta_Z$ is irrelevant to this order. Note that $\Delta g^Z_1-\Delta \kappa_Z$
vanishes when $\varepsilon \to 0$, and also, as expected \cite{SekharChivukula:2006cg}, in the ``three-site" limit $f_2 \to \infty$.
\subsection{$\rho,\,a_1 \to W + \gamma$}
Finally, we consider the $(\rho,a_1) - W - \gamma$ couplings that motivated this
study. Electromagnetic gauge-invariance implies
that the coupling of the form $(\rho^{+\mu\nu}W^-_\mu - W^{-\mu\nu} \rho^+_\mu) A_\nu$
must vanish. Analogous to eqn. (\ref{eq:ggamma}) we find that the $\rho - W - \gamma$
and $a_1 - W - \gamma$ couplings
of this form are proportional to $\vec{v}^T_W \tilde{Z} \vec{v}_{\rho,a_1} \equiv 0$, and
therefore vanish identically.
There is no reason, however, that terms proportional
to $(\rho^+_\mu,\, a^+_{1\mu})\, W^-_\nu A^{\mu\nu}$ must vanish \cite{Hirn:2006nt,Hirn:2007we}. In this case, we find
\begin{equation}
e\, \kappa_{\gamma W \rho} = \sum_{i,j} g_i v^i_W v^i_\rho \tilde{Z}_{ij} v^j_\gamma = e \sum_{i,j} \frac{g_i}{g_j} v^i_W v^i_\rho \tilde{Z}_{ij}~,
\end{equation}
and similarly for the $a_1$.
Computing these couplings to ${\cal O}(x^3)$,
we find
\begin{eqnarray}
\kappa_{\gamma W \rho} & = & -\, \frac{\varepsilon(1+\varepsilon)^{3/2} f^4_1}
{2 \sqrt{2}(f^2_1 + 2 f^2_2)
(\varepsilon f^2_1 + (1+\varepsilon) f^2_2)}\, x^3~,
\label{eq:rhoWgamma}\\
\kappa_{\gamma W a_1}& = & \frac{\sqrt{2}\, \varepsilon\, v^2}{\sqrt{1-\varepsilon}\, f^2_2}\, x~.
\label{eq:a1Wgamma}
\end{eqnarray}
Note that both couplings vanish in the $\varepsilon \to 0$ and $f_2 \to \infty$ limits.
Furthermore, while the $\rho - W-\gamma$ coupling is typically small
(${\cal O}(x^3)$), we find the $a_1 - W - \gamma$ coupling is only
suppressed by $x$, consistent with
\cite{Hirn:2006nt,Hirn:2007we}. If $\varepsilon$ takes the value in eqn. (\ref{eq:zeroS}) for which $\alpha S = 0$, then $\kappa_{\gamma W a_1}$ becomes
\begin{equation}
\kappa_{\gamma W a_1} = - \frac{2 \sqrt{2} v^2 (f_1^2 + f_2^2)\ x}{(f_1^2 + 2 f_2^2)\sqrt{f_1^4 + 2 f_1^2 f_2^2 + 2 f_2^4}} .
\end{equation}
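The simplification uses $v^2 = f^2_1 f^2_2/(f^2_1+2 f^2_2)$ from eqn. (\ref{eq:GF}), together with the fact that at the value of $\varepsilon$ given in eqn. (\ref{eq:zeroS})
\begin{equation*}
1-\varepsilon = \frac{(f^2_1+2 f^2_2)^2}{f^4_1 + 2 f^2_1 f^2_2 + 2 f^4_2}~.
\end{equation*}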
As mentioned earlier, for this value of $\varepsilon$, the $a_1$ state is lighter than the $\rho$.
\section{Summary}
We have introduced a deconstructed Higgsless model with four sites and non-trivial wavefunction mixing, and have shown that it exhibits key features of holographic technicolor \cite{Hirn:2007we, Hirn:2006nt}. The electroweak parameter $\alpha S$ vanishes for a value of the wavefunction mixing at which the $a_1$ is lighter than the $\rho$ -- even if all fermions are brane-localized. Furthermore, the model includes the decay $a_1 \to W \gamma$, suppressed by only one power of $(M_W / M_{\rho})$, in contrast with an $(M_W/M_\rho)^3$ suppression of the decay $\rho \to W \gamma$. These decays are of potential phenomenological interest at LHC.
\section{Acknowledgements}
This work was supported in part by the US National Science Foundation under
grant PHY-0354226. The authors acknowledge the hospitality and support of the
Aspen Center for Physics, where this manuscript was completed.
We thank Tom Appelquist, Veronica Sanz, Johannes Hirn, and Adam Martin
for useful discussions.
\section{Introduction}
It is estimated that around 13.2 billion Internet of Things (IoT) devices were already connected to mobile networks as of November 2022, and the number is expected to increase to approximately 35 billion by 2028 \cite{Ericsson_MR}. As the number of IoT devices connected to mobile networks rapidly increases, mobile data traffic is projected to grow several-fold between 2022 and 2028, with a compound annual growth rate of 24\%, having almost doubled in the last two years \cite{Ericsson_MR}. Furthermore, beyond-5G and 6G networks are envisioned to support new use cases and act as drivers to achieve the UN's Sustainable Development Goals by 2030 \cite{UN_SDG} \cite{NGMN_6G}. Hence, network operators are under pressure to support new services and use cases while reducing the overall Capital Expenditures (CapEx) and Operational Expenditures (OpEx) in a highly competitive mobile services market. Recent technologies such as Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) have changed the way network functions at the core network are designed, deployed, and operated/managed by leveraging virtualization, cloud computing, and the decoupling of the control plane from the user plane \cite{NFV_SDN} \cite{NFV_5G} \cite{3gpp_23.501}. NFV and SDN have enabled more agile and less expensive core networks using virtual machines, containers, and programmable controllers. However, it is estimated that 65-70\% of the CapEx and OpEx in the total cost of ownership is spent on the Radio Access Network (RAN) alone \cite{O-RAN_WP1} \cite{O-RAN_PW}. This is because the traditional RAN, which consists of a Radio Unit (RU) and a Baseband Unit (BBU), is implemented as an all-in-one architecture and each RAN operates independently \cite{O-RAN_survey2}. Hence, the traditional RAN is also referred to as Distributed RAN (D-RAN).
In an all-in-one traditional RAN deployment, the RU and BBU are integrated as a single unit and thus it is easy to implement. However, the traditional hardware-based RAN network functions are \cite{O-RAN_survey1}: i) proprietary and closed, ii) expensive and designed as a single unit, and thus often seen as a black box by network operators, iii) not quickly scalable and agile, iv) not reconfigurable to network operators' unique needs without the help of vendors, v) not easily integrable and interoperable with other vendors' equipment, and vi) not designed to enable automation through intelligent networks. To overcome these limitations and reduce the overall expenditures, the traditional RAN has to be re-designed to improve the efficiency of RAN deployments and the operation of RAN network functions.
Over the last decade, many research and standardization activities (e.g., C-RAN, vRAN, and xRAN) have been carried out to re-design RAN deployments in order to reduce both CapEx and OpEx \cite{O-RAN_PW} \cite{O-RAN_survey2} \cite{O-RAN_survey1}. These efforts mainly focus on disaggregating the traditional RAN components RU and BBU into two or more separate units (the BBU can be further split into Distributed Units (DUs) and a Central Unit (CU)) and sharing a pool of BBU processing units in a central physical location among multiple RUs. A new interface called the Fronthaul (FH) interface is used to connect and communicate between the pool of BBU processing units and the RUs. This approach has helped to reduce the overall expenditures significantly by leveraging the features of cloud computing and standardizing some interfaces (e.g., the FH interface) \cite{O-RAN_survey2} \cite{O-RAN_survey1}. However, problems remain: the software and some of the interfaces are still either proprietary or closed, which prevents interoperability and a multi-vendor ecosystem; vendor lock-in and monopoly prevent a competitive and vibrant supplier ecosystem that could reduce network equipment cost; and new management mechanisms are required to handle the increased complexity of supporting new 5G services and use cases~\cite{O-RAN_PW}~\cite{O-RAN_survey2}. To overcome these problems, RAN interfaces and network functions need to be made open to enable interoperability through multi-vendor deployments, healthy competition, and a vibrant supplier ecosystem that allows smaller vendors to participate \cite{O-RAN_PW} \cite{O-RAN_survey2} \cite{O-RAN_survey1}. In addition, RAN virtualization and network automation through intelligent controllers can be considered to efficiently manage and orchestrate network services in an intelligent manner with reduced cost~\cite{O-RAN_survey2}~\cite{Mavenir_WP1}.
Open RAN is the movement in the mobile industry to disaggregate RAN components and create open interfaces between them. Two industry groups are leading the Open RAN movement. The first is the Telecom Infra Project (TIP), which was formed in 2016 \cite{TIP}. TIP mainly focuses on the development and deployment of open, disaggregated, and multi-vendor interoperable RAN solutions through the OpenRAN project group \cite{TIP_OpenRAN}. TIP does not create any specifications on its own, but uses specifications to create multi-vendor interoperable solutions for network operators and service providers through different project groups. The other is the O-RAN Alliance, which was formed in 2018 by merging two related organizations, the C-RAN Alliance and the xRAN Forum~\cite{O-RAN_PW}~\cite{O-RAN_web1}. The O-RAN Alliance leads the mobile industry towards open interfaces, an interoperable RAN ecosystem, RAN virtualization, and data-driven RAN intelligence by creating specifications and supporting open source projects \cite{O-RAN_web1}. TIP and the O-RAN Alliance have signed a liaison agreement for sharing information, referencing, and validation activities \cite{TIP_OpenRAN}. Both TIP and the O-RAN Alliance encourage the use of open source software and Commercial-Off-The-Shelf~(COTS) open white box hardware for RAN deployments \cite{O-RAN_WP1} \cite{TIP_OpenRAN}. The core principles of the O-RAN Alliance are open interfaces, cloudification, and automation through closed-loop control \cite{O-RAN_web1}. The O-RAN Alliance leverages the features of NFV, SDN, and Artificial Intelligence/Machine Learning (AI/ML) to realize its vision \cite{O-RAN_WP1} \cite{O-RAN_WP2} \cite{O-RAN_WP3}. It is estimated that NFV-, SDN-, and AI/ML-based network deployment and management can save about 50\% of the total cost of ownership \cite{O-RAN_survey2}.
In the literature, the current challenges faced by telecom industry players, state-of-the-art standardization activities, and future research directions considering emerging technologies are not covered sufficiently. To fill that gap, this paper provides a comprehensive overview of the evolution of the RAN architecture, disaggregated RAN functions and the interfaces between them, O-RAN Alliance standardization activities (including the O-RAN architecture, open interfaces, intelligent controllers for orchestration and automation of the RAN, virtualization and cloud infrastructure, use cases, security aspects, and white box base station deployment), related open source projects, and system-related open issues, challenges, and future research directions to explore further for in-depth study and analysis.
The rest of the paper is organized as follows: Section II provides background details about the evolution of the RAN and the O-RAN Alliance. Section III discusses the O-RAN Alliance reference architecture, main network functions, and protocol stack. Section IV explains the need for open interfaces, the 3GPP-defined interfaces, and the O-RAN Alliance-defined open interfaces in the O-RAN reference architecture. Section V explains the functionalities of the RAN Intelligent Controllers for RAN optimization and automation. Section VI discusses virtualization and the edge computing enabled NFV infrastructure. Section VII discusses network slicing and RAN slicing optimization, and Section VIII discusses O-RAN security aspects. Section IX discusses a set of use cases enabled by O-RAN network functions and nodes. Section X discusses deployment aspects and related open source projects. Section XI discusses open issues, challenges, and future research directions. Finally, Section XII concludes the paper.
\section{Background}
\subsection{Evolution of RAN}
In mobile communication networks, the Radio Access Network (RAN) plays an important role in connecting User Equipments~(UEs) with the public/private Core Network (CN) to register and access services by routing control plane and user plane data. The architecture of the RAN has evolved in each generation of mobile communication networks to support new services and accommodate the ever increasing number of users accessing services seamlessly. In the 2nd Generation (2G) networks, the RAN (also known as the base transceiver station) was designed to support mainly voice services through a circuit-switched network and was controlled by base station controllers. In the 3rd Generation (3G) networks, the RAN (also known as the Node B (NB)) was designed to support both voice and data services through circuit-switched and packet-switched networks and was controlled by radio network controllers. In the 4th Generation (4G) networks, the RAN was designed to support high data rate services through a packet-switched, Internet Protocol based network only. To simplify the architecture and operations, the RAN and its controller are integrated in 4G and called the evolved Node B (eNB).
The 5th Generation (5G) and beyond networks are envisioned to provide services not only to human beings, but also to multiple industry verticals such as transportation, agriculture, healthcare, energy, manufacturing, gaming, entertainment, and public safety services \cite{NGMN1_5G} \cite{NGMN2_5G}. 5G services are broadly classified into three categories \cite{IMT_2020}: i) enhanced Mobile Broadband (eMBB), which supports high data rate services such as ultra-high definition video broadcasting and online gaming, ii) Ultra-Reliable and Low Latency Communications (URLLC), which supports time sensitive and highly reliable services such as remote surgery and autonomous vehicle driving, and iii) massive Machine Type Communications (mMTC), which supports connectivity for billions of IoT devices. Compared to previous generations of networks, the 5G network has stringent service requirements such as latency in the order of milliseconds, ultra-reliable networks, connectivity for billions of IoT devices, and 100x network energy efficiency compared to 4G \cite{IMT_2020} \cite{5g_req}.
Network operators are transforming their networks to support new 5G based services and business use cases, handle the ever increasing number of connected terminals and mobile data traffic, and optimize the available network resources to manage the load and offer diverse services with satisfactory Quality of Service/Experience (QoS/QoE). In particular, network operators leverage the features of NFV, SDN, Edge Computing, and Satellite Networks to meet the service requirements of the users/verticals while aiming to reduce their overall CapEx and OpEx \cite{NGMN2_5G}. However, it is estimated that around 65-70\% of the total cost of ownership is dedicated only to RAN deployments, operations, and maintenance \cite{O-RAN_WP1} \cite{O-RAN_PW}. Therefore, network operators are focusing on reducing the CapEx and OpEx of the RAN by redesigning it in a flexible manner such that different deployment options and new services can be supported.
In general, the RAN consists of an RU and a BBU for radio signal transmission/reception and radio resource management, respectively. In traditional RAN or D-RAN (e.g., NB and eNB), both RU and BBU are integrated in an all-in-one architecture for easy implementation, and thus each D-RAN operates independently. As the number of UEs attempting to attach to the network increases, additional D-RANs need to be deployed in a dense manner to accommodate the additional requests. In this case, network operators have to spend more money to increase the capacity and maintain the QoS/QoE. This is because traditional RAN networks are closed, proprietary, and designed as a single entity. Thus, CapEx and OpEx are major issues when it comes to base station deployments and operations \cite{O-RAN_WP1} \cite{O-RAN_PW} \cite{O-RAN_survey2}.
To reduce the overall expenditures, the Cloud RAN (C-RAN) concept was proposed first \cite{C-RAN}. C-RAN exploited the idea of RAN node resource sharing. The C-RAN concept is also referred to as Centralized RAN. The idea is to split the RU and BBU of the RAN into two separate units and process the signalling information of multiple RUs in a central pool of BBUs. In the C-RAN architecture, the RU is also referred to as the Remote Radio Unit/Head (RRU/RRH), as it is located at a distant place in a cell tower compared to the pool of BBUs in a single physical location \cite{C-RAN}. Here, the pool of BBUs is placed in a cloud data center. C-RAN supports centralized processing, collaborative radio, real-time radio networking, and scalability. A new FH interface is used as the transport network to connect and communicate between the BBU pool and a set of RUs~\cite{C-RAN2}. The industry standardized Common Public Radio Interface (CPRI) can be used for the FH implementation \cite{C-RAN3}. C-RAN architecture based deployments have reduced energy consumption, CapEx, and OpEx. However, there are limitations such as a single point of failure for the BBU, large FH overhead and throughput limitations, security issues, and vendor lock-in resulting from proprietary interface based deployments.
Virtualized RAN (vRAN) follows a similar approach to C-RAN and additionally employs virtualization. vRAN replaces proprietary and expensive hardware with COTS hardware and decouples the software from the hardware by applying the principles of NFV \cite{vRAN2}. Network functions can be deployed in a flexible manner using vRAN on top of virtual machines or containers, and there is no hardware dependency. Hence, vRAN enhances flexibility, scalability, and total cost of ownership savings. However, coordination among RAN nodes to distribute network resources for different services based on their service requirements is challenging, the complexity of network management increases significantly, and proprietary interface based deployments result in vendor lock-in.
Extensible RAN (xRAN) follows a similar approach to vRAN and, in addition, applies the principles of NFV and SDN to: i) decouple the RAN control plane from the user plane, ii) build a modular BBU software stack that operates on COTS hardware, and iii) define open north- and south-bound interfaces \cite{XRAN}. xRAN proposed a standardized FH and promoted the idea of managing RAN nodes through a RAN controller with a data analytics platform, as well as standardizing the interfaces between the BBU and the RAN controller \cite{Mavenir_WP1}. xRAN can support meeting the 5G performance requirements. To unify these efforts, the O-RAN Alliance was formed by merging the C-RAN Alliance and the xRAN Forum in 2018. The O-RAN Alliance leads the Open RAN movement to realize open, disaggregated, virtualized, and automated RAN deployments. Table \ref{tab:table-1} compares the characteristics of the RAN types and their evolution.
\begin{table}[]
\centering
\caption{Comparison of characteristics of RAN types and evolution.}
\label{tab:table-1}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{Types} & \textbf{\begin{tabular}[c]{@{}c@{}}RU and BBU \\ Separation\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}BBU at \\ Cloud\end{tabular}} & \textbf{Virtualization} & \textbf{\begin{tabular}[c]{@{}c@{}}Control and User\\ Plane Separation\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Standardized\\ Interfaces\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Multi-vendor\\ Support\end{tabular}} & \textbf{Automation} \\ \hline
D-RAN & No & No & No & No & NA & No & No \\ \hline
C-RAN & Yes & Yes & No & No & Partially Yes & Partially Yes & No \\ \hline
vRAN & Yes & Yes & Yes & No & Partially Yes & Partially Yes & No \\ \hline
xRAN & Yes & Yes & Yes & Yes & Mostly Yes & Mostly Yes & Mostly Yes \\ \hline
O-RAN & Yes & Yes & Yes & Yes & Yes & Yes & Yes \\ \hline
\end{tabular}
\end{table}
\subsection{3GPP and 5G}
\label{section:3GPP and 5G}
The 3rd Generation Partnership Project (3GPP) is a consortium formed in 1998 by seven regional telecommunications standard development organizations to develop global standards for 3G \cite{3GPP_Intro}. Later, the scope of 3GPP was extended and it is now responsible for developing standards for beyond-3G systems such as 4G and 5G \cite{3GPP_Intro}. 3GPP standard specifications are structured in terms of Releases \cite{3GPP_Releases}. 3GPP specifications are primarily defined in a three-stage process: i) stage 1 specifications define the service requirements from the user point of view, ii) stage 2 specifications define an architecture to support the service requirements defined in stage 1, and iii) stage 3 specifications define an implementation of the architecture defined in stage 2 by specifying protocols in detail. 3GPP has defined the 5G System architecture for different scenarios and use cases since Release 15 in different phases \cite{3gpp_23.501}. The 5G System Phase 1 was specified in 3GPP Release 15, Phase 2 was specified in Release 16, and the capabilities of the 5G System were enhanced further in 3GPP Release 17. Currently, 3GPP Releases 18 and 19 are being specified as part of 5G Advanced and they are still open.
In the 5G System, the RAN is called the Next Generation RAN (NG-RAN) and the new air interface for 5G is called New Radio (NR). 5G NR is a new radio access technology, i.e., a physical connection method for radio based communication. Two frequency ranges are defined for 5G NR: Frequency Range 1 (FR1), which includes sub-6 GHz frequency bands, and Frequency Range 2 (FR2), which includes frequency bands from 24.25 GHz to 71 GHz \cite{3gpp_38.101-1} \cite{3gpp_38.101-2}. NG-RAN supports both 5G NR radio access and 4G Long Term Evolution (LTE) radio access \cite{3GPP_38.300}. An NG-RAN node (i.e., base station) can be either a gNB (5G base station), which provides NR radio access towards the UE, or an ng-eNB (evolved 4G base station connected to the 5G core), which provides LTE radio access towards the UE \cite{3GPP_38.300}. The gNBs and ng-eNBs are interconnected via the Xn interface. NG-RAN supports Multi-Radio Dual Connectivity (MR-DC), in which a UE can connect to two NG-RAN nodes (LTE-NR or NR-NR)~\cite{3GPP_38.300}~\cite{3GPP_37.340}.
From a deployment point of view, the 3GPP 5G System supports two modes: Non-Standalone (NSA) and Standalone (SA). In NSA mode, LTE and NR are tightly integrated and connected to the existing 4G CN (Evolved Packet Core (EPC)), leveraging MR-DC towards the UE. In the MR-DC architecture, the gNB and eNB provide radio resources concurrently towards the UE for high data rates. In NSA mode, the eNB handles both control plane and user plane data and the gNB handles only user plane data. In SA mode, the gNB is connected to the 5GC directly and MR-DC with two NG-RAN nodes (NR-NR) can be leveraged for high data rates. In SA mode, both gNBs handle control plane and user plane data. The NSA mode of deployment can support the eMBB class of services, and it is expected that the SA mode of deployment can support the URLLC and mMTC classes of services with the 5G Advanced system.
The 4G and 5G architectures are shown in Figure \ref{fig:4G_5G}. In Figure \ref{fig:4G_arch}, the 4G architecture depicts the traditional RAN deployment. As shown in Figure \ref{fig:5G_arch}, the 5G architecture supports RAN disaggregation and an NG-RAN node gNB may consist of a Central Unit (gNB-CU) and one or more Distributed Units (gNB-DU) \cite{3GPP_38.401}. The gNB-CU and gNB-DU units are connected via the F1 interface. One gNB-DU is connected to only one gNB-CU. A gNB is connected to the 5G Core (5GC) via the NG interface. The interfaces Xn, F1, and NG are logical interfaces and they can be further split with respect to control plane and user plane data (e.g., F1-C and F1-U). The gNB-CU can be split into gNB-CU-CP and gNB-CU-UP with respect to control plane and user plane separation, and they are interconnected via the E1 interface. In the end-to-end 5G System architecture, the N2 interface is used in the place of NG-C and the N3 interface is used in the place of NG-U \cite{3gpp_23.501}. There are many deployment options that can be considered by network operators based on the requirements of the users \cite{3GPP_38.801}. NG-RAN supports eight different functional split options between the CU and DUs, from higher layer splits to lower layer splits, to realize enhanced performance \cite{3GPP_38.801}.
\begin{figure*}[!t]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=0.6]{Figures/4G_arch.pdf}
\vspace{-16cm}
\caption{4G architecture}
\label{fig:4G_arch}
\end{subfigure}
\hspace{-1cm}
%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=0.45]{Figures/5G_arch.pdf}
\vspace{-11.5cm}
\caption{5G architecture}
\label{fig:5G_arch}
\end{subfigure}
\caption{4G and 5G architectures.}
\label{fig:4G_5G}
\end{figure*}
5G NG-RAN supports disaggregation of RAN functionalities, flexible functional split options, and control plane and user plane separation for effective scaling. However, open interfaces and network functions, virtualization, optimization, and automation are not yet fully supported to offer various services based on the requirements of the users/industry verticals and enable interoperability \cite{O-RAN_PW} \cite{O-RAN_survey2}. Moreover, the management of mobile networks is increasingly complex due to the advent of 5G and the densification of heterogeneous disaggregated networks operating in multi-band frequency ranges to support multiple industry verticals and massive IoT connectivity. To overcome these limitations and issues, mobile networks need to be virtualized, software-driven, flexible, scalable, intelligent, reconfigurable, interoperable, and energy efficient \cite{O-RAN_WP1}. The O-RAN Alliance is working to fill this gap through open interfaces, intelligence, and a virtualized edge computing infrastructure.
\begin{table}[]
\centering
\caption{O-RAN Alliance groups and their objectives.}
\label{tab:WGs}
\begin{tabular}{|l|l|lll}
\cline{1-2}
\multicolumn{1}{|c|}{\textbf{Groups}} & \multicolumn{1}{c|}{\textbf{Objectives}} & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG1} & Focuses on O-RAN reference architecture and use cases & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG2} & \begin{tabular}[c]{@{}l@{}}Focuses on the non-RT RIC and its relevant open interfaces for non-RT \\ automation and optimization of RAN Radio Resource Management\end{tabular} & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG3} & \begin{tabular}[c]{@{}l@{}}Focuses on the near-RT RIC and its relevant open interfaces for near-RT \\ control and optimization of RAN elements and resources\end{tabular} & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG4} & Focuses on the open Fronthaul interfaces & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG5} & \begin{tabular}[c]{@{}l@{}}Focuses on 3GPP defined NG-RAN interfaces for enabling fully interoperable \\ multi-vendor ecosystem\end{tabular} & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG6} & Focuses on RAN cloudification and orchestration & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG7} & Focuses on the white box hardware design for open base station deployments & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG8} & Focuses on stack reference design based on the NR protocol stack & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG9} & Focuses on the new transport network based on fronthaul, midhaul, and backhaul & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG10} & Focuses on OAM and its relevant interfaces & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{WG11} & Focuses on security aspects of O-RAN reference architecture & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{FG1} & Focuses on O-RAN standardization development strategies & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{FG2} & Focuses on open source projects and related activities & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{FG3} & Focuses on testing and integration & & & \\ \cline{1-2}
\multicolumn{1}{|c|}{RG} & Focuses on next generation open and intelligent RAN principles & & & \\ \cline{1-2}
\end{tabular}
\end{table}
\subsection{O-RAN Alliance}
The O-RAN Alliance formed in 2018 by a group of leading network operators and vendors. The O-RAN Alliance aims to transform RAN towards open, intelligent, virtualized, fully interoperable, and autonomous RAN, while improving the performance, cost efficiency, and agility \cite{O-RAN_web1}. The core principles of O-RAN Alliance foster open interfaces and functions, multi-vendor interoperable RAN ecosystem, faster innovation, supporting virtualization, leveraging open source software, running on COTS white box hardware in cloud infrastructure, and automation of RAN using AI and ML \cite{O-RAN_web1}.
The O-RAN Alliance specifies its principles on top of the 3GPP 4G and 5G RANs (LTE and NR). In particular, the O-RAN Alliance takes the disaggregated NG-RAN (shown in Figure \ref{fig:5G_arch}) as a base and additionally introduces two RAN Intelligent Controllers (RICs), open interfaces, and an NFV based cloud infrastructure for enabling openness, automation, and cost-efficiency. RICs aid the automation and optimization of RAN elements and resources in Real-Time (RT) and non-RT to support diverse services on demand without violating the service requirements. The RICs are connected with the disaggregated NG-RAN components via open interfaces to enable a fully interoperable multi-vendor ecosystem. The O-RAN reference architecture is shown in Figure \ref{fig:O-RAN_arch}.
The O-RAN Alliance is actively involved in three main streams: Specification Work, O-RAN Software Community (SC), and Testing and Integration. The O-RAN specification work has been divided into technical Work Groups (WG), Focus Groups (FG), and Research Groups (RG) to improve the efficiency of RAN deployments (e.g., energy- and spectral-efficient) and operations of mobile networks (e.g., cost-effective and agile) \cite{O-RAN_web2}. The O-RAN Alliance groups and their activities are listed in Table \ref{tab:WGs}.
\section{O-RAN Architecture}
The O-RAN architecture has been built upon the foundation set forth by the 3GPP 5G System. The O-RAN Alliance adds new RAN functions and open, interoperable interfaces. The O-RAN Alliance's approach to the RAN architecture is to provide additional benefits to network operators, such as minimizing deployment cost (e.g., using white box COTS servers) and increasing supply chain diversity (e.g., more vendors and a vibrant supply chain ecosystem), while handling complexity through intelligence and enabling interoperability among different vendors. The main changes in the O-RAN architecture in comparison with the 3GPP NG-RAN architecture are: the introduction of new nodes such as the RICs and service management entities, the introduction of new open interfaces, and the cloud infrastructure.
Figure \ref{fig:O-RAN_arch} shows the O-RAN reference architecture with RAN nodes/functions and interfaces, which provides the foundation for building virtualized RAN on open hardware in cloud infrastructure with intelligent radio controllers powered by AI/ML~\cite{O-RAN_web1}~\cite{W1a}. The O-RAN architecture consists of various physical and logical components which are connected through different interfaces. The O-RAN reference architecture is being developed by O-RAN Alliance WG1 \cite{W1a}, which mainly consists of: i) Service Management and Orchestration (SMO) Framework, ii) RICs and closed-loop control, iii) O-RAN enabled CU (O-CU), iv) O-RAN enabled DU (O-DU), v) O-RAN enabled RU (O-RU), vi) O-RAN enabled Cloud Infrastructure (O-Cloud), vii) O-RAN enabled eNB (O-eNB), and viii) UEs. The radio side of the O-RAN reference architecture includes near-RT RIC, O-CU-CP, O-CU-UP, O-DU, and O-RU functions, whereas the management side includes SMO Framework containing a Non-RT-RIC and other management functions. The main entities of the O-RAN reference architecture are briefly described below.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.5]{./Figures/O-RAN_Arch_v6.pdf}
\vspace{-1.8cm}
\hspace{3cm}\caption{O-RAN reference architecture.}
\label{fig:O-RAN_arch}
\end{figure}
\subsection{SMO Framework}
The 5G RAN as envisioned by O-RAN is designed to be highly flexible, quickly scalable, and multi-vendor interoperable for different deployment models. Thus, the management and automation aspects of the RAN attain paramount importance. The SMO framework introduced in the O-RAN architecture (shown in Figure \ref{fig:O-RAN_arch}) is a collection of integrated functions and services that is in charge of handling all the management, orchestration, and automation aspects of various O-RAN components. In principle, the SMO is similar to the Management and Orchestration (MANO) entity in NFV reference architecture \cite{NFV_arch} \cite{NFV-MANO_arch}. SMO operates on Service Based Architecture (SBA) principles to provide and consume services (e.g., authentication, authorization, service registration and discovery, data management, trained model sharing etc.) using standardized service-based interfaces which enable interoperability within SMO functions \cite{W1a}.
SMO provides Fault, Configuration, Performance, Authentication, and Security (FCAPS) management functions to O-RAN nodes/components and O-Cloud via different interfaces (including O1, O2, A1, and Open FH interfaces). SMO also supports RAN optimization through non-RT RIC, O-Cloud management via O2 interface, orchestration, and workflow management \cite{W1a}.
\subsection{RICs and Closed-Loop Control}
With the emergence of data-driven intelligent AI/ML technologies, the O-RAN Alliance aims to leverage these in order to automate RAN deployment and operations. The O-RAN Alliance introduces two flavors of the RIC as software-defined components that are responsible for controlling and optimizing the O-RAN elements and resources. The first one is non-Real-Time RIC which deals with functionality and operations at a larger time scale (more than 1s). The other one is near-Real-Time RIC which deals with functionality and operations at a smaller time scale and close to real-time (10ms to 1s). The RICs connect to each other via the A1 interface and to other O-RAN components via the E2 interface.
The O-RAN architecture incorporates the concept of control loops, which can be defined as closed-loop autonomous action and feedback loops intended for network optimization by providing real-time intelligence and analytics \cite{Control_Loops}. The O-RAN reference architecture supports three types of control loops: non-RT loops, near RT loops, and RT loops. The Non-RT RIC runs non-RT control loop and typical execution time is more than one second. The Near-RT RIC runs near-RT control loop and typical execution time is less than one second and more than ten milliseconds. The E2 nodes (O-RAN nodes that terminate E2 interfaces) run RT control loop and typical execution time is less than ten milliseconds. Multiple Control loops can run simultaneously and interact with each other depending on the use case requirements. The interaction between different control loops and network functions for various use cases are specified in~\cite{W1c}~\cite{W1h}. RICs use data collected from various O-RAN functions (e.g., number of users, users' mobility data and resource usage) to create an abstract and centralized view of the network and can use AI and ML algorithms and training models to automate the RAN operations (e.g., RAN slicing, handovers, scheduling policies, and managing conflicts) through closed-loop controls.
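As a simple illustration of how these time scales partition control tasks across the three loops, the sketch below (the helper function and the task names are hypothetical and purely illustrative) maps a latency budget to the loop that would host the corresponding control task:
\begin{verbatim}
# Illustrative mapping from a control task's latency budget to an O-RAN
# control loop, based on the time scales quoted above.
def select_control_loop(latency_budget_ms: float) -> str:
    if latency_budget_ms < 10:
        return "RT loop (E2 node, e.g., O-DU scheduler)"
    elif latency_budget_ms < 1000:
        return "near-RT loop (near-RT RIC xApp via E2)"
    else:
        return "non-RT loop (non-RT RIC rApp via A1/O1)"

# Example tasks with assumed (illustrative) latency budgets
for task, budget_ms in [("per-TTI scheduling", 1),
                        ("handover/load-balancing control", 200),
                        ("ML model retraining and policy update", 60000)]:
    print(task, "->", select_control_loop(budget_ms))
\end{verbatim}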
\subsection{O-RU}
O-RU is a radio unit located in cell sites which transmits, receives, and processes the radio signals at the physical layer of the network. Through the Open FH interface, the radio signals are communicated between the O-RU and the O-DU in the O-RAN reference architecture. As per the functional split option 7.2x, the O-RU implements the PHY-Low and Radio Frequency (RF) processing functions as shown in Figure \ref{fig:O-RAN_stack}. The O-RU terminates the Open FH interface for communication towards other O-RAN components as listed in Table \ref{tab:interfaces}.
\subsection{O-DU}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.7]{./Figures/O-RAN_stack.pdf}
\vspace{-7.3cm}
\hspace{3cm}\caption{O-RAN architecture with a functional split.}
\label{fig:O-RAN_stack}
\end{figure}
O-DU is a logical node that hosts part of the protocol functions of O-RAN flexibly in order to support diverse use cases. For instance as per the functional split option 7.2x, PHY-High, Medium Access Control (MAC), and Radio Link Control (RLC) protocols of the base station are hosted by O-DU as shown in Figure \ref{fig:O-RAN_stack} \cite{W1a}. The functionality of the O-DU can be split into two logical nodes: one is implementing the PHY-High function and the other is implementing the MAC and RLC functions on the received radio data\cite{O-RAN_survey1}. These two logical nodes can be used to implement the Small Cell Forum's standard interface Functional Application Platform Interface (FAPI) \cite{W1a} \cite{SCF} \cite{SCF1}.
\subsection{O-CU}
O-CU is a logical node that implements a set of base station protocols such as Radio Resource Control (RRC), Packet Data Convergence Protocol (PDCP), and Service Data Adaptation Protocol (SDAP) \cite{W1a}. O-RAN architecture leverages the SDN concept of decoupling the control plane from the user plane and applies it to split the CU into CU-CP and CU-UP. This methodology of splitting the CU into separate logical nodes is inherited from 3GPP and uses the E1 interface, which allows easy scaling and cost effective solutions for UP. Decoupling also enables advanced control functionality for better RRM via data-driven optimization, closed-loop control, and automation using advanced AI/ML tools.
As shown in Figure \ref{fig:O-RAN_stack}, O-CU-CP runs RRC and PDCP-C and O-CU-UP runs SDAP and PDCP-U \cite{W8a}. The network functions O-DU, O-CU-CP, O-CU-UP run in the edge cloud and RICs can run either in edge cloud or regional cloud of the O-Cloud infrastructure as shown in Figure \ref{fig:O-RAN_stack}.
\subsection{O-Cloud}
O-Cloud is a cloud computing platform that consists of a collection of physical infrastructure nodes to host O-RAN functions (e.g., near-RT RIC, O-CU-CP, O-CU-UP, and O-DU), support software components (e.g., Operating Systems, Virtual Machines, Containers, Network Function Images, and Templates), and support management and orchestration related functions (e.g., Infrastructure Managers, Network Function Managers, Service Orchestration, Resource Orchestration, and RAN Slice Managers). O-Cloud Notification interface is used to notify O-Cloud related information to O-RAN network functions.
O-Cloud can host RAN functions in edge cloud and regional cloud as shown in Figure \ref{fig:O-RAN_stack}. The functions O-DU, O-CU-CP, and O-CU-UP are placed in edge cloud to meet the latency requirements. Near-RT RIC can be placed either in edge cloud or regional cloud, whereas non-RT RIC is placed in regional cloud. There can be more than one edge clouds and regional clouds that are managed and orchestrated by O-Cloud and SMO.
\subsection{O-eNB}
O-eNB is an O-RAN enabled eNB or ng-eNB which supports O-DU and O-RU functions with an Open FH interface between them \cite{W1a}. O-eNB also supports E2 interface related functions and operations, and it is connected to near-RT RIC via E2 interface \cite{W1a}.
\subsection{UEs}
UEs are attached to O-RAN via Uu interface and can connect to O-eNB and O-gNB to access services simultaneously using MR-DC \cite{3GPP_38.300} \cite{3GPP_37.340}. UEs can be smartphones used by human beings and/or static and mobile IoT devices (e.g., sensors, machines, and vehicles) to support diverse vertical services.
\begin{table}[!t]
\centering
\caption{O-RAN interfaces.}
\label{tab:interfaces}
\begin{tabular}{|c|l|c|}
\hline
\textbf{Interfaces} &
\multicolumn{1}{c|}{\textbf{Description}} &
\textbf{\begin{tabular}[c]{@{}c@{}}Managing \\ Authority\end{tabular}} \\ \hline
A1 &
\begin{tabular}[c]{@{}l@{}}A1 is the interface between the Non-RT RIC function in SMO and \\ the Near-RT RIC function\end{tabular} &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
O1 &
\begin{tabular}[c]{@{}l@{}}The O1 interface is between O-RAN Managed Element (SMO) and \\ the management entity (Radio side nodes)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
O2 &
\begin{tabular}[c]{@{}l@{}}The O2 interface is between the SMO and O-Cloud to provide O-Cloud\\ platform resources and workload management\end{tabular} &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
E2 &
E2 is a logical interface connecting the near-RT RIC with an E2 Node &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
E1 &
This interface is between O-CU-CP and O-CU-UP functions &
3GPP \\ \hline
F1-c &
This F1-c interface is between the O-CU-CP and the O-DU functions &
3GPP \\ \hline
F1-u &
The F1-u interface is between the O-CU-UP and the O-DU functions &
3GPP \\ \hline
NG-c &
The NG-c interface is between O-CU-CP and the 5GC &
3GPP \\ \hline
NG-u &
The NG-u interface is between the O-CU-UP and the 5GC &
3GPP \\ \hline
X2-c &
\begin{tabular}[c]{@{}l@{}}The X2-c interface is for transmitting control plane information\\ between eNBs or between eNB and en-gNB in EN-DC\end{tabular} &
3GPP \\ \hline
X2-u &
\begin{tabular}[c]{@{}l@{}}The X2-u interface is for transmitting user plane information\\ between eNBs or between eNB and en-gNB in EN-DC\end{tabular} &
3GPP \\ \hline
Xn-c &
\begin{tabular}[c]{@{}l@{}}Xn-c interface is for transmitting control plane information \\ between gNBs, ng-eNBs or between ng-eNB and gNB\end{tabular} &
3GPP \\ \hline
Xn-u &
\begin{tabular}[c]{@{}l@{}}The Xn-u interface is for transmitting user plane information\\ between gNBs, ng-eNBs or between ng-eNB and gNB\end{tabular} &
3GPP \\ \hline
Uu &
The Uu interface is between the UE to eNB/gNB &
3GPP \\ \hline
R1 &
R1 interface is between rApps and functions that enable rApps in SMO &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
Y1 &
Y1 interface is between Y1 consumer and Near-RT RIC &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}O-Cloud \\ Notification\end{tabular} &
\begin{tabular}[c]{@{}l@{}}This interface is within O-Cloud where event consumer\\ subscribes to events/status of O-Cloud\end{tabular} &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
CTI &
CTI is the Cooperative Transport Interface between the O-DU and the transport network (TN) &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Open \\ Fronthaul\end{tabular} &
\begin{tabular}[c]{@{}l@{}}This interface is between O-DU and O-RU, and it is\\ further split into the CUS-Plane and the M-Plane\end{tabular} &
\begin{tabular}[c]{@{}c@{}}O-RAN\\ Alliance\end{tabular} \\ \hline
\end{tabular}
\end{table}
\section{3GPP NG-RAN INTERFACES AND O-RAN OPEN INTERFACES}
As discussed in Section \ref{section:3GPP and 5G}, 3GPP is a strong proponent of disaggregation of the RAN in 5G deployments and has thus introduced various new network functions and interfaces, thereby providing benefits related to lower deployment cost (using COTS servers) and increased supply chain diversity (more vendors and choices for operators) \cite{OpenInterfaces1}. However, this results in increased network complexity and integration costs. A good example of this problem can be seen at the interface between the DU and the RU. This interface is known as the FH interface and operates over the CPRI protocol. 3GPP leaves the implementation of this FH interface and CPRI protocol up to the vendor's choice \cite{OpenInterfaces2}, thereby limiting the interoperability among vendors. Another limitation of the 3GPP defined interfaces is the X2 interface, which is used for inter-connectivity between eNBs, but unfortunately its details were left optional and up to the vendors' implementation. This becomes an issue when deploying the 3GPP 5G NSA architecture, where the X2 interface is used for connectivity between existing eNBs and newly deployed gNBs. This ties operators down to reusing the existing 4G vendors in that space.
A primary goal of the O-RAN Alliance is to resolve these issues regarding interoperability and multi-vendor deployments. Apart from the 3GPP interfaces, the O-RAN Alliance has introduced a new set of interfaces that enable connectivity to the newly introduced components in the O-RAN architecture, such as the Near-RT RIC and Non-RT RIC, and also improves upon existing interfaces such as the FH and X2 interfaces.
The new Open FH interface is introduced to enable open connectivity between various implementations of the O-DU and O-RU. The O-RAN Alliance WG4 is tasked with developing and maintaining the specifications for the Open FH interface. The Open FH interface is subdivided into the Open FH CUS-Plane interface and the Open FH M-Plane interface \cite{W4b}. The Open FH CUS-Plane interface gets rid of the traditional vendor specific CPRI protocol and implements a newer, open standard called enhanced CPRI (eCPRI) \cite{W4b} \cite{eCPRI}.
The Open FH M-Plane interface facilitates connectivity between the O-RU and its managing entity (either the O-DU or the SMO). It specifically deals with the initialization, configuration, and management aspects of the O-RU. Depending on the location of the management component, the Open FH M-Plane can follow a hierarchical model (where the O-RU is managed entirely by an O-DU) or a hybrid model (where the O-RU is managed by the SMO) \cite{W4g}.
The O-RAN Alliance has also introduced a new set of open interfaces (e.g., O1, O2, A1, E2, and O-Cloud Notification) to leverage virtualization, cloud computing features, and network automation and service management using different types of RICs. The O-RAN Alliance has specifically created a new O1 interface to facilitate a common and open approach to implement management functionality between various managed elements and any management entity \cite{W1e} \cite{W1d}. The O-RAN Alliance WG5 is tasked with developing and maintaining the specifications for O1 interface terminating at O-DU or O-CU. It creates the O1 specification for O-CU used over the interface linking the O-CU with the SMO \cite{W5e}. It also creates the O1 specification for O-DU used over the interface linking the O-DU with other management plane entities (that may include the O-CU as well as SMO) \cite{w5g}. The O-RAN Alliance WG5 also focuses on existing 3GPP defined NG-RAN interfaces (e.g., X2, Xn, E1, F1, and NG) and complements the existing standards by defining further O-RAN specifications related to C-Plane functions, U-Plane functions, and transport network \cite{W5b} \cite{W5c} \cite{W5j}. The aim is to promote improved openness and fully operable multi-vendor deployments using existing 3GPP defined interfaces. The set of interfaces defined in the O-RAN reference architecture are listed in Table \ref{tab:interfaces}.
\section{RICs, Automation, and Optimization of the RAN}
Human operators cannot handle the increased complexity of deploying, optimizing, and operating next generation mobile networks in the traditional way \cite{XAI}. Thus, AI and ML features can be leveraged to automate the operations of network functions and reduce operational expenditures. Using AI and ML algorithms and models, intelligence can be embedded in every layer of the RAN architecture to enable dynamic local radio resource allocation on demand and optimize network-wide efficiency \cite{O-RAN_WP1}. A combination of O-RAN open interfaces, RAN virtualization, and AI-powered closed-loop control can enable multi-vendor deployments, automation of network operations, and optimization of heterogeneous resources \cite{O-RAN_WP1}.
The O-RAN reference architecture enhances the Radio Resource Management (RRM) functionalities with embedded intelligence by introducing the Non-RT RIC and Near-RT RIC through the A1 and E2 interfaces, respectively. RT analytics that drive embedded AI and ML backend modules will empower network intelligence \cite{O-RAN_WP1}.
\subsection{Non-RT RIC}
The non-RT RIC is a logical function that resides in the SMO and handles the data carried across the A1 interface as well as non-RT RIC applications. The primary goal of the non-RT RIC is to support non-real-time intelligent RRM, higher layer procedure optimization, policy optimization in the RAN, and the provision of AI/ML models to the near-RT RIC and other RAN functions. The A1 interface supports communication and information exchange between the non-RT RIC and the near-RT RIC. The key objectives of the A1 interface are to support policy-based guidance of near-RT RIC functions, the transmission of enrichment information in support of AI/ML models in the near-RT RIC, and basic feedback mechanisms from the near-RT RIC. The non-RT RIC manages non-RT control functionalities, which may take more than one second ($>$ 1s). The A1 interface is between the non-RT RIC in the SMO and the O-gNB/O-eNB containing the near-RT RIC, as shown in Figure \ref{fig:O-RAN_arch}. The non-RT RIC shares the messages generated from AI-enabled policies, ML-based training models, and RT control functions with the near-RT RIC for runtime execution through the A1 interface. Also, network management applications running in the non-RT RIC can utilize the data received from the O-DU and O-CU in a standardized format, and the processed/analysed data can be shared with the near-RT RIC to make decisions quickly. The non-RT RIC core algorithms are developed and managed by network operators to modify RAN operations based on individual policies and objectives for different deployment models. Non-RT RIC applications are called rApps; they are modular applications that leverage the functionality exposed via the non-RT RIC framework/SMO Framework to provide value added services related to RAN operation and optimization. The scope of rApps includes, but is not limited to, RRM, data analytics, and providing enrichment information.
The non-RT RIC and A1 interface related aspects are being specified by the O-RAN Alliance WG2~\cite{W2b}.
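To make the notion of policy-based guidance over the A1 interface concrete, the sketch below shows what a policy instance might look like when expressed as a simple data structure. This is purely illustrative: the policy type and field names are hypothetical, and the actual A1 policy types and their schemas are defined by the O-RAN A1 specifications and the applications that use them.
\begin{verbatim}
# Purely illustrative (hypothetical) A1 policy instance: the non-RT RIC
# guides the near-RT RIC to keep the latency of a given slice below a target.
a1_policy_instance = {
    "policy_type": "slice-qos-target",          # hypothetical policy type
    "policy_id": "slice-42-latency",
    "scope": {"slice_id": 42, "cell_list": ["cell-1", "cell-7"]},
    "statement": {"max_latency_ms": 10, "min_throughput_mbps": 50},
}

# A near-RT RIC xApp would read such guidance and enforce it through
# near-RT actions over the E2 interface (e.g., adjusting slice resources).
print(a1_policy_instance["statement"]["max_latency_ms"])
\end{verbatim}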
\subsection{Near-RT RIC}
The O-RAN reference architecture enhances the RRM to support network intelligence via RICs. The primary goal of the near-RT RIC architecture is to specify the near-RT control functionalities and the interfaces with the CU/DU. The near-RT RIC is a logical function which controls RAN elements and optimizes RAN resources in near-RT via fine-grained data collection and actions over the E2 interface \cite{W1a}. The near-RT RIC handles near-RT control functionalities, which may take less than one second ($<$ 1s), such as radio configuration management, mobility management, RAN slicing, QoS management, interference management, load balancing, connectivity and seamless handover management, enhanced/novel RRM functionalities, and resource block management. The near-RT RIC provides a robust, secure, and scalable platform that allows for flexible on-boarding of 3rd party applications (i.e., an application can be independent of the near-RT RIC). An application designed to run on the near-RT RIC is called an xApp, which may consist of one or more microservices that consume and provide data. The xApps and related functions can leverage the database called the Radio Network Information Base (R-NIB), which collects data from the underlying network via the E2 interface and from the non-RT RIC via the A1 interface. From a functionality point of view, an xApp shall be able to receive event-triggered RAN information and the time-varying network state, and provide collected logging, tracing, and metrics information to the near-RT RIC. The near-RT RIC may include an AI and ML workflow including model training, inference, and updates \cite{W3a}. Various RAN measurement data are fed to the near-RT RIC via the E2 interface for better RRM. The E2 interface also carries configuration commands directly from the near-RT RIC to the CU and DU. The near-RT RIC will execute new models (e.g., training models, policy decisions, and traffic and mobility predictions) received from the non-RT RIC to change the functional behaviour of the network and its applications. The RRM functional allocation between the near-RT RIC and the E2 Node is subject to the capability of the E2 Node exposed over the E2 interface by means of the E2 Service Model~\cite{W3g}. The near-RT RIC and E2 interface related aspects are being specified by the O-RAN Alliance WG3 \cite{W3c}.
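As a rough illustration of the xApp pattern described above, the following skeleton (all names are hypothetical placeholders, not taken from any particular RIC platform SDK) reacts to E2 measurement indications under A1 policy guidance and returns a near-RT control decision:
\begin{verbatim}
# Hypothetical xApp skeleton; the class, method, and field names are
# placeholders and do not correspond to a real RIC platform API.
class LoadBalancingXapp:
    def __init__(self, a1_policy):
        self.max_prb_utilization = a1_policy["statement"]["max_prb_utilization"]

    def on_e2_indication(self, cell_id, prb_utilization):
        # Near-RT decision: offload traffic when a cell exceeds the target
        if prb_utilization > self.max_prb_utilization:
            return {"cell_id": cell_id, "action": "shift_users_to_neighbor"}
        return None

# Illustrative policy guidance and measurement report
policy = {"statement": {"max_prb_utilization": 0.8}}
xapp = LoadBalancingXapp(policy)
print(xapp.on_e2_indication("cell-7", 0.92))
\end{verbatim}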
\section{Virtualization and Cloud Infrastructure}
\subsection{NFV Infrastructure}
O-RAN network functions can be realized as Virtualized Network Functions (VNFs) and/or Physical Network Functions (PNFs). VNFs can run on Virtual Machines (VMs) and/or containers on top of the NFV Infrastructure (NFVI) in the O-Cloud, while PNFs can be specially designed on dedicated hardware to support certain features (e.g., reliability and security). NFVI platforms can be used to support various functional split options, such as a higher-layer split between PDCP and RLC and a lower-layer split within the PHY. These functions can be implemented as software modules on top of COTS hardware inside private or public cloud infrastructures. This allows the available resources to be utilized effectively, since multiple network functions can be virtualized and run on the same COTS hardware, which reduces the overall CapEx and OpEx. The O-RAN framework follows similar principles to ETSI NFV, which focuses on virtualization, softwarization, and cloudification \cite{NFV_5G}. The O-RAN Alliance WG6 specifies the cloud architecture and deployment scenarios to run O-RAN network functions on VMs and OS containers in~\cite{W6f}, similar to the ETSI NFV framework. RAN cloudification is one of the fundamental key principles of the O-RAN architecture. The O-RAN Alliance WG6 has also specified a set of use cases regarding the deployment of O-RAN network functions on O-Clouds, as well as relevant functional and interface requirements between the SMO and the O-Cloud \cite{W6h}.
The primary goal of NFV is to disaggregate the hardware and software parts of the network functions defined as part of O-RAN. As defined in \cite{W6f}, the architecture is broadly categorized into three layers: a bottom hardware layer (mapping to the ETSI NFVI hardware sublayer in the case of VM/container based deployment), a middle layer that includes cloud stack functions as well as Acceleration Abstraction Layer functions (mapping to the ETSI NFVI virtualization sublayer and the Virtualized Infrastructure Manager (VIM) in the case of VM/container based deployment), and a top layer that supports the virtualized RAN functions such as O-DU, O-CU-CP, O-CU-UP, and near-RT RIC.
The ETSI NFV specifications define the NFV MANO architectural framework, a set of functional blocks and interfaces enabling the deployment and management of VM and container based VNFs \cite{NFV_arch} \cite{NFV-MANO_arch}. The management of virtualized and container based resources is performed according to a specified set of interfaces, models, and APIs \cite{W6l}. The NFV MANO functional blocks and their main responsibilities are \cite{NFV_arch} \cite{W6f}: i) the VIM, which is responsible for the management of virtualized resources (virtual compute, virtual network, and virtual storage) in the NFVI and of VM software images, ii) Container Infrastructure Service Management, which is responsible for the management of container based resources and container workloads in Container Infrastructure Service clusters, iii) the Container Image Registry, which is responsible for the management of OS container software images, iv) the VNF Manager, which is responsible for the management of VNFs, including their lifecycle management, and provides the corresponding management services based on a common set of interfaces and models regardless of the technology used for implementing the VNF, be it VM or container, and v) the NFV Orchestrator, which is responsible for the management of VNF packages and other artifacts, the overall orchestration of NFVI resources across multiple sites (also referred to as NFVI points-of-presence), and the lifecycle management of network services.
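The toy Python sketch below illustrates how these functional blocks interact when a network service composed of virtualized O-RAN functions is instantiated. Class names, resource figures, and descriptor fields are illustrative assumptions and do not follow any ETSI-defined data model.
\begin{verbatim}
# Purely illustrative instantiation flow across simplified MANO blocks.
class VIM:
    def allocate(self, vcpu, mem_gb, net):
        # would reserve virtual compute/storage/network in the NFVI
        return {"vcpu": vcpu, "mem_gb": mem_gb, "net": net}

class VNFManager:
    def __init__(self, vim):
        self.vim = vim
    def instantiate(self, vnfd):
        # lifecycle management of a single VNF based on its descriptor
        resources = self.vim.allocate(vnfd["vcpu"], vnfd["mem_gb"], vnfd["net"])
        return {"vnf": vnfd["name"], "resources": resources, "state": "INSTANTIATED"}

class NFVOrchestrator:
    def __init__(self, vnfm):
        self.vnfm = vnfm
    def deploy_network_service(self, vnfds):
        # overall orchestration of NFVI resources and network-service lifecycle
        return [self.vnfm.instantiate(d) for d in vnfds]

# Hypothetical descriptors for two virtualized O-RAN functions.
vnfds = [
    {"name": "o-du", "vcpu": 16, "mem_gb": 32, "net": "fronthaul"},
    {"name": "o-cu-up", "vcpu": 8, "mem_gb": 16, "net": "midhaul"},
]
nfvo = NFVOrchestrator(VNFManager(VIM()))
print(nfvo.deploy_network_service(vnfds))
\end{verbatim}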
\subsection{Edge Computing}
Mobile network operators are investing in edge infrastructure, with market drivers that include use cases from private cellular networks to edge robotics in industry, as well as disaggregated open networks. Currently, less than 10\% of traffic is processed outside of traditional operator data centers. However, the Mobile Edge Forum predicts that in the coming decade up to 75\% of processing will happen outside of traditional data centers, due to the anticipated growth of edge computing across industry vertical services (e.g., remote healthcare diagnostics, industrial IoT, and online gaming) \cite{MEF}. With the introduction of the RIC function in O-RAN, the application hosting capability of operator networks favours edge computing and O-RAN network APIs. With edge computing, operators can host third-party RAN applications much closer to the consumer and have a per-user or per-device control mechanism in place. 5G use cases such as traffic steering, QoS optimization, and user mobility robustness, among others, can be supported with the help of edge computing to offer faster and higher quality ``per-user'' services as part of any dynamic network \cite{MEF}.
Multi-access/Mobile Edge Computing (MEC) enables the placement of applications close to the customer and the use of RAN contextual information. It supports seamless application mobility, provides service-oriented APIs for users’ location and radio conditions, and handles application-related traffic redirection. The MEC architecture consists of the MEC Platform~(MEP) that hosts MEC applications, the MEP Manager which is responsible for the management of platform and MEC applications life cycle, and the Virtualization Infrastructure and its manager \cite{MEC_GS003}. MEC leverages the features of NFV, SDN, network slicing, and AI/ML to offer services dynamically and improve the QoS and QoE \cite{MEC2}.
The 5GC User Plane Function and 5G RAN (CU, DU, and RU) can be deployed at the network edge cloud for latency considerations and flexible execution of network functions based on the use case. MEC can be co-located with O-RAN to leverage the unique features available in both networks and reduce CapEx and OpEx. For instance, MEC hosts can be combined with the O-RAN near-RT RIC, the MEC based control plane services can be integrated with O-RAN services, MEC databases (about UE locations, cell performance, Radio Network Information Service) can be integrated with O-RAN databases, and the MEC Application Orchestrator can be used for xApps orchestration (including SON functions). The MEC application mobility mechanism can be reused in multi-near-RT RIC environment, solving an essential problem of the inter-near-RT RICs cooperation \cite{MEC2}.
\section{Network Slicing and RAN Slice Optimization}
\subsection{Network Slicing}
Network slicing is a key feature for 5G where multiple logical networks are created from a single physical infrastructure, with isolation of resources and optimized topology to serve a specific service category such as mMTC use cases which require low-power wide area communication, eMBB use cases that need high data rates, or URLLC use cases such as autonomous driving which demands low latency and high reliability \cite{3GPP_28.530}. Achieving E2E network slicing with predictable QoS is essential for a growing number of 5G services that depend on network slicing to operate at scale.
Network slicing overlays multiple virtual networks on top of a shared network domain, that is, a set of shared network and computing resources. 5G RAN slicing is part of an end-to-end (E2E) network slicing deployment for a 5G SA network.
Network slicing enables service providers to maximize the use of network resources and service flexibility by leveraging the principles of 5G software and hardware disaggregation using NFV. This helps service providers to: i) create new revenue opportunities by lowering the barriers to trying out new service offerings, ii) increase service flexibility by enabling more kinds of services to be offered simultaneously, and iii) support rapid scaling and better time to market as all physical infrastructures are pooled as common shared resources \cite {slicing}.
The slicing framework can be broadly categorized into three distinct layers, namely the Infrastructure Layer (IL), the Network Function Layer (NFL), and the Service Layer (SL); all three layers are managed by a MANO entity. The IL broadly refers to the physical network infrastructure consisting of both RAN and CN. It also includes deployment, control, and management of the infrastructure, the allocation of resources (computing, storage, network, radio) to slices, and the way that these resources are made available to and managed by the higher layers. The NFL encapsulates all the operations related to the configuration and lifecycle management of the network functions that, after being optimally placed over the (virtual) infrastructure and chained together, offer an E2E service that meets the constraints and requirements described in the service design of the network slice. The SL caters to the way that services are described and mapped to the underlying network components, and to the architecture of network slicing managers and orchestrators. An important element that distinguishes network slicing in the context of 5G from other forms of slicing that have been considered in the past (e.g., cloud computing) is its E2E nature and the requirement to express a service through a high-level description and to flexibly map it to the appropriate infrastructural elements and network functions. This observation naturally leads to two new high-level concepts: i) a service layer that is directly linked to the business model behind the creation of a network slice, and ii) network slice orchestration for a slice's lifecycle management.
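The following sketch is a deliberately simplified, purely illustrative mapping of a slice request through the three layers above; the attribute names, placement rule, and resource figures are assumptions chosen only to make the layering concrete.
\begin{verbatim}
# Service Layer: business-level description of the slice.
slice_request = {
    "name": "urllc-factory",
    "type": "URLLC",
    "latency_ms": 5,
    "reliability": 0.99999,
}

def design_network_functions(request):
    # Network Function Layer: choose and chain functions that can meet the SLA.
    chain = ["o-ru", "o-du", "o-cu-up", "upf-edge"]
    placement = "edge-cloud" if request["latency_ms"] <= 10 else "regional-cloud"
    return {"chain": chain, "placement": placement}

def allocate_infrastructure(nf_plan):
    # Infrastructure Layer: reserve shared compute/radio resources for the slice.
    return {"site": nf_plan["placement"], "vcpu": 24, "prb_share": 0.2}

nf_plan = design_network_functions(slice_request)
il_plan = allocate_infrastructure(nf_plan)
print(nf_plan, il_plan)
\end{verbatim}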
\subsection{Slice Monitoring and SLA Assurance}
Network slicing is the foundation for many use cases unique to 5G, ranging from private enterprise 5G network deployments with specific Service Level Agreement (SLA) requirements to low-latency applications like Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). SLAs are created between the mobile operator and the business customer based on the service requirements and must be monitored to ensure there are no violations. RAN slicing enables service differentiation handling on the RAN, allowing for the effective use of dynamic radio resource partitioning, slice-aware QoS enforcement, and slice orchestration functionality for meeting the SLA \cite{RAN_slicing}.
The O-CU-CP and O-CU-UP should be slice aware and execute slice-specific resource allocation and isolation strategies. They are initially configured through O1 based on slice-specific requirements and then dynamically updated through E2 via the near-RT RIC for various slicing use cases. They may also generate and send specific performance measurements (PMs) through O1 and E2, which can be used for slice performance monitoring and slice SLA assurance purposes. Network slicing can thus be tailored to specific business requirements. O-RAN's open interfaces and AI/ML based architecture will enable such challenging mechanisms to be implemented and help pave the way for operators to realize the features of network slicing in an efficient manner \cite{W1i}.
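A minimal sketch of the slice SLA assurance idea is given below: per-slice PMs reported over O1/E2 are compared against the agreed SLA, and a violation triggers a corrective action towards the RIC. The counter names, thresholds, and report format are illustrative assumptions rather than standardized measurement definitions.
\begin{verbatim}
# Agreed SLA for one slice (illustrative values).
sla = {"slice": "urllc-factory", "max_latency_ms": 5.0, "min_throughput_mbps": 50.0}

# Example PM report for one granularity period, as it might arrive from O-CU/O-DU.
pm_report = {
    "slice": "urllc-factory",
    "ul_latency_ms": 6.2,
    "dl_throughput_mbps": 74.0,
}

def check_sla(sla, pm):
    violations = []
    if pm["ul_latency_ms"] > sla["max_latency_ms"]:
        violations.append("latency")
    if pm["dl_throughput_mbps"] < sla["min_throughput_mbps"]:
        violations.append("throughput")
    return violations

violated = check_sla(sla, pm_report)
if violated:
    # e.g. trigger the RIC (via A1/E2) to re-partition radio resources for the slice
    print(f"SLA violation on {pm_report['slice']}: {violated}")
\end{verbatim}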
\section{O-RAN Security Aspects}
The O-RAN Alliance WG11 has been focusing on the security aspects of the O-RAN reference architecture network functions, interfaces, and cloud infrastructure. The objective is to build a secure, open, interoperable, and automated RAN. The focus of WG11 is on mitigating risk across the O-RAN architecture in order to strengthen the security of Open RAN. The O-RAN Alliance has specified the inherent security benefits of Open RAN \cite{W1a} (e.g., transparency and common control, interoperability of security protocols and security features, a secure supply chain, and enhanced intelligence), possible threats \cite{W11d} (e.g., against the architecture, cloud, supply chain, open source code, 5G radio networks, AI/ML system, and physical infrastructure), the possible attack surface (e.g., newly added functions and open interfaces, architecture modifications, decoupling of software and hardware, third-party xApps and rApps, VMs, containers, and open source software), risk assessment \cite{W11d}, security requirements \cite{W11b} (e.g., confidentiality, integrity, availability, authentication, authorization, and access control), security principles and controls~\cite{W1a}~\cite{W11a} (e.g., using secure protocols such as SSHv2, TLS 1.2 and 1.3, DTLS 1.2, IPsec, CMPv2, OAuth 2.0, and NETCONF over Secure Transport on O-RAN interfaces), and security testing \cite{W11c} (e.g., methodology, test cases, validation, and evaluation) to mitigate risk \cite{W1a}. Furthermore, the O-RAN Alliance WG11 is studying the security aspects of different network functions~\cite{W11f}~\cite{W11e} (e.g., non-RT RIC and near-RT RIC) and of the cloud infrastructure \cite{W11g} (e.g., O-Cloud).
It is expected that the security services offered by O-RAN should be at least at the same level as those of a 3GPP 5G NG-RAN~\cite{W1a}. Hence, the O-RAN Alliance is following the 3GPP security design principles and industry best practices to ensure the level of security expected by network operators and users. In addition, the O-RAN Alliance follows the principle of a zero trust architecture, which assumes no implicit trust of a user/asset based on location or ownership, due to the disaggregation of the RAN and the involvement of multiple industry players \cite{W1a} \cite{ZTA}.
\section{Use Cases}
The O-RAN Alliance WG1 has specified a set of use cases which employ AI/ML based methods to achieve their goals \cite{W1h} \cite{W1i}. The use cases can be broadly classified into three categories: user application related use cases, network resource optimization related use cases, and performance and QoS/QoE related use cases.
\begin{itemize}
\item Some of the user application related use cases are:
\begin{itemize}
\item \textit{Context-based dynamic handover management for Vehicle-to-Everything (V2X)} which improves the functionality of V2X applications by resolving handover related issues when vehicles move at high speed by using RICs and V2X Application Server.
\item \textit{Radio resource allocation for UAV application scenario} which supports new applications using edge cloud and RICs.
\end{itemize}
\item Some of the network resource optimization related use cases are:
\begin{itemize}
\item \textit{RAN sharing} which increases network capacity and coverage while decreasing the cost of network implementation by using RICs.
\item \textit{Dynamic Spectrum Sharing} which aids operators to dynamically share spectral resources already deployed between LTE and NR devices without compromising the QoE of the current 4G subscribers while providing the same level of coverage and essential QoS to NR devices by using RICs.
\item \textit{Shared O-RU}, which allows a controller (a NETCONF client) to determine and configure how the resources of a shared O-RU are partitioned between shared resource operators, by using RICs.
\end{itemize}
\item Some of the performance and QoS/QoE related use cases are:
\begin{itemize}
\item \textit{Energy saving}, in which one or more carriers, or the entire cell, are switched off when the cell load is minimal; this can be accomplished over timescales of minutes, hours, and above using RICs (a simple sketch of such a decision rule is given after this list).
\item \textit{Flight path based dynamic UAV radio resource allocation} which improves mobility performance and QoE using RICs.
\item \textit{QoE Optimization} which aids for traffic recognition, QoE prediction and QoS enforcement decisions using RICs.
\item \textit{QoS Based Resource Optimization}, in which the network is equipped with features to maintain resource separation between slices and to keep track of whether each slice's Service Level Specifications (SLS) are met, using RICs.
\end{itemize}
\end{itemize}
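As referenced in the energy-saving item above, the following toy sketch shows the kind of decision rule a RIC application could apply before requesting a carrier switch-off; the threshold, observation window, and sampling period are illustrative assumptions.
\begin{verbatim}
def energy_saving_decision(cell_load_history, low_load_threshold=0.1, window=12):
    """Decide whether a capacity carrier can be switched off.

    cell_load_history: PRB-utilization samples (e.g. one per 5-minute period).
    Returns True when the load stayed below the threshold for the whole window.
    """
    recent = cell_load_history[-window:]
    return len(recent) == window and max(recent) < low_load_threshold

loads = [0.06, 0.05, 0.07, 0.04, 0.05, 0.06, 0.05, 0.04, 0.06, 0.05, 0.05, 0.04]
if energy_saving_decision(loads):
    # would be issued as a RIC control/configuration action in practice
    print("Request carrier switch-off for this cell")
\end{verbatim}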
\section{Deployment Aspects and Open Source Projects}
The O-RAN Alliance encourages the use of open source software and open white box hardware to reduce deployment cost and to develop an interoperable Open RAN ecosystem that can be tested and integrated easily. The use of white box COTS hardware together with software designed in a modular fashion enables planning for scale-out designs for capacity, availability, and reliability, supporting automation and optimization based on the service requirements. O-RAN white box hardware aspects for different base station deployment types are considered in WG7 to improve the performance, energy and spectral efficiency, and cost efficiency~\cite{W7a}.
\subsection{Deployment Aspects}
Operator requirements for different deployment scenarios (e.g., Indoor Picocell, Outdoor Picocell, Outdoor Microcell, Integrated Access and Backhaul, and Outdoor Macrocell), use cases (e.g., eMBB and URLLC), and base station classes (e.g., local area, medium area, and wide area) for O-RAN white box hardware are being specified by WG7 \cite{W7a}. It also considers carrier frequency (e.g., FR1 and FR2), inter-site distance, and other base station related key attributes in the base station deployment scenarios. Key performance indicators such as peak data rate, peak spectral efficiency, bandwidth, latency, and mobility are considered to specify the requirements for both indoor and outdoor base station deployment scenarios~\cite{W7a}~\cite{W7h}~\cite{W7i}~\cite{W7j}.
Base station architecture can be of three types for any deployment scenarios: i) split architecture (O-RU and O-DU are physically separated), ii) integrated architecture (O-RU and O-DU are implemented on one platform), and iii) all-in-one architecture (O-RU, O-DU, and O-CU are implemented on one platform) \cite{W7a}.
As defined by 3GPP \cite{3GPP_38.801}, there are eight different FH split options. For the white box hardware reference design architecture, the O-RAN Alliance currently supports three FH split options: i) split option 6 (all PHY functions reside in the O-RU and the remaining RAN protocol functions reside in the O-DU and O-CU), ii) split option 7.2x (RF and PHY-Low functions reside in the O-RU, while PHY-High, MAC, and RLC functions reside in the O-DU, as shown in Figure \ref{fig:O-RAN_stack}), and iii) split option 8 (only the RF function resides in the O-RU and the remaining PHY and other RAN protocol functions reside in the O-DU and O-CU) \cite{W7c} \cite{W7d} \cite{W7e} \cite{W7f} \cite{W7g}. The O-CU shall be placed at a data centre, while the O-DU can be placed either at the data centre or at a cell site. The base station split architecture can be further divided into two types, based on whether protocol translation is supported or not. A FH Gateway (FHGW) can be used to interconnect the O-RU/RU and O-DU, and the FHGW supports protocol translation based on the FH split options \cite{W7b}. An optional FH Multiplexer (FHM) or a switch can be used in place of the FHGW to connect the O-CU and O-DU with multiple or cascaded O-RUs using the FH interface; however, the FHM or switch does not support protocol translation.
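For quick reference, the three supported split options above can be restated as data; the dictionary below simply summarizes the text using generic protocol-layer names and is not a normative mapping.
\begin{verbatim}
# Plain restatement of the supported fronthaul splits as a lookup table.
fh_splits = {
    "option 6":    {"O-RU": ["RF", "PHY"],
                    "O-DU/O-CU": ["MAC", "RLC", "PDCP", "RRC"]},
    "option 7.2x": {"O-RU": ["RF", "PHY-Low"],
                    "O-DU/O-CU": ["PHY-High", "MAC", "RLC", "PDCP", "RRC"]},
    "option 8":    {"O-RU": ["RF"],
                    "O-DU/O-CU": ["PHY", "MAC", "RLC", "PDCP", "RRC"]},
}
print(fh_splits["option 7.2x"]["O-RU"])  # ['RF', 'PHY-Low']
\end{verbatim}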
O-RAN network functions and nodes (e.g., O-RU, FHM, FHGW, O-DU, O-CU-CP, O-CU-UP, Near-RT RIC, and SMO with Non-RT RIC) can be deployed in multiple ways depending on the operator's policies and use case requirements. The O-RAN Alliance WG1 has discussed the different implementation options of O-RAN functions and network elements in \cite{W1a}. For instance, all disaggregated nodes and functions can be deployed separately as defined in O-RAN reference architecture. It is also possible in the implementation to bundle some or all of these O-RAN nodes, and thus collapsing some of the internal interfaces such as F1-c, F1-u, E1 and E2 \cite{W1a}.
\subsection{Open Source Projects}
Currently, most of the telecommunication industry products use commercial software provided by software design companies.
Open source software is generally not preferred for commercial products. However, in recent years there has been an increase in the acceptance and usage of open source software by software design companies, and we are starting to see utilization of open source software in commercial Open RAN products.
The development of open source software for O-RAN is being led by the O-RAN Software Community (SC). This is a partnership between the O-RAN Alliance and the Linux Foundation, with the goal of aiding the development of open software for the RAN. The O-RAN SC is focused on aligning with the open architecture and requirements of the O-RAN Alliance to produce a solution suitable for commercial deployment.
Some major internal projects led by the O-RAN SC are listed in Table \ref{tab:O-RAN SC Projects} \cite{O-RAN-SC}.
\begin{table}[]
\centering
\caption{O-RAN Software Community projects \cite{O-RAN-SC}.}
\label{tab:O-RAN SC Projects}
\begin{tabular}{|c|l|}
\hline
\textbf{Project Name} &
\multicolumn{1}{c|}{\textbf{Description}} \\ \hline
RIC Applications &
\begin{tabular}[c]{@{}l@{}}This project aims at developing sample xApps and platform \\ applications that can be used for integration, testing, and demos.\end{tabular} \\ \hline
Near Real Time RIC &
\begin{tabular}[c]{@{}l@{}}This project aims at developing an initial RIC Platform that supports \\ xApps with limited support for O1, A1, and E2 interfaces.\end{tabular} \\ \hline
O-RAN Central Unit &
\begin{tabular}[c]{@{}l@{}}This project aims at an initial software deliverable with limited \\ functionality for the O-RAN Central Unit.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}O-RAN Distributed Unit \\ High Layers\end{tabular} &
This project focuses on initial L2 functional blocks for the O-RAN DU. \\ \hline
\begin{tabular}[c]{@{}c@{}}O-RAN Distributed Unit \\ Low Layers\end{tabular} &
This project focuses on initial L1 functional blocks for the O-RAN DU. \\ \hline
Simulations &
\begin{tabular}[c]{@{}l@{}}This project aims at developing initial simulators used for testing \\ O-RAN NF interfaces.\end{tabular} \\ \hline
Infrastructure &
\begin{tabular}[c]{@{}l@{}}This project focuses on developing the building blocks for infrastructure \\ to run O-RAN NF components.\end{tabular} \\ \hline
Non-Real Time RIC &
\begin{tabular}[c]{@{}l@{}}This project aims at developing an initial Non-Real Time RIC platform \\ that supports rApps, with limited support for the O1 and A1 interfaces.\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Service Management and Orchestration \\ (SMO)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}The primary goal of the SMO project is to integrate different software \\ artifacts of existing open-source projects creating a fully functional \\ open-source SMO.\end{tabular} \\ \hline
AI/ML Framework &
\begin{tabular}[c]{@{}l@{}}This project aims at creating an AI/ML workflow implementation for \\ O-RAN environment.\end{tabular} \\ \hline
\end{tabular}
\end{table}
Apart from the internal projects led by the O-RAN SC, there are multiple other 5G-related software developments by various external open source community projects.
Other major open source projects related to Open RAN are listed in Table \ref{tab:External Open-Source Projects}. Widespread usage of open source RAN software makes the RAN software accessible to the majority of people, especially academics and research institutions, and can enable them to provide solutions to challenges faced by the industry. More research using open source software can help to provide feedback on the limitations of the open source platforms and also on the O-RAN Alliance specifications. This creates a feedback loop in which the solutions proposed by these open source communities and researchers can also be used for new feature development in the O-RAN specifications.
\begin{table}[]
\centering
\caption{External open source projects.}
\label{tab:External Open-Source Projects}
\begin{tabular}{|c|l|}
\hline
\textbf{Project} &
\multicolumn{1}{c|}{\textbf{Description}} \\ \hline
Colosseum~\cite{ProjColo} &
\begin{tabular}[c]{@{}l@{}}This is a large-scale wireless testbed with open access and \\ public availability for research using virtualized and software waveforms. \\ It is hosted at Northeastern University in Boston, USA, \\ and the system is remotely accessible to users.\end{tabular} \\ \hline
OpenAirInterface~\cite{ProjOpenAirInt1}\cite{ProjOpenAirInt2} &
\begin{tabular}[c]{@{}l@{}}This project brings together a group of open-source software developers \\ who collaborate to create the RAN and CN technologies for wireless communication.\end{tabular} \\ \hline
srsRAN~\cite{ProjsrsRAN} &
\begin{tabular}[c]{@{}l@{}}An open source project by Software Radio Systems (SRS) to develop a 5G software \\ radio suite.\end{tabular} \\ \hline
ONF~\cite{ProjONF} &
\begin{tabular}[c]{@{}l@{}}The Open Networking Foundation (ONF) is a non-profit organization that promotes \\ innovation in software-defined programmable networks. It is mainly operator-driven.\end{tabular} \\ \hline
ONAP~\cite{ProjONAP} &
\begin{tabular}[c]{@{}l@{}}ONAP is an open-source software platform that offers comprehensive \\ lifecycle management and real-time, policy-driven orchestration and \\ automation of physical and virtual network activities.\end{tabular} \\ \hline
Open Source MANO~\cite{ProjOpenMANO} &
\begin{tabular}[c]{@{}l@{}}In line with ETSI NFV models, ETSI OSM is aimed at creating an \\ open source Management and Orchestration (MANO) stack.\end{tabular} \\ \hline
O-RAN Gym~\cite{ProjGym} &
\begin{tabular}[c]{@{}l@{}}A publicly accessible research platform that enables large-scale data-driven O-RAN \\ experimentation. It provides an O-RAN compliant near-real-time RIC and E2 termination, \\ which can be used as a framework for the creation and testing of data-driven xApps.\end{tabular} \\ \hline
Open5GS~\cite{ProjOpen5GS} &
\begin{tabular}[c]{@{}l@{}}Open5GS is a C-language implementation of the 5G Core and EPC, \\ i.e., the core network of an NR/LTE network.\end{tabular} \\ \hline
Magma~\cite{ProjMagma} &
\begin{tabular}[c]{@{}l@{}}An open-source software platform that aims to provide network operators an \\ open and flexible mobile CN solution.\end{tabular} \\ \hline
\end{tabular}
\end{table}
\section{Open Issues and Future Research Directions}
Although the O-RAN Alliance has been drafting specifications and collaborating with open source communities to enable an open, interoperable, virtualized, and intelligent RAN, there are still issues and challenges, from the network operator and standardization points of view, that need to be addressed to truly realize the features of Open RAN. In this section, we summarize the open issues, challenges, and future research directions that we identified in this study.
\subsection{Open Issues and Challenges}
\begin{itemize}
\item \textbf{Architectural aspects:} The O-RAN reference architecture has evolved gradually since 2018. Meanwhile, 3GPP has also been working continuously towards improving the 5G architecture and technology. A key requirement of the O-RAN architecture is to remain as close as possible to the 3GPP architecture. Hence, the specification work for the O-RAN architecture is not yet fully completed, and additional features, blocks, and functions will need to be introduced to enhance the capability of O-RAN. For instance, the O-RU termination of the O1 interface towards the SMO and potential virtualization opportunities for the O-RU need to be studied and are earmarked as candidates for future studies [26]. Furthermore, the cooperation among multiple near-RT RICs and the control of them for data collection and execution (e.g., centralized, distributed, or hybrid) are important aspects that require further study. Similarly, communication between near-RT RICs and third-party application servers, and the possibilities of exposing RAN capabilities, need to be studied.
\item \textbf{Performance aspects:} The virtualization of O-RAN network functions offers a unique opportunity to improve RAN performance. Different O-RAN network functions and nodes can be migrated dynamically from one infrastructure to another (e.g., from a regional cloud to an edge cloud and vice versa) based on load predictions or in case of node/function failures. However, predicting traffic congestion that might lead to overloading, predetermining any imminent failures, and arranging alternatives becomes a highly complex requirement, especially in highly scaled and dynamic RAN deployments. Supporting time-sensitive use cases through resource migration or application migration without affecting QoS and service continuity is another challenge that needs to be addressed from a deployment and management perspective.
\item \textbf{Security aspects:} The O-RAN Alliance WG11 has been studying the security aspects of the O-RAN architecture by considering the following: threat modeling, risk assessment, security requirements, security mechanisms and protocols to meet the requirements, and security testing for validation and evaluation. However, the security aspects of only a few interfaces (e.g., Open FH) and network functions (near-RT RIC and non-RT RIC) have been studied \cite{W11f} \cite{W11e}. There is a lot of scope to explore and analyse the security aspects of the various entities and interfaces. For instance, the security aspects of network functions (e.g., O-CU-CP, O-CU-UP, O-DU, O-RU, FHGW, and FHM) and interfaces (e.g., E2, R1, Y1, O-Cloud Notification, and Cooperative Transport Interface) need to be studied. Furthermore, the security aspects of shared cloud infrastructure, the integration of nodes and network functions, open source software, and the secure lifecycle management of network functions also have to be studied.
\item \textbf{AI/ML aspects:} The O-RAN Alliance prioritizes the usage of RICs to operate and maintain highly scaled network deployments, relying heavily on AI and ML based algorithms for training models. These are then used for extracting inferences through data analytics and transferring policy-based guidelines to the near-RT RIC via control loops to improve the performance of the RAN. However, the accuracy of the models and the validity of the inferred data are paramount for taking the right decisions at the right times. This becomes a major challenge, with the potential for catastrophic impact in safety- and mission-critical applications if the accuracy and freshness of the inference data are not up to the expected level.
\item \textbf{Energy saving aspects:} Adopting AI and ML based techniques to support new services and applications, optimize network resources, and automate the RAN using RICs may consume more energy for model training, as AI and ML techniques that employ deep neural networks often require large amounts of computational power \cite{EI}, which may contribute to global warming and climate change. Hence, the challenge of developing sustainable and environmentally friendly advanced AI/ML techniques and energy-efficient architectures needs to be considered.
\end{itemize}
\subsection{Future Research Directions}
\begin{itemize}
\item \textbf{Blockchain:} With the advent of 5G networks and its diverse features, several new technologies such as SDN and NFV are being integrated into the 5G networks to fulfil the requirements \cite{Blockchain1}. However, integrating these technologies into the telecom network poses several challenges associated with decentralization, transparency, privacy, and security. To address these issues, blockchain technology has emerged as a viable solution due to its strengths such as auditability, immutability and distributed architecture.
The O-RAN Alliance is specifically interested in developing a secure and interoperable RAN enabled by blockchain. Current O-RAN security relies on opt-in Public Key Infrastructure (PKI) based encryption and authentication solutions, transferring trust to a centralized CA as a trusted party. This becomes a single point of failure for the entire network in the event of a communication outage with the CA or of a CA compromised by malicious activities. To overcome such a vulnerable situation, security and authentication powered by a blockchain become appealing for network deployment \cite{Blockchain3}.
The O-RAN Alliance is working on a new type of RAN architecture called the Blockchain-enabled RAN (BE-RAN). The main focus of this architecture is supporting mutual authentication and blockchain-based PKI \cite{Blockchain2}. The potential areas of future research and analysis would include the impact of BE-RAN on existing security systems including zero trust system, certificate identity, and runtime security. Further studies would also be required to understand the impact and approach to adapting blockchain on the interfaces used for control plane, user plane, and synchronization plane \cite{Blockchain3}.
\item \textbf{Digital Twin:}
Digital Twins (DTs) are becoming an important part of the industrial manufacturing domain because they create virtual simulations of physical assets such as factories and supply and transportation chains. Due to major advances in software platforms that leverage AI/ML methodologies, and the availability of GPU-accelerated high-performance computing, DTs have gained traction in many different areas, such as smart cities, manufacturing, and retail \cite{DigitalTwin3}.
With the advent of the network disaggregation paradigm as championed by O-RAN Alliance, the ensuing complexity of the networks and their management increases to such a degree that traditional deterministic engineering approaches might falter. Therefore, network simulation tools like Digital Twin Networks (DTNs) that run on a high-definition digital representation of the network become increasingly important. DTNs use real-time data and models to create an accurate simulation platform of the physical network, that can be used to provide up-to-date network status and also predict future network states \cite{DigitalTwin1}\cite{DigitalTwin2}. It also provides interfaces for communication with the physical network and other network applications/users.
The O-RAN Alliance's next Generation Research Group (nGRG) is working on ideas and use cases to leverage DTNs in 5G and 6G deployments. The main focus of future studies will be on defining requirements and design principles, including reliability, latency, scalability, agility, and security \cite{DigitalTwin3}. The data exchange and the interfacing between DTNs and physical O-RAN components are also important areas that need further research and analysis from an Open RAN perspective.
\item \textbf{Metaverse:} In one aspect, the metaverse can be described as an immersive virtual world that is facilitated by the use of VR and AR headsets. In order to enable end users to connect to the metaverse via a 5G network, 3GPP has introduced support for Extended Reality (XR, which includes VR, AR, and MR) and cloud gaming devices as a subset of UEs. As part of Releases 16 and 17, 3GPP has identified some enhancements aimed at improving XR device latency and power efficiency. From an Open RAN perspective, further studies are required regarding the optimal implementation of these enhancements, which include i) reducing bandwidth when there is no/low data, ii) switching to low power when there is no data through secondary cell dormancy, iii) cross-slot scheduling gaps between control and data for sleep, iv) handling periodic traffic through uplink configured grants, v) switching to low power when there is no data through uplink skipping, and vi) faster transition to sleep after an XR burst through control channel skipping \cite{Meta1}. 3GPP is planning further research on various topics that can also be incorporated into the O-RAN architecture. These include i) low latency mobility using L1/L2 signalling for handoff, ii) staggering UE traffic arrival at the gNB through an improved scheduler, and iii) improving QoS based on multimedia traffic requirements and patterns.
\item \textbf{Non-Public/Private Networks:} A private 5G network, also termed a Non-Public Network (NPN) by 3GPP, is a 5G network deployed for non-public use. As 5G promises very high speed, ultra-high reliability, and low latency, private 5G networks are a potential candidate for supporting non-public applications in many industry verticals such as smart manufacturing, Industry 4.0, transportation and logistics, airports, ports, mining, healthcare, education, and entertainment. 5G private networks can adopt the O-RAN architecture to offer new services using RICs. However, secure operation, expanding coverage based on UE mobility, sharing data and resources, and integrating the edge cloud and exposing its capabilities are challenging and need further study.
\item \textbf{Non-Terrestrial Networks:} A Non-Terrestrial Network (NTN) refers to a network for communication purposes which partially or fully operates through a space-borne vehicle, i.e., using satellites in Geostationary Earth Orbit, Medium Earth Orbit, and Low Earth Orbit, or an airborne vehicle (e.g., High Altitude Platforms and Unmanned Aerial Vehicles). The most important feature that makes NTNs unique is their capability to provide connectivity in unreachable areas (i.e., ocean vessels and airplanes) or remote areas (i.e., rural areas) where huge investment is required to build a terrestrial infrastructure. 3GPP has introduced new reference architectures to implement NTNs\cite{3GPP_38.821} as part of Release 15. These are i) NG-RAN architecture with transparent satellite and ii) NG-RAN architecture with regenerative satellite. New components such as NTN-Gateway, NTN-Payload, NTN feeder link, NTN service link, and NTN control functions have been introduced. Future studies should focus on the approach to standardize these newly introduced components and interfaces in an open manner.
\item \textbf{Precise Positioning and Ranging:} In order to implement highly precise positioning of UEs, 3GPP has introduced a new function called Location Management Function (LMF). The LMF receives positioning measurements and information from the NG-RAN and UEs, via the Access and Mobility Management Function (AMF) \cite{LMF1}. A new interface called NLs interface is introduced in the 5G core for communication between AMF and LMF. Additionally, a new NR Positioning Protocol A (NRPPa) is introduced to carry the positioning information from the RAN to the LMF over NG-C interface. The LMF can also configure the UE using the LTE Positioning Protocol (LPP) via AMF and through the gNB. In order to support precise positioning and ranging functionalities in O-RAN, further studies are required regarding the optimal implementation aspects of NRPPa and LPP protocols. Also further analysis would be required on how the measurements data is collected from the O-RAN components and carried via the NG-C interface.
\item \textbf{Explainable AI:} In recent years, Deep Learning (DL) based techniques have been employed to successfully handle complex and hard tasks that are beyond the limits of human operators. DL based techniques can be applied for training models in the non-RT RIC for specific RAN functionalities, and the trained models can be shared with the near-RT RIC for real-time operations. However, DL based models are considered black boxes, and thus it is difficult to understand the underlying operations and the reasons why a model has taken certain complex actions and decisions \cite{XAI}. This lack of transparency may make them vulnerable to attacks and to the injection of malicious data. To offer seamless and secure services, it is expected that AI/ML techniques should be explainable, robust, and verifiable. To handle this issue, explainable AI and adversarial AI techniques can be explored.
\end{itemize}
\section{Conclusion}
The Open Radio Access Network (Open RAN) is a new paradigm that is moving the telecom industry towards an open, interoperable, virtualized, and intelligent RAN, while improving performance, agility, and cost efficiency. It enables openness through open interfaces and by running network functions on whitebox hardware, leveraging open source modular software. In this paper, we first presented a comprehensive overview of the evolution of the RAN and of the O-RAN Alliance standardization activities. We also discussed the security aspects of the O-RAN architecture, use cases, deployment aspects, and relevant open source projects. Finally, we summarized open issues, challenges, and future research directions. We hope this work can serve as a good reference to understand the O-RAN standardization activities and to pursue further research in this field to support and realize the features of the Open RAN movement.
\section*{Acknowledgments}
We would like to thank Ritesh Kumar Kalle for his comments on the paper for further improvements. We also thank Balaji Durai, Manikantan Srinivasan, and Murugesan P for initial discussions to form a team.
\bibliographystyle{IEEEtran}
The fast development of digital image processing technology, the advancement of image editing tools, and the omnipresence of digital cameras make it quite easy to edit and tamper with digital images. Such tampered images can look so natural that humans often cannot tell whether an image is forged or real. While in many cases image editing is used for well-intended entertainment or artistic purposes, in some cases such forgeries can also be misused. This has led to a rise in cases of image forgery in fields such as surveillance and crime, and forgery detection is an important aspect of modern forensic investigations.
There are many ways to forge images, such as inpainting, splicing, and copy-move forgeries. In splicing, a part of an image is copied and pasted onto another image, whereas in copy-move forgery (CMF) a portion of an image is copied and pasted within the same image. In both cases, before pasting an image part, one can also carry out various image processing operations on the copied part, such as rotation, scaling, blurring, and colour variations. In image inpainting, an object is removed and the removed region is filled with the neighbouring background colour and texture using an inpainting algorithm, such that the inpainted region looks quite natural. In this work we mainly
focus on copy-move and inpainting forgeries.
Image forgery detection methods are generally classified into two categories, active detection methods and passive methods \cite{Sharma2016ImageFA}. The active methods are generally based on digital signatures or watermarks. The major drawback of active image authentication is that, in order to verify the authenticity of an image, a watermark or a digital signature needs to be embedded into the image at the time of capture or immediately after the image is captured.
Passive forgery detection is an alternative to active authentication that requires no embedded information for the purpose of authentication. Traditionally, these techniques detected forgeries by analyzing low-level image pixel statistics or geometrical relations among objects.
In more recent times, the passive methods involve deep learning based approaches, wherein a network can learn abstract discriminating features among the forged and real images. In this work we focus on the learning based paradigm, and contribute as follows:
\begin{itemize}
\item We first use an in-house convolutional neural network (CNN) to classify forged and real images, separately for copy-move forgery and inpainting forgeries. We then use it for an integrated forgery detection, without much drop in performance.
\item Further, via Gradient-weighted Class Activation Mapping (Grad-CAM) we interpret the forgery detection by this network, and show that the heat maps correctly detect the locations of forgeries, even in cases of relatively small forged regions. We believe that this analysis is important as it makes the forgery detection approach more reliable.
\end{itemize}
\section{Related Work}
Below, we briefly discuss some earlier passive image forgery methods including traditional paradigms and a few contemporary learning based methods.
\subsection{Format based techniques}
Format based techniques mainly work on specific image formats, among which the JPEG format is the most popular. Statistical correlation at the block level, introduced by specific lossy compression schemes, has been shown to be helpful for image forgery detection. Some examples of such techniques are based on JPEG quantization \cite{Farid2008DigitalIB}, double JPEG compression \cite{Luks2003EstimationOP}, and JPEG blocking \cite{4217384}.
For instance, in\cite{4217384}, the authors characterize the blocking artifacts using pixel value differences within and across block boundaries. These differences tend to be smaller within blocks than across blocks. When an image is cropped and re-compressed, a new set of blocking artifacts may be introduced.
Within- and across-block pixel value differences are computed from 4-pixel neighborhoods that are spatially offset from each other by a fixed amount, where one neighborhood lies entirely within a JPEG block and the other borders or overlaps a JPEG block. A histogram of these differences is computed from all 8x8 non overlapping image blocks. A 8x8 “blocking artifact” matrix (BAM) is computed as the average difference between these histograms. For uncompressed images, this matrix is random, while for a compressed image, this matrix can show some patterns. Such patterns are often observed manually or identified using
supervised pattern classification.
The main disadvantage of format based methods is that they are mostly restricted to JPEG compressed images, and their performance was not checked on other image formats.
\subsection{Block-based method}
Block-based methods are mainly used to detect copy-move forgeries: a window (block) is selected and compared across the rest of the image. The block matching can involve lexicographical sorting, hashing, or Euclidean distance to match blocks based on defined thresholds; comparing the similarity between all pairs of blocks is time consuming \cite{inproceedings}.
\subsection{Pixel based technique}
Some forgery detection approaches operate directly on pixels. For example, two computationally efficient algorithms have been developed to detect cloned image regions \cite{Popescu04exposingdigital,5999526}. In \cite{Popescu04exposingdigital}, the authors reduce the time complexity by projecting image blocks into a lower dimension using Principal Component Analysis (PCA). They then apply matching algorithms, such as Euclidean distance comparisons, which reduce the computation.
Another common form of photographic manipulation is the digital splicing of two or more images into a single composite. When performed carefully, the border between the spliced regions can be visually imperceptible.
In \cite{10.1007/978-3-540-87442-3_136,zhao2010detecting}, the authors show that splicing disrupts higher-order Fourier statistics, which can subsequently be used to detect the forgery. In image splicing, sharp edges are usually introduced; to detect splicing forgeries, the authors use the chrominance channel and base detection on observation of the corresponding edge maps. A common limitation of most traditional methods is that the judgement of forgery has to be made through manual observation.
\subsection{Learning based technique}
Among the learning based methods, some of the earlier approaches use features extracted with SIFT, SURF, etc.\ \cite{4756779} and identify forged images using an SVM or a deep neural network. More recently, however, the popularity and performance of deep learning methods have also proven to benefit the forgery detection area \cite{abdalla2019convolutional,kumar2020syn2real,rao2016deep,wang2020intelligent}. While interpretability has also been considered in some earlier works on deepfakes, this has been restricted to face images. Our work reported here is along similar lines, but we consider forgeries in general natural images.
\section{Proposed work}
As indicated earlier, our focus on passive methods is not just on the detection of copy-move and inpainting forgeries, but also on interpreting the classification, via which one can also identify the location of the forged regions.
For this purpose, we use a forged dataset that we synthesize ourselves.
We then use an in-house convolutional neural network (CNN) for classification, and finally employ Grad-CAM to interpret the results, which also yields approximate forgery locations. We describe these in the three subsections below:
\subsection{Dataset Generation}
For the forgery detection tasks, large natural image datasets required to train CNNs are not available (except for face forgeries in the deepfake domain).
Hence, we generate synthetic datasets for inpainting and copy-move forgery using the COCO dataset. Fig. 1 shows some synthetically generated images. The COCO (Common Objects in Context) dataset is publicly available for multi-purpose computer vision tasks such as object detection, semantic segmentation, and key-point detection \cite{lin2015microsoft}. To generate our forgery dataset, we use the pixel-level object mask information provided in the COCO data.
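The following sketch shows how per-object masks can be obtained from the COCO annotations before a forgery is synthesized, assuming the standard \texttt{pycocotools} API; the annotation path and category name are placeholders.
\begin{verbatim}
from pycocotools.coco import COCO  # standard COCO API, assumed installed

# Load instance annotations and pick a category (placeholder names/paths).
coco = COCO("annotations/instances_train2017.json")
cat_ids = coco.getCatIds(catNms=["laptop"])
img_ids = coco.getImgIds(catIds=cat_ids)

img_info = coco.loadImgs(img_ids[0])[0]
ann_ids = coco.getAnnIds(imgIds=img_info["id"], catIds=cat_ids, iscrowd=None)
anns = coco.loadAnns(ann_ids)

# Pick the object with the largest mask area (as done for copy-move generation).
largest = max(anns, key=lambda a: a["area"])
mask = coco.annToMask(largest)  # H x W binary mask of the selected object
print(img_info["file_name"], int(mask.sum()), "foreground pixels")
\end{verbatim}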
\begin{figure}[h]
\begin{tabular}{c|c|c}
\textbf{Original} & \textbf{Copy move} & \textbf{Inpainting}\\
\includegraphics[width=4cm,height=2.5cm]{cmfd_laptop_586.png}
&\includegraphics[width=4cm,height=2.5cm]{cmfdr_laptop_586.png}
&\includegraphics[width=4cm,height=2.5cm]{laptop-inp_468830.png}
\\
\includegraphics[width=4cm,height=2.5cm]{cmfd_apple_603.png}
&\includegraphics[width=4cm,height=2.5cm]{cmfdr_apple_603.png}
&\includegraphics[width=4cm,height=2.5cm]{apple-inp_46085.png}\\
\includegraphics[width=4cm,height=2.5cm]{cmfd_bird_453.png}
&\includegraphics[width=4cm,height=2.5cm]{cmfdr_bird_453.png}
&\includegraphics[width=4cm,height=2.5cm]{bird-inp_239214.png}\\
\includegraphics[width=4cm,height=2.5cm]{cmfd_tv_78.png}
&\includegraphics[width=4cm,height=2.5cm]{cmfdr_tv_78.png}
&\includegraphics[width=4cm,height=2.5cm]{tv-inp_82122.png}\\
\textbf{a} & \textbf{b} & \textbf{c}
\label{syndata}
\end{tabular}
\caption{a: Authentic image, b: Copy-move forgery, c: Image inpainting.}
\end{figure}
\subsubsection{Copy-Move Forgery:}
To generate copy-move forged images, we selected specific categories of the COCO dataset. Then, we considered the area of each mask belonging to that category and selected the mask with the largest area. This copied region is pasted back onto the image after affine transformations and an image blending \cite{forte2020f} operation, as shown in Fig. 2.
$$ I_{f} = \alpha F + (1 - \alpha) B $$
where $I_{f}$ is the final image, $F$ is the foreground object, $B$ is the background image, and $\alpha$ is the blending factor.
Blending helps the copied region fuse smoothly into the image, and a deep image matting operation helps it fit naturally onto the target image. With this approach, we created a dataset of approximately 60,000 images.
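A simplified sketch of this copy-move generation step is given below: the selected object is warped with an affine transform and alpha-blended back into the same image. OpenCV and NumPy are assumed to be available, the transform parameters are arbitrary, and the deep-matting refinement used in the actual pipeline is omitted here.
\begin{verbatim}
import numpy as np
import cv2  # OpenCV, assumed available, used for the affine warp

def paste_with_blending(image, obj_mask, dx=40, dy=10, angle=15.0,
                        scale=0.9, alpha=0.9):
    """Copy the masked object, apply an affine transform, and alpha-blend it
    back into the same image (a simplified version of the step above)."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    m[:, 2] += (dx, dy)                                   # add the translation
    warped_obj = cv2.warpAffine(image * obj_mask[..., None], m, (w, h))
    warped_mask = cv2.warpAffine(obj_mask.astype(np.float32), m, (w, h))
    a = alpha * warped_mask[..., None]                    # per-pixel blending factor
    forged = a * warped_obj + (1.0 - a) * image           # I_f = alpha*F + (1-alpha)*B
    return forged.astype(image.dtype)
\end{verbatim}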
\begin{figure}[h]
\includegraphics[width=13cm,height=5.5cm]{cp.png}
\caption{Method of Copy-move data generation}
\label{cmf}
\end{figure}
\subsubsection{Semantic Inpainting}
We synthesized a dataset of inpainted images using all the sub-categories of the COCO dataset equally. The mask of a particular sub-category was cropped out, and then the EdgeConnect inpainting method \cite{nazeri2019edgeconnect} was applied. This is a deep semantic inpainting approach that completes an image in two stages: first, an edge generator fills in the missing edges, and then an image completion network completes the image based on the deduced edges, as shown in Fig. 3. Using this approach, we created approximately 21,000 inpainted images.
\begin{figure}[h]
\includegraphics[width=13cm,height=5.5cm]{inp.png}
\caption{Method of Inpainting data generation}
\label{fig:my_label}
\end{figure}
\subsection{CNN Architecture}
For classification, we train three CNN models using three datasets as follows:\\
1. Model-1 (for Inpainting data only )\\
2. Model-2 (for Copy-Move data only)\\
3. Model-3 (for combined data [Inpainting + Copy-Move dataset])\\
We use a five-layer CNN architecture as a feature extractor, consisting of convolution layers with pooling and batch normalization.
The extracted features are then classified by a fully connected layer into two classes (forged or non-forged). We use the binary cross-entropy loss, which is backpropagated using the RMSProp optimizer. Fig. 4 shows the architecture of the CNN.
\begin{figure}
\includegraphics[width=14cm,height=6.5cm]{arc.png}
\caption{CNN architecture for classifying an image into the forged or non-forged category.\\
(Edited version of the original figure from \cite{unknown})}
\label{fig:cnn}
\end{figure}
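A minimal PyTorch sketch of such a classifier is shown below. The channel widths, input resolution, and learning rate are assumptions (they are not specified above); the overall structure (five convolution--batch-norm--pooling blocks, a fully connected head, binary cross-entropy loss, and the RMSProp optimizer) follows the description.
\begin{verbatim}
import torch
import torch.nn as nn

class ForgeryCNN(nn.Module):
    """Sketch of the five-block feature extractor plus fully connected head."""
    def __init__(self):
        super().__init__()
        chans = [3, 32, 64, 128, 128, 256]   # assumed channel widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Flatten(),
                                        nn.Linear(chans[-1], 1))
    def forward(self, x):
        return self.classifier(self.features(x))  # one logit: forged vs. authentic

model = ForgeryCNN()
criterion = nn.BCEWithLogitsLoss()                       # binary cross entropy
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()             # 1 = forged, 0 = authentic
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
\end{verbatim}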
\subsection{Grad-CAM}
After classifying an image into the forged or non-forged category, we employ the Grad-CAM method to interpret the results. Specifically, the idea is to observe which image regions contribute to the classification; ideally, for forged images these should be the forged regions. For real images this is not important, and hence we consider only the images classified as forged for this analysis. We ensure that a reasonably good classification performance is achieved so that the interpretability can be relied upon. Grad-CAM yields a heatmap highlighting regions based on their importance to the classification. We briefly describe the Grad-CAM method below.
Grad-CAM stands for Gradient-weighted Class Activation Mapping. It is a technique for producing ``visual explanations'' for decisions from a large class of CNN-based models.
In Grad-CAM, the gradients of any target concept are propagated to the final convolutional layer to produce a localization map (heat map) that highlights the image regions important for predicting the concept \cite{selvaraju2017grad}. To obtain the class activation map, Grad-CAM calculates the gradient of $y^c$ (the score for class $c$) with respect to the feature maps $A^k$ of a convolutional layer. These gradients flowing back are global-average-pooled to obtain the importance weights $\alpha^c_k$:
$$\alpha^c_k = \frac{1}{z}\sum_i\sum_j\dfrac{\partial y^c}{\partial A^k_{ij}} $$
Finally, the Grad-CAM heat map is a weighted combination of feature maps, followed by a ReLU activation function, as shown in Fig. 5:
$$L^c_{\text{Grad-CAM}} = \text{ReLU}\left(\sum_k \alpha^c_k A^k \right)$$
\begin{figure}[h]
\includegraphics[width=13cm,height=8cm]{cam.jpg}
\caption{Grad-CAM Functionality (Edited version of the original figure from \cite{selvaraju2017grad}) }
\label{fig:my_label}
\end{figure}
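The computation above can be written compactly in a few lines of PyTorch, as sketched below for the binary forgery classifier; the hook-based implementation and the choice of target layer are assumptions of this illustration rather than details fixed by the paper.
\begin{verbatim}
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer):
    """Minimal Grad-CAM sketch: `target_layer` is the last convolutional layer."""
    activations, gradients = {}, {}

    def fwd_hook(_m, _inp, out):
        activations["value"] = out
    def bwd_hook(_m, _gin, gout):
        gradients["value"] = gout[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    score = model(image.unsqueeze(0))      # y^c: logit for the "forged" class
    model.zero_grad()
    score.sum().backward()
    h1.remove(); h2.remove()

    A = activations["value"]                                      # feature maps A^k
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # alpha_k: GAP of dy/dA
    cam = F.relu((weights * A).sum(dim=1, keepdim=True))          # ReLU(sum_k alpha_k A^k)
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0]                                              # H x W map in [0, 1]

# Example usage with the ForgeryCNN sketch (the layer index is an assumption):
# heatmap = grad_cam(model, some_image_tensor, model.features[16])
\end{verbatim}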
\section{Experiment \& Results}
As indicated above, we train three CNN-based models for three different scenarios, to assess results on the individual forgery types as well as on the combined data.
\subsubsection{Model-1 Trained on Inpainting dataset}
We train Model-1 on 60k images of the inpainting forgery dataset (30k authentic, 30k forged), splitting the dataset into 75\% training data and 25\% validation data.
\subsubsection{Model-2 Trained on copy-Move dataset}
We train Model-2 on 60k images of the copy-move forgery dataset (30k authentic, 30k forged), splitting the dataset into 75\% training data and 25\% validation data.
\subsubsection{Model-3 Trained on copy-Move + Inpainting dataset}
Model-3 is trained on the combined copy-move and inpainting dataset. Note that this is a more realistic scenario, as in a practical case one would not know the forgery type a priori.
The classification results are shown in Table 1. We note that with the solo models, forgery detection in the case of inpainting is better than in the case of copy-move forgery. This could be due to the fact that the network is able to learn the subtle inpainting artifacts better, as a complete region is approximated with the neighbourhood texture/colour. While the results for the copy-move case are also reasonably good, there are some misclassifications. These can be attributed to the fact that the forged region is also a natural-looking region from the same image, and the artifacts largely affect only the borders.
For the combined model, while we expect some drop in classification performance, we note that the reduction is not significant in the case of copy-move forgery, and for the inpainting case the combined model still yields well above 70\%.
\begin{table}[htbp]
\caption{Classification results}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Test Dataset}&\multicolumn{3}{|c|}{\textbf{Testing Accuracy}} \\
\cline{2-4}
\textbf{.} & \textbf{\textit{Model-1}}& \textbf{\textit{Model-2}}& \textbf{\textit{Model-3}} \\
\hline
Copy-Move dataset&NA&70\% & 69\% \\
\hline
Inpainting dataset& 80\% &NA& 73\% \\
\hline
Inpaint+CopyMove&NA&NA&70\% \\
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\subsection{Grad-CAM Results}
We now present some typical visual results to illustrate the interpretability of all three models.
As we observe in Fig. 6, the Grad-CAM results are satisfactory (even in cases where the inpainted regions are relatively small). Grad-CAM localizes the heatmap well within the forged (inpainted) region. This indicates that our CNN model (Model-1) learns the right features from the input images and classifies them based on valid forgery cues.
\begin{figure}[h]
\begin{tabular}{|c|c|}
\hline
\textbf{original}&\textbf{forged image$~~~$heatmap$~~~$overlay heatmap}\\
\hline
\includegraphics[width=3cm,height=3cm]{232.png}&\includegraphics[width=9cm,height=3cm]{Output210.jpg}\\
\includegraphics[width=3cm,height=3cm]{9023.png}&\includegraphics[width=9cm,height=3cm]{Output2245.jpg}\\
\includegraphics[width=3cm,height=3cm]{abc.jpg}&\includegraphics[width=9cm,height=3cm]{Output20.jpg}\\
\hline
\end{tabular}
\caption{Grad-CAM results for image inpainting forgeries using Model-1.}
\label{fig:gradcam_inpainting}
\end{figure}
Fig. 7 shows the Grad-CAM results for the copy-move dataset. As we can observe, the high-intensity (red) values of the heatmap correctly superimpose on the copy-pasted area. Interestingly, the localization is quite accurate in cases where the images are classified correctly. So, in this case too, we can conclude that Model-2 learns suitable features from the input dataset.
\begin{figure}[h]
\begin{tabular}{|c|c|}
\hline
\textbf{original}&\textbf{forged image$~~~$heatmap$~~~$overlay heatmap}\\
\hline
\includegraphics[width=3cm,height=3cm]{cmfd_orange_13.png}&\includegraphics[width=9cm,height=3cm]{Output161.jpg}\\
\includegraphics[width=3cm,height=3cm]{cmfd_clock_204.png}&\includegraphics[width=9cm,height=3cm]{Output123.jpg}\\
\includegraphics[width=3cm,height=3cm]{cmfd_train_71.png}&\includegraphics[width=9cm,height=3cm]{Output231.jpg}\\
\hline
\end{tabular}
\caption{Grad-CAM results for copy-move forgeries using Model-2.}
\label{fig:gradcam_copymove}
\end{figure}
Finally, Fig. 8 shows the Grad-CAM results for the combined copy-move and inpainting datasets. In spite of the combined training, the Grad-CAM results demonstrate good explainability on the correctly classified samples, irrespective of the kind of forgery.
\begin{figure}[h]
\begin{tabular}{|c|c|}
\hline
\textbf{original}&\textbf{forged image$~~~$heatmap$~~~$overlay heatmap}\\
\hline
\includegraphics[width=3cm,height=3cm]{cmfd_chair_472.png}&\includegraphics[width=9cm,height=3cm]{Output134.jpg}\\
\includegraphics[width=3cm,height=3cm]{bench_393493.png}&\includegraphics[width=9cm,height=3cm]{Output575.jpg}\\
\includegraphics[width=3cm,height=3cm]{cmfd_cake_201.png}&\includegraphics[width=9cm,height=3cm]{Output0.jpg}\\
\hline
\end{tabular}
\caption{Grad-CAM results for inpainting and copy-move forgeries using Model-3.}
\label{fig:gradcam_combined}
\end{figure}
\newpage
\section{Conclusion}
In this work, we demonstrated an interpretable approach to image forgery detection, focusing on two types of image forgeries and three CNN models. We show that forgery detection can be explainable even for small forged regions in general natural images. While our focus was on explainability, and not merely on achieving very high accuracy, we also show that a combined model learning two different types of forgeries can yield reasonably good results on random forgeries (unlike some recent work which considers face forgeries in deep-fakes). We note that the explainability analysis can be generalized to other networks which may yield better performance, and should be an important consideration for image-based classifiers for localizing forgeries.
\bibliographystyle{IEEEtran}
\section{Oddness and resistance}
Cubic graphs naturally fall into two classes depending on
whether they do or do not admit a $3$-edge-colouring.
Besides the trivial family of graphs with bridges, which are trivially uncolourable, there are many examples of $2$-edge-connected cubic graphs that do not admit a $3$-edge-colouring.
Such graphs are called \emph{snarks}; sometimes they are required to
satisfy additional conditions, such as cyclic
$4$-edge-connectivity and girth at least five, to avoid
triviality.
In their many attempts to understand snarks better, researchers have come up with various measures that refine the notion of ``being close to 3-edge-colourable''.
In Section~\ref{sec2}, we introduce a new such measure closely related to oddness and resistance. We follow \cite{my} in our presentation of these two concepts.
Every bridgeless cubic graph has a
$1$-factor \cite{petersen} and consequently also a $2$-factor.
It is easy to see that a cubic graph is $3$-edge-colourable if
and only if it has a $2$-factor that only consists of even
circuits. In other words, snarks are those cubic graphs which
have an odd circuit in every $2$-factor. The minimum number of
odd circuits in a $2$-factor of a bridgeless cubic graph $G$ is
its \emph{oddness}, and is denoted by $\omega(G)$. Since every
cubic graph has an even number of vertices, its oddness must also
be even.
The relevance of oddness stems from the importance of snarks.
The crux of many important problems and conjectures, such as Tutte's $5$-flow conjecture or the cycle
double cover conjecture, consists in dealing with snarks.
While most of these
problems are exceedingly difficult for snarks in general, they are often tractable for those that are close to being $3$-edge-colourable.
For example, the $5$-flow conjecture has been
verified for snarks with oddness at most $2$ (Jaeger
\cite{jaeger2}) and for cyclically $6$-edge-connected snarks with oddness at most $4$ \cite{sm}, and the cycle double cover conjecture has been
verified for snarks with oddness at most $4$ (Huck and Kochol
\cite{hk}, H\"aggkvist and McGuinness \cite{hg}). Snarks with
large oddness thus remain potential counterexamples to these
conjectures and therefore deserve further study.
Another natural measure of uncolourability of a cubic graph is
based on minimising the use of the fourth colour in a
$4$-edge-colouring of a cubic graph, which can alternatively be viewed as minimising the number of edges that have to be deleted in order to get a
$3$-edge-colourable graph. Surprisingly, the required
number of edges to be deleted is the same as the number of
vertices that have to be deleted in order to get a
$3$-edge-colourable graph (see \cite[Theorem~2.7]{steffen1}).
This quantity is called the \emph{resistance} of $G$, and will
be denoted by~$\rho(G)$. Observe that $\rho(G) \le \omega(G)$
for every bridgeless cubic graph $G$ since deleting one edge
from each odd circuit in a $2$-factor leaves a colourable
graph. The difference between $\rho(G)$ and $\omega(G)$ can be arbitrarily large in
general \cite{steffen2}.
\section{Weak oddness}
\label{sec2}
A \emph{factor} of a graph $G$ is a subgraph containing all vertices of $G$. The most commonly encountered factors are those defined by vertex degrees: for instance, a $1$-factor is a spanning subgraph all of whose vertices have degree $1$. There is, however, an almost endless list of other variants of factors; we refer the reader to the excellent surveys \cite{kano, plummer} for an overview. One important kind of factor is the \emph{even factor}, that is, a factor with all vertices of even degree \cite{kano, tsp, plummer, xiong}. Even factors play a profound role in the areas of flows and cycle covers, where snarks are especially relevant. It is, therefore, very natural to introduce a measure of uncolourability based on even factors: the \emph{weak oddness} of a cubic graph $G$, denoted by $\omega_{\textrm w}(G)$, is the least number of odd components in an even factor of $G$. (We do not need to avoid graphs with bridges in the definition since every graph has an even factor containing just isolated vertices.)
In a cubic graph, an even factor is comprised of circuits and isolated vertices; thus it can be viewed as a relaxation of a $2$-factor in which ``degenerate'' circuits of length $1$ are allowed.
Since every $2$-factor is an even factor, $\omega_{\textrm w}\le \omega$. On the other hand, we can remove a vertex from each odd component of an even factor of a graph $G$, and thus obtain a $3$-edge-colourable graph, so $\rho\le \omega_{\textrm w}$.
The new invariant $\omega_{\textrm w}$ thus approximates resistance from above more tightly than oddness, and approximates oddness from below more tightly than resistance. Of course, the enhanced approximation potential materializes only if weak oddness differs from both oddness and resistance. We demonstrate that this is indeed the case in Section~\ref{construction}.
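To illustrate these inequalities on the smallest snark, consider the Petersen graph: every $2$-factor of the Petersen graph consists of two $5$-circuits, so its oddness equals $2$, and consequently $\rho\le\omega_{\textrm w}\le 2$. Since the graph is not $3$-edge-colourable, none of the three quantities vanishes, and in fact all of them equal $2$ (this also follows from the result of \cite{steffen1} recalled below, as $\rho\le 2$ forces $\rho=\omega$). Separating the three measures thus requires larger graphs, such as the one constructed in Section~\ref{construction}.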
Besides that, there are several reasons why we think weak oddness is useful.
First, weak oddness does not change when we contract a triangle. On the other hand, as we will show in Section~\ref{construction}, contracting a triangle can increase oddness arbitrarily.
We hope that this property might improve the chance of developing inductive methods based on oddness.
Second, many results for graphs with small oddness can be easily transformed to results for graphs with small weak oddness. As $\omega$ might be larger than $\omega_{\textrm w}$, the modified statements comprise a larger family of graphs. As a potential example we give the following.
\begin{theorem}\label{cdcthm}
Graphs with weak oddness at most $4$ have a cycle double cover.
\end{theorem}
\begin{proof}
Let $G$ be a graph with weak oddness $4$. Let us replace each vertex of $G$ with a triangle,
obtaining a graph $G'$. Every even factor of $G$ can be naturally extended to a $2$-factor of $G'$ with the same number of odd components, thus the oddness of $G'$ is at most $4$.
As graphs with oddness at most $4$ have a cycle double cover \cite{hg, hk}, the graph $G'$ has a cycle double cover. This cover
reduced to the edges of $G$ is a cycle double cover of $G$.
\end{proof}
It is known that if $\rho(G) \le 2$, then $\rho(G)=\omega(G)$ \cite[Lemma 2.5]{steffen1}, and consequently $\rho(G)=\omega_{\textrm w}(G)=\omega(G)\le 2$.
For graphs with resistance more than $2$, resistance and oddness may be distinct
\cite{steffen2}.
In Section~\ref{construction} we find a graph with weak oddness $14$ and oddness $16$ which illustrates that weak oddness and oddness can differ.
We do not know, however, whether weak oddness and oddness can be different for graphs with
weak oddness smaller than $14$. In particular, the following is an open problem.
\begin{question}
Does there exist a cubic graph with weak oddness $4$ and oddness at least $6$?
\end{question}
\noindent If the answer to this question is affirmative, then Theorem~\ref{cdcthm} is more general than the original theorem requiring oddness at most $4$.
\section{Graphs with $\rho < \omega_{\textrm w} < \omega$}
\label{construction}
Our construction utilizes smaller blocks to build larger graphs.
A \emph{$2$-pole} is a triple $(G, s, t)$, where $G$ is a graph and $s$, $t$ are two different vertices of $G$ which both have degree one;
we will call them \emph{terminals} and the edges incident to them \emph{terminal edges}.
Each terminal of a $2$-pole serves as a place of connection with
another terminal. Two terminal edges of two disjoint $2$-poles can be naturally joined to form a
new nonterminal edge by identifying the terminal
vertices incident to them and suppressing the resulting $2$-valent vertex.
A standard way to create terminal vertices is by \emph{splitting off} a vertex $v$ from a
graph~$G$; by this we mean removing $v$ from $G$ and
attaching a terminal vertex to each dangling edge originally
incident with $v$.
The first step in our construction is to create the $2$-pole $H$ depicted in Fig.~\ref{diamond}. Let $P$ be the $2$-pole obtained from the Petersen graph by inserting a new vertex into one of its edges and then splitting the new vertex off. (The Petersen graph is edge-transitive, so the result of our operation is uniquely determined.) We take two copies $(P_1, s_1, t_1)$, $(P_2, s_2, t_2)$ of $P$, identify $s_1$ with $s_2$, $t_1$ with $t_2$, and attach a new terminal edge to both of $s_1$ and $t_1$.
\begin{figure}
\centering\includegraphics{obr-1}
\caption{The $2$-pole $H$ (its terminal vertices are marked by empty circles).}
\label{diamond}
\end{figure}
The next step is to create the $2$-pole $H_2$ by taking two copies of $H$ and joining a terminal edge of the first copy to a terminal edge of the second copy.
Finally, we take the complete graph on four vertices $u_0$, $u_1$, $u_2$, $u_3$ and remove all edges incident with $u_0$.
Then, for each $i\in\{1,2,3\}$, we take a new copy of $H_2$ and identify one of its terminals with $u_0$ and the other with $u_i$.
We denote the resulting cubic graph $G$ (see Fig.~\ref{thegraph}).
\begin{figure}
\centering\includegraphics{obr-2}
\caption{The construction of the graph $G$.}
\label{thegraph}
\end{figure}
\begin{lemma}
If an even factor $F$ of $G$ contains a cycle $C$ passing through a non-terminal vertex of a copy $H'$ of $H$, but not lying in $H'$, then $H'$ contains at least three odd components of $F$ different from $C$.
\label{passing}
\end{lemma}
\begin{proof}
If $C$ passes through $H$, it passes through non-terminal vertices of exactly one copy of $P$ contained in $H$.
In that copy, there is at least one odd component of $F$ apart from $C$.
In the other copy of $P$, there must be at least two odd components, because the Petersen graph is not $3$-edge-colourable.
\end{proof}
\begin{theorem}
The resistance, weak oddness, and oddness of the graph $G$ are $12$, $14$, and $16$, respectively.
\label{main}
\end{theorem}
\begin{proof}
First, we determine the resistance of $G$. Consider a copy of $P$ included in $H$. If we remove from it a vertex at distance $3$ from both terminals, the rest is $3$-edge-colourable in such a way that the colours of the terminal edges of $P$ are different (one possible colouring is presented in Fig.~\ref{colouring}).
Consequently, it is possible to remove two vertices of $H$ (one from each copy of $P$) and properly colour the edges of the remaining graph with $3$ colours so that the terminal edges of $H$ have the same colour. In other words, $H$ behaves just like an edge for our purpose, and so does $H_2$, so we are essentially colouring just $K_4$.
Therefore, by removing $12$ vertices from $G$ (one from each copy of $P$), we obtain a $3$-edge-colourable graph.
On the other hand, if we do not remove a vertex from $P$, it is not $3$-edge-colourable. This proves that $\rho(G) = 12$.
Assume that $F$ is an even factor of $G$ with the minimum number of odd components. If $u_0$ is an isolated vertex in $F$, then there are at least $12$ odd circuits in the copies of $P$, one more triangle in the rest of the graph, and the isolated vertex $u_0$ itself, which is a further odd component. If $u_0$ is not isolated in $F$, then some four copies of $H$ are passed through by a circuit of $F$, so, by Lemma~\ref{passing}, they contain at least $4\cdot 3$ odd components. Each of the remaining two copies of $H$ contains at least two odd circuits. In either case, $F$ has at least $14$ odd components, and so $\omega_{\textrm w}(G)\ge 14$.
Next, we describe an even factor $F$ of $G$ which contains $u_0$ as an isolated vertex and has $14$ odd components. No copy of $H$ is passed through by $F$. The Petersen graph has a $2$-factor comprised of two $5$-circuits, and thus $H$ has a $2$-factor composed of two $5$-circuits (one in each copy of $P$) and one $12$-circuit. Altogether, $F$ is comprised of six $12$-circuits, twelve $5$-circuits, one triangle $u_1u_2u_3$, and one isolated vertex. Consequently, $\omega_{\textrm w}\le 14$.
Finally, we determine the oddness of $G$. In every $2$-factor $F$ of $G$, the vertex $u_0$ belongs to a circuit of $F$, and this circuit must thus pass through four copies of $H$.
According to Lemma~\ref{passing}, those copies of $H$ contain at least $12$ odd circuits of $F$, and each of the remaining two copies of $H$ contains at least two odd circuits of $F$. Consequently, there are at least $16$ odd circuits, so $\omega(G)\ge 16$. It is easy to find a $2$-factor of $G$ having $16$ odd circuits (actually, all $2$-factors of $G$ have this property).
\end{proof}
\begin{figure}
\centering\includegraphics{obr-3}
\caption{A $3$-edge-colouring of $P$ with one vertex removed.}
\label{colouring}
\end{figure}
As can be easily seen from the proof of Theorem~\ref{main}, the construction can be modified by using more copies of $H$ in $H_2$,
which would lead to an arbitrarily large difference between oddness and weak oddness.
If we also insert one copy of $H_2$ into each of the edges $u_1u_2$, $u_2u_3$ and $u_3u_1$, the difference between weak oddness and resistance grows to $4$:
it is enough to delete one vertex in each copy of $P$ in order to get a $3$-edge-colourable graph, but we have to include all the isolated vertices $u_0$, $u_1$, $u_2$, $u_3$ in an even factor to have only one odd component in each copy of $P$. For an even larger difference, we can take
a large $3$-edge-colourable cubic graph and insert a copy of $H_2$ into each of its edges.
Besides showing that resistance, weak oddness, and oddness can be all distinct,
we can also make the following observation.
If we expand the vertex $u_0$ of $G$ into a triangle, the resulting graph will have oddness $14$,
while $G$ has oddness $16$. Thus expanding a vertex into a triangle can decrease oddness.
By using more copies of $H$ to produce $H_2$ we can obtain a graph
in which the expansion of $u_0$ into a triangle can decrease the oddness by as much as we wish (without changing its parity, of course).
\section{Conclusion}
\label{conc}
While we have demonstrated that weak oddness might differ from oddness, we have very little insight into a characterisation of graphs for which $\omega=\omega_{\textrm w}$.
It appears that every small snark $G$ has $\omega=\omega_{\textrm w}=2$ (this is true for snarks up to at least $26$ vertices \cite{my} and for cyclically $4$-connected snarks up to at least $36$ vertices \cite{brinkmann}). On the one hand, a snark with edge cuts of size $2$ can have vastly different values of $\omega$ and $\omega_{\textrm w}$. On the other hand, if Jaeger's conjecture \cite{jaeger7cc} is true, then there are no cyclically $7$-connected snarks, so for cubic graphs with sufficiently large cyclic connectivity it may well be that $\omega=\omega_{\textrm w}=0$. We propose the following question.
\begin{question}
For which integers $k\ge 3$ does there exist a cyclically $k$-connected snark $G$ such that $\omega(G)\neq\omega_{\textrm w}(G)$?
\end{question}
\noindent Note that for $3$-connected snarks the answer coincides with the answer to the following question.
\begin{question}
In a $3$-connected snark, can the expansion of a vertex into a triangle decrease oddness?
\end{question}
\noindent\textbf{Acknowledgements.} We would like to thank Barbora Candráková, Edita Máčajová, Eckhard Steffen, and Martin Škoviera for many fruitful discussions on topics related to even factors in cubic graphs.
The authors acknowledge support from the research grants VEGA 1/0042/14 and VEGA 1/0474/15.
\section{Introduction}
Scalar viscous balance laws in one space dimension are equations of the form
\begin{equation}
\label{VBL}
u_t + f(u)_x = u_{xx} + g(u),
\end{equation}
where $u = u(x,t) \in \mathbb{R}$ is a scalar unknown, $x \in \mathbb{R}$ and $t > 0$ denote the space and time variables, respectively, and $f = f(u)$ and $g = g(u)$ are nonlinear functions. Equation \eqref{VBL} describes the dynamics of a scalar quantity $u$ in a one dimensional domain, which is subject to three different mechanisms: the reaction $g(u)$ may describe production/consumption, chemical reactions or combustion, among other interactions; the density $u$ is (nonlinearly) transported with speed $f'(u)$; and the diffusion of $u$ is represented by the Laplace operator, $\partial_x^2$. In sum, scalar viscous balance laws constitute simplified models that combine diffusion (viscosity), convection and reaction into one single equation. For an abridged list of references on scalar viscous balance laws, see \cite{CroMas07,AlPl21,HaeSa06,Hae03}.
In these models, one of the most important mathematical solution types is the traveling wave. A \emph{spatially periodic} traveling wave solution to \eqref{VBL} has the form
\begin{equation}
\label{tws}
u(x,t) = \varphi(x-ct),
\end{equation}
where the constant $c \in \mathbb{R}$ is the speed of the wave and the profile function, $\varphi = \varphi(\xi)$, $\xi \in \mathbb{R}$, is a sufficiently smooth periodic function of its argument with fundamental period $L > 0$.
In this paper, we are concerned with the stability of a periodic wave as a solution to the equation \eqref{VBL}. In general, stability of a specific traveling wave solution under small perturbations is a fundamental property for understanding the real-world dynamics of models of evolution type. Examples of such models are the non-linear Schr\"odinger equation, the sine-Gordon equation and models of Korteweg-de Vries type, among others. The existence and stability theory of periodic waves has developed rapidly in recent years and it has drawn the attention of researchers from different areas of science, such as in fluid mechanics, optics, biology, or engineering, just to mention a few. New methods and theories have been developed using tools of non-linear analysis, bifurcation theory, spectral theory, as well as Fourier and harmonic analyses. The following (abridged) list of references may provide the reader with a panoramic idea of the recent evolution of the theory, \cite{AngAMS09, AngNat14, AngNat16, AnLN08, AngNat08, AngNat09,AlPl21,JMMP14,BJNRZ10}. In the particular case of scalar viscous balance laws, the analyses available in the literature have mainly focused on the stability of traveling fronts on the real line (see, e.g., \cite{Xing05, XuJJ16,WuXi05} and the references therein). In contrast, less attention has been paid to the stability of periodic waves.
In this work, we are interested in establishing new results related to the dynamics of periodic traveling waves for models of the general type in \eqref{VBL}. The present analysis can be regarded as a complement to the recent study in \cite{AlPl21}, where the existence and spectral instability of periodic waves for equations of the form \eqref{VBL} were established. The natural question is whether this spectral information guarantees the instability of the waves under the nonlinear evolution. Hence, the purpose of this paper is to show that, if a periodic wave is spectrally unstable, then it is also nonlinearly (orbitally) unstable under the flow of the evolution equation \eqref{VBL}. For such a statement to be meaningful, it is crucial to specify the spaces in which the spectrum is computed and for which the well-posedness holds. Our instability criterion guarantees the orbital instability of the manifold generated by any spectrally unstable periodic wave, under the flow of the nonlinear viscous balance law \eqref{VBL}, in periodic Sobolev spaces with the \emph{same period} as the fundamental period of the wave.
The analysis is based on a combination of the local well-posedness theory for \eqref{VBL}, the implicit function theorem, and an (important) abstract result by Henry {\it{et al.}} \cite{HPW82}, which essentially determines the instability of a manifold of equilibria under iterations of a nonlinear map with unstable linearized spectrum. This general abstract theorem has been the basis of the nonlinear instability theory of periodic waves in other contexts, such as the KdV equation \cite{LopO02}, the critical KdV and NLS models \cite{AngNat09}, KdV systems \cite{AnLN08}, and for general dispersive models \cite{AngNat16}, just to mention a few. In order to apply the theorem by Henry \emph{et al.}, some essential elements are needed, such as a suitable well-posedness theory and the property that the data-solution map is of class $C^2$.
There exist several studies of well-posedness for parabolic equations of the form \eqref{VBL} available in the literature (see, e.g., \cite{AmannI-95,AmannII-19,LaSoUr68,Arm66}). For the convenience of the reader, we present a detailed (yet concise) proof of local well-posedness of the Cauchy problem for equations of the form \eqref{VBL} in periodic Sobolev spaces of distributions (in the spirit of the analysis of Iorio and Iorio \cite{IoIo01} for nonlinear equations). Even though our well-posedness analysis is, indeed, quite standard, several refined estimates need to be established in the course of the proof, as they are used to prove the smoothness of the data-solution map, a key element of the abstract result by Henry \emph{et al.} Moreover, to the best of our knowledge, the well-posedness of equations of the form \eqref{VBL} in Sobolev spaces of $L$-periodic distributions has not been reported as such in the literature.
Once the orbital instability criterion is at hand, one may ask about its applicability to particular examples. In the aforementioned recent paper \cite{AlPl21}, the authors applied dynamical systems techniques in order to show that, under certain structural assumptions, there exist two families of periodic waves for equations of the form \eqref{VBL}. The first family emerges from a local Hopf bifurcation when the speed $c$ crosses a critical value $c_0$. These waves have small amplitude and finite period. The second family is generated by a global homoclinic bifurcation around a second critical value of the speed $c_1$, which is the speed of a traveling pulse or homoclinic wave. These periodic waves have amplitude of order $O(1)$ but have large period tending to $\infty$ (which can be regarded as the period of the traveling pulse). A couple of examples and numerical computations, which illustrate both families of waves, are also presented. Therefore, in order to illustrate the applicability of the criterion, we study the orbital instability of both families of waves via a verification of the conditions for an unstable spectrum. The two families are parametrized by a small parameter $\epsilon > 0$ (measuring the deviation of the speed of the wave from the critical speed in each case). Hence, we obtain instability under the flow of the evolution equation in periodic spaces with the same period as the wave, once the parameter $\epsilon > 0$ is fixed (see Theorems \ref{teoorbsmall} and \ref{teoorblarge} below).
The paper is structured as follows. In Section \ref{secprelim} we make precise the notions of spectral and orbital instability of periodic waves and state the main results of the paper, namely, the orbital instability criterion and the well-posedness theorem. Section \ref{secwellpos} is devoted to the well-posedness theory for equations of the form \eqref{VBL} in periodic Sobolev spaces. Special attention is devoted to showing that the data-solution map is smooth enough. Section \ref{secmain} contains the proof that spectral instability implies orbital instability, upon application of an abstract result on instability of equilibrium points. The final Section \ref{secappl} contains the description of the two families of periodic waves found in \cite{AlPl21} and verifies the appropriate hypotheses to apply our orbital instability criterion.
\subsection*{On notation}
Linear operators acting on infinite-dimensional spaces are indicated with calligraphic letters (e.g., ${\mathcal{L}}$), except for the identity operator which is indicated by $\mathrm{Id}$. The domain of a linear operator, ${\mathcal{L}} : X \to Y$, with $X$, $Y$ Banach spaces, is denoted as ${\mathcal{D}}({\mathcal{L}}) \subseteq X$. For a closed linear operator with dense domain the usual definitions of resolvent and spectra apply (cf. Kato \cite{Kat80}). When computed with respect to the space $X$, the spectrum of ${\mathcal{L}}$ is denoted as $\sigma({\mathcal{L}})_{|X}$. We denote the real part of a complex number $\lambda \in \mathbb{C}$ by $\Re\lambda$. The classical Lebesgue and Sobolev spaces of complex-valued functions on the real line will be denoted as $L^2(\mathbb{R})$ and $H^m(\mathbb{R})$, with $m \in \mathbb{N}$, endowed with the standard inner products and norms. For any $L > 0$ and any $s \in \mathbb{R}$, we denote by $H^s_\mathrm{\tiny{per}} = H^s_\mathrm{\tiny{per}}([0,L])$ the Sobolev space of $L$-periodic distributions such that
\[
\| u \|^2_s := L \sum_{k=-\infty}^{\infty} (1 + |k|^2)^s |\widehat{u}(k)|^2 < \infty,
\]
where $\widehat{u}$ is the Fourier transform of $u$. According to custom we denote $H^0_\mathrm{\tiny{per}} = L^2_\mathrm{\tiny{per}}$. If $s > k + \tfrac{1}{2}$, $k \in \mathbb{N} \cup \{0\}$, then there holds the continuous embedding, $H^s_\mathrm{\tiny{per}} \hookrightarrow C^k_\mathrm{\tiny{per}}$, where $C^k_\mathrm{\tiny{per}}$ is the space of $L$-periodic functions with $k$ continuous derivatives. The translation operator in $H^s_\mathrm{\tiny{per}}([0,L])$ will be denoted as $\zeta_\eta : H^s_\mathrm{\tiny{per}}([0,L]) \to H^s_\mathrm{\tiny{per}}([0,L])$, $\zeta_\eta(u) = u(\cdot + \eta)$ for any $\eta \in \mathbb{R}$. Translation is a smooth operator in $H^s_\mathrm{\tiny{per}}([0,L])$. Moreover, if $s \geq 0$ then we have $\| \zeta_\eta(u) \|_s = \| u \|_s$ for all $u \in H^s_\mathrm{\tiny{per}}$ and all $\eta \in \mathbb{R}$ (see Iorio and Iorio \cite{IoIo01} for details).
\section{Stability Framework and Main Theorems}
\label{secprelim}
In this section we describe the different notions of stability under consideration and state our main results.
\subsection{Spectral stability}
Suppose that a sufficiently smooth profile function, $\varphi = \varphi(\cdot)$, determines an $L$-periodic traveling wave solution to \eqref{VBL} of the form \eqref{tws} for some speed value $c \in \mathbb{R}$. Substitution of \eqref{tws} into \eqref{VBL} yields the following ODE for the profile,
\begin{equation}
\label{profileq}
-c \varphi' + f'(\varphi)\varphi' = \varphi'' + g(\varphi).
\end{equation}
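We observe in passing that, introducing $v := \varphi'$, the profile equation \eqref{profileq} can be equivalently written as the first order system
\[
\varphi' = v, \qquad v' = \big( f'(\varphi) - c \big) v - g(\varphi),
\]
so that $L$-periodic wave profiles correspond to closed orbits of this planar system; this is the dynamical systems viewpoint employed in \cite{AlPl21} to construct the two families of waves described in the Introduction.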
With a slight abuse of notation let us rescale the space variable as $x \mapsto x - c t$ (the co-moving Galilean frame) in order to transform \eqref{VBL} into the equation
\begin{equation}
\label{VBLg}
u_t = u_{xx} + g(u) +cu_x - f(u)_x,
\end{equation}
for which now the periodic wave is a stationary solution, $u(x,t) = \varphi(x)$, in view of \eqref{profileq}. For solutions to \eqref{VBLg} of the form $\varphi(x) + v(x,t)$, where $v$ denotes a nearby perturbation, the leading approximation is given by the linearization of this equation around $\varphi$, namely
\[
v_t = v_{xx} + (c - f'(\varphi))v_x + (g'(\varphi) - f'(\varphi)_x) v.
\]
Specializing to perturbations of the form $v(x,t) = e^{\lambda t} u(x)$, where $\lambda \in \mathbb{C}$ and $u$ lies in an appropriate Banach space $X$, we arrive at the eigenvalue problem
\begin{equation}
\label{primeraev}
\lambda u = u_{xx} + (c - f'(\varphi))u_x + (g'(\varphi) - f'(\varphi)_x) u,
\end{equation}
in which the complex growth rate appears as the eigenvalue. Intuitively, a necessary condition for the wave to be ``stable'' is the absence of eigenvalues with $\Re \lambda > 0$, precluding exponentially growing modes at the linear level. Motivated by the notion of spatially localized, finite energy perturbations in the Galilean coordinate frame in which the periodic wave is stationary, we consider $X = L^2(\mathbb{R})$ and define the linearized operator around the wave as
\begin{equation}
\label{linop}
\left\{
\begin{aligned}
{\mathcal{L}}^c \, &: \, L^2(\mathbb{R}) \longrightarrow L^2(\mathbb{R}),\\
{\mathcal{L}}^c \, &: = \, \partial_x^2 + a_1(x) \partial_x + a_0(x) \mathrm{Id},
\end{aligned}
\right.
\end{equation}
with dense domain ${\mathcal{D}}({\mathcal{L}}^c) = H^2(\mathbb{R})$, and where the coefficients,
\begin{equation}
\label{defas}
\begin{aligned}
a_1(x) &:= c - f'(\varphi),\\
a_0(x) &:= g'(\varphi) - f'(\varphi)_x,
\end{aligned}
\end{equation}
are bounded and periodic, satisfying $a_j(x + L) = a_j(x)$ for all $x \in \mathbb{R}$, $j = 0,1$. ${\mathcal{L}}^c$ is a densely defined, closed operator acting on $L^2(\mathbb{R})$ with domain ${\mathcal{D}}({\mathcal{L}}^c) = H^2(\mathbb{R})$. Hence, the eigenvalue problem \eqref{primeraev} is recast as ${\mathcal{L}}^c u = \lambda u$ for some $\lambda \in \mathbb{C}$ and $u \in {\mathcal{D}}({\mathcal{L}}^c) = H^2(\mathbb{R})$.
\begin{definition}[spectral stability]
\label{defspectstab}
We say that a bounded periodic wave $\varphi$ is \emph{spectrally stable} as a solution to the viscous balance law \eqref{VBL} if the $L^2$-spectrum of the linearized operator around the wave defined in \eqref{linop} satisfies
\[
\sigma({\mathcal{L}}^c)_{|L^2(\mathbb{R})} \cap \{\lambda \in \mathbb{C} \, : \, \Re \lambda > 0\} = \varnothing.
\]
Otherwise we say that it is \emph{spectrally unstable}.
\end{definition}
\begin{remark}
We remind the reader that any complex number $\lambda$ belongs to the point spectrum of an operator ${\mathcal{L}}$, denoted as $\sigma_\mathrm{\tiny{pt}}({\mathcal{L}})$, if ${\mathcal{L}} - \lambda$ is a Fredholm operator with index equal to zero and with a non-trivial kernel. $\lambda$ belongs to the essential spectrum, $\sigma_\mathrm{\tiny{ess}}({\mathcal{L}})$, provided that either ${\mathcal{L}} - \lambda$ is not Fredholm, or it is Fredholm with non-zero index. Clearly, $\sigma_\mathrm{\tiny{pt}}({\mathcal{L}}), \sigma_\mathrm{\tiny{ess}}({\mathcal{L}}) \subset \sigma({\mathcal{L}})$. Moreover, since the operator is closed, $\sigma({\mathcal{L}}) = \sigma_\mathrm{\tiny{pt}}({\mathcal{L}}) \cup \sigma_\mathrm{\tiny{ess}}({\mathcal{L}})$. The point spectrum consists of isolated eigenvalues with finite (algebraic) multiplicity (see \cite{Kat80,KaPro13} for further information).
\end{remark}
Since the coefficients of the operator ${\mathcal{L}}^c$ are periodic, it is well known from Floquet theory that ${\mathcal{L}}^c$ has no $L^2$-point spectrum and that its spectrum is purely essential (or continuous), $\sigma({\mathcal{L}}^c)_{|L^2(\mathbb{R})} = \sigma_\mathrm{\tiny{ess}}({\mathcal{L}}^c)_{|L^2(\mathbb{R})}$ (see Lemma 3.3 in \cite{JMMP14}, or Lemma 59, p. 1487, in \cite{DunSch2}). However, it is possible to parametrize the spectrum in terms of Floquet multipliers of the form $e^{i\theta} \in \mathbb{S}^1$, $\theta \in \mathbb{R}$ (mod $2\pi$) via a \emph{Bloch transformation} \cite{KaPro13,Grd97}. Indeed, the purely essential spectrum $\sigma({\mathcal{L}}^c)_{|L^2(\mathbb{R})}$ can be written as the union of partial point spectra:
\begin{equation}
\label{Floquetrep}
\sigma({\mathcal{L}}^c)_{|L^2(\mathbb{R})} = \!\!\bigcup_{-\pi<\theta \leq \pi}\sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^c_\theta)_{|L^2_\mathrm{\tiny{per}}([0,L])},
\end{equation}
where the one-parameter family of Bloch operators,
\begin{equation}
\label{Blochop}
\left\{
\begin{aligned}
{\mathcal{L}}^c_\theta &:= (\partial_x + i\theta/L)^2 + a_1(x) (\partial_x + i \theta/L) + a_0(x) \mathrm{Id},\\
{\mathcal{L}}^c_\theta &: L^2_\mathrm{\tiny{per}}([0,L]) \to L^2_\mathrm{\tiny{per}}([0,L]),
\end{aligned}
\right.
\end{equation}
with domain ${\mathcal{D}}({\mathcal{L}}^c_\theta) = H^2_\mathrm{\tiny{per}}([0,L])$, are parametrized by the Floquet exponent (or \emph{Bloch parameter}) $\theta \in (-\pi,\pi]$, and act on the periodic Sobolev space with the same period $L > 0$ as the wave. Since the operators of this family have domains compactly embedded in $L^2_\mathrm{\tiny{per}} = L^2_\mathrm{\tiny{per}}([0,L])$, their spectra consist entirely of isolated eigenvalues, $\sigma({\mathcal{L}}^c_\theta)_{|L^2_\mathrm{\tiny{per}}} = \sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^c_\theta)_{|L^2_\mathrm{\tiny{per}}} $. Moreover, they depend continuously on the Bloch parameter $\theta$, which may be regarded as a local coordinate for the spectrum $\sigma({\mathcal{L}}^c)_{|L^2(\mathbb{R})}$ (see Proposition 3.7 in \cite{JMMP14}), meaning that $\lambda \in \sigma({\mathcal{L}}^c)_{|L^2(\mathbb{R})}$ if and only if $\lambda \in \sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^c_\theta)_{|L^2_\mathrm{\tiny{per}}}$ for some $\theta \in (-\pi,\pi]$. The parametrization \eqref{Floquetrep} is called the \emph{Floquet characterization of the spectrum} (for details, see \cite{AlPl21,JMMP14,KaPro13,Grd97} and the references therein). As a consequence of \eqref{Floquetrep} we conclude that the periodic wave $\varphi$ is $L^2$-spectrally unstable if and only if there exists $\theta_0 \in (-\pi, \pi]$ for which
\[
\sigma_\mathrm{\tiny{pt}}({\mathcal{L}}_{\theta_0}^c)_{|L^2_\mathrm{\tiny{per}}([0,L])} \cap \{ \lambda \in \mathbb{C} \, : \, \Re \lambda > 0\} \neq \varnothing.
\]
\begin{remark}
Notice that when the Bloch parameter is $\theta = 0$, the expression of the operator ${\mathcal{L}}_0^c$ coincides with that of the linearized operator around the wave in \eqref{linop}, but now acting on a periodic space:
\begin{equation}
\label{linopBloch0}
\left\{
\begin{aligned}
{\mathcal{L}}_0^c \, &: \, L^2_\mathrm{\tiny{per}}([0,L]) \longrightarrow L^2_\mathrm{\tiny{per}}([0,L]),\\
{\mathcal{L}}_0^c \, &: = \, \partial_x^2 + a_1(x) \partial_x + a_0(x) \mathrm{Id}.
\end{aligned}
\right.
\end{equation}
\end{remark}
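Although it plays no role in the proofs below, we note that the Floquet characterization \eqref{Floquetrep} also suggests a direct numerical test for spectral instability: truncating the Fourier series representation of the Bloch operators \eqref{Blochop} produces finite matrices whose eigenvalues approximate the point spectra $\sigma_\mathrm{\tiny{pt}}({\mathcal{L}}^c_\theta)_{|L^2_\mathrm{\tiny{per}}}$ (a Hill's-method-type computation). The following sketch is an illustration only; it assumes that samples of the coefficients $a_0$, $a_1$ from \eqref{defas} on an equispaced grid in $[0,L)$ are available, and all names are placeholders.
\begin{verbatim}
# Truncated Fourier (Hill's method) approximation of the spectrum of the
# Bloch operator (d/dx + i*theta/L)^2 + a1(x)(d/dx + i*theta/L) + a0(x).
import numpy as np

def bloch_spectrum(a1_vals, a0_vals, L, theta, N):
    M = 2 * N + 1
    k = np.fft.fftfreq(M, d=1.0 / M).astype(int)    # integer Fourier indices
    D = 1j * (2 * np.pi * k + theta) / L            # symbol of (d/dx + i*theta/L)
    def mult(a_vals):                               # multiplication by a(x)
        a_hat = np.fft.fft(a_vals) / M              # Fourier coefficients of a
        return np.array([[a_hat[(k[m] - k[n]) % M] for n in range(M)]
                         for m in range(M)])
    Lmat = np.diag(D**2) + mult(a1_vals) @ np.diag(D) + mult(a0_vals)
    return np.linalg.eigvals(Lmat)                  # approximates sigma_pt

# The wave is (numerically) spectrally unstable if some eigenvalue has
# positive real part for some theta in (-pi, pi].
\end{verbatim}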
\subsection{Orbital stability}
Once the spectral (in)stability of a periodic wave is established, a natural question arises. Is the traveling wave solution nonlinearly stable, in a certain sense, with respect to the flow of the equation \eqref{VBL}? Can we deduce from the spectral (in)stability of a periodic wave a nonlinear (in)stability result? In this paper we prove that spectral instability implies orbital (nonlinear) instability in a sense that is described below.
First, note that if the profile function $\varphi = \varphi(\cdot)$ is smooth enough then it belongs to the periodic space $H^2_\mathrm{\tiny{per}}([0,L])$. Hence, one can compare the motion $\varphi(x - ct)$, as a solution to \eqref{VBL}, to a general class of motions $u = u(x,t)$ evolving from initial conditions, $u(0) =\psi$, that are close in some sense to $\varphi$. The notion of orbital stability is, thus, the property that $u(\cdot,t)$ remains close to $\varphi(\cdot +\gamma)$, $\gamma=\gamma(t)$, for all times provided that $u(0)$ starts close to $\varphi(\cdot)$. In other words, the type of stability that we expect is that the perturbation remains close to the manifold generated by translations of the traveling wave, leading to the concept of \emph{orbital stability} (also called stability \emph{in shape} \cite{AngAMS09}). We define the orbit generated by $\varphi$ as the set
\[
{\mathcal{O}}_\varphi = \{ \varphi(\cdot + r) \, : \, r \in \mathbb{R} \} \subset H^2_\mathrm{\tiny{per}}([0,L]).
\]
We note that ${\mathcal{O}}_\varphi$ represents a $C^1$-curve, $\Gamma=\Gamma(r)$, in $H^2_\mathrm{\tiny{per}}([0,L])$ determined by the parameter $r\in \mathbb R$, $\Gamma(r)=\zeta_r( \varphi)$. Thus, the traveling wave profile will be orbitally stable if its orbit $\Gamma$ is stable by the flow generated by the evolution equation. Consequently, we have the following definition associated to \eqref{VBL} (cf. \cite{AngAMS09}).
\begin{definition}[orbital stability]
\label{deforbital}
Let $X$, $Y$ be Banach spaces, with the continuous embedding $Y \hookrightarrow X$. Let $\varphi \in X$ be a traveling wave solution to equation \eqref{VBL}. We say $\varphi$ is \emph{orbitally stable} in $X$ by the flow of \eqref{VBL} if for each $\varepsilon > 0$ there exists $\delta = \delta(\varepsilon) > 0$ such that if $\psi \in Y$ and
\[
\inf_{r \in \mathbb{R}} \| \psi(\cdot) - \varphi(\cdot + r) \|_X < \delta,
\]
then the solution $u(x,t)$ of \eqref{VBL} with initial condition $u(0) = \psi$ exists globally and satisfies
\[
\sup_{t>0} \inf_{r \in \mathbb{R}} \| u(\cdot, t) - \varphi(\cdot + r) \|_X < \varepsilon.
\]
Otherwise we say that $\varphi$ is \emph{orbitally unstable} in $X$.
\end{definition}
In the present context of periodic waves for equations of the form \eqref{VBL}, we select $X = Y = H^2_\mathrm{\tiny{per}}([0,L])$, where $L > 0$ is the fundamental period of the wave.
\subsection{Main results}
Let us now state the main results of the paper. The first theorem establishes the local well-posedness of the evolution equation \eqref{VBL} in periodic Sobolev spaces.
\begin{theorem}
\label{teolocale}
Assume $f \in C^2(\mathbb{R})$, $g \in C^1(\mathbb{R})$, $L > 0$ and $s > 3/2$. If $\phi \in H^s_\mathrm{\tiny{per}}([0,L])$ then there exist some ${T} = {T}(\| \phi \|_s) > 0$ and a unique solution $u \in C([0,T];H^s_\mathrm{\tiny{per}}([0,L])) \cap C^1((0,T];H_\mathrm{\tiny{per}}^{s-2}([0,L]))$ to the Cauchy problem for equation \eqref{VBL} with initial datum $u(0)=\phi$. For each $T_0\in (0,T)$, the data-solution map,
\[
\phi\in H^s_\mathrm{\tiny{per}}([0,L]) \mapsto u_\phi\in C([0,T_0];H^s_\mathrm{\tiny{per}}([0,L])),
\]
is continuous. Moreover, if we further assume $f \in C^4(\mathbb{R})$ and $g \in C^3(\mathbb{R})$ then the data-solution map is of class $C^2$.
\end{theorem}
The second main result is precisely the general criterion for orbital instability based on an unstable spectrum of the linearized operator around the wave. It establishes orbital instability under the flow of the nonlinear viscous balance law in periodic Sobolev spaces with same period as the fundamental period of the wave.
\begin{theorem}[orbital instability criterion for viscous balance laws]
\label{mainthem}
Suppose that $f \in C^4(\mathbb{R})$, $g \in C^3(\mathbb{R})$. Let $u(x,t) = \varphi (x-ct)$ be a periodic traveling wave solution with speed $c \in \mathbb{R}$ to the viscous balance law \eqref{VBL}, where the profile function $\varphi = \varphi(\cdot)$ is of class $C^2$ and has fundamental period $L > 0$. Assume that the following \emph{spectral instability property} holds: the linearized operator around the wave, ${\mathcal{L}}_0^c : L^2_\mathrm{\tiny{per}}([0,L]) \to L^2_\mathrm{\tiny{per}}([0,L])$, defined in \eqref{linopBloch0}, has an unstable eigenvalue, that is, there exists $\lambda \in \mathbb{C}$ with $\Re \lambda > 0$ and some eigenfunction $\Psi \in {\mathcal{D}}({\mathcal{L}}_0^c) = H^2_\mathrm{\tiny{per}}([0,L]) \subset L^2_\mathrm{\tiny{per}}([0,L])$ such that ${\mathcal{L}}_0^c \Psi = \lambda \Psi$. Then the periodic traveling wave is orbitally unstable in $X = H_\mathrm{\tiny{per}}^2([0,L])$ under the flow of \eqref{VBL}.
\end{theorem}
Additionally, in this paper we apply Theorem \ref{mainthem} to prove the nonlinear instability of periodic waves belonging to the two families described in the Introduction, whose existence and spectral instability were proved in \cite{AlPl21} (see Theorems \ref{teoorbsmall} and \ref{teoorblarge} below).
\section{Local well-posedness}
\label{secwellpos}
In this section we establish the local well-posedness of the model equation \eqref{VBL} in $H^s_\mathrm{\tiny{per}}([0,L])$ for any $s > 3/2$. Our analysis is standard and is based on Banach's fixed point theorem. Although the arguments are classical and present no major difficulties, several of the estimates are key ingredients in order to obtain the smoothness of the data-solution map associated to \eqref{VBL} (see Section \ref{secsmooth}). In the sequel (and for the rest of the paper) we use the notation,
\[
X_s=H^s_\mathrm{\tiny{per}}([0,L]), \qquad \text{for any } s \in \mathbb{R}.
\]
We start with the recollection of well-known facts.
\subsection{The heat semigroup in $X_s$}
The following properties of the heat semigroup acting on periodic Sobolev spaces can be found in the book by Iorio and Iorio \cite{IoIo01}.
\begin{theorem}
\label{teoheatSG}
The Cauchy problem for the heat equation,
\begin{equation}
\label{heat}
\begin{aligned}
u_t& = u_{xx} , \\
u(0) &= \phi,
\end{aligned}
\end{equation}
is globally well-posed in $X_s$ for any $s \in \mathbb{R}$, $L > 0$. That is, if $\phi \in X_s$ then there exists a unique mild solution $u \in C([0,T];X_s)$ for all $T > 0$. The solution is given by
\[
u(t) = \mathcal{V}(t) \phi,
\]
where the family of operators, $\mathcal{V}(t) : X_s \to X_s$, $t \geq 0$, is the heat $C_0$-semigroup of contractions,
\[
\mathcal{V}(t) \phi := \big( e^{-k^2 t} \widehat{\phi} \, \big)^{\vee},
\]
with generator ${\mathcal{T}} = \partial_x^2$ and dense domain ${\mathcal{D}} = H^{s+2}_\mathrm{\tiny{per}}([0,L])$. The solution depends continuously on the initial data in the following sense,
\[
\sup_{t\in[0, \infty)} \big\| \mathcal{V}(t) \phi_1 - \mathcal{V}(t) \phi_2 \big\|_s \leq \| \phi_1 - \phi_2\|_s.
\]
\end{theorem}
\begin{proof}
Follows from standard theory: it is a particular case (with $q=0$) of Corollary 4.16 and Theorems 4.9, 4.14 and 4.25 in Iorio and Iorio \cite{IoIo01}, pp. 218--232.
\end{proof}
\begin{corollary}
\label{corheat}
For all $s \in \mathbb{R}$, $N \geq 0$ and any $\phi \in H^s_\mathrm{\tiny{per}}$,
\begin{equation}
\label{limsmN}
\lim_{h \to 0} \left\| h^{-1} (\mathcal{V}(t+h) - \mathcal{V}(t))\phi - \partial_x^2 (\mathcal{V}(t) \phi ) \right\|_{s-N} = 0,
\end{equation}
uniformly with respect to $t \geq 0$. In particular, there exists a uniform $\overline{C} > 0$ such that
\begin{equation}
\label{unifcota}
\| h^{-1} (\mathcal{V}(h) - \mathrm{Id}) - \partial_x^2 \| \leq \overline{C},
\end{equation}
in the operator norm for all small $0 < |h| \ll 1$.
\end{corollary}
\begin{proof}
See Theorem 4.15 in \cite{IoIo01}. The second assertion follows immediately from \eqref{limsmN}.
\end{proof}
\begin{corollary}[regularity inequality]
\label{corregineq}
For all $r \in \mathbb{R}$ and $\delta \geq 0$ there exists a uniform constant $K_\delta > 0$ depending only on $\delta$ such that
\begin{equation}
\label{regineq}
\| \mathcal{V}(t) u \|_{r + \delta} \leq K_\delta \Big[ 1 + \Big( \frac{\delta}{2t}\Big)^{\delta}\Big]^{1/2} \| u \|_r,
\end{equation}
for all $u \in H_\mathrm{\tiny{per}}^r([0,L])$, $t >0$.
\end{corollary}
\begin{proof}
This is a particular case, with $q = 0$ and $\mu =1$, of Theorem 4.17 in \cite{IoIo01}.
\end{proof}
\subsection{The Cauchy problem for viscous balance laws}
Let us consider the Cauchy problem for the viscous balance law \eqref{VBL} in $X_s$, $s > 3/2$. It reads:
\begin{equation}
\label{CpVBL}
\begin{aligned}
u_t - u_{xx} &= g(u) - f'(u)u_x,\\
u(0) &= \phi,
\end{aligned}
\end{equation}
for some initial condition $u(0) = \phi \in X_s$. Assuming $f \in C^2$, $g \in C^1$, we define $F \in C^1(\mathbb{R}^2)$ as
\[
F(u,p) := g(u) - f'(u)p, \qquad (u,p) \in \mathbb{R}^2.
\]
Hence, the Cauchy problem \eqref{CpVBL} can be recast as
\begin{equation}
\label{CpVBL2}
\begin{aligned}
u_t - u_{xx} &= F(u,u_x),\\
u(0) &= \phi,
\end{aligned}
\end{equation}
with $\phi \in X_s$. Upon application of the variation of constants formula we arrive at the integral equation,
\begin{equation}
\label{inteq}
{\mathcal{A}} u := u(t) = \mathcal{V}(t) \phi + \int_0^t \mathcal{V}(t-\tau) F(u,u_x) \, d \tau.
\end{equation}
In order to prove existence and uniqueness of solutions to the Cauchy problem \eqref{CpVBL2} we follow the standard blueprint (see, e.g., Taylor \cite{TayPDE3-2e}, chapter 15): (i) the linear part of the equation generates a $C_0$-semigroup in a certain Banach space $X$ (this step has already been verified by Theorem \ref{teoheatSG}); (ii) the nonlinear term $F$ is locally Lipschitz from $X$ to another Banach space $Y$; and (iii) the operator ${\mathcal{A}}$ is a contraction on a closed ball in $C([0,T];X)$ for $T$ sufficiently small, yielding, upon application of Banach's fixed point theorem, a solution to the integral equation \eqref{inteq}.
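As a side remark, not needed for the proofs, the integral equation \eqref{inteq} also underlies simple numerical schemes for \eqref{CpVBL2}: approximating the Duhamel integral by a one-point quadrature yields the first order exponential step $u(t+\Delta t)\approx \mathcal{V}(\Delta t)\big[u(t) + \Delta t\, F(u,u_x)(t)\big]$, which can be implemented with the fast Fourier transform for $L$-periodic data. A minimal sketch follows; $f'$, $g$ and the initial datum are placeholders to be supplied.
\begin{verbatim}
# One first-order exponential time step for u_t = u_xx + g(u) - f'(u) u_x
# with L-periodic data, based on the Duhamel formula (illustration only).
import numpy as np

def step(u, dt, L, fprime, g):
    M = u.size
    xi = 2 * np.pi * np.fft.fftfreq(M, d=L / M)   # wavenumbers 2*pi*k/L
    u_x = np.real(np.fft.ifft(1j * xi * np.fft.fft(u)))
    Fu = g(u) - fprime(u) * u_x                   # F(u, u_x)
    heat = np.exp(-xi**2 * dt)                    # Fourier symbol of V(dt)
    return np.real(np.fft.ifft(heat * np.fft.fft(u + dt * Fu)))
\end{verbatim}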
The following lemmata are devoted to verify these steps in the context of periodic Sobolev spaces and in the spirit of the analysis of Iorio and Iorio \cite{IoIo01} for nonlinear equations.
\begin{lemma}
\label{lemFlip}
Let $s > 3/2$ and assume $f \in C^2$, $g \in C^1$. Then $F = F(u,u_x)$ is locally Lipschitz from $H^s_\mathrm{\tiny{per}}$ to $H_\mathrm{\tiny{per}}^{s-1}$. More precisely, for any
\[
u,v \in \overline{B_M} = \{ w \in H^s_\mathrm{\tiny{per}} \, : \, \| w \|_s \leq M \} \subset H^s_\mathrm{\tiny{per}},
\]
with $M > 0$ fixed but arbitrary, there holds the estimate
\begin{equation}
\label{E1}
\| F(u,u_x) - F(v,v_x) \|_{s-1} \leq L_s(\| u \|_s, \| v \|_s) \| u - v \|_s,
\end{equation}
where $L_s : [0,\infty) \times [0,\infty) \to (0,\infty)$, $L_s = L_s(\varrho_1,\varrho_2)$, is a continuous, positive function and non-decreasing with respect to each argument. In particular, there holds the estimate
\begin{equation}
\label{E2}
\| F(u,u_x) \|_{s-1} \leq L_s(\| u \|_s, 0) \| u \|_s,
\end{equation}
for all $u \in \overline{B_M}$.
\end{lemma}
\begin{proof}
Let $u,v \in \overline{B_M}$. Since for each $s > 1/2$, $H^s_\mathrm{\tiny{per}}$ is a Banach algebra (see Theorem 3.200 in \cite{IoIo01}), there exists a constant $C_s \geq 0$ depending only on $s$ such that
\[
\begin{aligned}
\| F(u,u_x) - F(v,v_x) \|_{s-1}
&\leq \| g(u) - g(v) \|_{s-1} + C_s \| f'(u) \|_{s-1} \| u_x - v_x \|_{s-1} + \\ &\quad + C_s \|f'(v) - f'(u) \|_{s-1} \| v_x \|_{s-1}\\
&\leq \| g(u) - g(v) \|_{s-1} + C_s \| f'(u) \|_{s-1} \| u - v \|_{s} + \\ &\quad + C_s \|f'(v) - f'(u) \|_{s-1} \| v \|_{s}.
\end{aligned}
\]
Since $s > 3/2$ we have $H^s_\mathrm{\tiny{per}} \subset H^1_\mathrm{\tiny{per}}$ and, by Sobolev's inequality, there holds $|u| \leq \| u \|_{L^\infty} \leq 2 \| u \|_0^{1/2} \| u_x \|_0^{1/2} \leq 2 \|u\|_1 \leq 2 \|u\|_s \leq 2M$ a.e. in $x \in [0,L]$ for all $u \in \overline{B_M}$. Since $f'$ and $g$ are continuously differentiable, they are locally Lipschitz on the compact set $[-2M,2M]$, and there exist uniform constants $L_f, L_g > 0$, depending only on $s$ and $M$, such that
\begin{equation}
\label{Lipest}
\begin{aligned}
\| f'(u) - f'(v) \|_{s-1} &\leq L_f \|u - v \|_{s-1},\\
\| g(u) - g(v) \|_{s-1} &\leq L_g \|u - v \|_{s-1},\\
\| g'(u) - g'(v) \|_{s-1} &\leq L_g \|u - v \|_{s-1},\\
\| f'(u) \|_{s-1} &\leq L_f \| u \|_{s-1} + |f'(0)|,\\
\| g'(u) \|_{s-1} &\leq L_g \| u \|_{s-1} + |g'(0)|,
\end{aligned}
\end{equation}
for all $u,v \in \overline{B_M}$. Therefore,
\[
\begin{aligned}
\| F(u,u_x) - F(v,v_x) \|_{s-1} &\leq \big[ L_g + C_s \| f'(u) \|_{s-1}+ C_s \| v \|_{s-1} \big] \| u - v \|_{s-1}\\
&\leq \big[ L_g + C_s (L_f \| u \|_s + |f'(0)|+ \| v \|_s) \big] \| u - v \|_{s}\\
&\leq L_s(\|u\|_s, \|v \|_s) \| u - v \|_{s},
\end{aligned}
\]
since from definition $\| u \|_{s-1} \leq \| u \|_s$ for all $u$, and where we have defined\footnote{for later use (see the proof of Lemma \ref{leminvert} below) we have incorporated $|g'(0)|$ into the definition of this upper bound $L_s(\cdot,\cdot)$.}
\begin{equation}
\label{defLs}
L_s(\varrho_1,\varrho_2) := L_g + C_s \big( (L_f + L_g) \varrho_1 + \varrho_2 + |f'(0)| + |g'(0)| \big) > 0,
\end{equation}
for all $(\varrho_1,\varrho_2) \in [0,\infty) \times [0,\infty)$. Clearly, $L_s(\cdot,\cdot)$ is continuous and non-decreasing with respect to each argument. This yields \eqref{E1} and the lemma is proved.
\end{proof}
Let $\phi \in X_s$. For any $\alpha > 0$, fixed but arbitrary, and for $T > 0$ to be chosen later, let us define
\[
Z_{\alpha,T} := \Big\{ u \in C([0,T];X_s) \, : \, \sup_{t \in [0,T]} \| u(t) - \phi \|_s \leq \alpha \Big\}.
\]
Clearly, $Z_{\alpha,T} \subset C([0,T];X_s)$ and it is closed under the norm
\[
\| u \|_{C([0,T]; X_s)} := \sup_{t \in [0,T]} \| u(t) \|_s.
\]
Next result establishes the conditions under which the operator ${\mathcal{A}}$ defined in \eqref{inteq} is a contraction mapping on $Z_{\alpha,T}$, yielding the existence and uniqueness of a mild solution to \eqref{CpVBL2}.
\begin{lemma}
\label{lemmild}
Let $s > 3/2$ and $\phi \in X_s$. Then there exist $T > 0$ and a unique mild solution $u \in C([0,T];X_s)$ to the Cauchy problem \eqref{CpVBL2} (that is, to the integral equation \eqref{inteq}). Moreover, the data-solution map $\phi \mapsto u$ is continuous.
\end{lemma}
\begin{proof}
First, we verify that if $u \in Z_{\alpha,T}$ then ${\mathcal{A}} u \in C([0,T];X_s)$. Indeed, for all $0 < t_1 < t_2 < T$ we have
\begin{equation}
\label{difA}
\begin{aligned}
\| {\mathcal{A}} u(t_1) - {\mathcal{A}} u(t_2) \|_s &\leq \| (\mathcal{V}(t_1) - \mathcal{V}(t_2) ) \phi \|_s + \int_0^{t_1} \| (\mathcal{V}(t_1-\tau) - \mathcal{V}(t_2-\tau)) F(u,u_x) \|_s \, d\tau\\
& \quad + \int_{t_1}^{t_2} \| \mathcal{V}(t_2-\tau) F(u,u_x) \|_s \, d\tau.
\end{aligned}
\end{equation}
Since $\mathcal{V}(t)$ is a $C_0$-semigroup, $\| (\mathcal{V}(t_1) - \mathcal{V}(t_2) ) \phi \|_s\to 0$ as $t_2\to t_1$. In order to control the second term in \eqref{difA}, we apply inequality \eqref{regineq} with $\delta = 1$, $r = s-1$, $C = K_1 > 0$, and estimate \eqref{E2} (inasmuch as $Z_{\alpha,T} \subset \overline{B_M}$ with $M = \alpha + \| \phi \|_s$); this yields,
\[
\begin{aligned}
\| (\mathcal{V}(t_1-\tau) - \mathcal{V}(t_2-\tau)) F(u,u_x) \|_s \leq 2C \Big[ 1 + \frac{1}{2(t_1-\tau)} \Big]^{1/2} \!\! \sup_{t \in [0,T]} L_s(\| u(t) \|_s,0) \| u(t) \|_s,
\end{aligned}
\]
for all $0 < \tau \leq t_1$. The function on the right-hand side of the last inequality is integrable in $\tau \in (0,t_1)$. Therefore, by the Dominated Convergence Theorem,
\[
\lim_{t_2 \to t_1} \int_0^{t_1} \| (\mathcal{V}(t_1-\tau) - \mathcal{V}(t_2-\tau)) F(u,u_x) \|_s \, d\tau = 0.
\]
Analogously, for the second integral in \eqref{difA} we have the estimate
\[
\|\mathcal{V}(t_2-\tau) F(u,u_x) \|_s \leq C \Big[ 1 + \frac{1}{2(t_2-\tau)} \Big]^{1/2} L_s(\| u(\tau) \|_s,0) \| u(\tau) \|_s,
\]
for all $\tau \in (t_1,t_2)$. Clearly, since $u \in Z_{\alpha,T}$, we have $\| u(\tau)\|_s \leq \alpha + \| \phi \|_s$ and therefore
\[
\begin{aligned}
\int_{t_1}^{t_2} \| \mathcal{V}(t_2-\tau) F(u,u_x) \|_s \, d\tau \leq C L_s(\alpha + \| \phi \|_s,0) ( \alpha + \| \phi \|_s) \big( t_2 - t_1 + \sqrt{2(t_2-t_1)}\big) \to 0,
\end{aligned}
\]
as $t_2 \to t_1$. This shows that ${\mathcal{A}} u(t) \in X_s$ for all $t \in [0,T]$ and that ${\mathcal{A}} u \in C([0,T];X_s)$.
Next, we choose $T> 0$ small enough such that ${\mathcal{A}}(Z_{\alpha,T}) \subset Z_{\alpha,T}$ and that ${\mathcal{A}}$ is a contractive mapping. First, note that since $\mathcal{V}(t)$ is a $C_0$-semigroup we can choose $T_1 > 0$ such that $\| \mathcal{V}(t) \phi - \phi \|_s < \alpha/2$ for all $t \in [0,T_1]$. Now, if $u \in Z_{\alpha,T}$ then we have the estimate (see \eqref{E2}),
\[
\begin{aligned}
\Big\| \int_0^t \mathcal{V}(t - \tau) F(u, u_x) \, d\tau \Big\|_s &\leq \int_0^t \| \mathcal{V}(t-\tau) F(u,u_x) \|_s \, d\tau \\
&\leq C L_s(\alpha + \| \phi \|_s,0) ( \alpha + \| \phi \|_s) (T + \sqrt{2 T}) \\
&< \tfrac{1}{2} \alpha,
\end{aligned}
\]
provided that we choose $T < T_1$ small enough. This shows that ${\mathcal{A}}(Z_{\alpha,T}) \subset Z_{\alpha,T}$. Finally, in order to show that ${\mathcal{A}}$ is a contraction for some (possibly smaller) $T > 0$, let $u,v \in Z_{\alpha,T}$. Similar arguments yield the estimate
\[
\begin{aligned}
\| {\mathcal{A}} u(t) - {\mathcal{A}} v(t) \|_s &\leq \int_0^t \| \mathcal{V}(t-\tau)(F(u,u_x) - F(v,v_x)) \|_s \, d\tau \\
&\leq C \int_0^t \Big[ 1 + \frac{1}{2(t - \tau)} \Big]^{1/2} \| F(u,u_x) - F(v, v_x) \|_{s-1} \, d\tau \\
&\leq C L_s( \alpha + \| \phi \|_s, \alpha + \| \phi \|_s ) (T + \sqrt{2 T}) \sup_{t \in [0,T]} \| u(t) - v(t) \|_s\\
&< \tfrac{1}{2} \| u - v \|_{C([0,T];X_s)},
\end{aligned}
\]
where for
\begin{equation}
\label{C}
C_\phi := C L_s( \alpha + \| \phi \|_s, \alpha + \| \phi \|_s ) > 0,
\end{equation}
we choose $T$ sufficiently small such that
\begin{equation}
\label{laii}
C_\phi (T + \sqrt{2 T}) < \frac{1}{2}.
\end{equation}
Notice that $T$ depends on $\| \phi \|_s$. Hence, we conclude that there exists a small ${T} ={T}(\| \phi \|_s) > 0$ such that ${\mathcal{A}}(Z_{\alpha,T}) \subset Z_{\alpha,T}$ and ${\mathcal{A}}$ is a contraction on $Z_{\alpha,T}$. By Banach's fixed point theorem, there exists a unique fixed point $u \in Z_{\alpha,T}$ of ${\mathcal{A}}$ that solves \eqref{inteq}.
Finally, to show the continuity of the data-solution map let $u$ and $v$ in $C([0,T];X_s)$ be the solutions to the Cauchy problem with initial data $u(0) = \phi$ and $v(0) = \psi$, respectively. Then, using the regularity estimate \eqref{regineq} it is easy to show that
\[
\begin{aligned}
\| u(t) - v(t) \|_s &\leq \| \mathcal{V}(t) \phi - \mathcal{V}(t) \psi \|_s + \int_0^t \| \mathcal{V}(t-\tau) (F(u,u_x) - F(v,v_x)) \|_s \, d\tau\\
&\leq \| \phi - \psi \|_s + (T + \sqrt{2T}) L_s(M_s,M_s) \int_0^t \| u(\tau) - v(\tau) \|_s \, d\tau,
\end{aligned}
\]
with $M_s := \max \, \big\{ \sup_{t \in [0,T]} \| u(t) \|_s, \sup_{t \in [0,T]} \| v(t) \|_s \big\}$. Gronwall's inequality then yields
\[
\| u(t) - v(t) \|_s \leq C_{s,T} \| \phi - \psi \|_s,
\]
for all $t \in [0,T]$ with a constant $C_{s,T} > 0$ depending only on $s$ and $T$. The lemma is proved.
\end{proof}
It remains to verify that the unique solution from Lemma \ref{lemmild} is, in fact, a strong solution to the Cauchy problem \eqref{CpVBL}.
\begin{lemma}
\label{lemstrong}
Under the assumptions of Lemma \ref{lemmild}, the unique mild solution $u \in C([0,T];X_s)$ to \eqref{inteq} satisfies $u \in C^1([0,T];H_\mathrm{\tiny{per}}^{s-2}([0,L]))$ and, therefore, it is a strong solution to the Cauchy problem \eqref{CpVBL}.
\end{lemma}
\begin{proof}
It suffices to show that
\[
\lim_{h \to 0} \| h^{-1} (u(t+h) - u(t)) - \partial_x^2 u - F(u, u_x) \|_{s-2} = 0.
\]
To that end, write
\begin{equation}
\label{ABC}
\begin{aligned}
h^{-1} (u(t+h) &- u(t)) - \partial_x^2 u - F(u, u_x) = h^{-1} (\mathcal{V}(t+h) - \mathcal{V}(t)) \phi - \partial_x^2 (\mathcal{V}(t) \phi) \\
&+ h^{-1} \int_0^t (\mathcal{V}(t+h-\tau) - \mathcal{V}(t-\tau)) F(u, u_x)(\tau) \, d \tau - \partial^2_x \int_0^t \mathcal{V}(t-\tau) F(u, u_x)(\tau) \, d\tau \\
&+ h^{-1} \int_t^{t+h} \mathcal{V}(t+h-\tau) F(u,u_x)(\tau) \, d \tau - F(u, u_x)(t).
\end{aligned}
\end{equation}
First, note that Corollary \ref{corheat} immediately implies that
\[
\lim_{h \to 0} \| h^{-1} (\mathcal{V}(t+h) - \mathcal{V}(t)) \phi - \partial_x^2 (\mathcal{V}(t) \phi) \|_{s-2} = 0.
\]
The $\| \cdot \|_{s-2}$-norm of the last term in \eqref{ABC} is clearly bounded above by
\[
h^{-1}\int_t^{t+h} R(\tau) \, d \tau,
\]
where $R(\tau) := \| \mathcal{V}(t+h-\tau) F(u,u_x)(\tau) - F(u, u_x)(t) \|_{s-2}$. Since $R$ is a continuous function of $\tau \in [t, t + h]$, by the mean value theorem for integrals there exists some $\vartheta \in (t, t + h)$ for which
\[
R(\vartheta) = h^{-1}\int_t^{t+h} R(\tau) \, d \tau.
\]
Since $\vartheta \to t$ as $h \to 0$, by continuity of the semigroup we have,
\[
\lim_{h \to 0} R(\vartheta) = \lim_{h \to 0} \| \mathcal{V}(t+h-\vartheta) F(u,u_x)(\vartheta) - F(u, u_x)(t) \|_{s-2} = 0.
\]
This yields
\[
0 \leq \lim_{h \to 0} \Big\| h^{-1} \int_t^{t+h} \mathcal{V}(t+h-\tau) F(u,u_x)(\tau) \, d \tau - F(u, u_x)(t) \Big\|_{s-2} \leq \lim_{h \to 0} R(\vartheta) = 0.
\]
Finally, apply \eqref{regineq}, \eqref{unifcota} and \eqref{E2} to observe that, for all $ 0 < \tau < t$ and all $|h|$ small, there holds the estimate
\[
\begin{aligned}
\big\| h^{-1} (\mathcal{V}(t+h &-\tau) - \mathcal{V}(t-\tau)) F(u, u_x)(\tau) - \partial^2_x \big( \mathcal{V}(t-\tau) F(u, u_x)(\tau)\big) \big\|_{s-2} \\
&\leq \| \mathcal{V}(t-\tau)( h^{-1} (\mathcal{V}(h) - \mathrm{Id}) - \partial^2_x )F(u, u_x)(\tau) \|_{s-2}\\
&\leq C \Big[ 1 + \frac{1}{2(t-\tau)}\Big]^{1/2} \big\| \big( h^{-1} (\mathcal{V}(h) - \mathrm{Id}) - \partial^2_x \big) F(u, u_x)(\tau) \big\|_{s-3}\\
&\leq C \overline{C} \Big[ 1 + \frac{1}{2(t-\tau)}\Big]^{1/2} \| F(u, u_x)(\tau) \|_{s-1}\\
&\leq C \overline{C} L_s\big(\sup_{\tau \in(0,t)} \| u(\tau) \|_s,0\big) \sup_{\tau \in (0,t)} \| u(\tau) \|_s \Big[ 1 + \frac{1}{2(t-\tau)}\Big]^{1/2}.
\end{aligned}
\]
Once again, the right-hand side of the last inequality is integrable in $\tau \in (0,t)$. Corollary \ref{corheat} then yields
\[
\big\| (h^{-1}(\mathcal{V}(h) - \mathrm{Id}) - \partial^2_x )F(u, u_x)(\tau) \big\|_{s-2} \to 0,
\]
as $h \to 0$, uniformly in $\tau \in (0,t)$. Thus, by the Dominated Convergence Theorem, we conclude that
\[
\lim_{h \to 0} \Big\| h^{-1} \int_0^t (\mathcal{V}(t+h-\tau) - \mathcal{V}(t-\tau)) F(u, u_x)(\tau) \, d \tau - \partial^2_x \int_0^t \mathcal{V}(t-\tau) F(u, u_x)(\tau) \, d\tau \Big\|_{s-2} = 0.
\]
This shows that $u \in C^1([0,T];H_\mathrm{\tiny{per}}^{s-2}([0,L]))$ and the lemma is proved.
\end{proof}
\subsection{Smoothness of the data-solution map}
\label{secsmooth}
Let $B$ be the ball $B=B_\varepsilon(\phi) = \{ u \in X_s \, : \, \| u - \phi \|_s < \varepsilon \}$ with $\varepsilon > 0$. Define the map
\begin{equation}
\label{defGamma}
\begin{aligned}
\Gamma &: B \times C([0,T];X_s) \to C([0,T];X_s),\\
\Gamma (\psi, w)(t) &:= w(t) - \mathcal{V}(t) \psi - \int_0^t \mathcal{V}(t-\tau) F(w,w_x) \, d\tau.
\end{aligned}
\end{equation}
For any given $\phi \in X_s$, $s>3/2$, let us denote by $u_\phi \in C([0,T]; X_s)$,
\begin{equation}
\label{uphi}
u_\phi(t) = \mathcal{V}(t) \phi + \int_0^t \mathcal{V}(t-\tau) F(u_\phi,\partial_x u_\phi ) \, d\tau,
\end{equation}
the unique solution to the Cauchy problem \eqref{CpVBL} with $u_\phi(0)=\phi$. Then, clearly,
\[
\Gamma(\phi, u_\phi)(t) = 0,
\]
for all $t \in [0,T]$.
At this point we need to impose further regularity on the functions $f$ and $g$ to guarantee twice Fr\'echet differentiability of the mapping $\Gamma$ in a neighborhood of $(\phi, u_\phi)$.
\begin{lemma}
\label{lemFrechdiff}
Let $f \in C^4(\mathbb{R})$, $g \in C^3(\mathbb{R})$ and $s > 3/2$. Then the map $\Gamma : X_s \times C([0,T];X_s) \to C([0,T];X_s)$ defined in \eqref{defGamma} is twice Fr\'echet differentiable in an open neighborhood $B_\varepsilon(\phi) \times B_\delta(u_\phi)$ of $(\phi, u_\phi)$.
\end{lemma}
\begin{proof}
Follows directly from the regularity of $F(u,u_x) = g(u) - f'(u) u_x$, the definition of the mapping $\Gamma$ and standard properties of the contractive semigroup $\mathcal{V}(t)$. (Recall that the existence of continuous G\^ateaux derivatives in open neighborhoods yields Fr\'echet differentiability; see \cite{ZeidI86}, \S 4.2, Proposition 4.8.) We omit the details.
\end{proof}
\begin{lemma}
\label{leminvert}
Suppose that $f \in C^4(\mathbb{R})$, $g \in C^3(\mathbb{R})$. Let $\phi\in X_s$, $s>3/2$, and consider $u_\phi \in C([0,T]; X_s)$, $T>0$, the unique strong solution to \eqref{CpVBL} given by Lemma \ref{lemstrong}. Then, the operator
\begin{equation}
\label{derivwGamma}
\begin{aligned}
\partial_w \Gamma (\phi, u_\phi) &: C([0,T];X_s) \to C([0,T];X_s),\\
\partial_w \Gamma (\phi, u_\phi) w(t) &= w(t) - \int_0^t \mathcal{V}(t-\tau) \Big[ \big(g'(u_\phi) - f''(u_\phi) \partial_x u_\phi\big) w - f'(u_\phi) w_x \Big] \, d\tau,
\end{aligned}
\end{equation}
is one to one and onto. Moreover, the data-solution map associated to \eqref{CpVBL},
\begin{equation}
\label{datasolmap}
\begin{aligned}
\Upsilon &: X_s \to C([0,T];X_s),\\
\phi &\mapsto \Upsilon(\phi) = u_\phi,
\end{aligned}
\end{equation}
is of class $C^2$.
\end{lemma}
\begin{proof}
First, let us verify the formula for the operator $\partial_w \Gamma (\phi, u_\phi)$ by computing $\lim_{h \to 0} h^{-1} \Gamma(\phi, u_\phi + hw)$ for any $w \in C([0,T],X_s)$ (since $\Gamma(\phi, u_\phi) = 0$, this limit is precisely the G\^ateaux derivative of $\Gamma$ in the direction $w$). From the definition of $F = F(u,u_x)$ we have, by Taylor expansion,
\[
F(u_\phi + hw, \partial_x u_\phi + hw_x) = F(u_\phi, \partial_x u_\phi) + h \big[ \big( g'(u_\phi) - f''(u_\phi) \partial_x u_\phi\big) w - f'(u_\phi) w_x \big] + O(h^2).
\]
Hence,
\[
h^{-1} \Gamma(\phi, u_\phi + hw)(t) = w(t) - \int_0^t \mathcal{V}(t-\tau) \Big[ \big(g'(u_\phi) - f''(u_\phi) \partial_x u_\phi\big) w - f'(u_\phi) w_x \Big] \, d\tau + O(h),
\]
in view of \eqref{uphi}. This yields \eqref{derivwGamma} when $h \to 0$.
Now, let us apply the regularity inequality \eqref{regineq} to estimate, for any $t \in [0,T]$ and all $w \in C([0,T],X_s)$,
\[
\begin{aligned}
\| \partial_w \Gamma (\phi, u_\phi) w(t) - w(t) \|_s &\leq \int_0^t \Big\| \mathcal{V}(t-\tau) \Big[ \big(g'(u_\phi) - f''(u_\phi) \partial_x u_\phi\big) w - f'(u_\phi) w_x \Big] \Big\|_s \, d \tau \\
&\leq C \int_0^t \Big[ 1 + \frac{1}{2(t - \tau)} \Big]^{1/2} \| g'(u_\phi) w - (f'(u_\phi)w)_x \|_{s-1} \, d \tau.
\end{aligned}
\]
In view of estimates \eqref{Lipest}, the fact that $H^s_\mathrm{\tiny{per}}$ is a Banach algebra for any $s > 1/2$, and the inclusion ${\mathcal{A}}(Z_{\alpha,T}) \subset Z_{\alpha,T}$ from the fixed point argument of Lemma \ref{lemmild} (in other words, $\| u_\phi(t) \|_s \leq \alpha + \| \phi \|_s$ for all $t \in [0,T]$), we obtain the following estimate for all $0 < \tau < t$,
\[
\begin{aligned}
\| g'(u_\phi) w - (f'(u_\phi)w)_x \|_{s-1} &\leq C_s \| g'(u_\phi) \|_{s-1} \| w \|_{s-1} + \| (f'(u_\phi)w)_x \|_{s-1}\\
&\leq C_s [ L_g \| u_\phi \|_{s-1} + |g'(0)| ] \| w \|_s + C_s [L_f \| u_\phi \|_s + |f'(0)|] \| w \|_s \\
&\leq C_s \big[ (L_f + L_g) (\alpha + \| \phi \|_s) + |f'(0)| + |g'(0)| \big] \sup_{\tau \in (0,T)} \| w(\tau) \|_s\\
&= L_s(\alpha + \| \phi \|_s, 0) \sup_{\tau \in (0,T)} \| w(\tau) \|_s\\
&\leq L_s(\alpha + \| \phi \|_s, \alpha + \| \phi \|_s) \sup_{\tau \in (0,T)} \| w(\tau) \|_s.
\end{aligned}
\]
This yields,
\[
\begin{aligned}
\| \partial_w \Gamma (\phi, u_\phi) w(t) - w(t) \|_s &\leq C L_s(\alpha + \| \phi \|_s, \alpha + \| \phi \|_s) (T + \sqrt{2T}) \sup_{\tau \in (0,T)} \| w(\tau) \|_s\\
&= C_\phi (T + \sqrt{2T}) \| w \|_{C([0,T];X_s)},
\end{aligned}
\]
for the same constant $C_\phi > 0$ given in \eqref{C}. Since $T$ satisfies \eqref{laii} we conclude that
\[
\| (\partial_w \Gamma (\phi, u_\phi) - \mathrm{Id})w \|_{C([0,T];X_s)} < \tfrac{1}{2} \| w \|_{C([0,T];X_s)},
\]
for all $w \in C([0,T],X_s)$. In other words, in the operator norm there holds
\[
\| \partial_w \Gamma (\phi, u_\phi) - \mathrm{Id} \| < 1.
\]
This proves that $\partial_w \Gamma (\phi, u_\phi)$ is invertible on $C([0,T];X_s)$.
In view of Lemma \ref{lemFrechdiff}, we now apply the Implicit Function Theorem in Banach spaces (cf. \cite{ZeidI86}, \S 4.7) to conclude the existence of a neighborhood $\widetilde{B} \subset B$ of $\phi$ and a $C^2$-mapping
\begin{equation}
\label{datasolmapIFT}
\Upsilon : \widetilde{B} \to C([0,T];X_s),
\end{equation}
such that $\Gamma (w, \Upsilon(w)) = 0$ for all $w \in \widetilde{B}$. By \eqref{defGamma}, the mapping $\Upsilon$ is clearly the data-solution map inasmuch as $\phi \mapsto \Upsilon(\phi) = u_\phi$. The conclusion follows.
\end{proof}
\subsection{Proof of Theorem \ref{teolocale}} Assuming $f \in C^2(\mathbb{R})$, $g \in C^1(\mathbb{R})$, the first assertion follows immediately upon application of Lemmata \ref{lemmild} and \ref{lemstrong}. If we suppose more regularity, $f \in C^4(\mathbb{R})$ and $g \in C^3(\mathbb{R})$, then the hypotheses of Lemmata \ref{lemFrechdiff} and \ref{leminvert} are satisfied and the data-solution map, $\phi \mapsto \Upsilon(\phi)$, is of class $C^2$. The theorem is proved.
\qed
\section{Orbital instability criterion}
\label{secmain}
This section is devoted to prove Theorem \ref{mainthem}.
\subsection{An abstract result}
The following theorem provides the link to obtain nonlinear (orbital) instability from spectral instability.
\begin{theorem}[Henry \emph{et al.} \cite{HPW82}]
\label{teohenry}
Let $Y$ be a Banach space and $\Omega \subset Y$ an open subset such that $0 \in \Omega$. Assume that there exists a map ${\mathcal{M}} : \Omega \to Y$ such that ${\mathcal{M}}(0) = 0$ and, for some $p > 1$ and some continuous linear operator ${\mathcal{L}}$ with spectral radius $r({\mathcal{L}}) > 1$, there holds
\[
\| {\mathcal{M}}(y) - {\mathcal{L}} y \|_Y = O(\| y \|_Y^p)
\]
as $y \to 0$. Then $0$ is unstable as a fixed point of ${\mathcal{M}}$. More precisely, there exists $\varepsilon_0 > 0$ such that for all $B_\eta(0) \subset Y$ and arbitrarily large $N_0 \in \mathbb{N}$ there is $n \geq N_0$ and $y \in B_\eta(0)$ such that $\| {\mathcal{M}}^n(y) \|_Y \geq \varepsilon_0$.
\end{theorem}
\begin{proof}
See Theorem 2 in \cite{HPW82} (see also Theorem 5.1.5 in \cite{He81}).
\end{proof}
\begin{remark}
\label{remHenry}
The statement in Theorem \ref{teohenry} establishes the instability of $0$ as a fixed point of ${\mathcal{M}}$; in other words, it shows the existence of points moving away from $0$ under successive applications of ${\mathcal{M}}$. In the Remark after Theorem 2 in \cite{HPW82}, the following extension of Theorem 2 is obtained: if $\Gamma_0$ is a $C^1$-curve of fixed points of ${\mathcal{M}}$ with $0\in \Gamma_0$ then $\Gamma_0$ is unstable, in other words, the points $\{{\mathcal{M}}^n(y), n \geq 0\}$ not only move away from $0$, but also from $\Gamma_0$.
\end{remark}
Theorem \ref{teohenry} can be recast in a more suitable form for applications to nonlinear wave instability (see also \cite{AngNat14,AngNat16}).
\begin{corollary}
\label{corhenry}
Let ${\mathcal{S}} : \Omega \subset Y \to Y$ be a $C^2$ map defined on an open neighborhood of a fixed point $\varphi$ of ${\mathcal{S}}$. If there is an element $\mu \in \sigma({\mathcal{S}}'(\varphi))$ with $|\mu| > 1$ then $\varphi$ is unstable as a fixed point of ${\mathcal{S}}$. Moreover, if $ \Gamma$ is a $C^1$-curve of fixed points of ${\mathcal{S}}$ with $\varphi \in \Gamma$ then $\Gamma$ is unstable.
\end{corollary}
\begin{proof}
Define the open set $\widetilde{\Omega} = \{ y - \varphi \, : \, y \in B \} \subset Y$, where $B = B_\delta(\varphi)$ is an open ball with radius $\delta > 0$, and consider the mapping ${\mathcal{M}} : \widetilde{\Omega} \to Y$, ${\mathcal{M}}(x) := {\mathcal{S}}(x+\varphi) - \varphi$. Then, clearly, ${\mathcal{M}}(0) = 0$ and ${\mathcal{M}}$ is of class $C^2$ in $\widetilde{\Omega}$. Define ${\mathcal{Z}} := {\mathcal{S}}'(\varphi)$. Then, by hypothesis, there exists an eigenvalue $\mu \in \sigma({\mathcal{Z}})$ with $1 < |\mu| \leq r({\mathcal{Z}})$. By Taylor's formula,
\[
{\mathcal{M}}(x) = {\mathcal{M}}(0) + {\mathcal{M}}'(0) x + O(\|x\|_Y^2) = {\mathcal{Z}} x + O(\|x\|_Y^2),
\]
provided that $\| x \|_Y \ll 1$. Apply Theorem \ref{teohenry} to deduce the existence of $\varepsilon_0 > 0$ such that, for any ball $B_\eta(\varphi)$, with radius $\eta > 0$ and arbitrarily large $N_0 \in \mathbb{N}$, there exists $n \geq N_0$ and $y \in B_\eta(\varphi)$ such that $\| {\mathcal{S}}^n(y) - \varphi \|_Y \geq \varepsilon_0$. This completes the proof.
\end{proof}
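\begin{remark}
The mechanism behind Theorem \ref{teohenry} and Corollary \ref{corhenry} can be visualized with an elementary finite-dimensional example (included only as an illustration; the matrix, the quadratic nonlinearity and the radii below are arbitrary choices). For a map ${\mathcal{M}}(y) = {\mathcal{L}} y + O(\| y \|^2)$ in $\mathbb{R}^2$ with $r({\mathcal{L}}) > 1$, iterates starting arbitrarily close to the fixed point $0$ eventually leave a fixed ball:
\begin{verbatim}
import numpy as np

Lmat = np.array([[1.3, 0.0],      # spectral radius r(L) = 1.3 > 1
                 [0.0, 0.5]])
M = lambda y: Lmat @ y + 0.1*np.array([y[0]**2, y[0]*y[1]])   # M(y) = L y + O(|y|^2)

eps0 = 0.5                        # fixed escape radius
for eta in (1e-2, 1e-4, 1e-6):    # arbitrarily small initial distances from 0
    y, n = eta*np.ones(2), 0
    while np.linalg.norm(y) < eps0 and n < 100_000:
        y, n = M(y), n + 1
    print(f"|y_0| ~ {eta:.0e}: leaves the ball of radius {eps0} after n = {n} iterations")
\end{verbatim}
\end{remark}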
\subsection{The mapping ${\mathcal{S}}$}
Before proving our main result, Theorem \ref{mainthem}, we need to specify the particular mapping ${\mathcal{S}}$ (in the context of Corollary \ref{corhenry}) suitable for our needs. We start by making a couple of observations.
First notice that, if we denote the unique solution to the Cauchy problem \eqref{CpVBL} with initial datum $\varphi \in X_2 = H^2_\mathrm{\tiny{per}}$ as $u_\varphi = \Upsilon(\varphi) \in C([0,T];X_2)$ (where $\varphi = \varphi(\cdot)$ is the $L$-periodic $C^2$ profile function), then for a.e. $x \in [0,L]$ there holds $u_\varphi(t)(x) = \varphi(x-ct)$, or, in other words,
\begin{equation}
\label{solprofil}
u_\varphi(t) = \varphi(\cdot - ct) = \zeta_{-ct}(\varphi) \in X_2,
\end{equation}
where $\zeta_\eta$ is the translation operator in $X_2$ for any $\eta \in \mathbb{R}$. This follows by direct differentiation and by the profile equation \eqref{profileq}.
Our second observation is the content of the following
\begin{lemma}[global well-posedness of the linearized problem]
\label{lemglobalwp}
Let $f \in C^4(\mathbb{R})$, $g \in C^3(\mathbb{R})$. Then for every $\phi \in X_2 = H^2_\mathrm{\tiny{per}}([0,L])$ and all $T > 0$ there exists a unique solution $v_\phi \in C([0,T];X_2) \cap C^1([0,T]; L^2_\mathrm{\tiny{per}}([0,L]))$ to the Cauchy problem for the linearized operator around the periodic traveling wave $\varphi$, namely,
\begin{equation}
\label{Cplin}
\begin{aligned}
v_t &= {\mathcal{L}}_0^c v,\\
v(0) &= \phi,
\end{aligned}
\end{equation}
where
\begin{equation}\label{linear}
{\mathcal{L}}_0^c v = v_{xx} + g'(\varphi) v - (f'(\varphi)v)_x + cv_x.
\end{equation}
\end{lemma}
\begin{proof}
The proof is similar to that of the nonlinear well-posedness result in periodic Sobolev spaces of section \ref{secwellpos}. The fact that the solution is now global is a consequence of the well-posedness and regularity theory for linear parabolic problems (see \cite{TayPDE3-2e}). We omit the details.
\end{proof}
\begin{remark}
Recall that ${\mathcal{L}}_0^c$ denotes the linearized operator around the wave defined on the periodic Lebesgue space $L^2_\mathrm{\tiny{per}}([0,L])$ (with Bloch parameter, or Floquet exponent, $\theta = 0$, see \eqref{Blochop}). The operator is defined in terms of the traveling wave profile $\varphi \in X_2$, its fundamental period $L$ and its speed, $c$.
\end{remark}
Let us now define a mapping which plays the role of the operator ${\mathcal{S}}$ in the abstract Corollary \ref{corhenry}. For each $\phi \in X_2 $, set
\begin{equation}
\label{defS}
\begin{aligned}
{\mathcal{S}} &: X_2 \to X_2,\\
{\mathcal{S}}(\phi) &:= \zeta_{cT} (u_\phi(T))
\end{aligned}
\end{equation}
where $u_\phi = \Upsilon(\phi)$ denotes the unique solution to the Cauchy problem \eqref{CpVBL} with $u_\phi(0)=\phi$, $u_\phi \in C([0,{T}];X_2)$. Recall that $u_\phi$ is given by the variation of constants formula \eqref{uphi}.
\begin{lemma}[properties of ${\mathcal{S}}$]
\label{propS}
Let $\varphi$ be a periodic profile for equation \eqref{VBL}. The mapping ${\mathcal{S}}$ defined in \eqref{defS} satisfies:
\begin{itemize}
\item[(a)] ${\mathcal{S}}(\varphi) = \varphi \in X_2$.
\item[(b)] ${\mathcal{S}}$ is twice Fr\'echet differentiable in an open neighborhood of $\varphi$.
\item[(c)] For every $\psi \in X_2$ there holds
\begin{equation}
\label{devS}
{\mathcal{S}}'(\varphi) \psi = v_\psi(T),
\end{equation}
where $v_\psi (t) \in X_2$ denotes the unique solution to the linear Cauchy problem \eqref{Cplin} with initial datum $v_\psi (0) = \psi$.
\end{itemize}
\end{lemma}
\begin{proof}
First, notice that ${\mathcal{S}}(\varphi) = \zeta_{cT}(u_{\varphi}(T)) = \zeta_{cT} (\zeta_{-cT}(\varphi)) = \varphi$ in view of \eqref{solprofil}. That is, $\varphi \in X_2$ is a fixed point of ${\mathcal{S}}$, showing (a). Now, from Theorem \ref{teolocale} we know that the data-solution map $\phi \mapsto \Upsilon(\phi) = u_\phi$ is of class $C^2$. Also, the translation operator is of class $C^2$ in $H^2_\mathrm{\tiny{per}}$ ($C^\infty$ indeed). Hence, the composition is of class $C^2$ and we conclude that ${\mathcal{S}}$ is twice Fr\'echet differentiable in an open neighborhood, $\Omega = \{ \phi \in X_2 \, : \, \| \phi - \varphi \|_2 < \eta \}$, of $\varphi$. This proves (b).
Therefore, we obtain the Fr\'echet derivative by computing the (G\^ateaux derivative) operator,
\[
{\mathcal{S}}'(\varphi) \psi = \frac{d}{d\varepsilon} \big( {\mathcal{S}}(\varphi + \varepsilon \psi) \big)_{|\varepsilon = 0},
\]
for any arbitrary $\psi \in X_2$. First observe that, by definition, ${\mathcal{S}}(\varphi + \varepsilon \psi) = \zeta_{cT} \big( u_{\varphi + \varepsilon \psi} (T) \big) = \zeta_{cT} \big( \Upsilon(\varphi + \varepsilon \psi) (T) \big)$. Since $\Upsilon$ is of class $C^2$ around $\varphi$ we make the expansion
\begin{equation}
\label{exp1}
u_{\varphi + \varepsilon \psi} = \Upsilon (\varphi + \varepsilon \psi) = \Upsilon(\varphi) + \varepsilon \Upsilon'(\varphi) \psi + O(\varepsilon^2).
\end{equation}
From formula \eqref{uphi} we know that
\[
\begin{aligned}
u_{\varphi + \varepsilon \psi}(t) &= \mathcal{V}(t) (\varphi + \varepsilon \psi) + \int_0^t \mathcal{V}(t-\tau) F(u_{\varphi + \varepsilon \psi}, \partial_x u_{\varphi + \varepsilon \psi})(\tau) \, d\tau \\
&= \mathcal{V}(t) (\varphi + \varepsilon \psi) + \int_0^t \mathcal{V}(t-\tau) \big[ g(u_{\varphi + \varepsilon \psi}) - f'(u_{\varphi + \varepsilon \psi}) \partial_x u_{\varphi + \varepsilon \psi} \big] \, d\tau.
\end{aligned}
\]
Substituting \eqref{exp1} and recalling $\Upsilon(\varphi) = u_\varphi$ we arrive at the expansions,
\[
\begin{aligned}
g(u_{\varphi + \varepsilon \psi}) &= g(u_\varphi) + \varepsilon g'(u_\varphi) \Upsilon'(\varphi) \psi + O(\varepsilon^2),\\
f'(u_{\varphi + \varepsilon \psi}) \partial_x \Upsilon (\varphi + \varepsilon \psi) &= f'(u_\varphi) \partial_x u_\varphi + \varepsilon \partial_x \big( f'(u_\varphi) \Upsilon'(\varphi) \psi \big) + O(\varepsilon^2).
\end{aligned}
\]
Substitution into the previous integral formula yields
\[
\begin{aligned}
u_{\varphi + \varepsilon \psi}(t) = \mathcal{V}(t) \varphi + \int_0^t \mathcal{V}(t-\tau) \big[ g(u_\varphi) - f'(u_\varphi) \partial_x u_\varphi \big] \, d\tau + \varepsilon V_{\varphi,\psi}(t) + O(\varepsilon^2),
\end{aligned}
\]
where
\[
V_{\varphi,\psi}(t) :=
\mathcal{V}(t) \psi + \int_0^t \mathcal{V}(t-\tau) \big[ g'(u_\varphi) \Upsilon'(\varphi) \psi - \partial_x \big( f'(u_\varphi) \Upsilon'(\varphi) \psi \big) \big] \, d\tau.
\]
Upon differentiation, we notice that
\[
\frac{d}{d\varepsilon} \big( u_{\varphi + \varepsilon \psi}(t) \big)_{|\varepsilon = 0} = \frac{d}{d\varepsilon} \big( \Upsilon(\varphi)(t) + \varepsilon (\Upsilon'(\varphi) \psi)(t) + O(\varepsilon^2) \big)_{|\varepsilon = 0} = (\Upsilon'(\varphi) \psi)(t),
\]
and therefore
\[
V_{\varphi,\psi}(t) = (\Upsilon'(\varphi) \psi)(t) \in X_2,
\]
for all $t \in [0, {T}]$. Then we have shown that $V_{\varphi,\psi}$ is a solution to the integral equation
\begin{equation}
\label{exprV}
V_{\varphi,\psi}(t) = \mathcal{V}(t) \psi + \int_0^t \mathcal{V}(t-\tau) \big[ g'(u_\varphi) V_{\varphi,\psi}(\tau) - \partial_x \big( f'(u_\varphi) V_{\varphi,\psi}(\tau) \big) \big] \, d\tau,
\end{equation}
for all $t \in [0, {T}]$. From formula \eqref{exprV} we recognize that $V_{\varphi,\psi}(0) = \psi$ and that $V_{\varphi,\psi}$ solves the linearization of \eqref{CpVBL} around $u_\varphi$ in the original (unshifted) frame, that is, equation \eqref{Cplin} with $c = 0$ and with coefficients evaluated at $u_\varphi(t) = \varphi(\cdot - ct)$ instead of $\varphi$. We claim that
\begin{equation}
\label{claimV}
v_\psi(t):=\zeta_{ct} \big(V_{\varphi,\psi}(t) \big),\quad t \in [0, {T}],
\end{equation}
is the unique solution to the linearized Cauchy problem \eqref{Cplin} with initial datum $\psi$. Indeed, first notice that $\zeta_0 \big(V_{\varphi,\psi}(0) \big) = V_{\varphi,\psi}(0) = \psi$. Now, for $x \in [0,L]$ let us denote
\[
V(t,x) := \zeta_{ct} \big(V_{\varphi,\psi}(t) \big)(x) = V_{\varphi,\psi}(t)(x+ct) = V_{\varphi,\psi}(t, x + ct).
\]
Hence, from \eqref{exprV} and since ${\mathcal{T}} = \partial_x^2$ is the infinitesimal generator of the semigroup $\mathcal{V}(t)$, we obtain
\[
\begin{aligned}
\partial_t V &= \partial_t V_{\varphi,\psi}(t, x +ct) + c \partial_x V_{\varphi,\psi}(t,x+ct) \\
&= \partial_x^2 V_{\varphi,\psi}(t,x+ct) + g'(u_\varphi(x+ct)) V_{\varphi,\psi}(t,x+ct) \\
&\quad - \partial_x \big( f'(u_{\varphi}(x+ct)) V_{\varphi,\psi}(t,x+ct) \big) + c \partial_x V_{\varphi,\psi}(t,x+ct) \\
&= \partial_x^2 V + g'(\varphi(x)) V - \partial_x \big( f'(\varphi(x)) V \big) + c \partial_x V,
\end{aligned}
\]
because $u_\varphi(\cdot + ct) = \varphi(\cdot -ct + ct) = \varphi(\cdot)$. This shows that $\partial_t V = {\mathcal{L}}_0^c V$, $V(0) = \psi$ and therefore it is a solution to \eqref{Cplin}. By uniqueness of the solution, we obtain \eqref{claimV} for all $t \in [0, {T}]$.
Finally, evaluating at $t = T$ we have
\[
\begin{aligned}
{\mathcal{S}}'(\varphi) \psi &= \frac{d}{d\varepsilon} \Big( \zeta_{cT} (\Upsilon(\varphi) (T)) + \varepsilon \zeta_{cT} ( \Upsilon'(\varphi) \psi (T) ) + O(\varepsilon^2) \Big)_{|\varepsilon = 0} \\
&= \zeta_{cT} \big( \Upsilon'(\varphi) \psi (T) \big)= \zeta_{cT} \big(V_{\varphi,\psi}(T) \big)= v_\psi(T),
\end{aligned}
\]
for any $\psi \in X_2$. This shows (c) and the lemma is proved.
\end{proof}
We are now able to prove the main instability result.
\subsection{Proof of Theorem \ref{mainthem}}
Let us consider the unstable eigenfunction $\Psi \in X_2 = H_\mathrm{\tiny{per}}^2([0,L])$ of the linearized operator ${\mathcal{L}}_0^c \, : \, L^2_\mathrm{\tiny{per}}([0,L]) \to L^2_\mathrm{\tiny{per}}([0,L])$, with ${\mathcal{L}}_0^c \Psi = \lambda \Psi$ and $\Re \lambda > 0$, as the initial condition for the linear Cauchy problem \eqref{Cplin}. By Lemma \ref{lemglobalwp} there exists a unique solution $v_\Psi \in C([0,T];X_2) \cap C^1([0,T];L^2_\mathrm{\tiny{per}}([0,L]))$ with $v_\Psi(0)=\Psi$. On the other hand, if we define $U(t) = e^{\lambda t} \Psi \in X_2$ for all $t \geq 0$ then, clearly, $U \in C([0,T];X_2) \cap C^1([0,T]; L^2_\mathrm{\tiny{per}}([0,L]))$, $U(0) = \Psi$ and
\[
\partial_t U = \lambda e^{\lambda t} \Psi = e^{\lambda t} {\mathcal{L}}_0^c \Psi = {\mathcal{L}}_0^c \big( e^{\lambda t} \Psi \big) = {\mathcal{L}}_0^c U.
\]
Hence, $U$ is a solution to the Cauchy problem \eqref{Cplin} with $U(0) = \Psi$. By uniqueness of the solution we obtain $U(t) = v_\Psi (t)$ in $X_2$ for all $t \geq 0$. Now, define $\mu := e^{\lambda T}$. This yields
\[
{\mathcal{S}}'(\varphi) \Psi = v_\Psi (T) = U(T) = e^{\lambda T} \Psi = \mu \Psi.
\]
This shows that $\mu \in \sigma({\mathcal{S}}'(\varphi))$ with $|\mu| > 1$ because $\Re \lambda > 0$. Thus, the mapping defined in \eqref{defS} on an open neighborhood of $\varphi$ satisfies the hypotheses of Corollary \ref{corhenry}.
Therefore, taking $\Gamma=\mathcal O_\varphi$, which is a $C^1$-curve of fixed points of ${\mathcal{S}}$, we conclude that the periodic traveling wave $\varphi$ is orbitally unstable in the space $X_2 = H^2_\mathrm{\tiny{per}}([0,L])$ under the flow of the viscous balance law \eqref{VBL}. The proof is complete.
\qed
\section{Applications}
\label{secappl}
In order to apply the orbital instability criterion to specific examples, let us write the following hypotheses on the nonlinear functions $f$ and $g$, which were considered by the authors in \cite{AlPl21} in their existence and spectral stability analysis. These hypotheses describe a particular class of viscous balance laws:
\begin{itemize}
\item[(A$_1$)] $f \in C^4(\mathbb{R})$. \phantomsection\label{A1}
\item[(A$_2$)] \phantomsection\label{A2} $g \in C^3(\mathbb{R})$ and it is of Fisher-KPP type, satisfying
\begin{equation}
\label{FisherKPP}
\begin{aligned}
&g(0) = g(1) =0,\\
&g'(0) > 0, \, g'(1) < 0,\\
&g(u)>0 \, \textrm{ for all } \, u \in(0,1),\\
&g(u)<0 \, \textrm{ for all } \, u \in (-\infty,0).
\end{aligned}
\end{equation}
\item[(A$_3$)] \phantomsection\label{A3} There exists $u_* \in (-\infty, 0)$ such that
\[
\int_{u_*}^0 g(s) \, ds + \int_0^1 g(s) \, ds= 0.
\]
\item[(A$_4$)] \phantomsection\label{A4} Genericity condition:
\begin{equation}
\label{gencon}
\overline{a}_0 := f'''(0) - \frac{f''(0) g''(0)}{\sqrt{g'(0)}} \neq 0.
\end{equation}
\item[(A$_5$)] \phantomsection\label{A5} Non-degeneracy condition:
\begin{equation}
\label{nondeg}
\left[ \int_{u_*}^1 \!\!\gamma(s) \, ds \right] \left[ \int_{u_*}^1 \!\! f'(s) \sqrt{1+\gamma'(s)^2} \, ds\right] \neq \left[ \int_{u_*}^1 \!\! \sqrt{1+\gamma'(s)^2} \, ds \right] \left[ \int_{u_*}^1 \!\! f'(s) \gamma(s) \, ds \right],
\end{equation}
where
\begin{equation}
\label{defPsi}
\gamma(u):= \sqrt{2\int_u^1 g(s) \, ds} \, > 0, \qquad u \in (u_*,1).
\end{equation}
\item[(A$_6$)] \phantomsection\label{A6} Saddle condition:
\begin{equation}
\label{saddle}
f'(1) \left[ \int_{u_*}^1 \gamma(s) \, ds \right] \neq \int_{u_*}^1 f'(s) \gamma(s) \, ds.
\end{equation}
\end{itemize}
\begin{remark}
Although hypotheses \hyperref[A1]{(A$_1$)} - \hyperref[A6]{(A$_6$)} may at first glance seem too restrictive, they are fulfilled by a large number of models including, for example, the well-known Burgers-Fisher equation (cf. \cite{LeHa16,AlPl21,MiGu02}),
\begin{equation}
\label{eqBF}
u_t + uu_x = u_{xx} + u(1-u),
\end{equation}
for which the nonlinear flux function is given by the paradigmatic Burgers' flux \cite{Bur48,La57}, $f(u) = \tfrac{1}{2} u^2$, and the reaction term is the classical logistic growth function, $g(u) = u(1-u)$ (cf. \cite{Fis37,KPP37}). Other models that satisfy these assumptions include the Buckley-Leverett flux \cite{BuLe42} together with the logistic reaction, yielding the scalar equation
\begin{equation}
\label{LogBLmodel}
u_t + \partial_x \left( \frac{u^2}{u^2 + \tfrac{1}{2}(1-u)^2}\right) = u_{xx} + u(1-u),
\end{equation}
as well as the modified Burgers-Fisher equation,
\begin{equation}
\label{MBFmodel}
u_t + \partial_x \big( \tfrac{1}{4}u^4 - \tfrac{1}{3} u^3\big) = u_{xx} + u-u^4,
\end{equation}
just to mention a few. See \cite{AlPl21} for more details.
\end{remark}
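\begin{remark}
For the Burgers--Fisher model \eqref{eqBF}, hypotheses \hyperref[A3]{(A$_3$)} and \hyperref[A4]{(A$_4$)} can be verified by a direct (symbolic) computation, yielding $u_* = -1/2$ and $\overline{a}_0 = 2 \neq 0$. The following Python/SymPy sketch, included only as an illustration, reproduces these values.
\begin{verbatim}
import sympy as sp

u = sp.symbols('u', real=True)
f = u**2/2                    # Burgers flux
g = u*(1 - u)                 # logistic (Fisher-KPP) reaction

# (A3): u_* < 0 with  int_{u_*}^0 g ds + int_0^1 g ds = 0
u_star = sp.symbols('u_star', real=True)
eq = sp.integrate(g, (u, u_star, 0)) + sp.integrate(g, (u, 0, 1))
print([s for s in sp.solve(sp.Eq(eq, 0), u_star) if s.is_negative])   # [-1/2]

# (A4): a0_bar = f'''(0) - f''(0) g''(0)/sqrt(g'(0))
a0_bar = sp.diff(f, u, 3) - sp.diff(f, u, 2)*sp.diff(g, u, 2)/sp.sqrt(sp.diff(g, u))
print(sp.simplify(a0_bar.subs(u, 0)))                                 # 2, hence (A4) holds
\end{verbatim}
Analogous (numerical) verifications can be carried out for the models \eqref{LogBLmodel} and \eqref{MBFmodel}.
\end{remark}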
\begin{remark}
The most important assumption is \hyperref[A2]{(A$_2$)}, which specifies a balance (or production) term with logistic response, that is, with an unstable equilibrium point at $u = 0$ and a stable one at $u=1$. Reaction functions of logistic type are used to model dynamics of populations with limited resources, which saturate into a stable equilibrium point associated to an intrinsic carrying capacity (in this case, the equilibrium state $u = 1$). They are also known as source functions of Fisher-KPP \cite{Fis37,KPP37} or monostable type.
\end{remark}
As established in \cite{AlPl21}, the unstable nature of the origin ($g'(0) > 0$) is responsible for both the existence and spectral instability of small-amplitude periodic waves emerging from a local Hopf bifurcation, as well as for the existence and spectral instability of large-period waves which emerge from a global homoclinic bifurcation near a traveling pulse based on a saddle. Even if the diffusion mechanism is changed, the unstable equilibrium of the reaction produces similar results (see, for example, the recent paper \cite{AMP22}, where hyperbolic systems with logistic source are considered). It is to be observed that these periodic waves do not exhibit intrinsic symmetries, inasmuch as the existence analysis does not rely on the standard construction techniques but on bifurcation analyses. The existence proof (which is based on local and global bifurcations) also provides the tools to analyze their spectrum. For instance, it can be shown that, for both families of waves, the Floquet spectrum intersects the unstable complex half plane and, hence, they are spectrally unstable (for details see \cite{AlPl21}).
Let us apply the criterion established in Theorem \ref{mainthem} to both families, yielding their orbital instability in Sobolev periodic spaces with the same period as the fundamental period of the underlying wave. We first examine the case of small-amplitude waves.
\subsection{Orbital instability of small-amplitude periodic waves}
Under assumptions \hyperref[A1]{(A$_1$)} - \hyperref[A4]{(A$_4$)}, the profile equation \eqref{profileq} defines a first order ODE system in the phase plane for which the origin is a center for a critical value of the speed, $c_0$, and where a local Hopf bifurcation occurs when the speed $c$ crosses $c_0$. Then small-amplitude periodic orbits, with period of order $O(1)$, emerge. This behavior can be stated as follows.
\begin{theorem}[existence of small amplitude periodic waves \cite{AlPl21}]
\label{thmexbded}
Suppose that conditions \emph{\hyperref[A1]{(A$_1$)}} - \emph{\hyperref[A4]{(A$_4$)}} hold. Then there exist a critical speed given by $c_0 := f'(0)$ and some $\epsilon_0 > 0$ sufficiently small such that, for each $0 < \epsilon < \epsilon_0$ there exists a unique (up to translations) periodic traveling wave solution to the viscous balance law \eqref{VBL} of the form $u(x,t) = \varphi^\epsilon(x - c(\epsilon)t)$, with speed $c(\epsilon) = c_0 + \epsilon$ if $\overline{a}_0 > 0$, or $c(\epsilon) = c_0 - \epsilon$ if
$\overline{a}_0 < 0$, and with fundamental period,
\begin{equation}
\label{fundphopf}
L_\epsilon = \frac{2 \pi}{\sqrt{g'(0)}} + O(\epsilon), \qquad \text{as } \, \epsilon \to 0^+.
\end{equation}
The profile function $\varphi^\epsilon = \varphi^\epsilon(\cdot)$ is of class $C^3(\mathbb{R})$, satisfies $\varphi^\epsilon(x + L_\epsilon) = \varphi^\epsilon(x)$ for all $x \in \mathbb{R}$ and is of small amplitude. More precisely,
\begin{equation}
\label{bdsmalla}
|\varphi^\epsilon(x)|, |(\varphi^\epsilon)'(x)| \leq C \sqrt{\epsilon},
\end{equation}
for all $x \in \mathbb{R}$ and some uniform $C > 0$.
\end{theorem}
\begin{proof}
See the proof of Theorem 1.2, \S 2.1, in \cite{AlPl21} for details.
\end{proof}
\begin{remark}
The proof of this existence result is based on a (local) Hopf bifurcation analysis around the critical value $c_0 = f'(0)$ of the wave speed. The bifurcation can be either sub- or supercritical, depending on the sign of $\overline{a}_0$ in \eqref{gencon}.
\end{remark}
\begin{figure}[t]
\begin{center}
\subfigure[$(\varphi,\varphi')$]{\label{figFaseHopfBF}\includegraphics[scale=.465, clip=true]{PlanoFaseHopf-BF.pdf}}
\subfigure[$\varphi = \varphi(x)$]{\label{figWaveHopfBF}\includegraphics[scale=.465, clip=true]{WaveHopf-BF.pdf}}
\end{center}
\caption{\small{Emergence of small-amplitude waves for the Burgers-Fisher equation \eqref{eqBF}. Panel (a) shows the phase portrait (in the $(\varphi,\varphi')$ plane) of the ODE \eqref{profileq} for the speed value $c = 0.005$. Numerical solutions of \eqref{profileq} with nearby initial points are shown in light blue color. The orbit in red is a numerical approximation of the unique small amplitude periodic wave for this speed value. Panel (b) shows the graph (in red) of the approximated periodic wave $\varphi$ as a function of $x$ (color online).}}\label{figHopfBF}
\end{figure}
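\begin{remark}
The phase portrait in Figure \ref{figHopfBF} is straightforward to reproduce numerically. The sketch below (an illustration only) integrates the profile equation in the form implied by \eqref{VBL}, namely $\varphi'' = (f'(\varphi) - c)\varphi' - g(\varphi)$, for the Burgers--Fisher model. Since for $c$ slightly above $c_0 = f'(0)$ the origin is a stable focus and the small Hopf cycle repels nearby forward orbits, the trajectory is integrated backward in $x$ so that the reversed flow spirals outward onto the periodic orbit; the initial point, the integration length and the tolerances are ad hoc choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

fp = lambda u: u                    # Burgers flux, f(u) = u^2/2
g  = lambda u: u*(1.0 - u)          # logistic reaction, g'(0) = 1
c  = 0.005                          # speed c = c_0 + epsilon, with c_0 = f'(0) = 0

# phase-plane system for (phi, phi')
rhs = lambda x, y: [y[1], (fp(y[0]) - c)*y[1] - g(y[0])]

# integrate backward in x from a point very close to the origin
sol = solve_ivp(rhs, (0.0, -6000.0), [1e-3, 0.0], rtol=1e-9, atol=1e-12,
                dense_output=True, max_step=0.5)
x = np.linspace(-4500.0, -6000.0, 20000)       # keep only the tail of the spiral
phi = sol.sol(x)[0]
print("approximate amplitude of the periodic profile:", 0.5*(phi.max() - phi.min()))
\end{verbatim}
The printed amplitude is small, of order $O(\sqrt{\epsilon})$, in agreement with \eqref{bdsmalla}.
\end{remark}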
It is to be noted that formulae \eqref{fundphopf} and \eqref{bdsmalla} imply that, for a fixed small $\epsilon$, the fundamental period of the wave is of order $O(1)$ and the amplitude of the waves is of order $O(\sqrt{\epsilon})$, respectively. Thus, one expects that when $\epsilon \to 0^+$ the small-amplitude waves tend to the origin and the linearized operator (formally) becomes a constant coefficient linearized operator around the zero solution, whose spectrum is determined by a dispersion relation that invades the unstable half plane thanks to the sign of $g'(0)$. This observation is the basis of the analysis in \cite{AlPl21}, which proves that unstable point eigenvalues of the constant coefficient operator split into neighboring curves of Floquet spectra of the underlying small amplitude waves.
Indeed, the Bloch family of linearized operators around the wave,
\begin{equation}
\label{allBloch}
\left\{
\begin{aligned}
{\mathcal{L}}^{c(\epsilon)}_\theta &:= (\partial_x + i\theta/L_\epsilon)^2 + a^\epsilon_1(x) (\partial_x + i \theta/L_\epsilon) + a^\epsilon_0(x) \mathrm{Id},\\
{\mathcal{L}}^{c(\epsilon)}_\theta &: L^2_\mathrm{\tiny{per}}([0,L_\epsilon]) \to L^2_\mathrm{\tiny{per}}([0,L_\epsilon]),
\end{aligned}
\right.
\end{equation}
where
\[
a^\epsilon_1(x) := c(\epsilon) - f'(\varphi^\epsilon), \qquad a^\epsilon_0(x) := g'(\varphi^\epsilon) - f'(\varphi^\epsilon)_x,
\]
for each $\theta \in (-\pi,\pi]$, with domain ${\mathcal{D}}({\mathcal{L}}^{c(\epsilon)}_\theta) = H^2_\mathrm{\tiny{per}}([0,L_\epsilon])$, can be transformed into a family of operators, $\tilde{{\mathcal{L}}}^{\epsilon}_\theta$, defined on the periodic space $L^2_\mathrm{\tiny{per}}([0,\pi])$, for which the period no longer depends on $\epsilon$.
For that purpose, the authors in \cite{AlPl21} make the change of variables, $y := \pi x/ L_\epsilon$ and $w(y) := u(L_\epsilon y/\pi)$, and apply \eqref{fundphopf} and $c(\epsilon) = c_0 + O(\epsilon)$, in order to recast the spectral problem for the operators in \eqref{allBloch} as $\widetilde{{\mathcal{L}}}^{\epsilon}_\theta w = \lambda w$,
where
\[
\begin{aligned}
\widetilde{{\mathcal{L}}}^{\epsilon}_\theta := \widetilde{{\mathcal{L}}}^{0}_\theta + \sqrt{\epsilon} \widetilde{{\mathcal{L}}}^{1}_\theta &: L^2_\mathrm{\tiny{per}}([0,\pi]) \to L^2_\mathrm{\tiny{per}}([0,\pi]),\\
\widetilde{{\mathcal{L}}}^{0}_\theta &:= (i \theta + \pi \partial_y)^2 + 4 \pi^2 \mathrm{Id},\\
\widetilde{{\mathcal{L}}}^{1}_\theta &:= b_1(y) (i \theta + \pi \partial_y) + b_0(y) \mathrm{Id},
\end{aligned}
\]
for each $\theta \in (-\pi,\pi]$, where the coefficients behave like
\[
\begin{aligned}
b_1(y) &:= \frac{1}{\sqrt{\epsilon}} a_1^\epsilon(y) = O(1), \\
b_0(y) &:= \frac{1}{\sqrt{\epsilon}} \big( a_0^\epsilon(y) - 4 \pi^2 \big) = O(1),
\end{aligned}
\]
as $\epsilon \to 0^+$ (for details, see \cite{AlPl21}). It can be shown that, for every $\theta$, $\widetilde{{\mathcal{L}}}^{1}_\theta$ is $\widetilde{{\mathcal{L}}}^{0}_\theta$-bounded (see Lemma 4.6 in \cite{AlPl21}). Therefore, upon application of standard perturbation theory for linear operators (cf. Kato \cite{Kat80}), it is shown that both spectra, $\sigma(\widetilde{{\mathcal{L}}}^{\epsilon}_\theta)$ and $\sigma(\widetilde{{\mathcal{L}}}^{0}_\theta)$, are located nearby in the complex plane for $\epsilon > 0$ small enough.
Transforming back into the original coordinates, the same conclusion holds for any fixed, sufficiently small $\epsilon > 0$ and the associated family of Bloch operators \eqref{allBloch} defined on $L^2_\mathrm{\tiny{per}}([0,L_\epsilon])$. In particular, for $\theta = 0$, the unperturbed operator
\[
\left\{
\begin{aligned}
{\mathcal{L}}_0^{c(0)} &= \partial_x^2 + g'(0) \mathrm{Id}, \\
{\mathcal{L}}_0^{c(0)} &: L^2_\mathrm{\tiny{per}}([0,L_\epsilon]) \to L^2_\mathrm{\tiny{per}}([0,L_\epsilon]),
\end{aligned}
\right.
\]
with ${\mathcal{D}}({\mathcal{L}}_0^{c(0)}) = H^2_\mathrm{\tiny{per}}([0,L_\epsilon])$, is clearly self-adjoint with a positive eigenvalue $\widetilde{\lambda}_0 = g'(0)$ associated with the constant eigenfunction $\Psi_0(y) = 1 \in H^2_\mathrm{\tiny{per}}([0,L_\epsilon])$. Hence, the operator ${\mathcal{L}}^{c(\epsilon)}_0$ has discrete eigenvalues $\widetilde{\lambda}_j(\epsilon)$ in a $\sqrt{\epsilon}$-neighborhood of $\widetilde{\lambda}_0 = g'(0)$, with multiplicities adding up to the multiplicity of $\widetilde{\lambda}_0$, provided that $\epsilon$ is sufficiently small. Moreover, since $\widetilde{\lambda}_0 >0$, there holds $\Re \widetilde{\lambda}_j(\epsilon) > 0$ for such $\epsilon$. Consequently, we have the following result.
\begin{lemma}
\label{lemspectcond}
For each $0 < \epsilon \ll 1$ sufficiently small there holds
\begin{equation}
\sigma_\mathrm{\tiny{pt}}({\mathcal{L}}_0^{c(\epsilon)})_{|L^2_\mathrm{\tiny{per}}} \cap \{ \lambda \in \mathbb{C} \, : \, | \lambda - g'(0) | < C \sqrt{\epsilon} \, \} \neq \varnothing,
\end{equation}
for some uniform constant $C > 0$ that does not depend on $\epsilon$.
\end{lemma}
\begin{proof}
See Lemma 4.7 and the proof of Theorem 1.4 in \cite{AlPl21} (in particular, see equation (4.8) in \cite{AlPl21}).
\end{proof}
Therefore, we conclude the existence of an unstable eigenvalue $\lambda(\epsilon) \in \mathbb{C}$ with $\Re \lambda(\epsilon) > 0$ and an eigenfunction $\Psi^\epsilon \in H^2_\mathrm{\tiny{per}}([0,L_\epsilon])$, such that ${\mathcal{L}}_0^{c(\epsilon)} \Psi^\epsilon = \lambda(\epsilon) \Psi^\epsilon$, that is, the spectral instability property holds. Hence, upon application of Theorem \ref{mainthem}, we have the following
\begin{theorem}[orbital instability of small-amplitude periodic waves]
\label{teoorbsmall}
Under assumptions \emph{\hyperref[A1]{(A$_1$)}} - \emph{\hyperref[A4]{(A$_4$)}}, there exists $\bar{\epsilon}_0 \in (0, \epsilon_0)$ sufficiently small such that each periodic wave of Theorem \ref{thmexbded}, $u(x,t) = \varphi^\epsilon(x - c(\epsilon)t)$, with $\epsilon \in (0, \bar{\epsilon}_0)$, is orbitally unstable in the periodic space $X_2 = H^2_\mathrm{\tiny{per}}([0,L_\epsilon])$ under the flow of the viscous balance law \eqref{VBL}.
\end{theorem}
\begin{remark}
It is to be observed that, in the case of small amplitude waves, the instability is due to a structural assumption on the model equations: the instability of the origin as an equilibrium point of the reaction generates an unstable eigenvalue of an associated constant coefficient operator, from which the linearization of a small-amplitude wave represents a perturbation.
\end{remark}
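\begin{remark}
The perturbative mechanism just described can also be probed numerically. The sketch below (an illustration only: the profile is mocked up as $\varphi \sim \sqrt{\epsilon}\cos(2\pi x/L_\epsilon)$ rather than taken from Theorem \ref{thmexbded}, and all parameters are ad hoc) assembles the Bloch operator with $\theta = 0$ for the Burgers--Fisher model by Fourier collocation and computes its spectrum; the largest real part remains close to $g'(0) = 1 > 0$, in line with Lemma \ref{lemspectcond}.
\begin{verbatim}
import numpy as np

N, eps, gp0 = 128, 0.01, 1.0                   # modes, epsilon, g'(0)
L = 2*np.pi/np.sqrt(gp0)                       # fundamental period to leading order
x = np.arange(N)*L/N
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

def D(power):
    """Dense Fourier differentiation matrix (d/dx)^power on the periodic grid."""
    F = np.fft.fft(np.eye(N), axis=0)
    return np.real(np.fft.ifft(((1j*k)**power)[:, None]*F, axis=0))

phi  = np.sqrt(eps)*np.cos(2*np.pi*x/L)        # mock O(sqrt(eps)) profile
dphi = -np.sqrt(eps)*(2*np.pi/L)*np.sin(2*np.pi*x/L)
a1 = eps - phi                                 # c(eps) - f'(phi) for f(u) = u^2/2
a0 = gp0 - 2.0*phi - dphi                      # g'(phi) - (f'(phi))_x for g(u) = u(1-u)

Lop = D(2) + np.diag(a1) @ D(1) + np.diag(a0)  # Bloch operator with theta = 0 on the grid
print("largest real part of the spectrum:", np.linalg.eigvals(Lop).real.max())
\end{verbatim}
\end{remark}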
\subsection{Orbital instability of large-period waves}
The analysis of \cite{AlPl21} also reveals the existence of a different family of periodic waves. Under further assumptions {\hyperref[A5]{(A$_5$)}} and {\hyperref[A6]{(A$_6$)}}, one guarantees that, for another critical value of the speed, the point $(1,0)$ in the phase plane is the (saddle) base of a homoclinic orbit, representing a traveling pulse solution to \eqref{VBL}. Then, from a global bifurcation argument (see, e.g., \cite{SSTC01}) one deduces the existence of large period waves in a vicinity of the homoclinic orbit when the speed tends to the critical value (the speed of the traveling pulse) or, equivalently, when their period goes to infinity. This is the content of the following
\begin{theorem}[existence of large period waves \cite{AlPl21}]
\label{thmexlarge}
Under assumptions \emph{\hyperref[A1]{(A$_1$)}} - \emph{\hyperref[A3]{(A$_3$)}}, \emph{\hyperref[A5]{(A$_5$)}} and \emph{\hyperref[A6]{(A$_6$)}}, there is a critical speed given by
\begin{equation}
\label{defc1}
c_1 := \frac{\int_{u_*}^1 f'(s) \gamma(s) \, ds}{\int_{u_*}^1 \gamma(s) \, ds},
\end{equation}
such that there exists a traveling pulse solution (homoclinic orbit) to equation \eqref{VBL} of the form $u(x,t) = \varphi^0(x - c_1 t)$, traveling with speed $c_1$ and satisfying $\varphi^0 \in C^3(\mathbb{R})$, $\varphi^0(x) \to 1$ as $x \to \pm\infty$, with
\begin{equation}
\label{expulse}
|\varphi^0(x) - 1|, |(\varphi^0)'(x)| \leq C e^{-\kappa |x|},
\end{equation}
for all $x \in \mathbb{R}$ and some $\kappa > 0$. In addition, one can find $\epsilon_1 > 0$ sufficiently small such that, for each $0 < \epsilon < \epsilon_1$ there exists a unique periodic traveling wave solution to the viscous balance law \eqref{VBL} of the form $u(x,t) = \varphi^\epsilon(x - c(\epsilon)t)$, traveling with speed $c(\epsilon) = c_1 + \epsilon$ if $f'(1) < c_1$ or $c(\epsilon) = c_1 - \epsilon$ if $f'(1) > c_1$, with fundamental period
\begin{equation}
\label{fundphomo}
L_\epsilon = O(| \log \epsilon |) \to \infty,
\end{equation}
and amplitude
\begin{equation}
\label{bdO1}
|\varphi^\epsilon(x)|, |(\varphi^\epsilon)'(x)| = O(1),
\end{equation}
as $\epsilon \to 0^+$. Moreover, these periodic orbits converge to the homoclinic or traveling pulse solution as $\epsilon \to 0^+$ and satisfy the bounds (after a suitable reparametrization of $x$),
\begin{equation}
\label{bounds}
\begin{aligned}
\sup_{x \in [-\frac{1}{2}L_\epsilon, \frac{1}{2}L_\epsilon]} \left( |\varphi^0(x) - \varphi^\epsilon(x)| + |(\varphi^0)'(x) - (\varphi^\epsilon)'(x) |\right) &\leq C \exp \Big( \!- \frac{\kappa}{2} L_\epsilon\Big), \\
| c_1 - c(\epsilon)| &\leq C \exp \big( \!- \kappa L_\epsilon \big),
\end{aligned}
\end{equation}
for some uniform $C > 0$, the same $\kappa > 0$ and for all $0 < \epsilon < \epsilon_1$.
\end{theorem}
\begin{proof}
See the proof of Theorem 1.3, \S 2.3, in \cite{AlPl21} for details.
\end{proof}
\begin{remark}
The proof of this existence result is based on two components. First, it establishes the existence of a traveling pulse for equation \eqref{VBL}, traveling with speed $c = c_1$ given in \eqref{defc1}. This is a consequence of Melnikov's integral method. Second, upon application of Andronov-Leontovich's theorem in the plane it is shown that there exists a family of periodic waves emerging from the homoclinic orbit; the family is parametrized by $\epsilon = | c_1 - c(\epsilon)|$, for which each wave travels with speed $c = c(\epsilon)$ and converges to the traveling pulse as $\epsilon \to 0^+$. The fundamental period of the family of periodic waves, $L_\epsilon$, converges to $\infty$ as $\epsilon \to 0^+$ at order $O(|\log \epsilon|)$. As an illustration, Figure \ref{figHomoBL} shows the emergence of large period waves for the logistic Buckley-Leverett equation \eqref{LogBLmodel}. For example, it can be proved that the value of the speed of the homoclinic orbit defined in \eqref{defc1}, from which the periodic loops with large period bifurcate, is $c_1 \approx 0.589097$ (see \cite{AlPl21}). Since $c_1 > f'(1) = 0$, Theorem \ref{thmexlarge} then implies that the family of periodic waves with large period emerges for speed values in a neighborhood above the value $c_1$, that is, for $c \in (0.5891, 0.5891 + \epsilon)$ with $\epsilon > 0$ small. Figure \ref{figFaseHomoBL} shows a numerical approximation (in the phase plane) of the homoclinic loop of the ODE \eqref{profileq} with speed $c_1$ (dashed line in blue) and of a large-period wave from the family with speed $c \approx c_1 + 0.025$ (continuous line in red). Figure \ref{figWaveHomoBL} shows numerical approximations of the graph (in red) of the large period wave $\varphi$ as a function of $x$, together with the traveling pulse (dashed, blue line).
\end{remark}
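\begin{remark}
The quantities entering Theorem \ref{thmexlarge} are easily evaluated numerically. The Python sketch below (an illustration only; the finite-difference derivative and quadrature tolerances are ad hoc) computes $u_*$ from \hyperref[A3]{(A$_3$)} and the critical speed $c_1$ of \eqref{defc1} for the logistic reaction, both for the Burgers flux and for the Buckley--Leverett flux of \eqref{LogBLmodel}; the latter value can be compared with $c_1 \approx 0.589097$ quoted above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Logistic reaction g(u) = u(1-u): G(u) = int_u^1 g(s) ds in closed form, so that
# (A3) reads G(u_*) = 0 with u_* < 0, and gamma(u) = sqrt(2 G(u)) as in (defPsi).
G = lambda u: 1.0/6.0 - u**2/2.0 + u**3/3.0
gamma = lambda u: np.sqrt(max(2.0*G(u), 0.0))
u_star = brentq(G, -2.0, -0.01)                          # -> -0.5

def c1(f_prime):
    """Pulse speed c_1 of Eq. (defc1), by numerical quadrature."""
    num = quad(lambda s: f_prime(s)*gamma(s), u_star, 1.0)[0]
    den = quad(gamma, u_star, 1.0)[0]
    return num/den

fBL  = lambda u: u**2/(u**2 + 0.5*(1.0 - u)**2)          # Buckley-Leverett flux
fpBL = lambda u: (fBL(u + 1e-7) - fBL(u - 1e-7))/2e-7    # crude central difference

print("u_*                         =", u_star)
print("c_1, Burgers flux f = u^2/2 =", c1(lambda s: s))
print("c_1, Buckley-Leverett flux  =", c1(fpBL))         # compare with 0.589097
\end{verbatim}
\end{remark}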
\begin{figure}[t]
\begin{center}
\subfigure[$(\varphi,\varphi')$]{\label{figFaseHomoBL}\includegraphics[scale=.465, clip=true]{PlanoFaseHomo-BL3.pdf}}
\subfigure[$\varphi = \varphi(x)$]{\label{figWaveHomoBL}\includegraphics[scale=.465, clip=true]{WaveHomo-BL3.pdf}}
\end{center}
\caption{\small{Large period waves for the logistic Buckley-Leverett equation \eqref{LogBLmodel}. Panel (a) shows numerical approximations in the phase plane of both the homoclinic loop (traveling pulse) for equation \eqref{LogBLmodel} with speed value $c_1 \approx 0.5891$ (in dashed blue line) and the periodic wave nearby with speed value $c_1 + \epsilon$, $\epsilon \approx 0.025$ (solid, red line). Panel (b) shows numerical approximations of the graph (solid, red line) of the large period wave $\varphi$ as a function of $x$, together with the traveling pulse (dashed, blue line). The period of the wave is of order $O(| \log \epsilon |) \approx O(3.69)$. (Color online.)}}\label{figHomoBL}
\end{figure}
The proof of existence of this family of waves also provides the tools to show that their Floquet spectrum is unstable. For instance, if we linearize the equation around the pulse, we obtain the following linear operator:
\begin{equation}
\label{LbarR}
\begin{aligned}
\bar{{\mathcal{L}}}^0 &:= \partial_x^2 + \bar{a}_1^0(x) \partial_x + \bar{a}_0^0(x) \mathrm{Id},\\
\bar{{\mathcal{L}}}^0 &: \, L^2(\mathbb{R}) \longrightarrow L^2(\mathbb{R}).
\end{aligned}
\end{equation}
with smooth coefficients
\[
\begin{aligned}
\bar{a}_1^0(x) &:= c_1 - f'(\varphi^0(x)),\\
\bar{a}_0^0(x) &:= g'(\varphi^0(x)) - f'(\varphi^0(x))_x,
\end{aligned}
\]
which decay exponentially to finite limits as $x \to \pm \infty$; more precisely,
\begin{equation}
\label{expconvpulse}
| \bar{a}_1^0(x) - \bar{a}_1^{\infty} | + | \bar{a}_0^0(x) - \bar{a}_0^{\infty} | \leq C e^{- \kappa |x|},
\end{equation}
for all $x \in \mathbb{R}$ with $\bar{a}_1^{\infty} := c_1 - f'(1)$, $\bar{a}_0^{\infty} := g'(1)$. This behavior holds because of the exponential decay of the traveling pulse to hyperbolic end points (see \eqref{expulse}). The operator $\bar{{\mathcal{L}}}^0$ is closed and densely defined in $L^2(\mathbb{R})$ with domain ${\mathcal{D}}(\bar{{\mathcal{L}}}^0) = H^2(\mathbb{R})$. Moreover, $\bar{{\mathcal{L}}}^0$ is of Sturmian type (see, e.g., Kapitula and Promislow \cite{KaPro13}, \S 2.3) and, upon application of standard Sturm-Liouville theory, we have the following instability result.
\begin{lemma}
\label{theoinspulse}
The traveling pulse solution of Theorem \ref{thmexlarge}, $\varphi^0$, is spectrally unstable; more precisely, there exists $\bar{\lambda}_0 > 0$ such that $\bar{\lambda}_0 \in \sigma_\mathrm{\tiny{pt}}(\bar{{\mathcal{L}}}^0)$. Moreover, this eigenvalue is simple.
\end{lemma}
\begin{proof}
See Theorem 5.1 in \cite{AlPl21}.
\end{proof}
The pioneering work by Gardner \cite{Grd97} characterized the spectrum of the linearized operator around a periodic wave of the approximating family and related it to that of the operator $\bar{{\mathcal{L}}}^0$. Gardner proved the convergence of both spectra in the infinite period limit and, under very general conditions, that loops of continuous periodic spectra bifurcate from isolated point spectra of the limiting homoclinic wave. Hence, the typical spectral instability of the traveling pulse determines the spectral instability of the periodic waves. Thanks to the convergence estimates \eqref{bounds}, the authors in \cite{AlPl21} verified the hypotheses of a recent refinement of Gardner's result due to Yang and Zumbrun \cite{YngZ19} in order to conclude the spectral instability of the family (see Corollary 4.1 and Proposition 4.2 in \cite{YngZ19}, as well as Theorems 5.2 and 1.5 in \cite{AlPl21}).
In order to apply our orbital instability criterion, however, we need to verify the spectral instability property for the particular Bloch operator with $\theta = 0$. For that purpose, we state the following result which is, in fact, a Corollary of the proof of Theorem 1.5 in \cite{AlPl21}.
\begin{corollary}
\label{corBloch0}
Consider the eigenvalue problem for the Bloch operator \eqref{allBloch} linearized around the family of waves of Theorem \ref{thmexlarge}. Let ${\mathcal{C}} \subset \mathbb{C}$ be a positively oriented simple circle of fixed radius with ${\mathcal{C}}\subset \{\lambda\in \mathbb{C}: \Re \lambda>0\}$ containing $\bar{\lambda}_0$ (which is the simple, real and unstable isolated eigenvalue of $\bar{{\mathcal{L}}}^0$) in its interior, and containing no other eigenvalue of $\bar{{\mathcal{L}}}^0$ in the closure of ${\mathcal{C}}$. Then for sufficiently small $0 < \epsilon \ll 1$ and for each $-\pi < \theta \leq \pi$, the Bloch wave spectral problem ${\mathcal{L}}^{c(\epsilon)}_\theta w = \lambda w$ has exactly one point eigenvalue $\lambda = \lambda(\epsilon)$ in the interior of ${\mathcal{C}}$.
\end{corollary}
\begin{proof}
Let us define the matrix coefficients
\[
\mathbb{A}^0(x,\lambda) := \begin{pmatrix} 0 & 1 \\ \lambda - \bar{a}_0^0(x) & - \bar{a}_1^0(x) \end{pmatrix},
\]
for $x \in \mathbb{R}$ and $\lambda \in \mathbb{C}$. These coefficients are clearly analytic in $\lambda$ and of class $C^1(\mathbb{R};\mathbb{C}^{2 \times 2})$ as functions of $x \in \mathbb{R}$. Moreover, they have asymptotic limits given by
\[
\mathbb{A}^0_\infty(\lambda) := \lim_{x \to \pm \infty} \mathbb{A}^0(x,\lambda) = \begin{pmatrix} 0 & 1 \\ \lambda - \bar{a}_0^{\infty} & - \bar{a}_1^{\infty} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ \lambda - g'(1) & - c_1 + f'(1) \end{pmatrix}.
\]
Thanks to the exponential decay \eqref{expulse} of the traveling pulse and the continuity of the coefficients, we find that, for any $|\lambda| \leq M$ with some $M > 0$, there exists a constant $C(M) > 0$ such that
\begin{equation}
\label{laH2K}
| \mathbb{A}^0(x,\lambda) - \mathbb{A}^0_\infty(\lambda)| \leq C(M) e^{-\kappa |x|},
\end{equation}
for all $x \in \mathbb{R}$. Likewise, define the coefficients,
\[
\mathbb{A}^\epsilon(x,\lambda) = \begin{pmatrix} 0 & 1 \\ \lambda - \bar{a}_0^{\epsilon}(x) & - \bar{a}_1^{\epsilon}(x)\end{pmatrix},
\]
which are analytic in $\lambda \in \mathbb{C}$, continuous in $\epsilon > 0$ and of class $C^1(\mathbb{R};\mathbb{C}^{2 \times 2})$ as functions of $x \in \mathbb{R}$. Hence, since the coefficients are smooth and bounded, and in view of estimates \eqref{bounds}, we have, for $|\lambda |\leq M$,
\[
\begin{aligned}
|\mathbb{A}^\epsilon(x, \lambda) - \mathbb{A}^0(x,\lambda) | &\leq \overline{C}(M) \Big( |\varphi^0(x) - \varphi^\epsilon(x)| + | (\varphi^0)'(x) - (\varphi^\epsilon)'(x)| + |c_1 - c(\epsilon)| \Big)\\
&\leq C(M) e^{- \kappa L_\epsilon/2}.
\end{aligned}
\]
The last estimate, together with \eqref{laH2K} and Theorem \ref{thmexlarge}, yields
\[
\begin{aligned}
|\mathbb{A}^0(x,\lambda) - \mathbb{A}^0_{\infty}| &\leq C(M) e^{- \kappa |x|}, \quad \text{for all } \, x \in \mathbb{R},\\
|\mathbb{A}^0(x,\lambda) - \mathbb{A}^\epsilon(x,\lambda)| &\leq C(M) e^{-\kappa L_\epsilon/2}, \quad \text{for all } \, |x| \leq \frac{L_\epsilon}{2},
\end{aligned}
\]
for every $|\lambda| \leq M$ and some uniform constants $C(M), \kappa > 0$ (see estimates (5.10) in \cite{AlPl21}). We then conclude that Hypothesis 1 of Theorem 1.2 by Gardner \cite{Grd97} is satisfied (notice that Hypothesis 1 of Gardner requires the estimates for $|\mathbb{A}^0 - \mathbb{A}^\epsilon|$ in half a period too, because the fundamental period in \cite{Grd97} is $2L_\epsilon$). Hypothesis 2 is fulfilled in the set of consistent splitting, $\Omega = \{ \lambda \in \mathbb{C} \, : \, \Re \lambda > g'(1) \}$ (see the proof of Theorem 5.1 in \cite{AlPl21}). Finally, Hypothesis 3 is trivially fulfilled by the traveling pulse thanks to Sturm-Liouville theory. Upon application of Theorem 1.2 in \cite{Grd97} and since the eigenvalue $\bar{\lambda}_0 \in \sigma_\mathrm{\tiny{pt}}(\bar{{\mathcal{L}}}^0)$ is simple, we conclude the existence of exactly one eigenvalue, $\lambda = \lambda(\epsilon)$, of ${\mathcal{L}}_\theta^{c(\epsilon)}$ for each $\theta \in (-\pi,\pi]$, in the interior of ${\mathcal{C}}$.
\end{proof}
Hence, Corollary \ref{corBloch0} implies that, for the particular case of the Bloch operator with $\theta = 0$, the spectral instability property holds: there exists an unstable eigenvalue $\lambda(\epsilon) \in \mathbb{C}$ with $\Re \lambda(\epsilon) > 0$ and an eigenfunction $\Psi^\epsilon \in H^2_\mathrm{\tiny{per}}([0,L_\epsilon])$ such that ${\mathcal{L}}_0^{c(\epsilon)} \Psi^\epsilon = \lambda(\epsilon) \Psi^\epsilon$. Finally, upon application of the orbital instability criterion (Theorem \ref{mainthem}), we have proved the following
\begin{theorem}[orbital instability of large period waves]
\label{teoorblarge}
Under assumptions \emph{\hyperref[A1]{(A$_1$)}} - \emph{\hyperref[A3]{(A$_3$)}}, \emph{\hyperref[A5]{(A$_5$)}} and \emph{\hyperref[A6]{(A$_6$)}}, there exists $\bar{\epsilon}_1 \in (0, \epsilon_1)$ sufficiently small such that each large period wave of Theorem \ref{thmexlarge}, $u(x,t) = \varphi^\epsilon(x - c(\epsilon)t)$, with $\epsilon \in (0, \bar{\epsilon}_1)$, is orbitally unstable in the periodic space $X_2 = H^2_\mathrm{\tiny{per}}([0,L_\epsilon])$ under the flow of the viscous balance law \eqref{VBL}.
\end{theorem}
\section*{Acknowledgements}
The work of E. \'{A}lvarez was supported by the project CONACYT, FORDECYT-PRONACES 429825/2020, recently renamed as project CF-2019/429825. The work of J. Angulo Pava was partially supported by Grant-CNPq and Universal Project-CAPES/Brazil. J. Angulo would like to express his gratitude to the IIMAS (UNAM), D.F., for its hospitality when this work was carried out. The work of R. G. Plaza was partially supported by DGAPA-UNAM, program PAPIIT, grant IN-104922.
Spin-transfer torques in magnetic devices, i.e., the transfer of angular momentum leveraged by itinerant electrons to the magnetization dynamics,\cite{Berger,Slonczewski} enable the electrical control of the latter\cite{exp1,exp2,exp4,exp3} and are of interest for diverse technological applications. For example, spin-transfer torques can compensate the action of damping forces, sustaining a large-angle precessional motion in magnetic nano-oscillators,\cite{nano1,nano2,nano3} a phenomenon of potential interest in the field of neuromorphic computing.\cite{neuromorphic} Recent advances on this front exploit the torques of relativistic origin generated at the interface between a magnet and a heavy metal when charge flows in the latter.\cite{Scott} These torques can be described in terms of nonequilibrium accumulations due to the interfacial Edelstein \cite{Edelstein} and/or spin Hall effects,\cite{sH1,sH2} which rely on the lack of inversion symmetry imposed by the geometry of the device.
Spin accumulations can also be generated by spin-polarized currents in metallic ferromagnets without the active intervention of adjacent normal metals. In fact, the broken symmetries associated with the spontaneous magnetic ordering allow for more complex spin-current responses,\cite{spin-transfer1,spin-transfer2,spin-transfer3,perspective} such as the anisotropic magnetoresistance\cite{anisotropy} (AMR, which includes the so-called planar Hall effect\cite{pH}), leading to different mechanisms of spin transfer.\cite{SOT1,SOT2,SOT3} In this article, we present a minimal model for the spin-orbit torques generated by electronic currents in a heterostructure consisting of a ferromagnetic metal (FM) sandwiched between spin-relaxing layers, such as heavy normal metals (NM). Our theory is complementary to recent \textit{ab initio} studies.\cite{ab-initio1,ab-initio3,ab-initio2} The model relies on a phenomenological description of the flows of charge and longitudinal (to the magnetic order) spin, accompanied by suitable boundary conditions defined at the interfaces. We find that, when the heterostructure is inversion asymmetric, the uncompensated spin accumulation induced by a current density $\mathbf{j}_{\textrm{c}}$ exerts a damping-like torque (normalized by volume) on the magnetization of the form
\begin{align}
\label{eq:anti-damping}
\boldsymbol{\tau}_{\textrm{d}}= & \frac{\eta \hbar}{2e\Gamma_s L}
\left(\bm{\hat{z}}\cdot\bm{n}\right)\,\left(\bm{n}\bm{\times} \bm{\hat{z}}\times \bm{n}\right)\\
&\times \left[\vartheta_{\textrm{H}}\,\bm{n}\cdot\left(\bm{\hat{z}}\bm{\times}\mathbf{j}_{c}\right)+ \varrho_{\textrm{MR}}\,\left(\bm{\hat{z}}\cdot\bm{n}\right)\left(\mathbf{j}_{c}\bm{\cdot}\bm{n}\right)\right].
\nonumber
\end{align}
Here, $\boldsymbol{n}$ is a unit vector along the collective spin density, $-e$ is the electron charge, $L$ is the thickness of the film, and $\Gamma_s$ is a dimensionless number characterizing the spin relaxation rate in the bulk of the ferromagnetic metal. The dimensionless coefficients $\vartheta_{\textrm{H}}$ and $\varrho_{\textrm{MR}}$ are related to the anomalous Hall and AMR effects, respectively, while $\eta$ (with units of the inverse volume) characterizes the torque by the out-of-equilibrium longitudinal spins on the order parameter at the interface. This spin torque can be either direct or mediated by magnons,\cite{Benedetta} leading to a characteristic temperature dependence in the latter case.\cite{tvetenPRB15} The above expression has been derived under the assumption that the magnetization dynamics occur on long timescales (longer than, e.g., the longitudinal spin-flip time, $\tau_s$), so that the itinerant electrons respond to a (quasi-)static magnetic background during their transport.
The torque~\eqref{eq:anti-damping} affects the linewidth of the ferromagnetic resonance as electron charge flows through the system. When the static component of the magnetization lies within the plane defined by the normal of the layered heterostructure and the charge current (the $xz$ plane in Fig.~\ref{fig:Fig1}), the shift in the resonance linewidth follows
\begin{align}
\Delta B\left(\theta\right)\approx\mathcal{A}_{xz}\Big(\sin2\theta+\textstyle{\frac{1}{2}}\sin4\theta\Big),
\end{align}
where $\theta$ is the polar angle of the magnetization relative to the $z$ axis and $\mathcal{A}_{xz}$ is the single fitting parameter proportional to $\varrho_{\textrm{MR}}/L$, see Eq.~\eqref{linewidth_PHE}. If, on the other hand, the order parameter lies within the plane perpendicular to the current and the heterostructure (the $yz$ plane in Fig.~\ref{fig:Fig1}), the shift in the resonance linewidth reads
\begin{align}
\label{eq:yz}
\Delta B\left(\theta\right)\approx\mathcal{A}_{yz}\left(\sin\theta+\sin3\theta\right),
\end{align}
where the prefactor $\mathcal{A}_{yz}$ now scales linearly with $\vartheta_{\textrm{H}}/L$. The linewidth shift vanishes for the magnetization within the film's ($xy$) plane, according to the torque \eqref{eq:anti-damping}. We note that these dependences for the resonance linewidth have been derived under the simplified assumption of circular precession of the magnetization.
The manuscript is structured as follows: We present our model and derive the main results in Secs.~\ref{sec:theory} and~\ref{sec:linewidth}. In Sec.~\ref{sec:experiment}, we analyze data from recent spin-torque ferromagnetic resonance (ST-FMR) measurements\cite{SOT2} performed in different NM/FM heterostructures.
We conclude by discussing and summarizing our findings in Sec.~\ref{sec:discussion}.
\section{Phenomenological model}
\label{sec:theory}
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{Fig1.pdf}
\caption{a) Schematic representation of the heterostructure under consideration. b) Vertical profile of the spin accumulation within the ferromagnetic film (of thickness $L$) when both normal metals behave as bad spin sinks. c) The same when one of the metals is a good spin sink. The uncompensated spin accumulation is localized close to the interface with the bad spin sink on a length scale set by the spin diffusion length in the ferromagnet, $\ell_s$.}
\label{fig:Fig1}
\end{figure}
We consider a two-dimensional stack of a ferromagnetic conductor sandwiched between normal metals, see Fig.~\ref{fig:Fig1}(a). The relevant hydrodynamical variables are the charge density, $\rho_c$, and the longitudinal spin density, $\rho_s\equiv\boldsymbol{n}\cdot\boldsymbol{\rho}_s$.
The conjugate thermodynamic forces are $\mu_c\equiv-e\,\delta_{\rho_c}\mathcal{F}$ and $\mu_s\equiv\hbar\, \delta_{\rho_s}\mathcal{F}$, where
$\mathcal{F}$ is the free energy of the itinerant magnet. We assume that there are no slow variables related to transverse spin dynamics, apart from a coherent Landau-Lifshitz-type precession. In particular, any electronic spin dynamics relative to the collective order should relax very fast. In essence, we are constructing a phenomenology in which the spin degrees of freedom are coarse-grained down to the directional spin-density variable $\bm{n}$ and its magnitude, parametrized by $\rho_s$ (which generally contains both electronic and thermally-excited magnonic contributions, although our focus here is on the former). In the bulk of the ferromagnetic metal, we have local conservation laws of the form\begin{subequations}\begin{align}
& \partial_t\rho_c+\boldsymbol{\nabla}\cdot\mathbf{j}_c=0,\\
& \partial_t\rho_s+\boldsymbol{\nabla}\cdot\mathbf{j}_s=-\Gamma_{s}\,\mu_s,
\end{align}\end{subequations}
where $\Gamma_{s}=\hbar\nu_F/2\tau_s$. Here, $\tau_s$ and $\nu_F$ are, respectively, the spin-relaxation time and density of states per volume at the Fermi level. These continuity equations must be supplemented with constitutive relations of the form:
\begin{align}
\left(\begin{array}{c}
\mathbf{j}_c\\
\frac{2e}{\hbar}\mathbf{j}_s
\end{array}\right)=
\sigma\left(\begin{array}{cc}
\hat{\sigma}_c\left[\boldsymbol{n}\right] & \hat{\sigma}_{\textrm{x}}\left[\boldsymbol{n}\right] \\
\hat{\sigma}_{\textrm{x}}^T\left[-\boldsymbol{n}\right] & \hat{\sigma}_s\left[\boldsymbol{n}\right]
\end{array}\right)
\left(\begin{array}{c}
\frac{1}{e}\boldsymbol{\nabla}\mu_c\\
-\frac{1}{2e}\boldsymbol{\nabla}\mu_s
\end{array}\right),
\end{align}
where the off-diagonal matrix elements are related by the Onsager reciprocal relations.
In a featureless, isotropic ferromagnet, the normalized conductivity tensors have the following general structure: \begin{subequations}
\begin{align}
& \left[\hat{\sigma}_{c}\right]_{ij}=\delta_{ij}+\vartheta\,\epsilon_{ijk}n_k+\varrho\, n_in_j,\\
& \left[\hat{\sigma}_{s}\right]_{ij}=\delta_{ij}+\vartheta_{s} \,\epsilon_{ijk}n_k+\varrho_{s} \, n_in_j,\\
& \left[\hat{\sigma}_{\textrm{x}}\right]_{ij}=P\, \delta_{ij}+\vartheta_{\textrm{x}}\,\epsilon_{ijk}n_k+\varrho_{\textrm{x}}\, n_in_j.
\end{align}
\end{subequations}
Here, $\sigma$ is the total conductivity, which, in the two-channel phenomenology with little spin mixing, is given by $\sigma\approx\sigma_{\uparrow}+\sigma_{\downarrow}$, in terms of the conductivity $\sigma_{\uparrow}$ ($\sigma_{\downarrow}$) of the majority (minority) electrons. The dimensionless parameter $P\approx(\sigma_{\uparrow}-\sigma_{\downarrow})/(\sigma_{\uparrow}+\sigma_{\downarrow})$ measures the spin polarization of the electrical current. The dimensionless coefficients $\vartheta$ and $\varrho$ parametrize the anomalous Hall and AMR effects, respectively.
The coefficients $\vartheta_s$ and $\varrho_s$ parametrize analogous effects in the spin sector, while $\vartheta_{\textrm{x}}$ and $\varrho_{\textrm{x}}$ are associated with similar spin-charge cross terms.
Microscopically, all these phenomenological constants depend on relativistic interactions and can typically be assumed to be small: $|\vartheta|\sim|\vartheta_{s,\textrm{x}}|, |\varrho|\sim|\varrho_{s,\textrm{x}}|\ll 1$.
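As an illustrative aside, the structure of these tensors and the Onsager requirement built into the constitutive relations (namely, that transposing while sending $\bm{n}\to-\bm{n}$ returns the same tensor) can be checked numerically; the following short Python sketch uses hypothetical values for $P$, $\vartheta$, and $\varrho$ and is not part of the derivation.
\begin{verbatim}
import numpy as np

# Illustrative sketch (hypothetical parameter values): build the normalized
# conductivity tensors [sigma]_ij = A delta_ij + B eps_ijk n_k + C n_i n_j
# and check the Onsager-type property sigma(-n)^T = sigma(n).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def tensor(A, B, C, n):
    n = np.asarray(n, float)
    return A*np.eye(3) + B*np.einsum('ijk,k->ij', eps, n) + C*np.outer(n, n)

n = np.array([0.3, -0.5, 0.8]); n /= np.linalg.norm(n)
params = {"sigma_c": (1.0, 0.02, 0.03),    # (1, theta,   rho)
          "sigma_s": (1.0, 0.02, 0.03),    # (1, theta_s, rho_s)
          "sigma_x": (0.5, 0.01, 0.015)}   # (P, theta_x, rho_x)
for name, (A, B, C) in params.items():
    ok = np.allclose(tensor(A, B, C, -n).T, tensor(A, B, C, n))
    print(name, "satisfies sigma(-n)^T = sigma(n):", ok)
\end{verbatim}
In particular, for tensors of this specific isotropic form the lower-left block $\hat{\sigma}_{\textrm{x}}^T[-\bm{n}]$ of the response matrix coincides with $\hat{\sigma}_{\textrm{x}}[\bm{n}]$.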
We apply these equations to the device geometry depicted in Fig.~\ref{fig:Fig1}(a), with the charge flowing in the $x$ direction, $\mathbf{j}_c=j\,\boldsymbol{\hat{x}}$. For simplicity, we assume translational invariance along $y$, and we focus on the charge and spin accumulations deep inside the ferromagnet, namely, far away from the leads.\cite{foot3} For the asymmetric case, consisting of the top/bottom normal metals being bad/good spin sinks, the spin accumulation along the transverse direction reads (see Appendix~\ref{AppA})
\begin{align}
\label{eq:mus}
\mu_s(z)=& \frac{2\,eE\,\ell_s'\sinh\left(\frac{z}{\ell_s'}\right)}{\left[1+\varrho_sn_z^2-\frac{\left(P+\varrho_{\textrm{x}} n_z^2\right)^2}{1+\varrho\,n_z^2}\right]\cosh\left(\frac{L}{\ell_s'}\right)}
\\
& \times \left[
\vartheta_{\textrm{x}}n_y+\varrho_{\textrm{x}}n_xn_z-
\frac{P+\varrho_{\textrm{x}}n_z^2}{1+\varrho\,n_z^2}\left(\varrho\,n_xn_z+\vartheta\,n_y\right)\right],
\nonumber
\end{align}
whose profile is shown in Fig.~\ref{fig:Fig1}(c). Here, $z>0$ is measured from the good spin sink and $\ell_{s}'$ denotes a magnetic-order dependent spin diffusion length, see Eq.~\eqref{spin_dif_length}. The parameter $E$ is associated with the voltage drop between the leads. In our final expression, we neglect a small misalignment between the applied current and the electric field associated with the Hall/MR effects, and write simply $E\approx j/\sigma$.
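As a simple numerical illustration of this profile (arbitrary units, with the magnetic-order dependent prefactor stripped off), the thickness average of $\sinh(z/\ell_s')/\cosh(L/\ell_s')$ tends to $\ell_s'/L$ once $L\gg\ell_s'$; this is the scaling behind the $\ell_s^2/L$ prefactor of the averaged accumulation in Eq.~\eqref{eq:integrated_mu} below. The short Python sketch here is illustrative only.
\begin{verbatim}
import numpy as np

# Illustrative sketch (arbitrary units): thickness average of the spin-accumulation
# profile sinh(z/ls)/cosh(L/ls); for L >> ls it approaches ls/L, the scaling behind
# the 2 e ls^2/(sigma L) prefactor of the averaged accumulation quoted below.
ls = 1.0
for L in (2.0, 5.0, 20.0, 100.0):
    z = np.linspace(0.0, L, 20001)
    profile = np.sinh(z / ls) / np.cosh(L / ls)
    dz = z[1] - z[0]
    average = np.sum(0.5*(profile[1:] + profile[:-1])) * dz / L   # (1/L) * int_0^L dz
    print(f"L/ls = {L/ls:6.1f}   average = {average:.4f}   ls/L = {ls/L:.4f}")
\end{verbatim}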
\section{Torque-induced linewidth}
\label{sec:linewidth}
Next we focus on the absorption power in a usual ferromagnetic resonance (FMR) experiment. We assume hereafter the low-frequency regime for the ac field, namely $\omega T_{1}\ll1$; furthermore, we assume that the corresponding wavelength is much larger than the size of the sample and, therefore, the itinerant ferromagnet exhibits a uniform dynamic state. The dynamics of the order parameter is described by the Landau-Lifshitz-Gilbert (LLG) equation,\cite{LL,G}
\begin{equation}
\label{LLG}
s(1+\alpha\,\boldsymbol{n}\times)\dot{\boldsymbol{n}}=\boldsymbol{n}\times\boldsymbol{H}_{\textrm{eff}}+\boldsymbol{\tau},
\end{equation}
where $s$ is the saturated spin density, $\alpha$ denotes the Gilbert damping constant and $\boldsymbol{H}_{\textrm{eff}}=-\delta \mathcal{F}/\delta\boldsymbol{n}$ is the thermodynamic force conjugate to the order parameter. To simplify the analysis,
we will disregard anisotropy terms in what follows. Consequently, the magnetic free energy contains nothing but the Zeeman energy, $\mathcal{F}=-\gamma\, s\,\boldsymbol{n}\cdot(\boldsymbol{B}_{0}+\boldsymbol{b})$, with $\gamma$ being the gyromagnetic ratio and $\bm{B}_{0}$, $\bm{b}(t)$ denoting the strong dc and weak ac components of the magnetic field, respectively.
When reflection symmetry along the heterostructure axis ($\boldsymbol{\hat{z}}$) is broken while retaining the axial ($C_{\infty v}$) symmetry, the most generic torques to the lowest order in the spin accumulation can be written as\cite{Benedetta}
\begin{align}
\label{eq:torque}
\boldsymbol{\tau}=\eta'\mu_s\left(\boldsymbol{\hat{z}}\cdot\boldsymbol{n}\right)\boldsymbol{n}\times \boldsymbol{\hat{z}}+\eta\,\mu_s\left(\boldsymbol{\hat{z}}\cdot\boldsymbol{n}\right)\boldsymbol{n}\times \boldsymbol{\hat{z}}\times \boldsymbol{n}.
\end{align}
Here $\eta$, $\eta'$ are phenomenological constants (with units of inverse volume). One can imagine two possible microscopic mechanisms for these torques. In one scenario, the electronic spin accumulation directly exerts a torque on the order parameter due to the inversion-symmetry-breaking-induced spin-orbit coupling at the interface. Another possibility is an inelastic channel mediated by magnons: the electronic spin accumulation is first converted into a magnon chemical potential,\cite{tvetenPRB15} and the magnon cloud subsequently exerts a torque on the coherent spin dynamics.\cite{Benedetta} The damping torque, second term in Eq.~\eqref{eq:torque}, reduces to the expression in Eq.~\eqref{eq:anti-damping}, where the prefactor comes from the average spin accumulation across the film thickness, $\bar{\mu}_s=\frac{1}{L}\int_0^{L}dz\,\mu_s\left(z\right)$. This is only different from $0$ for asymmetric heterostructures; from Eq.~\eqref{eq:mus}, to leading order in the relativistic effects and assuming $L\gg \ell_s$, we have
\begin{equation}
\label{eq:integrated_mu}
\bar{\mu}_s\approx\frac{2e\,\ell_s^{2}}{\sigma L}\big[\vartheta_{\textrm{H}}\,\bm{n}\bm{\cdot}(\bm{\hat{z}}\bm{\times}\mathbf{j}_{\textrm{c}})+\varrho_{\textrm{MR}}\,(\bm{\hat{z}}\bm{\cdot}\bm{n})(\mathbf{j}_{\textrm{c}}\bm{\cdot}\bm{n})\big],
\end{equation}
where we have introduced $\vartheta_{\textrm{H}}=\vartheta_{\textrm{x}}-P\vartheta$ and $\varrho_{\textrm{MR}}=\varrho_{\textrm{x}}-P\varrho$. Note that the prefactor can be conveniently written as
\begin{align}
\frac{2e\ell_s^{2}}{\sigma L}=\frac{\tau_s}{e\nu_FL}=\frac{\hbar}{2e\Gamma_{s} L},
\end{align}
yielding the prefactor in Eq.~\eqref{eq:anti-damping}.
In the measurements discussed later in Sec.~\ref{sec:experiment}, the static magnetic field was in the ballpark of 0.1 T,\cite{SOT2} which translates into a Larmor frequency of $\omega_{L}=\gamma B_{0}\simeq 10$ GHz. Just like in Ref.~\onlinecite{SOT2}, we assume for simplicity that in equilibrium the order parameter follows the static component of the magnetic field, $\bm{n}_{0}\propto\bm{B}_0$. When $\boldsymbol{b}(t)=\boldsymbol{b}\,e^{-i\omega t}$ is switched on, the order parameter acquires a small transverse ac component, $\boldsymbol{n}(t)=\boldsymbol{n}_{0}+\boldsymbol{\zeta}(t)$. The absorption power (averaged over an oscillation period) is proportional to the imaginary part of the transverse component of the susceptibility tensor, which can be obtained from the solution of the linearized LLG equation for $\boldsymbol{\zeta}$,
\begin{align}
\label{chi}
\chi_{t}\left(\omega\right)=\frac{\gamma(\omega_{L}-i\alpha\omega)}{(\omega_{L}-i\alpha\omega)^{2}-\left[\omega+i\frac{\eta\bar{\mu}_{s}}{s}(\hat{\bm{z}}\bm{\cdot}\bm{n}_{0})^{2}\right]^{2}}.
\end{align}
Close to the resonant frequency, $\omega\approx\omega_L$, the absorption power goes as\begin{subequations}\begin{align}
P\left(\omega\right)\propto \omega\, \text{Im}\chi_t(\omega)\approx\frac{\gamma\omega}{2}\frac{\Gamma}{\left(\omega-\omega_L\right)^2+\Gamma^2},
\end{align}
with the resonance linewidth as a function of $\bm{n}_0$ and $\mathbf{j}_{\textrm{c}}$ given by\begin{align}
\label{linewidth}
\Gamma=\alpha\,\omega_L+\frac{\eta\hbar}{2es\Gamma_{s} L}\left(\boldsymbol{\hat{z}}\bm{\cdot}\boldsymbol{n}_0\right)^2 & \left[ \vartheta_{\textrm{H}}\,\boldsymbol{n}_0\bm{\cdot}\left(\boldsymbol{\hat{z}}\bm{\times}\mathbf{j}_{\textrm{c}}\right)\right.\\
& \left. +\varrho_{\textrm{MR}}\,\left(\boldsymbol{\hat{z}}\bm{\cdot}\boldsymbol{n}_0\right)\left(\mathbf{j}_{\textrm{c}}\bm{\cdot}\bm{n}_0\right)\right].
\nonumber
\end{align}
\end{subequations}
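As a sanity check of this Lorentzian approximation (not needed for what follows), one can evaluate $\omega\,\text{Im}\,\chi_t(\omega)$ from Eq.~\eqref{chi} numerically and compare it with the right-hand side above; the Python sketch below does so for hypothetical values of $\alpha$, $\omega_L$, and of the torque term $\eta\bar{\mu}_{s}(\hat{\bm{z}}\bm{\cdot}\bm{n}_{0})^{2}/s$.
\begin{verbatim}
import numpy as np

# Sanity check (hypothetical parameter values): near omega_L, omega*Im[chi_t] from
# Eq. (chi) is well approximated by a Lorentzian of half-width
# Gamma = alpha*omega_L + eta*mu_bar*(z.n0)^2/s, as in Eq. (linewidth).
gamma_g, omega_L, alpha, torque = 1.0, 1.0, 0.01, 0.002
Gamma = alpha*omega_L + torque

w = np.linspace(omega_L - 10*Gamma, omega_L + 10*Gamma, 4001)
chi = gamma_g*(omega_L - 1j*alpha*w) / ((omega_L - 1j*alpha*w)**2 - (w + 1j*torque)**2)
full = w * chi.imag
lorentz = 0.5*gamma_g*w * Gamma / ((w - omega_L)**2 + Gamma**2)

print("relative deviation near resonance:",
      np.max(np.abs(full - lorentz)) / np.max(np.abs(full)))
\end{verbatim}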
\section{Comparison to ST-FMR data}
\label{sec:experiment}
The current-induced shift in the resonance linewidth corresponds roughly to $\Delta B\sim\Gamma/\gamma$ after subtracting the Gilbert damping contribution,
\begin{align}
\label{linewidth_PHE}
\frac{\Delta B}{j_{c}}\approx & \frac{\eta\hbar}{8e\gamma s\Gamma_{s} L}\Big[\vartheta_{\textrm{H}}\left(\sin\theta+\sin3\theta\right)\sin\phi\\
& \hspace{1.1cm}+\varrho_{\textrm{MR}}\left(\sin2\theta+\textstyle{\frac{1}{2}}\sin4\theta\right)\cos\phi\Big], \nonumber
\end{align}where $\phi$ and $\theta$ denote the azimuthal and polar angles of the magnetization measured with respect to the direction of the current and $\bm{\hat{z}}$, respectively.
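The angular dependence above follows from parametrizing $\bm{n}_0=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$ with $\mathbf{j}_{\textrm{c}}$ along $\bm{\hat{x}}$ in Eq.~\eqref{linewidth}, together with the identities $\cos^2\theta\sin\theta=\tfrac14(\sin\theta+\sin3\theta)$ and $\cos^3\theta\sin\theta=\tfrac14(\sin2\theta+\tfrac12\sin4\theta)$; the short numerical check below (illustrative only) confirms both.
\begin{verbatim}
import numpy as np

# Check of the trigonometric identities behind Eq. (linewidth_PHE):
#   cos^2(t) sin(t) = (sin t + sin 3t)/4          (Hall term, prop. to sin(phi))
#   cos^3(t) sin(t) = (sin 2t + sin 4t / 2)/4     (MR term,   prop. to cos(phi))
t = np.linspace(0.0, 2*np.pi, 1001)
hall = np.cos(t)**2*np.sin(t) - 0.25*(np.sin(t) + np.sin(3*t))
mr   = np.cos(t)**3*np.sin(t) - 0.25*(np.sin(2*t) + 0.5*np.sin(4*t))
print("max |error| (Hall term):", np.max(np.abs(hall)))
print("max |error| (MR term):  ", np.max(np.abs(mr)))
\end{verbatim}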
Figure~\ref{fig:Fig2} depicts the experimental values of the resonance linewidth shifts reported in Ref.~\onlinecite{SOT2} for three asymmetric nanostrips Ta/NM/FM/Ta, where FM denotes a [Co/Ni]$_{2}$/Co magnetic superlattice and NM is either Au [panel (a)], Pd [panel (b)] or Pt [panel (c)]. The corresponding measurements were taken at room temperature and FM layers of thickness $L=5.11$ nm were deposited during the fabrication of the nanostrips. In the same experiment, the current-induced shift was dramatically reduced for symmetric heterostructures.
The data points in Fig.~\ref{fig:Fig2} correspond to measurements with the static magnetic field lying within the plane defined by $\bm{\hat{z}}$ and the current ($\phi=0$), as depicted in the geometry of Fig.~\ref{fig:Fig1}. Red curves in Fig.~\ref{fig:Fig2} correspond to the second line of Eq.~\eqref{linewidth_PHE} with the overall factor $\eta\hbar/8e\gamma s\Gamma_{s} L$ and $\varrho_{\textrm{MR}}$ combined into a single fitting parameter, $\mathcal{A}_{xz}$. The formula reproduces well the angular dependence observed in the experiments for Au and Pd. In particular, our model captures the extra beating $\propto\sin 4\theta$ observed in the data, which goes beyond the behavior that may be naively expected for a magnetoresistance compatible with the reduced symmetry of the heterostructure, $(\bm{\hat{z}}\bm{\cdot}\bm{n})(\mathbf{j}_{c}\bm{\cdot}\bm{n})\propto\sin2\theta$. For Pt, our model does not yield the correct angular dependence for the resonance linewidth (neither does the fitting of the formula $\propto\sin2\theta$ shown as a dashed line), which suggests that the strong spin-orbit interaction in Pt modifies the physics beyond our simple spin-sink boundary condition.
The data for the case where the static field lies within the plane of the heterostructure ($\theta=\pi/2$) confirms this scenario (see Fig.~3a in Ref.~\onlinecite{SOT2}). Our model predicts no shift, which is indeed the case for Au within the experimental error, while for Pt heterostructures there is a sizable shift resulting from a more conventional mechanism rooted in the spin Hall effect (see Appendix~\ref{AppB}) originating in the heavy metal,\cite{exp4} which appears to provide the dominant contribution to the full angular dependence ($\propto\sin\phi$ in this configuration, with some corrections).
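For concreteness, the one-parameter fits behind the red curves of Fig.~\ref{fig:Fig2} amount to a linear least-squares determination of the single amplitude $\mathcal{A}_{xz}$; the Python sketch below illustrates the procedure on synthetic placeholder data generated from the model itself (the actual measured values are those of Ref.~\onlinecite{SOT2} and are not reproduced here).
\begin{verbatim}
import numpy as np

# Illustration of the single-parameter fit Delta B(theta) = A_xz*(sin 2t + 0.5 sin 4t).
# The "data" below are synthetic placeholders; the measured values are in Ref. [SOT2].
rng = np.random.default_rng(0)
theta = np.deg2rad(np.arange(0.0, 360.0, 15.0))
model = lambda t: np.sin(2*t) + 0.5*np.sin(4*t)
A_true = 1.5e-3                                   # Oe m/A, hypothetical
data = A_true*model(theta) + 1.0e-4*rng.standard_normal(theta.size)

A_fit = model(theta) @ data / (model(theta) @ model(theta))   # linear least squares
residual = data - A_fit*model(theta)
R2 = 1.0 - residual @ residual / ((data - data.mean()) @ (data - data.mean()))
print(f"A_xz = {A_fit:.2e} (input {A_true:.1e}),  R^2 = {R2:.3f}")
\end{verbatim}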
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{Fig2.pdf}
\caption{Values of the resonance linewidth shift for the lowest-frequency mode measured in Ta/Au/FM/Ta [panel (a)], Ta/Pd/FM/Ta [panel (b)] and Ta/Pt/FM/Ta [panel (c)] at room temperature, extracted from Ref.~\onlinecite{SOT2}. Red lines represent the fitting of the second line of Eq.~\eqref{linewidth_PHE} to the data with $\mathcal{A}_{xz}=\eta\hbar\varrho_{\textrm{MR}}/8e\gamma s\Gamma_{s}L$. We obtain $\mathcal{A}_{xz}\simeq 1.5\cdot10^{-3}$ Oe m/A for Au and $\mathcal{A}_{xz}\simeq1.3\cdot10^{-3}$ Oe m/A for Pd, with coefficients of determination $R^{2}_{\textrm{Au}}=0.93$ and $R^{2}_{\textrm{Pd}}=0.95$, respectively. For Pt in panel (c) we obtain $\mathcal{A}_{xz}\simeq 0.74\cdot10^{-3}$ Oe m/A (with $R^{2}_{\textrm{Pt}}=0.68$). The blue dashed line represents the fitting of $\sin2\theta$ to the data, with $R^2=0.88$ in that case (the same fit to Au and Pd data yields similar values of $R^2$).}
\label{fig:Fig2}
\end{figure}
\section{Discussion}
\label{sec:discussion}
One important property of our model is that it reproduces well the extra beating in the angular dependence of the linewidth shifts of Au and Pd heterostructures. Specifically, the model fixes the relative strength of the $\sin2\theta$ and $\sin 4\theta$ components in the signal, with only one global fitting parameter measuring the overall strength of the current-induced shift. The similar values of $\mathcal{A}_{xz}$ for both nanostrips ($\mathcal{A}_{xz}\simeq 1.5\cdot10^{-3}$ Oe m/A for Au and $\mathcal{A}_{xz}\simeq1.3\cdot10^{-3}$ Oe m/A for Pd) agree well with the basic ingredient of the model, namely, that the exact nature of the normal metals plays a secondary role beyond defining the boundary conditions for the coupled spin-charge diffusion in the ferromagnetic metal. The interface with Pt, however, seems to play a more active role in the magnetization dynamics. The same trend is confirmed by the data in the $xy$ plane of Ref.~\onlinecite{SOT2}. Our model predicts a vanishing shift, compatible with the data in Au heterostructures. There is, however, a sizable shift coming from the adjacent Pt film and also, to a smaller extent, in the Pd case. We conclude that the spin-orbit effects in Pt give rise to the more conventional external torques,\cite{Scott} while Au is dominated by our internal mechanism. Pd (which is electronically similar to Pt, but with a weaker spin-orbit interaction) seems to be somewhat intermediate and displays both mechanisms, with a stronger self-induced torque, as suggested by the good fit to the $xz$ plane data shown in Fig.~\ref{fig:Fig2}b.
The previous analysis and, in particular, the disagreement between our model and the experimental data for Pt make it clear that the present theory is not the most general one. Thus, it is worth discussing our results in the context of a more general phenomenology guided by symmetry, in the spirit of Ref.~\onlinecite{Garello_etal}. In order to make contact with our model, in the following construction, we incorporate the separation of time scales between the dynamics of the order parameter and the itinerant degrees of freedom, by considering only symmetry-allowed interfacial torques up to linear order in the current density $\mathbf{j}_{\textrm{c}}$. For simplicity, we discuss only asymmetric heterostructures with the principal axis oriented along $\bm{\hat{z}}$. As in Sec.~\ref{sec:theory}, the ferromagnets are assumed to be isotropic, while the presence of normal metals reduces the symmetry down to $C_{\infty v}$ (for the subsequent notation, see Ref.~\onlinecite{book}).
Torques must be orthogonal to the magnetic order $\bm{n}$, yielding two possibilities at the interface, up to a (pseudo)scalar prefactor:
\begin{subequations}
\label{eq:vectors}
\begin{align}
& \bm{\hat{z}}\times\bm{n},\\
& \bm{n}\times\bm{\hat{z}}\times\bm{n}.
\label{eq:vector-PHE}
\end{align}
\end{subequations}
The torques must also transform as $\bm{n}$, namely, the $z$ component must be a pseudoscalar ($A_2$ representation), and the rest of the components form a vector ($E_1$ representation). The two candidates in Eqs.~\eqref{eq:vectors} behave accordingly only if multiplied by a pseudoscalar (e.g., $n_z$, which leads to Eq.~\eqref{eq:torque}). Here, we consider all the possible pseudoscalars up to linear order in the current density. Decomposing $\mathbf{j}_{\textrm{c}}$ into its components collinear and orthogonal to the projection of $\bm{n}$, we have again two possibilities:\begin{subequations}
\label{eq:scalars}
\begin{align}
\label{eq:psedu-Hall}
& \bm{n}\cdot\mathbf{j}_{\textrm{c}},\\
& \left({\bm{\hat{z}}}\cdot\bm{n}\right)\left[\bm{n}\cdot\left(\bm{\hat{z}}\times\mathbf{j}_{\textrm{c}}\right)\right].
\label{eq:psedu-PHE}
\end{align}
\end{subequations}
The combination of these two pseudoscalars with the two vectors in Eq.~\eqref{eq:vectors} generate four groups of magnetic torques.\cite{book} In particular, the Hall torque, first term in Eq.~\eqref{eq:anti-damping}, follows directly by combining Eqs.~\eqref{eq:vector-PHE} and \eqref{eq:psedu-PHE}. Additional ST-FMR measurements with the static component of the field within the plane perpendicular to the current ($\phi=\pi/2$ in our expressions, $yz$ plane in Fig.~\ref{fig:Fig1}) would directly test this Hall contribution, as described by Eq.~\eqref{eq:yz} in our model.
In general, we should include higher powers in ${\bm{\hat{z}}}\cdot\bm{n}$ (only even powers are allowed by symmetry) weighted by different phenomenological constants; for example, \begin{align}
\hspace{-0.2cm} \left(\bm{n}\cdot\mathbf{j}_{\textrm{c}}\right)\left[A+B(\bm{\hat{z}}\cdot\bm{n})^2+\dots\right]\bm{n}\times\bm{\hat{z}}\times\bm{n}.
\end{align}
The magnetoresistance torque, second term in Eq.~\eqref{eq:anti-damping}, corresponds to the case of $A=0$, which is specific to our transport/torque model. For a more general expansion as in Ref.~\onlinecite{Garello_etal}, we should also allow for geometrical factors of the form $1/[1-(\bm{\hat{z}}\cdot\bm{n})^2]$, as is the case, for example, for the usual spin Hall torque generated by spin accumulation in an adjacent normal metal, see Appendix~\ref{AppB}.
In conclusion, we have presented a theory for self-induced torques in NM1/FM/NM2 heterostructures. The model relies on the separation of time scales between the magnetization and electron dynamics. The latter is described by diffusion equations for the charge and longitudinal spin coupled via constitutive relations that include the anomalous Hall and AMR effects in the bulk of the ferromagnet generated by the static component of the magnetization. Both effects produce steady-state spin accumulations at the interfaces with the normal metals which, in turn, exert a net torque on the order parameter if uncompensated. The damping-like interfacial torques are manifested through characteristic model-specific beatings in the angular dependence of the ST-FMR resonance linewidths. Other signatures include the dependence on the thickness of the heterostructure and the temperature dependence contained in $\tau_s$ and the coupling $\eta$.
\begin{acknowledgments}
The authors are grateful to Eric Montoya and Ilya Krivorotov for sharing their data before publication and bringing this problem to our attention. This work was supported by the U.S. Department of Energy, Office of Basic Energy Sciences under Award No.~DE-SC0012190.
\end{acknowledgments}
The categorification of the Hopf algebra of symmetric functions $\mathrm{Sym}$ by the representation theory of the symmetric group is a foundational result in combinatorial representation theory. There are other classical cases with similar constructions---as outlined in Macdonald \cite{Mac}---coming from wreath products and the finite general linear groups. However, all these examples give Hopf algebras that are essentially copies of $\mathrm{Sym}$ (or a PSH algebra in the language of Zelevinsky \cite{Ze}). It has traditionally been thought that to categorify other Hopf algebras one may need to dispense with towers of groups in favor of towers of algebras (especially in light of results like \cite{BLL}).
The paper \cite{AIM} took a different approach to towers of groups by replacing the full representation theory with a supercharacter theory and the traditional induction/restriction functor pairing with new functor combinations. In this way, \cite{AIM} was able to categorify the symmetric functions in noncommuting variables $\mathrm{NCSym}$, and a similar approach in \cite{AT2} found a categorification of a Catalan Hopf subalgebra. The underlying algebraic structure turns out to be the shadow of a Hopf monoid \cite{ABT}, a generalization that better captures the underlying representation theory.
While \cite{AIM} and \cite{AT2} began with the representation theory of finite unipotent uppertriangular groups $\mathrm{UT}_n(\FF_q)$ and found a Hopf structure, this paper was motivated by the opposite approach. That is, we wanted to find a tower of groups with an associated supercharacter theory that would give us a non-commutative and non-cocommutative Hopf algebra. As a test case, we selected the Malvenuto--Reutenauer Hopf algebra
$$\mathrm{FQSym}=\bigoplus_{n\geq 0} \mathrm{FQSym}_n,$$
a graded self-dual Hopf algebra where each graded degree satisfies $\dim(\mathrm{FQSym}_n)=n!$.
The Malvenuto--Reutenauer Hopf algebra was introduced by Malvenuto in \cite{Malv} with a basis called the fundamental basis. It contains many well-known Hopf algebras, such as the Hopf algebra of symmetric functions $\mathrm{Sym}$ \cite{Mac}, the Hopf algebra of non-commutative symmetric functions $\mathrm{NSym}$ \cite{GKal}, Stembridge's peak algebra $\mathfrak{P}$ \cite{Stem97}, and the Loday--Ronco Hopf algebra of planar trees $\mathrm{LR}$ \cite{LR98}. Moreover, $\mathrm{FQSym}$ is a natural lift of the Hopf algebra of quasi-symmetric functions $\mathrm{QSym}$ \cite{Ges}. Aguiar and Sottile \cite{AS05} studied the structure of the Malvenuto--Reutenauer Hopf algebra and produced a new basis, called the monomial basis, related to the fundamental basis by M\"{o}bius inversion on the weak order on the symmetric groups. They give a geometric description of the product structure constants in the monomial basis. This paper studies a similar basis using a different order on permutations that arises naturally in our setting (see Section \ref{PermutationCharacterStructure}).
A supercharacter theory is a framework developed by Diaconis--Isaacs \cite{DI} to study the representation theory of a group without requiring full knowledge of the irreducible characters. In general, while groups have many such theories, there are not many known constructions that work for arbitrary groups. This paper uses the normal lattice supercharacter theory of a finite group $G$ developed by Aliniaeifard in \cite{Al}. This theory assigns a supercharacter theory to every sublattice of normal subgroups of $G$; in particular, any set of combinatorial objects that forms a lattice might arise in this way. Our first goal was to find a tower of groups with associated sublattices of normal subgroups indexed by permutations. We further decided to focus on abelian groups, since these tend to have more normal subgroups, and we settled on the Lie algebra $\mathfrak{ut}_n(\FF_q)$ of $\mathrm{UT}_n(\FF_q)$ viewed as a finite additive group. As an elementary abelian group $\mathfrak{ut}_n(\FF_q)$ has a fairly uninspiring group structure, but the normal lattice supercharacter theory makes the group far more combinatorially compelling.
The main result (Corollary \ref{MainIsomorphism}) of Section \ref{HopfAlgebraIsomorphism} finds a Hopf algebra isomorphism between $\mathrm{FQSym}$ and a representation theoretic algebra
$$\mathrm{scf}(\mathfrak{ut})=\bigoplus_{n\geq 0}\mathrm{scf}(\mathfrak{ut}_n).$$
A key component of this isomorphism is to identify the functors $\mathrm{Exfl}$ and $\mathrm{Dela}$ that encode the Hopf structure of $\mathrm{FQSym}$ in $\mathrm{scf}(\mathfrak{ut})$. Here, we take advantage of the feature that every supercharacter theory identifies two canonical bases: the superclass identifier basis and the supercharacter basis. By computing the structure constants for the supercharacter basis in Theorems \ref{GoingUpCombinatorics} and \ref{GoingDownCombinatorics}, we deduce an isomorphism that sends the supercharacter basis of $\mathrm{scf}(\mathfrak{ut})$ to the fundamental basis of $\mathrm{FQSym}$.
In Section \ref{PermutationCharacterStructure}, we examine the structure constants for a third canonical basis that arises in the normal lattice supercharacter theory construction. As far as we know, this gives a new basis for $\mathrm{FQSym}$ that nevertheless has a nice combinatorial structure. We use the representation theoretic functors to compute the coproduct in Theorem \ref{PermutationCharacterCoproduct}, and we believe a more combinatorial approach would be significantly more complicated; here the coefficients are in the set $\{0,1\}$. With slightly more effort, Theorem \ref{PermutationCharacterProduct} computes the product in this basis, and the coefficients are in the set $\{-1,0,1\}$.
While we categorify $\mathrm{FQSym}$, we do not supply much evidence that our construction is canonical; in fact, it seems likely that it is one of many possible choices for a tower of groups. However, in Section 6 we describe an implied Hopf monoid associated to $\mathrm{FQSym}$. In any case, this paper should be viewed as more of a ``proof of concept'' for the method outlined above. To give a more firm connection one would ideally interpret all the relations between $\mathrm{FQSym}$ and other Hopf algebras via functors on the corresponding towers of groups, but at present this remains largely unexplored.
\vspace{.5cm}
\noindent\textbf{Acknowledgements.}
The second author was supported by a Simons Foundation collaboration grant 426594.
\section{Preliminaries}
In this section we set up our notation for permutation combinatorics and introduce the Malvenuto--Reutenauer Hopf algebra. We then review supercharacter theory fundamentals.
\subsection{Permutations}\label{Permutations}
For $n\in \mathbb{Z}_{\geq 0}$, let $S_n$ be the symmetric group on the set $\{1,2,\ldots, n\}$. In this paper, we will use a number of different ways to represent elements of this group (see also Stanley \cite{StV1}).
\begin{description}
\item[One line notation.] For $w\in S_n$, we write the anagram of $12\cdots n$ given by $w(1)w(2)\cdots w(n)$.
\item[Inversion table.] The \emph{inversion table} of $w\in S_n$ is the sequence $\iota(w)=(\iota_1(w),\iota_2(w),\ldots,\iota_n(w))$, where
$$\iota_k(w)=\#\{i<w^{-1}(k)\mid w(i)>k\}.$$
In one line notation of $w$, $\iota_k(w)$ is the number of integers to the left of $k$ that are bigger than $k$.
For example, $\iota(314625)=(1,3,0,0,1,0)$. Note that for each $k$, $0\leq \iota_k(w)\leq n-k$.
\item[Code.] The \emph{code} of $w\in S_n$ is the sequence $\kappa(w)=(\kappa_1(w),\kappa_2(w),\ldots,\kappa_n(w))$, where
$$\kappa_k(w)=\#\{i<w(k)\mid w^{-1}(i)>k\}.$$
In one line notation of $w$, $\kappa_k(w)$ is the number of integers to the right of $w(k)$ that are less than $w(k)$.
In our example, $\kappa(314625)=(2,0,1,2,0,0)$. Note that for each $k$, $0\leq \kappa_k(w)\leq n-k$ and $\kappa(w)=\iota(w^{-1})$.
\item[Rothe diagram.] The \emph{Rothe diagram} of $w\in S_n$ is the subset
$$R_w=\{(i,j)\mid w(i)>j, w^{-1}(j)>i\}\subseteq \{(i,j)\mid 1\leq i,j\leq n\}.$$
In our running example, the $R_{314625}$ is the set of coordinates marked by $\circ$ in the decorated matrix
$$\left[\begin{tikzpicture}[scale=.5,baseline=1.65cm]
\foreach \i/\j in {1/3,2/1,3/4,4/6,5/2,6/5}
\draw (\j,1) -- (\j,7-\i) -- (6,7-\i);
\foreach \i/\j in {1/1,1/2,3/2,4/2,4/5}
\node at (\j,7-\i) {$\circ$};
\end{tikzpicture}\right]$$
Note that
$$\iota_k(w)=\#\{(i,k)\in R_w\}\quad \text{and}\quad \kappa_k(w)=\#\{(k,j)\in R_w\}.$$
\end{description}
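All three encodings are easily computed from the one-line notation; the following short Python sketch (included only as an illustration) reproduces the running example $w=314625$.
\begin{verbatim}
# Inversion table, code, and Rothe diagram from one-line notation (values 1,...,n).
def inversion_table(w):
    n = len(w)
    pos = {w[i]: i for i in range(n)}             # 0-indexed position of each value
    return [sum(1 for i in range(pos[k]) if w[i] > k) for k in range(1, n + 1)]

def code(w):
    n = len(w)
    return [sum(1 for j in range(k + 1, n) if w[j] < w[k]) for k in range(n)]

def rothe(w):
    n = len(w)
    pos = {w[i]: i + 1 for i in range(n)}         # 1-indexed positions
    return {(i, j) for i in range(1, n + 1) for j in range(1, n + 1)
            if w[i - 1] > j and pos[j] > i}

w = [3, 1, 4, 6, 2, 5]
print(inversion_table(w))    # [1, 3, 0, 0, 1, 0]
print(code(w))               # [2, 0, 1, 2, 0, 0]
print(sorted(rothe(w)))      # [(1, 1), (1, 2), (3, 2), (4, 2), (4, 5)]
\end{verbatim}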
There is a natural poset on $\mathbb{Z}_{\geq 0}^n$ given by
\begin{equation}\label{PermutationSequenceOrder}
v\geq u \qquad \text{if and only if} \qquad \text{$v_i\geq u_i$ for all $1\leq i\leq n$}.
\end{equation}
This poset defines two distributive lattices on permutations: one applies the order to inversion tables, and one applies the order to codes. For example, with $S_4$, we obtain
$$\begin{tikzpicture}
\foreach \x/\y/\z in {0/6/4321,
-2/5/3421,0/5/4231,2/5/4312,
-4/4/3241,-2/4/4213,0/4/3412,2/4/2431,4/4/4132,
-5/3/3214,-3/3/2413,-1/3/2341, 1/3/4123, 3/3/3142, 5/3/1432,
-4/2/2314,-2/2/3124,0/2/2143,2/2/1423,4/2/1342,
-2/1/2134,0/1/1324,2/1/1243,0/0/1234,
}
\node (\z) at (\x,\y) {$\z$};
\foreach \a/\b in {2134/1234,1324/1234,1243/1234,3124/2134,2314/2134,2143/2134,3124/1324,2143/1243,1342/1324,1423/1324,1423/1243,2341/2314,3214/2314,3214/3124,3142/3124,3142/1342,2413/2314,2413/2143,4123/3124,4123/2143,4123/1423,1432/1342,1432/1423,3241/3214,3241/2341,2431/2341,2431/2413,3412/3214,3412/3142,4213/2413,4213/4123,4213/3214,4132/1432,4132/4123,4132/3142,4231/4213,4231/2431,4231/3241,3421/3412,3421/3241,4312/4132,4312/4213,4312/3412,4321/4231,4321/3421,4321/4312}
\draw[thick,gray] (\a) -- (\b);
\foreach \a/\b in {2134/1234,1324/1234,1243/1234,2314/2134,2314/1324,3124/2134,2143/2134,2143/1243,1342/1324,1342/1243,1423/1324,3214/2314,3214/3124,2341/2314,2341/2143,2341/1342,4123/3124,2413/2314,2413/1423,3142/3124,3142/2143,1432/1342,1432/1423,3241/3214,3241/2341,3241/3142,4213/3214,4213/4123,3412/3214,3412/2413,2431/2341,2431/2413,2431/1432,4132/4123,4132/3142,3421/3241,3421/3412,3421/2431,4312/4213,4312/3412,4231/3241,4231/4213,4231/4132,4321/3421,4321/4312,4321/4231}
\draw[dotted,thick] (\a) -- (\b);
\node at (7,4.5) {inversion table};
\node at (7.8,3.5) {code};
\draw[thick,gray] (8.5,4.5) -- (10,4.5);
\draw[dotted,thick] (8.5,3.5) -- (10,3.5);
\end{tikzpicture}$$
\subsection{The Malvenuto--Reutenauer Hopf algebra $\mathrm{FQSym}$}
The Malvenuto--Reutenauer algebra is a graded Hopf algebra with underlying vector space
$$\mathrm{FQSym}=\bigoplus_{n\in \mathbb{Z}_{\geq 0}} \CC\textnormal{-span}\{F_w\mid w\in S_n\}.$$
To define an algebra structure on $\mathrm{FQSym}$, we define a notion of shifted shuffle. Given $v\in S_m$, $w\in S_n$ and $A\subseteq\{1,2,\ldots,m+n\}$ with $|A|=n$, define the \emph{$A$-shuffle} $v\shuffle_A w\in S_{m+n}$ by
$$(v\shuffle_A w)(i)=\left\{\begin{array}{ll} v(i-\#\{a<i\mid a\in A\}) & \text{if $i\notin A$}\\
w(\#\{a\leq i\mid a\in A\})+m & \text{if $i\in A$.}
\end{array}\right.$$
For example,
$$31542\shuffle_{\{1,4,5,8\}} 3124 = \overset{{\color{gray} 1}}{8}31\overset{{\color{gray}4}}{6}\overset{{\color{gray}5}}{7}54\overset{{\color{gray}8}}{9}2.$$
Define
$$v\shuffle w=\{v\shuffle_A w\mid A\subseteq\{1,2,\ldots, m+n\} \text{ with } |A|=n\}.$$
The product on $\mathrm{FQSym}$ is given by
\begin{equation}\label{FQSymProduct}
F_vF_w=\sum_{y\in v\shuffle w} F_y.
\end{equation}
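For illustration, the $A$-shuffles and the resulting product of fundamental basis elements can be computed directly from the definition; the Python sketch below reproduces the example above.
\begin{verbatim}
from itertools import combinations

# A-shuffle of v in S_m and w in S_n (one-line notation), A a subset of {1,...,m+n}
# with |A| = n, following the definition above.
def a_shuffle(v, w, A):
    m, A = len(v), set(A)
    return [w[sum(1 for a in A if a <= i) - 1] + m if i in A
            else v[i - sum(1 for a in A if a < i) - 1]
            for i in range(1, m + len(w) + 1)]

def shuffle_set(v, w):
    N = len(v) + len(w)
    return {tuple(a_shuffle(v, w, A)) for A in combinations(range(1, N + 1), len(w))}

print(a_shuffle([3,1,5,4,2], [3,1,2,4], {1,4,5,8}))  # [8, 3, 1, 6, 7, 5, 4, 9, 2]
# F_{12} F_{1} = F_{123} + F_{132} + F_{312}: the three shifted shuffles of 12 and 1.
print(sorted(shuffle_set([1,2], [1])))
\end{verbatim}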
To define a co-algebra structure on $\mathrm{FQSym}$, we define a standardized deconcatenation. For $w\in S_{m+n}$, the \emph{$m$-standardized deconcatenation} of $w$ is the pair $(w_{\leq m},w_{>m})\in S_m\times S_n$, where
\begin{align*}
w_{\leq m}(i) &=w(i)-\#\{w(j)<w(i)\mid m<j\leq m+n\}\\
w_{>m}(j) & =w(j+m)-\#\{w(i)<w(j+m)\mid 1\leq i\leq m\}.
\end{align*}
For example, the $5$-standardized deconcatenation of $319825647$ is $(31542,2314)$.
The coproduct on $\mathrm{FQSym}$ is given by
\begin{equation}\label{FQSymCoProduct}
\Delta(F_w)=\sum_{m=0}^n F_{w_{\leq m}}\otimes F_{w_{>m}}\quad \text{for $w\in S_n$}.
\end{equation}
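Similarly, the standardized deconcatenations entering the coproduct can be computed directly; the Python sketch below (illustrative only) recovers the example above and lists all terms of $\Delta(F_w)$ for that $w$.
\begin{verbatim}
# m-standardized deconcatenation of w in S_{m+n}, following the formulas above.
def deconcatenate(w, m):
    prefix, suffix = w[:m], w[m:]
    left  = [a - sum(1 for b in suffix if b < a) for a in prefix]
    right = [b - sum(1 for a in prefix if a < b) for b in suffix]
    return left, right

w = [3, 1, 9, 8, 2, 5, 6, 4, 7]
print(deconcatenate(w, 5))       # ([3, 1, 5, 4, 2], [2, 3, 1, 4])

# The coproduct of F_w is the sum over all deconcatenation points m = 0, ..., n.
for m in range(len(w) + 1):
    print(m, deconcatenate(w, m))
\end{verbatim}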
\subsection{Supercharacter theories}\label{SupercharacterTheories}
Supercharacter theories were introduced by \cite{DI} as a means to get representation theoretic control of groups with difficult representation theories (e.g., the Sylow $p$-subgroups of the finite general linear groups $\mathrm{GL}_n(\FF_p)$). However, \cite{FGK} also showed that one may use these theories to make the representation theory of less exciting groups (e.g., abelian groups) more compelling.
A \emph{supercharacter theory} $(\mathtt{Cl},\mathtt{Ch})$ of a finite group $G$ is a pair of partitions where $\mathtt{Cl}$ is a partition of the group and $\mathtt{Ch}$ is a partition of the irreducible characters $\mathrm{Irr}(G)$ such that
\begin{enumerate}
\item[(SC1)] $\{1\}\in \mathtt{Cl}$,
\item[(SC2)] $|\mathtt{Cl}|=|\mathtt{Ch}|$,
\item[(SC3)] For each $A\in \mathtt{Ch}$,
$$\sum_{\psi\in A}\psi(1)\psi(g)=\sum_{\psi\in A}\psi(1)\psi(h)$$
whenever $g,h\in K$ for some $K\in \mathtt{Cl}$.
\end{enumerate}
We typically call the blocks of $\mathtt{Cl}$ \emph{superclasses}.
In fact, condition (SC2) and (SC3) imply that the space of functions
$$\mathrm{scf}(G)=\{\gamma:G\rightarrow \CC\mid \gamma(g)=\gamma(h)\text{ whenever $g$ and $h$ are in the same superclass}\}$$
has two distinguished bases:
\begin{description}
\item[Superclass identifier functions.] For each $K\in \mathtt{Cl}$, define
$$\delta_K(g)=\left\{\begin{array}{ll} 1 & \text{if $g\in K$,}\\ 0 & \text{otherwise.}\end{array}\right.$$
\item[Supercharacters.] For each $A\in \mathtt{Ch}$, the corresponding \emph{supercharacter} is the function
$$\chi^A=\sum_{\psi\in A}\psi(1)\psi.$$
\end{description}
A given finite group typically has many supercharacter theories, but we will focus on a construction developed in \cite{Al} that works particularly well for groups with many normal subgroups (e.g., noncyclic abelian groups). We will refer to such a theory as a \emph{normal lattice} supercharacter theory.
\begin{theorem}\cite[Theorem 3.4]{Al}
Let $\mathcal{L}$ be a set of normal subgroups of a group $G$ containing both $G$ and $\{1\}$ such that for $M, N\in \mathcal{L}$, we have $M\cap N, MN\in \mathcal{L}$.
\begin{enumerate}
\item[(a)] Let $\mathtt{Cl}$ be the partition of $G$ obtained by placing $g,h\in G$ in the same block if and only if the smallest normal subgroup in $\mathcal{L}$ containing $g$ is also the smallest one containing $h$.
\item[(b)] Let $\mathtt{Ch}$ be the partition of $G$ obtained by placing $\psi,\tau\in \mathrm{Irr}(G)$ in the same block if and only if the largest normal subgroup in $\mathcal{L}$ contained in the kernel of $\psi$ is also the largest subgroup contained in the kernel of $\tau$.
\end{enumerate}
Then $(\mathtt{Cl},\mathtt{Ch})$ is a supercharacter theory of $G$.
\end{theorem}
A feature of this result is that (a) associates a normal subgroup $N$ to each superclass (and we will refer to the corresponding identifier function as $\delta_N$), and (b) associates a normal subgroup $N$ to each supercharacter (referred to as $\chi^N$). A feature of a normal lattice theory is the existence of two new distinguished bases of $\mathrm{scf}(G)$.
\begin{description}
\item[Normal subgroup identifier functions.] For each $N\in \mathcal{L}$, define
$$\bar\delta_N(g)=\left\{\begin{array}{ll} 1 & \text{if $g\in N$,}\\ 0 & \text{otherwise.}\end{array}\right.$$
\item[Permutation characters.] For each $N\in \mathcal{L}$, define
\begin{equation}\label{PermutationCharacterBasis}
\bar\chi^N=\tr\Big(\cdot, \mathrm{Ind}_N^G({1\hspace{-.14cm} 1})\Big),
\end{equation}
where ${1\hspace{-.14cm} 1}$ is the trivial module of $G$.
\end{description}
\begin{remark}
In fact, these two bases are the same (up to scaling) since
$$\bar\delta_N=\frac{|N|}{|G|}\bar\chi^N.$$
However, philosophically one comes from classes and the other from modules. In particular, for $N\in \mathcal{L}$,
$$\bar\delta_N=\sum_{M\subseteq N}\delta_M\quad \text{and} \quad \bar\chi^N=\sum_{O\supseteq N} \chi^O.$$
\end{remark}
\section{The vector space $\mathrm{scf}(\mathfrak{ut}_n)$}
In this section, we construct the main vector space $\mathrm{scf}(\mathfrak{ut}_n)$ using a sublattice of normal subgroups of $\mathfrak{ut}_n$ that gives a normal lattice supercharacter theory. We then introduce a somewhat mysterious involution on this space that will be important for the Hopf algebra structure.
\subsection{A normal lattice theory for $\mathfrak{ut}_n$}
Fix a finite field $\FF_q$ with $q$ elements and let $M_n(\FF_q)$ denote the algebra of $n\times n$ matrices with entries in $\FF_q$. Define the nilpotent subalgebra
$$\mathfrak{ut}_n=\{x\in M_n(\FF_q)\mid x_{ij}\neq 0\text{ implies } i<j\}.$$
If $q$ is a prime $p$-power, then the additive group of $\mathfrak{ut}_n$ is an elementary abelian $p$-group. For each $w\in S_n$, define
\begin{equation}
\mathfrak{ut}_w =\{x\in \mathfrak{ut}_n\mid x_{ij}\neq 0 \text{ implies } 0< j-i\leq \iota_i(w)\}.
\end{equation}
For example, if $w=314625$, then $\iota(w)=(1,3,0,0,1,0)$ and
$$\mathfrak{ut}_w=\left[\begin{array}{cccccc}
0 & * & 0 & 0 & 0 & 0\\
0 & 0 & * & * & * & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & * \\
0 & 0 & 0 & 0 & 0 & 0\\
\end{array}\right]\subseteq
\left[\begin{array}{cccccc}
0 & * & * & * & * & *\\
0 & 0 & * & * & * & * \\
0 & 0 & 0 & * & * & * \\
0 & 0 & 0 & 0 & * & * \\
0 & 0 & 0 & 0 & 0 & * \\
0 & 0 & 0 & 0 & 0 & 0\\
\end{array}\right]=\mathfrak{ut}_6.$$
The subgroup containment lattice of $\{\mathfrak{ut}_w\mid w\in S_n\}$ gives the inversion table order (\ref{PermutationSequenceOrder}) on $S_n$. The following result re-interprets the covers in the inversion table order directly on permutations.
\begin{proposition}\label{CoveringInversions}
The permutation $w\in S_n$ covers $v\in S_n$ if and only if there exists $i<k$ such that
\begin{itemize}
\item $w(j)=v(j)$ for all $j\notin \{i,k\},$
\item $v(i)=w(k)<w(i)=v(k)$,
\item for each $i<j<k$, $w(j)<w(k)$.
\end{itemize}
\end{proposition}
\begin{proof}
Since the inversion tables of elements of $S_n$ are exactly the sequences $(\iota_1,\ldots,\iota_n)$ with $0\leq \iota_k\leq n-k$, the permutation $w$ covers $v$ in the inversion table order if and only if there exists $b$ such that $\iota_b(w)=\iota_b(v)+1$ and $\iota_c(w)=\iota_c(v)$ for all $c\neq b$.
Suppose first that $w$ covers $v$, and set $i=v^{-1}(b)$, so that
$$\iota_b(v)=\#\{a<i\mid v(a)>b\}.$$
Let $k$ be minimal such that $k>i$ and $v(i)<v(k)$; such a $k$ exists, since otherwise $\iota_b(v)$ would already be maximal and $\iota_b(w)=\iota_b(v)+1$ could not occur. By the minimality of $k$, we have $v(k)>v(i)>v(j)$ for all $i<j<k$, so if $w'$ is the permutation obtained by switching $v(i)$ and $v(k)$, then $\iota_c(w')=\iota_c(v)$
for all $c\neq b$ and $\iota_b(w')=\iota_b(v)+1$. Since a permutation is determined by its inversion table, $w'=w$, and the pair $(i,k)$ satisfies the conditions in the statement. Conversely, a switch as in the statement increases $\iota_{v(i)}$ by exactly one and leaves every other entry of the inversion table unchanged, so $w$ covers $v$.
\end{proof}
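Proposition~\ref{CoveringInversions} can also be verified by brute force for small $n$; the following Python sketch (illustrative only) compares the combinatorial description with the covers of the componentwise order on inversion tables for $n=4$.
\begin{verbatim}
from itertools import permutations

# Brute-force comparison, for n = 4, of covers in the inversion-table order with the
# swap description of Proposition (CoveringInversions).
def inv_table(w):
    n = len(w)
    pos = {w[i]: i for i in range(n)}
    return tuple(sum(1 for i in range(pos[k]) if w[i] > k) for k in range(1, n + 1))

def leq(u, v):
    return all(a <= b for a, b in zip(inv_table(u), inv_table(v)))

def covers(v, w, perms):                 # does w cover v in the inversion-table order?
    if v == w or not leq(v, w):
        return False
    return not any(x not in (v, w) and leq(v, x) and leq(x, w) for x in perms)

def covers_swap(v, w):                   # the description in the Proposition
    n = len(w)
    return any(w[k] == v[i] and w[i] == v[k] and v[i] < v[k]
               and all(w[j] == v[j] for j in range(n) if j not in (i, k))
               and all(w[j] < w[k] for j in range(i + 1, k))
               for i in range(n) for k in range(i + 1, n))

perms = list(permutations(range(1, 5)))
print(all(covers(v, w, perms) == covers_swap(v, w) for v in perms for w in perms))  # True
\end{verbatim}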
Let $(\mathtt{Cl}_n,\mathtt{Ch}_n)$ be the normal lattice supercharacter theory of $\mathfrak{ut}_n$ associated with the lattice
$$\{\mathfrak{ut}_w\mid w\in S_n\}.$$
The superclasses are given by
\begin{equation}\label{Superclasses}
\mathtt{Cl}_w=\mathfrak{ut}_w-\bigcup_{\mathfrak{ut}_v\subset \mathfrak{ut}_w} \mathfrak{ut}_v,
\end{equation}
which is a nonempty set for each $w\in S_n$. As in Section \ref{SupercharacterTheories}, the vector space of superclass functions
$$\mathrm{scf}(\mathfrak{ut}_n)=\{\psi:\mathfrak{ut}_n\rightarrow \CC\mid \psi(x)=\psi(y), \text{ if $x,y\in \mathtt{Cl}_w$ for some $w\in S_n$}\}$$
has four distinguished bases
$$\{\delta_w\mid w\in S_n\},\quad \{\bar\delta_w\mid w\in S_n\},\quad \{\bar\chi^w\mid w\in S_n\},\quad \text{and}\quad \{\chi^w\mid w\in S_n\},$$
where it is notationally convenient to label each basis element with the underlying permutation $w$ rather than the normal subgroup $\mathfrak{ut}_w$. From \cite[Corollary 3.4]{AT}, we obtain a supercharacter formula.
\begin{proposition}
For $w,v\in S_n$ and $x\in \mathtt{Cl}_w$,
$$\chi^v(x)=\left\{\begin{array}{@{}ll}
\frac{|\mathfrak{ut}_n|}{|\mathfrak{ut}_v|}\left(\frac{q-1}{q}\right)^{\#\{j\mid \iota_j(v)<n-j\}}\left(\frac{-1}{q-1}\right)^{\#\{j\mid \iota_j(w)=\iota_j(v)+1\}} & \text{if $\iota_j(w)\leq \iota_j(v)+1$ for all $1\leq j\leq n$,}\\ 0 & \text{otherwise.}
\end{array}\right.$$
\end{proposition}
We may re-interpret this result as an entry by entry factorization. Let ${1\hspace{-.14cm} 1}_{\FF_q^+}$ be the trivial character of $\FF_q^+$ and $\mathrm{reg}_{\FF_q^+}$ be the regular character of $\FF_q^+$.
\begin{corollary}\label{SupercharacterFactorization}
For $v\in S_n$ and $x\in \mathfrak{ut}_n$,
$$\chi^v(x)=\prod_{1\leq i<j\leq n\atop j-i\leq \iota_i(v)} {1\hspace{-.14cm} 1}_{\FF_q^+}(x_{ij}) \prod_{1\leq i<j\leq n\atop j-i= \iota_i(v)+1} (\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(x_{ij}) \prod_{1\leq i<j\leq n\atop j-i> \iota_i(v)+1}\mathrm{reg}_{\FF_q^+}(x_{ij}).
$$
\end{corollary}
\subsection{A $\star$-duality}
The involution
$$\begin{array}{rccc} \star: & \mathrm{scf}(\mathfrak{ut}_n) & \longrightarrow & \mathrm{scf}(\mathfrak{ut}_n)\\
& \sum_{w\in S_n} c_w\chi^w & \mapsto & \sum_{w\in S_n} c_w \chi^{w^{-1}}\end{array}$$
has some stability properties on the underlying subgroups.
\begin{proposition} For $w\in S_n$,
$$\mathfrak{ut}_n/\mathfrak{ut}_{w^{-1}}\cong \mathfrak{ut}_n/\mathfrak{ut}_w.$$
\end{proposition}
\begin{proof}
We claim that $n-j-\iota_j(w^{-1})=n-w(j)-\iota_{w(j)}(w)$ for all $1\leq j\leq n$. Note that
\begin{align*}
n-j-\iota_j(w^{-1}) &=n-j-\#\{i<w(j)\mid w^{-1}(i)>j\}\\
&=\#\{i>w(j)\mid w^{-1}(i)>j\}\\
&=n-w(j)-\#\{i<j\mid w(i)>w(j)\}\\
&=n-w(j)-\iota_{w(j)}(w).
\end{align*}
The result now follows from the definition of $\mathfrak{ut}_w$ and its implicit complement in $\mathfrak{ut}_n$.
\end{proof}
\begin{remark}
The proof of the proposition in fact shows that inverting $w$ permutes the entries in the vector
\begin{equation} \label{DualInversionTable}
\iota^\vee(w)=(n-1-\iota_1(w),n-2-\iota_2(w),\ldots, n-n-\iota_n(w)).
\end{equation}
Thus, $(\chi^{w})^{\star}(1)=\chi^{w^{-1}}(1)=\chi^w(1)$, so the $\star$-involution preserves the degrees of the supercharacters. On the other hand, while $|\mathfrak{ut}_w|=|\mathfrak{ut}_{w^{-1}}|$, the $\star$-involution does not preserve superclass size. For example,
$$|\mathfrak{ut}_{(312)}|=\left|\left[\begin{array}{ccc} 0 & \ast & 0\\ 0 & 0 & \ast\\ 0 & 0 &0\end{array}\right]\right|=\left|\left[\begin{array}{ccc} 0 & \ast & \ast\\ 0 & 0 & 0\\ 0 & 0 &0\end{array}\right]\right|=|\mathfrak{ut}_{(231)}|=q^2,$$
but by (\ref{Superclasses}) the sizes of the corresponding superclasses are $(q-1)^2$ and $q(q-1)$, respectively.
\end{remark}
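Both observations in the remark are easy to test computationally: assigning each $x\in\mathfrak{ut}_3(\FF_q)$ to the smallest $\mathfrak{ut}_w$ containing it recovers the superclass sizes above, and the entries of $\iota^\vee$ are indeed permuted under inversion. The following Python sketch (illustrative only, with $q=5$) does both.
\begin{verbatim}
from itertools import product, permutations

def inv_table(w):
    n = len(w)
    pos = {w[i]: i for i in range(n)}
    return tuple(sum(1 for i in range(pos[k]) if w[i] > k) for k in range(1, n + 1))

# Superclass sizes in ut_n(F_q): each x is assigned to the inversion table whose
# i-th entry is max{ j-i : x_ij != 0 }, i.e. to the smallest ut_w containing x.
def superclass_sizes(n, q):
    coords = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    sizes = {inv_table(w): 0 for w in permutations(range(1, n + 1))}
    for entries in product(range(q), repeat=len(coords)):
        iota = [0]*n
        for (i, j), t in zip(coords, entries):
            if t:
                iota[i - 1] = max(iota[i - 1], j - i)
        sizes[tuple(iota)] += 1
    return sizes

q = 5
sizes = superclass_sizes(3, q)
print(sizes[inv_table((3, 1, 2))], (q - 1)**2)    # 16 16
print(sizes[inv_table((2, 3, 1))], q*(q - 1))     # 20 20

# Inversion permutes the entries of iota^vee(w) = (n - k - iota_k(w))_k:
w = (3, 1, 4, 6, 2, 5)
winv = tuple(w.index(j) + 1 for j in range(1, len(w) + 1))
dual = lambda u: sorted(len(u) - k - 1 - t for k, t in enumerate(inv_table(u)))
print(dual(w) == dual(winv))                      # True
\end{verbatim}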
\section{The Hopf algebra $\mathrm{scf}(\mathfrak{ut})$} \label{HopfAlgebraIsomorphism}
The goal of this section is to define functors that give a Hopf structure to the space
$$\mathrm{scf}(\mathfrak{ut})=\bigoplus_{n\geq 0}\mathrm{scf}(\mathfrak{ut}_n).$$
By computing the structure constants on the supercharacter basis, we deduce that the Hopf algebra is in fact isomorphic to the Malvenuto--Reutenauer Hopf algebra $\mathrm{FQSym}$.
\subsection{Functorial subgroups}
We begin by defining a family of subgroups of $\mathfrak{ut}_n$ that depend on a subset $A\subseteq \{1,2,\ldots, n\}$. First define a partition of $\{1\leq i<j\leq n\}$ with blocks
\begin{align*}
U_A&=\{(i,j)\mid 1\leq i<j\leq n, i\in A, j> n-\#\{a\in A\mid a>i\}\}\\
L_A&=\{(i,j)\mid 1\leq i<j\leq n, i\in A, j\leq n-\#\{a\in A\mid a>i\}\}\\
U_A^\vee&=\{(i,j)\mid 1\leq i<j\leq n, i\notin A, j\leq n-\#\{a\in A\mid a>i\}\}\\
R_A&=\{(i,j)\mid 1\leq i<j\leq n, i\notin A, j> n-\#\{a\in A\mid a>i\}\}.
\end{align*}
For example, if $A=\{1,4,5, 7\}\subseteq \{1,2,\ldots, 9\}$, then
$$\begin{tikzpicture}[scale=.5]
\fill[pattern=vertical lines] (7,8) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[gray] (3,8) +(-2.5,-0.5) rectangle ++(2.5,0.5);
\fill[pattern=vertical lines] (3,8) +(-2.5,-0.5) rectangle ++(2.5,0.5);
\fill[pattern=horizontal lines] (3.5,7) +(-2,-0.5) rectangle ++(2,0.5);
\fill[gray] (7,7) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=horizontal lines] (7,7) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=horizontal lines] (4,6) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[gray] (7,6) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=horizontal lines] (7,6) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=vertical lines] (7.5,5) +(-1,-.5) rectangle ++(1,.5);
\fill[gray] (5,5) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=vertical lines] (5,5) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=vertical lines] (8,4) +(-.5,-.5) rectangle ++(.5,.5);
\fill[gray] (6,4) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=vertical lines] (6,4) +(-1.5,-0.5) rectangle ++(1.5,0.5);
\fill[pattern=horizontal lines] (6.5,3) +(-1,-0.5) rectangle ++(1,0.5);
\fill[gray] (8,3) +(-.5,-0.5) rectangle ++(.5,0.5);
\fill[pattern=horizontal lines] (8,3) +(-.5,-0.5) rectangle ++(.5,0.5);
\fill[pattern=vertical lines] (8.4,2) +(-.1,-.5) rectangle ++(.1,.5);
\fill[gray] (7.5,2) +(-1,-0.5) rectangle ++(.8,0.5);
\fill[pattern=vertical lines] (7.5,2) +(-1,-0.5) rectangle ++(.8,0.5);
\fill[pattern=horizontal lines] (8,1) +(-.5,-0.5) rectangle ++(.3,0.5);
\fill[gray] (8.4,1) +(-.1,-0.5) rectangle ++(.1,0.5);
\fill[pattern=horizontal lines] (8.4,1) +(-.1,-0.5) rectangle ++(.1,0.5);
\fill[pattern=horizontal lines] (8.4,0) +(-.1,-0.5) rectangle ++(.1,0.5);
\foreach \x in {0,...,8}
{\foreach \y in {\x,...,8}
\node (\x,\y) at (8-\x,\y) {$\bullet$};}
\foreach \y/\e in {2/7,4/5,5/4,8/1}
\node at (9,\y) {$\scriptstyle\e$};
\fill[pattern=vertical lines] (-1,4) +(-1,-1) rectangle ++(1,1);
\node at (-1,4) {$U_A$};
\fill[gray] (-3,4) +(-1,-1) rectangle ++(1,1);
\fill[pattern=vertical lines] (-3,4) +(-1,-1) rectangle ++(1,1);
\node at (-3,4) {$L_A$};
\fill[pattern=horizontal lines] (-3,6) +(-1,-1) rectangle ++(1,1);
\node at (-3,6) {$U_A^\vee$};
\fill[gray] (-1,6) +(-1,-1) rectangle ++(1,1);
\fill[pattern=horizontal lines] (-1,6) +(-1,-1) rectangle ++(1,1);
\node at (-1,6) {$R_A$};
\end{tikzpicture}$$
These coordinates give rise to subgroups
\begin{align*}
\mathfrak{ut}_A &= \{u\in \mathfrak{ut}_n\mid u_{ij}\neq 0 \text{ implies } (i,j)\in U_A\}\\
\mathfrak{l}_A & = \{u\in \mathfrak{ut}_n\mid u_{ij}\neq 0\text{ implies } (i,j)\in L_A\}\\
\mathfrak{ut}_A^\vee &= \{u\in \mathfrak{ut}_n\mid u_{ij}\neq 0\text{ implies } (i,j)\in U_A^\vee\}\\
\mathfrak{r}_A & = \{u\in \mathfrak{ut}_n\mid u_{ij}\neq 0 \text{ implies } (i,j)\in R_A\}\\
\mathfrak{p}_A &= \mathfrak{l}_A\mathfrak{ut}_A^\vee \mathfrak{ut}_A.
\end{align*}
Note that $\mathfrak{ut}_n= \mathfrak{p}_A\mathfrak{r}_A$, $\mathfrak{ut}_{|A|}\cong \mathfrak{ut}_A$ and $\mathfrak{ut}_{n-|A|}\cong \mathfrak{ut}_A^\vee$. In fact, writing $m=n-|A|$, consider the explicit isomorphisms
\begin{equation} \label{CheckIsomorphism}
\begin{array}{rccc} \tau'_A: & \mathfrak{ut}_A^\vee & \longrightarrow & \mathfrak{ut}_m\\
& e_{ij}(t) & \mapsto & e_{i-\#\{a\in A\mid a<i\},j-\#\{a\in A\mid a<i\}}(t),\end{array}
\end{equation}
and
\begin{equation}\label{UnCheckIsomorphism}
\begin{array}{rccc} \tau_A: & \mathfrak{ut}_A & \longrightarrow & \mathfrak{ut}_{|A|}\\
& e_{ij}(t) & \mapsto & e_{i-\#\{b<i\mid b\notin A\},j-m}(t),\end{array}
\end{equation}
where $e_{ij}(t)$ is the matrix with $t$ in the $i$th row and $j$th column and zeroes elsewhere.
Our functors, below, will pass up from $\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A$ through $\mathfrak{p}_A$ to $\mathfrak{ut}_n$ or down in reverse.
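These blocks and the coordinate shifts underlying the isomorphisms \eqref{CheckIsomorphism} and \eqref{UnCheckIsomorphism} can be checked on the example $A=\{1,4,5,7\}\subseteq\{1,\ldots,9\}$ pictured above; the following Python sketch (illustrative only) verifies that the two coordinate maps are bijections onto the strictly upper-triangular positions of $\mathfrak{ut}_{|A|}$ and $\mathfrak{ut}_{n-|A|}$, respectively.
\begin{verbatim}
# Blocks U_A, L_A, U_A^vee, R_A for n = 9, A = {1,4,5,7}, and the coordinate maps
# underlying tau_A and tau'_A (here m = n - |A|).
n, A = 9, {1, 4, 5, 7}
m = n - len(A)

def block(i, j):
    cut = n - sum(1 for a in A if a > i)
    if i in A:
        return "U" if j > cut else "L"
    return "R" if j > cut else "Uvee"

pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
U    = [(i, j) for (i, j) in pairs if block(i, j) == "U"]
Uvee = [(i, j) for (i, j) in pairs if block(i, j) == "Uvee"]

tau     = {(i, j): (i - sum(1 for b in range(1, i) if b not in A), j - m) for (i, j) in U}
tau_vee = {(i, j): (i - sum(1 for a in A if a < i), j - sum(1 for a in A if a < i))
           for (i, j) in Uvee}

upper = lambda k: {(r, c) for r in range(1, k + 1) for c in range(r + 1, k + 1)}
print(set(tau.values()) == upper(len(A)))     # True: tau_A lands in ut_{|A|}
print(set(tau_vee.values()) == upper(m))      # True: tau'_A lands in ut_{n-|A|}
\end{verbatim}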
\subsection{The functor $\mathrm{Exfl}$ that goes up}
There are two functors we will use to obtain a function $\mathrm{scf}(\mathfrak{ut})\otimes\mathrm{scf}(\mathfrak{ut})\rightarrow \mathrm{scf}(\mathfrak{ut})$. The first is \emph{inflation}
$$\begin{array}{rccc}
\Inf_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{p}_A} : & \mathrm{scf}(\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}) & \longrightarrow & \mathrm{scf}(\mathfrak{p}_A)\\
& \chi\otimes \psi & \mapsto &\begin{array}{c@{\ }c@{\ }c} \mathfrak{p}_A & \rightarrow & \CC\\ lu'u & \mapsto & {1\hspace{-.14cm} 1}_{\mathfrak{l}_A}(l) \chi(u')\psi(u)\end{array}\end{array} \quad \text{for $u'\in \mathfrak{ut}_A^\vee$, $l\in \mathfrak{l}_A$, $u\in \mathfrak{ut}_A$,}$$
and the second is the extension
$$\begin{array}{rccc} \mathrm{Ext}_{\mathfrak{p}_A}^{\mathfrak{ut}_n} : & \mathrm{scf}(\mathfrak{p}_A) & \longrightarrow & \mathrm{scf}(\mathfrak{ut}_n)\\
& \chi & \mapsto & \begin{array}{c@{\ }c@{\ }c} \mathfrak{ut}_n & \rightarrow & \CC\\ yc & \mapsto & \chi(y) \mathrm{reg}_{\mathfrak{r}_A}(c),\end{array}
\end{array}\quad \text{for $c\in \mathfrak{r}_A$, $y\in \mathfrak{p}_A$.}
Note that the latter is almost extension by zeroes (up to a multiple by a power of $q$). Their composition is the \emph{exflation} functor
$$ \mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_n}=\mathrm{Ext}_{\mathfrak{p}_A}^{\mathfrak{ut}_n} \circ\Inf_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{p}_A}.$$
\begin{lemma}\label{ProductLemma}
Let $A\subseteq \{1,2,\ldots, m+n\}$ with $|A|=n$, $w\in S_m$, and $v\in S_n$.
\begin{enumerate}
\item[(a)] The character
$$\mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{m+n}}(\chi^w\otimes \chi^v)=\chi^{w\bowtie_A v},$$
where for $j\notin A$ and $a\in A$,
$$\iota_j(w\bowtie_A v)=\iota_{j-\#\{b\in A\mid b<j\}}(w)\qquad \text{and}\qquad \iota^\vee_a(w\bowtie_A v)=\iota^\vee_{a-\#\{i\notin A\mid i<a\}}(v).$$
\item[(b)] For $1\leq j\leq m+n$,
$$(w\bowtie_A v)^{-1}(j)=\left\{\begin{array}{ll} w^{-1}(j-\#\{a\in A\mid a<j\}) & \text{if $j\notin A$,}\\
v^{-1}(j-\#\{i\notin A\mid i<j\})+m & \text{if $j\in A$.}\end{array}\right.$$
\end{enumerate}
\end{lemma}
\begin{proof}
(a) If $u'\in \mathfrak{ut}_A^\vee$, $u\in \mathfrak{ut}_A$, $l\in \mathfrak{l}_A$ and $r\in \mathfrak{r}_A$, then by Corollary \ref{SupercharacterFactorization},
\begin{equation*}
\mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{m+n}}(\chi^w\otimes \chi^v)(lu'ur) ={1\hspace{-.14cm} 1}_{\mathfrak{l}_A}(l)\chi^w(u')\chi^v(u)\mathrm{reg}_{\mathfrak{r}_A}(r)=\chi^{w\bowtie_A v}(lu'ur).
\end{equation*}
(b) Let $y\in S_k$, and $\iota(y)=(\iota_1(y),\ldots,\iota_k(y))$. Then for $1\leq j\leq k$, we have
$$y^{-1}(j)=\begin{array}{l}\text{the $1+\iota_j(y)$th smallest element in increasing order of the set }\\ \{1,2,\ldots, k\}-\{y^{-1}(1),\ldots, y^{-1}(j-1)\}.\end{array}$$
Suppose $j\notin A$. Then $\iota_j(w\bowtie_A v)=\iota_{j-k}(w)$ where $k=\#\{a\in A\mid a<j\}$. Thus,
\begin{equation}\label{NotInA}
(w\bowtie_A v)^{-1}(j)=\begin{array}{l} \text{the $1+\iota_{j-k}(w)$th element in increasing order of the set}\\ \{1,2,\ldots, m+n\}-\{(w\bowtie_A v)^{-1}(1),\ldots, (w\bowtie_A v)^{-1}(j-1)\}.\end{array}
\end{equation}
Similarly, if $j\in A$ and $k=\#\{i\notin A\mid i<j\}$, then
\begin{equation}\label{InA}
(w\bowtie_A v)^{-1}(j)=\begin{array}{l} \text{the $1+m-k+\iota_{j-k}(v)$th element in increasing order of the set}\\ \{1,2,\ldots, m+n\}-\{(w\bowtie_A v)^{-1}(1),\ldots, (w\bowtie_A v)^{-1}(j-1)\}.\end{array}
\end{equation}
If we show that for every $j\in A$, $(w\bowtie_A v)^{-1}(j)>m$ and for every $j\not\in A$, $(w\bowtie_A v)^{-1}(j)\leq m$, the desired result follows from \eqref{NotInA} and \eqref{InA}. We argue by induction on $j$: suppose these properties hold for every positive integer less than $j$. Then the set $\{(w\bowtie_A v)^{-1}(1),\ldots, (w\bowtie_A v)^{-1}(j-1)\}$ contains $|\{a\in A\mid a<j\}|$ elements bigger than $m$ and $|\{1\leq i<j\mid i\notin A\}|$ elements less than or equal to $m$. It now follows that if $j\not\in A$, then
the $1+\iota_{j-k}(w)$th element in increasing order of the set $\{1,2,\ldots, m+n\}-\{(w\bowtie_A v)^{-1}(1),\ldots, (w\bowtie_A v)^{-1}(j-1)\}$ is less than or equal to $m$, and if $j\in A$, the $1+m-k+\iota_{j-k}(v)$th element in increasing order of the set $\{1,2,\ldots, m+n\}-\{(w\bowtie_A v)^{-1}(1),\ldots, (w\bowtie_A v)^{-1}(j-1)\}$ is greater than $m$.
\end{proof}
\begin{remark} \label{ProductHeuristic} On diagrams, we shuffle the inverse permutations or the columns of the diagrams according to the columns picked out by $A$ (indicated in gray in the example, below). For example,
$$
\underset{314625\atop \color{gray} 251364}{\left[\begin{tikzpicture}[scale=.5,baseline=1.65cm]
\foreach \i/\j in {1/3,2/1,3/4,4/6,5/2,6/5}
\draw (\j,1) -- (\j,7-\i) -- (6,7-\i);
\foreach \i/\j in {1/1,1/2,3/2,4/2,4/5}
\node at (\j,7-\i) {$\circ$};
\end{tikzpicture}\right]}\bowtie_{\{1,4,5,8\}}
\underset{2413\atop \color{gray} 3142}{\left[\begin{tikzpicture}[scale=.5,baseline=1.2cm]
\foreach \i/\j in {1/2,2/4,3/1,4/3}
\draw[dotted,thick] (\j,1) -- (\j,5-\i) -- (4,5-\i);
\foreach \i/\j in {1/1,2/1,2/3}
\node at (\j,5-\i) {$\square$};
\end{tikzpicture}\right]}=
\underset{627\hspace{1pt} 10\hspace{1pt} 394815\atop \color{gray} 9257\hspace{1pt} 10\hspace{1pt} 13864}{\left[\begin{tikzpicture}[scale=.5,baseline=2.7cm]
\foreach \i/\j in {2/2,3/7,6/9,1/6,4/10,5/3}
\draw (\j,1) -- (\j,11-\i) -- (10,11-\i);
\foreach \i/\j in {7/4,9/1,10/5,8/8}
\draw[dotted,thick] (\j,1) -- (\j,11-\i) -- (10,11-\i);
\foreach \i/\j in {1/1,2/1,3/1,4/1,5/1,6/1,7/1,8/1,1/4,3/4,4/4,6/4,1/5,3/5,4/5,6/5,8/5,4/8,6/8}
\node at (\j,11-\i) {$\square$};
\foreach \i/\j in {1/2,1/3,3/3,4/3,4/9}
\node at (\j,11-\i) {$\circ$};
\end{tikzpicture}\right]}
$$
Thus, we can think of the $\bowtie_A$ as a shifted shuffle of the columns of the Rothe diagrams.
\end{remark}
By the above remark, $\bowtie_A$ shuffles the columns of the Rothe diagrams, or equivalently the one-line notations of the inverse permutations; to obtain a genuine shuffle of the permutations themselves, we first invert and then compose with the $\star$-duality.
\begin{theorem} \label{GoingUpCombinatorics}
Let $A\subseteq \{1,2,\ldots, m+n\}$ with $|A|=n$. Then for $w\in S_m$ and $v\in S_n$,
$$\mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{m+n}}\Big((\chi^w)^\star\otimes (\chi^{v})^\star\Big)^\star=\chi^{w\shuffle_A v}.$$
\end{theorem}
\begin{proof}
By definition,
$$\mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_n}\Big((\chi^w)^\star\otimes (\chi^{v})^\star\Big)^\star=\chi^{(w^{-1}\bowtie_A v^{-1})^{-1}}.$$
Thus, it suffices to show that $w\shuffle_A v=(w^{-1}\bowtie_A v^{-1})^{-1}$. By Lemma \ref{ProductLemma}(b),
\begin{align*}
(w^{-1}\bowtie_A v^{-1})^{-1}(j)&=\left\{\begin{array}{ll} w(j-\#\{a\in A\mid a<j\}) & \text{if $j\notin A$,}\\
m+v(j-\#\{i\notin A\mid i<j\}) & \text{if $j\in A$.} \end{array}\right.\\
&=(w\shuffle_A v) (j),
\end{align*}
as desired.
\end{proof}
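The identity in Theorem \ref{GoingUpCombinatorics} is easy to test by machine. The following Python fragment is an informal sketch of ours (it is not part of the formal development, and the helper names are ad hoc): it implements $\shuffle_A$ directly from the formula displayed in the proof, computes $\bowtie_A$ through the identity $w\bowtie_A v=(w^{-1}\shuffle_A v^{-1})^{-1}$, and checks the outcome against the example of Remark \ref{ProductHeuristic}.
\begin{verbatim}
def inverse(w):
    # inverse of a permutation in one-line notation (values 1,...,n)
    inv = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        inv[val - 1] = pos
    return tuple(inv)

def shuffle_A(w, v, A):
    # (w shuffle_A v)(j) = w(j - #{a in A : a < j})          if j not in A,
    #                    = m + v(j - #{i not in A : i < j})  if j in A.
    m, n = len(w), len(v)
    out = []
    for j in range(1, m + n + 1):
        if j in A:
            k = sum(1 for i in range(1, j) if i not in A)
            out.append(m + v[j - k - 1])
        else:
            k = sum(1 for a in A if a < j)
            out.append(w[j - k - 1])
    return tuple(out)

def bowtie_A(w, v, A):
    # via the theorem: w bowtie_A v = (w^{-1} shuffle_A v^{-1})^{-1}
    return inverse(shuffle_A(inverse(w), inverse(v), A))

w, v, A = (3, 1, 4, 6, 2, 5), (2, 4, 1, 3), {1, 4, 5, 8}
assert bowtie_A(w, v, A) == (6, 2, 7, 10, 3, 9, 4, 8, 1, 5)           # 627(10)394815
assert inverse(bowtie_A(w, v, A)) == (9, 2, 5, 7, 10, 1, 3, 8, 6, 4)  # 9257(10)13864
\end{verbatim}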
\subsection{The functor $\mathrm{Dela}$ that goes down}
For $1\leq i<j\leq n$, let
$$e_{ij}(\ast)=\frac{1}{q}\sum_{t\in \FF_q}{1\hspace{-.14cm} 1}_{\FF_q^+}(t) e_{ij}(t)\in \CC\mathfrak{ut}_n\quad \text{and}\quad e_{ij}^\perp(\ast)= \frac{1}{q}\sum_{t\in \FF_q} \frac{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(t)}{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(1)}e_{ij}(t)\in \CC\mathfrak{ut}_n.$$
For a subset $S\subseteq \{(i,j)\mid 1\leq i<j\leq n\}$, let
$$e_S=\prod_{(i,j)\in S} e_{ij}(\ast)\quad \text{and} \quad e_S^\perp=\prod_{(i,j)\in S} e^\perp_{ij}(\ast).$$
The Frobenius adjoint to inflation is \emph{deflation}
$$\begin{array}{rccc}
\mathrm{Def}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{p}_A} : & \mathrm{scf}(\mathfrak{p}_A)& \longrightarrow & \mathrm{scf}(\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}) \\
& \chi & \mapsto & \begin{array}{c@{\ }c@{\ }c} \mathfrak{ut}_A^\vee \times \mathfrak{ut}_A & \rightarrow & \CC\\ (u',u) & \mapsto & \chi(e_{L_A}u'u).\end{array} \end{array}$$
We will also use \emph{collapsing}
$$\begin{array}{rccc}
\mathrm{Col}_{\mathfrak{p}_A}^{\mathfrak{ut}_n} : & \mathrm{scf}(\mathfrak{ut}_n)& \longrightarrow & \mathrm{scf}(\mathfrak{p}_A) \\
& \chi & \mapsto & \begin{array}{c@{\ }c@{\ }c} \mathfrak{p}_A & \rightarrow & \CC\\ y & \mapsto & \chi(ye^\perp_{R_A}),\end{array} \end{array}$$
which does not seem to be a standard functor in the literature.
\begin{remark}
In our definition of these functors, we abuse notation slightly by evaluating superclass functions at elements of the group algebra rather than at group elements. Since superclass functions extend linearly to the group algebra, each formula can be unravelled into an explicit sum over group elements.
\end{remark}
Their composition is the \emph{delapsing} functor
$$\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_n} = \mathrm{Def}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{p}_A} \circ \mathrm{Col}_{\mathfrak{p}_A}^{\mathfrak{ut}_n}.$$
While exflation is generally nonzero on supercharacters, the following result shows that delapsing only gives nonzero values in very specific instances.
\begin{lemma}\label{SupportLemma}
Let $A\subseteq \{1,2,\ldots, n\}$, $w\in S_n$.
\begin{enumerate}
\item[(a)] If $\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}(\chi^w)\neq 0$, then
$$\{(a,a+\iota_a(w)+1)\mid a\in A\}\subseteq U_A\quad \text{and}\quad \{(j,j+\iota_j(w))\mid j\notin A, \iota_j(w)\neq 0\}\subseteq U_A^\vee.$$
\item[(b)] If $\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}(\chi^w)\neq 0$, then for each $i$, $w(i)\in A$ implies $i>n-|A|$.
\end{enumerate}
\end{lemma}
\begin{proof} (a) We prove the contrapositive in two cases.
Suppose there exists $a\in A$ with $(a,a+\iota_a(w)+1)\notin U_A$. Then $(a,a+\iota_a(w)+1)=(a_0,b_0)\in L_A$, so $b_0-a_0=\iota_{a_0}(w)+1$. By Corollary \ref{SupercharacterFactorization}, for $x\in \mathfrak{ut}_A^\vee\times \mathfrak{ut}_A\subseteq \mathfrak{ut}_n$,
\begin{align*}
\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}&(\chi^w)(x)\\
&=(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})\Big(\frac{1}{q}\sum_{t\in \FF_q}t\Big)\hspace{-.25cm} \prod_{1\leq i<j\leq n\atop j-i\leq \iota_i(w)}\hspace{-.25cm} {1\hspace{-.14cm} 1}_{\FF_q^+}(x_{ij}) \hspace{-.25cm}\prod_{1\leq i<j\leq n\atop{ j-i= \iota_i(w)+1\atop (i,j)\neq (a_0,b_0)}} \hspace{-.25cm}(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(x_{ij}) \hspace{-.25cm}\prod_{1\leq i<j\leq n\atop j-i> \iota_i(w)+1}\hspace{-.25cm}\mathrm{reg}_{\FF_q^+}(x_{ij}),
\end{align*}
where
$$(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})\Big(\frac{1}{q}\sum_{t\in \FF_q}t\Big)=\frac{1}{q}\Big(q-1+(q-1)(-1)\Big)=0.$$
Suppose instead that there exists $j\notin A$ with $\iota_j(w)\neq 0$ and $(j,j+\iota_j(w))\notin U_A^\vee$. Then $(j,j+\iota_j(w))=(i_0,j_0)\in R_A$ with $j_0-i_0<\iota_{i_0}(w)+1$. For $x\in \mathfrak{ut}_A^\vee\times \mathfrak{ut}_A\subseteq \mathfrak{ut}_n$,
\begin{align*}
&\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}(\chi^w)(x)\\
&=({1\hspace{-.14cm} 1}_{\FF_q^+})\Big(\frac{1}{q}\sum_{t\in \FF_q}\frac{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(t)}{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(1)}t\Big)\hspace{-.25cm} \prod_{1\leq i<j\leq n\atop { j-i\leq \iota_i(w)\atop (i,j)\neq (i_0,j_0)}}\hspace{-.25cm} {1\hspace{-.14cm} 1}_{\FF_q^+}(x_{ij}) \hspace{-.25cm}\prod_{1\leq i<j\leq n\atop j-i= \iota_i(w)+1} \hspace{-.25cm}(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(x_{ij}) \hspace{-.25cm}\prod_{1\leq i<j\leq n\atop j-i> \iota_i(w)+1}\hspace{-.25cm}\mathrm{reg}_{\FF_q^+}(x_{ij}),
\end{align*}
where
$$({1\hspace{-.14cm} 1}_{\FF_q^+})\Big(\frac{1}{q}\sum_{t\in \FF_q}\frac{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(t)}{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(1)}t\Big)=\frac{1}{q}\Big(1+(q-1)\frac{-1}{q-1}\Big)=0.$$
(b) If $\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}(\chi^w)\neq 0$, then both conditions from (a) must be satisfied.
Let $i_0$ be maximal such that $w(i_0)\notin A$. Then $(w(i_0),w(i_0)+\iota_{w(i_0)}(w))\in U_A^\vee$ implies
$$\#\{a\in A\mid a>w(i_0)\}\leq n-\iota_{w(i_0)}(w)-w(i_0)=\#\{h>i_0\mid w(h)>w(i_0)\}.$$
In other words,
\begin{equation}\label{Local1}
\#\{w(h)\in A\mid h<i_0,w(h)>w(i_0)\}\leq \#\{h>i_0\mid w(h) \notin A,w(h)>w(i_0)\}=0.
\end{equation}
If there is no $w(h)\in A$ with $h<i_0$, then we are done. Else, let $h_0$ be minimal such that $w(h_0)\in A$. Then by (\ref{Local1}) we must have $w(h_0)<w(i_0)$. Since $(w(h_0),w(h_0)+\iota_{w(h_0)}(w)+1)\in U_A$, we have
$$\#\{a\in A\mid a>w(h_0)\}\geq n-\iota_{w(h_0)}(w)-w(h_0)=\#\{i>h_0\mid w(i)>w(h_0)\},$$
or
$$0=\#\{w(i)\in A\mid i<h_0,w(i)>w(h_0)\}\geq \#\{w(i)\notin A\mid i>h_0,w(i)>w(h_0)\}\geq 1$$
a contradiction. Thus, there is no $w(h)\in A$ with $h<i_0$.
\end{proof}
This lemma implies that the $\mathrm{Dela}$-functor gives a standardized deconcatenation on the supercharacter basis.
\begin{theorem} \label{GoingDownCombinatorics}
Let $A\subseteq \{1,2,\ldots, m+n\}$ with $|A|=n$. For $w\in S_{m+n}$,
$$\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_n} (\chi^w)=\left\{\begin{array}{ll} \chi^{w_{\leq m}}\otimes \chi^{w_{>m}} & \text{if
$w^{-1}(A)=\{m+1,\ldots, m+n\}$,}\\ 0 & \text{otherwise.}\end{array}\right.$$
\end{theorem}
\begin{proof} By Lemma \ref{SupportLemma} (b), we may assume $w(i)\in A$ only if $i>m$. By Lemma \ref{SupportLemma} (a), if $a\in A$ and $(a,b)\in L_A$, then $b-a\leq\iota_a(w)$, and if $i\notin A$ and $(i,j)\in R_A$, then $j-i\geq \iota_i(w)+1$. Thus, when we apply Corollary \ref{SupercharacterFactorization} to $\chi^w$, for $(a,b)\in L_A$ we have
$${1\hspace{-.14cm} 1}_{\FF_q^+}\Big(\frac{1}{q}\sum_{t\in \FF_q} t\Big)=\frac{1}{q}q=1,$$
and for $(i,j)\in R_A$ either
$$(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})\Big(\frac{1}{q}\sum_{t\in \FF_q}\frac{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(t)}{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(1)}t\Big)=\frac{1}{q}\Big((q-1)+(q-1)\frac{(-1)^2}{q-1}\Big)=1,$$
or
$$\mathrm{reg}_{\FF_q^+}\Big(\frac{1}{q}\sum_{t\in \FF_q}\frac{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(t)}{(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(1)}t\Big)=\frac{1}{q} q=1.$$
By the definition of delapsing we therefore have
\begin{align*}
\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_n} (\chi^w)(u',u)= \bigg(&\prod_{(i,j)\in U_A^\vee\atop j-i\leq \iota_i(w)}\hspace{-.25cm} {1\hspace{-.14cm} 1}_{\FF_q^+}(u'_{ij}) \hspace{-.25cm}\prod_{(i,j)\in U_A^\vee\atop j-i= \iota_i(w)+1} \hspace{-.25cm}(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(u'_{ij}) \hspace{-.25cm}\prod_{(i,j)\in U_A^\vee\atop j-i> \iota_i(w)+1}\hspace{-.25cm}\mathrm{reg}_{\FF_q^+}(u'_{ij})\bigg)\\
&\cdot \bigg(\prod_{(i,j)\in U_A\atop j-i\leq \iota_i(w)}\hspace{-.25cm} {1\hspace{-.14cm} 1}_{\FF_q^+}(u_{ij}) \hspace{-.25cm}\prod_{(i,j)\in U_A\atop j-i= \iota_i(w)+1} \hspace{-.25cm}(\mathrm{reg}_{\FF_q^+}-{1\hspace{-.14cm} 1}_{\FF_q^+})(u_{ij}) \hspace{-.25cm}\prod_{(i,j)\in U_A\atop j-i> \iota_i(w)+1}\hspace{-.25cm}\mathrm{reg}_{\FF_q^+}(u_{ij})\bigg).
\end{align*}
Using the isomorphism (\ref{CheckIsomorphism}),
$$\chi^{w_{\leq m}}(\tau'_A(u'))=\chi^w(u'),$$
since $(j-\#\{a\in A\mid a<i\})-(i-\#\{a\in A\mid a<i\})=j-i$ and $\iota_{j}(w)=\iota_{j-\#\{a<j\mid a\in A\}}(w_{\leq m})$ for all $j\notin A$.
Next using the isomorphism (\ref{UnCheckIsomorphism}),
$$\chi^{w_{>m}}(\tau_A(u))=\chi^w(u), $$
since $j-m-(i-\#\{b<i\mid b\notin A\})=j-i-\#\{b>i\mid b\notin A\}$ and
$$\iota_{j-\#\{b<j\mid b\notin A\}}(w_{>m})=\iota_{j}(w)-\#\{b>j\mid b\notin A\}.$$
We can conclude that
$$\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_n} (\chi^w)(u',u)=\chi^{w_{\leq m}}(u')\chi^{w_{>m}}(u),$$
as desired.
\end{proof}
\begin{remark} \label{CoProductHeuristic}
For deconcatenating, we split off the last $k$ rows (if we are deconcatenating $k$ digits off the end).
$$\underset{831\hspace{1pt}10\hspace{1pt}746925\atop \color{gray} 3926\hspace{1pt}10\hspace{1pt}57184}{\left[\begin{tikzpicture}[scale=.5,baseline=2.7cm]
\foreach \i/\j in {7/6,8/9,9/2,10/5}
\draw[dotted,thick] (\j,1) -- (\j,11-\i) -- (10,11-\i);
\foreach \i/\j in {1/8,2/3,3/1,6/4,4/10,5/7}
\draw (\j,1) -- (\j,11-\i) -- (10,11-\i);
\foreach \i/\j in {7/2,7/5,8/2,8/5}
\node at (\j,11-\i) {$\square$};
\foreach \i/\j in {1/1,1/2,1/3,1/4,1/5,1/6,1/7,4/2,4/4,4/5,4/6,4/7,4/9,5/2,5/4,5/5,5/6, 2/1,2/2,6/2}
\node at (\j,11-\i) {$\circ$};
\end{tikzpicture}\right]}=\underset{521643\atop \color{gray} 326514}{\left[\begin{tikzpicture}[scale=.5,baseline=1.65cm]
\foreach \i/\j in {1/5,2/2,3/1,4/6,5/4,6/3}
\draw (\j,1) -- (\j,7-\i) -- (6,7-\i);
\foreach \i/\j in {1/1,1/2,1/3,1/4,2/1,4/3,4/4,5/3}
\node at (\j,7-\i) {$\circ$};
\end{tikzpicture}\right]}\ .\
\underset{3412\atop \color{gray} 3412}{\left[\begin{tikzpicture}[scale=.5,baseline=1.2cm]
\foreach \i/\j in {1/3,2/4,3/1,4/2}
\draw[dotted,thick] (\j,1) -- (\j,5-\i) -- (4,5-\i);
\foreach \i/\j in {1/1,1/2,2/1,2/2}
\node at (\j,5-\i) {$\square$};
\end{tikzpicture}\right]}$$
This can be thought of as a deconcatenation of the codes. Alternatively, this can also be viewed as a deshuffle of the inverse permutations picked out by the columns of $A$ (as indicated by the gray inverses of the permutations).
\end{remark}
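The deconcatenation in this example can likewise be checked mechanically. The short Python sketch below (ours, and only an informal aid) reads $w_{\leq m}$ and $w_{>m}$ as the standardizations of the first $m$ and the last $n-m$ letters of the one-line word, which is what the diagrams above display.
\begin{verbatim}
def standardize(word):
    ranks = {val: r for r, val in enumerate(sorted(word), start=1)}
    return tuple(ranks[val] for val in word)

w = (8, 3, 1, 10, 7, 4, 6, 9, 2, 5)               # 831(10)746925 above
assert standardize(w[:6]) == (5, 2, 1, 6, 4, 3)   # 521643
assert standardize(w[6:]) == (3, 4, 1, 2)         # 3412
\end{verbatim}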
\subsection{The Malvenuto--Reutenauer algebra and $\mathrm{scf}(\mathfrak{ut})$}
Using the functors from the previous section, we may define a representation theoretic product and coproduct on $\mathrm{scf}(\mathfrak{ut})$. For the product we define
$$\begin{array}{ccc} \mathrm{scf}(\mathfrak{ut}_m)\otimes \mathrm{scf}(\mathfrak{ut}_n) & \longrightarrow & \mathrm{scf}(\mathfrak{ut}_{m+n})\\
\alpha\otimes \beta & \mapsto & \displaystyle\sum_{A\subseteq \{1,2,\ldots, m+n\}\atop |A|=n} \mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{m+n}} (\alpha^\star\otimes \beta^\star)^\star,\end{array}$$
and coproduct given by
$$\begin{array}{rccc} \Delta: & \mathrm{scf}(\mathfrak{ut}_n) & \longrightarrow & \displaystyle\bigoplus_{m=0}^n\mathrm{scf}(\mathfrak{ut}_{n-m})\otimes \mathrm{scf}(\mathfrak{ut}_m)\\
&\alpha & \mapsto & \displaystyle\sum_{A\subseteq \{1,2,\ldots, n\}} \mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}} (\alpha).\end{array}$$
From Theorems \ref{GoingUpCombinatorics} and \ref{GoingDownCombinatorics} we obtain the following corollary.
\begin{corollary}\label{SupercharacterStructureConstants}\hfill
\begin{enumerate}
\item[(a)] For $w\in S_m$ and $v\in S_n$,
$$\chi^w\chi^v=\sum_{y\in w\shuffle v} \chi^y.$$
\item[(b)] For $w\in S_n$,
$$\Delta(\chi^w)=\sum_{m=0}^n \chi^{w_{\leq m}}\otimes \chi^{w_{>m}}.$$
\end{enumerate}
\end{corollary}
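For a concrete instance: for $w=21$ and $v=1$ the possible $w\shuffle_A v$ are $213$, $231$, and $321$, so that
$$\chi^{21}\chi^{1}=\chi^{213}+\chi^{231}+\chi^{321},$$
while the $m=1$ and $m=2$ terms of $\Delta(\chi^{231})$ are $\chi^{1}\otimes\chi^{21}$ and $\chi^{12}\otimes\chi^{1}$, respectively.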
In particular, by comparing with (\ref{FQSymProduct}) and (\ref{FQSymCoProduct}) we obtain a Hopf algebra isomorphism to the Malvenuto--Reutenauer algebra.
\begin{corollary}\label{MainIsomorphism}
The function
$$\begin{array}{r@{\ }c@{\ }c@{\ }c} \mathrm{ch}: & \mathrm{scf}(\mathfrak{ut}) & \longrightarrow & \mathrm{FQSym}\\ & \chi^w & \mapsto & F_w\end{array}$$
is a Hopf algebra isomorphism.
\end{corollary}
\begin{remark}
By computing the structure constants directly we avoid having to check that $\mathrm{Exfl}$ (combined with the $\star$-involution) and $\mathrm{Dela}$ are Hopf compatible. There should be an algebraic proof of this compatibility, but it would likely also require a better representation theoretic interpretation of the $\star$-involution.
\end{remark}
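Although we do not verify the compatibility algebraically, it is easy to confirm by machine on small examples using the combinatorial rules of Corollary \ref{SupercharacterStructureConstants}. The following Python sketch (ours, and only an informal aid; the helper names are ad hoc) computes both sides of $\Delta(\chi^w\chi^v)=\Delta(\chi^w)\Delta(\chi^v)$ as formal sums of permutations and checks that they agree.
\begin{verbatim}
from itertools import combinations
from collections import Counter

def standardize(word):
    ranks = {v: r for r, v in enumerate(sorted(word), start=1)}
    return tuple(ranks[v] for v in word)

def shifted_shuffles(w, v):
    # all w shuffle_A v, with A running over the |v|-subsets of {1,...,|w|+|v|}
    m, n = len(w), len(v)
    out = []
    for A in combinations(range(1, m + n + 1), n):
        A, word = set(A), []
        for j in range(1, m + n + 1):
            if j in A:
                k = sum(1 for i in range(1, j) if i not in A)
                word.append(m + v[j - k - 1])
            else:
                k = sum(1 for a in A if a < j)
                word.append(w[j - k - 1])
        out.append(tuple(word))
    return out

def product(x, y):
    # product of formal sums (Counters keyed by permutations)
    out = Counter()
    for w, cw in x.items():
        for v, cv in y.items():
            for u in shifted_shuffles(w, v):
                out[u] += cw * cv
    return out

def coproduct(x):
    # deconcatenate and standardize; Counter keyed by pairs of permutations
    out = Counter()
    for w, c in x.items():
        for m in range(len(w) + 1):
            out[standardize(w[:m]), standardize(w[m:])] += c
    return out

def tensor_product(X, Y):
    # componentwise product on the tensor square
    out = Counter()
    for (a, b), cx in X.items():
        for (c, d), cy in Y.items():
            for p in shifted_shuffles(a, c):
                for q in shifted_shuffles(b, d):
                    out[p, q] += cx * cy
    return out

for w, v in [((1,), (2, 1)), ((2, 1), (1, 2))]:
    lhs = coproduct(product(Counter({w: 1}), Counter({v: 1})))
    rhs = tensor_product(coproduct(Counter({w: 1})), coproduct(Counter({v: 1})))
    assert lhs == rhs
\end{verbatim}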
\subsection{The dual Hopf algebra $\mathrm{scf}(\mathfrak{ut})^*$}
It is well-known that the Malvenuto--Reutenauer algebra $\mathrm{FQSym}$ is self-dual (albeit in an interesting fashion); however, we think it is instructive to study this duality from a representation theoretic point of view. The dual Hopf algebra can be constructed from a choice of bilinear form on $\mathrm{scf}(\mathfrak{ut})$. However, the representation theoretic construction supplies a canonical inner product on characters given by
$$\begin{array}{r@{\ }c@{\ }c@{\ }c}
\langle\cdot,\cdot\rangle : &\mathrm{scf}(\mathfrak{ut}_m)\otimes \mathrm{scf}(\mathfrak{ut}_n) & \longrightarrow & \CC\\
& \gamma\otimes \psi & \mapsto & \displaystyle\left\{\begin{array}{@{}ll@{}}\displaystyle\frac{1}{|\mathfrak{ut}_n|} \sum_{u\in \mathfrak{ut}_n} \gamma(u)\overline{\psi(u)} & \text{if $m=n$},\\
0 & \text{otherwise.}\end{array}\right.
\end{array}$$
Note that for $w,v\in S_n$,
$$\langle \delta_w,\delta_v\rangle=\left\{\begin{array}{@{}ll@{}}\frac{|\mathtt{Cl}_w|}{|\mathfrak{ut}_n|} & \text{if $v=w$,} \\ 0 & \text{otherwise,}\end{array}\right. \qquad \text{and}\qquad \langle \chi^w,\chi^v\rangle=\left\{\begin{array}{@{}ll@{}} \chi^w(1) & \text{if $v=w$,} \\ 0 & \text{otherwise.}\end{array}\right. $$
However, the permutation characters are not orthogonal (eg. they all contain the trivial character). To construct the dual Hopf algebra, it will be helpful to construct the adjoint functors with respect to this inner product.
\begin{proposition} Let $A\subseteq \{1,2,\ldots, n\}$.
For $\gamma\in \mathrm{scf}(\mathfrak{ut}_n)$ and $\varphi\otimes\theta \in \mathrm{scf}(\mathfrak{ut}_A^\vee)\otimes \mathrm{scf}(\mathfrak{ut}_A)$,
$$|\mathfrak{r}_A|\langle \mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}}(\gamma),\varphi\otimes \theta\rangle=\langle \gamma, \mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}}(\varphi\otimes \theta)\rangle.$$
\end{proposition}
\begin{proof}
It suffices to check this equality on a basis. Let $w\in S_n$. By Remark \ref{CoProductHeuristic}, if $w^{-1}=v^{-1}\shuffle_A x^{-1}$, then
$$\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}}(\chi^w)=\chi^v\otimes \chi^x.$$
On the other hand, if $y\in S_{n-|A|}$ and $z\in S_{|A|}$, then by Remark \ref{ProductHeuristic},
$$ \mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}}(\chi^y\otimes \chi^z)=\chi^{(y^{-1}\shuffle_A z^{-1})^{-1}}.$$
Since supercharacters are orthogonal,
\begin{align}
|\mathfrak{r}_A|\langle \mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}}(\chi^w),\chi^y\otimes \chi^z\rangle
&=|\mathfrak{r}_A|\langle \chi^v\otimes \chi^x,\chi^y\otimes \chi^z\rangle\notag\\
&=\left\{\begin{array}{ll} |\mathfrak{r}_A| \chi^y(1)\chi^z(1) & \text{if $v=y$ and $x=z$,}\\ 0 & \text{otherwise.}\end{array}\right.\label{OneSide}
\end{align}
while
\begin{align}
\langle \chi^w, \mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}}(\chi^y\otimes \chi^z)\rangle &= \langle \chi^w, \chi^{(y^{-1}\shuffle_A z^{-1})^{-1}}\rangle\notag\\
&=\left\{\begin{array}{ll} \chi^w(1) & \text{if $w^{-1}=y^{-1}\shuffle_A z^{-1}$,}\\ 0 &\text{otherwise.}\end{array}\right.\label{OtherSide}
\end{align}
By construction, (\ref{OneSide}) and (\ref{OtherSide}) are nonzero in exactly the same cases. By the definition of exflation,
$$ \chi^w(1)= \mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}}(\chi^v\otimes \chi^x)(1)=|\mathfrak{r}_A| \chi^v(1)\chi^x(1),$$
completing the desired equality.
\end{proof}
We therefore obtain the dual Hopf algebra structure on $\mathrm{scf}(\mathfrak{ut})$ with product
$$\begin{array}{ccc} \mathrm{scf}(\mathfrak{ut}_m)\otimes \mathrm{scf}(\mathfrak{ut}_n) & \longrightarrow & \mathrm{scf}(\mathfrak{ut}_{m+n})\\
\alpha\otimes \beta & \mapsto & \displaystyle\sum_{A\subseteq \{1,2,\ldots, m+n\}\atop |A|=n}\frac{1}{|\mathfrak{r}_A|} \mathrm{Exfl}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{m+n}} (\alpha\otimes \beta),\end{array}$$
and coproduct
$$\begin{array}{rccc} \Delta: & \mathrm{scf}(\mathfrak{ut}_n) & \longrightarrow & \displaystyle\bigoplus_{m=0}^n\mathrm{scf}(\mathfrak{ut}_{n-m})\otimes \mathrm{scf}(\mathfrak{ut}_m)\\
&\alpha & \mapsto & \displaystyle\sum_{A\subseteq \{1,2,\ldots, n\}} |\mathfrak{r}_A| \mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_A}^{\mathfrak{ut}_{n}} (\alpha^\star)^\star.\end{array}$$
With respect to our inner product $\langle\cdot,\cdot \rangle$ we have dual bases
\begin{description}
\item[Superclass identifiers.] By the orthogonality relation,
$$\delta_w^*=\frac{1}{|\mathtt{Cl}_w|}\delta_w.$$
\item[Supercharacters.] By the orthogonality relation,
$$(\chi^w)^*=\frac{1}{\chi^w(1)}\chi^w.$$
\end{description}
\begin{remark}
From the results of the previous section, the product and coproduct of $\frac{\chi^w}{\chi^w(1)}$ shuffle and deconcatenate along the columns of the diagram of $w$. Alternatively, we shuffle inverses and deconcatenate the inverse word. It follows that
$$\begin{array}{ccc} \mathrm{scf}(\mathfrak{ut}) & \longrightarrow & \mathrm{scf}(\mathfrak{ut})\\ \chi^w & \mapsto & \big((\chi^w)^\star\big)^*\end{array}$$
is a Hopf algebra isomorphism. Note that this differs somewhat from the standard isomorphism since we are using a slightly different pairing.
\end{remark}
\section{The permutation character basis of $\mathrm{scf}(\mathfrak{ut})$} \label{PermutationCharacterStructure}
Up to this point, we have worked almost exclusively with the supercharacter basis. However, we have two other canonical bases at our disposal. This section computes the structure constants for the permutation character basis (\ref{PermutationCharacterBasis}).
\subsection{$\star$-duality}
We begin with a result that applies the $\star$-involution to a permutation character. We note that while the formula in Theorem \ref{SubgroupDual} has negative coefficients, the resulting sum remains a character (ie. is ``supercharacter-positive").
For $x,y\in S_n$, let $x\cup y, x\cap y\in S_n$ be the unique permutations such that
$$\mathfrak{ut}_{x}\cap \mathfrak{ut}_y=\mathfrak{ut}_{x\cap y}\quad \text{and}\quad \mathfrak{ut}_{x}\cup \mathfrak{ut}_y=\mathfrak{ut}_{x\cup y}.$$
Let $C_z^\vee$ be the Boolean sublattice of the permutation inversion table lattice generated by the set
$$\{y<z\mid z\text{ covers } y\}.$$
Here, we require the notion of code $\kappa(w)$ introduced in Section \ref{Permutations}.
\begin{theorem}\label{SubgroupDual} For $w\in S_n$,
$$(\bar\chi^w)^\star=\sum_{\{x\in C_z^\vee\mid \kappa(x)\geq \kappa(w^{-1})\}=\{y\}}(-1)^{|\iota(z)|-|\iota(y)|} \bar\chi^z. $$
\end{theorem}
We require two lemmas that examine the structure of $C_z^\vee$. Because we will be using both the code and the inversion table order simultaneously, we will use Rothe diagrams heavily in this section. By inspection, Proposition \ref{CoveringInversions} implies the diagram of $w$ covers the diagram of $v$ if we ``swap" two hooks that are only separated by hooks from earlier columns (or columns to the left). Visually,
$$
w=\begin{tikzpicture}[scale=.4,baseline=2cm]
\draw[dotted] (0,0) -- (0,10);
\draw[dotted] (10,0) -- (10,10);
\draw (3,0) -- (3,4) -- (10,4);
\draw (5,0) -- (5,7) -- (10,7);
\draw[gray] (2,0) -- (2,6) -- (10,6);
\draw[gray] (1,0) -- (1,5) -- (10,5);
\node at (3,10.5) {$\scriptstyle w(j)$};
\node at (5,10.5) {$\scriptstyle w(i)$};
\node at (-1,4) {$\scriptstyle j$};
\node at (-1,7) {$\scriptstyle i$};
\end{tikzpicture}\quad \text{and}\quad
v=
\begin{tikzpicture}[scale=.4,baseline=2cm]
\draw[dotted] (0,0) -- (0,10);
\draw[dotted] (10,0) -- (10,10);
\draw (3,0) -- (3,7) -- (10,7);
\draw (5,0) -- (5,4) -- (10,4);
\draw[gray] (2,0) -- (2,6) -- (10,6);
\draw[gray] (1,0) -- (1,5) -- (10,5);
\node at (3,10.5) {$\scriptstyle v(i)$};
\node at (5,10.5) {$\scriptstyle v(j)$};
\node at (-1,4) {$\scriptstyle j$};
\node at (-1,7) {$\scriptstyle i$};
\end{tikzpicture}\ .
$$
Fix $w,z\in S_n$. For Lemmas \ref{ConvexLatticeLemma} and \ref{FullBooleanLemma}, below, we define
$$\mathcal{L}=\{x\in C^\vee_z\mid \kappa(x)\geq \kappa(w)\}.$$
Our first lemma shows that $\mathcal{L}$ is a lattice.
\begin{lemma}\label{ConvexLatticeLemma}
If $x,y\in\mathcal{L}$, then $x\cap y,x\cup y\in \mathcal{L}$.
\end{lemma}
\begin{proof} We begin by proving that
\begin{enumerate}
\item[(a)] if $x\cap y\neq x$, then there exists $\tilde{x}\in C^\vee_z$ such that $|\iota(\tilde{x})|=|\iota(x)|-1$ with $\iota(\tilde{x})\geq \iota(x\cap y)$ and $\kappa(\tilde{x})\geq \kappa(w)$.
\item[(b)] if $x\cup y\neq x$, then there exists $\tilde{x}\in C^\vee_z$ such that $|\iota(\tilde{x})|=|\iota(x)|+1$ with $\iota(\tilde{x})\leq \iota(x\cup y)$ and $\kappa(\tilde{x})\geq \kappa(w)$.
\end{enumerate}
The result then follows by iterating (a) and (b), respectively.
(a) Since $x\neq x\cap y$, there exists $i$ minimal such that $\iota_i(x\cap y)= \iota_i(x)-1$; let $\tilde{x}\in C^\vee_z$ be determined by $\iota(\tilde{x})=\iota(x)-e_i$. By considering Rothe diagrams, we have that $\kappa_h(\tilde{x})=\kappa_h(x)$ unless $h\in \{x^{-1}(i),\tilde{x}^{-1}(i)\}$. In these positions,
$$x=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,4) -- (8,4);
\draw (5,0) -- (5,6) -- (8,6);
\node at (2,8.5) {$\scriptstyle i$};
\node at (-1,4) {$\scriptstyle x^{-1}(i)$};
\node at (-1,6) {$\scriptstyle \tilde{x}^{-1}(i)$};
\end{tikzpicture}\quad \tilde{x}=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,6) -- (8,6);
\draw (5,0) -- (5,4) -- (8,4);
\node at (2,8.5) {$\scriptstyle i$};
\node at (-1,4) {$\scriptstyle x^{-1}(i)$};
\node at (-1,6) {$\scriptstyle \tilde{x}^{-1}(i)$};
\end{tikzpicture}\quad y=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,6) -- (8,6);
\node at (2,8.5) {$\scriptstyle i$};
\node at (-1,4) {$\scriptstyle x^{-1}(i)$};
\node at (-1,6) {$\scriptstyle \tilde{x}^{-1}(i)$};
\end{tikzpicture}$$
However, $\kappa_{x^{-1}(i)}(\tilde{x})\geq \kappa_{x^{-1}(i)}(x)\geq \kappa_{x^{-1}(i)}(w)$ and by the minimality of $i$,
$$\kappa_{\tilde{x}^{-1}(i)}(\tilde{x})= \kappa_{\tilde{x}^{-1}(i)}(y)\geq \kappa_{\tilde{x}^{-1}(i)}(w).$$
Thus, $\kappa(\tilde{x})\geq \kappa(w)$.
(b) Since $x\neq x\cup y$, there exists $i$ minimal such that $\iota_i(y\cup x)= \iota_i(x)+1$; let $\tilde{x}\in C^\vee_z$ be determined by $\iota(\tilde{x})=\iota(x)+e_i$. We have that $\kappa_h(\tilde{x})=\kappa_h(x)$ unless $h\in \{x^{-1}(i),\tilde{x}^{-1}(i)\}$.
$$x=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,6) -- (8,6);
\draw (5,0) -- (5,4) -- (8,4);
\node at (2,8.5) {$\scriptstyle i$};
\node at (-1,4) {$\scriptstyle \tilde{x}^{-1}(i)$};
\node at (-1,6) {$\scriptstyle x^{-1}(i)$};
\end{tikzpicture}
\quad \tilde{x}=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,4) -- (8,4);
\draw (5,0) -- (5,6) -- (8,6);
\node at (2,8.5) {$\scriptstyle i$};
\node at (-1,6) {$\scriptstyle x^{-1}(i)$};
\node at (-1,4) {$\scriptstyle \tilde{x}^{-1}(i)$};
\end{tikzpicture}\quad
y=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,4) -- (8,4);
\node at (2,8.5) {$\scriptstyle i$};
\node at (-1,6) {$\scriptstyle x^{-1}(i)$};
\node at (-1,4) {$\scriptstyle \tilde{x}^{-1}(i)$};
\end{tikzpicture}$$
However, $\kappa_{x^{-1}(i)}(\tilde{x})\geq \kappa_{x^{-1}(i)}(x)\geq \kappa_{x^{-1}(i)}(w)$ and by the minimality of $i$, $\kappa_{\tilde{x}^{-1}(i)}(\tilde{x})= \kappa_{\tilde{x}^{-1}(i)}(y)\geq \kappa_{\tilde{x}^{-1}(i)}(w)$. Thus, $\kappa(\tilde{x})\geq \kappa(w)$.
\end{proof}
The next lemma shows that $\mathcal{L}$ is in fact a Boolean lattice.
\begin{lemma} \label{FullBooleanLemma}
Suppose $y_M$ is the maximal element and $y_m$ the minimal element of $\mathcal{L}$. Let
$$J=\{1\leq j\leq n\mid \iota_j(y_M)=\iota_j(y_m)+1\}.$$
If $\iota(x)=\iota(y_M)-e_j$ with $j\in J$, then $x\in \mathcal{L}$.
\end{lemma}
\begin{proof}
Fix $j\in J$ and let $\iota(x)=\iota(y_M)-e_j$. Then
$$y_M=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,4) -- (8,4);
\draw (5,0) -- (5,6) -- (8,6);
\node at (2,8.5) {$\scriptstyle j$};
\node at (-1,6) {$\scriptstyle x^{-1}(j)$};
\node at (-1,4) {$\scriptstyle y_M^{-1}(j)$};
\end{tikzpicture}
\quad x=
\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (2,0) -- (2,6) -- (8,6);
\draw (5,0) -- (5,4) -- (8,4);
\node at (2,8.5) {$\scriptstyle j$};
\node at (-1,6) {$\scriptstyle x^{-1}(j)$};
\node at (-1,4) {$\scriptstyle y_M^{-1}(j)$};
\end{tikzpicture}
\quad
y_m=\begin{tikzpicture}[scale=.4,baseline=1.5cm]
\draw[dotted] (0,0) -- (0,8);
\draw[dotted] (8,0) -- (8,8);
\draw (1,0) -- (1,6) -- (8,6);
\node at (1,8.5) {$\scriptstyle j'$};
\node at (-1,6) {$\scriptstyle x^{-1}(j)$};
\node at (-1,4) {$\scriptstyle y_M^{-1}(j)$};
\end{tikzpicture}$$
where $j'\leq j$ is some element in $J$.
We have $\kappa_i(x)=\kappa_i(y_M)$ unless $i\in \{x^{-1}(j),y_M^{-1}(j)\}$, and $\kappa_{y_M^{-1}(j)}(x)\geq \kappa_{y_M^{-1}(j)}(y_M)\geq \kappa_{y_M^{-1}(j)}(w)$.
It remains to show that $\kappa_{x^{-1}(j)}(x)\geq \kappa_{x^{-1}(j)}(w)$; since $y_m\in\mathcal{L}$, it is enough to show that $\kappa_{x^{-1}(j)}(x)\geq \kappa_{x^{-1}(j)}(y_m)$. Recall that
$$\kappa_{x^{-1}(j)}(x)=\#\{i<j\mid x^{-1}(i)>x^{-1}(j)\}\quad \text{and}\quad \kappa_{x^{-1}(j)}(y_m)=\#\{i<j'\mid y_m^{-1}(i)>y_m^{-1}(j')\},$$
so it suffices to show that
$$\{i<j\mid x^{-1}(i)>x^{-1}(j)\}\supseteq \{i<j'\mid y_m^{-1}(i)>y_m^{-1}(j')\}.$$
Consider $i<j'$ with $y_m^{-1}(i)>y_m^{-1}(j')=x^{-1}(j)$. The following shows that $x^{-1}(i)>x^{-1}(j)$ (in which case we are done).
\begin{description}
\item[Case $i\in J$.] Since $y_m^{-1}(i)>y_m^{-1}(j')=x^{-1}(j)$, we must have $x^{-1}(i)>x^{-1}(j)$.
\item[Case $i\notin J$.] Here, either $x^{-1}(i)=y_m^{-1}(i)$ so $x^{-1}(i)>x^{-1}(j)$, or $x^{-1}(i)<y_m^{-1}(i)$. In the latter case, there exists $h<i$ with $h\in J$, $x^{-1}(h)>x^{-1}(i)$ minimal, and $x^{-1}(h)=y_m^{-1}(i)$. However, for this covering swap to be possible, $x^{-1}(i)>x^{-1}(j)$. \qedhere
\end{description}
\end{proof}
Putting these two lemmas together gives the proof of the main theorem of this section.
\begin{proof}[Proof of Theorem \ref{SubgroupDual}] Using the decomposition of $\bar{\chi}^w$ into supercharacters, we have
\begin{equation*}
(\bar\chi^w)^\star = \sum_{\mathfrak{ut}_y\supseteq \mathfrak{ut}_w} \chi^{y^{-1}} =\sum_{\mathfrak{ut}_y\supseteq \mathfrak{ut}_w} \sum_{\mathfrak{ut}_z\supseteq \mathfrak{ut}_{y^{-1}}}\mu(y^{-1},z)\bar\chi^{z}=\sum_{z\in S_n} \Big(\sum_{\mathfrak{ut}_y\subseteq \mathfrak{ut}_z\atop \mathfrak{ut}_{y^{-1}}\supseteq \mathfrak{ut}_w} \mu(y,z)\Big)\bar\chi^z.
\end{equation*}
By definition, $ \mathfrak{ut}_{y^{-1}}\supseteq \mathfrak{ut}_w$ if and only if $\iota(y^{-1})\geq \iota(w)$ if and only if $\kappa(y)\geq \kappa(w^{-1})$. Note that
$$\mu(y,z)=\left\{\begin{array}{ll} (-1)^{|\iota(z)|-|\iota(y)|} & \text{if $y\in C_z^\vee$,}\\ 0 & \text{otherwise.}\end{array}\right.$$
Thus, the coefficient is zero if $\{x\in C_z^\vee\mid \kappa(x)\geq \kappa(w^{-1})\}=\emptyset$. If $\#\{x\in C_z^\vee\mid \kappa(x)\geq \kappa(w^{-1})\}\geq 1$, then by Lemmas \ref{ConvexLatticeLemma} and \ref{FullBooleanLemma}, these elements form a complete Boolean sublattice of $C_z^\vee$. Thus, the sum of the M\"obius functions will be zero unless the sublattice has only one element.
\end{proof}
\subsection{Coproduct}
The coproduct on this basis behaves as a weak standardized deconcatenation. We begin with the analog to Lemma \ref{SupportLemma} in this context.
\begin{lemma}\label{SubgroupSupportLemma}
Let $A\subseteq \{1,2,\ldots, n\}$, $w\in S_n$. If $\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}(\bar\chi^w)\neq 0$, then $\{(j,j+\iota_j(w))\mid j\notin A, \iota_j(w)\neq 0\}\subseteq U_A^\vee.$
\end{lemma}
\begin{proof}
Suppose that there exists $j\notin A$ with $\iota_j(w)\neq 0$ and $(j,j+\iota_j(w))\notin U_A^\vee$.
If $\mathfrak{ut}_v\geq \mathfrak{ut}_w$, then $(j,j+\iota_j(v))\in R_A$, and thus by part (a) of Lemma \ref{SupportLemma}, we have $\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}(\chi^v)=0$. Therefore,
$$ \mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}(\bar\chi^w)= \mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}\left(\sum_{\mathfrak{ut}_w\subseteq \mathfrak{ut}_v}\chi^v\right)=\sum_{\mathfrak{ut}_w\subseteq \mathfrak{ut}_v}\mathrm{Dela}_{\mathfrak{ut}_A^\vee\times \mathfrak{ut}_{A}}^{\mathfrak{ut}_n}\left(\chi^v\right)=0,$$
as desired.
\end{proof}
Let $\mathrm{id}$ be the identity element of $S_n$, recall the notation $\iota^\vee(w)$ from (\ref{DualInversionTable}), and for $B=\{b_1,\ldots,b_\ell\}\subseteq \{1,2,\ldots, n\}$, let
$$\iota(w)_B=(\iota_{b_1}(w),\ldots,\iota_{b_\ell}(w)) \quad\text{where $b_1<b_2<\cdots<b_\ell$}.$$
\begin{theorem} \label{PermutationCharacterCoproduct} For $w\in S_n$,
$$\Delta(\bar{\chi}^w)=\sum_{A\sqcup B= \{1,2,\ldots, n\}\atop\iota(w)_{B}\text{ an inversion table}}
\bar{\chi}^{\iota(w)_{B}}\otimes \bar\chi^{\iota^\vee(w)_A\wedge\iota^\vee(\mathrm{id}_A)}.$$
\end{theorem}
\begin{proof} Let $(h,g)\in \mathtt{Cl}_{\beta}\times \mathtt{Cl}_{\alpha}$, where $\alpha$ and $\beta$ are inversion tables.
We have that
$$\mathrm{Dela}_{\mathfrak{ut}^\vee_A\times \mathfrak{ut}_A}^{\mathfrak{ut}_n}(\bar{\chi}^w)(h,g)=\sum_{l\in \mathfrak{l}_A,r\in\mathfrak{r}_A}\frac{1}{q^{|L_A|+|R_A|}}\left(\frac{-1}{q-1}\right)^{\#\{(i,j)\mid r_{ij}\neq 0\}} \bar\chi^w(l(h,g)r).$$
However, by Lemma \ref{SubgroupSupportLemma}, every term in the sum is zero for nontrivial $r\in \mathfrak{r}_A$. In addition, $\bar\chi^w(l(h,g))\neq 0$ if and only if $l\in \mathfrak{ut}_w$,
$$\beta_{j-\#\{a<j\mid a\in A\}}\leq \iota_j(w), \quad\text{for all $j\notin A$,}\quad \text{and} \quad \alpha^\vee_{a-\#\{i<a\mid i\notin A\}}>\iota^\vee_a(w)\quad \text{for all $a\in A$.}$$
If $\alpha$ and $\beta$ satisfy these conditions, then
\begin{equation}\label{FirstVersion}
\mathrm{Dela}_{\mathfrak{ut}^\vee_A\times \mathfrak{ut}_A}^{\mathfrak{ut}_n}(\bar{\chi}^w)(h,g)=\frac{|\mathfrak{l}_A\cap \mathfrak{ut}_w|}{q^{|L_A|+|R_A|}}\frac{|\mathfrak{ut}_n|}{|\mathfrak{ut}_w|}=\frac{|\mathfrak{l}_A\cap \mathfrak{ut}_w|}{|\mathfrak{l}_A||\mathfrak{r}_A|}q^{|\iota^\vee(w)|}.
\end{equation}
On the other hand, $\bar{\chi}^{\iota(w)_{\overline{A}}}\otimes \bar\chi^{\iota^\vee(w)_A\wedge\iota^\vee(\mathrm{id}_A)}(h,g)\neq 0$ exactly when for all $j\notin A$,
$$\beta_{j-\#\{a<j\mid a\in A\}}\leq \iota_j(w)
\quad \text{and for all $a\in A$,}\quad
\alpha^\vee_{a-\#\{i<a\mid i\notin A\}}>\iota^\vee_a(w).$$
In this case,
\begin{align*}
\bar{\chi}^{\iota(w)_{\overline{A}}}\otimes \bar\chi^{\iota^\vee(w)_A\wedge\iota^\vee(\mathrm{id}_A)}(h,g)&=\frac{|\mathfrak{ut}_A^\vee|}{|\mathfrak{ut}_{\iota(w)_{\overline{A}}}|} \frac{|\mathfrak{ut}_A|}{|\mathfrak{ut}_{\iota^\vee(w)_A\wedge\iota^\vee(1_A)}|}\\
&=\prod_{j\notin A} q^{|A|-j-\iota_j(w)} \hspace{-.25cm}\prod_{a\in A\atop \iota_a^\vee(w)\leq \#\{b>a\mid b\in A\}} \hspace{-.25cm}q^{\iota_a^\vee(w)}\hspace{-.25cm}\prod_{a\in A\atop \iota_a^\vee(w)>\#\{b>a\mid b\in A\}}\hspace{-.25cm}q^{\#\{b>a\mid b\in A\}}\\
&=\frac{1}{q^{|R_A|}}\prod_{j\notin A} q^{n-j-\iota_j(w)} \hspace{-.25cm}\prod_{a\in A\atop \iota_a^\vee(w)\leq \#\{b>a\mid b\in A\}}\hspace{-.25cm} q^{\iota_a^\vee(w)}\hspace{-.25cm}\prod_{a\in A\atop \iota_a^\vee(w)>\#\{b>a\mid b\in A\}}\hspace{-.25cm}q^{\#\{b>a\mid b\in A\}}\\
&=\frac{1}{|\mathfrak{r}_A|}\prod_{j\notin A} q^{n-j-\iota_j(w)}\hspace{-.5cm}\prod_{a\in A\atop \iota_a^\vee(w)\leq \#\{b>a\mid b\in A\}} \hspace{-.5cm}q^{\iota_a^\vee(w)}\hspace{-.5cm}\prod_{a\in A\atop \iota_a^\vee(w)>\#\{b>a\mid b\in A\}}\hspace{-.5cm}\frac{q^{\iota_a^\vee(w)}}{q^{\#\{(a,j)\in L_A\mid j-a>\iota_a(w)\}}}\\
&=\frac{|\mathfrak{l}_A\cap \mathfrak{ut}_w|}{|\mathfrak{l}_A||\mathfrak{r}_A|}\prod_{j\notin A} q^{\iota_j^\vee(w)}\hspace{-.25cm}\prod_{a\in A\atop \iota_a^\vee(w)\leq \#\{b>a\mid b\in A\}} \hspace{-.25cm}q^{\iota_a^\vee(w)}\hspace{-.25cm}\prod_{a\in A\atop \iota_a^\vee(w)>\#\{b>a\mid b\in A\}}\hspace{-.25cm}q^{\iota_a^\vee(w)}.
\end{align*}
Apply the definition of $|\iota^\vee(w)|$ to get equality with (\ref{FirstVersion}).
\end{proof}
\subsection{Product}
To compute the product on the permutation characters, one might either use the duality formula given in Theorem \ref{SubgroupDual}, or the supercharacter basis. This section will take the latter approach. Using the transition matrix, for $v\in S_m$ and $w\in S_n$,
\begin{align*}
\bar\chi^v\cdot \bar\chi^w & = \Big(\sum_{x\geq v} \chi^x\Big)\Big(\sum_{x'\geq w} \chi^{x'}\Big)\\
&=\sum_{x\geq v\atop x'\geq w} \sum_{y\in x\shuffle x'} \chi^y\\
&=\sum_{x\geq v\atop x'\geq w} \sum_{y\in x\shuffle x'} \sum_{z\geq y} \mu(y,z)\bar\chi^z\\
&=\sum_{z\in S_{m+n}} \bigg(\sum_{{y\leq z\atop y_{\{1,2,\ldots,m\}}\geq v}\atop y_{\{m+1,\ldots,m+n\}}\geq w} \mu(y,z) \bigg)\bar\chi^z.
\end{align*}
The goal of this section is to show
$$\sum_{{y\leq z\atop y_{\{1,2,\ldots,m\}}\geq v}\atop y_{\{m+1,\ldots,m+n\}}\geq w} \mu(y,z)\in \{-1,0,1\}.$$
By Proposition \ref{CoveringInversions}, a \emph{covering inversion} $(w(i),w(j))$ in $w$ is a pair with $i$ maximal such that $i<j$ and $w(i)>w(j)$. Alternatively, if $w'$ is the permutation obtained by switching $w(i)$ and $w(j)$, then
$$\iota_k(w')=\left\{\begin{array}{ll}
\iota_k(w) - 1 & \text{if $k=w(j)$,}\\
\iota_k(w) & \text{otherwise.}
\end{array}\right.$$
Thus, every $1\leq w(j)\leq n$ can be the second coordinate in at most one covering inversion (determined by whether $\iota_{w(j)}(w)>0$ or not). We will refer to $w'$ as the permutation obtained by \emph{removing the covering inversion $(w(i),w(j))$ from $w$}. Let
$$\mathrm{CInv}(w)=\{(w(i),w(j))\text{ a covering inversion}\}.$$
The following lemma is not used explicitly below, but underlies much of the intuition in what follows. Specifically, it says that given any set of covering inversions, one may remove them in a specific order so that with each removal the remaining covering inversions do not change.
\begin{lemma} \label{CoveringLemma} Let $w\in S_n$ and $I\subseteq \mathrm{CInv}(w)$. Let $(w(i),w(j))\in I$ be selected with $w(j)$ minimal. If $w'$ is the permutation with $(w(i),w(j))$ removed, then $I-\{(w(i),w(j))\}\subseteq \mathrm{CInv}(w')$.
\end{lemma}
\begin{proof}
Let $(w(k),w(l))\in I-\{(w(i),w(j))\}$ where by minimality of $w(j)$, we have $w(j)<w(l)$. Then either $l<i$ or $l>j$. In either case, if we remove $(w(i),w(j))$ from $w$, then the resulting permutation $w'$ will still have $(w(k),w(l))$ as a covering inversion.
\end{proof}
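Lemma \ref{CoveringLemma} can also be confirmed exhaustively on small permutations. In the informal Python sketch below (ours; removal of a covering inversion $(w(i),w(j))$ is implemented, as above, by switching the two entries) the assertion runs over every $w\in S_5$ and every nonempty subset $I\subseteq \mathrm{CInv}(w)$.
\begin{verbatim}
from itertools import combinations, permutations

def covering_inversions(w):
    # (w(i), w(j)) with i maximal such that i < j and w(i) > w(j)
    out = set()
    for j in range(2, len(w) + 1):
        bigger = [i for i in range(1, j) if w[i - 1] > w[j - 1]]
        if bigger:
            out.add((w[max(bigger) - 1], w[j - 1]))
    return out

def remove(w, c):
    # switch the two entries forming the covering inversion c = (w(i), w(j))
    w = list(w)
    i, j = w.index(c[0]), w.index(c[1])
    w[i], w[j] = w[j], w[i]
    return tuple(w)

for w in permutations(range(1, 6)):
    CInv = sorted(covering_inversions(w))
    for r in range(1, len(CInv) + 1):
        for I in combinations(CInv, r):
            c = min(I, key=lambda x: x[1])   # smallest second entry
            assert set(I) - {c} <= covering_inversions(remove(w, c))
\end{verbatim}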
By iteratively applying Lemma \ref{CoveringLemma} it makes sense to define for $C\subseteq \mathrm{CInv}(w)$, the permutation
$$w^{\mathrm{rm}(C)}=\text{ the permutation obtained by removing $C$ from $w$}.$$
\begin{remark}
Note that since
$$\{z^{\mathrm{rm}(C)}\mid C\subseteq \mathrm{CInv}(z)\}$$
is a Boolean lattice, we have that for $z\in S_n$ and $y\leq z$,
$$\mu(y,z)=\left\{\begin{array}{ll} (-1)^{|C|} & \text{if $y=z^{\mathrm{rm}(C)}$ for some $C\subseteq \mathrm{CInv}(z)$},\\ 0 & \text{otherwise.}\end{array}\right.$$
\end{remark}
If $v\in S_m$ and $w\in S_n$, let
\begin{align*}
\mathrm{dSh}_{v,w}&=\{y\in S_{m+n}\mid y_{\{1,2,\ldots,m\}}\geq v,y_{\{m+1,\ldots, m+n\}}\geq w\}\\
\mathrm{CInvS}^z_{v,w}&=\{C\subseteq \mathrm{CInv}(z)\mid z^{\mathrm{rm}(C)}\in \mathrm{dSh}_{v,w}\}.
\end{align*}
Our goal is to compute
\begin{equation} \label{ProductStrategy} \mathrm{Coeff}(\bar\chi^v\bar\chi^w;\bar\chi^z)=\sum_{C\in \mathrm{CInvS}^z_{v,w}} (-1)^{|C|}.
\end{equation}
We will do this by iteratively partitioning $\mathrm{CInvS}^z_{v,w}$ into blocks, each of which admits an internal sign-reversing bijection, until we are left with at most one element.
\subsubsection{Free covering inversions}
A covering inversion $c\in \mathrm{CInv}(z)$ is called \emph{free} with respect to the pair $(v,w)$ if for all $C\subseteq \mathrm{CInv}(z)-\{c\}$,
$$C\in \mathrm{CInvS}^z_{v,w}\qquad \text{if and only if} \qquad C\cup\{c\}\in \mathrm{CInvS}^z_{v,w}.$$
To help get an intuition for free covering inversions we analyze the different types of covering inversions that occur. For example, let
$$z=971458326\qquad \text{with}\qquad \mathrm{CInv}(z)=\{(9,7),(9,8),(7,1),(7,4),(7,5),(8,3),(8,6),(3,2)\},$$
and consider the deshuffle
\begin{align*}
z_{\{1,2,3,4,5\}}&=14532\quad\text{with}\quad \iota(z_{\{1,2,3,4,5\}})=(0,3,2,0,0),\\
z_{\{6,7,8,9\}}&=4231\quad\text{with}\quad \iota(z_{\{6,7,8,9\}})=(3,1,1,0).
\end{align*}
Then $(3,2)$, $(9,7)$, and $(7,5)$ have fundamentally different behaviors in terms of $z_{\{1,2,3,4,5\}}$ and $z_{\{6,7,8,9\}}$, as outlined below.
\begin{description}
\item[Case 1.] If a covering inversion $(z(i),z(j))$ satisfies $z(i)\leq m$, then there is a corresponding covering inversion $(z(i),z(j))\in \mathrm{CInv}(z_{\{1,2,\ldots, m\}})$ and
\begin{equation}\label{SmallIndicesCover}
\Big(z^{\mathrm{rm}(\{(z(i),z(j))\})}\Big)_{\{1,2,\ldots, m\}}=(z_{\{1,2,\ldots,m\}})^{\mathrm{rm}(\{(z(i),z(j))\})}.
\end{equation}
In terms of their inversion table,
$$\iota_{z(j')}\Big((z^{\mathrm{rm}(\{(z(i),z(j))\})})_{\{1,2,\ldots,m\}}\Big)=\left\{\begin{array}{ll}
\iota_{z(j)}\big(z_{\{1,2,\ldots,m\}}\big)-1 & \text{if $j'=j$},\\
\iota_{z(j')}\big(z_{\{1,2,\ldots,m\}}\big) & \text{otherwise.}
\end{array}\right.$$
In this case, the removal of $(z(i),z(j))$ has no effect on $z_{\{m+1,\ldots, m+n\}}$.
\item[Case 2.] If a covering inversion $(z(i),z(j))$ satisfies $z(j)>m$, then there is a corresponding covering inversion $(z(i)-m,z(j)-m)\in \mathrm{CInv}(z_{\{m+1,\ldots, m+n\}})$, and
\begin{equation}\label{LargeIndicesCover}
\Big(z^{\mathrm{rm}(\{(z(i),z(j))\})}\Big)_{\{m+1,\ldots, m+n\}}=(z_{\{m+1,\ldots,m+n\}})^{\mathrm{rm}(\{(z(i)-m,z(j)-m)\})}.
\end{equation}
In terms of inversion tables,
$$\iota_{z(j')}\Big((z^{\mathrm{rm}(\{(z(i),z(j))\})})_{\{m+1,\ldots,m+n\}}\Big)=\left\{\begin{array}{ll}
\iota_{z(j)}\big(z_{\{m+1,\ldots,m+n\}}\big)-1 & \text{if $j'=j$},\\
\iota_{z(j')}\big(z_{\{m+1,\ldots,m+n\}}\big) & \text{otherwise.}
\end{array}\right.$$
In this case, the removal of $(z(i),z(j))$ has no effect on $z_{\{1,2,\ldots, m\}}$.
\item[Case 3.] If a covering inversion $(z(i),z(k))$ satisfies $z(i)>m$ and $z(k)\leq m$, then stranger things can happen. In this case,
$$\iota_{z(j)}\Big((z^{\mathrm{rm}(\{(z(i),z(k))\})})_{\{1,2,\ldots,m\}}\Big)=\left\{\begin{array}{ll}
\iota_{z(j)}\big(z_{\{1,2,\ldots,m\}}\big)+1 & \text{if $i<j<k$},\\
\iota_{z(j)}\big(z_{\{1,2,\ldots,m\}}\big) & \text{otherwise.}
\end{array}\right.$$
So in this case, removing the covering inversion can increase the inversion table of the deshuffled permutation, and the removal of $(z(i),z(k))$ has no effect on $z_{\{m+1,\ldots, m+n\}}$.
\end{description}
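Both the example above and the sum (\ref{ProductStrategy}) are small enough to recompute by machine. The Python sketch below is an informal aid of ours; it reads $\iota_j$ as the number of larger values occurring to the left of $j$ (which reproduces the inversion tables quoted above), removes covering inversions by switching entries in the order of Lemma \ref{CoveringLemma}, and compares deshuffles coordinatewise on inversion tables. Under these conventions it recovers the data for $z=971458326$ and evaluates (\ref{ProductStrategy}) for $v=21$, $w=1$, and every $z\in S_3$.
\begin{verbatim}
from itertools import combinations, permutations

def covering_inversions(z):
    # (z(i), z(j)) with i maximal such that i < j and z(i) > z(j)
    out = set()
    for j in range(2, len(z) + 1):
        bigger = [i for i in range(1, j) if z[i - 1] > z[j - 1]]
        if bigger:
            out.add((z[max(bigger) - 1], z[j - 1]))
    return out

def restrict(z, values):
    # keep the letters lying in `values`, then standardize
    word = [x for x in z if x in values]
    ranks = {v: r for r, v in enumerate(sorted(word), start=1)}
    return tuple(ranks[v] for v in word)

def iota(w):
    # iota_j(w) = #{k > j : k occurs to the left of j in w}
    pos = {v: p for p, v in enumerate(w, start=1)}
    return tuple(sum(1 for k in range(j + 1, len(w) + 1) if pos[k] < pos[j])
                 for j in range(1, len(w) + 1))

z = (9, 7, 1, 4, 5, 8, 3, 2, 6)                    # the running example, m = 5
assert covering_inversions(z) == {(9, 7), (9, 8), (7, 1), (7, 4),
                                  (7, 5), (8, 3), (8, 6), (3, 2)}
assert restrict(z, range(1, 6)) == (1, 4, 5, 3, 2)                 # 14532
assert iota(restrict(z, range(1, 6))) == (0, 3, 2, 0, 0)
assert restrict(z, range(6, 10)) == (4, 2, 3, 1)                   # 4231
assert iota(restrict(z, range(6, 10))) == (3, 1, 1, 0)

def remove_all(z, C):
    # z^{rm(C)}: remove covering inversions in increasing order of second entry
    z = list(z)
    for a, b in sorted(C, key=lambda c: c[1]):
        i, j = z.index(a), z.index(b)
        z[i], z[j] = z[j], z[i]
    return tuple(z)

def coeff(v, w, z):
    # the sum (ProductStrategy) over C in CInvS^z_{v,w} of (-1)^{|C|}
    m, n = len(v), len(w)
    def geq(x, y):
        return all(a >= b for a, b in zip(iota(x), iota(y)))
    CInv, total = sorted(covering_inversions(z)), 0
    for r in range(len(CInv) + 1):
        for C in combinations(CInv, r):
            y = remove_all(z, C)
            if geq(restrict(y, range(1, m + 1)), v) and \
               geq(restrict(y, range(m + 1, m + n + 1)), w):
                total += (-1) ** r
    return total

v, w = (2, 1), (1,)
table = {y: coeff(v, w, y) for y in permutations(range(1, 4))}
assert all(c in (-1, 0, 1) for c in table.values())
assert table[(2, 1, 3)] == 1 and table[(3, 1, 2)] == -1 and table[(3, 2, 1)] == 1
\end{verbatim}
In particular, the sketch gives $\bar\chi^{21}\bar\chi^{1}=\bar\chi^{213}-\bar\chi^{312}+\bar\chi^{321}$, with every coefficient in $\{-1,0,1\}$, in line with the goal stated above.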
\begin{lemma}\label{FreeLemma}
Let $v\in S_m$, $w\in S_n$, and $z\in S_{m+n}$. Then
\begin{enumerate}
\item[(a)] If $z$ has a free covering inversion with respect to $(v,w)$ then
$$\sum_{C\in \mathrm{CInvS}^z_{v,w}} (-1)^{|C|}=0.$$
\item[(b)] Suppose $\mathrm{CInvS}_{v,w}^z\neq \emptyset$. If $z$ has no free covering inversions with respect to $(v,w)$, then $z_{\{1,2,\ldots,m\}}\ngtr v$ and $z_{\{m+1,\ldots,m+n\}}= w$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) If $\mathrm{CInvS}^z_{v,w}=\emptyset$, then the sum is trivially 0. Otherwise, let $c$ be a free covering inversion with respect to $(v,w)$. Since $c$ is free, the map
$$\begin{array}{rccc}\phi: &\{C\in \mathrm{CInvS}^z_{v,w}\mid c\in C\} &\longrightarrow &\{D\in \mathrm{CInvS}^z_{v,w}\mid c\notin D\}\\ & C & \mapsto & C-\{c\}\end{array}$$
is a bijection satisfying $(-1)^{|\phi(C)|}=-(-1)^{|C|}$. It follows that the sum is zero.
(b) By (\ref{SmallIndicesCover}) and (\ref{LargeIndicesCover}), any covering inversion falling into those two cases respects the ordering on the deshuffled permutations. Thus, if $z_{\{1,2,\ldots,m\}}> v$ or $z_{\{m+1,\ldots,m+n\}}> w$, then there exists $(z(i),z(j))\in \mathrm{CInv}(z)$ with either $z(i)\leq m$ or $z(j)>m$ such that $\iota_{z(j)}(z)>\iota_{z(j)}(v.w)$; it follows that $(z(i),z(j))$ is free. If $z_{\{m+1,\ldots,m+n\}}\ngeq w$, then there is no set of covering inversions $C$ such that $z^{\mathrm{rm}(C)}_{\{m+1,\ldots, m+n\}}\geq w$. Thus, $z_{\{m+1,\ldots,m+n\}}=w$.
\end{proof}
\begin{corollary}
Suppose $\mathrm{CInvS}^z_{v,w}$ has no free covering inversions and $C\in\mathrm{CInvS}^z_{v,w}$. Then $(z(i),z(j))\in C$ implies $z(j)\leq m$.
\end{corollary}
For a set of covering inversions $C\subseteq \mathrm{CInv}(z)$, we can represent each covering inversion on a row of nodes by denoting $(z(i),z(j))$ with an arc connecting the $i$th node to the $j$th node. For example,
$$z=971458326\qquad \text{with}\qquad \mathrm{CInv}(z)=\{(9,7),(9,8),(7,1),(7,4),(7,5),(8,3),(8,6),(3,2)\},$$
would be given by
$$\begin{tikzpicture}[scale=.5]
\foreach \x/\y in {1/9,2/7,3/1,4/4,5/5,6/8,7/3,8/2,9/6}
{\node (\y) at (\x,0) [inner sep=-1pt] {$\bullet$};
\node at (\x,-.3) {$\scscs\y$};}
\foreach \s/\t in {9/7,9/8,7/1,7/4,7/5,8/3,8/6,3/2}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\ .$$
Note that by the definition of a covering inversion we cannot have $(z(i),z(k)),(z(j),z(l))\in C$ with $i<j<k<l$, since we would simultaneously have $z(k)>z(j)$ and $z(k)<z(j)$. Thus, the arc diagram will always be crossing free. A \emph{connected component} of $C\subseteq \mathrm{CInv}(z)$ is a nonempty set of arcs that forms a connected component of the graph.
\begin{lemma} \label{ComponentRemoval}
Let $C\subseteq \mathrm{CInv}(z)$ be a connected component with minimal element at position $i$.
\begin{enumerate}
\item[(a)] If $z(i)\leq m$, then
$$\iota_{z(j)}(z^{\mathrm{rm}(C)}_{\{1,2,\ldots,m\}})=\left\{\begin{array}{ll}
\iota_{z(j)}\big(z_{\{1,2,\ldots,m\}}\big)-1 & \text{if $(z(i'),z(j))\in C$ for some $i'\geq i$},\\
\iota_{z(j)}\big(z_{\{1,2,\ldots,m\}}\big) & \text{otherwise.}
\end{array}\right.$$
\item[(b)] Suppose $z(i)>m$ and that $z(j)\leq m$ for all $j>i$ with $(z(i),z(j))\in C$. Let $k$ be the maximal position in $C$. Then
$$\iota_{z(j)}(z^{\mathrm{rm}(C)}_{\{1,2,\ldots,m\}})=\left\{\begin{array}{ll@{}}
\iota_{z(j)}\big(z_{\{1,2,\ldots,m\}}\big) & \text{if $j<i$, $j>k$ or $(z(i'),z(j))\in C$ for some $i'\geq i$},\\
\iota_{z(j)}\big(z_{\{1,2,\ldots,m\}}\big)+1 & \text{otherwise.}
\end{array}\right.$$
\end{enumerate}
\end{lemma}
\begin{proof}
These follow from the fact that if $i=j_0<j_1<\cdots<j_\ell$ are the points of $C$, then $z^{\mathrm{rm}(C)}$ is the permutation given by
$$z^{\mathrm{rm}(C)}(k)=\left\{\begin{array}{ll} z(j_{r+1}) & \text{if $k=j_r$, $r<\ell$,}\\
z(i) & \text{if $k=j_\ell$,}\\
z(k) & \text{otherwise.}\end{array}\right.$$
In the first case, $z(i)$ does not get removed when restricting to the subset, and in the second case it does.
\end{proof}
\subsubsection{Nested covering inversions}
We say a covering inversion $(z(j),z(k))$ \emph{nests} in $C\subseteq \mathrm{CInv}(z)$ if there exists $i\leq j<k<l$ such that $(z(i),z(l))\in C$. Fix $v\in S_m$, $w\in S_n$ and $z\in S_{m+n}$ and assume $\mathrm{CInvS}_{v,w}^z\neq \emptyset$ with no free covering inversions. Let $C\in\mathrm{CInvS}_{v,w}^z$. We say a covering inversion $b\notin C$ is \emph{addable} if $b$ nests in $C$ and $C\cup \{b\}\in \mathrm{CInvS}_{v,w}^z$. Similarly, a covering inversion $b\in C$ is \emph{removable} if $b$ nests in $C$ and $C-\{b\}\in \mathrm{CInvS}_{v,w}^z$. If there exists $C\in \mathrm{CInvS}_{v,w}^z$ such that $b$ is addable (resp. removable) in $C$, then we say $b$ is addable (resp. removable) in $\mathrm{CInvS}_{v,w}^z$.
Let
\begin{equation}\label{CoreSet}
\mathrm{crSt}_{v,w}^z=\{C\in \mathrm{CInvS}_{v,w}^z\mid C\text{ has no addable and no removable inversions}\},
\end{equation}
and
$$\mathrm{core}_{v,w}(z)=\left\{\begin{array}{ll} \displaystyle\min_{C\in \mathrm{crSt}_{v,w}^z} \{|C|\} & \text{if $\mathrm{CInvS}_{v,w}^z$ has no free covering inversions and $|\mathrm{crSt}_{v,w}^z|\notin 2\mathbb{Z}$,}\\
0 & \text{otherwise}.\end{array}\right.$$
The main result of this section is the following product formula.
\begin{theorem}\label{PermutationCharacterProduct}
For $v\in S_m$ and $w\in S_n$,
$$\bar\chi^v\cdot \bar\chi^w=\sum_{z\in S_{m+n} \atop \mathrm{core}_{v,w}(z)\neq 0} (-1)^{\mathrm{core}_{v,w}(z)} \bar\chi^z.$$
\end{theorem}
By Lemma \ref{FreeLemma} (a), we may assume for the remainder of this section that $\mathrm{CInvS}_{v,w}^z$ has no free covering inversions. The basic strategy of the proof is to construct a number of sign-reversing bijections on subsets of $\mathrm{CInvS}_{v,w}^z$ that gradually reduce the sum in (\ref{ProductStrategy}) to at most one term. The first lemma sets up the underlying philosophy.
\begin{lemma} \label{BaseNesting} Assume $\mathrm{CInv}(z)$ has no free covering inversions. Let $b\in \mathrm{CInv}(z)$ be removable in $\mathrm{CInvS}_{v,w}^z$. Then the function
$$\begin{array}{ccc} \{C\in \mathrm{CInvS}_{v,w}^{z}\mid b\text{ is removable in $C$}\} & \longrightarrow & \{D\in \mathrm{CInvS}_{v,w}^{z}\mid b\text{ is addable in $D$}\}\\
C & \mapsto & C-\{b\}
\end{array}$$
is a bijection.
\end{lemma}
\begin{proof}
The goal is to show that the function is well defined, that is, that $C-\{b\}\in \mathrm{CInvS}_{v,w}^{z}$. Write $b=(z(i),z(j))$. Since there are no free covering inversions, either $z(j)$ is in the same connected component as $z(i)$ or $z(j)\leq m$. By Lemma \ref{ComponentRemoval}, removing a nested arc only (weakly) increases the inversion table over $\{1,2,\ldots, m\}$, so if $C\in \mathrm{CInvS}_{v,w}^{z}$, then so is $C-\{b\}$.
\end{proof}
Thus, there is a subset $\mathcal{R}(\{b\})\subseteq \mathrm{CInvS}_{v,w}^z$ such that
$$\sum_{C\in \mathrm{CInvS}_{v,w}^z} (-1)^{|C|}=\sum_{C\in \mathcal{R}(\{b\})} (-1)^{|C|}.$$
In this case,
$$\mathcal{R}(\{b\})=\{C\in \mathrm{CInvS}_{v,w}^z\mid \text{$b$ is neither addable nor removable in $C$}\}.$$
The following iterates this procedure.
Let $\cB$ be a set of covering inversions removable in $\mathrm{CInvS}_{v,w}^z$. Let
$$
\mathcal{R}(\cB)=\left\{C\in \mathrm{CInvS}_{v,w}^z\mid \text{$C$ has no addable or removable inversions in $\cB$}\right\}.
$$
A \emph{removal sequence} of a set of removable inversions $\cB$ is a bijection
$$\begin{array}{rccc} \eta: & \{1,2,\ldots, |\cB|\} & \longrightarrow &\cB\\
& j & \mapsto & \eta(j)\end{array}$$
such that for each $1\leq j\leq |\cB|$,
$$\eta(j)\text{ is removable in $C$ for some $C\in\mathcal{R}(\{b\in \cB\mid \eta^{-1}(b)<j\})$.}$$
In this situation, let
\begin{equation*}
\mathcal{K}_j(\cB;\eta)=\mathcal{R}(\{b\in \cB\mid \eta^{-1}(b)< j\})-\mathcal{R}(\{b\in \cB\mid \eta^{-1}(b)\leq j\}).
\end{equation*}
We obtain a set partition
$$\{\mathcal{K}_j(\cB;\eta)\mid 1\leq j\leq |\cB|\}\cup \mathcal{R}(\cB)$$
of $\mathrm{CInvS}_{v,w}^z$.
\begin{example}
Suppose $z=917426358$, $v=7142635$, and $w=21$. Then
\begin{align*}
\mathrm{CInv}(z)&=
\begin{tikzpicture}[scale=.5,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\bullet$};
\node at (\x,-.3) {$\scscs\l$};}
\foreach \s/\t in {9/1,9/8,9/7,7/4,7/6,4/2,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\\
\mathrm{CInvS}_{v,w}^z&=\left\{\begin{array}{@{}c@{}}
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,4/2}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\\
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,6/3}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\\
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,6/3}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2,6/3}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2,6/3}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\\
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\\
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}
\end{array}\right\}
\end{align*}
Let
$$\cB=\{(6,3),(4,2)\}\quad \text{with}\quad (\eta(1),\eta(2))=((6,3),(4,2)).$$
Then
\begin{align*}
\mathcal{K}_1(\cB;\eta)&=\left\{\begin{array}{@{}c@{}}
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2,6/5}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\\
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (6) to [in=120, out=60] (3);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (6) to [in=120, out=60] (3);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (6) to [in=120, out=60] (3);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2,6/3,6/5}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (6) to [in=120, out=60] (3);
\end{tikzpicture}
\end{array}\right\}\\
\mathcal{K}_2(\cB;\eta)&=\left\{\begin{array}{@{}c@{}}
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,6/3}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,6/3}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}\\
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (4) to [in=120, out=60] (2);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (4) to [in=120, out=60] (2);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/6,4/2,6/3}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (4) to [in=120, out=60] (2);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,7/6,4/2,6/3}
\draw (\s) to [in=120, out=60] (\t);
\draw[thick] (4) to [in=120, out=60] (2);
\end{tikzpicture}
\end{array}\right\}\\
\mathcal{R}(\cB)&=\Big\{
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture},
\begin{tikzpicture}[scale=.3,baseline=0cm]
\foreach \x/\l in {1/9,2/1,3/7,4/4,5/2,6/6,7/3,8/5,9/8}
{\node (\l) at (\x,0) [inner sep=-1pt] {$\scriptstyle\bullet$};
\node at (\x,-.5) {$\scscs\l$};}
\foreach \s/\t in {9/7,7/4,4/2}
\draw (\s) to [in=120, out=60] (\t);
\end{tikzpicture}
\Big\}.
\end{align*}
\end{example}
Note by the definition of $\mathcal{K}_j(\cB;\eta)$,
$$\mathcal{K}_j(\cB;\eta)=\left\{C\in \mathcal{K}_{j}(\cB;\eta)\mid \eta(j)\text{ removable in $C$}\right\} \cup \{D\in \mathcal{K}_{j}(\cB;\eta)\mid \text{$\eta(j)$ addable in $D$}\}.$$
The following lemma gives a sign-reversing bijection between these two non-intersecting subsets in the way suggested by Lemma \ref{BaseNesting}.
\begin{lemma}
Suppose $\cB$ is a set of covering inversions removable in $\mathrm{CInvS}_{v,w}^z$ that admits a removal sequence $\eta$. Then for $1\leq j\leq |\cB|$, the function
$$\begin{array}{ccc} \left\{C\in \mathcal{K}_{j}(\cB;\eta)\mid \eta(j)\text{ removable in $C$}\right\} & \longrightarrow & \{D\in \mathcal{K}_{j}(\cB;\eta)\mid \text{$\eta(j)$ addable in $D$}\}\\
C & \mapsto & C-\{\eta(j)\}
\end{array}$$
is a bijection.
\end{lemma}
\begin{proof}
The goal is to show that the map is well-defined. Suppose $C-\{\eta(j)\}\notin \mathcal{K}_i(\cB;\eta)$ for some $i\leq j$. Then $\eta(i)$ is either addable or removable in $C-\{\eta(j)\}$. But adding $\eta(j)$ cannot affect whether $\eta(i)$ is addable or removable, so $C\in \mathcal{K}_i(\cB;\eta)$ and $i=j$.
\end{proof}
As a consequence of this iterated procedure,
\begin{equation} \label{LastIteration}
\sum_{C\in \mathrm{CInvS}_{v,w}^z} (-1)^{|C|}=\sum_{C\in \mathcal{R}(\cB)} (-1)^{|C|}.
\end{equation}
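In the example above, the eight diagrams of $\mathcal{K}_1(\cB;\eta)$ cancel in sign-reversing pairs obtained by removing or adding the thick arc $(6,3)$, and the eight diagrams of $\mathcal{K}_2(\cB;\eta)$ likewise cancel in pairs through $(4,2)$, so (\ref{LastIteration}) reduces to the three diagrams of $\mathcal{R}(\cB)$, of sizes $1$, $2$ and $3$:
$$\sum_{C\in \mathrm{CInvS}_{v,w}^z} (-1)^{|C|}=(-1)^{1}+(-1)^{2}+(-1)^{3}=-1.$$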
For $1\leq i\leq m+n$, let
$$\mathrm{ccs}_{v,w}^z(i)=\left\{C\subseteq \mathrm{CInv}(z)\ \bigg|\ \begin{array}{@{}l@{}}\text{$C$ a connected component for some}\\ \text{$D\in \mathrm{crSt}_{v,w}^z$ with smallest position $i$}\end{array}\right\},$$
where we recall the notation $\mathrm{crSt}_{v,w}^z$ from (\ref{CoreSet}).
\begin{lemma}\label{RemainingSets}
Let $\cB$ be a set of covering inversions removable in $\mathrm{CInvS}_{v,w}^z$ maximal with the property that there exists a removal sequence $\eta$. Then
\begin{enumerate}
\item[(a)] $\mathcal{R}(\cB)=\mathrm{crSt}_{v,w}^z$,
\item[(b)] for $z(i)>m$, the set $\mathrm{ccs}_{v,w}^z(i)$ forms a chain under containment, such that if $B$ covers $A$ in $\mathrm{ccs}_{v,w}^z(i)$, then $|B-A|=1$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a) By definition $\mathrm{crSt}_{v,w}^z\subseteq \mathcal{R}(\cB)$. Let $C\in \mathcal{R}(\cB)$. By the maximality of $\cB$, $C$ has no removable inversions. Suppose $b$ is an addable inversion for $C$. Then $D=C\cup\{b\}\in \mathcal{K}_j(\cB;\eta)$ for some $1\leq j\leq |\cB|$. By assumption $b\neq \eta(j)$. Thus, $\eta(j)\in C$. Since $\eta(j)$ is nested in $D$ it must also be so in $C$ (in the worst case it is nested in $b$, but $b$ is itself nested). Thus, $C\in \mathcal{K}_j(\cB;\eta)$, a contradiction.
(b) Suppose $A, B\in \mathrm{ccs}_{v,w}^z(i)$ with $B\neq \emptyset$. Let $k$ be the largest position of $B$. If $A=\emptyset$, then $A\subseteq B$. Else, let $A=\{(i,a_1),(a_1,a_2),\ldots,(a_{\ell-1},a_\ell)\}$ and $B=\{(i,b_1),(b_1,b_2),\ldots,(b_{r-1},b_r)\}$. WLOG suppose $\ell\leq r$. If $A\nsubseteq B$, then there exists a minimal $h<\ell$ such that $(a_h,a_{h+1})\neq (b_h,b_{h+1})$ (in particular, $a_h=b_h$ and $a_{h+1}\neq b_{h+1}$). However, in this case either $(a_h,a_{h+1})$ is addable in $B$ or $(b_h,b_{h+1})$ is addable in $A$, both contradicting (a). Thus, $A\subseteq B$. If $B$ covers $A$, then by the above argument, $(a_h,a_{h+1})= (b_h,b_{h+1})$ for all $h\leq \ell$. Now $A,B\in \mathrm{ccs}_{v,w}^z(i)$ implies $A\cup \{(b_{\ell},b_{\ell+1})\}\in \mathrm{ccs}_{v,w}^z(i)$, so $r=\ell+1$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{PermutationCharacterProduct}]
Choose a set $\cB$ of covering inversions removable in $\mathrm{CInvS}_{v,w}^z$ maximal with the property that there exists a removal sequence $\eta$. Note that by Lemma \ref{RemainingSets} (a) and (\ref{LastIteration}), we are interested in
$$\sum_{C\in \mathrm{crSt}_{v,w}^z} (-1)^{|C|}.$$
Let
$$I=\{1\leq i\leq m+n\mid z(i)>m,z(i+1)\leq m\}.$$
Since $\mathrm{CInvS}_{v,w}^z$ has no free covering inversions, every $C\in \mathrm{crSt}_{v,w}^z$ has connected components whose smallest positions are in $I$. Write
$$C=\bigcup_{i\in I} C_i,$$
where $C_{i}\in \mathrm{ccs}_{v,w}^z(i)$. Conversely, each collection of choices $B_{i}\in \mathrm{ccs}_{v,w}^z(i)$ for $i\in I$ gives
$$\bigcup_{i\in I} B_i\in \mathrm{crSt}_{v,w}^z.$$
Let $\min\in \mathrm{crSt}_{v,w}^z$ be the choice where $\min_{i}\in \mathrm{ccs}_{v,w}^z(i)$ is minimal for each $i\in I$.
Then
$$\sum_{C\in \mathrm{crSt}_{v,w}^z} (-1)^{|C|}=\prod_{i\in I} \bigg(\sum_{C_{i}\in \mathrm{ccs}_{v,w}^z(i)} (-1)^{|C_{i}|}\bigg)=\left\{\begin{array}{ll}
\displaystyle \prod_{i\in I} (-1)^{|\min_{i}|} & \text{if $|\mathrm{ccs}_{v,w}^z(i)|\notin2\mathbb{Z}$ for all $i\in I$,}\\ 0 & \text{otherwise.}
\end{array}\right.$$
Finally, $\mathrm{core}_{v,w}(z)=\sum_{i\in I} |\min_{i}|$ if $|\mathrm{ccs}_{v,w}^z(i)|\notin 2\mathbb{Z}$ for all $i\in I$ and 0 otherwise, recovering the result.
\end{proof}
\section{A final Hopf monoid remark}
The representation theoretic point of view suggests a Hopf monoid version of $\mathrm{FQSym}$. We refer the reader to \cite{AM10} for more details on Hopf monoids. Define a vector species
$$\begin{array}{rccc} \mathrm{scf}(\mathfrak{ut}):& \{\text{sets}\} & \longrightarrow & \{\text{vector spaces}\}\\
& A & \mapsto & \bigoplus_{\phi\in \mathcal{L}[A]} \mathrm{scf}(\mathfrak{ut}_\phi),
\end{array}$$
where $\mathcal{L}[A]$ is the set of linear orders on $A$ and $\mathfrak{ut}_\phi$ is the group of upper-triangular matrices with rows and columns indexed by $A$ in the order given by $\phi$.
For non-intersecting sets $A$ and $B$ with $\phi\in \mathcal{L}[A]$, $\tau\in\mathcal{L}[B]$ and $\gamma\in \mathcal{L}[A\cup B]$, define a product by the linear extension of
$$\begin{array}{rccc}m_{B,A} :& \mathrm{scf}(\mathfrak{ut}_\phi) \otimes \mathrm{scf}(\mathfrak{ut}_\tau) & \longrightarrow & \displaystyle\bigoplus_{\gamma\in \phi\shuffle \tau} \mathrm{scf}(\mathfrak{ut}_{\gamma})\\
& \psi \otimes \eta & \mapsto &\displaystyle \sum_{\gamma\in \phi\shuffle \tau} \mathrm{Exfl}_{\mathfrak{ut}_\phi\times \mathfrak{ut}_\tau}^{\mathfrak{ut}_{\gamma}} (\psi^\star\otimes\eta^\star)^\star,
\end{array}$$
and coproduct by the linear extension of
$$\begin{array}{rccc} \Delta_{B,A}: & \mathrm{scf}(\mathfrak{ut}_\gamma) & \longrightarrow & \mathrm{scf}(\mathfrak{ut}_{\gamma_B})\otimes \mathrm{scf}(\mathfrak{ut}_{\gamma_A})\\
&\psi & \mapsto & \mathrm{Dela}_{\mathfrak{ut}_{\gamma_B}\times \mathfrak{ut}_{\gamma_A}}^{\mathfrak{ut}_{\gamma}} (\psi).
\end{array}$$
\begin{remark}
Unlike in \cite{ABT}, the underlying functions on the towers of groups do not come from a Hopf structure on linear orders. There will therefore not be an (obvious) Hadamard product in this case.
\end{remark}
\begin{theorem}
The species $\mathrm{scf}(\mathfrak{ut})$ is a Hopf monoid.
\end{theorem}
\begin{proof}
We check the compatibility condition
$$\Delta_{T,B}\circ m_{L,R}=(m_{LT,RT}\otimes m_{LB,RB})\circ (\Delta_{LT,LB}\otimes \Delta_{RT,RB}).$$
for $L\sqcup R=T\sqcup B$, $LT=L\cap T$, $RT=R\cap T$, $LB=L\cap B$, and $RB=R\cap B$.
Let $v\in S_\phi$ and $w\in S_\tau$. On the one hand,
\begin{align*}
\mathrm{LHS}(\chi^v\otimes \chi ^w) &=\sum_{\gamma\in \phi\shuffle\tau} \mathrm{Dela}_{\mathfrak{ut}_{\gamma_T}\times \mathfrak{ut}_{\gamma_B}}^{\mathfrak{ut}_{\gamma}}(\chi^{v\shuffle_{\gamma^{-1}(R)} w})\\
&=\sum_{\gamma\in \phi\shuffle \tau\atop T=(v\shuffle_{\gamma^{-1}(R)} w)\circ\gamma(\{1,2,\ldots,|T|\})} \chi^{(v\shuffle_{\gamma^{-1}(R)} w)_T}\otimes\chi^{(v\shuffle_{\gamma^{-1}(R)} w)_B}.
\end{align*}
On the other hand,
\begin{align*}
(\Delta_{LT,LB}\otimes \Delta_{RT,RB})&(\chi^v\otimes \chi ^w) \\
&= \left\{\begin{array}{ll} \chi^{v_{LT}}\otimes \chi^{v_{LB}}\otimes \chi^{w_{RT}}\otimes \chi^{w_{RB}} & \begin{array}{@{}l@{}}\text{if $LT=\{v\circ\phi(j)\mid 1\leq j\leq |LT|\}$}\\ RT=\{w\circ\tau(j)\mid 1\leq j\leq |RT|\},\end{array}\\
0 & \text{otherwise,}\end{array}\right.
\end{align*}
so
\begin{align*}
\mathrm{RHS}&(\chi^v\otimes \chi ^w)\\ &= \left\{\begin{array}{ll}
\displaystyle \sum_{\alpha\in \phi_{LT}\shuffle \tau_{RT}\atop \beta\in \phi_{LB}\shuffle \tau_{RB}}\chi^{v_{LT}\shuffle_{\alpha^{-1}(RT)} w_{RT}}\otimes \chi^{v_{LB}\shuffle_{\beta^{-1}(RB)} w_{RB}} & \begin{array}{@{}l@{}}\text{if $LT=\{v\circ\phi(j)\mid 1\leq j\leq |LT|\}$}\\ RT=\{w\circ\tau(j)\mid 1\leq j\leq |RT|\},\end{array}\\
0 & \text{otherwise.}\end{array}\right.
\end{align*}
Note that by inspection both the LHS and RHS are multiplicity free. Suppose
$$\Big((v\shuffle_{\gamma^{-1}(R)} w)_T,(v\shuffle_{\gamma^{-1}(R)} w)_B\Big)$$
satisfies $\gamma\in \phi\shuffle \tau$ and $T=(v\shuffle_{\gamma^{-1}(R)} w)\circ\gamma(\{1,2,\ldots,|T|\})$. Then for $1\leq j\leq |T|$,
\begin{itemize}
\item $\gamma(j)\in L$ if and only if $v(\phi(\tilde j))\in LT$, where $\tilde{j}=j-\#\{i<j\mid \gamma(i)\in R\}$,
\item $\gamma(j)\in R$ if and only if $w(\tau(j'))\in RT$, where $j'=j-\#\{i<j\mid \gamma(i)\in L\}$.
\end{itemize}
Thus,
$$\Big((v\shuffle_{\gamma^{-1}(R)} w)_T,(v\shuffle_{\gamma^{-1}(R)} w)_B\Big)=\Big(v_{LT}\shuffle_{\gamma_T^{-1}(RT)} w_{RT},v_{LB}\shuffle_{\gamma_B^{-1}(RB)} w_{RB}\Big),$$
where $LT=\{v\circ\phi(j)\mid 1\leq j\leq |LT|\}$ and $RT=\{w\circ\tau(j)\mid 1\leq j\leq |RT|\}$.
Conversely, if
$$\Big(v_{LT}\shuffle_{\alpha^{-1}(RT)} w_{RT},v_{LB}\shuffle_{\beta^{-1}(RB)} w_{RB}\Big),$$
satisfies $\alpha\in \phi_{LT}\shuffle \tau_{RT}$, $\beta\in \phi_{LB}\shuffle \tau_{RB}$, $LT=\{v\circ\phi(j)\mid 1\leq j\leq |LT|\}$ and $RT=\{w\circ\tau(j)\mid 1\leq j\leq |RT|\}$, then
$$\Big(v_{LT}\shuffle_{\alpha^{-1}(RT)} w_{RT},v_{LB}\shuffle_{\beta^{-1}(RB)} w_{RB}\Big)=\Big((v\shuffle_{(\alpha.\beta)^{-1}(R)} w)_T,(v\shuffle_{(\alpha.\beta)^{-1}(R)} w)_B\Big),$$
where $T=(v\shuffle_{(\alpha.\beta)^{-1}(R)} w)\circ(\alpha.\beta)(\{1,2,\ldots,|T|\})$.
\end{proof}
\begin{example}
We conclude with the following example of the compatibility proved above. Let $\phi=(\clubsuit,\spadesuit,\blacklozenge)$, $v=(\blacklozenge,\spadesuit,\clubsuit)$, $\tau=(\heartsuit,\Box)$ and $w=(\Box,\heartsuit)$. Let
$$T=\{\blacklozenge,\spadesuit,\Box\}\quad \text{and} \quad B=\{\clubsuit,\heartsuit\}.$$
Then
$$\begin{array}{ccc}
\mathrm{scf}(\mathfrak{ut}_\phi)\otimes \mathrm{scf}(\mathfrak{ut}_\tau) & \longrightarrow &\displaystyle \bigoplus_{\gamma\in \phi\shuffle \tau} \mathrm{scf}(\mathfrak{ut}_\gamma)\\
\chi^{(\blacklozenge,\spadesuit,\clubsuit)}\otimes \chi^{(\Box,\heartsuit)} & \mapsto &\begin{array}{@{}c@{}}
\chi^{(\blacklozenge,\spadesuit,\clubsuit,\Box,\heartsuit)}+ \chi^{(\blacklozenge,\spadesuit,\Box,\clubsuit,\heartsuit)}+\chi^{(\blacklozenge,\Box,\spadesuit,\clubsuit,\heartsuit)}
+\chi^{(\Box,\blacklozenge,\spadesuit,\clubsuit,\heartsuit)}\\+ \chi^{(\blacklozenge,\spadesuit,\Box,\heartsuit,\clubsuit)}+\chi^{(\blacklozenge,\Box,\spadesuit,\heartsuit,\clubsuit)}
+\chi^{(\blacklozenge,\Box,\heartsuit,\spadesuit,\clubsuit)}+ \chi^{(\Box,\blacklozenge,\spadesuit,\heartsuit,\clubsuit)}\\+\chi^{(\Box,\blacklozenge,\heartsuit,\spadesuit,\clubsuit)}
+\chi^{(\Box,\heartsuit,\blacklozenge,\spadesuit,\clubsuit)}
\end{array}
\\
& \longrightarrow & \displaystyle \bigoplus_{\gamma\in \phi\shuffle \tau} \mathrm{scf}(\mathfrak{ut}_{\gamma_T})\otimes \mathrm{scf}(\mathfrak{ut}_{\gamma_B})\\
& \mapsto & \begin{array}{@{}c@{}}
0+ \chi^{(\blacklozenge,\spadesuit,\Box)}\otimes\chi^{(\clubsuit,\heartsuit)}+\chi^{(\blacklozenge,\Box,\spadesuit)}\otimes\chi^{(\clubsuit,\heartsuit)}
+\chi^{(\Box,\blacklozenge,\spadesuit)}\otimes\chi^{(\clubsuit,\heartsuit)}\\+ \chi^{(\blacklozenge,\spadesuit,\Box)}\otimes\chi^{(\heartsuit,\clubsuit)}+\chi^{(\blacklozenge,\Box,\spadesuit)}\otimes\chi^{(\heartsuit,\clubsuit)}
+0+ \chi^{(\Box,\blacklozenge,\spadesuit)}\otimes\chi^{(\heartsuit,\clubsuit)}\\+0
+0
\end{array}
\end{array}$$
On the other hand,
$$\begin{array}{ccc}
\mathrm{scf}(\mathfrak{ut}_\phi)\otimes \mathrm{scf}(\mathfrak{ut}_\tau) & \longrightarrow &\displaystyle \mathrm{scf}(\mathfrak{ut}_{\gamma_{LT}})\otimes \mathrm{scf}(\mathfrak{ut}_{\gamma_{LB}})\otimes \mathrm{scf}(\mathfrak{ut}_{\gamma_{RT}})\otimes \mathrm{scf}(\mathfrak{ut}_{\gamma_{RB}})\\
\chi^{(\blacklozenge,\spadesuit,\clubsuit)}\otimes \chi^{(\Box,\heartsuit)} & \mapsto & \chi^{(\blacklozenge,\spadesuit)}\otimes \chi^{(\clubsuit)}\otimes \chi^{(\Box)}\otimes \chi^{(\heartsuit)}\\
& \longrightarrow & \displaystyle\bigoplus_{\alpha\in \gamma_{LT}\shuffle \gamma_{RT}\atop \beta\in \gamma_{LB}\shuffle \gamma_{RB}} \mathrm{scf}(\mathfrak{ut}_\alpha)\otimes \mathrm{scf}(\mathfrak{ut}_\beta)\\
& \mapsto & \begin{array}{@{}c@{}}
\chi^{(\blacklozenge,\spadesuit,\Box)}\otimes\chi^{(\clubsuit,\heartsuit)}+\chi^{(\blacklozenge,\Box,\spadesuit)}\otimes\chi^{(\clubsuit,\heartsuit)}
+\chi^{(\Box,\blacklozenge,\spadesuit)}\otimes\chi^{(\clubsuit,\heartsuit)}\\+ \chi^{(\blacklozenge,\spadesuit,\Box)}\otimes\chi^{(\heartsuit,\clubsuit)}+\chi^{(\blacklozenge,\Box,\spadesuit)}\otimes\chi^{(\heartsuit,\clubsuit)}
+ \chi^{(\Box,\blacklozenge,\spadesuit)}\otimes\chi^{(\heartsuit,\clubsuit)}
\end{array}
\end{array}$$
Note that if $T=\{\blacklozenge,\spadesuit,\heartsuit\}$, then we get 0 in both cases.
\end{example}
The stunning discovery of the 125 GeV Higgs-like particle at the Large Hadron Collider~\cite{:2012gk,:2012gu}
does not exclude new physics beyond the Standard Model (BSM) in the framework
of some new strongly-interacting gauge
theory with a composite Higgs mechanism, an idea which was outside experimental
reach when it was first introduced as an attractive BSM scenario
~\cite{Weinberg:1979bn,Susskind:1978ms,Dimopoulos:1979es,
Eichten:1979ah,Farhi:1980xs,Holdom:1984sk,Appelquist:1987fc,Miransky:1996pd}.
The original framework has been considerably extended by new
explorations of the multi-dimensional theory space in fermion flavor number, the choice of color gauge group, and
fermion representation~\cite{Caswell:1974gg,Banks:1981nn,Marciano:1980zf,Kogut:1984sb,Appelquist:2003hn, Sannino:2004qp,
Dietrich:2005jn,Luty:2004ye,Dietrich:2006cm,Kurachi:2006ej}.
Systematic and non-perturbative lattice studies play an important role in studies of this extended theory
space~\cite{Fodor:2009wk,Fodor:2011tw,Fodor:2011tu,Fodor:2012uu,Appelquist:2007hu,Appelquist:2009ty,
Appelquist:2011dp,Appelquist:2009ka,Deuzeman:2008sc,
Deuzeman:2009mh,Deuzeman:2011pa,Hasenfratz:2009ea,Hasenfratz:2010fi,Cheng:2011ic,Jin:2009mc,Jin:2010vm,
Aoki:2012kr,Catterall:2007yx,Catterall:2008qk,Hietanen:2008mr,
Hietanen:2009az,DelDebbio:2010hx,Bursa:2010xn,DelDebbio:2010ze,DelDebbio:2011kp,
Shamir:2008pb,DeGrand:2010na,DeGrand:2011cu,Kogut:2010cz,Kogut:2011ty, Bilgici:2009kh,
Itou:2010we,Yamada:2009nt,Hayakawa:2010yn,Gavai:1985wi,Attig:1987mf,
Meyer:1990xd,Damgaard:1997ut,Kim:1992pk,Brown:1992fz,Iwasaki:2003de}.
Even without spin and parity information, the new Higgs-like particle with decay modes
not far from those of
the Standard Model brings new focus and clarity to the search for the proper theoretical
framework.
One example is the light dilaton, the pseudo-Goldstone particle of spontaneously broken
scale invariance, which has been featured in recent phenomenological discussions as a viable
interpretation of the discovery~\cite{Ellis:2012hz,Low:2012rj}.
Nearly conformal
gauge theories serve as theoretical laboratories for realistic implementations
of this scenario ~\cite{Yamawaki:1985zg,Bardeen:1985sm,Holdom:1986ub,Goldberger:2007zk,
Appelquist:2010gy,Grinstein:2011dq,Antipin:2011aa,Hashimoto:2010nw,Matsuzaki:2012fq}. Unfortunately,
a credible realization of the idea as a strongly interacting BSM gauge theory
is still lacking. We investigate here
a candidate theory with a fermion flavor doublet in the two-index symmetric (sextet) representation
of the SU(3) color gauge group close to the conformal window, asking whether it can accommodate a light Higgs-like scalar state with or without
a dilaton-like interpretation.
The sextet force,
with a new fermion doublet driving electroweak symmetry breaking, was introduced in QCD a long time ago by Marciano~\cite{Marciano:1980zf}.
Early pioneering lattice work, limited to the quenched approximation at that time,
investigated the sextet fermion representation~\cite{Kogut:1984sb}.
The main difference in the model we investigate here is the introduction of a new
SU(3) gauge force not associated with QCD gluons and motivated by ideas of compositeness from a new super-strong force.
After chiral symmetry breaking we find three massless Goldstone pions in the spectrum
providing the minimal realization of the Higgs
mechanism, just like in the original technicolor idea~\cite{Weinberg:1979bn,Susskind:1978ms}.
The important new ingredient is the sextet representation of the fermion doublet which brings the model very
close to the conformal window as indicated in a recent paper~\cite{DeGrand:2012yq}.
The accuracy of those difficult simulations of the very small, nearly vanishing $\beta$-function could not distinguish
the existence of a conformal fixed point in the gauge coupling from the alternative slowly walking scenario.
When combined with our
observation of chiral symmetry breaking ($\chi{\rm SB}$) reported here for small fermion mass deformations,
the overall consistency of all simulations is resolved if the sextet model
is close to the conformal window with a very small non-vanishing $\beta$-function (see, also~\cite{Kogut:2010cz,Kogut:2011ty}).
In this case the model exhibits the simplest composite Higgs mechanism and leaves open the possibility
of a light scalar state with quantum numbers of the Higgs impostor emerging as the
pseudo-Goldstone dilaton state from spontaneous symmetry breaking of scale invariance.
Even if scale symmetry breaking is entangled with
$\chi{\rm SB}$ without dilaton interpretation,
a light Higgs-like scalar state can emerge from the new force close to the conformal window.
Our new Higgs project with lattice simulations in the sextet model may resolve these important problems.
In section 2 we will outline the computational strategy and the simulation set-up including the important treatment
of finite size effects. In section 3 results on the chiral condensate are presented with extrapolation to the massless
fermion limit. The spectrum is presented in Section 4 and compared with the $\chi{\rm SB}$ hypothesis.
In Section 5 it is shown that the fermion mass dependence of
the spectrum in this model is not consistent with conformal scaling behavior near the critical surface
of a conformal theory. Section 6 will describe the new Higgs project to determine the scalar $0^{++}$ mass spectrum
when disconnected diagrams are included in the calculations. Closely related to the dilaton interpretation, we also outline
in Section 6 the important role of the non-perturbative gluon
condensate and our strategy for investigating it within our new Higgs project.
\section{Computational strategy and lattice simulations}
Probing $\chi{\rm SB}$, and conformal behavior for comparison, we extrapolate the spectrum
to infinite volume at fixed fermion mass $m$. In large volumes the leading
finite size corrections are exponentially small and dominated by the lowest state of the spectrum which has pion quantum numbers.
From the mass spectrum, extrapolated to infinite volume, we can probe the pattern of $\chi{\rm SB}$
when small fermion mass deformations are simulated close to the massless limit. We also probe the hypothesis of
mass deformed conformal scaling behavior. Our results, as we report here, strongly favor the $\chi{\rm SB}$ hypothesis.
\subsection{The algorithm and simulation results}
We have used the tree-level Symanzik-improved gauge action for all simulations reported in this paper.
The conventional $\beta=6/g^2$ lattice gauge coupling is defined as the overall
factor in front of the well-known terms of the Symanzik lattice action. Its values are $\beta=3.20$ and
$\beta=3.25$ for our simulations.
The link variables in the staggered fermion matrix were exponentially smeared with two
stout steps~\cite{Morningstar:2003gk}; the precise definition of the staggered stout action was given in~\cite{Aoki:2005vt}.
The RHMC algorithm was deployed in all runs. The fermion flavor doublet requires rooting in the algorithm.
For molecular dynamics time evolution we applied multiple time scales~\cite{Urbach:2005ji} and the
Omelyan integrator~\cite{Takaishi:2005tz}.
Our error analysis of hadron masses used correlated fitting with a double jackknife procedure on the covariance matrices~\cite{DelDebbio:2007pz}.
The time histories of the fermion condensate, the plaquette,
and correlators were used to monitor autocorrelation times in the simulations.
We have new simulation results at $\beta=3.2$ in the fermion mass range ${\rm m=0.003-0.010}$ on
$24^3\times48$, $28^3\times56$, and $32^3\times64$ lattices. Five fermion masses
at ${\rm m=0.003,0.004,0.005,0.006,0.008}$ are used in most fits.
A very large and expensive $48^3\times96$
run was added recently at ${\rm m=0.003}$ to control finite size effects.
We also have new simulation results at $\beta=3.25$ in the mass range ${\rm m=0.004-0.008}$ on
$24^3\times48$, $28^3\times56$, and $32^3\times64$ lattices.
\subsection{Finite size effects}
Infinite-volume extrapolations of the lowest state in the spectrum
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=5cm]{figures/MpiFinVolume3p2m0p003.pdf}\\
\includegraphics[height=5cm]{figures/FpiFinVolume3p2m0p003.pdf}\\
\includegraphics[height=5cm]{figures/pbpFinVolume3p2m0p003.pdf}
\end{tabular}
\end{center}
\vskip -0.2in
\caption{\footnotesize Finite volume dependence at the lowest fermion mass for $\beta=3.2$.
The form of $\widetilde g_1(\lambda,\eta)$ is a complicated infinite sum which contains Bessel
functions and requires numerical evaluation~\cite{Gasser:1986vb}. Since we are not in the chiral log regime, the prefactor of
the $\widetilde g_1(\lambda,\eta)$ function was replaced by a fitted coefficient. The leading term of the function
$\widetilde g_1(\lambda,\eta)$ is a special exponential Bessel function $K_1(\lambda)$ which dominates in the simulation range.}
\vskip -0.1in
\label{fig:sextetInfVol}
\end{figure}
with pion quantum numbers, the related $F_\pi$,
and the condensate $\langle\overline\psi\psi\rangle$
are shown in Figure~\ref{fig:sextetInfVol} where $\widetilde g_1(\lambda,\eta)$ describes finite volume corrections
from the exchange of the lightest pion state with $\lambda=M_\pi L$ and lattice aspect ratio $\eta=T/L$,
similarly to what was introduced in~\cite{Leutwyler:1987ak}.
The fitting procedure approximates the leading treatment of the pion which wraps around the finite volume,
whether in chiral perturbation theory ($\chi{\rm PT}$) or in L\"uscher's non-perturbative finite size analysis~\cite{Luscher:1985dn}.
This equivalence relaxes the requirement on the fitted parameters $c_M$, $c_F$, $c_1$ to agree with 1-loop
$\chi{\rm PT}$ as long as the pion is the lightest state dominating the finite volume corrections.
It should be noted that the form of the fitting function $\widetilde g_1(\lambda,\eta)$ does not commit to the chirally broken phase.
At fixed fermion mass $m$, the leading exponential term of the function is also the expected behavior
in the conformal phase with mass deformation. The asymptotic exponential form simply originates from the
lightest state wrapping around
the volume once emitted from and re-absorbed by the composite state whose sensitivity to finite volume corrections is
being investigated. The analysis is therefore applicable to both mass deformed phases with different symmetry properties.
The infinite-volume limits of $M_\pi$, $F_\pi$, and $\langle\overline\psi\psi\rangle$ for $m=0.003$ at $\beta=3.2$
were determined self-consistently from the fitting procedure. Similar fits were applied to other composite states.
The value of $M_\pi$ in the fit of the top plot in Figure~\ref{fig:sextetInfVol} was
determined from the highly non-linear fitting function and used as input in the other two fits.
Based on the fits at $m=0.003$,
the results are within one percent of the infinite-volume limit at $M_\pi L= 5$.
In the fermion mass range $m \geq 0.004$ the condition $M_\pi L> 5$ is reached at $L=32$.
Although it will require high precision runs to test, we expect less than one percent residual finite size effects
in the $32^3\times64$ runs for $m \geq 0.004$.
Based on these observations, we will interpret the results from the $32^3\times64$ runs for $m \geq 0.004$
as infinite-volume behavior in mass deformed chiral and conformal analysis.
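To make the fitting procedure concrete, the short Python sketch below performs the self-consistent infinite-volume fit keeping only the leading $K_1(\lambda)$ term of $\widetilde g_1(\lambda,\eta)$. It is a minimal illustration, not our production analysis code: the data are synthetic placeholders generated inside the script, and the fit form $M_\pi(L)=M_\pi\bigl[1+c_M K_1(M_\pi L)\bigr]$ is a simplification of the full function used for Figure~\ref{fig:sextetInfVol}.
\begin{verbatim}
# Minimal sketch: self-consistent infinite-volume extrapolation keeping only
# the leading exponential Bessel-function term K_1(M_pi*L) of g1-tilde.
# Synthetic placeholder data; not simulation results.
import numpy as np
from scipy.special import k1
from scipy.optimize import curve_fit

def mpi_of_L(L, M_inf, c_M):
    # lightest state wrapping once around the spatial volume
    return M_inf * (1.0 + c_M * k1(M_inf * L))

rng = np.random.default_rng(0)
L_vals = np.array([24.0, 28.0, 32.0, 48.0])       # spatial sizes of the runs
M_true, c_true = 0.15, 2.0                        # arbitrary illustration values
M_data = mpi_of_L(L_vals, M_true, c_true) * (1.0 + 0.002 * rng.standard_normal(4))
M_err = 0.002 * M_data

popt, pcov = curve_fit(mpi_of_L, L_vals, M_data, sigma=M_err,
                       p0=[0.15, 1.0], absolute_sigma=True)
print("M_pi(L -> infinity) = %.4f +/- %.4f" % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}
The fitted infinite-volume $M_\pi$ obtained this way can then be used as input in the analogous fits to $F_\pi$ and $\langle\overline\psi\psi\rangle$, as described above.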
\section{The chiral condensate}
Our simulations show that the chiral condensate $\langle \overline{\psi}\psi\rangle$ is consistent with $\chi{\rm SB}$
and remains non-vanishing in the massless fermion limit.
It has the infinite-volume spectral representation,
\begin{equation}
\langle \overline{\psi}\psi\rangle = -2m\cdot\int^{\Lambda}_0 d\lambda\frac{\rho(\lambda)}{m^2+\lambda^2}\, ,
\end{equation}
which is UV-divergent when the cutoff $\Lambda$ is taken to infinity.
The divergences are isolated by writing the integral of the spectral representation in twice subtracted form~\cite{Leutwyler:1992yt},
\begin{eqnarray}
&&\langle \overline{\psi}\psi\rangle = -2m\cdot\int^{\mu}_0 d\lambda\frac{\rho(\lambda)}{m^2+\lambda^2}\nonumber\\
&&~~~~~~~~~~~ -2m^5\cdot\int^{\Lambda}_\mu\frac{d\lambda}{\lambda^4}\frac{\rho(\lambda)}{m^2+\lambda^2}
+c_1(a)\cdot m + c_3(a)\cdot m^3 \, .
\label{eq:condensate}
\end{eqnarray}
The first integral in Eq.~(\ref{eq:condensate}) isolates the infrared part and recovers the well-known relation
$\langle \overline{\psi}\psi\rangle=-\pi\rho(0)$ in the $m\rightarrow 0$ limit, since $\int^{\mu}_0 d\lambda\, 2m/(m^2+\lambda^2)\rightarrow\pi$ as $m\rightarrow 0$~\cite{Banks:1979yr}.
The linear fermion mass term $c_1(a)\cdot m$ is a quadratically divergent UV contribution
$\approx a^{-2}\cdot m$ with lattice cutoff $a$.
There is also a very small
third-order UV term $c_3(a)\cdot m^3$ without power divergences which is hard to detect for small $m$ and has
not been tested within the accuracy of the simulations.
IR finite contributions to the condensate from the chiral Lagrangian are connected at the low energy scale $\mu$ with the
first integral in Eq.~(\ref{eq:condensate}). In the chiral expansion of the condensate there is an $m$-independent constant
term which is proportional to $ B F^2$, a linear term proportional to $ B^2\cdot m$, a quadratic term $\sim B^3F^{-2}\cdot m^2$, and higher
order terms, in addition to
logarithmic corrections generated from chiral loops.
The expansion in the fermion mass is expressed in terms of low energy constants
of chiral perturbation theory, like $B$ and $F$~\cite{Bijnens:2009qm}.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=7cm]{figures/PbP.pdf}\\
\includegraphics[width=7.2cm]{figures/PbPintercept.pdf}
\end{tabular}
\end{center}
\vskip -0.2in
\caption{\footnotesize The chiral condensate and its reduced form with subtracted derivative (both have to converge
to the same chiral limit) are shown in the top plot with linear fit to the condensate. The data without derivative subtraction cannot
detect higher order fermion mass terms with significant accuracy.
The fit to the reduced form
with subtracted derivative is defined in the text and shown in the magnified lower plot.
A linear term is not included in this fit since the subtracted derivative form approximately eliminates it.
The value of $d_0$ at $m=0$ is shown to be consistent
with the direct determination of $c_0$ from the chiral limit of $\langle \overline{\psi}\psi\rangle$.
The consistency is very reassuring since the two results are derived from independent determinations.
For $m=0.003$ the data from infinite-volume extrapolation were used in the fit.
As we explained earlier, at higher $m$ values the largest volume
$32^3\times 64$ runs were used for the condensate and its derivative subtraction.}
\label{fig:PbPsextet}
\end{figure}
We used two independent methods for the determination of the chiral condensate in the
massless fermion limit. In the first method fits were made directly to
$\langle \overline{\psi}\psi\rangle$ with constant and linear terms in the fitted function.
Quadratic
and third order terms are hard to detect within the accuracy of the data.
The result is shown in the top plot of Figure~\ref{fig:PbPsextet}.
When the quadratic term is added to the fit,
the massless intercept $c_0=\langle \overline{\psi}\psi\rangle_{m=0}$ from the quadratic fit agrees with the one from the linear fit
and the quadratic fit coefficient in $c_2\cdot m^2$
is zero within fitting error.
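As an illustration of this first method, the sketch below fits a constant plus linear polynomial in $m$, with an optional quadratic term, and reads off the massless intercept. The numbers are synthetic placeholders, and the simple uncorrelated least-squares fit stands in for the correlated double-jackknife fits of the actual analysis.
\begin{verbatim}
# Minimal sketch: chiral-limit intercept c0 from a polynomial fit of the
# condensate in the fermion mass m.  Placeholder inputs only.
import numpy as np

m_vals = np.array([0.003, 0.004, 0.005, 0.006, 0.008])   # masses used in the fits
pbp = 0.01 + 6.0 * m_vals                                 # synthetic "data"
pbp += 0.0003 * np.random.default_rng(1).standard_normal(m_vals.size)

c1_lin, c0_lin = np.polyfit(m_vals, pbp, 1)               # constant + linear fit
c2_q, c1_q, c0_q = np.polyfit(m_vals, pbp, 2)             # with quadratic term added
print("linear fit:    c0 = %.5f" % c0_lin)
print("quadratic fit: c0 = %.5f, c2 = %.3f" % (c0_q, c2_q))   # c2 ~ 0 within errors
\end{verbatim}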
For an independent determination, we also studied the subtracted chiral condensate operator defined with the help
of the connected part $\chi_{con}$ of the chiral susceptibility $\chi$,
\begin{eqnarray}
&&\Bigl [1-m_{\rm v}\frac{ d}{dm_{\rm v}}\Bigr ] \langle\overline\psi\psi\rangle\Big |_{m_{\rm v}=m}
= \langle\overline\psi\psi\rangle - m\cdot\chi_{con}~,\nonumber \\
&& \chi =\frac{ d}{dm} \langle\overline\psi\psi\rangle = \chi_{con} + \chi_{disc}~,
~~\chi_{con}=\frac{ d}{dm_{\rm v}}\langle\overline\psi\psi\rangle_{pq}\Big |_{m_{\rm v}=m} .
\end{eqnarray}
The derivatives $d/dm$ and $d/dm_{\rm v}$ are taken at fixed gauge coupling $\beta$. The derivative
$d/dm_{\rm v}$ is defined in the partially quenched functional integral of $\langle \overline{\psi}\psi\rangle_{pq}$
with respect to the valence mass $m_{\rm v}$
and the limit $m_{\rm v}=m$ is taken after differentiation.
The removal of the derivative term significantly reduces the
dominant linear part of the $\langle \overline{\psi}\psi\rangle$ condensate without changing the intercept in the $m=0$ limit.
Once the derivative term is subtracted, the first non-perturbative IR contribution,
quadratic in $m$, is better exposed.
The two independent determinations give consistent non-vanishing fit results in the massless chiral limit
as shown in the lower plot of Figure~\ref{fig:PbPsextet}.
The independent determinations of the non-vanishing condensate in the chiral limit with separate fits $c_0=\langle \overline{\psi}\psi\rangle_{m=0}$
and $d_0=\langle \overline{\psi}\psi\rangle_{m=0}$ are consistent with each other but differ
from the GMOR~\cite{GellMann:1968rz} relation $\langle\overline{\psi}\psi\rangle=2BF^2$ by a factor of two.
As shown in the next section, the value of $2B$ is determined in lattice units from the pion spectrum using the leading
$M^2_\pi=2B\cdot m$ relation. We find the numerical value
$2B= 6.35(21)$ as shown in the top plot of Figure~\ref{fig:MpiFpi}.
$F$ is determined from the pseudoscalar correlator which satisfies the PCAC relation. We find in lattice units
the numerical value $F=0.0279(4)$ from the lower plot of Figure~\ref{fig:MpiFpi} with $2BF^2 = 0.0049(2)$.
Both sides of the GMOR relation are sensitive to cutoff effects in $B$ and $F$ at bare lattice coupling $\beta=3.2$.
Our preliminary fits based on staggered chiral perturbation theory
indicate that cutoff effects modifying the continuum values of $B$ and $F$ are likely sources of the discrepancy~\cite{jk}.
Some increase in the cutoff dependent values of $B$ and $F$, which is the observed trend, would bring the two sides of
the GMOR relation in agreement.
\section{Spectral tests of the $\chi {\rm SB}$ hypotheses}
\subsection{Strategy and challenges of the spectrum analysis}
Spectrum calculations in a gauge theory with massless fermions require important and difficult lattice extrapolations:
\begin{enumerate}
\item[(1)] Extrapolation from finite lattice size to infinite volume,
\item[(2)] Extrapolation to the massless fermion limit,
\item[(3)] Extrapolation in lattice spacing to the continuum.
\end{enumerate}
All three issues will be addressed as we present details of the spectrum analysis in this section. The strategy of
finite size corrections was explained in Section 2 and it will be applied here. Extrapolation from
finite fermion masses will be used to test the two contrasting hypotheses, one with $\chi{\rm SB}$
and the other with conformal behavior. As a first step to address the removal of finite lattice spacing, we will
compare the Goldstone and non-Goldstone pion spectra at two different lattice spacings
to probe the restoration of taste symmetry for staggered fermions as the lattice spacing is decreased.
\subsection{The Goldstone pion and $F_\pi$}
The chiral Lagrangian describes the low energy theory of
Goldstone pions and non-Goldstone pions in the staggered lattice fermion formulation.
It will be used as an effective tool probing the $\chi{\rm SB}$ hypothesis at finite fermion masses
including extrapolation to the massless chiral limit.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6cm]{figures/fpiPionQuadFit4.pdf}\\
\includegraphics[height=6cm]{figures/FpiLinearFit4.pdf}
\end{tabular}
\end{center}
\vskip -0.2in
\caption{\footnotesize Polynomial fits from the analytic mass dependence of the chiral Lagrangian without logarithmic
loop corrections are shown for the Goldstone pion and $F_\pi$. The dashed line in the top plot for the Goldstone pion
shows the leading linear contribution.}
\label{fig:MpiFpi}
\end{figure}
Close to the chiral limit, the pion spectrum and the pion decay constant $F_\pi$
are organized in powers of the fermion mass $m$ which is an input parameter in the simulations.
Chiral log corrections to the polynomial terms are
generated from pion loops~\cite{Gasser:1983yg}. Their analysis will
require an extended dataset with high statistics.
In Section 2 we presented results of infinite-volume extrapolations. The effects are largest
at $m=0.003$ in our dataset and the infinite-volume limits of $M_\pi$ and $F_\pi$ were shown for $m=0.003$ for fixed
lattice cutoff and bare coupling $\beta=3.2$. Similar fits were applied to the chiral condensate and composite states in the
spectrum at $m=0.003$.
Based on the analysis at $m=0.003$, we determined that the infinite-volume limit is reached at $M_\pi L= 5$ within one percent accuracy.
It is expected that similar or better accuracy is reached for $M_\pi L\geq 5$ at higher $m$ values in all states of the spectrum.
In the fermion mass range $m \geq 0.004$ the condition $M_\pi L> 5$ is reached at $L=32$.
Based on these observations, in fits to the observed pion spectrum and $F_\pi$
we will use infinite-volume extrapolation at $m=0.003$ and treat the $32^3\times64$ runs for $m \geq 0.004$
as if the volume were infinite.
In Figure~\ref{fig:MpiFpi} we used the local pion correlator with noisy sources to extract $M_\pi$ and $F_\pi$.
The correlator is tagged as the PCAC channel since the PCAC relation, based on axial Ward identities, holds for this correlator and $F_\pi$ can
be directly determined from the residue of the pion pole.
The other staggered meson states and correlators we use are defined in ~\cite{Ishizuka:1993mt}.
For example, what we call the non-Goldstone scPion and the $f_0$ meson are
identified in correlator I of Table 1 in ~\cite{Ishizuka:1993mt}. Similarly, the non-Goldstone i5Pion is from correlator VII,
the non-Goldstone ijPion is from correlator VIII, and the rho and A1 mesons are
from correlator III of Table 1 in ~\cite{Ishizuka:1993mt}. We measure the Goldstone pion in two different ways, with one of
them defined above and the other is correlator II of Table 1 in ~\cite{Ishizuka:1993mt}. For baryon states in the sextet fermion representation,
not presented here, we
use our own construction of correlators which are different from the baryon correlators of~\cite{Ishizuka:1993mt}.
Based on the analytic fermion mass dependence of the chiral Lagrangian, and using the lowest four fermion masses,
good polynomial fits were obtained without logarithmic loop corrections as shown in Figure~\ref{fig:MpiFpi} for $M_\pi$ and $F_\pi$.
Although we could fit $M_\pi$ and $F_\pi$ with the continuum chiral
logarithms included, the two sets of $F$ and $B$ values from separate fits to $M_\pi$ and $F_\pi$ are not quite self-consistent.
Rooted and partially quenched staggered perturbation theory is a useful procedure
at finite lattice spacing for simultaneous fits of $M_\pi$ and $F_\pi$
with a consistent pair of $F$ and $B$ values~\cite{Aubin:2003mg,Aubin:2003uc}.
The explicit cutoff dependent corrections to the $F$ and $B$ parameters would require further testing
at weaker gauge couplings and a set of valence fermion masses.
We made the first step in this direction by adding a new run set
to our database at $\beta=3.25$. In Figure~\ref{fig:non-GoldstoneSpectrum} we show taste-breaking effects in two pion spectra for comparison.
We find significant reduction in taste breaking at smaller lattice spacing at the weaker coupling. Our
staggered perturbation theory analysis will be presented in a longer follow-up report
which will also include other results from the new runs at the weaker coupling $\beta=3.25$~\cite{jk}.
\subsection{Taste breaking in the non-Goldstone pion spectrum}
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6cm]{figures/GoldstoneSpectrum3p2.pdf}\\
\includegraphics[height=6cm]{figures/GoldstoneSpectrum3p25.pdf}
\end{tabular}
\end{center}
\vskip -0.2in
\caption{\footnotesize The top plot in the figure is the spectrum at $\beta=3.2$. It shows the polynomial fit
of the Goldstone pion (magenta points). The red points are the non-Goldstone scPion data covering the green i5Pion
data with complete degeneracy. The slightly split ijPion is shown with cyan color.
The lower plot in the figure is the spectrum at $\beta=3.25$. In identical notation it displays the improvement in taste splitting
with a considerably less taste-broken spectrum when plotted on the same scale.}
\label{fig:non-GoldstoneSpectrum}
\vskip -0.2in
\end{figure}
The non-Goldstone pion spectra, quite different from the one found in QCD, are shown at $\beta=3.2$ in the top plot
of Figure~\ref{fig:non-GoldstoneSpectrum} using standard notation, introduced earlier.
The non-Goldstone i5Pion is split from the Goldstone pion and remains exactly degenerate with
the non-Goldstone scPion, as in QCD. The new feature is the mass dependence of the split
between the Goldstone pion and
the non-Goldstone i5Pion with non-parallel slopes of the fitting functions.
The non-Goldstone ijPion is further split from the i5Pion with a small mass-independent offset. Although taste breaking
effects appear substantial on the scale of the plot, they are comparable with those from the HISQ action
when the lattice spacings are matched~\cite{Bazavov:2010pg}.
The trend of the splits, particularly the fan-out structure and the lack of parallel, equi-spaced splits with a constant slope determined by $B$,
is characteristic of gauge models as they get close to the conformal window.
A very small residual mass at $m=0$ is consistent with fits for the non-Goldstone pion states and decreases as
we lower the lattice spacing with the weaker coupling at $\beta=3.25$.
This is shown in the lower plot of Figure~\ref{fig:non-GoldstoneSpectrum} which exhibits a similar structure for the same pion states as the top
figure but on a significantly more collapsed scale. Taste breaking is reduced considerably. It will be interesting to conduct a full analysis of
all data on the finer lattice scale, closer to the continuum limit, and compare with the results presented here on the coarser lattice scale~\cite{jk}.
\subsection{The $ \rho$ and $A_1$ parity partner states}
\vskip -0.003in
It is useful and important to investigate the chiral limit of composite hadron states separated by a gap
from the Goldstone and non-Goldstone pion spectra. The baryon mass gap in the chiral limit can provide further evidence
for $\chi{\rm SB}$ but our preliminary results are not shown here.
Hadron masses of parity partners also provide important information with split parity masses in the chiral limit.
This is particularly helpful not only to confirm $\chi{\rm SB}$ but to obtain a first estimate on the S parameter
for probing the model against electroweak precision tests~\cite{Peskin:1991sw}.
As an example, we will briefly review our results for the $\rho$ meson state and its parity partner, the ${\rm A_1}$ meson.
Particularly interesting is the
$\rho-{\rm A_1}$ mass splitting with parity violation.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=5.5cm]{figures/CORNER-RHO4_RHO20_8data.pdf}\\
\includegraphics[height=5.5cm]{figures/A1_from_CORNER-RHO4_RHO30_8data.pdf}
\end{tabular}
\end{center}
\vskip -0.2in
\caption{\footnotesize Linear fit to the $\rho$ meson mass is shown in the top plot of the figure.
The lower plot shows the linear fit to the ${\rm A_1}$ meson superimposed on the $\rho$ meson plot. The parity split
is quite visible with varying size errors in the fitted $m$ range. }
\label{fig:sextetRhoA1}
\vskip -0.1in
\end{figure}
Figure~\ref{fig:sextetRhoA1} shows fits to the $\rho$ meson and its $A_1$ parity partner. The top plot is a linear fit to the
$\rho$ meson with a non-vanishing mass at $m=0$, consistent with $\chi{\rm SB}$. The lower plot shows the linear fit to the ${\rm A_1}$
meson. Both states extrapolate to non-vanishing masses
in the chiral limit. The split appears to be significant for all fermion masses but the error is too large to resolve the chiral limit.
More work with higher statistics is needed on this correlator before conclusive
results can be obtained.
\section{Spectral tests of the conformal scaling hypothesis}
Under the conformal scaling hypothesis, the mass $M_\pi$
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=5cm]{figures/fpiPionConformPowerFit4.pdf}\\
\includegraphics[height=5cm]{figures/FpiConformPowerFit4.pdf}
\end{tabular}
\end{center}
\vskip -0.2in
\caption{\footnotesize The two plots represent separate conformal fits to $M_\pi$ (top) and $F_\pi$ (bottom).
The separate fits have reasonable $\chi^2$ values but the incompatibility of the fitted $\gamma$ values
disfavors the conformal hypothesis in its leading form.}
\label{fig:sextetConformTest1}
\end{figure}
and the decay constant $F_\pi$ are given at leading order by $M_\pi = c_\pi\cdot m^{1/(1+\gamma)}$ and $F_\pi = c_F\cdot m^{1/(1+\gamma)}$.
The coefficients $c_\pi$ and $c_F$
are channel specific but the exponent $\gamma$ is universal in all channels~\cite{DelDebbio:2010hx,Bursa:2010xn,DelDebbio:2010ze,DelDebbio:2011kp}.
The leading scaling form sets in for small $m$ values,
close to the critical surface. According to the hypothesis, there is an infrared conformal fixed point on the critical surface which controls
the conformal scaling properties of small mass deformations.
All masses of the spectrum can be subjected to similar conformal
scaling tests, but we will mostly focus on accurate data in the $M_\pi$ and $F_\pi$ channels.
When $M_\pi$ and $F_\pi$ are fitted {\em separately} in the range of
the four lowest fermion masses closest to the critical surface, we get reasonable $\chi^2$ values for the fits,
as shown in Figure~\ref{fig:sextetConformTest1}. However, the incompatibility of the fitted $\gamma$ values
disfavors the hypothesis of
mass deformed conformal behavior.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=5cm]{figures/CombinedConformFitMpi4.pdf}\\
\includegraphics[height=5cm]{figures/CombinedConformFitResidMpi4.pdf}\\
\includegraphics[height=5cm]{figures/CombinedConformFitFpi4.pdf}\\
\includegraphics[height=5cm]{figures/CombinedConformFitResidFpi4.pdf}
\end{tabular}
\end{center}
\vskip -0.25in
\caption{\footnotesize The first plot shows the simultaneous conformal fit result for
the pion mass, while the second displays the $M_\pi$ residuals. The last two
plots show the simultaneous fit result for the pion decay constant and the
$F_\pi$ residuals.
The combined fit forces $\gamma=1.53(28)$
with an unacceptable ${\rm \chi^2/dof}$ of 44.5.}
\label{fig:sextetConformTest2}
\vskip -0.2in
\end{figure}
The conflicting simultaneous fits to universal conformal form with the same $\gamma$
for the Goldstone pion and the $F_\pi$ decay constant
are illustrated in Figure~\ref{fig:sextetConformTest2}. Fitting to the pion mass separately requires $\gamma=1.040(73)$ while the
separate $F_\pi$ fit is forcing $\gamma=2.20(15)$. In the combined fit they compromise with $\gamma=1.53(28)$
and the unacceptable ${\rm \chi^2/dof}$ of 44.5.
It is important to note that the exponent $\gamma$ from the fit to $M_\pi$ alone is what
$\chi{\rm SB}$ would prefer. The separate conformal exponent $\gamma$ for $F_\pi$ is large because it has to force
to the origin a linear string of data points which extrapolate to a finite constant under $\chi{\rm SB}$.
This creates conflict with the universal exponent $\gamma$ in the conformal analysis.
From the tests we were able to perform, the sextet model is consistent with $\chi {\rm SB}$ and inconsistent with conformal symmetry.
It will require further investigations to show that subleading effects cannot alter this conclusion.
We will consider comprehensive conformal finite size scaling (FSS) tests which do not rely on
infinite-volume extrapolation in the scaling fits.
Conformal FSS was extensively applied to a different much discussed model with twelve fermion flavors in the fundamental representation
of the SU(3) color gauge group~\cite{Fodor:2012uu}. These kinds of tests are at a preliminary stage in the sextet project requiring new runs and
systematic analysis. The FSS analysis of the existing dataset of this paper, when smaller volumes are included,
disfavors the conformal hypothesis similarly to what we just presented in the infinite-volume limit.
It is difficult to reconcile $\chi{\rm SB}$ and large exponents in the fermion mass dependence with the low value of $\gamma$
defined by the chiral condensate
using the Sch\"odinger functional for massless fermions~\cite{DeGrand:2012yq}.
\section{The new sextet Higgs project}
If $\chi {\rm SB}$ of the sextet model is confirmed in the massless fermion limit, its potential relevance for the
realization of the composite Higgs mechanism is self-evident.
The three Goldstone pions of the model are a perfect match
for providing the longitudinal components of the $W^{\pm}$ and $Z$ bosons.
The remaining most important issues are: (1) to calculate the mass of the
$0^{++}$ state when the disconnected part of correlator I in Table 1 of~\cite{Ishizuka:1993mt} is included; (2) the determination of the
non-perturbative gluon condensate on the lattice to clarify the dilaton connection if the Higgs particle turns out to be light;
(3) a more precise determination of the running coupling for which we will deploy our
new method based on the gradient flow of the gauge field in finite volume~\cite{Fodor:2012he}.
We will outline in some details the first and second issues.
\subsection{The $ f_0$ state in the $0^{++}$ channel}
Figure~\ref{fig:f0} shows the fermion mass dependence of the $f_0$ meson without including the disconnected
part of correlator I in Table 1 of~\cite{Ishizuka:1993mt}. The non-Goldstone scPion and $f_0$ are parity partner states in this correlator.
The quantum numbers of the $f_0$ meson match those of the $0^{++}$ state in the staggered correlator.
Close to the conformal window the $f_0$ meson is not expected to be similar to the $\sigma$ particle of QCD.
The full $f_0$ state including the disconnected diagrams could replace the role of the
elementary Higgs and act as the Higgs impostor if it turns out to be light.
The full calculation including the disconnected diagram is very difficult and is
the main part of our next generation sextet Higgs project. First, we will discuss preliminary results which ignore
the disconnected part; we then outline the challenges of including it.
The linear fit from the connected diagram is shown in Figure~\ref{fig:f0}. It has a non-zero intercept in the chiral limit with a mass more
than five times $F$ so it corresponds to a heavy state and not a Higgs candidate.
Since the $f_0$ state is the parity partner of the non-Goldstone scPion in the full correlator, the two states
would become degenerate in the chiral limit with unbroken symmetry. Close to the conformal window it is reasonable to expect that the disconnected
diagram will dramatically reduce the $f_0$ mass and its split from the scPion when the chiral limit is taken. This
will leave the full $f_0$ state a viable Higgs candidate until new simulations resolve the issue and perhaps eliminate this attractive scenario.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6cm]{figures/f0_cornerPionSCcombined_8data.pdf}
\end{tabular}
\end{center}
\vskip -0.2in
\caption{\footnotesize The linear fit is shown to the mass of the $0^{++}$ $f_0$ meson from the
connected part of correlator I in Table 1 of~\cite{Ishizuka:1993mt}. For comparison, the scPion which is the parity partner of the $f_0$ meson
in the correlator is replotted with its fit from Figure~\ref{fig:non-GoldstoneSpectrum} (magenta color). In the continuum limit,
the mass of the non-Goldstone scPion will vanish and the $f_0$ state could become light close to the conformal window.
The disconnected part of the correlator is required to resolve this issue.}
\vskip -0.1in
\label{fig:f0}
\end{figure}
To study flavor-singlet mesons, we need to consider fermion loops which are disconnected (often called hairpin diagrams).
Flavor-singlet correlators have
fermion-line connected and fermion-line disconnected contributions from the hairpin diagrams.
To evaluate disconnected quark loops with zero momentum, we need to sum over propagators from sources
at each spatial location for a given time slice.
To avoid the very costly $\mathcal{O}(V)$
inversions to compute all-to-all propagators in lattice terminology, random sources have to be used with noise
reduction.
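As a generic illustration of the random-source technique, the following sketch estimates the trace of an inverse with $Z_2$ noise vectors; a small dense test matrix stands in for the lattice Dirac operator, and the code is a toy example rather than our production setup.
\begin{verbatim}
# Minimal sketch: stochastic estimate of Tr(M^{-1}) with Z_2 noise vectors,
# the basic ingredient of disconnected (hairpin) loop measurements.
import numpy as np

rng = np.random.default_rng(7)
n = 64
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # toy stand-in for the Dirac matrix

def noisy_trace_inv(M, n_noise=200):
    est = 0.0
    for _ in range(n_noise):
        eta = rng.choice([-1.0, 1.0], size=M.shape[0])   # Z_2 noise source
        x = np.linalg.solve(M, eta)                      # one inversion per source
        est += eta @ x                                   # E[eta_i eta_j] = delta_ij
    return est / n_noise

print("noisy estimate :", noisy_trace_inv(M))
print("exact Tr(M^-1) :", np.trace(np.linalg.inv(M)))
\end{verbatim}
Dilution and related schemes, applied on top of such a basic estimator, provide the kind of noise reduction mentioned above.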
A very interesting further challenge and complication is the existence of two distinct types of $0^{++}$ scalar mesons.
One of them is the composite fermion state and the other is the scalar glueball with the same quantum numbers.
In dynamical sextet simulations, these two types of states will mix, and the observable spectrum of scalar mesons will
require a well-chosen variational operator set to disentangle the scalar states. This further underlines the room left for a light
scalar state to emerge in the spectrum. It is also entirely possible that careful lattice calculations will shut down the Higgs interpretation.
Staggered fermions present an additional complication from the contribution of
pairs of pseudoscalar meson taste channels contributing to the scalar meson correlator.
To be a physical state, the scalar meson $f_0$ has to be taste singlet. Taste selection rules then require that the $f_0$ meson
couples only to pairs of pseudoscalar mesons of the same taste.
We have shown earlier in Section 4 that the pion taste multiplet splits into the Goldstone state and a variety of
higher-lying non-Goldstone states, all degenerate
with vanishing mass in the continuum limit.
In the continuum limit only the taste singlet states (physical states) are expected to have the correct masses from the
U(1) axial anomaly which is itself a taste singlet.
The other non-singlet states remain light and create complicated threshold effects.
This complication is present in the $f_0$ correlator masked by the physical two-pion intermediate state~\cite{Bernard:2007qf}.
\subsection{The Higgs particle and the dilaton}
If the sextet model is very close to the conformal window
with a small but nonvanishing $\beta$-function, a necessary
condition is satisfied for spontaneous breaking of scale invariance generating the light pseudo-Goldstone dilaton state.
The model, as we argued earlier, is also consistent with chiral symmetry breaking ($\chi{\rm SB}$)
with the minimal Goldstone pion spectrum required for electroweak
symmetry breaking and the Higgs mechanism.
The very small $\beta$-function (walking) and $\chi{\rm SB}$ are
not sufficient to guarantee a light dilaton state if scale symmetry breaking and $\chi{\rm SB}$
are entangled in a complicated way. However,
a light Higgs-like scalar could emerge near the conformal window as a composite state, not necessarily with dilaton interpretation.
To understand the important role of the non-perturbative gluon condensate in the partially
conserved dilatation current (PCDC) relation and its related dilaton implications,
lattice simulations of the non-perturbative gluon condensate will be needed near the conformal window.
For discussion of the PCDC relation constraining
the properties of the dilaton, we will closely follow the standard
argument like in~\cite{Appelquist:2010gy,Hashimoto:2010nw,Matsuzaki:2012fq}. We will also show how
non-perturbative lattice methods
can explore the implications of the PCDC relation when applied to the sextet model.
In strongly interacting gauge theories, like the sextet model under consideration, a dilatation current
${\mathcal D}^\mu=\Theta^{\mu\nu}x_\nu$
can be defined from the symmetric energy-momentum tensor $\Theta^{\mu\nu}$. Although the massless theory is scale invariant on the classical
level, from the scale anomaly the dilatation current has a non-vanishing divergence,
\begin{equation}
\partial_\mu \mathcal {D}^\mu = \Theta_\mu^\mu=\frac{\beta (\alpha)}{4\alpha}G^a_{\mu\nu}G^{a\mu\nu} \, .
\label{eq:D1}
\end{equation}
Although $\alpha(\mu)$ and $G^a_{\mu\nu}G^{a\mu\nu} $ depend on the renormalization scale $\mu$, the trace of the
energy-momentum tensor is scheme independent after renormalization. In the sextet model, the massless fermions are in the
two-index symmetric representation of the SU(3) color gauge group. The gluon fields are
in the adjoint representation with $G^a_{\mu\nu},~ a=1,2,...8$.
We will assume that the perturbative parts of the composite gauge operator $G^a_{\mu\nu}G^{a\mu\nu}$ and $\Theta_\mu^\mu$
are removed
in Eq.~(\ref{eq:D1}) and only the non-perturbative (NP) infrared part will be considered in what follows.
The dilaton coupling $f_\sigma$ is defined by the matrix element
\begin{equation}
\langle 0| \Theta^{\mu\nu}(x)|\sigma(p)\rangle = \frac{f_\sigma}{3}(p^\mu p^\nu - g^{\mu\nu} p^2) e^{-ipx}
\label{eq:D2}
\end{equation}
with $p^2=m^2_\sigma$ for the on-shell dilaton state $\sigma (p)$.
From the divergence of the dilatation current in Eq.~(\ref{eq:D1}) we get
\begin{equation}
\langle 0| \partial_\mu\mathcal{D}^\mu(x)|\sigma(p)\rangle = f_\sigma m^2_\sigma e^{-ipx}\, .
\label{eq:D3}
\end{equation}
The subtracted non-perturbative part of the energy-momentum tensor,
\begin{equation}
\Bigl [\Theta^\mu_\mu \Bigr]_{NP} = \frac{\beta (\alpha)}{4\alpha}\Bigl[ G^a_{\mu\nu}G^{a\mu\nu}\Bigr]_{NP}\,,
\label{eq:D3A}
\end{equation}
is defined by removing the perturbative part of the gluon condensate in the vacuum,
\begin{equation}
\Bigl [\Theta^\mu_\mu \Bigr]_{NP} = \frac{\beta (\alpha)}{4\alpha} G^a_{\mu\nu}G^{a\mu\nu} -
\langle 0|\frac{\beta (\alpha)}{4\alpha}G^a_{\mu\nu}G^{a\mu\nu}|0\rangle_{PT}\, .
\label{eq:D4}
\end{equation}
The lattice implementation of the subtraction procedure will be briefly described after the derivation of the PCDC relation.
It is easy to derive, as for example in~\cite{Appelquist:2010gy}, the dilaton matrix element
of the energy-momentum tensor trace using a particular definition
of the subtraction scheme,
%
\begin{equation}
\langle \sigma (p=0)|\Bigl[\Theta^\mu_\mu (0)\Bigr]_{NP}|0\rangle \simeq \frac{4}{f_\sigma}\langle 0|\Bigl[\Theta^\mu_\mu (0)\Bigr]_{NP}|0\rangle \, .
\label{eq:D5}
\end{equation}
%
When combined with Eq.~(\ref{eq:D3}), the partially conserved dilatation current (PCDC) relation is obtained,
\begin{equation}
m^2_\sigma\simeq - \frac{4}{f^2_\sigma }\langle 0|\Bigl[\Theta^\mu_\mu (0)\Bigr]_{NP}|0\rangle \, .
\label{eq:D6}
\end{equation}
Predictions for $m_\sigma$ close to the conformal window depend on the behavior
of $f_\sigma$ and the gluon condensate $G^a_{\mu\nu}G^{a\mu\nu}$ of Eq.~(\ref{eq:D3A}). There are two distinctly
different expectations about the limit of the gluon condensate to $f_\sigma$ ratio when the conformal window is approached.
In one interpretation, the right-hand side is predicted to approach zero in the limit, so that the dilaton mass would parametrically vanish when
the conformal limit is reached ~\cite{Appelquist:2010gy}.
The formal parameter is the non-physical (fractional) critical number of fermions when the conformal phase is reached.
In an alternate interpretation the right-hand side ratio of Eq.~(\ref{eq:D6}) remains finite in the limit and a residual dilaton mass is expected~\cite{Hashimoto:2010nw,Matsuzaki:2012fq}. The two interpretations make different assumptions about the
entanglement of $\chi{\rm SB}$ and scale symmetry breaking but both scenarios expect a light dilaton mass
in some exact non-perturbative realization of a viable BSM model.
It is important to note that there is no guarantee, even with a very small $\beta$-function near the conformal window, for
the realization of a light enough dilaton to act as the new Higgs-like particle. Realistic BSM models have not been built with
parametric tuning close to the conformal window. For example, the sextet model is at some intrinsically determined position
near the conformal window and
only non-perturbative lattice calculations can explore the physical properties of the scalar particle.
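As a simple numerical illustration of how the PCDC relation of Eq.~(\ref{eq:D6}) ties the dilaton mass to $f_\sigma$ and the non-perturbative condensate, the following Python sketch evaluates $m_\sigma$ for purely illustrative placeholder inputs in arbitrary units; the numbers are not sextet lattice results.
\begin{verbatim}
import math

# Illustrative placeholder inputs in arbitrary units (not lattice results):
f_sigma = 0.25        # dilaton decay constant
condensate = -1.0e-3  # <0|[Theta^mu_mu(0)]_NP|0>

m_sigma_sq = -4.0 * condensate / f_sigma**2   # PCDC relation, Eq. (D6)
print("m_sigma =", math.sqrt(m_sigma_sq))
\end{verbatim}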
\subsection{The non-perturbative gluon condensate on the lattice}
Power divergences are severe in the
calculation of the lattice gluon condensate,
because the operator $\alpha G^a_{\mu\nu}G^{a\mu\nu}$ has quartic divergences.
The gluon condensate is computed on the lattice from the
expectation value of the plaquette operator $U_P$. On the tree level we have the relation
\begin{equation}
{\rm lim}_{a\rightarrow 0} ~~\Biggl (\frac{1}{a^4}\, \langle 1-\frac{1}{3}\,\mbox{tr}\,U_P\rangle \Biggr )
= \frac{\pi^2}{36} \,\langle\frac{\alpha}{\pi}
GG\rangle_{\rm lattice}
\label{eq:plaquette1}
\end{equation}
as the continuum limit is approached in the limit of vanishing bare lattice coupling $g_0$.
At finite lattice coupling we have the sum of a perturbative series in $g_0$ and the non-perturbative gluon condensate,
\begin{equation}
\Bigl\langle 1-\frac{1}{3}\,\mbox{tr}\,U_P\Bigr\rangle
= \sum_{n}c_n \cdot g_0^{2n} +
a^4\,\frac{\pi^2}{36} \,\Biggl( \frac{b_0}{\beta(g_0)} \Biggr)\,\Bigl\langle\frac{\alpha}{\pi}
GG\Bigr\rangle_{\rm lattice} + \,O(a^6)\, ,
\label{eq:plaquette2}
\end{equation}
where $b_0$ is the leading $\beta$-function coefficient.
There is no gauge-invariant operator of dimension
2 and therefore the order $a^2$ term is missing in Eq.~(\ref{eq:plaquette2}).
For small lattice spacing $a$, the perturbative
series is much larger than the non-perturbative gluon condensate, and its determination requires the subtraction of
the perturbative series from the high accuracy Monte Carlo data of the plaquette.
The $c_n$ expansion coefficients can be determined to high order using stochastic perturbation theory~\cite{Di Renzo:2004ge}.
This procedure requires the investigation of Borel summation of the high order terms in the perturbative expansion since the coefficients $c_n$
are expected to diverge in factorial order and one has to deal with the well-known renormalon issues.
The methodology has been extensively studied in pure Yang-Mills theory
on the lattice~\cite{Horsley:2012ra}.
It will be very important to undertake similar investigations of the non-perturbative gluon
condensate in the sextet model with full fermion dynamics.
We hope to return to this problem in the near future.
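As an illustration of the subtraction in Eq.~(\ref{eq:plaquette2}), the following Python sketch removes a truncated perturbative series from a plaquette value and reads off the residual $a^4$ term; all numbers are placeholders rather than real Monte Carlo data or stochastic perturbation theory coefficients.
\begin{verbatim}
import numpy as np

# Placeholder numbers only (not real Monte Carlo data or PT coefficients):
g0_sq = 0.9                          # bare lattice coupling squared
c = np.array([0.35, 0.10, 0.05])     # truncated perturbative coefficients c_n
plaquette_mc = 0.4561                # measured <1 - (1/3) tr U_P>

perturbative = np.sum(c * g0_sq ** np.arange(1, len(c) + 1))  # sum_n c_n g0^(2n)
residual_a4_term = plaquette_mc - perturbative
# residual_a4_term ~ a^4 (pi^2/36) (b0/beta(g0)) <(alpha/pi) GG>_lattice
print(residual_a4_term)
\end{verbatim}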
\section*{Summary and outlook}
We have shown that the chiral condensate and the mass spectrum of the sextet model
are consistent with chiral symmetry breaking in the limit of vanishing fermion mass.
In contrast,
sextet fermion mass deformations of
spectral properties are not consistent with leading conformal scaling behavior near the critical surface
of a conformal theory.
Our new results are reconciled with recent findings of the sextet $\beta$-function~\cite{DeGrand:2012yq},
if the model is close to the conformal window with a very small non-vanishing $\beta$-function.
This leaves open the possibility
of a light scalar state with quantum numbers of the Higgs impostor. The light Higgs-like state could emerge as the
pseudo-Goldstone dilaton from spontaneous symmetry breaking of scale invariance.
Even without association with the dilaton, the scalar Higgs-like state
can be light if the sextet gauge model is very close to the conformal window.
A new Higgs project of sextet lattice simulations was outlined to resolve these important questions.
Plans include the determination of the S parameter and the sextet confining force
with results on the string tension already
reported, strongly favoring the $\chi{\rm SB}$ hypothesis~\cite{Holland}.
\section*{Acknowledgments}
This work was supported by the DOE under grant DE-FG02-90ER40546,
by the NSF under grants 0704171 and 0970137, by the EU Framework Programme 7 grant (FP7/2007-2013)/ERC
No 208740, and by the Deutsche Forschungsgemeinschaft grant SFB-TR 55.
The simulations were performed using USQCD computational resources
at Fermilab and JLab. Further support was provided by the UCSD GPU cluster
funded by DOE ARRA Award ER40546.
Some of the simulations used allocations from
the Extreme Science and Engineering Discovery Environment (XSEDE),
which is supported by National Science Foundation grant number OCI-1053575.
In addition, some computational resources were used at the University of Wuppertal, Germany.
We are grateful to Kalman Szabo and Sandor Katz
for their code development building on Wuppertal gpu technology~\cite{Egri:2006zm}.
KH wishes to thank the Institute for Theoretical Physics and the
Albert Einstein Center for Fundamental Physics at Bern University for their support.
\section{Introduction}
High-resolution satellite imagery opens new possibilities for the extraction of linear features such as roads \cite{wang2016review}. The advantages of this data compared to aerial imagery are its almost worldwide availability and the fact that the imagery sometimes contains additional spectral channels. The spatial resolution of 0.5-1.0 meters is worse than for aerial imagery, but it is sufficient for road extraction \cite{deepglobe_website, demir2018deepglobe}. The worldwide availability of the data makes it possible to produce topographic databases for nearly any region of the earth. This, in turn, can help various industries to enhance their productivity and quality of work, whether for military purposes, disaster prevention, or relief.
Reliable image segmentation is one of the important tasks in computer vision. Semantic image segmentation essentially involves dividing images into meaningful regions, which can be viewed as a pixel level classification task. The most straightforward (and slow) approach to such a problem is manual segmentation of the images. However, this is a time-consuming process that is prone to mistakes and inconsistencies, which are unavoidable when human data curators are involved. To overcome this bottleneck, automatic means are needed. Automation provides a systematic way of segmenting an image on the fly as soon as the image is acquired. To be useful in a production environment, this process must provide the necessary accuracy.
In recent years, different methods have been proposed to tackle the problem of creating convolutional neural networks (CNN) designed to be an efficient architecture for pixel-wise semantic segmentation. These networks can produce a segmentation map for an entire input image in a single forward pass. One of the most successful state-of-the-art deep learning methods is based on the Fully Convolutional Network (FCN) \cite{long2015fully}. The main idea of this approach is to use a CNN as a powerful feature extractor by replacing the fully connected layers with convolutional ones to output spatial feature maps instead of classification scores. Those maps are further upsampled to produce dense pixel-wise output. Moreover, this approach achieved an improvement in segmentation accuracy over common methods on standard datasets like PASCAL VOC \cite{everingham2015pascal}. This method has been further improved and is now known as the U-Net neural network \cite{ronneberger2015u}. The U-Net architecture uses skip connections to combine low-level feature maps with higher-level ones, which enables precise pixel-level localization. A large number of feature channels in the upsampling part allows propagating context information to higher resolution layers. This type of network architecture has proven itself in binary image segmentation competitions for satellite image analysis \cite{iglovikov2017satellite, iglovikov2018ternausnet, iglovikov2018ternausnetv2, zhang2018road, zhong2016fully} and other applications \cite{shvets2018automatic, shvets2018angiodysplasia}.
\section{Dataset}
The training data for the road extraction challenge contains 6226 satellite images in RGB format. Each image has a size of 1024x1024 pixels. These images have a 50 cm pixel resolution and were collected by DigitalGlobe's satellite \cite{deepglobe_website, demir2018deepglobe}. Moreover, each image in the training dataset has a paired mask with road labels (see Fig. \ref{fig:road}). The mask is given in a grayscale format, with white standing for road pixels and black standing for the background. It is worth mentioning that the values of the mask image may not be pure 0 and 255. As a result, the recommended threshold for binarization is 128. The labels are not perfect due to the cost of annotating segmentation masks, especially in rural areas. In addition, small roads within farmlands are deliberately not annotated. To measure the performance of our model we were also provided with 1243 validation images that do not have masks. The predicted masks for the validation images should be uploaded to the deepglobe website \cite{deepglobe_website}.
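As a minimal illustration of the recommended preprocessing, the following Python sketch reads one image/mask pair and binarizes the mask at the threshold of 128; the file names are placeholders.
\begin{verbatim}
import numpy as np
from PIL import Image

# File names are placeholders for one image/mask pair from the training set.
image = np.array(Image.open("example_sat.jpg"))               # 1024x1024x3 RGB
mask = np.array(Image.open("example_mask.png").convert("L"))  # grayscale mask
binary_mask = (mask >= 128).astype(np.uint8)                  # 1 = road, 0 = background
\end{verbatim}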
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./figures/road.png}
\end{center}
\caption{A satellite image with overlay binary masks where green pixels indicate class membership (roads).}
\label{fig:road}
\end{figure}
\section{Model}
Segmentation of objects at different scales is challenging, in particular for small objects. For this problem, we use a fully convolutional network from the U-Net family to implement road segmentation. In general, a U-Net-like architecture consists of a contracting path to capture context and a symmetrically expanding path that enables precise localization (for example, see Fig.\ref{fig::fpn}). The contracting path follows the typical architecture of a convolutional network with alternating convolution and pooling operations and progressively downsamples the feature maps while increasing the number of feature maps per layer. Every step in the expansive path consists of an upsampling of the feature map followed by a convolution. Hence, the expansive branch increases the resolution of the output. To localize upsampled features, the expansive path combines them with high-resolution features from the contracting path via skip-connections \cite{ronneberger2015u}. The output of the model is a pixel-by-pixel mask that shows the class of each pixel. We use a slightly modified version of the original U-Net model that has previously proved very useful for segmentation problems with limited amounts of data; for example, see \cite{iglovikov2018ternausnet}.
As an improvement over the standard U-Net architecture, we use similar networks with pre-trained encoders. Our network has a U-Net-like architecture that uses a pre-trained ResNet-34 \cite{he2016deep} network as an encoder (see Fig. \ref{fig::fpn}). The encoder starts with an initial block that performs convolution with a kernel of size $7\times7$ and stride $2$. This block is followed by max-pooling with stride $2$. The later portion of the network consists of repeated residual blocks. In every residual block, the first convolution operation is implemented with stride $2$ to provide downsampling, while the remaining convolution operations use stride $1$. In addition, the decoder of the network consists of several decoder blocks, each connected with the corresponding encoder block. As in the vanilla U-Net, the output transmitted from the encoder is concatenated to the corresponding decoder block. Each decoder block includes a $1\times1$ convolution operation that reduces the number of filters by $4$, followed by batch normalization and a transposed convolution to upsample the feature map.
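The following PyTorch sketch shows one possible implementation of such a decoder block; the specific channel counts, the kernel size of the transposed convolution, and the final $1\times1$ expansion convolution are our assumptions for illustration rather than the exact configuration used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """Decoder block sketch: 1x1 conv reducing filters by 4, batch norm,
    transposed conv for x2 upsampling, then a 1x1 expansion conv
    (the expansion conv is an assumption, not stated in the text)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        mid = in_channels // 4
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(mid, mid, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Example: a 512-channel 32x32 feature map -> 256-channel 64x64 map.
block = DecoderBlock(512, 256)
out = block(torch.randn(1, 512, 32, 32))   # out.shape == (1, 256, 64, 64)
\end{verbatim}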
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth,height=9cm]{./figures/albunet.png}
\end{center}
\caption{Segmentation networks based on the encoder-decoder architecture of the U-Net family. This network uses the pre-trained ResNet-34 network as an encoder. Each box corresponds to a multi-channel feature map. The number of channels is indicated below the box. The height of the box represents the feature map resolution while its thickness is proportional to the number of channels. The blue arrows denote skip-connections where information is transmitted from the encoder to the decoder.}
\label{fig::fpn}
\end{figure*}
\section{Training}
We use Jaccard index (Intersection Over Union) as the evaluation metric. It can be interpreted as a similarity measure between a finite number of sets. For two sets $A$ and $B$, it can be defined as following:
\begin{equation}
\label{jaccard_iou}
J(A, B) = \frac{|A\cap B|}{|A\cup B|} = \frac{|A\cap B|}{|A|+|B|-|A\cap B|}
\end{equation}
Since an image consists of pixels, the last expression can be adapted for discrete objects in the following way:
\begin{equation}
\label{dicrjacc}
J=\frac{1}{n}\sum\limits_{i=1}^n\left(\frac{y_i\hat{y}_i}{y_{i}+\hat{y}_i-y_i\hat{y}_i}\right)
\end{equation}
where $y_i$ and $\hat{y}_i$ are the binary label and the predicted probability for pixel $i$, respectively.
Since the image segmentation task can also be considered a pixel classification problem, we additionally use a common classification loss function, denoted as $H$. For a binary segmentation problem, $H$ is the binary cross entropy, while for a multi-class segmentation problem $H$ is the categorical cross entropy.
The final expression for the generalized loss function is obtained by combining (\ref{dicrjacc}) and $H$ as following:
\begin{equation}
\label{free_en}
L=\alpha H+(1-\alpha) (1-J)
\end{equation}
Minimizing this loss function, we simultaneously maximize the predicted probabilities for correct pixels and maximize the intersection $J$ between the masks and the corresponding predictions. The weight parameter $\alpha=0.7$ is found from the evaluation of the network on a held-out data set.
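A minimal PyTorch sketch of this loss is shown below. It uses an image-wide soft Jaccard index computed from predicted probabilities, which is a common differentiable variant of (\ref{dicrjacc}) rather than the literal per-pixel average.
\begin{verbatim}
import torch
import torch.nn.functional as F

def bce_jaccard_loss(logits, targets, alpha=0.7, eps=1e-7):
    """L = alpha*H + (1 - alpha)*(1 - J) with H the binary cross entropy
    and J a soft Jaccard index; targets are float masks of 0s and 1s."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    union = probs.sum() + targets.sum() - intersection
    jaccard = (intersection + eps) / (union + eps)
    return alpha * bce + (1.0 - alpha) * (1.0 - jaccard)
\end{verbatim}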
For training our network, we split our dataset, holding out 1/4 of the images for validation. Then, on the fly, we apply several augmentations to artificially increase the training set size. For spatial augmentation, we use scale transformations of $0.6-1.4$ of the original image and mask. Then, we randomly rotate the image and mask by 30 degrees. From the resulting image and mask, we take random crops of size 448x448 pixels. These images are subject to color transformations such as random contrast/brightness/HSV. One video card GTX1080$Ti$ with 11 Gb of memory allows using a batch size of 8 images.
We train our network using the Adam optimizer with learning rate 1e-4 and decay 1e-4 \cite{kingma2014adam}. The training is done for 20k iterations (batches), saving the weights from several of the best iterations. Since the data set is fairly limited in size and the labeling of the training images is not robust, the predicted value of IoU varies significantly between iterations. To reduce the effect of over-fitting, we used a spatial dropout operation on the output of our network with $p=0.3$.
We made predictions on the whole 1024x1024 pixel image without padding because the side is divisible by $32=2^5$. To improve the robustness of our predictions, we also implemented test time augmentation (TTA), which consists of averaging 4 predictions corresponding to successive rotations by 90 degrees.
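A minimal PyTorch sketch of this test time augmentation is shown below; it averages the sigmoid outputs over the four 90 degree rotations, rotating each prediction back before averaging.
\begin{verbatim}
import torch

def predict_tta(model, image):
    """Average sigmoid predictions over the four 90-degree rotations.
    image: tensor of shape (1, 3, H, W)."""
    model.eval()
    preds = []
    with torch.no_grad():
        for k in range(4):
            rotated = torch.rot90(image, k, dims=(2, 3))
            pred = torch.sigmoid(model(rotated))
            preds.append(torch.rot90(pred, -k, dims=(2, 3)))  # rotate back
    return torch.stack(preds).mean(dim=0)
\end{verbatim}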
\section{Conclusions}
We developed a binary segmentation model using an encoder-decoder network with skip connections. For this network, we used an encoder based on the pre-trained ResNet-34 network, while the decoder was similar to a vanilla U-Net decoder. In addition, we carefully designed a loss function that simultaneously takes into account binary cross entropy and intersection over union (IoU). To improve the performance of our method we also used the test time augmentation technique. The best score of our model on the public leaderboard is 0.64. This method can be further improved by implementing five-fold cross-validation, improving the image augmentation, or using more TTA transformations. Last but not least, the easiest way to make the predictions of our method more robust and precise is to prepare high-quality labeled masks. After that, our method could potentially be optimized to work on embedded devices and to provide real-time road extraction.
\section*{Acknowledgment}
The authors would like to thank Open Data Science community \cite{ods_website} for many valuable discussions and educational help in the growing field of machine/deep learning.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The semiclassical approach to quantum gravity is known as quantum matter field theory propagating on a curved space-time, in which a classically treated curved space-time is perturbed by a suitable quantum matter field \citep{Bir82}.
A fundamental problem in this version of the quantum gravity theory is the calculation of the renormalized expectation value of the quantum matter stress tensor operator $<\hat{T}_{\mu\nu}>_{ren}$. Renormalization theory gives us a suitable theoretical prescription, in which the expectation value of a singular quantum field stress tensor operator reduces to a nonsingular quantity containing an anomalous trace. This nonsingular stress tensor acts as a source on the right-hand side of Einstein's gravity equation as follows.
\begin{equation} G_{\mu\nu}-\Lambda
g_{\mu\nu}=8\pi \{T^{class}_{\mu\nu}+<\hat{T}_{\mu\nu}>_{ren}\}
\end{equation}
where $G_{\mu\nu}$ is the Einstein tensor with the perturbed metric
$g_{\mu\nu}=\hat{g}_{\mu\nu}+\Delta g_{\mu\nu}$ and the background
metric $\hat{g}_{\mu\nu},$ $\Lambda$ is the positive cosmological
constant, and $T^{class}_{\mu\nu}$ is the stress tensor of a classical baryonic matter or
non-baryonic dark matter field. Non-minimally coupled
scalar dark matter fields with a negative value of the equation of state
parameter may originate from effects of conformal
frames. The latter kind of matter is a good candidate to explain
the positively accelerated expansion of the universe and to remove the
naked singularity of the universe in the quantum cosmological approach.
See \citep{Noz09} and references therein. The above equation, which
is written in units $G=\hbar=c=1$, is called the metric back-reaction
equation. Several methods have been presented for the
renormalization prescription, namely the dimensional regularization,
point splitting, adiabatic, and Hadamard renormalization
prescriptions \citep{Bir82}. The latter method is distinct from
the other renormalization prescriptions.
The Hadamard renormalization prescription is described in terms of
Hadamard states and imposes a few conditions on the unknown quantum
vacuum state of an arbitrary interacting quantum field
\citep{Bro84,Ber86,Gha97}. Hence it provides the most direct and
logical approach to the renormalization problem for practical
calculations. Furthermore it is well defined for both
massive and massless fields. \\
Renormalization theory still establishes the covariant conservation
of the expectation value of the quantum field stress tensor operator,
together with a non-vanishing trace anomaly, namely
$\nabla^{\nu}<\hat{T}_{\mu\nu}>_{ren}=0.$ This anomaly is obtained
in terms of geometrical objects such as $R_{\mu\nu}R^{\mu\nu},$
$R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta},$ $\Box R,$ and $R^2$
for a conformally coupled massless quantum field propagating on a four
dimensional curved space time
\citep{Chr76,Adl77,Wal78,Bir82,Bro84,Ber86,Gha97,Par09}. In two
dimensions the conformal coupling reduces to minimal coupling, so
the trace anomaly is obtained in terms of the Ricci
scalar $\mathcal{R}=R^{\beta}_{\beta}$, which for a massless scalar
matter field becomes: \begin{equation}
<\hat{T}_{\mu}^{\mu}>_{ren}=\frac{\mathcal{R}}{24\pi}.\end{equation}
The main problem in equation (1) is to find
$<\hat{T}_{\mu\nu}>_{ren}$ coupled with an arbitrary non-static and
non-spherically symmetric dynamical metric. However, there are many
degrees of freedom and an inherent complexity in four dimensional
solutions of equation (1). These are obtained in detail only for the
class of four dimensional spherically symmetric space times which
are treated as two dimensional curved space times, because the
spherically symmetric condition on four dimensional space times
eliminates the extra degrees of freedom of equation (1)
\citep{Chr77}. The two dimensional analog of the renormalization theory
and solutions of the back-reaction equation have been used by several authors to determine
the final state metric of spherically symmetric dilatonic and also
non-dilatonic evaporating black holes. For
instance, Strominger et al obtained a nonsingular metric for the
final state of an evaporating two dimensional dilatonic massive
black hole \citep{Alw92,Ban92,Cal92,Rus92,Pir93}. It is shown in
\citep{Low93} that an evaporating two dimensional dilatonic Reissner
Nordstr\"{o}m
black hole reduces to a remnant, stable nonsingular space time.
The final state of evaporating dilatonic Schwarzschild de Sitter black holes
whose size is comparable to that of the cosmological horizon
is in thermal equilibrium \citep{Bou98}. It has been obtained that the final
state of a non-dilatonic Schwarzschild-de Sitter evaporating black
hole reduces to a remnant stable object with a nonsingular metric
\citep{Gha06,Gha07}. It is shown by Balbinot et al that the Hawking evaporation \citep{Haw74,Haw75}
of the two dimensional non-dilatonic Schwarzschild black hole is
stopped \citep{Bbr84,Bal84,Bal85,Bal86,Bal89}. Back reaction
corrections of a conformally invariant quantum scalar
field in the Hartle Hawking vacuum state
\citep{Har76} were used by Wang et al to determine the quantum perturbed metric of a non-dilatonic Reissner
Nordstr\"{o}m black hole \citep{Wan01}. They followed
the York approach where a small quantity $\epsilon$ is introduced to
solve the metric back-reaction equation (1) by applying the
perturbation method \citep{Yor85}.\\
Furthermore, noncommutative
quantum field theory in curved space times, together with the generalized
uncertainty principle derived from string theory \citep{Ama87,
Ama88, Ama89, Ama90,Cap00,Sny47,Sei99,Dou01}, is another quantum
gravity approach in which the space-time points might be
noncommutative \citep{Asc05,Cal05,Cal06,Cha01}. The latter quantum
gravity model also predicts a remnant stable mini quantum black
hole, where the Hawking radiation process finishes when the black hole
approaches its Planck scale with a nonzero temperature \citep{Nic06, Noz05, Noz08}.\\
Following the perturbation method presented by York, we solve
in this paper the two dimensional analog of the metric back-reaction
equation (1) and determine the final state of an evaporating Lukewarm
black hole. This kind of black hole is a special class of Reissner
Nordstr\"{o}m de Sitter spherically symmetric static black holes
where the mass parameter is equal to the charge parameter. Following
the work presented by Christensen and Fulling \citep{Chr77}, we
obtain the renormalized stress tensor components of the black hole
Hawking radiation
in terms of a nonlocal contribution of the trace anomaly.
The plan of this paper is as follows.\\
In section 2, we define the classical Lukewarm
black hole metric and obtain the locations of its event
horizons. In section 3, we derive the thermal radiation stress tensor
operator expectation value of a massless, charge-less quantum matter
scalar field propagating on the black hole metric. Having
obtained the Hawking radiation quantum stress tensor, we solve the
back-reaction metric equation (1) in section 4 and obtain the
locations of the quantum perturbed horizons. Section 5 is devoted to
concluding remarks.
\section{Lukewarm Black Hole Metric}
Reissner Nordstr\"{o}m de Sitter space times with Lorentzian line
element is given by \begin{equation}
ds^2=-\Omega(r)dt^2+\frac{dr^2}{\Omega(r)}+r^2(d\theta^2+\sin^2\theta
d\varphi^2) \end{equation} where
\begin{equation}
\Omega(r)=1-\frac{2M}{r}+\frac{Q^2}{r^2}-\frac{\Lambda r^2}{3}
\end{equation}
and $M,Q$ are the mass and charge of the black hole
respectively. $\Lambda$ is the positive cosmological constant.
Lukewarm black holes are a particular class of Reissner
Nordstr\"{o}m-de Sitter, with $Q=M.$ For $4M<\sqrt{3/\Lambda}$ we
have three distinct horizons, namely black hole event horizon at
$r=r_h,$ inner Cauchy horizon at $r=r_{ca},$ and cosmological
horizon at $r=r_c,$ where
\begin{equation}
r_{ca}=\frac{1}{2}\sqrt{3/\Lambda}\left(-1+\sqrt{1+4M\sqrt{\Lambda/3}}\right)
\end{equation}
\begin{equation}
r_h=\frac{1}{2}\sqrt{3/\Lambda}\left(1-\sqrt{1-4M\sqrt{\Lambda/3}}\right)
\end{equation}
and
\begin{equation}
r_c=\frac{1}{2}\sqrt{3/\Lambda}\left(1+\sqrt{1-4M\sqrt{\Lambda/3}}\right).
\end{equation} While the event horizon is formed by the gravitational
potential of the black hole, the cosmological horizon is formed as a
result of the expansion of the universe due to the cosmological
constant \citep{Gib77,Bre11}. An observer located between the two
horizons is causally isolated from the region within the event
horizon, as well as from the region outside the cosmological
horizon. The above line element is the exterior metric of a spherically
symmetric static body with mass $M$ and charge $Q$. It is a solution
of equation (1) under the condition $<\hat{T}_{\mu\nu}>_{ren}=0,$
where $T^{class}_{\mu\nu}$ is the stress tensor of the classical
electromagnetic field of a point charge $Q$, given in
$(t,r,\theta,\varphi)$ coordinates as follows:
\begin{equation}
{T^{(class)}}^{\mu}_{\nu}=\frac{1}{8\pi}\left(\frac{Q}{r^2}\right)^2\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{array}
\right).\end{equation} In the advanced time Eddington-Finkelstein
coordinates $(v,r,\theta,\varphi)$ where
\begin{equation}
dv=dt+\frac{dr}{\Omega(r)} \end{equation} one can obtain the classical
electromagnetic field stress tensor (8) as follows.
\begin{equation}
T_{vv}^{class}(v,r)=\frac{\Omega^{-1}(x)-\Omega(x)}{128M^2x^4}
\end{equation} with
\begin{equation}
\Omega(x)=1-\frac{1}{x}+\frac{q^2}{4x^2}-\frac{\varepsilon x^2}{4},
\end{equation}
\begin{equation}
T^{class}_{vr}=T^{class}_{rr}=\frac{\Omega^{-1}(x)}{128M^2x^4}
\end{equation} and
\begin{equation}
T^{class}_{\theta\theta}=-\frac{1}{32\pi
x^2},~~~T^{class}_{\varphi\varphi}=\sin^2\theta
T_{\theta\theta}^{class}
\end{equation}where we defined
\begin{equation}
x=\frac{r}{2M},~~~q=\frac{Q}{M},~~~\varepsilon=\frac{16M^2\Lambda}{3}>0.
\end{equation} Locations of the classical event horizons defined by
(5), (6) and (7) become respectively
\begin{equation}
x_{ca}=\frac{1-\sqrt{1+\sqrt{\varepsilon}}}{\sqrt{\varepsilon}},~~
~~x_b=\frac{1-\sqrt{1-\sqrt{\varepsilon}}}{\sqrt{\varepsilon}},
~~~x_c=\frac{1+\sqrt{1-\sqrt{\varepsilon}}}{\sqrt{\varepsilon}}
\end{equation} where
\begin{equation}
x_bx_c=\frac{1}{\sqrt{\varepsilon}} \end{equation}
and in case
$0<\varepsilon<1$ we have
\begin{equation}
x_{ca}\approx\frac{\varepsilon}{8}-\frac{1}{2},~~~x_b\approx\frac{1}{2}+\frac{\sqrt{\varepsilon}}{8},~~
~x_c\approx\frac{2}{\sqrt{\varepsilon}}-\frac{1}{2}-\frac{\sqrt{\varepsilon}}{8}.
\end{equation} Applying (11) with $q=1,$ we obtain locations of the
horizons and quasi-flat regions of the black hole space time, from
the equations
$\Omega(x)=0$ and $\frac{d\Omega(x)}{dx}=0$ respectively. These conditions reduce
to the following relations.
\begin{equation}
\varepsilon_e(x)=\frac{4}{x^2}-\frac{4}{x^3}+\frac{1}{x^4}.
\end{equation} and
\begin{equation}
\varepsilon_q(x)=\frac{2}{x^2}-\frac{1}{x^3}.\end{equation}
Diagrams
of the functions defined by (18) and (19) are given by the dashed
and solid lines in figure 1, respectively. These diagrams are valid
for $0<\varepsilon<1.$ In the case $\varepsilon\geq1$ the locations of the
black hole and the cosmological horizons reach each other, which
causes instability of the black hole.\\
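As a simple numerical check of the horizon locations, the following Python sketch evaluates (15) for an illustrative value of the coupling parameter $\varepsilon$ (chosen for readability rather than taken from the astrophysical estimates used later) and verifies that $\Omega(x)$ of (11) with $q=1$ vanishes there.
\begin{verbatim}
import numpy as np

eps = 0.1                                            # illustrative coupling parameter
Omega = lambda x: 1 - 1/x + 1/(4*x**2) - eps*x**2/4  # Eq. (11) with q = 1

x_b = (1 - np.sqrt(1 - np.sqrt(eps))) / np.sqrt(eps)  # black hole horizon, Eq. (15)
x_c = (1 + np.sqrt(1 - np.sqrt(eps))) / np.sqrt(eps)  # cosmological horizon, Eq. (15)

print(x_b, Omega(x_b))   # Omega vanishes at the horizons up to rounding
print(x_c, Omega(x_c))
\end{verbatim}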
In the next section we derive the
Hawking thermal radiation of a quantum Lukewarm black hole minimally
coupled with a linear two dimensional, massless, charge-less,
quantum scalar field. We will consider the interacting quantum
scalar field to be charge-less, so it has no electromagnetic interaction
with the classical electric field stress tensor $T^{class}_{\mu\nu}$
defined by (8). Hence we can suppose that the electric charge of the
black hole is not perturbed by the quantum scalar field. We will
also assume that the quantum scalar field propagates in the s
(spherical) mode on the spherically symmetric background metric
(3), so that only its $g_{tt}$ and $g_{rr}$ components are perturbed by the
renormalized expectation value of the quantum field stress tensor
operator $<\hat{T}_{\mu\nu}[\hat{\phi}]>_{ren}.$ Applying the latter
assumption, one can use the two dimensional analog of the quantum field
back-reaction corrections on the metric as follows.
\section{Black Hole Hawking Radiation}
Following the work presented by Christensen and Fulling
\citep{Chr77}, we will find here the general solution of the covariant
conservation equation defined by
\begin{equation}
\nabla_{\nu}S^{\nu}_{\mu}=0,~~~~S^{\nu}_{\mu}=<\hat{T}_{\mu}^{\nu}>_{ren}
\end{equation} under the anomaly condition (2). Assuming
$\theta,\varphi=constant,$ the two dimensional analog of the metric (3),
described in the advanced time Eddington-Finkelstein coordinates
(9), becomes
\begin{equation}
ds^2=-\Omega(r)dv^2+2dvdr. \end{equation}
Applying (21), the corresponding Ricci
scalar becomes $\mathcal{R}=\Omega^{\prime\prime}(r),$ where the
prime $\prime$ denotes differentiation with respect to the radial
coordinate $r$, and hence the anomaly condition (2) becomes
\begin{equation}
S^{v}_{v}(r)+S^{r}_{r}(r)-\Omega^{\prime\prime}(r)/24\pi=0.
\end{equation} Nonzero components of second kind Christoffel symbols
are obtained as
\begin{equation}
\Gamma^v_{vv}=\frac{\Omega^{\prime}(r)}{2}
=-\Gamma^r_{vr}=\Gamma^r_{rv},~~~\Gamma^r_{vv}=\frac{\Omega(r)\Omega^{\prime}(r)}{2}.
\end{equation} Applying (23), the covariant conservation equation
defined by (20) leads to the following differential equations.
\begin{equation}
{S^{\prime}}^r_{v}+\Omega^{\prime}(S^r_{r}-S^v_v)/2-\Omega
\Omega^{\prime}S^v_{r}/2=0 \end{equation} and
\begin{equation}
{S^{\prime}}^r_r+\Omega^{\prime}S^v_r/2=0. \end{equation} Using
\begin{equation}
S^v_v=S_{rv},~~~~S^v_r=S_{rr},~~~S^r_r=S_{vr}+\Omega
S_{rr},~~~S^r_v=S_{vv}+\Omega S_{rv}
\end{equation}with $S_{vr}=S_{rv}$ the
equations (22), (24) and (25) become respectively
\begin{equation} \Omega
S_{rr}+2S_{vr}=\frac{\Omega^{\prime\prime}}{24\pi},
\end{equation}
\begin{equation}
S_{vv}+\Omega S_{rv}=C_1
\end{equation}and
\begin{equation}
S_{vr}^{\prime}+\frac{3}{2}\Omega^{\prime}S_{rr}+\Omega
S^{\prime}_{rr}=0 \end{equation} where $C_1$ is an integration constant.
Applying (27) and (29) we obtain
\begin{equation}
S_{rr}(r)=\frac{1}{\Omega^2(r)}\left\{C_2-\frac{1}{24\pi}
\int^{r}\Omega(\tilde{r})\Omega^{\prime\prime\prime}(\tilde{r})d\tilde{r}\right\}
\end{equation} where $C_2$ is also an integration constant. Using (27) and
(30) one can show
\begin{equation} S_{vr}(r)=S_{rv}(r)=-\frac{C_2}{2\Omega(r)}
+\frac{1}{48\pi}\left\{\Omega^{\prime\prime}(r)+\frac{1}{\Omega(r)}\int^r\Omega
(\tilde{r})\Omega^{\prime\prime\prime}(\tilde{r})d\tilde{r}\right\}.
\end{equation}
Applying (28) and (31) we obtain
\begin{equation} S_{vv}(r)=C_1+\frac{C_2}{2}-\frac{1}{48\pi}\left\{\Omega(r)\Omega^{\prime\prime}(r)+
\int^{r}\Omega(\tilde{r})\Omega^{\prime\prime\prime}(\tilde{r})d\tilde{r}\right\}.
\end{equation}
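The cancellation underlying this solution can be checked symbolically. The following SymPy sketch verifies that the expressions (30) and (31) satisfy the anomaly condition (27); the indefinite integral $\int\Omega\Omega^{\prime\prime\prime}d\tilde{r}$ is kept as a free symbol since it drops out of the combination.
\begin{verbatim}
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)
C2, K = sp.symbols('C_2 K', real=True)      # K stands for the integral of Omega*Omega'''

Omega = 1 - 1/x + 1/(4*x**2) - eps*x**2/4   # Eq. (34), q = 1

S_rr = (C2 - K/(24*sp.pi)) / Omega**2                                  # Eq. (30)
S_vr = -C2/(2*Omega) + (sp.diff(Omega, x, 2) + K/Omega) / (48*sp.pi)   # Eq. (31)

# Anomaly condition, Eq. (27): Omega*S_rr + 2*S_vr = Omega''/(24*pi)
print(sp.simplify(Omega*S_rr + 2*S_vr - sp.diff(Omega, x, 2)/(24*sp.pi)))  # -> 0
\end{verbatim}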
Using (4) and (14) with $q=1,$ $0<\varepsilon<1,$ the
stress tensor components defined by (30), (31) and (32) can be
rewritten as
\begin{equation}
S_{\mu\nu}(v,r)=\frac{1}{96\pi M^2}\left(
\begin{array}{cc}
48\pi M^2(2C_1+C_2)-2B(x)-12A(x) & \frac{2B(x)+12A(x)-48\pi M^2C_2}{\Omega(x)} \\
\frac{2B(x)+12A(x)-48\pi M^2C_2}{\Omega(x)} & \frac{96\pi M^2C_2-12A(x)}{\Omega^2(x)}\\
\end{array}
\right)\end{equation} where we defined
\begin{equation}
\Omega(x)=1-\frac{1}{x}+\frac{1}{4x^2}-\frac{\varepsilon x^2}{4},
\end{equation}
\begin{equation}
A(x)=\frac{1}{6}\int^x\Omega(\tilde{x})\Omega^{\prime\prime\prime}
(\tilde{x})d\tilde{x}=\frac{1}{24x^6}-\frac{1}{4x^5}+\frac{1}
{2x^4}-\frac{1}{3x^3}-\frac{\varepsilon}{8x^2}+\frac{\varepsilon}{4x}
\end{equation} and
\begin{equation}
\Omega(x)\Omega^{\prime\prime}(x)=B(x)=\frac{3}{8x^6}
-\frac{2}{x^5}+\frac{7}{2x^4}-\frac{2}{x^3}
-\frac{\varepsilon}{2x^2}+\frac{\varepsilon}{x}-\frac{\varepsilon}{2}+\frac{\varepsilon^2
x^2}{8}.
\end{equation} Now we should determine the integration constants
$C_{1}$ and $C_2.$ For the determination of these constants we
require the regularity of $S^{\mu}_{\nu}$ at the black hole horizon
in a coordinate system which is regular there. The stress tensor
$S^{\mu}_{\nu},$ as measured in a local Kruskal coordinate system at
the black hole horizon, will be finite if $S_{vv}$ and $S^t_t+S^r_r$
are finite as $x\to x_b$ and
\begin{equation}\lim_{x\to
x_b}(x-x_b)^{-2}|S_{uu}|<\infty, \end{equation} where $(u,v)$ are
null coordinates \citep{Chr77}. We find easily
\begin{equation}
S_{uu}=\frac{1}{4}(S_{tt}+\Omega^2S_{rr}-2\Omega S_{tr})
\end{equation} where
\begin{equation} S_{tt}=S_{vv}+\Omega^2S_{rr}-2\Omega
S_{rv} \end{equation} and
\begin{equation} S_{tr}=S_{rt}=S_{vr}-\Omega S_{rr}
\end{equation} are obtained by applying (9) and definition
$S_{\mu\nu}=\delta_{\mu}^{\alpha}\delta_{\nu}^{\beta}
S_{\alpha\beta}.$ Applying (30), (38), (39) and (40) we obtain
\begin{equation}S_{uu}(x)=156\pi M^2C_2+24\pi M^2C_1-27A(x)-5B(x)/2.
\end{equation} For a fixed $\varepsilon$ with $0<\varepsilon<1,$
the diagram of figure 1 determines the locations of the unperturbed
black hole and cosmological horizons $x_b,x_c$ where $x_b<x_c.$
Using the obtained black hole horizon radius $x_b$ and (41), the
initial condition $S_{uu}(x_b)=0$ reduces to
\begin{equation}
2C_1+13C_2=\frac{54A(x_b)+5B(x_b)}{24\pi M^2}. \end{equation} For
fields describing a gas of massless bosons (without spin, charge, or
internal degrees of freedom) moved in quasi flat regions of a two
dimensional curved space, the density and the flux are actually
equal, so that $S^t_t(x_q)+S^r_r(x_q)=0,$ \citep{Chr77} which in
terms of the $(v,r)$
coordinates become
\begin{equation} 2\Omega(x_q)S_{rv}(x_q)-S_{vv}(x_q)=0
\end{equation}
where $x_q,$ obtained from $\Omega^{\prime}(x_q)=0$ (see figure 1),
defines the quasi-flat regions of the two dimensional version of the space
time (3). Applying (33), the initial condition (43) becomes
\begin{equation}
2C_1+3C_2=\frac{18A(x_c)+3B(x_c)}{24\pi M^2}. \end{equation} Using
(42) and (44) one can obtain
\begin{equation}
C_{1}=\frac{18[13A(x_c)-9A(x_b)]+13B(x_c)-15B(x_b)}{480 \pi M^2}
\end{equation} and
\begin{equation}
C_2=\frac{18[3A(x_b)-A(x_c)]+5B(x_b)-B(x_c)}{240\pi M^2}.
\end{equation} We are now in a position to show that the stress
tensor (33) defined in the quasi flat region $x=x_q$ can be
decomposed in terms of thermal equilibrium ${S^{(e)}}^{\nu}_{\mu}$
and radiating ${S^{(r)}}^{\nu}_{\mu}$ stress energy tensors of
massless and charge-less bosonic gas
respectively as
\begin{equation} {S^{(e)}}_{\nu\mu}(t,r)=\frac{\pi}{12}T_c^2\left(
\begin{array}{cc}
-2 & 0 \\
0 & 2 \\
\end{array}
\right)
\end{equation}and
\begin{equation} {S^{(r)}}_{\nu\mu}(t,r)= \frac{\pi}{12}T_b^2\left(
\begin{array}{cc}
-1 & 1 \\
1 & 1 \\
\end{array}
\right) \end{equation}where
\begin{equation}
\frac{T_b}{T_{S}}=4\sqrt{B(x_q)+12A(x_q)-16.2A(x_b)+6.4A(x_c)-3.9B(x_c)+4.5B(x_b)},
\end{equation} and
\begin{equation}
\frac{T_c}{T_S}=2\sqrt{54A(x_b)-18A(x_c)+5B(x_b)-B(x_c)-2B(x_q)-24A(x_q)}
\end{equation} are defined as the black hole radiation and the
cosmological thermal equilibrium temperatures respectively.
$T_{S}=\frac{1}{8\pi M}$ is the well known Schwarzschild black hole
temperature. Now we seek to obtain time-independent solutions of the
back reaction equation (1) by applying (10), (11), (12) and (33) in
case $q=1.$
\section{Back Reaction Equation}
Applying the advanced-time Eddington-Finkelstein coordinates
$(v,r,\theta,\varphi),$ defined by (9), the quantum perturbed metric
(3) is taken to have the form \begin{equation}
ds^2_f=-e^{2\psi(r)}F(r)dv^2+2e^{\psi(r)}dvdr+r^2(d\theta^2+\sin^2\theta
d\varphi^2) \end{equation} with
\begin{equation}
F(r)=1-\frac{2m(r)}{r}+\frac{Q^2}{r^2}-\frac{1}{3}\lambda(r)r^2
\end{equation} in which $\psi,m$ are assumed to depend only on
the radial coordinate $r,$ because the perturbed metric should still
be static and spherically symmetric. The index $f$ refers to the
$final$ state of the quantum perturbed evaporating Lukewarm black
hole. The perturbed metric (51) leads to the static metric (3) under
the following boundary conditions:
\begin{equation}
\psi(x_b;\varepsilon=0)=0,~~~m(x_b;\varepsilon=0)=M,~~~~\lambda(x_b;\varepsilon=0)=\Lambda
\end{equation} where $x_b=\frac{1}{2}$ is obtained from (15) under
the condition $\varepsilon=0.$ Applying (51) and definitions
\begin{equation}\frac{m(r)}{M}=\rho(x),~~~\lambda(x)=
\frac{3\varepsilon\sigma(x)}{16M^2},~~~q=1=\frac{Q}{M},~~~x=\frac{r}{2M}
\end{equation} the $(v,r)$ components of the Einstein tensor become
\begin{equation}
G_{vv}(x)=-\frac{e^{2\psi(x)}}{x^2}\left(1-\frac{\rho(x)}{x}+\frac{1}{4x^2}-\frac{\varepsilon\sigma(x)
x^2}{4}\right)$$$$\times\left(\rho^{\prime}(x)+\frac{1}{4x^2}+\frac{3\varepsilon\sigma(x)x^2}{4}+
\frac{\varepsilon\sigma^{\prime}(x)x^3}{4}\right), \end{equation}
\begin{equation} G_{vr}(x)=G_{rv}(x)=e^{\psi(x)}
\left(\frac{\rho^{\prime}(x)}{x^2}+\frac{\varepsilon\sigma^{\prime}(x)x}{4}
+\frac{1}{4x^4}+\frac{3\varepsilon\sigma(x)}{4 }\right)
\end{equation}
\begin{equation} G_{rr}(x)=-2\frac{\psi^{\prime}}{x}
\end{equation}where
$\prime$ denotes
differentiation with respect to $x$. All other components are zero
except $G_{\theta}^{\theta}=G_{\varphi}^{\varphi}$ which follows
from the Bianchi identity $\nabla_{\xi}G^{\xi}_r=0.$ Applying (10),
(11), (12), (33), (55), (56) and (57), we obtain $vv,$ $vr$ and $rr$
components of the Back-reaction equation (1) as respectively
\begin{equation}
\Omega(x) e^{2\psi(x)}\left(1-\frac{\rho(x)}{x}+\frac{1}
{4x^2}-\frac{\varepsilon\sigma(x)x^2}{4}\right)\left(\frac{1}{16x^4}+
\frac{\rho^{\prime}(x)}{4x^2}+\frac{\varepsilon\sigma^{\prime}(x)x}{16}
\right)$$$$+\frac{\pi[1-\Omega^2(x)]}{16x^2}+\Omega(x)[4\pi M^2(2C_1+C_2)-A(x)-B(x)/6]=0
\end{equation}
\begin{equation}
\Omega(x) e^{\psi(x)}\left(\frac{1}{16x^4}+
\frac{\rho^{\prime}(x)}{4x^2}+\frac{\varepsilon\sigma^{\prime}(x)x}{16}
\right)+4\pi M^2C_2-\frac{\pi}{16x^2}-A(x)-\frac{B(x)}{6}=0,
\end{equation} and
\begin{equation}
\psi^{\prime}(x)=\frac{16x^4[A(x)-8\pi
M^2C_2]-\pi\Omega(x)}{8x^3\Omega^2(x)} \end{equation} where
$\Omega(x)$ is given by (34). The solution of equation (60) can be
obtained directly by integration. It is useful to obtain the
behavior of the solution $\psi(x)$ in the neighborhood of its singular
points, namely $x=0$ and $x_{b,c}$ where $\Omega(x_{b,c})=0.$
We obtain
\begin{equation} \psi(x<x_b)\simeq
C_{\psi}+0.24\ln \left(4-\frac{1}{x}\right)+\frac{0.3125}{4x-1},
\end{equation}
\begin{equation} \psi(x\to x_b)\simeq C_{\psi}-\frac{2x_b^3[A(x_b)-8\pi M^2C_2]}{(x-x_b)}
\end{equation}
and
\begin{equation} \psi(x\to x_c)\simeq C_{\psi}+\frac{x_c^3[A(x_c)-8\pi M^2C_2]}{2(x_c-x)}
\end{equation}
where
\begin{equation}\Omega(x<x_b)\simeq\frac{1}{4x^2}-\frac{1}{x},~~~0<\varepsilon<1,
\end{equation}
\begin{equation} \Omega(x\to x_b)\simeq1-\frac{x_b}{x},~~
~\Omega(x\to x_c)\simeq1-\frac{x^2}{x_c^2}\simeq2(1-\frac{x}{x_c})
\end{equation}
and $C_{\psi}$ is an integration constant which is determined by the
initial conditions (53) as follows.\\
Applying $\psi(x_b)=0$
where $x_b=\frac{1}{2}$ with $\varepsilon=0,$ the solution (61) leads
to
\begin{equation}
C_{\psi}\simeq2.07\times10^{-3},~~~e^{C_{\psi}}\approx1.
\end{equation} Inserting (59), the equation (58) becomes
\begin{equation}
\frac{\rho(x)}{x}+\frac{\varepsilon
x^2\sigma(x)}{4}=\frac{H(x)}{G(x)} \end{equation} where
\begin{equation}
H(x)=(1+4x^2)[\pi/4x^2+4[A(x)+B(x)/6]-16\pi M^2C_2]$$$$+\{\pi+64\pi
M^2 x^2(2C_1-C_2)\Omega(x)-[\pi+16
x^2(A(x)+B(x)/6)]\Omega^2(x)\}e^{-\psi(x)} \end{equation} and
\begin{equation}
G(x)=\pi-64\pi M^2C_2 x^2+16x^2[A(x)+B(x)/6]. \end{equation} One can
rewrite the equation (59) as
\begin{equation}
\frac{\rho^{\prime}(x)}{x^2}+\frac{\varepsilon
x\sigma^{\prime}(x)}{4}=Z(x) \end{equation} where we defined
\begin{equation}
Z(x)=\frac{\pi+16x^4[A(x)+B(x)/6]-64\pi M^2C_2 x^4-\Omega(x)
e^{\psi(x)}}{4x^4\Omega(x) e^{\psi(x)}}. \end{equation} Applying
(67), (70) and identity
\begin{equation}
\frac{2\rho(x)}{x^3}-\frac{\varepsilon\sigma(x)}{4}=\frac{\rho^{\prime}}{x^2}+\frac{\varepsilon
x\sigma^{\prime}}{4}-\left[\frac{1}{x}\left(\frac{\rho(x)}{x}+\frac{\varepsilon\sigma(x)
x^2}{4}\right)\right]^{\prime} \end{equation} we obtain
\begin{equation}
\frac{2\rho(x)}{x^3}-\frac{\varepsilon\sigma(x)}{4}=Z(x)-\left[\frac{H(x)}{xG(x)}\right]^{\prime}.
\end{equation} Using (67) and (73) we obtain exactly
\begin{equation}\rho(x)=\frac{x^3}{3}\left[Z(x)+\frac{1}{x}\left(\frac{H(x)}{G(x)}\right)^{\prime}\right]
\end{equation} and
\begin{equation}
\sigma(x)=\frac{4}{3\varepsilon}\left[-Z(x)+\frac{1}{x}
\left(\frac{H(x)}{G(x)}\right)^{\prime}+\frac{H(x)}{x^2G(x)}\right].
\end{equation} Applying (35), (36), (61), (62), (63),
(64), (65), and (66) one obtain
\begin{equation}
H(x<x_b)\simeq\frac{0.42}{x^6}\left(1-\frac{1}{x^{1.76}}\right),
~~~G(x<x_b)\simeq\frac{9.3}{x^3}\left(\frac{0.18}{x}-1\right),
\end{equation}
\begin{equation} H(x\to x_b)\simeq\pi
\exp\left\{\frac{2x_b^3[A(x_b)-8\pi M^2C_2]}{(x-x_b)}\right\}
\end{equation}
\begin{equation} H(x\to
x_c)\simeq(1+4x_c^2)[\pi/4x_c^2+4A(x_c)+2B(x_c)/3-16\pi
M^2C_2]$$$$\times \pi\exp\left\{-\frac{x_c^3[A(x_c)-8\pi
M^2C_2]}{2(x_c-x)}\right\} \end{equation}
\begin{equation} G(x\to x_{b,c})=\pi-64\pi M^2C_2
x_{b,c}^2+16x_{b,c}^2[A(x_{b,c})+B(x_{b,c})/6], \end{equation}
\begin{equation}
Z(x<x_b)\simeq\frac{2.27}{x^3}\left(1-\frac{0.11}{x}\right),
\end{equation}
\begin{equation}
Z(x\to
x_b)\simeq\left[\frac{\pi}{4x_b^3}+4x_b[A(x_b)+B(x_b)/6]-16\pi M^2
C_2
x_b\right]$$$$\times(x-x_b)^{-1}\exp\left\{\frac{2x_b^3[A(x_b)-8\pi
M^2C_2]}{(x-x_b)}\right\} \end{equation} and
\begin{equation} Z(x\to
x_c)\simeq\left[\frac{\pi}{8x_c^3}+2x_c[A(x_c)+B(x_c)/6]-8\pi
M^2C_2x_c\right]$$$$\times(x_c-x)^{-1}\exp\{-\frac{x_c^3[A(x_c)-8\pi
M^2C_2]}{2(x_c-x)}\} \end{equation} Using (76) and (80), the
equations defined by (74) and (75) become respectively
\begin{equation}
\rho(x<x_b)\simeq0.76\left(1-\frac{0.11}{x}\right)-\frac{2.51\times10^{-3}}{x^{5.52}(0.18-x)^2}
\end{equation} and
\begin{equation}
\sigma(x<x_b)\simeq-\frac{4}{3\varepsilon}\left\{\frac{2.27}{x^{3}}
\left(1-\frac{0.11}{x}\right)+\frac{0.043}{x^{5.76}(0.18-x)}+\frac{7.53\times10^{-5}}{x^{8.52}(0.18-x)^2}\right\}.
\end{equation} Applying (77), (79) and (81) the equations
defined by (73) and (75) become respectively
\begin{equation}\rho(x\to
x_b)\simeq\{\frac{\frac{\pi}{12}+\frac{4x_b^4[A(x_b)+B(x_b)/6]}{3}-\frac{16\pi
M^2C_2x_b^4}{3}}{(x-x_b)}$$$$ -\frac{\frac{2\pi x_b^5[A(x_b)-8\pi
M^2C_2]}{3G(x_b)}}{(x-x_b)^{2}}\}\exp\left\{\frac{2x_b^3[A(x_b)-8\pi
M^2C_2]}{(x-x_b)}\right\}, \end{equation}
\begin{equation}
\sigma(x\to
x_b)\simeq-\frac{4}{3\varepsilon}\{\frac{\pi/3x_b^3+4x_b[A(x_b)+B(x_b)/6]-16\pi
M^2 x_bC_2}{(x-x_b)}$$$$+\frac{2\pi x_b^2[A(x_b)-8\pi
M^2C_2]}{G(x_b)(x-x_b)^{2}}\}\exp\left\{\frac{2x_b^3[A(x_b)-8\pi
M^2C_2]}{(x-x_b)}\right\}, \end{equation}
\begin{equation} \rho(x\to
x_c)\simeq\{\frac{\pi/24+2x_c^4[A(x_c)+B(x_c)/6]/3-8\pi
M^2C_2x_c^4/3}{x_c-x}$$$$-\frac{\pi
x_c^3(1+4x_c^2)[\pi/4+4A(x_c)+2B(x_c)/3-16\pi M^2C_2][A(x_c)-8\pi
M^2C_2]}{2G(x_c)(x_c-x)^2}\}$$$$\times\exp\left\{-\frac{x_c^3[A(x_c)-8\pi
M^2C_2]}{2(x_c-x)} \right\} \end{equation} and
\begin{equation}
\sigma(x\to x_c)\simeq -\frac{4}{3\varepsilon}\{\frac{\pi/8x_c^3+2x_c[A(x_c)+B(x_c)/6]-8\pi
M^2C_2x_c}{x_c-x}$$$$+\frac{\pi
x_c^2(1+4x_c^2)[\pi/4x_c^2+4A(x_c)+2B(x_c)/3-16\pi
M^2C_2][A(x_c)-8\pi
M^2C_2]}{2G(x_c)(x_c-x)^2}\}$$$$\times\exp\left\{-\frac{x_c^3[A(x_c)-8\pi
M^2C_2]}{2(x_c-x)} \right\} \end{equation} Having obtained the above
solutions, we are now in a position to write the quantum perturbed
Lukewarm black hole metric (51) defined in $(t,r,\theta,\varphi)$
coordinates as
\begin{equation}
ds^2_f=-F(r)dt^2+\frac{dr^2}{F(r)}+r^{2}\{d\theta^2+\sin^2\theta
d\varphi^2\} \end{equation}in which
\begin{equation}
dt=e^{\psi(r)}dv-\frac{dr}{F(r)} \end{equation} and $\psi(r)$ with
$r=2Mx,$ is given by (61), (62) and (63). $F(r)$ defined by (52) and
(54) as
\begin{equation}
F(x)=1+\frac{1}{4x^2}-\frac{\rho(x)}{x}-\frac{\varepsilon\sigma(x)x^2}{4}
\end{equation} is given exactly by applying (83), (84), (85),
(86), (87) and (88). It will be useful that we choose a
numerical value for $x_{b,c}$ from the figure 1 such as follows.\\
Experimental limits on the cosmological constant are obtained as
\citep{Ken91}
\begin{equation}
|\Lambda|\leq10^{-54}cm^{-2}
\end{equation} and the orders of magnitude of the
Schwarzschild radii for a galaxy and the Sun are given
by $(2M\sim10^{16} cm)$ and $(2M\sim3\times10^5 cm)$ respectively.
The corresponding coupling parameters
$\varepsilon=\frac{16M^2\Lambda}{3}$ are obtained as
$\varepsilon_{galaxy}\simeq1.33\times10^{-22}$ and
$\varepsilon_{sun}\simeq1.2\times10^{-43}$ respectively, which are
very small numbers. As a numerical example we use here
$\varepsilon=10^{-22}$ and obtain
\begin{equation} (x_b,x_c)\cong(0.5,10^{11})
\end{equation}
\begin{equation}
A(x_b)=A(x_c)\cong0~~~B(x_b)\cong48,~~~B(x_c)\cong-3.75\times10^{-23}
\end{equation} and
\begin{equation}\pi
M^2C_2\cong1,~~~G(x_b)\cong-15,~~~G(x_c)\cong12.57. \end{equation}
Using (83), (84), (85), (86), (87), (88) and the above numerical
values the equation (91) leads to
\begin{equation}
F(x<0.5)\cong\frac{0.62}{x^{6.52}(0.18-x)^2}, \end{equation}
\begin{equation}
F(x\to
0.5)\cong\frac{42.53(x-0.45)}{(x-0.5)^2}\exp\left\{\frac{2}{0.5-x}\right\},
\end{equation}
\begin{equation} F(x\to
10^{11})\cong\frac{2.13\times10^{56}}
{\left(1-\frac{x}{10^{11}}\right)^2}\exp\left\{\frac{4\times10^{12}}{1-\frac{x}{10^{11}}}\right\}.
\end{equation} The solution (96) does not vanish in the region
$0<x<0.5.$ The solution (97) vanishes at $x\cong0.45.$ This is
the location of the perturbed black hole event horizon, where $x_b=0.5$
is the classical unperturbed radius of the Lukewarm black hole event
horizon. It is easily seen that the solution (98) converges to
zero (the perturbed cosmological event horizon) in the limit
$x>>10^{11}.$ These solutions predict that the back reaction
corrections of the interacting quantum field on the perturbed Lukewarm static
black hole metric cause a shift in the locations of the event horizons. In
other words, the cosmic censorship hypothesis is still preserved in the
presence of the quantum field perturbations on a curved background
metric. As future work the authors will attempt to seek a time
dependent version of the perturbation solutions of the problem.
In particular, the predicted stability of an evaporating Lukewarm black
hole encourages us to seek unperturbed solutions of the back
reaction equation of the problem by using the Wheeler-DeWitt
canonical quantum gravity approach. The results of this work, together
with the results of several works mentioned in the introduction, predict
remnant stable mini quantum black holes for which the cosmic censorship
hypothesis is still valid.
\section{Concluding Remarks}
The two dimensional analog of the Hawking thermal radiation stress tensor
of the quantum perturbed spherically symmetric static Lukewarm black
hole is derived by applying the Christensen and Fulling method.
The obtained stress tensor is then used to solve a time-independent
version of the well known metric back-reaction equation defined on a
perturbed Lukewarm metric. According to York's hypothesis
\citep{Yor85}, we assume here that the massless and charge-less
quantum scalar fields propagating on the background metric are in s
(spherical) modes, so that only the $(t,r)$ components of the metric are
perturbed. This preserves the spherical symmetry
and allows the mass and cosmological parameters of
the Lukewarm black hole to be chosen as slowly varying functions of
the radial coordinate. The mathematical derivations predict a
shrunken black hole horizon and an extended cosmological horizon
with respect to the corresponding classical horizon locations.
In particular, these quantum field perturbations do not cause
violations of the cosmic censorship hypothesis.
\section{Introduction}
\subsection{Exotic 4- and 5-quark particles}
In 1964, Gell-Mann and Zweig theorized that mesons were formed by quark-antiquark pairs and baryons by three quarks. Both theories also allowed for mesons consisting of two quarks and two antiquarks and baryons consisting of four quarks plus an antiquark. These exotic multi-quark states have been the subject of extensive theoretical and experimental studies~\cite{KRS,OSZ}.
The first confirmed 4-quark state was the $X(3872)$ decaying into $J/\psi \: \pi^+ \pi^-$, with quark content $c \overline{c} u \overline{u}$, discovered in 2001 by the Belle~\cite{Belle-X3872} experiment.
The $X(3872)$ mass is within $<$ 0.25 MeV of the $D^0 \: \overline{D}^{*0}$ threshold, and since it has a large ($>$ 30 $\%$) branching fraction into $D^0 \: \overline{D}^{*0}$, there has been theoretical speculation that it is a weakly bound $D^0 \: \overline{D}^{*0}$ molecule. However, such a molecule, with such a small binding energy, would have a large spatial extent $\approx$~10 fermi, and a large probability for dissociation by interactions with other particles produced at primary hadronic interaction vertices. So, could the $X(3872)$ be composed of a $D^0 \: \overline{D}^{*0}$ molecule, or hadrocharmonium (a $c \overline{c}$ pair embedded in a cloud of $u \overline{u}$ light quarks), or a tightly bound tetraquark or diquark-antidiquark pair?
In 2015, the LHCb~\cite{LHCb-Pc} experiment discovered pentaquarks, with quark content $ u u d c \overline{c}$, in $\Lambda_b^0 \rightarrow K^- P_c^+$ with the pentaquark decay $P_c^+ \rightarrow J/\psi \: p$.
\subsection{The D0 Experiment and Analyses}
D0~\cite{D0-X3872,D0-Zc3900,D0-Pc} studied charmonium-like exotics decaying into $J/\psi \rightarrow \mu^+ \mu^-$ plus 1, 2, or 3 hadrons. In $p \overline{p}$ collisions, $J/\psi$ are produced promptly, at the primary event vertex, either directly or in strong decays of higher-mass charmonium-like states, or non-promptly in $b$-hadron decays with the decay vertex displaced from the primary vertex, with proper decay length $c \tau \approx$ 0.46 $\mu$m.
The event sample was split into ``displaced vertex'' candidates from the decays of $b$-hadrons, based mainly on the significance of the transverse vertex separation $L_{xy}/\sigma_{L_{xy}}$, with the remaining events classified as ``primary vertex'' candidates.
After correcting for the $b$-decay events feeding into the primary vertex candidates, we studied the prompt and non-prompt production of $X(3872)$ and $Z_c^\pm(3900)$ and compared with prompt and $b$-decay samples of charmonium $\psi(2S) \rightarrow J/\psi \; \pi^+ \pi^-$.
\section{\boldmath $X(3872)$ and Comparison with \boldmath $\psi(2S)$ -- is the \boldmath $X(3872)$ a molecule?}
\begin{figure}
\begin{flushleft}
\includegraphics[width=6.5in]{Fig_1a}
\caption{(left to right:) Mass spectra for (a) $\psi(2S)$ and (b) $X(3872)$, both decaying into $J/\psi \; \pi^+ \pi^-$;
and $p_T$ distributions for the non-prompt fractions $f_{NP}$ for (c) $\psi(2S)$ and (d) $X(3872)$.}
\end{flushleft}
\end{figure}
We sorted the $X(3872)$ and $\psi(2S) \rightarrow J/\psi \; \pi^+ \pi^-$ candidates into bins in $p_T$ and
pseudo-proper time, $t_{pp} = \vec{L}_{xy} \cdot \vec{p}_T \; m/(p_T^2 \; c)$, and then fitted for the $\psi(2S)$ and $X(3872)$ yields in each bin. These yields vs. $t_{pp}$ were then fit for the non-prompt fractions $f_{NP}(\psi(2S))$ and $f_{NP}(X(3872))$. The prompt fractions are $f_P = 1 - f_{NP}$. Figure 1(c) shows the non-prompt fractions
$f_{NP}(\psi(2S),p_T)$, for ATLAS, CMS, CDF, and D0~\cite{D0-X3872}.
In Fig. 1(d), $f_{NP}(X(3872),p_T)$ is constant with $p_T$, but $f_{NP}(LHC) \approx 3 \times f_{NP}(D0)$. The prompt production of $X(3872)$ may be suppressed at the LHC relative to the Tevatron due to the dissociation of the spatially large
$D^0 \: \overline{D}^{*0}$ molecular component by interactions with the higher LHC hadron multiplicity at the primary vertex.
This seems to be consistent with the recent LHCb $X(3872)$ production versus primary vertex multiplicity~\cite{LHCb-X3872}.
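A minimal sketch of the pseudo-proper time defined above is given below; the example kinematic values are illustrative, and the classification cut on $L_{xy}/\sigma_{L_{xy}}$ is an assumed placeholder rather than the actual D0 selection.
\begin{verbatim}
import numpy as np

C_MM_PER_PS = 0.2998   # speed of light in mm/ps

def pseudo_proper_time(lxy_mm, pt_gev, mass_gev):
    """t_pp = (Lxy . pT) m / (pT^2 c); with Lxy projected onto pT this
    reduces to Lxy * m / (pT * c).  Result in picoseconds."""
    return lxy_mm * mass_gev / (pt_gev * C_MM_PER_PS)

def is_displaced(lxy_mm, sigma_lxy_mm, cut=3.0):
    """Illustrative split by transverse vertex separation significance;
    the cut value is a placeholder, not the D0 analysis cut."""
    return lxy_mm / sigma_lxy_mm > cut

print(pseudo_proper_time(lxy_mm=0.5, pt_gev=10.0, mass_gev=3.872))
\end{verbatim}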
If $X(3872)$ were a molecule, it could be formed by the direct coalescence of $D^0$ and $\overline{D}^{*0}$.
Braaten {\sl et al.}~\cite{EB-1,EB-2} also considered production by the scattering
$D^{*+} \: \overline{D}^{*0} \rightarrow \pi^+ \; D^0 \; \overline{D}^{*0}$
followed by the coalescence of $D^0$ $\overline{D}^{*0} \rightarrow X(3872)$.
They calculated that $f_\pi$ $\approx$ 14 $\%$ of the molecular component of the $X(3872)$, whether produced promptly in the primary hadronic vertex or in the decay of $b$-hadrons, would be accompanied by a $\pi^\pm$ with kinetic energy in the $X \pi$ center-of-mass
reference frame, T($X\pi$) $<$ 11.8 MeV. This estimate
is strongly dependent on the binding energy of the $X(3872)$, which is still imprecisely known. No such low-T enhancement is expected for the charmonium $\psi(2S) \; \pi$, so $\psi(2S) \; \pi$
is used to scale acceptances, efficiencies, and random coincidences for $X(3872) \; \pi$.
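A minimal sketch of this kinematic variable is given below: the charged-pion kinetic energy in the $X\pi$ center-of-mass frame computed from the pair invariant mass. The four-momenta are illustrative placeholders, and the exact analysis definition of T($X\pi$) may differ in detail.
\begin{verbatim}
import numpy as np

M_X, M_PI = 3.8717, 0.1396   # X(3872) and charged-pion masses in GeV

def pion_cm_kinetic_energy(p_x, p_pi):
    """Four-vectors (E, px, py, pz) in GeV; returns the pion kinetic
    energy in the X-pi center-of-mass frame, in GeV."""
    e, px, py, pz = (p_x[i] + p_pi[i] for i in range(4))
    m_pair = np.sqrt(e**2 - px**2 - py**2 - pz**2)
    e_pi_cm = (m_pair**2 + M_PI**2 - M_X**2) / (2.0 * m_pair)
    return e_pi_cm - M_PI

p_x = (6.0, 0.0, 0.0, np.sqrt(36.0 - M_X**2))          # placeholder kinematics
p_pi = (0.20, 0.0, 0.0, np.sqrt(0.20**2 - M_PI**2))
print(1e3 * pion_cm_kinetic_energy(p_x, p_pi), "MeV")  # compare with 11.8 MeV
\end{verbatim}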
Figure 2 displays the 5-track primary vertex
$\psi(2S) \: \pi$ (left) and $X(3872) \: \pi$ (center) with the T $<$ 11.8 MeV cut. D0 found 44 $\pm$ 14 $\psi(2S) \: \pi$ events, consistent with the random coincidence rate. After subtracting 6 expected random coincidences, 12 $\pm$ 16 $X(3872) \:\pi$ events are observed. Applying the acceptances and the $f_\pi \approx 14 \%$ ratio, we expected between 245 and 730 events. Thus we have no evidence for the accompanying $\pi^\pm$ in the prompt production as expected for a molecular $X(3872)$ state.
The T($X\pi$) distribution for displaced 5-track vertex events is displayed in Fig. 2(right), showing 27 $\pm$ 12 events, including 2 random events for T($X \pi) < $ 11.8 MeV. Again, scaling the total $X(3872)$ yield and applying the acceptances, 30-90 events would be expected.
The displaced vertex $X(3872) \: \pi$
events, although consistent with the expected yield, are also within 2$\: \sigma$ of the null-hypothesis for this low-T($X\pi$) molecular scattering process. Of course, the $X(3872)$ could have only a small molecular component and be consistent with these observations.
\begin{figure}
\begin{flushleft}
\includegraphics[width=6.25in]{Fig_2a}
\caption{$J/\psi \; \pi^+ \pi^-$ mass distributions for $J/\psi \; \pi^+ \pi^- \pi^\pm$ with T $<$ 11.8 MeV cut for $\psi(2S)$ (left) and $X(3872)$ (center) for the primary vertex event selection. (right) The T($X \pi$) distribution for the displaced vertex event selection for $X(3872) \pi$.}
\end{flushleft}
\end{figure}
\section{Production of \boldmath $Z_c^\pm(3900)$}
The $Z_c^+(3900)$ has a quark composition of $c \overline{c} u \overline{d}$ and decays into $J/\psi \; \pi^+$. D0~\cite{D0-Zc3900} searched for
$Z_c^\pm(3900)$ in 4-track vertex events with $Z_c^\pm \; \pi^\mp$ arising from both prompt and $b$-hadron decay processes. We performed a coarse scan in six 100 MeV wide mass bins for the parents of the $Z_c(3900)$, e.g. the previously seen
$\psi(4260) \rightarrow Z_c(3900) \; \pi$.
Figure 3(left) shows the M($J/\psi \; \pi^\pm$) spectrum for the 4.2 $<$ M($J/\psi \; \pi^\pm \pi^\mp$) $<$ 4.3 GeV bin for displaced vertices, the only bin with a significant $Z_c(3900)$ yield. We found 376 $\pm$ 76 $Z_c(3900)$ events with a significance of $5.2 \: \sigma$. Figure 3 also shows the $Z_c(3900)$ yields for each of the 6 parent mass bins for the displaced and primary vertex events.
\begin{figure}
\centering
\includegraphics[width=6.25in]{Fig_3a}
\caption{(left) The M($J/\psi \; \pi^\pm$) distribution for 4.2 $<$ M($J/\psi \; \pi^+ \; \pi^-$) $<$ 4.3 GeV for displaced vertex events, the only mass range with a $Z_c^\pm(3900)$ yield statistically greater than zero; the $Z_c^\pm(3900)$ yields for the six M($J/\psi \; \pi^+ \;
\pi^-$) mass bins, (center) displaced vertex events, and (right) primary vertex events. }
\end{figure}
\section{Inclusive Pentaquarks}
Until now, only LHCb~\cite{LHCb-Pc} has observed pentaquarks, quoting masses, widths, and relative yields for three $P_c$ states at 4312, 4440, and 4457 MeV.
D0~\cite{D0-Pc} searched for inclusive production of pentaquarks in $P_c \rightarrow J/\psi \; p$ without reconstructing a $b$-baryon parent.
The D0 mass resolution is insufficient to separate the $P_c(4440)$ and $P_c(4457)$ states. We fixed the masses and widths at the LHCb quoted values and convolved them with the D0 mass resolution for our fits. We assumed an incoherent sum of $P_c(4440)$ and
$P_c(4457)$, but allowed their relative yields to vary. Figure 4(left) shows the $J/\psi \; p$
mass spectrum for displaced vertices, finding a total of
N(4440) + N(4457) = 830 $\pm$ 206 events, with significance of $3.2 \: \sigma$ (stat. + syst.).
The ratio of N(4440)/(N(4440)+N(4457)) = 0.61 $\pm$ 0.23, consistent with the LHCb ratio of 0.68.
D0 found only 151 $\pm$ 186 events for displaced vertex $P_c(4312)$.
\begin{figure}
\centering
\includegraphics[width=4.3in]{Fig_4}
\caption{D0 Inclusive Invariant Mass($J/\psi \: p$) distributions: (left:) showing an unresolved peak at the average mass of the $P_c(4440)$ and $P_c(4457)$ states with significance of $3.2 \: \sigma$ (statistical + systematic) for displaced vertex events; (right:) no significant peak was observed for primary vertex events, nor for the $P_c(4312)$ state.}
\end{figure}
\section{Summary Highlights}
The non-prompt fraction $f_{NP}(X(3872))$ at the Tevatron is $\approx$ 3 times lower than at the LHC, indicating that a promptly produced
$D^0 \: \overline{D}^{*0}$ molecular component of the $X(3872)$ may be more readily dissociated by interactions with the higher primary vertex multiplicity at the LHC.
We found no significant evidence for $D^{*+} \: \overline{D}^{*0}$ scattering producing a co-moving $X(3872)$ and $\pi^+$.
We observed $Z_c^\pm$ production consistent with the sequential decay of $b$-hadrons $\rightarrow \psi(4260) \rightarrow Z_c(3900)^\pm
\pi^\mp $, but found no significant evidence for $Z_c(3900)$ via decays of prompt $\psi(4260)$ or via decays of other parents of $Z_c(3900)^\pm \pi^\mp$ within the 4.1-4.7 GeV mass range.
We observed an inclusive unresolved peak corresponding to the $P_c(4440) + P_c(4457)$ pentaquarks from the decay of $b$-baryons
with significance = $3.2 \: \sigma$ (stat. + syst.). This was the first confirmation of the LHCb pentaquark discovery.
We found no significant evidence for prompt $P_c(4440) + P_c(4457)$ or for the $P_c(4312)$ from either prompt or $b$-baryon decay processes.
\section*{Acknowledgments}
The D0 Collaboration thanks the staffs at Fermilab and collaborating institutions,
and acknowledges support of the many international funding agencies which made this experiment possible.
\section*{References}
\section{Introduction}
\label{se:int}
This paper presents two-sided estimates for the value of the cost functional (assuming
that the state equation can not be solved exactly) and shows how they can be used
to generate estimates for a certain error quantity
(cf. (\ref{eq:errdef}) and Theorem \ref{th:gen:mikh}).
In the case of unconstrained control, some estimates and numerical tests have
been presented in \cite{GaevskayaHoppeRepin2007}. In \cite{Repin2008},
the case of ``box constraints'' is treated.
Here, these results are extended considerably for constraints of more
general type, a new error quantity is introduced, and the results are confirmed
by numerical tests.
In section \ref{se:mod}, definitions and standard results related to optimal
control problems with elliptic state equation are recalled. Cost functionals
are assumed to be of a certain type, where the state is measured in terms of
the energy norm generated by the state equation. This is a special case of the
general theory, which can be found, e.g., in the monographs
\cite{Lions1971,Troltzsch2010}.
In section \ref{se:est}, the functional a posteriori error estimates (see
monographs \cite{NeittaanmakiRepin2004,Repin2008,MaliNeittaanmakiRepin2014}
and references therein) for the state equation are applied to generate two-sided
bounds for the
value of the cost functional. The strong connections between the estimates and
the principal relations generating the optimal control problem are underlined.
Theorem \ref{th:gen:mikh} (generalization of \cite[Ch. 9, Th. 9.14]{Repin2008} for
the case of constrained control) is the analog of the Mikhlin identity
(cf. Theorem \ref{th:Mikh}) for the optimal control problem. It
introduces a well motivated error quantity and shows how the estimates for the
cost function value can be used to generate two-sided bounds.
Some examples of optimal control problems of the type described in Sect.
\ref{se:mod} are discussed in Sect. \ref{se:ex}. Numerical tests in Sect.
\ref{se:num} demonstrate how
the estimates can be combined with an arbitrary (conforming) numerical
method.
\section{Elliptic optimal control problem}
\label{se:mod}
\subsection{Definitions}
Let $\LSspace$, $\LFspace$, and $\Cspace$ be Hilbert spaces. Their
inner products and norms are denoted by subscripts, e.g., $(\cdot,\cdot)_{\LSspace}$
and $\| \cdot \|_{\LSspace}$.
Moreover, $\Sspace \subset \LSspace$ is a Hilbert space
generated by the inner product
$ (q,z)_{\Sspace} := (q,z)_{\LSspace} + (\Lambda q, \Lambda z)_{\LFspace}$, where
$\Lambda: \Sspace \rightarrow \LFspace$ is a linear,
bounded operator.
The injection from $\Sspace$ to $\LSspace$ is continuous and
$\Sspace$ is dense in $\LSspace$. Operator $\Lambda$ satisfies
a Friedrichs type inequality
\begin{equation} \label{eq:gen:Fri}
\| q \|_{\LSspace} \leq c \| \Lambda q \|_{\LFspace},
\quad \forall \, q \in {\Sspace_0} ,
\end{equation}
where $\Sspace_0 \subset \Sspace$ is a closed subspace. Assume
$\Sspace_0 \subset \Sspace \subset \LSspace \subset \Sspace_0^*$,
where $\Sspace_0^*$ is the dual space of $\Sspace_0$.
Define linear bounded operators
$\mathcal B:\Cspace \rightarrow \Sspace_0^*$,
$\mathcal A:\LFspace \rightarrow \LFspace$,
$\mathcal N:\Cspace \rightarrow \Cspace$, where $\mathcal A$ and $\mathcal N$
are symmetric and positive definite,
\begin{equation*}
\blow \| q \|^2_{\LFspace}
\leq ( \mathcal A q , q )_{\LFspace} \leq
\overline{c} \| q \|^2_{\LFspace}, \quad \forall \, q \in \LFspace
\end{equation*}
and
\begin{equation*}
\underline \kappa \| v \|^2_{\Cspace}
\leq ( \mathcal N v , v )_{\Cspace} \leq
\overline \kappa \| v \|^2_{\Cspace}, \quad \forall \, v \in \Cspace ,
\end{equation*}
where $\blow$ and $\overline{c}$ ($\underline \kappa$ and $\overline \kappa$)
are positive constants. Thus, they generate inner products
\[
( q , z )_{\mathcal A} := (\mathcal A q, z)_\LFspace,
\quad
( q , z )_{\mathcal A^{-1}} := (\mathcal A^{-1} q, z)_\LFspace ,
\quad
( v , w )_{\mathcal N} := (\mathcal N v,w)_{\Cspace},
\]
and the respective norms
\[
\| q \|_{\mathcal A} := \sqrt{(\mathcal A q, q)_\LFspace} ,
\quad
\| q \|_{\mathcal A^{-1}} := \sqrt{ (\mathcal A^{-1} q, q)_\LFspace} ,
\quad
\| v \|_{\mathcal N} := \sqrt{ (\mathcal N v,v)_{\Cspace}} .
\]
The adjoint operators $\Lambda^*: \LFspace \rightarrow \Sspace_0^*$ and
$\mathcal B^*: \Sspace_0 \rightarrow \Cspace^*$ are defined
by the relations
\begin{equation*}
\langle \Lambda^* z, q \rangle_{\Sspace_0} = ( z , \Lambda q )_{\LFspace} ,
\quad \forall \, z
\in \LFspace, \; q \in \Sspace_0
\end{equation*}
and
\begin{equation} \label{eq:Badj}
\langle \mathcal B v, q \rangle_{\Sspace_0} = \langle v, \mathcal B^* q \rangle_{\Cspace} ,
\quad \forall \, v
\in \Cspace, \; q \in \Sspace_0 ,
\end{equation}
where $\langle \cdot, \cdot \rangle_{\Sspace_0}$ denotes the pairing
of $\Sspace_0$ and its dual space $\Sspace_0^*$. By the Riesz
representation theorem, there exists an isomorphism (denoted, e.g., by
\mbox{$\mathcal I_\Cspace:\Cspace \rightarrow \Cspace^*$}) from any Hilbert space
onto the corresponding dual space.
The adjoint operator defines a subspace
\[
\Fspace := \{ q \in \LFspace \, | \, \Lambda^* q \in \LSspace \} \subset \LFspace .
\]
The norm on $\Sspace_0^*$ is
\begin{equation*}
\NNN \ell \NNN := \sup\limits_{q \in \Sspace_0 \atop q \neq 0}
\frac{|\langle \ell, q \rangle_{\Sspace_0}|}{ \| \Lambda q \|_{\mathcal A}} .
\end{equation*}
Consider a bilinear form $a: \Sspace_0 \times \Sspace_0 \rightarrow \mathbb{R}$,
\begin{equation*}
a(q,z) := ( \mathcal A \Lambda q , \Lambda z )_{\LFspace} .
\end{equation*}
It is $\Sspace_0$-elliptic and continuous and generates
an energy norm $ \NT q \NT := \sqrt{a(q,q)} $
in $\Sspace_0$.
\subsection{Optimal control problem}
The state equation is
\begin{equation}
\label{eq:state:weak}
a(y(v),q) = \langle \ell + \mathcal Bv, q \rangle_{\Sspace_0} ,
\quad \forall q \in \Sspace_0 ,
\end{equation}
where $\ell \in \Sspace_0^*$, $v \in \Cspace_{\rm ad} \subset \Cspace$ is the control,
and $y(v) \in \Sspace_0$ is the corresponding state. Let
$\Cspace_{\rm ad} \subset \Cspace$ be a non-empty,
convex, and closed set.
The cost functional $J:\Cspace \rightarrow \mathbb{R}$ is
\begin{equation}
\label{eq:cost} J(v) := \NT y(v) - y^d \NT^2
+ \| v-u^d \|_{\mathcal N}^2 ,
\end{equation}
where $u^d \in \Cspace$ and $y^d \in \Sspace_0$.
The optimal control problem is to find $u \in \Cspace_{\rm ad}$,
such that
\begin{equation} \label{eq:ocp}
J(u) \leq J(v) , \quad \forall v \in \Cspace_{\rm ad} .
\end{equation}
Under the earlier assumptions, $J$ is $\Cspace$-elliptic, coercive, and lower
semi-continuous. Thus, the solution of the optimal control problem exists and is unique
(see, e.g., \cite[Chap. II, Th. 1.2]{Lions1971}).
\begin{Remark}
A cost functional of the type
\[
J_2(v) := \| \Lambda y(v) - \sigma^d \|^2_{\mathcal A}
+ \| v-u^d \|_{\mathcal N}^2
\]
can be shifted using a projection: Find $y^d \in \Sspace_0$ such that
\[
(\mathcal A (\Lambda y^d - \sigma^d) , \Lambda q )_{\LFspace} = 0 , \quad \forall q \in \Sspace_0 .
\]
Then,
$J(v) = J_2(v) - \| \Lambda y^d - \sigma^d \|^2_{\mathcal A}$.
\end{Remark}
The derivative of $J$ at $v$ is
\begin{multline} \label{eq:Jderiv}
\langle J'(v) , w \rangle_\Cspace =
\lim\limits_{t \rightarrow 0_+} \tfrac{1}{t}\left( J(v+tw)-J(v) \right)
=
2 \langle \mathcal B w , y(v) - y^d \rangle_{\Sspace_0} + 2 ( v-u^d , w )_{\mathcal N} \\
=
2 ( \mathcal I_\Cspace^{-1} \mathcal B^*(y(v)-y^d) + \mathcal N (v-u^d) , w )_\Cspace .
\end{multline}
The necessary conditions for the optimal control
problem (\ref{eq:ocp}) are
(\ref{eq:state:weak}) and
\begin{equation} \label{eq:nec:der}
\langle J'(u) , v-u \rangle_\Cspace \geq 0, \quad \forall v \in \Cspace_{\rm ad}
\end{equation}
(see, e.g., \cite[Ch. I, Th. 1.3]{Lions1971}, \cite[Le. 2.21]{Troltzsch2010}), i.e.,
\begin{equation} \label{eq:nec:const}
( \mathcal I_\Cspace^{-1} \mathcal B^*(y(u)-y^d) + \mathcal N (u-u^d) , v-u )_\Cspace
\geq 0,
\quad \forall v \in \Cspace_{\rm ad} .
\end{equation}
Note that for the cost functional of type (\ref{eq:cost}), there is
no need to define an adjoint state to present the necessary conditions
(compare \cite[Chap. II, Th. 1.4]{Lions1971}).
The following proposition
(dating back to
\cite{Moreau1965}, see, e.g.,
\cite[Chap. I, Pr. 2.2]{EkelandTemam1976} or \cite[Chap. 7, Pr. 7.4]{Clarke2013})
allows one
to write (\ref{eq:nec:const}) in a different form.
\begin{Proposition} \label{pr:proj}
In addition to the earlier assumptions, let $x \in \Cspace$.
Then, the following conditions are equivalent,
\[
\begin{tabular}{ll}
(i) $\quad \quad$&
$
(u-x,v-u)_{\mathcal N} \geq 0, \quad \forall v \in \Cspace_{\rm ad} ,
$
\\
(ii) &
$
\| x-u \|_{\mathcal N} = \inf\limits_{v \in \Cspace_{\rm ad}} \| x- v \|_{\mathcal N} ,
$
\\
(iii) &
$u = \Pi_{\rm ad}^{\mathcal N} x$, where
$\Pi_{\rm ad}^{\mathcal N}: \Cspace \rightarrow \Cspace_{\rm ad}$ is a projection.
\end{tabular}
\]
\end{Proposition}
\begin{proof}
Assume (i). The identity
\[
\| x - v \|_{\mathcal N}^2 - \| x - u \|_{\mathcal N}^2 =
\| u - v \|_{\mathcal N}^2 + 2 (u-x,v-u)_{\mathcal N} \geq 0
\]
yields $\| x - u \|_{\mathcal N} \leq \| v - x \|_{\mathcal N}$
for arbitrary $v \in \Cspace_{\rm ad}$,
i.e., (ii).
Assume (ii). Let $v \in \Cspace_{\rm ad}$ be arbitrary
and $t \in (0,1)$, then by the convexity of $\Cspace_{\rm ad}$
\[
\| x - u \|_{\mathcal N}^2 \leq \| x - ((1-t)u + tv) \|_{\mathcal N}^2
= \| (x - u ) + t(u - v) \|_{\mathcal N}^2 .
\]
Expanding the right-hand side gives
$0 \leq 2 t (x-u,u-v)_{\mathcal N} + t^2 \| u - v \|_{\mathcal N}^2$;
dividing by $t$ and letting $t \rightarrow 0$ yields (i).
Conditions (ii) and (iii) are equivalent by definition.
\end{proof}
Proposition \ref{pr:proj} and (\ref{eq:nec:const}) yield the so-called projection condition
\begin{equation} \label{eq:nec:proj}
u = \Pi_{\rm ad}^{\mathcal N}
\left( u^d - \mathcal N^{-1} \mathcal I_\Cspace^{-1} \mathcal B^*(y(u)-y^d) \right) .
\end{equation}
\begin{Remark} \label{re:Nid}
A typical choice is $\mathcal N = \alpha {\rm Id}$, where $\alpha > 0$ and ${\rm Id}$ denotes
the identity mapping.
Then (\ref{eq:nec:proj}) becomes
\[
u = \Pi_{\rm ad}
\left( u^d - \tfrac{1}{\alpha} \mathcal I_\Cspace^{-1} \mathcal B^*(y(u)-y^d) \right) .
\]
\end{Remark}
\begin{Remark} \label{re:nec:linear}
If $\Cspace_{\rm ad} = \Cspace$, then $\Pi_{\rm ad}^{\mathcal N} = {\rm Id}$ and
(\ref{eq:nec:proj}) reduces to
\begin{equation} \label{eq:nec:unconst}
u = u^d - \mathcal N^{-1} \mathcal I_\Cspace^{-1} \mathcal B^*(y(u)-y^d) .
\end{equation}
Substituting (\ref{eq:nec:unconst}) into (\ref{eq:state:weak})
yields the following
linear problem: Find $y(u) \in \Sspace_0$ satisfying
\begin{multline} \label{eq:gen:linear}
a(y(u),z) +
\langle \mathcal B \mathcal N^{-1} \mathcal I_\Cspace^{-1} \mathcal B^* y(u) , z \rangle_{\Sspace_0} \\
= \langle \ell + \mathcal B u^d ,z \rangle_{\Sspace_0} +
\langle \mathcal B \mathcal N^{-1} \mathcal I_\Cspace^{-1} \mathcal B^*y^d , z \rangle_{\Sspace_0}
\quad \forall z \in \Sspace_0 .
\end{multline}
\end{Remark}
\section{Estimates}
\label{se:est}
\subsection{Estimates for the state equation}
The solution $y(v) \in \Sspace_0$ of
(\ref{eq:state:weak}) minimizes a quadratic
energy functional (see, e.g., \cite[Chapter I, Theorem 1.2 and
Remark 1.5]{Lions1971} ), i.e.,
\begin{equation} \label{eq:state:var}
E(y(v)) \leq E( q )
:= \NT q \NT^2 - 2 \langle \ell + \mathcal B v, q \rangle_{\Sspace_0} ,
\quad \forall q \in \Sspace_0 .
\end{equation}
The benefit of measuring $y(v)-y^d$ in the
$\NT \cdot \NT$-norm in (\ref{eq:cost})
(instead of, e.g., $\| \cdot \|_{\LSspace}$-norm) is due to the following
results (Theorem \ref{th:Mikh} is due to \cite{Mikhlin1964} and generalized in
\cite{Repin2008}).
\begin{Theorem} \label{th:Mikh}
Let $y(v)$ be the solution of
(\ref{eq:state:var}) and $z \in \Sspace_0$ be
arbitrary, then
\begin{equation} \label{eq:Mikh}
\NT y(v) - z \NT^2
= E(z) - E(y(v)) .
\end{equation}
\end{Theorem}
\begin{proof}
By (\ref{eq:state:weak}),
\begin{align*}
\NT y(v) - z \NT^2 & =
\NT y(v) \NT^2 - 2 a( y(v) , z )
+ \NT z \NT^2 \\
& \quad - 2 \left( a( y(v), y(v)) - \langle \ell+\mathcal B v,y(v)\rangle_{\Sspace_0} \right) \\
& =
- \NT y(v) \NT^2 - 2 \langle \ell + \mathcal B v,z \rangle_{\Sspace_0}
+ \NT z \NT^2 + 2 \langle \ell + \mathcal B v,y(v) \rangle_{\Sspace_0} \\
& =
E (z) - E (y(v)) .
\end{align*}
\end{proof}
\begin{Theorem} \label{th:state:est}
Let $y(v)$ be the solution of (\ref{eq:state:var}) and
$z \in \Sspace_0$ be arbitrary, then
\begin{equation*}
\sup\limits_{q \in \Sspace_0} \underline{M}^2(z,q,v)
= \NT y(v)- z \NT^2 =
\inf\limits_{\tau \in \Fspace \atop \beta > 0} \maj^2(z,\tau,\beta,v) ,
\end{equation*}
where
\begin{equation} \label{eq:minor}
\underline{M}^2(z,q,v) := E(z) - E(q)
\end{equation}
and
\begin{equation} \label{eq:major}
\maj^2(z,\tau,\beta,v) := (1+\beta) \| \tau - \mathcal A \Lambda z \|_{\mathcal A^{-1}}^2 +
\frac{1+\beta}{\beta} \NNN \Lambda^* \tau - \mathcal B v - \ell \NNN^2 .
\end{equation}
\end{Theorem}
\begin{proof}
$\underline{M}^2$ is obtained directly from (\ref{eq:state:var}) and (\ref{eq:Mikh}). For $\maj^2$,
see, e.g., \cite[Chap. 6, (6.2.3)]{NeittaanmakiRepin2004},
\cite[Chap. 7, (7.1.19)]{Repin2008}. Upper bounds of more general
type have been presented already in \cite{Repin1997,Repin2000}.
\end{proof}
\begin{Remark} \label{Re:winfyinf}
It is easy to confirm that the supremum over $\underline{M}^2$ is obtained at $q = y(v)$ and
the infimum over $\maj^2$ is attained
at $\tau = \mathcal A \Lambda y(v)$ and $\beta \rightarrow 0$.
\end{Remark}
\subsection{Estimates for the cost functional}
Applying Theorem \ref{th:state:est} to the first term of (\ref{eq:cost}), leads to
two-sided bounds for $J(v)$. These bounds are guaranteed, have no gap,
and do not depend on $y(v)$, i.e., they do not require the solution of the state equation.
\begin{Theorem} \label{th:costest}
For any $v \in \Cspace$,
\begin{equation} \label{eq:Jbounds}
\sup\limits_{q \in \Sspace_0} \underline{J}(v,q)
=
J(v)
=
\inf\limits_{\tau \in \Fspace \atop \beta>0} \overline{J}(v,\tau,\beta),
\end{equation}
where
\begin{equation} \label{eq:defcostlow}
\underline{J}(v,q) := \underline{M}^2(y^d,q,v) + \| v - u^d \|_{\mathcal N}^2
\end{equation}
and
\begin{equation} \label{eq:defcostup}
\overline{J}(v,\tau,\beta) := \maj^2(y^d,\tau,\beta,v)
+ \| v - u^d \|_{\mathcal N}^2 .
\end{equation}
\end{Theorem}
Theorem \ref{th:costest} can be used to estimate $J(u)$. By (\ref{eq:ocp}) and
(\ref{eq:Jbounds}),
\begin{equation} \label{eq:Jubounds}
\inf\limits_{v \in \Cspace_{\rm ad}} \underline{J}(v,q)
\leq
J(u)
\leq
\overline{J}(v,\tau,\beta), \quad
\forall \, q \in \Sspace_0, \, v \in \Cspace_{\rm ad}, \, \tau \in \Fspace, \; \beta > 0 ,
\end{equation}
where all inequalities hold as equalities if $v=u$, $q=y(u)$,
$\tau = \mathcal A \Lambda y(u)$, and $\beta \rightarrow 0$.
In view of
(\ref{eq:Jubounds}), it is very important that
the minimizer of $\underline{J}(v,q)$ over $v \in \mathcal \Cspace_{\rm ad}$ can be
explicitly computed.
Computation of the minimizers of $\overline{J}$ requires further assumptions on the structure
of the problem (cf. Propositions \ref{pr:ex1:Jupmin} and \ref{pr:ex2:Jupmin}).
\begin{Proposition} \label{pr:Jlowmin}
For all $v \in \Cspace_{\rm ad}$ and $q \in \Sspace_0$,
\begin{align}
\label{eq:Jlow:vmin}
\underline{J}(\hat v(q), q) & = \inf\limits_{v \in \Cspace_{\rm ad}} \underline{J}(v,q), \\
\nonumber
\underline{J}(v, \hat q(v)) & = \sup\limits_{q \in \Sspace_0} \underline{J}(v,q),
\end{align}
where $\hat q(v) = y(v)$ (from (\ref{eq:state:weak})) and
\begin{equation} \label{eq:vhatdef}
\hat v (q) := \Pi_{\rm ad}^{\mathcal N}
\left( u^d + \mathcal N^{-1} \mathcal B^*(y^d-q) \right) .
\end{equation}
\end{Proposition}
\begin{proof}
The condition $\hat q(v) = y(v)$ follows directly from Remark \ref{Re:winfyinf}.
By (\ref{eq:state:var}), (\ref{eq:minor}), and (\ref{eq:defcostlow}),
$\underline{J}$ has the following form
\begin{multline*}
\underline{J}(v,q) =
\NT y^d \NT^2 - 2 \langle \ell,y^d \rangle
- \NT q \NT^2 + 2 \langle \ell,q \rangle + 2\langle B v,q-y^d \rangle_{\Sspace_0}
+ \| v - u^d \|_{\mathcal N}^2 \\
= \| v \|_{\mathcal N}^2 - 2 (v , u^d)_{\mathcal N} - 2 \langle \mathcal B v,y^d \rangle_{\Sspace_0}
- \NT q \NT^2 + 2 \langle \ell,q \rangle + 2 \langle \mathcal B v,q \rangle_{\Sspace_0} + {\rm const.}
\end{multline*}
Clearly, it is quadratic w.r.t
$v$ and the minimizer $\hat v \in \Cspace_{\rm ad}$ is identified
by the following variational inequality
(see, e.g., \cite[Chap. I, Th. 1.2]{Lions1971}
):
\[
(\hat v,v-\hat v)_{\mathcal N} \geq (v-\hat v,u^d)_{\mathcal N}
+\langle {\mathcal B} (v-\hat v) , y^d - q \rangle_{\Sspace_0},
\quad \forall v \in \mathcal \Cspace_{\rm ad} .
\]
Reorganizing and using (\ref{eq:Badj}) yields
\begin{equation*}
\left( \hat v-u^d+ \mathcal N^{-1} {\mathcal B}^*(q-y^d), v - \hat v \right )_{\mathcal N} \geq 0 ,
\quad \forall v \in \mathcal \Cspace_{\rm ad} ,
\end{equation*}
and Proposition \ref{pr:proj} yields (\ref{eq:vhatdef}).
\end{proof}
\begin{Remark} \label{re:boundinfo}
By (\ref{eq:Jbounds}) and (\ref{eq:Jubounds}), $\overline{J}(v,\tau,\beta)$ is an upper
bound of $J(u)$ for all $v \in \Cspace_{\rm ad}$, $\tau \in \Fspace$, and $\beta>0$, and
$\underline{J}(v,q)$ is a lower bound for $J(v)$ for all $q \in \Sspace_0$,
but it is guaranteed to be a lower bound of $J(u)$ only for $v=\hat v(q)$ (see (\ref{eq:vhatdef})).
\end{Remark}
\begin{Remark} \label{re:costlowreform}
Lower bound $\underline{J}$ generates a saddle point formulation for the original
optimal control problem (\ref{eq:ocp}). Find
$(\tilde v, \tilde q)$ satisfying
\begin{equation} \label{eq:saddle:ineg}
\underline{J}(\tilde v,q) \leq \underline{J}(\tilde v,\tilde q) \leq \underline{J}(v,\tilde q),
\quad \forall v \in \Cspace_{\rm ad} , q \in \Sspace_0 .
\end{equation}
Note that $\underline{J}$ is convex,
lower semi-continuous,
and coercive w.r.t. $v$ and concave,
upper semi-continuous, and anti-coercive
w.r.t $q$, $\Cspace_{\rm ad}$ is convex, closed, and non-empty, and
$\Sspace_0$ is convex, closed, and non-empty. Thus,
the solution of (\ref{eq:saddle:ineg})
exists and is unique (see, e.g., \cite[Chap. VI, Pr. 2.4]{EkelandTemam1976}).
By Remark \ref{Re:winfyinf}, $\tilde v = u$ and $\tilde q = y(u)$. Moreover,
$\hat v(y(u)) = u$, where $\hat v$ is defined in (\ref{eq:vhatdef}).
The left and right-hand-side of (\ref{eq:saddle:ineg})
yield (\ref{eq:state:var}) and (\ref{eq:nec:const}) (i.e., necessary conditions
(\ref{eq:state:weak}) and (\ref{eq:nec:der})), respectively.
\end{Remark}
\begin{Remark} \label{re:costupmin}
By (\ref{eq:Jubounds}),
$J(u) \leq J(v) \leq \overline{J}(v,\tau,\beta)$ and it is easy to see that
$J(u)=\lim\limits_{\beta \rightarrow 0} \overline{J}(u,\mathcal A\Lambda y(u),\beta)$.
Thus, the upper bound generates a minimization problem
\[
\overline{J}(u,\mathcal A \Lambda y(u),0) =
\min\limits_{v \in \Cspace_{\rm ad}, \tau \in \Fspace \atop \beta > 0}
\overline{J}(v,\tau,\beta) ,
\]
where the constraint related to (\ref{eq:state:weak}) does not appear.
\end{Remark}
\subsection{Estimates for an error quantity}
The following identity can be viewed as an analog of (\ref{eq:state:var})
for the optimal control problem.
\begin{Theorem} \label{th:gen:mikh}
For any $v \in \Cspace_{\rm ad}$,
\begin{equation} \label{eq:cont:equi}
\NT y(v) - y(u) \NT^2 + \| v - u \|_{\mathcal N}^2
+ \langle J'(u) , v-u \rangle_\Cspace = J(v) - J(u) .
\end{equation}
\end{Theorem}
\begin{proof}
We have,
\begin{multline*}
J(v)-J(u)
=
\NT y(v) - y(u) \NT^2 + 2a(y(v)-y(u),y(u)-y^d) \\
+ \| v - u \|_{\mathcal N}^2 + 2( v-u, u- u^d )_{\mathcal N} .
\end{multline*}
By (\ref{eq:state:weak}) and (\ref{eq:Jderiv}),
\[
a(y(v)-y(u),y(u)-y^d) = \langle \mathcal B (v-u), y(u)-y^d \rangle_{\Sspace_0}
\]
and
\[
2 a(y(v)-y(u),y(u)-y^d)+ 2 ( v-u, u- u^d )_{\mathcal N}
= \langle J'(u) , v-u \rangle_\Cspace.
\]
\end{proof}
\begin{Remark}
If $\Cspace_{\rm ad} = \Cspace$, then $\langle J'(u) , v \rangle_\Cspace = 0$, for all $v \in \Cspace$ and
(\ref{eq:cont:equi}) reduces to \cite[Ch. 9, Th. 9.14]{Repin2008}.
\end{Remark}
Equality (\ref{eq:cont:equi}) shows that it is reasonable to include $\langle J'(u) , v-u \rangle_\Cspace$
in the applied error measure.
By (\ref{eq:nec:der}), $\langle J'(u) , v-u \rangle_\Cspace$ is non-negative for any
$v \in \Cspace_{\rm ad}$; it is convex in $v$ and vanishes
if $v=u$. Thus, the error measure is
\begin{equation} \label{eq:errdef}
{\rm err}^2( v ) :=
\NT y(v) - y(u) \NT^2 + \| v - u \|_{\mathcal N}^2 + \langle J'(u) , v-u \rangle_\Cspace .
\end{equation}
The ``derivative weight'' guarantees that the sensitivity of the cost
functional at the optimal control is taken into account. Most importantly, ${\rm err}(v)$
can be estimated from both sides by computable functionals, which do not require
the knowledge of the optimal control $u$, the respective state $y(u)$, or the exact
state $y(v)$. Indeed, applying
(\ref{eq:Jbounds}), (\ref{eq:Jubounds}), and (\ref{eq:Jlow:vmin}) to
the right hand side of (\ref{eq:cont:equi}) yields the following theorem:
\begin{Theorem} \label{th:cont:est}
For any $v \in \Cspace_{\rm ad}$,
\begin{equation} \label{eq:cont:est}
\sup\limits_{q \in \Sspace_0, v_2 \in \Cspace_{\rm ad}, \atop \tau \in \Fspace, \beta > 0}
\underline{{\rm err}}^2(v,q,v_2,\tau,\beta)
= {\rm err}^2( v ) =
\inf\limits_{\tau_2 \in \Fspace, \beta_2>0, \atop q_2 \in \Sspace_0}
\overline{{\rm err}}^2(v,\tau_2,\beta_2,q_2),
\end{equation}
where
\begin{equation*}
\underline{{\rm err}}^2(v,q,v_2,\tau,\beta) :=
\underline{J}(v,q) - \overline{J}(v_2,\tau,\beta)
\end{equation*}
and
\begin{equation*}
\overline{{\rm err}}^2(v,\tau_2,\beta_2,q_2) :=
\overline{J}(v,\tau_2,\beta_2) - \underline{J}(\hat v(q_2),q_2) .
\end{equation*}
\end{Theorem}
\begin{Remark}
By Remark \ref{re:boundinfo}, (\ref{eq:defcostlow}), (\ref{eq:defcostup}), and
(\ref{eq:cont:equi}), the equality (\ref{eq:cont:est}) is attained at
\begin{equation*}
\underline{{\rm err}}^2(v,y(v),u,\mathcal A \Lambda y(u),0)
= {\rm err}^2( v ) =
\overline{{\rm err}}^2(v,\mathcal A \Lambda y(v),0,y(u)) .
\end{equation*}
\end{Remark}
\begin{Remark}
Obviously, $J(v)$ and ${\rm err}^2(v)$ are non-negative. However, e.g., the lower bound
$\underline{J}(\hat v(q_2),q_2)$ for
$J(u)$ may be negative if $q_2$ is not close enough to $y(u)$, and
$\underline{{\rm err}}^2(v,q,v_2,\tau,\beta)$ may be negative if $v_2$ is not ``good
enough'' in comparison with $v$, or if the upper bound $\overline{J}(v_2,\tau,\beta)$
is not sharp enough.
\end{Remark}
\section{Examples, algorithms and numerical tests}
\label{se:examples}
\subsection{Examples}
\label{se:ex}
In the following examples,
the domain $\Omega \subset \mathbb{R}^d$ is open, simply connected
and has a piecewise Lipschitz-continuous boundary $\Gamma$.
Spaces are
$\LSspace=L^2(\Omega)$, \mbox{$\Sspace = H^1(\Omega)$},
\mbox{$\LFspace = L^2(\Omega,\mathbb{R}^d)$}, and \mbox{$\Fspace = \Hdiv$}.
Operators are
\mbox{$\Lambda = \nabla$}, \linebreak \mbox{$\Lambda^* = -\dvg$},
\mbox{$\mathcal A = {\rm Id}$},
and $\mathcal N=\alpha \, {\rm Id}$ ($\alpha>0$). Then
$a(q,z):=(\nabla q, \nabla z)_{L^2(\Omega,\Rd)}$ and \linebreak
\mbox{$\NT w \NT = \| \nabla w \|_{L^2(\Omega,\Rd)}$}.
The examples differ only by the selection of $\Sspace_0$,
$\Cspace$, $\mathcal B$, and $\ell$.
\subsubsection{Dirichlet problem, distributed control}
Let $\Cspace:= L^2(\Omega)$,
$\Sspace_0 := H_0^1(\Omega)$, and
$\langle \ell,w \rangle=(f,w)_{L^2(\Omega)}$,
where $f \in L^2(\Omega)$.
Moreover, $\mathcal B={\rm Id}$, i.e.,
$\langle \mathcal B v, q \rangle = ( v, q)_{L^2(\Omega)} $.
The analog of (\ref{eq:gen:Fri}) is the Friedrichs
inequality
\[
\| q \|_{L^2(\Omega)} \leq c_\Omega \| \nabla q \|_{L^2(\Omega,\mathbb{R}^d)} ,
\quad \forall q \in \H1o .
\]
The cost functional (\ref{eq:cost}) is
\begin{equation}
\label{eq:ex1:cost}
J(v) := \| \nabla (y(v) - y^d) \|_{L^2(\Omega,\Rd)}^2
+ \alpha \| v - u^d \|_{L^2(\Omega)}^2 .
\end{equation}
The state equation (\ref{eq:state:weak}) is
\begin{equation} \label{eq:ex1:state}
(\nabla y(v), \nabla z)_{L^2(\Omega,\Rd)} =
( f+v, z)_{L^2(\Omega)}, \quad \forall z \in \H1o
\end{equation}
and it has the classical form
\begin{equation*}
\left\{
\begin{array}{rclr}
- \Delta y(v) & = & f + v \quad & \textrm{a.e. in } \Omega, \\
y(v) & = & 0 \quad & \textrm{on } \Gamma .
\end{array}
\right.
\end{equation*}
The majorant (\ref{eq:major}) is
\begin{equation*}
\maj^2(z,\tau,\beta,v) = (1+\beta) \| \tau - \nabla z \|_{L^2(\Omega,\mathbb{R}^d)}^2 +
\frac{1+\beta}{\beta} c_\Omega^2 \| \dvg \tau + f + v \|_{L^2(\Omega)}^2 .
\end{equation*}
The counterpart of Proposition \ref{pr:Jlowmin} is given below.
\begin{Proposition} \label{pr:ex1:Jupmin}
For all $v \in \Cspace_{\rm ad}$, $\tau \in \Hdiv$, and $\beta > 0$
\begin{align*}
\overline{J}(\hat v(\tau,\beta),\tau,\beta) & = \inf\limits_{v \in \Cspace_{\rm ad}} \overline{J}(v,\tau,\beta) , \\
\overline{J}(v,\hat \tau(v,\beta),\beta) & = \inf\limits_{\tau \in \Hdiv} \overline{J}(v,\tau,\beta) , \\
\overline{J}(v,\tau,\hat \beta(v,\tau)) & = \inf\limits_{\beta > 0} \overline{J}(v,\tau,\beta) ,
\end{align*}
where
\begin{equation} \label{eq:ex1:hatvdef}
\hat v(\tau,\beta) = \Pi_{\rm ad} \left(
\frac{\alpha \beta \, u^d - (1+\beta) c_\Omega^2 \, (\dvg \tau + f)}
{\alpha \beta + (1+\beta) c_\Omega^2}
\right) ,
\end{equation}
$\hat \tau := \hat \tau (v,\beta)$ satisfies
\begin{multline} \label{eq:ex1:hattaudef}
\beta (\hat \tau, \xi)_{L^2(\Omega,\mathbb{R}^d)} +
c_\Omega^2 (\dvg \hat \tau, \dvg \xi )_{L^2(\Omega)} \\
=
\beta (\nabla y^d, \xi )_{L^2(\Omega,\mathbb{R}^d)} -
c_\Omega^2 (f+v, \dvg \xi )_{L^2(\Omega)},
\quad \forall \xi \in \Hdiv ,
\end{multline}
and
\begin{equation} \label{eq:ex1:hatbetadef}
\hat \beta(v,\tau) = \frac{c_\Omega \| \dvg \tau + f + v \|_{L^2(\Omega)}}
{\| \tau - \nabla y^d \|_{L^2(\Omega,\mathbb{R}^d)}} .
\end{equation}
\end{Proposition}
\begin{proof}
The upper bound $\overline{J}$ can be rewritten as follows,
\begin{multline*}
\overline{J}(v,\tau,\beta) =
(1+\beta) \| \tau - \nabla y^d \|_{L^2(\Omega,\mathbb{R}^d)}^2 +
\tfrac{1+\beta}{\beta} c_\Omega^2 \| \dvg \tau + f + v \|_{L^2(\Omega)}^2 +
\alpha \| v -u^d \|^2_{L^2(\Omega)} \\
=
\left( \tfrac{1+\beta}{\beta} c_\Omega^2 + \alpha \right)
\| v \|_{L^2(\Omega)}^2
- 2 \left( \alpha u^d - \tfrac{1+\beta}{\beta} c_\Omega^2 (\dvg \tau + f), v \right)_{L^2(\Omega)}
+ \textrm{ const w.r.t } v .
\end{multline*}
Thus, the minimizer $\hat v \in \Cspace_{\rm ad}$ satisfies
\[
\left( \tfrac{1+\beta}{\beta} c_\Omega^2 + \alpha \right)
( \hat v , w - \hat v )_{L^2(\Omega)}
\geq
\left( \alpha u^d - \tfrac{1+\beta}{\beta} c_\Omega^2 (\dvg \tau + f), w - \hat v \right)_{L^2(\Omega)},
\; \forall w \in \Cspace_{\rm ad} .
\]
Dividing by $\tfrac{1+\beta}{\beta} c_\Omega^2 + \alpha$ and reorganizing leads to
\[
\left( \hat v - \frac{\alpha \beta \, u^d - (1+\beta) c_\Omega^2 (\dvg \tau + f)}
{\alpha \beta + (1+\beta) c_\Omega^2}, w - \hat v \right)_{L^2(\Omega)} \geq 0 ,
\quad \forall w \in \Cspace_{\rm ad} ,
\]
and Proposition \ref{pr:proj} yields (\ref{eq:ex1:hatvdef}).
Condition (\ref{eq:ex1:hattaudef}) can be
easily derived, since $\maj^2$ is quadratic w.r.t. \linebreak
\mbox{$\tau \in \Hdiv$} and
(\ref{eq:ex1:hatbetadef}) results from solving a one-dimensional minimization
problem.
\end{proof}
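The system (\ref{eq:ex1:hattaudef}) is posed in $\Hdiv$ and is naturally discretized with Raviart--Thomas elements. The following legacy-FEniCS (DOLFIN) sketch performs one sweep of the minimization, i.e., it solves (\ref{eq:ex1:hattaudef}) for $\hat \tau$ and then updates $\beta$ by (\ref{eq:ex1:hatbetadef}); the mesh, the data, the control iterate, and the value of the Friedrichs constant (here the one of the unit square) are placeholders and not part of the analysis above.
\begin{verbatim}
# Sketch (legacy FEniCS/DOLFIN): one sweep of the upper-bound minimization.
# Mesh, data, control iterate and the Friedrichs constant are placeholders.
import math
from dolfin import *

mesh = UnitSquareMesh(50, 50)
Q = FunctionSpace(mesh, "RT", 1)            # Raviart-Thomas subspace of H(div)
V = FunctionSpace(mesh, "CG", 1)

c_F  = 1.0/(math.sqrt(2.0)*math.pi)         # Friedrichs constant, unit square
beta = 1.0
f    = Expression("1.0", degree=2)          # placeholder data
y_d  = project(Expression("x[0]*x[1]", degree=2), V)
v    = Constant(0.0)                        # current control iterate

tau, xi = TrialFunction(Q), TestFunction(Q)
a_tau = beta*inner(tau, xi)*dx + c_F**2*div(tau)*div(xi)*dx
L_tau = beta*inner(grad(y_d), xi)*dx - c_F**2*(f + v)*div(xi)*dx

tau_h = Function(Q)
solve(a_tau == L_tau, tau_h)                # normal equations for tau-hat

num  = c_F*math.sqrt(assemble((div(tau_h) + f + v)**2*dx))
den  = math.sqrt(assemble(inner(tau_h - grad(y_d), tau_h - grad(y_d))*dx))
beta = num/den                              # optimal beta for this tau
\end{verbatim}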
The relation (\ref{eq:vhatdef}) becomes
\begin{equation} \label{eq:ex1:vminstep}
\hat v (q) = \Pi_{\rm ad} \left( u^d + \tfrac{1}{\alpha} (y^d-q) \right) ,
\end{equation}
where $\Pi_{\rm ad}:L^2(\Omega) \rightarrow \Cspace_{\rm ad}$ is
a projection.
\begin{Example} \label{re:ex1:linear}
If $\Cspace_{\rm ad} = L^2(\Omega)$, then by
(\ref{eq:gen:linear}) $y(u) \in H_0^1(\Omega)$ satisfies
\begin{multline} \label{eq:ex1:linear}
(\nabla y(u) , \nabla z)_{L^2(\Omega,\Rd)}
+ \tfrac{1}{\alpha} (y(u),z)_{L^2(\Omega)} \\
=
( f + \tfrac{1}{\alpha} y^d + u^d,z )_{L^2(\Omega)}, \quad
\forall z \in \H1o
\end{multline}
and $\Pi_{\rm ad} = {\rm Id}$ in (\ref{eq:ex1:hatvdef}) and (\ref{eq:ex1:vminstep}).
\end{Example}
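As an illustration (not part of the analysis above), the unconstrained problem (\ref{eq:ex1:linear}) can be solved directly with any conforming method. The following sketch uses legacy FEniCS (DOLFIN) with linear Lagrange elements; the data $f$, $y^d$, $u^d$ and the value of $\alpha$ below are placeholders to be replaced by the actual problem data.
\begin{verbatim}
# Sketch (legacy FEniCS/DOLFIN): solve the unconstrained optimality system
# and recover the optimal control; all data below are placeholders.
from dolfin import *

mesh = UnitSquareMesh(50, 50)
V = FunctionSpace(mesh, "CG", 1)                  # conforming subspace of H^1_0
bc = DirichletBC(V, Constant(0.0), "on_boundary")

alpha = 0.05
f   = Expression("1.0", degree=2)                 # placeholder data
y_d = Expression("x[0]*x[1]", degree=2)
u_d = Constant(0.0)

y, z = TrialFunction(V), TestFunction(V)
a = inner(grad(y), grad(z))*dx + (1.0/alpha)*y*z*dx
L = (f + (1.0/alpha)*y_d + u_d)*z*dx

y_u = Function(V)
solve(a == L, y_u, bc)

# optimal control from u = u^d - (1/alpha)(y(u) - y^d)
u_opt = project(u_d - (1.0/alpha)*(y_u - y_d), FunctionSpace(mesh, "DG", 1))
\end{verbatim}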
\begin{Example} \label{re:ex1:Uad:loc}
Let
\begin{equation} \label{eq:ex1:Uad:loc}
\Cspace_{\rm ad} = \{ v \in L^2(\Omega) \, | \,
\psi_{-} \leq v \leq \psi_{+} \quad \textrm{a.e. in } \Omega \} ,
\end{equation}
then the projection operator
$\Pi_{\rm ad}:L^2(\Omega) \rightarrow \Cspace_{\rm ad}$
is
\[
\Pi_{\rm ad} v = \min \left\{ \psi_{+} , \max \left\{ \psi_{-} , v \right\} \right\} .
\]
\end{Example}
\begin{Example} \label{re:ex1:Uad:glo}
Let
\begin{equation*}
\Cspace_{\rm ad} = \{ v \in L^2(\Omega) \, | \,
\| v \|_{L^2(\Omega)} \leq M \} ,
\end{equation*}
then the projection operator
$\Pi_{\rm ad}:L^2(\Omega) \rightarrow \Cspace_{\rm ad}$
is
\[
\Pi_{\rm ad} v = \left\{ \begin{array}{ll}
\frac{M v}{\| v \|_{L^2(\Omega)}} \quad & \textrm{ if } \| v \|_{L^2(\Omega)} > M , \\
v \quad & \textrm{ else }
\end{array} \right.
\]
\end{Example}
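Both projections above are cheap to apply to a discrete control: the box projection acts pointwise on the degrees of freedom, and the $L^2$-ball projection is a global rescaling. A small NumPy sketch (acting on a vector of control values; the quadrature weights needed for the $L^2$ norm are assumed to be provided by the discretization) reads as follows.
\begin{verbatim}
# Sketch: the two projection operators applied to a vector of control
# values; 'weights' are the quadrature/mass weights for the L^2 norm
# (assumed to be supplied by the discretization).
import numpy as np

def project_box(v, psi_minus, psi_plus):
    # pointwise projection onto {psi_- <= v <= psi_+}
    return np.minimum(psi_plus, np.maximum(psi_minus, v))

def project_l2_ball(v, M, weights):
    # projection onto {||v||_{L^2} <= M}: rescale if the norm exceeds M
    norm = np.sqrt(np.sum(weights * v**2))
    return v if norm <= M else (M / norm) * v
\end{verbatim}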
Finally, functional a posteriori error estimates for the problem
(\ref{eq:ex1:linear}) are
recalled (see, e.g.,
\cite[Ch. 4.2]{Repin2008} and \cite[Ch. 3.2]{MaliNeittaanmakiRepin2014}).
\begin{Theorem}
Let $y$ be the solution of (\ref{eq:ex1:linear}) and $z \in \H1o$, then
\[
\| \nabla(y-z) \|_{L^2(\Omega,\mathbb{R}^d)}^2 +
\tfrac{1}{\alpha} \| y-z \|_{L^2(\Omega)}^2 =
\inf\limits_{\tau \in \Hdiv, \beta > 0, \atop \nu \in L^2(\Omega,[0,1])}
\maj (z,\tau,\beta,\nu) ,
\]
where
\begin{multline*}
\maj (z,\tau,\beta,\nu) :=
(1+\beta) \| \nabla z - \tau \|_{L^2(\Omega,\mathbb{R}^d)}^2 \\
+ \tfrac{1+\beta}{\beta} c_\Omega^2 \| \nu \mathcal R(z,\tau) \|_{L^2(\Omega)}^2 +
\alpha \| (1-\nu) \mathcal R(z,\tau) \|_{L^2(\Omega)}^2
\end{multline*}
and
\[
\mathcal R(z,\tau) = \dvg \tau - \tfrac{1}{\alpha} z + f + \tfrac{1}{\alpha} y^d + u^d .
\]
\end{Theorem}
\subsubsection{Neumann problem, boundary control}
The boundary $\Gamma$ consists of two parts $\Gamma_{N} \cup \Gamma_D$, where
$\Gamma_D$ has a positive measure. By the trace theorem
there exists a bounded linear mapping \mbox{$\gamma: H^1(\Omega) \rightarrow L^2(\Gamma_{N})$},
\[
\| \gamma q \|_{L^2(\Gamma_{N})} \leq c \| q \|_{H^1(\Omega)} ,
\]
such that $\gamma v = v_{| \Gamma}$ for all $v \in C^1(\bar \Omega)$.
Let $\Cspace:= L^2(\Gamma_{N})$ and
\[
\Sspace_0 := V_0 :=
\{ w \in H^1(\Omega) \, | \, w \textrm{ has zero trace on } \Gamma_D \}.
\]
Moreover,
$\langle Bv,q \rangle = (v,\gamma q)_{L^2(\Gamma_{N})}$ and
$\langle \ell,q \rangle = (f,q)_{L^2(\Omega)} - (g,\gamma q)_{L^2(\Gamma_{N})}$,
where $f \in L^2(\Omega)$ and $g \in L^2(\Gamma_{N})$.
The cost functional (\ref{eq:cost}) is
\begin{equation*}
J(v) := \| \nabla (y(v) - y^d) \|_{L^2(\Omega,\Rd)}^2
+ \alpha \| v - u^d \|_{L^2(\Gamma_{N})}^2 ,
\end{equation*}
and the state equation (\ref{eq:state:weak}) is
\begin{equation*}
(\nabla y(v), \nabla q)_{L^2(\Omega,\mathbb{R}^d)} =
(f,q)_{L^2(\Omega)} + (g+v,\gamma q)_{L^2(\Gamma_{N})} ,
\quad \forall q \in V_0 .
\end{equation*}
It has the classical form
\[
\left\{
\begin{array}{rclr}
- \Delta y(v) & = & f \quad & \textrm{a.e. in } \Omega, \\
y(v) & = & 0 \quad & \textrm{on } \Gamma_D, \\
\tfrac{\partial y(v)}{\partial n} & = & g + v \quad & \textrm{on } \Gamma_{N} .
\end{array}
\right.
\]
The majorant (\ref{eq:major}) has the form (see, e.g., \cite[Sect. 4.1]{Repin2008}
for details)
\begin{multline*}
\maj^2(q,\tau,\beta,v) =
(1+\beta) \| \tau - \nabla q \|_{L^2(\Omega,\mathbb{R}^d)}^2 \\
+
\frac{1+\beta}{\beta} \left( c_{\Omega,2}^2 \| \dvg \tau + f \|_{L^2(\Omega)}^2 +
c_{\Gamma_N}^2 \| \tau \cdot n - (g + v) \|_{L^2(\Gamma_N)}^2 \right),
\end{multline*}
where constants satisfy
\[
\| q \|_{L^2(\Omega)} \leq c_{\Omega,2} \| \nabla q \|_{L^2(\Omega,\mathbb{R}^2)}
\quad \textrm{ and} \quad
\| q \|_{L^2(\Gamma_N)} \leq c_{\Gamma_N} \| \nabla q \|_{L^2(\Omega,\mathbb{R}^2)} ,
\quad \forall q \in V_0 .
\]
\begin{Proposition} \label{pr:ex2:Jupmin}
For all $q \in V_0$, $\tau \in \Hdiv$, and $\beta > 0$
\begin{align*}
\overline{J}(q,\hat \tau,\beta) & = \inf\limits_{\tau \in \Hdiv} \overline{J}(q,\tau,\beta) , \\
\overline{J}(q,\tau,\hat \beta) & = \inf\limits_{\beta > 0} \overline{J}(q,\tau,\beta) ,
\end{align*}
where $\hat \tau$ satisfies
\begin{multline*}
\beta (\hat \tau, \xi)_{L^2(\Omega,\mathbb{R}^d)} +
c_{\Omega,2}^2 (\dvg \hat \tau, \dvg \xi )_{L^2(\Omega)} +
c_{\Gamma_N}^2 (\hat \tau \cdot n, \xi \cdot n)_{L^2(\Gamma_{N})}
\\
=
\beta (\nabla q, \xi )_{L^2(\Omega,\mathbb{R}^d)} -
c_{\Omega,2}^2 (f, \dvg \xi )_{L^2(\Omega)} +
c_{\Gamma_N}^2 (g+v, \xi \cdot n)_{L^2(\Gamma_{N})}
,
\quad \forall \xi \in \Hdiv
\end{multline*}
and
\begin{equation*}
\hat \beta = \frac{\left( c_{\Omega,2}^2 \| \dvg \tau + f \|_{L^2(\Omega)}^2 +
c_{\Gamma_N}^2 \| \tau \cdot n - (g + v) \|_{L^2(\Gamma_N)}^2 \right)^{1/2}}
{\| \tau - \nabla q \|_{L^2(\Omega,\mathbb{R}^d)}} .
\end{equation*}
\end{Proposition}
\subsection{Algorithms}
\label{se:alg}
The results of Sect. \ref{se:est} give grounds for several
error estimation Algorithms. Note that the estimates in Theorems \ref{th:costest} and
\ref{th:cont:est} are valid for any
approximations from $\Cspace_{\rm ad}$. There is no need for Galerkin
orthogonality, extra regularity, or mesh dependent data. Thus they can
be combined with any existing numerical scheme, which generates approximations
of the optimal control (and/or state).
Computation of the derived estimates requires
some finite dimensional subspaces. Hereafter, assume that
$\Cspace_{\rm ad}^h\subset \Cspace_{\rm ad}$
$\Sspace_0^h \subset \Sspace_0$ and $\Fspace^h \subset \Fspace$ are given.
They can be generated, e.g., by finite elements or Fourier
series. The approximate solution of (\ref{eq:state:weak}) is
$y^h(v) \in \Sspace_0^h \subset \Sspace_0$ that satisfies
\begin{equation} \label{eq:state:num}
a(y^h(v),z) = \langle Bv + \ell , z \rangle_{\Sspace_0}, \quad \forall z \in \Sspace_0^h .
\end{equation}
\begin{Remark}
By Remark \ref{re:boundinfo}, the evaluation of
(the approximation of) $J(v)$ by computing $y^h(v)$ from (\ref{eq:state:num})
and
$J_h(v) := \NT y^h(v) - y^d \NT^2 + \| v- u^d \|_{\mathcal N}^2$ coincides with the
lower bound
$\underline{J}(v,y^h(v)) = \max\limits_{y \in \Sspace_0^h} \underline{J}(v,y)$.
\end{Remark}
The generation of the estimates for the cost function value $J(v)$ for a given
approximation $v \in \Cspace_{\rm ad}$ is depicted as Algorithm \ref{alg:costest}.
\begin{algorithm}[tb]
\caption{Generation of bounds for the cost functional value}
\label{alg:costest}
\begin{algorithmic}
\STATE {\bf input:}
$v \in \Cspace_{\rm ad}$ \COMMENT{approximation of the control}
$\Sspace_0^h$ \COMMENT{subspace for state},
$\Fspace^h$ \COMMENT{subspace for the flux of state},
$I_{\max}$ \COMMENT{maximum number of iterations},
$\varepsilon$ \COMMENT{stopping criteria}
\STATE {}
\STATE $y^h = \argmax\limits_{y \in \Sspace_0^h} \underline{J}(v,y)$
\COMMENT{compute $y^h(v)$ from (\ref{eq:state:num})}
\STATE $\hat v^h = \argmin\limits_{v \in \Cspace_{\rm ad}} \underline{J}(v,y^h)$
\COMMENT{compute $\hat v(y^h)$ by (\ref{eq:vhatdef})}
\STATE {}
\STATE $\beta^0 = 1$
\FOR{$\; k = 1 \;$ {\bf to} $\; I_{\max} \;$}
\STATE $\tau^k = \argmin\limits_{\tau \in \Fspace^h} \overline{J}(v,\tau,\beta^{k-1})$
\STATE $\beta^k = \argmin\limits_{\beta > 0} \overline{J}(v,\tau^k,\beta)$
\IF{
$
\tfrac{\overline{J}(v,\tau^{k-1},\beta^{k-1})-\overline{J}(v,\tau^{k},\beta^{k})}
{\overline{J}(v,\tau^{k},\beta^{k})} < \varepsilon
$
}
\STATE{\bf break}
\ENDIF
\ENDFOR
\STATE $\underline{J}_h(v) = \underline{J}(v,y^h)$
\STATE $\overline{J}_h(v) = \overline{J}(v,\tau^k,\beta^k)$
\STATE $\underline{J}_h(u) = \underline{J}(\hat v^h,y^h)$
\STATE {}
\STATE {\bf output:}
$\underline{J}_h(v)$ \COMMENT{lower bound for $J(v)$},
$\overline{J}_h(v)$ \COMMENT{upper bound for $J(v)$},
$\underline{J}_h(u)$ \COMMENT{lower bound for $J(u)$},
\end{algorithmic}
\end{algorithm}
In order to test the presented error estimates, a projected gradient method
(see, e.g., \cite{GruverSachs1981,Kelley1999})
is applied to generate a sequence of approximations. The method consists of line
searches along (anti)gradient directions, where all evaluated points are first projected
to the admissible set.
A projected gradient method with error estimates is depicted as Algorithm
\ref{alg:projgrad}.
\begin{algorithm}[tb]
\caption{Projected gradient method with guaranteed cost estimates}
\label{alg:projgrad}
\begin{algorithmic}
\STATE {\bf input:}
$v^0 \in \Cspace_{\rm ad}$ \COMMENT{initial approximation of the control}
$\Sspace_0^h$ \COMMENT{subspace for state},
$\Fspace^h$ \COMMENT{subspace for the flux of state},
$I_{\max}^{PG}$ \COMMENT{maximum number of iterations (projected gradient)},
$\varepsilon^{PG}$ \COMMENT{stopping criteria (projected gradient)}
$I_{\max}$ \COMMENT{maximum number of iterations ($\overline{J}$ minimization)},
$\varepsilon$ \COMMENT{stopping criteria ($\overline{J}$ minimization)}
\STATE {}
\FOR{$\; k = 0 \;$ {\bf to} $\; I_{\max}^{\rm PG}$}
\STATE
$
\left\{ \underline{J}_h(v^k), \, \overline{J}_h(v^k), \, \underline{J}_h^k(u) \right\} =
\textrm{GenerateCostEstimates}(v^k,\Sspace_0^h,\Fspace^h,I_{\max},\varepsilon)
$
\STATE
$
d^k = 2 \left( B^*(y^d - y(v^k)) + \mathcal N (u^d - v^k) \right)
$
\COMMENT{search direction}
\STATE
$
s^k = \argmin\limits_{\lambda \in [0,\lambda_{\max}]}
J_h \left( \Pi_{\rm ad} \left( v^k + \lambda d^k \right) \right)
$
\COMMENT{step length (golden section method)}
\STATE
$
v^{k+1} = \Pi_{\rm ad} \left( v^k + s^k d^k \right)
$
\COMMENT{update approximation}
\IF{
$
\tfrac{\| v^k - v^{k-1}\|}
{\| v^{k-1} \|} < \varepsilon^{\rm PG}
$
}
\STATE{\bf break}
\ENDIF
\ENDFOR
\STATE {}
\STATE {\bf output:}
$\left\{ v^k \right\}_{k=1}^N$ \COMMENT{sequence of approximations},
$\left\{ \underline{J}_h(v^k) \right\}_{k=1}^N$ \COMMENT{lower bounds for $J(v^n)$},
$\left\{ \overline{J}_h(v^k) \right\}_{k=1}^N$ \COMMENT{upper bounds for $J(v^n)$},
$\underline{J}_h^N(u)$ \COMMENT{lower bound for $J(u)$}
\end{algorithmic}
\end{algorithm}
At the beginning of every projected gradient step Algorithm \ref{alg:costest}
is used to generate approximations for the cost functional. After the execution
of Algorithm \ref{alg:projgrad} ($N$ iteration steps taken), the stored cost estimates
are used to generate two-sided estimates for ${\rm err}(v)$ (i.e., the
difference $J(v)-J(u)$) at each iteration step
($k=1,\dots,N$) as follows:
\begin{align*}
{\rm err}^2(v^k) & \geq \underline{J}_h(v^k) - \overline{J}_h(v^N) , \\
{\rm err}^2(v^k) & \leq \overline{J}_h(v^k) - \underline{J}_h^N(u) .
\end{align*}
Note that the iterate of the last step ($N$'th step) is used to generate
as accurate bounds as possible for $J(u)$.
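This post-processing amounts to simple arithmetic on the stored quantities; a plain Python sketch (with the per-iteration bounds collected in lists) is given below.
\begin{verbatim}
# Sketch: two-sided bounds for err^2(v^k) assembled from the stored
# cost bounds J_low[k], J_up[k] (k = 0..N) and the final lower bound
# J_low_u for J(u).
def error_bounds(J_low, J_up, J_low_u):
    N = len(J_low) - 1
    lower = [J_low[k] - J_up[N] for k in range(N + 1)]
    upper = [J_up[k] - J_low_u for k in range(N + 1)]
    return lower, upper
\end{verbatim}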
\subsection{Numerical tests}
\label{se:num}
Finite dimensional subspaces are generated by the finite element method
(see, e.g., \cite{Ciarlet1978}). In these tests, $\Cspace=L^2(\Omega)$, $\Sspace_0=\H1o$,
and $\Fspace=\Hdiv$. Subspaces $DG_h^p \subset L^2(\Omega)$, $V_h^p \subset \H1o$,
and ${\rm RT}_h^p \subset \Hdiv$
are generated by Discontinuous Galerkin elements, Lagrange elements, and
Raviart-Thomas elements, respectively. Superscripts $p$ denote the order of basis functions.
All the numerical tests were performed using FEniCS (see \cite[Ch. 3]{LoggMardalEtAl2012a}
for detailed descriptions of the applied elements and for additional references).
\begin{Example} \label{ex:dir:co:loc}
Let $\Omega = (0,1)^2$.
Consider the optimal control problem generated by (\ref{eq:ex1:cost}),
(\ref{eq:ex1:state}), and $\Cspace_{\rm ad}$ defined by
(\ref{eq:ex1:Uad:loc}), where $\psi_{-}(x_1,x_2) = -3$ and
$\psi_{+}(x_1,x_2) = 3$. Select
\begin{align*}
y(x_1,x_2) & = \sin(k_1 \pi x_1) \sin(k_2 \pi x_2), \\
y^d(x_1,x_2) & = \sin(k_1 \pi x_1) \sin(k_2 \pi x_2)
+ \beta \sin(m_1 \pi x_1) \sin(m_2 \pi x_2) , \\
u^d(x_1,x_2) & = 0 , \\
u(x_1,x_2) & =
\max\left\{ \psi_{-}(x_1,x_2) ,
\min\left\{ \psi_{+}(x_1,x_2) , \tfrac{\beta}{\alpha} \sin(m_1 \pi x_1) \sin(m_2 \pi x_2) \right\} \right\} , \\
f(x_1,x_2) & = \pi^2 (k_1^2+k_2^2) \sin(k_1 \pi x_1) \sin(k_2 \pi x_2) - u(x_1,x_2) ,
\end{align*}
where $k_1,k_2,m_1,m_2 \in \mathbb{Z}$ and $\beta \in \mathbb{R}$.
\end{Example}
In Example \ref{ex:dir:co:loc}, select $k_1=1$, $k_2=1$, $m_1=2$, $m_2=1$,
$\beta = 0.5$, and $\alpha = 0.05$. A mesh of 50$\times$50 cells divided into triangular
elements is used. Consider first linear elements, i.e., $p_1=p_2=p_3=1$; the
corresponding numbers of global degrees of freedom are ${\rm dim}({\rm DG}_h^1) = 15000$,
${\rm dim}({V}_h^1) = 2601$, and ${\rm dim}({\rm RT}_h^1) = 7600$.
The bounds generated by
Algorithm \ref{alg:projgrad} ($I_{\rm max}^{PG}=10$) are depicted in
Figure \ref{fig:ex:U1V1Q1}.
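For completeness, the corresponding subspaces can be set up in a few lines of legacy FEniCS (DOLFIN); the element family names below follow the FEniCS conventions, and the printed dimensions can be compared with the numbers quoted above.
\begin{verbatim}
# Sketch (legacy FEniCS/DOLFIN): subspaces used in the first test.
from dolfin import *

mesh = UnitSquareMesh(50, 50)               # 50 x 50 cells, split into triangles
U = FunctionSpace(mesh, "DG", 1)            # control subspace  DG_h^1
V = FunctionSpace(mesh, "CG", 1)            # state subspace    V_h^1
Q = FunctionSpace(mesh, "RT", 1)            # flux subspace     RT_h^1

print(U.dim(), V.dim(), Q.dim())            # 15000, 2601, 7600
\end{verbatim}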
\begin{figure}
\begin{center}
\begin{tabular}{l}
\includegraphics[scale=0.5]{fig_U1V1Q1_2_10iter.pdf} \\
\includegraphics[scale=0.5]{fig_U1V1Q1_3_10iter.pdf}
\end{tabular}
\caption{Estimates for the cost function value (top) and the error quantity (bottom),
where subspaces for control, state, and flux are ${\rm DG}_h^1$, ${V}_h^1$, and
${\rm RT}_h^1$, respectively.}
\label{fig:ex:U1V1Q1}
\end{center}
\end{figure}
If the order of approximation for the state and the flux is increased, i.e., the subspaces $\Sspace_h$
and $\Fspace_h$ are enriched, then the accuracy of the error bounds improves significantly
(see Fig. \ref{fig:ex:U1V2Q2}). Here ${\rm dim}({V}_h^2) = 10201$
and ${\rm dim}({\rm RT}_h^2) = 25200$.
\begin{figure}
\begin{center}
\begin{tabular}{l}
\includegraphics[scale=0.5]{fig_U1V2Q2_2_10iter.pdf} \\
\includegraphics[scale=0.5]{fig_U1V2Q2_3_10iter.pdf}
\end{tabular}
\caption{Estimates for the cost function value (top) and the error quantity (bottom),
where subspaces for control, state, and flux are ${\rm DG}_h^1$, ${V}_h^2$, and
${\rm RT}_h^2$, respectively.}
\label{fig:ex:U1V2Q2}
\end{center}
\end{figure}
In the previous examples, $J(v)$ and $J(u)$ (and the other integrals) were computed on a
uniformly refined mesh with 121 integration points in each triangle.
Obviously, a negative lower bound for the error can be rejected immediately. A sharp
lower bound requires a very good approximation of the optimal control, $v \approx u$, and of the
flux of the respective state, $\tau \approx \nabla y(u)$; then the upper
bound $J(u) \leq J(v) \leq \overline{J}(v,\tau,\beta)$ is very efficient. However, ten steps
of the projected gradient method do not provide a very accurate approximation.
It is a matter of further numerical tests to apply more efficient approximation methods
(see, e.g., \cite{ItoKunisch2008}) and to use the element-wise contributions of the
error estimates to generate adaptive sequences of subspaces.
\section{Introduction}
After standard recombination at $z \simeq 1100$, the universe is
considered to have been reionized at some redshift $z \gtrsim 5$, based on the
negative results of Gunn-Peterson tests in quasar spectra.
It has been pointed out that first stars can play an important role in
the reionization of the universe (e.g., Couchman \& Rees 1986).
This scenario has been investigated in detail using numerical
simulations (e.g., Gnedin \& Ostriker 1997)
or semi-analytical models (e.g., Fukugita \& Kawasaki 1994; Haiman \&
Loeb 1997).
In the latter models, it is crucial to know whether a cloud of mass $M$
that virializes at redshift $z$ can cool or not, i.e., if star
formation occurs or not.
Haiman, Thoul, \& Loeb (1996) investigated this problem (see also
Tegmark et al. 1997).
They found that molecular cooling plays a crucial role for clouds with $T_{\rm
vir} \lesssim 10^{4} {\rm K}$ (``small'' pregalactic clouds, hereafter) at
$10 \lesssim z_{\rm vir} \lesssim 100$, where $T_{\rm vir}$ and $z_{\rm
vir}$ are the virial temperature and the redshift at virialization
respectively, and determined the minimum mass of the virialized cloud
must have in order to cool in a Hubble time.
On the other hand, Haiman, Rees, \& Loeb (1997) pointed out that in the
presence of ultraviolet (UV) background radiation at the level needed to
ionize the universe, molecular hydrogen is photodissociated by far ultraviolet
(FUV) photons, whose radiation energy is less than the Lyman limit, in
small pregalactic clouds.
Thus, they asserted that molecular hydrogen in small pregalactic clouds
is universally destroyed long before the reionization of the universe.
In their reionization model, Haiman \& Loeb (1997) assumed that only
objects with virial temperature above $10^{4}$ K can cool in a Hubble
time, owing to atomic cooling and star formation occurs subsequently.
Recently, Ciardi, Ferrara, \& Abel (1999) found that the
photodissociated regions are not large enough to overlap at $z \simeq
20-30$.
In the same redshift range, the flux of FUV background is well below the
threshold required by Haiman et al. (1997) to prevent the collapse of the
clouds.
However, molecular hydrogen in a virialized cloud is photodissociated
not only by external FUV background radiation, but also by FUV photons
produced by massive stars within the cloud.
Here, we assess the negative feedback of massive star formation on
molecular hydrogen formation in a primordial cloud.
\section{Region of Influence of an OB star}
Around an OB star, hydrogen is photoionized, and an HII region is formed.
Ionizing photons hardly escape from the HII region, but photons whose
energies are below the Lyman limit can escape freely.
Such FUV photons photodissociate molecular hydrogen, and a
photodissociation region (PDR) is formed just outside the HII region.
In this section, we study how much mass in a primordial cloud is
affected by such FUV photons from an OB star and, as a result, becomes
unable to cool in a free-fall time owing to the lack of the coolant.
We consider a small pregalactic cloud of primordial composition.
In such an object, H$_{2}$ is formed mainly by the H$^{-}$ process at
$z \lesssim 100$:
\begin{eqnarray}
\label{eq:Hm1}
{\rm H}+e^{-} &\rightarrow & {\rm H^{-}}+\gamma ; \\
\label{eq:Hm2}
{\rm H}+{\rm H^{-}} &\rightarrow & {\rm H_{2}}+e^{-}.
\end{eqnarray}
At $z \gtrsim 100$, H$^{-}$ is predominantly photodissociated by CMB photons
before the reaction (\ref{eq:Hm2}) proceeds.
On the other hand, in a PDR, photodissociation of H$^{-}$ by UV
radiation from an OB star does not dominate the reaction (\ref{eq:Hm2})
except in the vicinity of the star.
We therefore neglect the photodissociation of H$^{-}$.
The rate-determining stage of the H$^{-}$ process is the reaction
(\ref{eq:Hm1}), whose rate coefficient $k_{\rm H^{-}}$ is (de Jong 1972)
\begin{equation}
k_{\rm H^{-}}=1.0 \times 10^{-18} T~{\rm s^{-1} cm^{3}}.
\end{equation}
In a PDR, H$_{2}$ is dissociated mainly via the two-step photodissociation
process:
\begin{equation}
{\rm H_2}+\gamma \rightarrow {\rm H_{2}^*} \rightarrow 2{\rm H},
\end{equation}
whose rate coefficient $k_{\rm 2step}$ is given by
(Kepner, Babul, \& Spergel 1997; Draine \& Bertoldi 1996)
\begin{equation}
k_{\rm 2step}=1.13 \times 10^{8} F_{\rm LW}~{\rm s^{-1}}.
\end{equation}
Here $F_{\rm LW}~({\rm ergs~s^{-1}cm^{-2}Hz^{-1}})$ is the averaged
radiation flux in the Lyman and Werner (LW) bands and can be written as
\begin{equation}
F_{\rm LW}=F_{\rm LW,ex} f_{\rm shield},
\end{equation}
where $F_{\rm LW,ex}$ is the incident flux into the PDR at
12.4 eV and the shielding factor $f_{\rm shield}$ is given by
(Draine \& Bertoldi 1996)
\begin{equation}
\label{eq:fsh}
f_{\rm shield}={\rm min} \left[ 1,(\frac{N_{\rm H_2}}{10^{14}})^{-0.75}
\right].
\end{equation}
The timescale in which the H$_{2}$ fraction reaches the equilibrium
value is given by
\begin{equation}
t_{\rm dis}=k_{\rm 2step}^{-1}.
\end{equation}
In the presence of FUV radiation, if the temperature and density were
fixed, the H$_{2}$ fraction $f$ initially would increase and reach the
equilibrium value for a temporal ionization fraction after $\sim t_{\rm
dis}$, and then it would decline as ionization fraction decreased as a
result of recombination.
Actually, if the pregalactic cloud can once produce a sufficient amount
of molecular hydrogen to cool in a free-fall time, the cloud can
collapse and star formation occurs subsequently.
Note that because of their low ionization degree, inverse Compton
cooling by CMB photons is not effective in small objects with $T_{\rm
vir} \lesssim 10^{4}$ K.
We investigate here how much mass around an OB star is affected by the
photodissociating FUV radiation from the star and becomes unable to cool in a
free-fall time.
In particular, we seek the lower bound of such mass.
The equilibrium number density of H$_{2}$ under ionization degree $x$ is
\begin{eqnarray}
\label{eq:neq}
n_{\rm H_2} &=& \frac{k_{\rm H^{-}}}{k_{\rm 2step}}x n^{2} \\
&=& 0.88 \times 10^{-26} x F_{\rm LW}^{-1} T n^{2}.
\end{eqnarray}
Near the star, $F_{\rm LW}$ is so large that the dissociation time
$t_{\rm dis}$ is smaller than the recombination time $t_{\rm
rec}=(k_{\rm rec} x_{\rm i} n)^{-1}$, where $k_{\rm rec}$ is the
recombination coefficient, $x_{\rm i}$ is the ionization degree at
virialization, and $n$ is the number density of hydrogen nuclei.
Then the chemical equilibrium between above processes is reached before
significant recombination proceeds.
Far distant from the star, the recombination proceeds before the
molecular fraction reaches the equilibrium value and the ionization
degree significantly diminishes.
However, since we are seeking how much mass is at least affected by the
photodissociating FUV radiation from the star, we use the equilibrium
value (\ref{eq:neq}) with initial ionization degree $x=x_{\rm i}$
as the H$_{2}$ number density.
Consider an OB star, which radiates at the rate of
$L_{\rm LW}$ [ergs s$^{-1}$ Hz$^{-1}$] in the LW bands.
For an O5 star, whose mass is 40 $M_{\sun}$, $L_{\rm LW} \simeq
10^{24}$ ergs s$^{-1}$ Hz$^{-1}$.
At the point whose distance from the star is $r$, the averaged flux in
the Lyman and Werner bands is approximately given by
\begin{equation}
F_{\rm LW}=\frac{L_{\rm LW}}{4 \pi r^{2}} f_{\rm shield}.
\end{equation}
Using above relations, we obtain the H$_2$ column density between the
star and the point whose distance from the star is $r$:
\begin{equation}
\label{eq:column}
N_{\rm H_2}= \left\{
\begin{array}{ll}
C V & \mbox{($N_{\rm H_2}<10^{14} {\rm cm^{-2}}$)} \\
10^{14}[0.25 (C V/10^{14}) + 0.75]^{4} &
\mbox{($N_{\rm H_2}>10^{14}{\rm cm^{-2}}$)}
\end{array}
\right.
\end{equation}
where
\begin{eqnarray}
C &=& 0.88 \times 10^{-26} x_{\rm i}L_{\rm LW}^{-1} T n^{2},\\
V &=& \frac{4 \pi}{3} r^{3}.
\end{eqnarray}
Here we have assumed $n=$const. in space, for simplicity.
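For reference, the closed-form column density above is straightforward to evaluate numerically; a short Python transcription (with the constant $C$, in cgs units, supplied by the caller) is given below.
\begin{verbatim}
# Sketch: H2 column density N_H2(r) in cm^-2 from the piecewise formula
# above (cgs units); C = 0.88e-26 * x_i * T * n**2 / L_LW is supplied
# by the caller.
import math

def column_density(r, C):
    CV = C * 4.0*math.pi/3.0 * r**3
    if CV < 1.0e14:                  # below the self-shielding threshold
        return CV
    return 1.0e14*(0.25*CV/1.0e14 + 0.75)**4
\end{verbatim}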
We define here the region of influence around a star as that where the
cooling time $t_{\rm cool}=\frac{(3/2)kT}{fn \Lambda_{\rm H_2}}$ becomes
larger than the free-fall time $t_{\rm ff}=(\frac{3 \pi \Omega_{\rm b}}
{32G m_{\rm p} n})^{1/2}$ as a result of the photodissociation of
molecular hydrogen, where $f=n_{\rm H_{2}}/n$ is the H$_{2}$ concentration,
$\Lambda_{\rm H_2}$ is the cooling function of molecular hydrogen, and
$\Omega_{\rm b}$ is the baryon mass fraction.
The condition $t_{\rm cool} > t_{\rm ff}$ is satisfied as long as the
H$_2$ fraction
\begin{eqnarray}
f < f^{\rm (cool)} &=& (\frac{24 G m_{\rm p}}{\pi})^{1/2}
\frac{kT}{\Omega_{\rm b}^{1/2} n^{1/2} \Lambda_{\rm H_2}}\\
&=& 1 \times 10^{-3} (\frac{n}{1 {\rm cm^{-3}}})^{-1/2} (\frac{T}{10^3
{\rm K}})^{-3} (\frac{\Omega_{\rm b}}{0.05})^{-1/2},
\end{eqnarray}
where we used our fit to the Martin, Schwarz,\& Mandy (1996) H$_{2}$
cooling function
\begin{equation}
\Lambda_{\rm H_2} \simeq 4 \times 10^{-25}
(\frac{T}{1000{\rm K}})^{4} {\rm ergs~s^{-1}cm^{3}}
\end{equation}
for the low temperature ($600{\rm K} \lesssim T \lesssim 3000{\rm K}$)
and low density ($n \lesssim 10^{4} {\rm cm^{-3}}$) regime.
The same condition leads to the condition on the averaged flux in the
LW bands with equation (\ref{eq:neq}):
\begin{eqnarray}
F_{\rm LW} > F_{\rm LW}^{\rm (cool)}
&=& \frac{k_{\rm H^{-}} x_{\rm i}n }{1.13 \times 10^{8} f^{(\rm cool)}}\\
&=& 0.7 \times 10^{-24} {\rm ergs~s^{-1} cm^{-2} Hz^{-1}}
(\frac{x_{\rm i}}{10^{-4}}) (\frac{n}{1 {\rm
cm^{-3}}})^{3/2} (\frac{T}{10^3 {\rm K}})^{4} (\frac{\Omega_{\rm
b}}{0.05})^{1/2}.
\end{eqnarray}
Corresponding to this critical LW flux $F_{\rm LW}^{\rm (cool)}$, a
critical radius $r^{\rm (cool)}$ is determined by the relation
\begin{equation}
F_{\rm LW}[r^{\rm (cool)}]=F_{\rm LW}^{\rm (cool)}.
\end{equation}
Actually, the timescale for molecular hydrogen to reach the equilibrium
value at $r=r^{\rm (cool)}$,
\begin{equation}
t_{\rm dis}[r^{\rm (cool)}]=4.0 \times 10^{8} {\rm yr}
(\frac{x_{\rm i}}{10^{-4}})^{-1}
(\frac{n}{1 {\rm cm^{-3}}})^{-3/2}
(\frac{T}{10^3 {\rm K}})^{-4}
(\frac{\Omega_{\rm b}}{0.05})^{-1/2},
\end{equation}
is longer than the lifetime of an OB star $t_{\rm OB} \sim 3 \times
10^{6} $ yr.
This means that a region as far out as $r=r^{\rm (cool)}$ is hardly affected
within the lifetime of a single massive star.
We define here another critical LW flux $F_{\rm LW}^{(\rm eq)}$
and its corresponding radius $r^{\rm (eq)}$ where the timescale for
molecular hydrogen to reach the equilibrium value $t_{\rm dis}$ becomes
equal to the lifetime of an OB star;
\begin{equation}
F_{\rm LW}^{(\rm eq)}=0.93 \times 10^{-22} {\rm ergs~s^{-1} cm^{-2} Hz^{-1}}
(\frac{t_{\rm OB}}{3 \times 10^6 {\rm yr}})^{-1},
\end{equation}
\begin{equation}
F_{\rm LW}[r^{\rm (eq)}]=F_{\rm LW}^{\rm (eq)}.
\end{equation}
In the region of influence, we require that two conditions be met: (1) the
cooling time for the equilibrium H$_{2}$ fraction is longer than
the free-fall time (i.e., $t_{\rm cool}[f=f^{\rm (eq)}]>t_{\rm
ff}$,where $f^{\rm (eq)}$ is the equilibrium H$_{2}$ fraction)
and (2) the equilibrium H$_{2}$ fraction is reached within the
lifetime of the central star (i.e., $t_{\rm dis}< t_{\rm OB}$).
Hence the radius of influence $r^{\rm (inf)}$ is determined by the
smaller one of either $r^{\rm (cool)}$ or $r^{\rm (eq)}$;
\begin{equation}
r^{\rm (inf)}={\rm min}[r^{\rm (cool)},r^{\rm (eq)}].
\end{equation}
Note that the LW flux $F_{\rm LW}$ at which $t_{\rm dis}$ becomes equal to
$t_{\rm rec}$ is $2 \times 10^{-24} (x_{\rm i}/10^{-4})(n/1 {\rm
cm^{-3}})(T/10^{3} {\rm K})^{-0.64} {\rm ergs~s^{-1} cm^{-2} Hz^{-1}}$.
Here we used the recombination coefficient $k_{\rm rec}=1.88 \times
10^{-10} T^{-0.64}~{\rm s^{-1} cm^{3}}$ (Hutchins 1976).
Therefore, even at the edge of the region of influence, the condition
$t_{\rm dis}<t_{\rm rec}$ is usually satisfied.
In this case, the chemical equilibrium value for H$_2$ is reached before
the ionization degree significantly decreases from the initial value
$x_{\rm i}$.
If the self-shielding of LW band photons could be
neglected, the radius $r^{\rm (cool)}$ would be given by
\begin{eqnarray}
\label{eq:rinf}
r^{({\rm cool})}=r^{({\rm cool})}_{\rm no-sh}
&=&[\frac{L_{\rm LW}}{4 \pi F_{\rm LW}^{\rm (cool)}}]^{1/2} \\
&=&3.4 \times 10^{23} {\rm cm} (\frac{x_{\rm i}}{10^{-4}})^{-1/2}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{1/2}
(\frac{T}{10^3 {\rm K}})^{-2} (\frac{n}{1 {\rm cm^{-3}}})^{-3/4}
(\frac{\Omega_{\rm b}}{0.05})^{-1/4}.
\end{eqnarray}
This expression is valid only when $N_{\rm H_{2}}<10^{14} {\rm
cm^{-2}}$.
When the H$_2$ column density $N_{\rm H_2}$ becomes larger than
$10^{14} {\rm cm^{-2}}$, self-shielding of LW band photons
begins as can be seen from equation (\ref{eq:fsh}).
Here, we define the shielding radius $r_{\rm sh}$ as the radius where $N_{\rm
H_2}(r_{\rm sh})=10^{14} {\rm cm^{-2}}$.
Using equation (\ref{eq:column}), the shielding radius is
\begin{eqnarray}
\label{eq:rsh}
r_{\rm sh}&=&[\frac{10^{14}}{(4 \pi/3) C}]^{1/3} \\
&=&3.0 \times 10^{21} {\rm cm} (\frac{x_{\rm i}}{10^{-4}})^{-1/3}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{1/3}
(\frac{T}{10^3 {\rm K}})^{-1/3} (\frac{n}{1 {\rm cm^{-3}}})^{-2/3}.
\end{eqnarray}
When the self-shielding becomes important, $N_{\rm H_2}$ increases and
$F_{\rm LW}$ decreases rapidly with $r$.
Then $r^{\rm (cool)}$ is not much larger than $r_{\rm sh}$.
In such a case, we put $r^{\rm (cool)}=r_{\rm sh}$ as a lower bound.
Then $r^{\rm (cool)}$ is given by the smaller of the values given by
equations (\ref{eq:rinf}) and (\ref{eq:rsh}).
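A minimal numerical sketch of this prescription, with the normalizations of equations (\ref{eq:rinf}) and (\ref{eq:rsh}) and fiducial parameters (assumptions made only for this illustration), is the following.
\begin{verbatim}
# Minimal sketch (cgs): r_cool = min(r_nosh, r_sh), eqs. (eq:rinf) and (eq:rsh).
# Fiducial parameters are assumptions made only for this illustration.
def r_cool(x_i=1e-4, L=1e24, T=1e3, n=1.0, Ob=0.05):
    r_nosh = 3.4e23 * (x_i/1e-4)**-0.5 * (L/1e24)**0.5 \
             * (T/1e3)**-2 * n**-0.75 * (Ob/0.05)**-0.25
    r_sh   = 3.0e21 * (x_i/1e-4)**(-1/3) * (L/1e24)**(1/3) \
             * (T/1e3)**(-1/3) * n**(-2/3)
    return min(r_nosh, r_sh)

print(f"r_cool = {r_cool():.2e} cm")          # shielding-limited: 3.0e21 cm here
\end{verbatim}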
In the same way as above, we can obtain the value of $r^{({\rm eq})}$:
\begin{equation}
r^{({\rm eq})}={\rm min}[r^{({\rm eq})}_{\rm no-sh},r_{\rm sh}],
\end{equation}
where
\begin{eqnarray}
\label{eq:req}
r^{({\rm eq})}_{\rm no-sh}
&=&[\frac{L_{\rm LW}}{4 \pi F_{\rm LW}^{\rm (eq)}}]^{1/2} \\
&=&2.9 \times 10^{22} {\rm cm}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{1/2}
(\frac{t_{\rm OB}}{3 \times 10^{6} {\rm yr}})^{1/2}.
\end{eqnarray}
The baryonic mass within the region of influence $M_{\rm b}^{\rm (inf)}$
is then
\begin{eqnarray}
M_{\rm b}^{\rm (inf)} &=& \frac{4 \pi}{3} n m_{\rm p} {r^{\rm (inf)}}^{3}\\
\label{eq:Minf}
&=&
{\rm min} \left\{
\begin{array}{ll}
1.4 \times 10^{14} M_{\sun} (\frac{x_{\rm i}}{10^{-4}})^{-3/2}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{3/2}
(\frac{T}{10^3 {\rm K}})^{-6} (\frac{n}{1 {\rm cm^{-3}}})^{-5/4}
(\frac{\Omega_{\rm b}}{0.05})^{-3/4} \\
0.85 \times 10^{11} M_{\sun}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{3/2}
(\frac{t_{\rm OB}}{3 \times 10^{6} {\rm yr}})^{3/2}
(\frac{n}{1 {\rm cm^{-3}}}) \\
1.0 \times 10^{8} M_{\sun} (\frac{x_{\rm i}}{10^{-4}})^{-1}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})
(\frac{T}{10^3 {\rm K}})^{-1} (\frac{n}{1 {\rm cm^{-3}}})^{-1}
\end{array}
\right\}
\end{eqnarray}
In equation (\ref{eq:Minf}), each expression corresponds to the case of
$r^{\rm (inf)}=r_{\rm no-sh}^{\rm (cool)}$, $r_{\rm no-sh}^{\rm (eq)}$,
and $r_{\rm sh}$ from the top to bottom, respectively.
We shall keep this order hereafter.
At first glance, the first expression in equation (\ref{eq:Minf}) seems
to be always larger than the others, but its stronger dependence on
temperature makes it the relevant one for objects of higher temperature
(i.e., more massive) than the normalization value.
From equation (\ref{eq:Minf}), we can see that the mass within a
region of influence of an O star already exceeds the scale of
the small pregalactic object.
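The three branches of equation (\ref{eq:Minf}) are easy to compare numerically; in the sketch below the fiducial parameters are assumptions made only for this illustration.
\begin{verbatim}
# Minimal sketch: the three branches of M_b^(inf) in eq. (eq:Minf), in solar masses.
# Fiducial parameters are assumptions made only for this illustration.
def M_inf(x_i=1e-4, L=1e24, T=1e3, n=1.0, Ob=0.05, t_OB=3e6):
    m1 = 1.4e14  * (x_i/1e-4)**-1.5 * (L/1e24)**1.5 * (T/1e3)**-6 \
         * n**-1.25 * (Ob/0.05)**-0.75
    m2 = 0.85e11 * (L/1e24)**1.5 * (t_OB/3e6)**1.5 * n
    m3 = 1.0e8   * (x_i/1e-4)**-1 * (L/1e24) * (T/1e3)**-1 / n
    return min(m1, m2, m3)

print(f"M_inf = {M_inf():.2e} Msun")          # ~1e8 Msun for the fiducial values
\end{verbatim}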
We have considered in this letter the regulation of star formation by
photodissociation of molecular hydrogen in a pregalactic cloud.
On the other hand, Lin \& Murray (1992) considered only the regulation by
photoionization.
In such a case, the mass affected by an OB star, namely the baryonic
mass within a Str\"{o}mgren sphere, is
\begin{eqnarray}
M_{\rm b}^{\rm (St)}&=&\frac{m_{\rm p} n Q_{\ast}}{k_{\rm rec} n^{2}}\\
&=& 3.7 \times 10^{3} M_{\sun} (\frac{T}{10^3 {\rm K}})^{0.64} (\frac{n}{1
{\rm cm^{-3}}})^{-1} (\frac{Q_{\ast}}{10^{49} {\rm s^{-1}}}),
\end{eqnarray}
where $Q_{\ast}$ is the ionizing photon emission rate of an OB star and
$Q_{\ast} \simeq 10^{49} {\rm s^{-1}}$ for an O5 star.
This is far smaller than our estimated mass of photodissociative
influence $M_{\rm b}^{\rm (inf)}$.
To be more specific in the cosmological context, we consider here
pregalactic clouds at virialization.
The number density at virialization is
\begin{equation}
\label{eq:nvir}
n_{\rm vir}=0.68 {\rm cm^{-3}} h_{50}^{2} (\frac{\Omega_{\rm
b}}{0.05})(\frac{1+z_{\rm vir}}{30})^{3},
\end{equation}
and the virial temperature is
\begin{equation}
\label{eq:Tvir}
T_{\rm vir}=6.8 \times 10^{2} {\rm K} h_{50}^{2/3} (\frac{\Omega_{\rm
b}}{0.05})^{-2/3} (\frac{M_{\rm b}}{10^{4}
M_{\sun}})^{2/3} ( \frac{1+z_{\rm vir}}{30}).
\end{equation}
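For later reference, the sketch below simply evaluates equations (\ref{eq:nvir}) and (\ref{eq:Tvir}); the fiducial parameters are assumptions made only for this illustration.
\begin{verbatim}
# Minimal sketch: density and virial temperature at virialization,
# eqs. (eq:nvir) and (eq:Tvir).  Fiducial parameters are illustrative assumptions.
def n_vir(h50=1.0, Ob=0.05, zvir=29.0):
    return 0.68 * h50**2 * (Ob/0.05) * ((1.0 + zvir)/30.0)**3            # [cm^-3]

def T_vir(Mb=1.0e4, h50=1.0, Ob=0.05, zvir=29.0):
    return 6.8e2 * h50**(2/3) * (Ob/0.05)**(-2/3) * (Mb/1e4)**(2/3) \
           * (1.0 + zvir)/30.0                                           # [K]

print(n_vir(), T_vir())                       # 0.68 cm^-3 and 6.8e2 K at z_vir = 29
\end{verbatim}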
Substituting equations (\ref{eq:nvir}) and (\ref{eq:Tvir}) into equation
(\ref{eq:Minf}),
we obtain
\begin{equation}
\label{eq:Minf2}
M_{\rm b}^{\rm (inf)}=
{\rm min} \left\{
\begin{array}{ll}
2.3 \times 10^{15} M_{\sun} (\frac{x_{\rm i}}{10^{-4}})^{-3/2}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{3/2} h_{50}^{-13/2}
(\frac{\Omega_{\rm b}}{0.05})^{2} (\frac{M_{\rm b}}{10^{4}
M_{\sun}})^{-4} ( \frac{1+z_{\rm vir}}{30})^{-39/4} \\
0.58 \times 10^{11} M_{\sun}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{3/2}
(\frac{t_{\rm OB}}{3 \times 10^{6} {\rm yr}})^{3/2} h_{50}^{2}
(\frac{\Omega_{\rm b}}{0.05})( \frac{1+z_{\rm vir}}{30})^{3} \\
2.2 \times 10^{8} M_{\sun} (\frac{x_{\rm i}}{10^{-4}})^{-1}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}}) h_{50}^{-8/3}
(\frac{\Omega_{\rm b}}{0.05})^{-1/3} (\frac{M_{\rm b}}{10^{4}
M_{\sun}})^{-2/3} ( \frac{1+z_{\rm vir}}{30})^{-4}
\end{array}
\right\}
\end{equation}
In order for the star formation to continue after a massive star forms,
the region of influence must be smaller than the original pregalactic
cloud.
Then $M_{\rm b}^{\rm (inf)}<M_{\rm b}$ is the necessary condition, which
leads to
\begin{equation}
\label{eq:cond}
M_{\rm b}>
{\rm min} \left\{
\begin{array}{ll}
1.9 \times 10^{6} M_{\sun}
(\frac{x_{\rm i}}{10^{-4}})^{-3/10}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{3/10}
h_{50}^{-13/10}
(\frac{\Omega_{\rm b}}{0.05})^{2/5}
( \frac{1+z_{\rm vir}}{30})^{-39/20}\\
0.58 \times 10^{11} M_{\sun}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{3/2}
(\frac{t_{\rm OB}}{3 \times 10^{6} {\rm yr}})^{3/2} h_{50}^{2}
(\frac{\Omega_{\rm b}}{0.05})( \frac{1+z_{\rm vir}}{30})^{3}\\
4.0 \times 10^{6} M_{\sun}
(\frac{x_{\rm i}}{10^{-4}})^{-3/5}
(\frac{L_{\rm LW}}{10^{24} {\rm ergs~s^{-1} Hz^{-1}}})^{3/5}
h_{50}^{-8/5}
(\frac{\Omega_{\rm b}}{0.05})^{-1/5}
( \frac{1+z_{\rm vir}}{30})^{-12/5}
\end{array}
\right\}.
\end{equation}
On the other hand, the baryonic mass of a pregalactic cloud that has virial
temperature $T_{\rm vir}$ is
\begin{equation}
\label{eq:Mb}
M_{\rm b}=1.8 \times 10^{4} M_{\sun}
(\frac{T_{\rm vir}}{1000 {\rm K}})^{3/2}
h_{50}^{-1}
(\frac{\Omega_{\rm b}}{0.05})
( \frac{1+z_{\rm vir}}{30})^{-3/2}.
\end{equation}
Comparing equations (\ref{eq:cond}) and (\ref{eq:Mb}), we can see that
for a small
pregalactic cloud ($T_{\rm vir}<10^{4}$K) the condition $M_{\rm b}^{\rm
(inf)}<M_{\rm b}$ is hardly satisfied in the redshift range of
$10 \lesssim z_{\rm vir} \lesssim 100$.
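The comparison can be made explicit numerically; the sketch below evaluates the smallest branch of equation (\ref{eq:cond}) against equation (\ref{eq:Mb}) at $T_{\rm vir}=10^{4}$ K over the relevant redshift range (fiducial parameters are assumptions made only for this illustration).
\begin{verbatim}
# Minimal sketch: threshold mass of eq. (eq:cond) (smallest branch) vs the cloud
# baryonic mass of eq. (eq:Mb) at T_vir = 1e4 K.  Fiducial parameters (x_i, L_LW,
# h50, Omega_b, t_OB) are assumptions made only for this illustration.
def M_threshold(zvir, x_i=1e-4, L=1e24, h50=1.0, Ob=0.05, t_OB=3e6):
    z = (1.0 + zvir) / 30.0
    m1 = 1.9e6   * (x_i/1e-4)**-0.3 * (L/1e24)**0.3 * h50**-1.3 \
         * (Ob/0.05)**0.4 * z**(-39/20)
    m2 = 0.58e11 * (L/1e24)**1.5 * (t_OB/3e6)**1.5 * h50**2 * (Ob/0.05) * z**3
    m3 = 4.0e6   * (x_i/1e-4)**-0.6 * (L/1e24)**0.6 * h50**-1.6 \
         * (Ob/0.05)**-0.2 * z**(-12/5)
    return min(m1, m2, m3)

def M_cloud(zvir, Tvir=1.0e4, h50=1.0, Ob=0.05):
    return 1.8e4 * (Tvir/1e3)**1.5 / h50 * (Ob/0.05) * ((1.0 + zvir)/30.0)**-1.5

for z in (10, 30, 100):
    print(z, f"M_b = {M_cloud(z):.2e}", f"threshold = {M_threshold(z):.2e}")
    # the threshold exceeds M_b at every redshift shown
\end{verbatim}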
This indicates that FUV radiation from one or a few OB stars prohibits the
whole small pregalactic cloud from H$_{2}$ cooling and quenches
subsequent star formation in it.
After the death of the first OB star, star formation could occur
somewhere in the cloud, and another OB star could form successively.
Thereafter, some massive stars might form one after another, but only
a few could co-exist simultaneously, as we have shown above.
The timescale for reformation of OB stars depends on that for
${\rm H_2}$ replenishment after the death of the dissociating OB star,
which is
\begin{eqnarray}
t_{\rm rep}&=&\frac{f^{(\rm cool)}}{k_{\rm H^{-}} x_{\rm e} n} \\
&=&3 \times 10^{8} {\rm yr}
(\frac{n}{1 {\rm cm^{-3}}})^{-3/2}
(\frac{T}{10^3 {\rm K}})^{-4}
(\frac{x_{\rm e}}{10^{-4}})^{-1}
(\frac{\Omega_{\rm b}}{0.05})^{-1/2}.
\end{eqnarray}
If the ionization degree is as high as unity, which is typical in an HII
region, this timescale can be very short, namely, $t_{\rm rep} \simeq 3
\times 10^{4}$ yr.
If the timescale for reformation of OB stars is shorter than the
lifetime of an OB star, it would be possible for a few OB stars to form
every few million years, and, as a result, a considerable number of
stars would form within a Hubble time.
However, we do not expect that the star formation continues as long as a
Hubble time, because the gravitational binding energy of
baryonic gas in such a small pregalactic cloud,
\begin{eqnarray}
E_{\rm gr} & \simeq & \frac{G M M_{\rm b}}{R_{\rm vir}} \\
& \simeq & 3.3 \times 10^{48} {\rm ergs}
(\frac{M_{\rm b}}{10^{4} M_{\sun}})^{5/3}
(\frac{\Omega_{\rm b}}{0.05})^{-2/3} h_{50}^{2/3}
( \frac{1+z_{\rm vir}}{30}),
\end{eqnarray}
where $M$ is the total mass, and $R_{\rm vir}$ is the virial radius of
the cloud, is so small that a few supernova explosions would blow out
such a small pregalactic cloud (see, e.g., Dekel \& Silk 1986).
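As a rough numerical check, the sketch below compares this binding energy with a fiducial supernova explosion energy of $10^{51}$ ergs (the supernova energy is an assumed value used only for this illustration).
\begin{verbatim}
# Minimal sketch: binding energy of the baryonic gas vs a fiducial supernova
# energy of 1e51 erg (an assumed value, for illustration only).
def E_gr(Mb=1.0e4, Ob=0.05, h50=1.0, zvir=29.0):
    return 3.3e48 * (Mb/1e4)**(5/3) * (Ob/0.05)**(-2/3) * h50**(2/3) \
           * (1.0 + zvir)/30.0                 # [erg]

E_SN = 1.0e51
print(f"E_gr = {E_gr():.2e} erg,  E_gr/E_SN = {E_gr()/E_SN:.1e}")
\end{verbatim}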
Thus, our probable scenario is as follows:
in a small pregalactic cloud,
star formation occurs only in a photodissociatively regulated fashion
until several supernovae explode, and it stops thereafter.
If numerous small stars were formed per massive star, a substantial
fraction of the original cloud would eventually be converted into stars.
However, in the primordial circumstance, the stellar initial mass
function could be strongly biased toward the formation of massive stars
because of the higher temperatures relative to the present-day
counterpart.
If this was the case and OB stars were formed selectively, the amount of
gas mass that was converted to stars would be extremely small.
\section{Summary}
We have studied the H$_{2}$ photodissociation region around an OB star in
a primordial gas cloud.
A region as large as the whole small pregalactic cloud is affected by
only one or a few OB stars and becomes unable to cool in a free-fall time
under the condition appropriate to virialization.
Therefore, in those clouds which have virial temperatures less than
$10^{4}$ K, star formation does not occur efficiently,
unless the primordial initial mass function is extremely weighted
toward low mass stars.
If the reionization of the universe is caused by stellar UV radiation,
some OB stars must form.
However, as we have shown, an OB star formed in a small pregalactic cloud
would inevitably photodissociate the whole cloud and subsequent star
formation would be strongly suppressed.
Therefore, stellar UV radiation from small pregalactic clouds cannot
play a significant role in the reionization of the universe.
\acknowledgements
The authors would like to thank Toru Tsuribe for fruitful discussions,
Humitaka Sato and Naoshi Sugiyama for continuous encouragement, Evan
Scannapieco for checking the English, and the referee, Zolt\'{a}n Haiman, for
a careful reading of the manuscript and for useful comments.
This work is supported in part by the Grant-in-Aid for Scientific
Research on Priority Areas (No. 10147105) (R.N.), and the Grant-in-Aid for
Scientific Research from the Ministry of Education, Science, Sports and
Culture, No. 08740170 (R.N.).
\newpage
\section{Introduction}
\label{sec1}
\setcounter{equation}{0}
\setcounter{theo}{0}
In this paper, we consider the weak hypocoercivity issue for the kinetic Fokker-Planck (KFP for short) equation
\smallskip
\begin{equation}\label{1}
\partial_t f ={\mathcal L} f := -v \cdot \nabla_x f +\nabla_x V(x) \cdot \nabla_v f +\Delta_v f + \hbox{div}_v(v f),
\end{equation}
for a density function $f = f(t, x, v)$, with $t \ge 0 ,\ x \in \R^d,\ v \in \R^d$.
The evolution equation is complemented with an initial datum
\begin{equation}
\nonumber
f(0, \cdot) = f_0 \ \ on \ \R^{2d} .
\end{equation}
We make the fundamental assumption on the confinement potential $V$
\begin{equation}
\nonumber
V(x)= \langle x \rangle^{\gamma} ,\quad \gamma \in (0, 1) ,
\end{equation}
where $ \langle x \rangle^2 := 1 + |x|^2 $.
Let us make some elementary but fundamental observations. First, the equation is mass conservative,
that is
\begin{equation}
\nonumber
{\mathcal M} (f_0) ={\mathcal M}( f(t,\cdot )),
\end{equation}
where we define the mass of $f$ by
\begin{equation}
\nonumber
{\mathcal M}(f) =\int_{R^d \times R^d} f dx dv.
\end{equation}
\noindent Next, we observe that
\begin{equation}\label{A1}
\quad G=Z^{-1}e^{-W}, \quad W= \frac {v^2} {2} +V(x), \quad Z \in \R_{+}
\end{equation}
is a positive normalized steady state of the KFP model, precisely
\begin{equation}
\nonumber
{\mathcal L} G=0, \quad G>0, \quad {\mathcal M}(G)=1,
\end{equation}
by choosing the normalizing constant $Z>0$ appropriately. Finally we observe that, contrary to the case $\gamma \ge 1$, a Poincar\'{e} inequality of the type
\begin{equation}
\nonumber
\exists c > 0, \quad \int_{\R^d} |f(x)|^2 \exp(-V(x)) dx \le c \int_{\R^d} |\nabla f(x)|^2 \exp(-V(x)) dx ,
\end{equation}
for any smooth function $f: \R^d \to \R$ such that
\begin{equation}
\nonumber
\int_{\R^d} f(x) \exp(-V(x)) dx =0 ,
\end{equation}
does not hold. Only a weaker version of this inequality remains true (see \cite{RW}, or below Section 2). In particular, there is no spectral gap for the associated operator ${\mathcal L}$, nor is there an exponential trend to the equilibrium for the associated semigroup.
\smallskip
For a given weight function $m$, we will denote $L^p(m) = \{ f | fm \in L^p \}$ the associated Lebesgue space and $\Vert f \Vert_{L^p(m)} = \Vert f m \Vert_{L^p}$ the associated norm.
The notation $A \lesssim B$ means $A \le C B $ for some constant $C>0$.
\smallskip
With these notations, we can introduce the main result of this paper.
\begin{theo}\label{T11}
(1) For any initial datum $f_0 \in L^p(G^{-(\frac {p-1} {p} +\epsilon)})$, $p \in [1,\infty)$, $\epsilon > 0$ small, the associated solution $f(t, \cdot)$ of the kinetic Fokker-Planck equation (\ref{1}) satisfies
\begin{equation}
\nonumber
\Vert f(t , \cdot) - {\mathcal M}(f_0) G\Vert_{L^p(G^{-\frac {p-1} {p}})} \lesssim e^{- C t^{b} } \Vert f_0 -{\mathcal M}(f_0) G\Vert_{L^p (G^{-(\frac {p-1} {p} +\epsilon) } ) } ,
\end{equation}
for any $b \in (0, \frac {\gamma} {2-\gamma})$ and some constant $C>0$.\\
(2) For any initial datum $f_0 \in L^1(m)$, $m = H^k$, $H =x^2+v^2$, $1 \le k $, the associated solution $f(t, \cdot)$ of the kinetic Fokker-Planck equation (\ref{1}) satisfies
\begin{eqnarray}
\nonumber
\Vert f(t,\cdot)-{\mathcal M}(f_0)G \Vert_{L^1} \lesssim (1+t)^{-a} \Vert f_0 - {\mathcal M}(f_0)G \Vert_{L^1(m)},
\end{eqnarray}
for any $0 < a <\frac {k} {1-\frac \gamma 2}$. The constants in the estimates do not depend on $f_0$, but only on $\gamma, d, \epsilon,\theta, p, k$.
\end{theo}
\begin{rem}
Theorem \ref{T11} is also true when $V(x)$ behaves like $ \langle x \rangle^{\gamma} $, that is for any $V(x)$ satisfying
\begin{equation}
\nonumber
C_1 \langle x \rangle^{\gamma} \le V(x) \le C_2\langle x \rangle^{\gamma} , \ \ \forall x \in \R^d ,
\end{equation}
\begin{equation}
\nonumber
C_3|x| \langle x \rangle^{\gamma-1} \le x \cdot \nabla_x V(x) \le C_4|x|\langle x \rangle^{\gamma-1}, \ \ \forall x \in B_R^c ,
\end{equation}
and
\begin{equation}
\nonumber
|D^2_x V(x)| \le C_5 \langle x \rangle^{\gamma-2}, \ \ \forall x \in \R^d ,
\end{equation}
for some constant $C_i >0$, $R>0$.
\end{rem}
\begin{rem}
There are many classical results on the case $ \gamma \ge 1 $. In this case there is exponential decay, and we refer the interested reader to \cite{V, DMS, DMS2, HN, HN2, H, BCG}.
\end{rem}
\begin{rem}
There are already some convergence results for the weak confinement case, proved by probabilistic methods in some particular $L^1$ or $L^2$ spaces, in \cite{BCG} and \cite{DFG}; this paper extends the result to $L^p$ spaces and to larger spaces.
\end{rem}
\smallskip
Let us briefly explain the main ideas behind our method of proof.
\smallskip
We first introduce four spaces $E_1= L^2(G^{-1/2})$, $E_2=L^2(G^{-1/2}e^{\epsilon_1V(x)})$, $E_3 = L^2(G^{-(1+\epsilon_2)/2})$ and $E_0 =L^2(G^{-1/2} \langle x \rangle^{\gamma-1} )$, with
$\epsilon_1>0$ and $\epsilon_2>0$ small such that $E_3 \subset E_2 \subset E_1 \subset E_0 \subset L^2$. Thus $E_1$ is an interpolation space between $E_0$ and $E_2$. We first use a hypocoercivity argument as in \cite{DMS,DMS2} to prove that, for any $f_0 \in E_3$, the solution to the KFP equation (\ref{1}) satisfies
\begin{equation}
\nonumber
\frac {d} {dt} \Vert f(t) \Vert_{E_1} \le - \lambda \Vert f(t) \Vert_{E_0},
\end{equation}
for some constant $\lambda >0$. We use this and the Duhamel formula to prove
\begin{equation}
\nonumber
\Vert f(t) \Vert_{E_2} \lesssim \Vert f_0 \Vert_{E_3}.
\end{equation}
Combining the two inequalities and using an interpolation argument as in \cite{KM}, we get
\begin{equation}\label{7}
\Vert f(t) \Vert_{E_1} \lesssim e^{-a t^b} \Vert f_0 \Vert_{E_3},
\end{equation}
for some $a>0, b \in (0,1)$.
We then generalize the decay estimate to a wider class of Banach spaces by adapting the extension theory introduced in \cite{M2} and developed in \cite{MM, GMM}.
For any operator ${\mathcal L}$, we denote by $S_{\mathcal L}(t)$ the associated semigroup. We introduce a splitting ${\mathcal L}={\mathcal A}+{\mathcal B}$, where ${\mathcal A}$ is an appropriately defined bounded operator, so that ${\mathcal B}$ becomes a dissipative operator. By proving a regularization estimate on $S_{\mathcal B}$ in $L^p$
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t) \Vert_{L^p(m_1) \to L^2(m_2)} \lesssim t^{-\alpha},\quad \forall t \in [0,\eta],
\end{equation}
for some weight function $m_1$, $m_2$ and some $\alpha, \eta>0$, and using the iterated Duhamel's formula
\begin{eqnarray}\label{10}
S_{\mathcal L} =S_{\mathcal B} + \sum_{l=1}^{n-1}(S_{\mathcal B})*( {\mathcal A} S_{\mathcal B} )^{(*l)} + S_{\mathcal L} *({\mathcal A} S_{\mathcal B}(t))^{*n},
\end{eqnarray}
we deduce the $L^p$ convergence on $S_{\mathcal L}$, where the convolution of two semigroups $S_{\mathcal A}(t)$ and $S_{\mathcal B}(t)$ is defined by
\begin{equation}
\nonumber
(S_{\mathcal A}*S_{\mathcal B})(t) =\int_0^t S_{\mathcal A}(s) S_{\mathcal B}(t-s) ds.
\end{equation}
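The algebra behind (\ref{10}) can be checked in the simplest possible setting, namely when ${\mathcal A}$ and ${\mathcal B}$ are replaced by real constants; the following sketch (an illustration of the identity only, not of the operator setting of this paper) verifies it numerically for $n=3$.
\begin{verbatim}
# Minimal sketch: the iterated Duhamel formula (10) checked when L = A + B are
# real constants, so that S_L(t) = exp((a+b)t), S_B(t) = exp(bt), A = a.
# This only illustrates the algebra of (10), not the operator setting of the paper.
import numpy as np

a, b, n = 0.7, -1.3, 3
ts = np.linspace(0.0, 2.0, 4001)
dt = ts[1] - ts[0]

S_L = np.exp((a + b) * ts)
S_B = np.exp(b * ts)
ASB = a * S_B                                  # "A S_B"

def conv(F, G):
    """(F*G)(t) = int_0^t F(s) G(t-s) ds, trapezoidal rule on the grid."""
    out = np.zeros_like(ts)
    for i in range(1, len(ts)):
        prod = F[:i+1] * G[:i+1][::-1]
        out[i] = dt * (prod.sum() - 0.5 * (prod[0] + prod[-1]))
    return out

powers = [ASB]                                 # (A S_B)^{*l}, l = 1, 2, ...
for _ in range(n - 1):
    powers.append(conv(ASB, powers[-1]))

rhs = S_B + sum(conv(S_B, powers[l]) for l in range(n - 1)) + conv(S_L, powers[n - 1])
print(np.max(np.abs(S_L - rhs)))               # only a small discretization error
\end{verbatim}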
\medskip
Let us end the introduction by describing the plan of the paper. In Section 2, we develop a hypocoercivity argument to prove a weighted $L^2$ estimate for the KFP model. In Section 3, we introduce a splitting ${\mathcal L} ={\mathcal A}+{\mathcal B}$ and, using the $L^2$ estimate, we prove an $L^2$ convergence.
In Section 4 we present the proof of a regularization estimate on $S_{\mathcal B}$ from $L^2$ to $L^p$. In Section 5 we prove some $L^1$ estimates on the semigroup $S_{\mathcal B}$. Finally, in Section 6 we use the above regularization estimates to conclude the $L^p$ convergence for the KFP equation.
\medskip
{\bf Acknowledgment.}
The author thanks S. Mischler for fruitful discussions throughout this work. This work was supported by grants from the R\'egion Ile-de-France DIM program.
\bigskip
\section{$L^2$ framework: Dirichlet form and rate of convergence estimate}
\label{sec2}
\setcounter{equation}{0}
\setcounter{theo}{0}
For later discussion, we introduce some notations for the whole paper.
\smallskip
\noindent We split the KFP operator as $$ {\mathcal L} = {\mathcal T} + {\mathcal S}, $$ where ${\mathcal T}$ stands for the transport part
\begin{equation}
\nonumber
{\mathcal T} f= -v \cdot \nabla_x f +\nabla_x V(x) \cdot \nabla_v f,
\end{equation}
and ${\mathcal S}$ stands for the collision part
\begin{equation}
\nonumber
{\mathcal S} f= \Delta_v f + div_v(v f).
\end{equation}
We will denote the cut-off function $\chi$ such that $\chi(x, v) \in [0, 1]$, $\chi(x, v) \in C^\infty $, $\chi(x, v) =1$ when $x^2+v^2 \le 1$ , $\chi(x, v) =0$ when $x^2+v^2 \ge 2$, and then denote $\chi_R = \chi(x/R, v/R)$.
\smallskip
\noindent We may also define another splitting of the KFP operator ${\mathcal L}$ by
\begin{equation}\label{21}
{\mathcal L} ={\mathcal A}+{\mathcal B}, \quad {\mathcal A}=K\chi_R(x, v).
\end{equation}
with $K, R >0$ to be chosen later.
\noindent We use $\int f$ in place of $\int_{\R^d \times R^d} f dx dv$ for short, similarly $\int f dx$ means $\int_{\R^d } f dx $ , $\int f dv $ means $\int_{\R^d } f dv$. $B_{|x| \le \rho}$ is used to denote the ball such that $\{ x\in \R^d | |x| \le \rho\} $, similarly $B_{\rho}$ means the ball such that $\{ x, v \in \R^d | |x|^2 +v^2 \le \rho\} $.
For $V(x)= \langle x \rangle^\gamma, 0 < \gamma < 1$, we also denote $\langle \nabla V \rangle$ for $\langle x \rangle^{\gamma-1}$, and $\langle \nabla V \rangle^{-1}$ for $\langle x \rangle^{1-\gamma}$.
\smallskip
With these notations we introduce the Dirichlet form adapted to our problem. We define the 0 order and first order moments
\begin{equation}
\nonumber
\rho_f=\rho[f]=\int f dv, \quad j_f = j[f] = \int v f dv,
\end{equation}
then we define a projection operator $\pi$ by
\begin{equation}
\nonumber
\pi f =M \rho_f, \quad M=C e^{-v^2/2}, \quad \int M dv=1,
\end{equation}
and the complement of $\pi$ by
\begin{equation}
\nonumber
\pi^{\bot} = 1 - \pi, \quad f^{\bot} = \pi^{\bot} f.
\end{equation}
We define an elliptic operator $\Delta_V$ and its dual $\Delta_V^*$ by
\begin{equation}
\nonumber
\Delta_V u :=div_x(\nabla_x u + \nabla_x V u),\quad \Delta_V^* u =\Delta_x u- \nabla_x V \cdot \nabla_x u,
\end{equation}
We let $u=(\Delta_V^*)^{-1}\xi$ be the solution to the elliptic equation
\begin{equation}
\nonumber
\Delta_V^* u =\xi \ on \ \R^d,
\end{equation}
Note that $u$ is only defined up to a constant; we further require that
\begin{equation}
\nonumber
\int u e^{-V} \langle \nabla V \rangle^{-2} dx=0,
\end{equation}
Using these notations, we define a scalar product by
\begin{eqnarray}
\nonumber
((f, g))&:= &(f, g)_{{\mathcal H}} +\epsilon (\Delta_V^{-1} \nabla_x j_f, (\rho_g e^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&& + \epsilon ((\rho_f e^V \langle \nabla V \rangle^2), \Delta_V^{-1} \nabla_x j_g)_{L^2}
\\ \nonumber
&=&(f, g)_{{\mathcal H}} + \epsilon (j_f, \nabla_x(\Delta_V^*)^{-1}(\rho_ge^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&&+\epsilon( (\nabla_x(\Delta_V^*)^{-1}(\rho_f e^V \langle \nabla V \rangle^2 ), j_g)_{L^2},
\end{eqnarray}
for some $\epsilon > 0$ to be specified later. \\
We then define the Dirichlet form
\begin{eqnarray}
\nonumber
D[f] &:= &((-{\mathcal L} f, f))
\\ \nonumber
&=& (-{\mathcal L} f, f)_{{\mathcal H}} +\epsilon (\Delta_V^{-1} \nabla_x j[-{\mathcal L} f], (\rho_f e^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&&+ \epsilon ((\rho[-{\mathcal L} f] e^V \langle \nabla V \rangle^2), \Delta_V^{-1} \nabla_x j_f)_{L^2}.
\end{eqnarray}
Finally we define ${\mathcal H}=L^2(G^{-1/ 2 })$, ${\mathcal H}_1=L^2(G^{-1/ 2 }\langle \nabla V \rangle)$ and
\begin{equation}
\nonumber
{\mathcal H}_0 = \{f \in {\mathcal H} : \ \int f \, dx \, dv =0 \}
\end{equation}
where we recall that $G$ has been introduced in (\ref{A1}). With these notations we can come to our first theorem.
\begin{theo}\label{T21}
There exists $\epsilon > 0$ small enough such that on ${\mathcal H}_0$ the norm $((f, f))^{\frac 1 2}$ defined above is equivalent to the norm of ${\mathcal H}$; moreover, there exists $\lambda > 0$ such that
$$D[f] \ge \lambda \Vert f \Vert^2_{{\mathcal H}_1}, \ \ \ \forall f \in {\mathcal H}_0.$$
As a consequence, for any $f_0 \in {\mathcal H}_0$, we have
\begin{equation}\label{22}
\frac d {dt} ((f, f)) \le - C \int f^2 G^{-1} \langle x \rangle^{2(\gamma - 1)},
\end{equation}
for some constant $C>0$. In particular for any $f_0 \in {\mathcal H}_0$, we have
\begin{equation}\label{23}
\Vert f(t , \cdot) \Vert_{L^2(G^{-\frac 1 2})} \le C \Vert f_0 \Vert_{L^2(G^{-\frac 1 2})},
\end{equation}
for some constant $C > 0$.
\end{theo}
\begin{rem}
In ${\mathcal H}_0$ we have
\begin{equation}
\nonumber
\int \rho_f e^V \langle \nabla V \rangle^2 e^{-V} \langle \nabla V \rangle^{-2} dx= \int \rho_f dx = \int f dx dv = 0,
\end{equation}
so the term $(\Delta_V^*)^{-1}(\rho_ge^V \langle \nabla V \rangle^2 )$ is well defined in ${\mathcal H}_0$.
\end{rem}
\begin{rem}
(1) By slightly modifying the method in Villani's paper \cite{V}, an $H^1$ version of our
(2) Our statement is a generalization of \cite{DMS, DMS2}.
\end{rem}
Before proving the theorem, we need some lemmas.
We say that $W$ satisfies a local Poincar\'e inequality on a bounded open set $\Omega$ if there exists some constant $\kappa_\Omega > 0$ such that:
\begin{equation}
\nonumber
\int_\Omega h^2 W \le \kappa_\Omega \int_\Omega |\nabla h |^2 W + \frac 1 {W(\Omega)}(\int_\Omega h W)^2,
\end{equation}
for any nice function $h : \R^d \to \R$ and where we denote $W(\Omega):= \langle W 1_\Omega \rangle$.
\begin{lem}\label{L21}
Under the assumption $W, W^{-1} \in L_{loc}^\infty(\R^d) $, the function $W$ satisfies the local Poincar\'e inequality for any ball $\Omega \subset \R^d$.
\end{lem}
\noindent For the proof of Lemma \ref{L21} we refer to \cite{M3} Lemma 2.3.
\begin{lem}\label{L22}(weak Poincar\'{e} inequality)
There exists a constant $\lambda > 0$ such that
\begin{equation}
\nonumber
\Vert u \Vert_{L^2(\langle \nabla V \rangle e^{-V/2} )} \le \lambda \Vert \nabla u \Vert_{L^2( e^{-V/2} )}
\end{equation}
for any $u \in {\mathcal D}(\R^d)$ such that
\begin{equation}
\nonumber
\int_{\R^d} u e^{-V}\langle \nabla V \rangle^{-2} dx =0
\end{equation}
\end{lem}
\noindent {\sl Proof of Lemma \ref{L22}.} We prove that for any $h \in {\mathcal D}(\R^d)$ such that
\begin{equation}\label{24}
\int_{\R^d} h e^{-V} \langle \nabla V \rangle^{-2}=0,
\end{equation}
we have
\begin{equation}
\nonumber
\int_{\R^d} |\nabla h |^2 e^{-V} \ge \lambda \int_{\R^d} h^2 e^{-V} \langle x\rangle^{2(\gamma-1)},
\end{equation}
for some $\lambda >0$. Taking $g=he^{-V/2}$, we have $\nabla g =\nabla h e^{-\frac 1 2 V} -\frac 1 2 \nabla V h e^{-\frac 1 2 V}$, so that
\begin{eqnarray}
\nonumber
0 \le\int |\nabla g|^2 &=& \int |\nabla h |^2e^{-V} +\int h^2 \frac 1 4 |\nabla V|^2 e^{-V} -\int \frac 1 2\nabla(h^2) \cdot \nabla V e^{-V}
\\ \nonumber
&=&\int |\nabla h |^2e^{-V} +\int h^2 (\frac 1 2 \Delta V - \frac 1 4 |\nabla V|^2 )e^{-V} .
\end{eqnarray}
We deduce for some $K, R_0 > 0$
\begin{eqnarray}
\nonumber
\int |\nabla h |^2e^{-V} \ge \int \frac 1 8 h^2 \langle \nabla V \rangle^2 e^{-V} - K \int_{B_{R_0}} h^2 e^{-V} \langle \nabla V \rangle^{-2}.
\end{eqnarray}
Defining
\begin{equation}
\nonumber
\epsilon_R :=\int_{B_R^c} e^{-V} \langle \nabla V \rangle^{-6} , \quad Z_R :=\int_{B_R} e^{-V} \langle \nabla V \rangle^{-2} ,
\end{equation}
and using (\ref{24}), we get
\begin{eqnarray}
\nonumber
(\int_{B_R} he^{-V} \langle \nabla V \rangle^{-2} )^2&=&(\int_{B_R^c} he^{-V} \langle \nabla V \rangle^{-2} )^2
\\ \nonumber
&\le&\int_{B_R^c} h^2e^{-V} \langle \nabla V \rangle^{2} \int_{B_R^c} e^{-V} \langle \nabla V \rangle^{-6}
\\ \nonumber
&\le& \epsilon_R \int_{B_R^c} h^2e^{-V} \langle \nabla V \rangle^{2}.
\end{eqnarray}
Using the local Poincar\'e inequality in Lemma \ref{L21}, we deduce
\begin{eqnarray}
\nonumber
\int_{B_R} h^2e^{-V} \langle \nabla V \rangle^{-2} &\le& C_R \int_{B_R} |\nabla h|^2e^{-V} \langle \nabla V \rangle^{-2} +\frac 1 {Z_R} (\int_{B_R} he^{-V} \langle \nabla V \rangle^{-2} )^2
\\ \nonumber
&\le& C_R^{'} \int_{B_R} |\nabla h|^2e^{-V} +\frac {\epsilon_R} {Z_R} \int_{B_R} h^2e^{-V} \langle \nabla V \rangle^{2} .
\end{eqnarray}
Putting all the inequalities together and taking $R>R_0$, we finally get
\begin{eqnarray}
\nonumber
\int h^2e^{-V} \langle \nabla V \rangle^{2} &\le& 8\int |\nabla h |^2e^{-V} +8 K \int_{B_{R_0}} h^2 e^{-V} \langle \nabla V \rangle^{-2}
\\ \nonumber
&\le& 8(1+ KC_R^{'} )\int |\nabla h|^2e^{-V} +\frac {8 K \epsilon_R} {Z_R} \int_{B_R} h^2e^{-V} \langle \nabla V \rangle^{2} ,
\end{eqnarray}
and we conclude by taking $R$ large such that: $\frac {8 K \epsilon_R} {Z_R} \le \frac 1 2 $.
\qed
\smallskip
\begin{lem}\label{L23}
(Elliptic Estimate) For any $\xi_1 \in L^2( \langle \nabla V \rangle^{-1}e^{-V/2})$ and $\xi_2 \in L^2(e^{-V/2})$, the solution $u \in L^2(e^{-V/2})$ to the elliptic equation
\begin{equation}\label{E21}
-\Delta_V^* u = \xi_1 + \nabla \xi_2, \quad \int u e^{-V} \langle \nabla V \rangle^{-2}dx =0,
\end{equation}
satisfies
\begin{equation}\label{E22}
\Vert u \Vert_{L^2(\langle \nabla V \rangle e^{-V/2} )}+ \Vert \nabla u \Vert_{L^2(e^{-V/2})} \lesssim \Vert \xi_1 \Vert_{L^2(\langle \nabla V \rangle^{-1} e^{-V/2})} + \Vert \xi_2 \Vert_{L^2(e^{-V/2})}.
\end{equation}
Similarly for any $\xi \in L^2(e^{-V/2})$, the solution $u \in L^2(e^{-V/2}) $ to the elliptic problem
\begin{equation}
\nonumber
-\Delta_V^* u = \xi, \quad \int u e^{-V} \langle \nabla V \rangle^{-2} =0,
\end{equation}
satisfies
\begin{equation}\label{E23}
\Vert u \Vert_{L^2(\langle \nabla V \rangle^2 e^{-V/2} )}+ \Vert \nabla u \Vert_{L^2(\langle \nabla V \rangle e^{-V/2})} + \Vert D^2 u \Vert_{L^2(e^{-V/2})} \lesssim \Vert \xi \Vert_{L^2(e^{-V/2 }\langle \nabla V \rangle^{-1} ) }.
\end{equation}
\end{lem}
\noindent {\sl Proof of Lemma \ref{L23}.}
Multiplying (\ref{E21}) by $u e^{-V}$ and observing that
\begin{equation}\label{E24}
e^{V} \hbox{div}_x[e^{-V}\nabla_x u] = \Delta_x u -\nabla_x V \cdot \nabla_x u= \Delta_V^* u,
\end{equation}
we have after integration
\begin{equation}
\nonumber
- \int e^V \hbox{div}_x[e^{-V}\nabla_x u] u e^{-V} = \int (\xi_1 + \nabla \cdot \xi_2) u e^{-V}.
\end{equation}
Performing one integration by parts, we deduce
\begin{equation}
\nonumber
\int e^{-V} |\nabla_x u|^2 = \int( \xi_1 u +\xi_2 \cdot \nabla u -\xi_2 \cdot \nabla V u )e^{-V},
\end{equation}
using Lemma \ref{L22} we obtain (\ref{E22}). In inequality (\ref{E23}), the first two terms are easily bounded using (\ref{E22}) and $\langle \nabla V \rangle \le 1$, so we only need to prove the bound for the third term. By integration by parts, we have
\begin{eqnarray}
\nonumber
\int |D^2 u|^2e^{-V} &=& \sum\limits_{i, j=1}^d \int (\partial_{ij}^2 u)^2e^{-V}
\\ \nonumber
&=& \sum\limits_{i, j =1}^d \int \partial_i u (\partial_{ij}^2 u \partial_j V -\partial_{ijj}^3 u )e^{-V}
\\ \nonumber
&=& \sum\limits_{i, j=1}^d \int \partial_{jj} u \partial_i(\partial_i u e^{-V}) -\frac 1 2 \int (\partial_i u)^2 \partial_j(\partial_j V e^{-V})
\\ \nonumber
&=& \int (\Delta u) (-\Delta_V^* u) e^{-V} + \int |\nabla u|^2( |\nabla V|^2 -\Delta V )e^{-V}
\\ \nonumber
&\lesssim&\Vert D^2 u\Vert_{L^2(e^{-V/2})} \Vert \xi \Vert_{L^2(e^{-V/2}) } + \Vert \langle \nabla V\rangle \nabla u \Vert_{L^2(e^{-V/2})},
\end{eqnarray}
where in the third equality we have used
\begin{eqnarray}
\nonumber
\int \partial_{ij}^2 u\partial_i u \partial_j V e^{-V} &=& -\int \partial_i u \partial_j (\partial_i u \partial_j Ve^{-V})
\\ \nonumber
&=& -\int \partial_{ij}^2 u\partial_i u \partial_j V e^{-V} -\int (\partial_i u)^2 \partial_j(\partial_j V e^{-V}),
\end{eqnarray}
which implies
\begin{eqnarray}
\nonumber
\int \partial_{ij}^2 u\partial_i u \partial_j V e^{-V} = -\frac 1 2 \int (\partial_i u)^2 \partial_j(\partial_j V e^{-V}),
\end{eqnarray}
and in the fourth equality we have used (\ref{E24}). That concludes the proof.
\qed
\smallskip
Now we turn to the proof of Theorem \ref{T21}.
\smallskip
\noindent {\sl Proof of Theorem \ref{T21}.} First we prove the equivalence of the norms associated to $((\ , \ ))$ and $(\ ,\ )_{\mathcal H}$. By the Cauchy-Schwarz inequality and Lemma \ref{L23}, we have
\begin{equation}
\nonumber
(j_f, \nabla_x(\Delta_V^*)^{-1}(\rho_ge^V \langle \nabla V \rangle^2 ))_{L^2} \le \Vert j_f \Vert_{L^2(e^{V/2})} \Vert \rho_g e^V \langle \nabla V \rangle^2 \Vert_{L^2(\langle \nabla V \rangle^{-1} e^{-V/2})},
\end{equation}
and obviously
\begin{equation}
\nonumber
\Vert \rho_g e^V \langle \nabla V \rangle^2 \Vert_{L^2(\langle \nabla V \rangle^{-1} e^{-V/2})} = \Vert \rho_g \Vert_{L^2( \langle \nabla V \rangle e^{V/2})} \le \Vert \rho_g \Vert_{L^2( e^{V/2}) } \lesssim \Vert g \Vert_{{\mathcal H}}.
\end{equation}
Using the elementary observations
\begin{equation}
\nonumber
|j_f |\lesssim \Vert f \Vert_{L^2(e^{v^2/4})} \ \ \ |\rho_f |\lesssim \Vert f \Vert_{L^2(e^{v^2/4})},
\end{equation}
we deduce
\begin{equation}
\nonumber
(j_f, \nabla_x(\Delta_V^*)^{-1}(\rho_ge^V \langle \nabla V \rangle^2 ))_{L^2} \lesssim \Vert f \Vert_{{\mathcal H}}\Vert g \Vert_{{\mathcal H}},
\end{equation}
The third term in the definition of (( , )) can be estimated in the same way and that ends the proof of equivalence of norms.
\qed
\smallskip
Now we prove the main estimate of the theorem. We split the Dirichlet term $D[f]$ into 3 parts
$$D[f]=T_1+\epsilon T_2 + \epsilon T_3,$$
with
\begin{eqnarray}
\nonumber
T_1&:=&({\mathcal L} f , f)_{{\mathcal H}}
\\ \nonumber
T_2 &:=& (\Delta_V^{-1} \nabla_x j[-{\mathcal L} f], \rho_f)_{L^2(e^{V/2 } \langle \nabla V \rangle )}
\\ \nonumber
T_3 &:=&( (\Delta_V)^{-1}\nabla_x j_f, \rho[-{\mathcal L} f])_{L^2(e^{V/2} \langle \nabla V \rangle)} \ ,
\end{eqnarray}
and compute them separately.\\
For the $T_1$ term, using the classical Poincar\'e inequality, we have
\begin{eqnarray}
\nonumber
T_1 &:=&(-{\mathcal T} f+ {\mathcal S} f, f)_{{\mathcal H}}=(-{\mathcal S} f, f)_{{\mathcal H}}
\\ \nonumber
&= &-\int [\Delta_v f + div_v( v f)] f M^{-1} e^{V} = \int |\nabla_v (f /M)|^2 M e^{V}
\\ \nonumber
&\ge& k_p \int | f/M - \rho_f |^2M e^{V} = k_p \Vert f - \rho_f M \Vert^2_{{\mathcal H}} = k_p \Vert f^{\bot} \Vert_{{\mathcal H}}^2,
\end{eqnarray}
for some $k_p>0$. We split the $T_2$ term as
\begin{eqnarray}
\nonumber
T_2 &:=& (\Delta_V^{-1} \nabla_x j[-{\mathcal L} f], \rho_f)_{L^2(e^{V/2 } \langle \nabla V \rangle )}
\\ \nonumber
&=&(\Delta_V^{-1} \nabla_x j[-{\mathcal T} \pi f], \rho_f)_{L^2(e^{V/2} \langle \nabla V \rangle )}
\\ \nonumber
&&+ (\Delta_V^{-1} \nabla_x j[-{\mathcal T} f^{\bot}], \rho_f)_{L^2(e^{V/2} \langle \nabla V \rangle )}
\\ \nonumber
&&+ (\Delta_V^{-1} \nabla_x j[-{\mathcal S} f], \rho_f)_{L^2(e^{V/2} \langle \nabla V \rangle )}
\\ \nonumber
&:=& T_{2,1} + T_{2,2} +T_{2,3} .
\end{eqnarray}
First observe
\begin{equation}
\nonumber
{\mathcal T} \pi f = - v \cdot \nabla_x \rho_f M - \nabla_x V \cdot v \rho_f M = -e^{-V} M v \cdot \nabla_x (\rho_f/e^{-V}),
\end{equation}
so that we have
\begin{equation}
\nonumber
j[-{\mathcal T} \pi f]=\langle v v_k M\rangle e^{-V}\partial_{x_k}(\rho_f/e^{-V})=e^{-V}\nabla_x(\rho_f/e^{-V}).
\end{equation}
Next by (\ref{E24}), we have
\begin{eqnarray}
\nonumber
T_{2,1} &=&(j[-{\mathcal T} \pi f], \nabla (\Delta_V^*)^{-1}(\rho_f e^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&=&(\rho_f, [e^Vdiv_x(e^{-V}\nabla)][(\Delta_V^*)^{-1}(\rho_f e^V \langle \nabla V \rangle^2 )] )_{L^2}
\\ \nonumber
&=& \Vert \rho_f e^{V/2} \langle \nabla V \rangle \Vert^2_{L^2} = \Vert \pi f \Vert^2_{{\mathcal H}_1}.
\end{eqnarray}
Using the notation $\eta_{1} = \langle v \otimes v f^{\bot} \rangle$ and $\eta_{2,\alpha \beta} = \langle v_\alpha \partial_{v_\beta} f^{\bot} \rangle$, and observing that
\begin{equation}
\nonumber
|\eta_1| \lesssim \Vert f^{\bot} \Vert_{L^2(e^{v^2/4})} , |\eta_2| \lesssim \Vert f^{\bot} \Vert_{L^2(e^{v^2/4})},
\end{equation}
we compute
\begin{eqnarray}
\nonumber
T_{2,2} &=&(j[-{\mathcal T} f^{\bot}], \nabla(\Delta_V^*)^{-1} (\rho_f e^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&=& (D\eta_1+ \eta_2 \nabla V, \nabla(\Delta_V^*)^{-1} (\rho_f e^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&=& (\eta_1, D^2(\Delta_V^*)^{-1} (\rho_f e^V \langle \nabla V \rangle^2 ))_{L^2} + (\eta_2 ,\nabla V \nabla(\Delta_V^*)^{-1} (\rho_f e^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&\le &\Vert \eta_1 \Vert_{L^2(e^{V/2} )}\Vert D^2(\Delta_V^*)^{-1} (\rho_f e^V\langle \nabla V \rangle^2) \Vert_{L^2(e^{-V/2})}
\\ \nonumber
&&+ \Vert \eta_2 \Vert_{L^2(e^{V/2})}\Vert \nabla V \nabla(\Delta_V^*)^{-1} (\rho_f e^V \langle \nabla V \rangle^2 ) \Vert_{L^2(e^{-V/2})}.
\end{eqnarray}
By Lemma \ref{L23}, we estimate
\begin{eqnarray}
\nonumber
T_{2,2} &\lesssim& \Vert \eta_1 \Vert_{L^2(e^{V/2})} \Vert \rho_f e^V \langle \nabla V \rangle^2 \Vert_{L^2(e^{-V/2} \langle \nabla V \rangle^{-1} )}
\\ \nonumber
&&+ \Vert \eta_2 \Vert_{L^2(e^{V/2})} \Vert \rho_f e^V \langle \nabla V \rangle^2 \Vert_{L^2(e^{-V/2} \langle \nabla V \rangle^{-1} )}
\\ \nonumber
&\lesssim& \Vert f^{\bot} \Vert_{\mathcal H} \Vert \pi f \Vert_{{\mathcal H}_1}.
\end{eqnarray}
Using
\begin{eqnarray}
\nonumber
j[-{\mathcal S} f] = j[-{\mathcal S} f^{\bot}] &=&- \int v[\Delta_v f^{\bot} + div_v(v f^{\bot})] dv
\\ \nonumber
&=&d\int f^{\bot} v dv \lesssim \Vert f^{\bot} \Vert_{L^2(e^{v^2/4})},
\end{eqnarray}
and Lemma \ref{L23}, we have
\begin{eqnarray}
\nonumber
T_{2,3} &=&(j[-Sf], \nabla (\Delta_V^*)^{-1}(\rho_f e^V \langle \nabla V \rangle^2 ))_{L^2}
\\ \nonumber
&\le &\Vert j[-Sf] \Vert_{L^2( e^{V/2})} \Vert \nabla (\Delta_V^*)^{-1}(\rho_f e^V \langle \nabla V \rangle^2) \Vert_{L^2( e^{-V/2})}
\\ \nonumber
&\lesssim&\Vert f^{\bot} \Vert_{{\mathcal H}} \Vert\rho_f e^V \langle \nabla V \rangle^2 \Vert_{L^2( \langle \nabla V \rangle^{-1} e^{-V/2}) }
\\ \nonumber
&= &\Vert f^{\bot} \Vert_{{\mathcal H}} \Vert \rho_f \Vert_{L^2( \langle \nabla V \rangle e^{V/2} )}
\\ \nonumber
&\lesssim&\Vert f^{\bot} \Vert_{{\mathcal H}} \Vert \pi f \Vert_{{\mathcal H}_1}.
\end{eqnarray}
Finally we come to the $T_3$ term. Using
\begin{equation}
\nonumber
\rho[-Sf] = \int \nabla_v \cdot (\nabla_v f + v f) dv =0,
\end{equation}
and
\begin{eqnarray}
\nonumber
\rho[-Tf] &= &\rho[v \nabla_x f - \nabla_x V(x) \nabla_v f]
\\ \nonumber
&= &\int v \nabla_x f - \nabla_x V(x) \nabla_v f dv
\\ \nonumber
&= &\nabla_x j[f],
\end{eqnarray}
because $\nabla (\langle \nabla V \rangle^2) \lesssim \langle \nabla V \rangle^2$ and $ \langle \nabla V \rangle^2 \lesssim \langle \nabla V \rangle$, we get
\begin{eqnarray}
\nonumber
T_3 &=&( (\Delta_V)^{-1}\nabla_x j_f, \rho[-{\mathcal L} f])_{L^2(e^{V/2} \langle \nabla V \rangle)}
\\ \nonumber
&=&((\Delta_V)^{-1} \nabla_x j[f^{\bot}], \rho[-{\mathcal T} f])_{L^2(e^{V/2} \langle \nabla V \rangle)}
\\ \nonumber
&=&(j[-f^{\bot}], \nabla (\Delta_V^*)^{-1} (\nabla_x j[f]e^V\langle \nabla V \rangle^2 )_{L^2}
\\ \nonumber
&\le&\Vert j[f^{\bot}] \Vert_{L^2(e^{V/2})} \Vert \nabla(\Delta_V^*)^{-1} [\nabla_x(j_f e^V\langle \nabla V \rangle^2)
\\ \nonumber
&&-\nabla V j_f e^V \langle \nabla V \rangle^2 -\nabla (\langle \nabla V \rangle^2) j_f e^V ] \Vert_{L^2(e^{-V/2})},
\end{eqnarray}
using again Lemma \ref{L23}, we have
\begin{eqnarray}
\nonumber
T_3 &\lesssim& \Vert j[f^{\bot}] \Vert_{L^2(e^{V/2})} (\Vert j_f e^V\langle \nabla V \rangle^2 \Vert_{L^2(e^{-V/2} \langle \nabla V \rangle^{-1} )}
\\ \nonumber
&&+ \Vert j_f e^V \nabla (\langle \nabla V \rangle^2) \Vert_{L^2(\langle \nabla V \rangle^{-1} e^{-V/2}) })
\\ \nonumber
&\lesssim&\Vert f^{\bot} \Vert_{\mathcal H} \Vert f \Vert_{{\mathcal H}_1}.
\end{eqnarray}
Putting all the terms together and choosing $\epsilon > 0$ small enough, we can deduce
\begin{eqnarray}
\nonumber
D[f] &\ge& k_p \Vert f^{\bot} \Vert^2_{{\mathcal H}}+ \epsilon \Vert \pi f \Vert^2_{{\mathcal H}_1} - \epsilon 2 K\Vert f^{\bot} \Vert_{\mathcal H} \Vert f\Vert_{{\mathcal H}_1}- \epsilon 2 K \Vert f^{\bot} \Vert_{{\mathcal H}} \Vert \pi f \Vert_{{\mathcal H}_1}
\\ \nonumber
&\ge& k_p \Vert f^{\bot} \Vert^2_{{\mathcal H}} + \epsilon \Vert \pi f \Vert^2_{{\mathcal H}_1} - (2\epsilon + 4\epsilon^{1/2}) K \Vert f^{\bot} \Vert^2_{{\mathcal H}} - \epsilon^{3/2} 4 K \Vert \pi f \Vert^2_{{\mathcal H}_1}
\\ \nonumber
&\ge& \frac {k_p} 2 (\Vert f^{\bot} \Vert^2_{{\mathcal H}} + \epsilon \Vert \pi f \Vert^2_{{\mathcal H}_1}) \ge \frac \epsilon M \Vert f \Vert^2_{{\mathcal H}_1},
\end{eqnarray}
for some $M>0$.
\qed
\bigskip
\section{ $L^2$ sub-exponential decay for the kinetic Fokker-Planck equation based on a splitting trick}
\setcounter{equation}{0}
\setcounter{theo}{0}
In this section we establish a first decay estimate on $S_{\mathcal L}$ which is a particular case in the result of Theorem \ref{T11}.
\begin{theo}\label{T31}
Using the notation and results in Theorem \ref{T21}, we have
\begin{equation}
\nonumber
\Vert S_{\mathcal L}(t) f_0\Vert_{L^2(G^{-\frac 1 2})} \lesssim e^{- C t^{\gamma/(2-\gamma) }} \Vert f_0 \Vert_{L^2 (G^{-(\frac 1 2+\epsilon) } ) } ,
\end{equation}
for any $ f_0 \in L^2 (G^{-(\frac 1 2 + \epsilon) } ) \cap {\mathcal H}_0$, $ \epsilon > 0$ small enough.
\end{theo}
\begin{rem}
It's worth emphasizing that we deduce immediately part (1) of Theorem \ref{T11} in the case $p=2$ by considering the initial datum $f_0 - {\mathcal M}(f_0) G$ for any $f_0 \in L^2(G^{-(\frac {1} {2} +\epsilon)} )$.
\end{rem}
Recalling the splitting ${\mathcal L} ={\mathcal A} +{\mathcal B}$ introduced in (\ref{21}), we first prove some decay estimates on the semigroup $S_{\mathcal B}$.
\begin{lem} \label{L31} Let us fix $p \in [1,\infty)$.
(1) For any given smooth weight function $m$, we have
\begin{eqnarray}\label{E31}
\int f^{p-1}({\mathcal L} f) G^{-(p-1)} m \le \frac 1 p \int f^p G^{-(p-1)} \tilde{m},
\end{eqnarray}
with
\begin{equation}
\nonumber
\tilde{m} = {\Delta_v m-\nabla_v m \cdot v-\nabla V(x) \cdot \nabla_v m + v \cdot \nabla_x m} .
\end{equation}
(2) Taking $m =e^{\epsilon H^{\delta}} $, $\epsilon>0$ if $0 < \delta <\frac \gamma 2$, $\epsilon$ small enough if $\delta =\frac \gamma 2$, $H = 3v^2 +2 x\cdot v + x^2 +1$, we have
\begin{eqnarray}\label{E32}
\int f^{p-1} ({\mathcal B} f) G^{-(p-1)} e^{\epsilon H^{\delta}} \le -C \int f^{p} G^{-(p-1)}e^{\epsilon H^{\delta}} H^{\delta +\frac \gamma 2 -1},
\end{eqnarray}
for some $K$ and $R$ large.
(3) With the same notation as above, there holds
\begin{equation}\label{E33}
\Vert S_{\mathcal B}(t) \Vert_{L^p(e^{2\epsilon H^\delta}G^{-\frac {p-1} {p}}) \to L^p( e^{\epsilon H^{\delta}} G^{-\frac {p-1} {p}} ) } \lesssim e^{-at^{\frac {2\delta} {2-\gamma}}},
\end{equation}
for some $a>0$. In particular, this implies
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t) \Vert_{L^p(G^{-(\frac {p-1} {p}+ \epsilon) }) \to L^p( G^{-\frac {p-1} {p}} ) } \lesssim e^{-at^{\frac {\gamma} {2-\gamma}}}.
\end{equation}
\end{lem}
\noindent {\sl Proof of Lemma \ref{L31}.} Step 1. Recalling (\ref{A1}), we write
\begin{eqnarray}
\nonumber
\int f^{p-1} ({\mathcal L} f) G^{-(p-1)} m=\int f^{p-1} ({\mathcal T} f) G^{-(p-1)} m+ \int f^{p-1} ({\mathcal S} f) G^{-(p-1)} m .
\end{eqnarray}
We first compute the contribution of the term with operator ${\mathcal T}$
\begin{eqnarray}
\nonumber
\int f^{p-1} ({\mathcal T} f) G^{-(p-1)} m &=& \frac 1 p \int {\mathcal T} (f^p) G^{-(p-1)} m
\\ \nonumber
&=& -\frac 1 p \int f^p {\mathcal T}(G^{-(p-1)} m )
\\ \nonumber
&=& \frac 1 p \int f^p G^{-(p-1)} (v \cdot \nabla_x m -\nabla V(x) \cdot \nabla_v m).
\end{eqnarray}
For the term with operator ${\mathcal S}$, we use one integration by parts and get
\begin{eqnarray}
\nonumber
&&\int f^{p-1} ({\mathcal S} f) G^{-(p-1)} m
\\ \nonumber
&=& \int f^{p-1} (\Delta_v f +\hbox{div}_v(v f)) G^{-(p-1)} m
\\ \nonumber
&=& -\int \nabla_v((f G^{-1})^{p-1} m ) \cdot \nabla_v(f G^{-1})G
\\ \nonumber
&=&- \int (p-1) |\nabla_v(f G^{-1}) |^2 (fG^{-1})^{p-2} G m-\frac 1 p \nabla_v( (f G^{-1})^p ) \cdot (\nabla_v m) G.
\end{eqnarray}
Performing another integration by parts on the latter term, we have
\begin{eqnarray}
\nonumber
&&\int f^{p-1} ({\mathcal S} f) G^{-(p-1)} m
\\ \nonumber
&=&\int - (p-1)|\nabla_v(f G^{-1})|^2 (fG^{-1})^{p-2} G m +\frac 1 p \nabla_v \cdot (G \nabla_v m ) (fG^{-1})^{p}
\\ \nonumber
&=& \int - (p-1)|\nabla_v(f G^{-1} )|^2 (fG^{-1})^{p-2} G m+\frac 1 p (\Delta_v m -v \cdot \nabla_v m )f^pG^{-(p-1)}.
\end{eqnarray}
Inequality (\ref{E31}) follows by putting together the two computations and dropping the nonpositive gradient term.\\
Step 2. We now take $m=e^{\epsilon H^\delta}$ and easily compute
\begin{equation}
\nonumber
\frac { \nabla_v m} {m}= \delta \epsilon \frac {\nabla_v H} {H^{1-\delta}},\quad \frac {\nabla_x m} {m}= \delta \epsilon \frac {\nabla_x H} {H^{1-\delta}} ,
\end{equation}
and
\begin{equation}
\nonumber
\frac {\Delta_v m} {m} \le \delta \epsilon \frac {\Delta_v H} {H^{1-\delta}}+ (\delta \epsilon)^2\frac {|\nabla_v H|^2} {H^{2(1-\delta)}}.
\end{equation}
We deduce that $\phi = \frac {\tilde{m}} m$ satisfies
\begin{eqnarray}
\nonumber
&&\frac {\phi H^{1-\delta}} {\epsilon \delta} \le \Delta_v H + \epsilon \delta \frac {|\nabla_v H|^2} {H^{1-\delta}} - v \cdot \nabla_v H + v \cdot \nabla_x H -\nabla_x V(x) \cdot \nabla_v H.
\end{eqnarray}
From the very definition of $H$, we have
\begin{equation}
\nonumber
\nabla_v H=6 v+ 2 x ,\quad \nabla_x H = 2 v + 2 x,\quad \Delta_v H= 6.
\end{equation}
Choosing $\epsilon>0$ arbitrarily if $0 < 2\delta < \gamma$, and $\epsilon$ small enough if $2\delta = \gamma$, we deduce
\begin{eqnarray}\label{L2C2}
\nonumber
&&\Delta_v H +2 \epsilon \delta \frac {|\nabla_v H|^2} {H^{1-\delta}} +v \cdot \nabla_x H - v \cdot \nabla_v H -\nabla_x V(x) \cdot \nabla_v H
\\ \nonumber
&=& 6+ \epsilon \delta \frac{(6 v+2 x)^2} {H^{1-\delta}} + 2 v^2+2 x \cdot v - 6 v^2- 2 x \cdot v - 6 v \cdot \nabla_x V(x) - 2 x \cdot \nabla_x V(x)
\\ \nonumber
&\le&(2 v^2 +C_1 v + C_2 v^{2\delta} - 6 v^2 ) +(C_3 \epsilon \delta x^{2\delta}- 2 x \cdot \nabla_x V(x)) +C
\\ \nonumber
&\le&-C_4 v ^2 - C_5 x \cdot \nabla_x V(x) + C_6
\\ \nonumber
&\le&- C_7H^{\frac \gamma 2} + K\chi_{R},
\end{eqnarray}
for some constants $C_i, K ,R>0$. As a consequence, we have proved
\begin{equation}
\nonumber
\phi - K\chi_R\le \frac {-C} {H^{1-\delta-\frac \gamma 2}} \le 0 ,
\end{equation}
which is nothing but (\ref{E32}).\\
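The estimate used in Step 2 can also be checked numerically on a finite grid; the sketch below does so in dimension $d=1$ with $\gamma=1/2$, $\delta=\gamma/2$ and a small $\epsilon$ (these choices, as well as the size of the excluded ball, are assumptions made only for this illustration).
\begin{verbatim}
# Minimal numerical check (d = 1, illustration only) of the estimate in Step 2:
# with V(x) = <x>^gamma and H = 3v^2 + 2xv + x^2 + 1, the quantity
#   Q = Delta_v H + 2*eps*delta*|grad_v H|^2/H^(1-delta)
#       + v*dH/dx - v*dH/dv - V'(x)*dH/dv
# should satisfy Q <= -c H^(gamma/2) outside a large enough ball.  The values of
# gamma, delta, eps and of the excluded ball are assumptions for illustration.
import numpy as np

gamma, delta, eps = 0.5, 0.25, 0.05     # here 2*delta = gamma, so eps is taken small
xs = np.linspace(-100.0, 100.0, 801)
vs = np.linspace(-100.0, 100.0, 801)
x, v = np.meshgrid(xs, vs)

H    = 3*v**2 + 2*x*v + x**2 + 1
dVdx = gamma * x * (1 + x**2)**(gamma/2 - 1)            # V'(x) for V(x) = <x>^gamma
Q    = 6 + 2*eps*delta*(6*v + 2*x)**2 / H**(1 - delta) \
       + v*(2*v + 2*x) - v*(6*v + 2*x) - dVdx*(6*v + 2*x)

outside = H > 70.0**2                                   # stay away from a finite ball
print("max of Q / H^(gamma/2) outside the ball:",
      (Q[outside] / H[outside]**(gamma/2)).max())       # negative, as required
\end{verbatim}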
Step 3. In the following, we denote by $f_t =S_{\mathcal B}(t)f_0$ the solution to the evolution equation $\partial_t f ={\mathcal B} f$, $f(0)=f_0$. On the one hand, by (\ref{E32}) we have
\begin{eqnarray}
\nonumber
\frac {d} {dt} \int f_t^p G^{-(p-1)} e^{2\epsilon H^{\delta} } = \int f_t^{p-1} ({\mathcal B} f_t)G^{-(p-1)} e^{2\epsilon H^\delta} \le 0,
\end{eqnarray}
which implies
\begin{equation}
\nonumber
\int f_t^p G^{-(p-1)} e^{2\epsilon H^{\delta}} \le \int f_0^p G^{-(p-1)} e^{2\epsilon H^{\delta}}:= Y_1, \quad \forall t \ge 0
\end{equation}
On the other hand, defining
\begin{equation}
\nonumber
Y :=\int f_t^p G^{-(p-1)} e^{\epsilon H^{\delta} } ,
\end{equation}
using again (\ref{E32}), we have
\begin{eqnarray}
\nonumber
\frac {d} {dt} Y&= &p\int f_t^{p-1} {\mathcal B} f_t G^{-(p-1)} e^{\epsilon H^{\delta} }
\\ \nonumber
&\le& -a \int f_t^p G^{-(p-1)} e^{\epsilon H^{\delta} } H^{\delta + \frac \gamma 2 -1}
\\ \nonumber
&\le& -a \int f_t^p G^{-(p-1)}e^{\epsilon H^{\delta} } \langle x \rangle^{2\delta +\gamma-2}
\\ \nonumber
&\le& -a \int_{B_{|x| \le \rho}} f_t^p G^{-(p-1)} e^{\epsilon H^{\delta} } \langle x \rangle^{2\delta +\gamma-2},
\end{eqnarray}
for any $\rho > 0$ and for some $a>0$. As $2\delta + \gamma < 2$, $ 0 \le |x| \le \rho$ implies $\langle x \rangle^{2\delta +\gamma-2} \ge \langle \rho \rangle^{2\delta +\gamma-2}$, we deduce
\begin{eqnarray}
\nonumber
\frac {d} {dt} Y& \le &-a \langle \rho \rangle^{2\delta +\gamma-2} \int_{B_{|x| \le \rho}} f_t^p G^{-(p-1)}e^{\epsilon H^{\delta} }
\\ \nonumber
&\le& -a \langle \rho \rangle^{2\delta +\gamma-2} Y + a \langle \rho \rangle^{2\delta +\gamma-2} \int_{B_{|x| \ge \rho}} f_t^p G^{-(p-1)} e^{\epsilon H^{\delta} } ,
\end{eqnarray}
Using that $e^{\epsilon \langle x \rangle^{2\delta}} \ge e^{\epsilon \langle \rho \rangle^{2\delta}}$ on $ |x| \ge \rho$, we get
\begin{eqnarray}
\nonumber
\frac {d} {dt} Y&\le& -a \langle \rho \rangle^{2\delta +\gamma-2} Y + a \langle \rho \rangle^{2\delta +\gamma-2} e^{-\epsilon \langle \rho \rangle^{2\delta} } \int_{B_{|x| \ge \rho}} f_t^p G^{-(p-1)} e^{\epsilon H^{\delta} } e^{\epsilon \langle x \rangle^{2\delta}}
\\ \nonumber
&\le& -a \langle \rho \rangle^{2\delta +\gamma-2} Y + a \langle \rho \rangle^{2\delta +\gamma-2} e^{-\epsilon \langle \rho \rangle^{2\delta} } \int f_t^p G^{-(p-1)} e^{\epsilon H^{\delta} } e^{\epsilon \langle x \rangle^{2\delta} }
\\ \nonumber
&\le& -a \langle \rho \rangle^{2\delta +\gamma-2} Y + a \langle \rho \rangle^{2\delta +\gamma-2} e^{-\epsilon \langle \rho \rangle^{2\delta}} C Y_1.
\end{eqnarray}
Thanks to Gr\"{o}nwall's Lemma, we obtain
\begin{eqnarray}
\nonumber
Y(t) &\le& e^{-a \langle \rho \rangle^{2\delta +\gamma-2}t}Y(0) + C e^{- \epsilon \langle \rho \rangle^{2\delta}}Y_1
\\ \nonumber
&\lesssim& (e^{-a \langle \rho \rangle^{2\delta +\gamma-2}t}+ e^{- \epsilon \langle \rho \rangle^{2\delta} })Y_1,
\end{eqnarray}
Finally, choosing $\rho$ such that $ a \langle \rho \rangle^{2\delta +\gamma-2} t = \epsilon \langle \rho \rangle^{2\delta}$ , that is $\langle \rho \rangle^{2-\gamma}=C t$, we deduce
\begin{equation}
\nonumber
Y(t) \le C_1 e^{- C_2 t^{\frac {2\delta} {2-\gamma}} }Y_1 ,
\end{equation}
for some $C_i>0$, which concludes the proof of (\ref{E33}).
\qed
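The optimization over $\rho$ at the end of Step 3 can be visualized numerically: for each $t$ one minimizes the sum of the two exponential terms over $\rho$ and compares it with the claimed envelope $e^{-C t^{2\delta/(2-\gamma)}}$. In the sketch below the constants $a$, $\epsilon$, $\gamma$, $\delta$ are assumptions chosen only for illustration.
\begin{verbatim}
# Minimal sketch: the optimization over rho at the end of Step 3.  For each t we
# minimize  exp(-a<rho>^(2 delta+gamma-2) t) + exp(-eps <rho>^(2 delta))  over rho
# and compare with the claimed envelope ~ exp(-t^(2 delta/(2-gamma))).
# The constants a, eps, gamma, delta are assumptions chosen for illustration.
import numpy as np

a, eps, gamma, delta = 1.0, 1.0, 0.5, 0.25
rho = np.linspace(0.0, 500.0, 100001)
br  = np.sqrt(1.0 + rho**2)                              # <rho>

for t in (10.0, 100.0, 1000.0):
    bound = np.min(np.exp(-a * br**(2*delta + gamma - 2) * t)
                   + np.exp(-eps * br**(2*delta)))
    envelope = 2.0 * np.exp(-t**(2*delta / (2.0 - gamma)))
    print(f"t = {t:7.1f}   optimized bound = {bound:.3e}   envelope = {envelope:.3e}")
\end{verbatim}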
\\
\smallskip
\smallskip
Now we come to prove Theorem \ref{T31}.\\
\noindent {\sl Proof of Theorem \ref{T31}.}
We recall that from (\ref{23}), we have
\begin{equation}
\nonumber
\Vert S_{\mathcal L}(t) \Vert_{L^2(G^{-1/2} ) \to L^2(G^{-1/2} )} \lesssim 1, \quad \forall t \ge 0
\end{equation}
From the very definition of ${\mathcal A}$ we have
\begin{equation}
\nonumber
\Vert {\mathcal A} \Vert_{L^2(G^{-1/2} ) \to L^2( e^{2\epsilon H^{\delta}} G^{-1/2} )} \lesssim 1.
\end{equation}
From Lemma \ref{L31} case $p=2$, we have
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t) \Vert_{L^2 ( e^{2\epsilon H^{\delta}} G^{-1/2} ) \to L^2(e^{\epsilon H^{\delta}} G^{-1/2} )} \lesssim e^{-at^{\frac {2\delta } { 2-\gamma} }}, \quad \forall t \ge 0.
\end{equation}
Gathering the three estimates and using Duhamel's formula
\begin{equation}
\nonumber
S_{\mathcal L} = S_{\mathcal B} + S_{\mathcal B} {\mathcal A} * S_{\mathcal L}
\end{equation}
we deduce
\begin{equation}
\nonumber
\Vert S_{\mathcal L}(t) \Vert_{L^2(e^{2\epsilon H^{\delta}} G^{-1/2} ) \to L^2(e^{\epsilon H^{\delta}} G^{-1/2} )} \lesssim 1, \quad \forall t \ge 0.
\end{equation}
In the following, we denote by $f_t =S_{\mathcal L}(t)f_0$ the solution to the evolution equation $\partial_t f ={\mathcal L} f$, $f(0, \cdot)=f_0$. Taking $2\delta =\gamma$, $\epsilon$ small enough, we have in particular
\begin{equation}
\nonumber
\int f_t^2 G^{-1} e^{\epsilon H^{\frac \gamma 2} } \le C \int f_0^2 G^{-1} e^{2\epsilon H^{\frac \gamma 2} } =:Y_3.
\end{equation}
We define
\begin{equation}
\nonumber
Y_2(t):=((f_t, f_t)),
\end{equation}
where $((\cdot, \cdot))$ is the scalar product defined in Section \ref{sec2}.
Thanks to the result in (\ref{22}), we have
\begin{eqnarray}
\nonumber
\frac {d} {dt} Y_2 &\le& -a \int f_t^2 G^{-1}\langle x \rangle^{2(\gamma-1)}
\\ \nonumber
&\le&-a \int_{B_{|x| \le \rho}} f_t^2 G^{-1} \langle x \rangle^{2(\gamma-1)},
\end{eqnarray}
for any $\rho \ge 0$. Using the same argument as in Lemma \ref{L31}, we deduce
\begin{eqnarray}
\nonumber
Y_2(t) &\le& C e^{-a \langle \rho \rangle^{2(\gamma-1)}t}Y_2(0) + C e^{- \epsilon_2 \langle \rho \rangle^{\gamma}} Y_3
\\ \nonumber
&\lesssim& (e^{-a \langle \rho \rangle^{2(\gamma-1)}t} + e^{- \epsilon_2 \langle \rho \rangle^{\gamma}})Y_3.
\end{eqnarray}
Choosing $\rho$ such that $ a \langle \rho \rangle^{2(\gamma-1)} t = \epsilon_2 \langle \rho \rangle^{\gamma}$ , that is $\langle \rho \rangle^{2-\gamma}=C t$, we conclude
\begin{equation}
\nonumber
Y_2(t) \le C_1 e^{- C_2 t^{\gamma/(2-\gamma)}} Y_3,
\end{equation}
for some constants $C_i>0$. As $H^{\frac \gamma 2} \lesssim C(\frac {v^2} 2 +V(x)) $, we have
\begin{equation}
\nonumber
e^{\epsilon H^{\frac \gamma 2} } \le G^{-C \epsilon},
\end{equation}
Taking $\epsilon$ small enough, this concludes the proof of Theorem \ref{T31}.
\qed
\bigskip
\section{Regularization property of $S_{\mathcal B}$}
\label{sec3}
\setcounter{equation}{0}
\setcounter{theo}{0}
In this section we denote by ${\mathcal L}^{*}= {\mathcal L}^{*}_{G^{- 1 /2} }= {\mathcal S}- {\mathcal T}$ the dual operator of ${\mathcal L}$ on $L^2({G^{-1/2}})$. In other words, ${\mathcal L}^*$ is defined by the identity
\begin{equation}
\nonumber
\int ({\mathcal L} f) g G^{-1}= \int ({\mathcal L}^{*}g) f G^{-1}.
\end{equation}
for any smooth functions $f, g$. We also denote ${\mathcal B}^{*} = {\mathcal L}^{*} - K\chi_R$.
The aim of this section is to establish the following regularization property. The proof closely follows the proof of similar results in \cite{H, MM, V}.
\begin{theo}\label{T41}
For any $0 \le \delta < 1$, there exists $\eta > 0$ such that
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t)f \Vert_{L^2(G^{-1/2(1+\delta)})} \lesssim \frac 1 {t^{\frac {5d+1} {2} } } \Vert f \Vert_{L^1(G^{-1/2(1+\delta)})}, \ \ \forall t \in [0,\eta] .
\end{equation}
Similarly, for any $0 \le \delta < 1$, there exists $\eta > 0$ such that
\begin{equation}
\nonumber
\Vert S_{{\mathcal B}^*}(t)f \Vert_{L^2(G^{-1/2(1+\delta)})} \lesssim \frac 1 {t^{\frac {5d+1} {2} } } \Vert f \Vert_{L^1(G^{-1/2(1+\delta)})}, \ \ \forall t \in [0,\eta] .
\end{equation}
As a consequence, for any $0 \le \delta < 1$, there exists $\eta > 0$ such that
\begin{equation}
\nonumber
\Vert S_{{\mathcal B}}(t)f \Vert_{L^\infty(G^{-1/2})} \lesssim \frac 1 {t^{\frac {5d+1} {2} } } \Vert f \Vert_{L^2(G^{-1/2})}, \ \ \forall t \in [0,\eta] .
\end{equation}
\end{theo}
We start with some elementary lemmas.
\begin{lem}\label{L41}
For any $0 \le \delta < 1$, we have
\begin{eqnarray}\label{E41}
\nonumber
\quad \int (f ({\mathcal L} g) + g ({\mathcal L} f)) G^{-(1 + \delta )}&=&-2 \int \nabla_v(fG^{-1} )\cdot \nabla_v(gG^{-1 }) G^{1-\delta}
\\
&+&\int (\delta d-\delta(1-\delta) v^2) f g G^{-(1+\delta)}
\end{eqnarray}
in particular, this implies
\begin{eqnarray}\label{E42}
\nonumber
\int f({\mathcal L} f) G^{-(1+\delta)} &=& -\int |\nabla_v (fG^{-1})|^2 G^{1-\delta} + \frac {\delta d} {2} \int f^2 G^{-(1+\delta)}
\\
&-& \frac {\delta(1-\delta)} {2} \int v^2 f^2 G^{-(1+\delta)},
\end{eqnarray}
similarly, for any $0 \le \delta < 1$, we have
\begin{eqnarray}\label{E43}
\nonumber
\quad \int f ({\mathcal L} f) G^{-(1+\delta)} &=& -\int |\nabla_v f|^2 G^{-(1+\delta)} + \frac { \delta (1 + \delta)} {2} \int v^2 f^2 G^{-(1+\delta)}
\\
&+&\frac {(2+\delta) d } {2} \int f^2G^{-(1+\delta)}.
\end{eqnarray}
All the equalities remain true when ${\mathcal L}$ is replaced by ${\mathcal L}^{*}$.
\end{lem}
\smallskip
\noindent {\sl Proof of Lemma \ref{L41}.} Recalling that ${\mathcal T} (G^{-(1+\delta)}) =0$, we have
\begin{eqnarray}
\nonumber
\int f( {\mathcal T} g) G^{-(1+\delta )} = \int {\mathcal T} (fG^{-(1 + \delta)}) g =-\int ({\mathcal T} f) g G^{-(1 + \delta)},
\end{eqnarray}
which implies
\begin{eqnarray}
\nonumber
\int f( {\mathcal T} g) G^{-(1+\delta )} +\int ({\mathcal T} f) g G^{-(1 + \delta)}=0.
\end{eqnarray}
For the term involving the operator ${\mathcal S}$ we have
\begin{eqnarray}
\nonumber
\int f ({\mathcal S} g) G^{-(1 +\delta ) } &=& - \int \nabla_v (fG^{-(1 + \delta)}) \cdot (\nabla_v g +v g)
\\ \nonumber
&=&- \int (\nabla_v f + (1+\delta)v f ) \cdot (\nabla_v g +v g)G^{-(1 + \delta)}
\\ \nonumber
&=& -\int \nabla_v(fG^{-1}) \cdot \nabla_v(gG^{-1}) G^{1-\delta}
\\ \nonumber
&&- \int( \delta v^2 f g +\delta f v \cdot \nabla_v g ) G^{-(1+\delta)} ,
\end{eqnarray}
using integration by parts
\begin{eqnarray}
\nonumber
\int \delta f v \cdot \nabla_v g G^{-(1+\delta)} &=& - \int \delta g\nabla_v \cdot ( v f G^{-(1+\delta)})
\\ \nonumber
&= &- \int \delta g v \cdot \nabla_v f G^{-(1+\delta)}
\\ \nonumber
&& - \int (\delta d + \delta (1+\delta) v^2 ) f gG^{-(1+\delta)} ,
\end{eqnarray}
so we deduce
\begin{eqnarray}
\nonumber
&&\int (f ({\mathcal S} g )+ g ({\mathcal S} f))G^{-(1 +\delta) }
\\ \nonumber
&=& -2 \int \nabla_v(fG^{-1}) \cdot \nabla_v(gG^{-1}) G^{1-\delta}+ \int ( \delta d -\delta(1-\delta)v^2 ) f g G^{-(1+\delta)},
\end{eqnarray}
so (\ref{E41}) and (\ref{E42}) follow by combining the two contributions above.
Finally, we compute
\begin{eqnarray}
\nonumber
&&\int f {\mathcal S} f G^{-(1 +\delta ) }
\\ \nonumber
&=& - \int (\nabla_v f + (1+\delta)v f ) \cdot (\nabla_v f +v f)G^{-(1 + \delta)}
\\ \nonumber
&=& -\int |\nabla_v f|^2 G^{-(1+\delta)} - \int (1+\delta) v^2 f^2 G^{-(1+\delta)} - \int (2+ \delta) f v \cdot\nabla_v f G^{-(1+\delta)}
\\ \nonumber
&=& -\int |\nabla_v f|^2 G^{-(1+\delta)} - \int (1+\delta) v^2 f^2 G^{-(1+\delta)} +\frac {2+ \delta} {2} \int \nabla_v \cdot (v G^{-(1+\delta)}) f^2
\\ \nonumber
&=& -\int |\nabla_v f|^2 G^{-(1+\delta)} + \frac { \delta (1 + \delta)} {2} \int v^2 f^2 G^{-(1+\delta)} +\frac {(2+\delta) d } {2} \int f^2 G^{-(1+\delta)},
\end{eqnarray}
so (\ref{E43}) follows by putting together the above equality with
\begin{equation}
\nonumber
\int f {\mathcal T} f G^{-(1+\delta)}=0.
\end{equation}
Since the term associated with ${\mathcal T}$ is 0, by ${\mathcal L} ={\mathcal S}+{\mathcal T}, {\mathcal L}^{*}= {\mathcal S} - {\mathcal T}$, we know the same equalities will remain true when ${\mathcal L}$ is replaced by ${\mathcal L}^*$.\qed
\smallskip
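Note that for $\delta=0$ the identity (\ref{E42}) reduces to
\begin{equation}
\nonumber
\int f({\mathcal L} f)\, G^{-1} = -\int |\nabla_v (fG^{-1})|^2\, G \le 0,
\end{equation}
which is the usual dissipativity of ${\mathcal L}$ in $L^2(G^{-1/2})$; the terms proportional to $\delta$ in (\ref{E42}) and (\ref{E43}) quantify the price of working with the heavier weight $G^{-(1+\delta)/2}$.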
\begin{lem}\label{L42}
When $f_t =S_{\mathcal B}(t) f_0$, define an energy functional
\begin{eqnarray}\label{E44}
\nonumber
{\mathcal F}(t, f_t ) &:= &A \Vert f_t \Vert_{L^2(G^{-1/2 (1+\delta )})}^2 + at^2 \Vert \nabla_v f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2
\\
&+& 2 c t^4( \nabla_v f_t, \nabla_x f_t )_{L^2(G^{-1/2 (1+\delta) })} + bt^6 \Vert \nabla_x f_t \Vert_{L^2(G^{-1/2 (1+\delta)})}^2,
\end{eqnarray}
when $f_t =S_{{\mathcal B}^*}(t) f_0$, define another energy functional
\begin{eqnarray}\label{E45}
\nonumber
{\mathcal F}^*(t, f_t ) &:= &A \Vert f_t \Vert_{L^2(G^{-1/2 (1+\delta )})}^2 + at^2 \Vert \nabla_v f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2
\\
&-& 2 c t^4 ( \nabla_v f_t, \nabla_x f_t )_{L^2(G^{-1/2 (1+\delta) })} + bt^6 \Vert \nabla_x f_t \Vert_{L^2(G^{-1/2 (1+\delta)})}^2,
\end{eqnarray}
with $a, b, c >0, c \le \sqrt{ab} $ and $A$ large enough. Then for both cases we have
\begin{eqnarray}
\nonumber
\frac {d } {dt}F(t, f_t )\le -L(\Vert \nabla_v f_t \Vert_{L^2(G^{-1/2(1+\delta )})}^2 + t^4 \Vert \nabla_x f _t \Vert_{L^2 ( G^{-1/2 (1+\delta)}) }^2 ) +C\Vert f_t \Vert_{L^2(G^{-1/2(1+\delta )})}^2 ,
\end{eqnarray}
for all $t \in [0, \eta]$, for some $L>0, C>0$ and $F= {\mathcal F}$ or ${\mathcal F}^*$.
\end{lem}
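Note that, since $c \le \sqrt{ab}$, the Cauchy-Schwarz inequality gives
\begin{equation}
\nonumber
2 c t^4 |( \nabla_v f_t, \nabla_x f_t )_{L^2(G^{-1/2 (1+\delta) })}| \le a t^2 \Vert \nabla_v f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2 + b t^6 \Vert \nabla_x f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2,
\end{equation}
so that ${\mathcal F}(t,f_t) \ge A \Vert f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2$ and, similarly, ${\mathcal F}^*(t,f_t) \ge A \Vert f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2$; in particular both functionals are nonnegative.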
\noindent {Proof of Lemma \ref{L42}.} We only prove the case $F={\mathcal F}$; the proof for $F={\mathcal F}^*$ is the same. We split the computation into several parts and then put them together. First, using (\ref{E42}) and (\ref{E43}), we have
\begin{eqnarray}
\nonumber
&&\frac {d} {dt} \int f^2 G^{-(1+\delta)}
\\ \nonumber
&= &\int f ({\mathcal L} -K\chi_R) f G^{-(1+\delta)}
\\ \nonumber
&=&\frac {1-\delta} 2 \int f {\mathcal L} f G^{-(1+\delta)} +\frac {1+\delta} 2\int f {\mathcal L} f G^{-(1+\delta)} -\int K\chi_R f^2 G^{-(1+\delta)}
\\ \nonumber
&\le& -\frac {1-\delta} 2\int |\nabla_v f|^2 G^{-(1+\delta)} -\frac {1+\delta} 2\int |\nabla_v (fG^{-1})|^2 G^{1-\delta} + C \int f^2 G^{-(1+\delta)}
\\ \nonumber
&\le& -\frac {1-\delta} 2\int |\nabla_v f|^2 G^{-(1+\delta)} + C \int f^2 G^{-(1+\delta)}.
\end{eqnarray}
By
\begin{equation}\label{E46}
\partial_{x_i}{\mathcal L} f = {\mathcal L} \partial_{x_i} f + \sum\limits_{j=1}^d \partial^2_{x_i x_j} V\partial_{v_j} f,
\end{equation}
and (\ref{E42}) we have
\begin{eqnarray}
\nonumber
&&\frac {d} {dt} \int (\partial_{x_i} f)^2 G^{-(1+\delta)}
\\ \nonumber
&=&\int \partial_{x_i} f \partial_{x_i} ({\mathcal L}- K\chi_R) f G^{-(1+\delta)}
\\ \nonumber
&=& -\int |\nabla_v (\partial_{x_i} f G^{-1})|^2 G^{1-\delta} + \frac {\delta d} {2} \int (\partial_{x_i} f)^2 G^{-(1+\delta)}
\\ \nonumber
&&-\frac {\delta(1-\delta) } {2} \int v^2 (\partial_{x_i} f)^2 G^{-(1+\delta)}+ \int \partial_{x_i} f\sum\limits_{j=1}^d \partial^2_{x_i x_j} V\partial_{v_j} f G^{-(1+\delta)}
\\ \nonumber
&&-\int K\chi_R |\partial_{x_i} f|^2 G^{-(1+\delta)} - \int K \partial_{x_i} f \partial_{x_i}\chi_R f G^{-(1+\delta)}.
\end{eqnarray}
Using the Cauchy-Schwarz inequality and summing over $i$, we get
\begin{eqnarray}
\nonumber
&&\frac {d} {dt} \int |\nabla_x f|^2 G^{-(1+\delta)}
\\ \nonumber
&\le& -\sum\limits_{i=1}^d \int |\nabla_v (\partial_{x_i} f G^{-1})|^2 G^{1-\delta} - \frac {\delta(1-\delta)} {2} \int v^2 (\nabla_x f)^2 G^{-(1+\delta)}
\\ \nonumber
&&+C\int |\nabla_v f |^2 G^{-(1+\delta)} +C \int |\nabla_{x} f|^2 G^{-(1+\delta)}+ C \int | f|^2 G^{-(1+\delta)} ,
\end{eqnarray}
for some $C > 0$. Similarly using
\begin{equation}\label{E47}
\partial_{v_i}{\mathcal L} f={\mathcal L}\partial_{v_i} f -\partial_{x_i}f +\partial_{v_i} f ,
\end{equation}
and (\ref{E42}), we have
\begin{eqnarray}
\nonumber
&&\frac {d} {dt} \int (\partial_{v_i} f)^2 G^{-(1+\delta)}
\\ \nonumber
&=&\int \partial_{v_i} f \partial_{v_i} ({\mathcal L}-K\chi_R) f G^{-(1+\delta)}
\\ \nonumber
&=& -\int |\nabla_v (\partial_{v_i} f G^{-1})|^2 G^{1-\delta} + \frac {\delta d} {2} \int (\partial_{v_i} f)^2 G^{-(1+\delta)}
\\ \nonumber
&&-\frac {\delta(1-\delta)} {2} \int v^2 (\partial_{v_i} f)^2 G^{-(1+\delta)}- \int \partial_{x_i} f \partial_{v_i} f G^{-(1+\delta)}
\\ \nonumber
&&+ \int |\partial_{v_i} f|^2 G^{-(1+\delta)}- \int K\chi_R |\partial_{v_i} f|^2 G^{-(1+\delta)}- \int K \partial_{v_i} f \partial_{v_i}\chi_R f G^{-(1+\delta)}.
\end{eqnarray}
Using the Cauchy-Schwarz inequality and summing over $i$, we get
\begin{eqnarray}
\nonumber
&&\frac {d} {dt} \int |\nabla_v f|^2 G^{-(1+\delta)}
\\ \nonumber
&\le&- \sum\limits_{i=1}^d \int |\nabla_v (\partial_{v_i} f G^{-1})|^2 G^{1-\delta} + C\int |\nabla_x f||\nabla_v f|G^{-(1+\delta)}
\\ \nonumber
&& +C\int |\nabla_{v} f|^2 G^{-(1+\delta)} + C\int |f|^2 G^{-(1+\delta)} - \frac {\delta(1-\delta)} {2} \int v^2 (\nabla_v f)^2 G^{-(1+\delta)}.
\end{eqnarray}
For the cross term, we also split it into two parts
\begin{eqnarray}
\nonumber
&&\frac d {dt} \int 2\partial_{v_i} f \partial_{x_i} f G^{-(1+\delta)}
\\ \nonumber
&=& (\int \partial_{v_i} f \partial_{x_i} {\mathcal L} f G^{-(1+\delta)} +\int \partial_{v_i} {\mathcal L} f \partial_{x_i} f G^{-(1+\delta)})
\\ \nonumber
&&- (\int \partial_{v_i} f \partial_{x_i}(K\chi_R f) G^{-(1+\delta)} +\int \partial_{x_i} (K\chi_R f) \partial_{v_i } f G^{-(1+\delta)})
\\ \nonumber
&:=& W_1 +W_2.
\end{eqnarray}
Using (\ref{E46}) and (\ref{E47}) we have
\begin{eqnarray}
\nonumber
W_1&=& \int \partial_{v_i} f {\mathcal L} (\partial_{x_i} f) G^{-(1+\delta)} +\int {\mathcal L}(\partial_{v_i} f) \partial_{x_i} f G^{-(1+\delta)}
\\ \nonumber
&&+ \int\partial_{v_i} f \sum\limits_{j=1}^d \partial_{x_i x_j} V(x) \partial_{v_j} f G^{-(1+\delta)} -\int |\partial_{x_i}f|^2 G^{-(1+\delta)}
\\ \nonumber
&&+\int \partial_{x_i} f \partial_{v_i} f G^{-(1+\delta)}.
\end{eqnarray}
By (\ref{E41}), we deduce
\begin{eqnarray}
\nonumber
W_1&=&- \int 2 \nabla_v(\partial_{v_i}fG^{-1}) \cdot \nabla_v(\partial_{x_i} fG^{-1}) G^{1-\delta}+ \delta d \int \partial_{v_i} f \partial_{x_i} f G^{-(1+\delta)}
\\ \nonumber
&&-\delta(1-\delta) \int v^2 \partial_{v_i} f \partial_{x_i} f G^{-(1+\delta)}+ \int\partial_{v_i} f \sum\limits_{j=1}^d \partial_{x_i x_j} V(x) \partial_{v_j} f G^{-(1+\delta)}
\\ \nonumber
&&-\int |\partial_{x_i}f|^2 G^{-(1+\delta)} +\int \partial_{x_i} f \partial_{v_i} f G^{-(1+\delta)}.
\end{eqnarray}
For the $W_2$ term we have
\begin{eqnarray}
\nonumber
W_2&=&- \int \partial_{v_i} f \partial_{x_i}(K\chi_R f) G^{-(1+\delta)} -\int \partial_{x_i} (K\chi_R f) \partial_{v_i } f G^{-(1+\delta)}
\\ \nonumber
&=& - \int 2 K\chi_R \partial_{x_i} f \partial_{v_i} f G^{-(1+\delta)} + \int K f (\partial_{v_i} \chi_R \partial_{x_i} f+ \partial_{v_i} f \partial_{x_i} \chi_R)G^{-(1+\delta)}
\\ \nonumber
&\le& C\int |\partial_{x_i} f | | \partial_{v_i} f | G^{-(1+\delta)} + C \int |\partial_{v_i} f || f | G^{-(1+\delta)}+ C\int |f | |\partial_{x_i} f | G^{-(1+\delta)}.
\end{eqnarray}
Combining the two parts, using the Cauchy-Schwarz inequality, and summing over $i$, we get
\begin{eqnarray}
\nonumber
&&\frac {d} {dt} \int 2 \nabla_x f \cdot \nabla_v f G^{-(1+\delta)}
\\ \nonumber
&\le& - \sum\limits_{i=1}^d \int 2 \nabla_v(\partial_{v_i}fG^{-1}) \cdot \nabla_v(\partial_{x_i} fG^{-1}) G^{1-\delta} -\frac 1 2 \int |\nabla_x f|^2 G^{-(1+\delta)}
\\ \nonumber
&& + C \int |\nabla_v f|^2 G^{-(1+\delta)} + C\int |f |^2 G^{-(1+\delta)} -\delta(1-\delta) \int v^2 \nabla_v f \cdot \nabla_x f G^{-(1+\delta)}.
\end{eqnarray}
From the very definition of ${\mathcal F}$ in (\ref{E44}), we easily compute
\begin{eqnarray}
\nonumber
&&\frac d {dt} {\mathcal F}(t, f_t)
\\ \nonumber
&=& A \frac d {dt} \Vert f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2 + at^2 \frac d {dt} \Vert \nabla_v f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2
\\ \nonumber
&&+2 c t^4 \frac d {dt} \langle \nabla_v f_t, \nabla_x f_t \rangle_{L^2(G^{-1/2(1+\delta)})} + bt^6 \frac d {dt} \Vert \nabla_x f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2
\\ \nonumber
&&+ 2 a t \Vert \nabla_v f_t \Vert_{L^2(G^{-1/2 (1+\delta) })}^2 + 8c t^3 \langle \nabla_v f_t, \nabla_x f_t \rangle_{L^2(G^{-1/2 (1+\delta) })}
\\ \nonumber
&&+ 6 b t^5 \Vert \nabla_x f_t \Vert_{L^2(G^{-1/2(1+\delta) })}^2.
\end{eqnarray}
Gathering all the inequalities above, we have
\begin{eqnarray}
\nonumber
&&\frac d {dt} {\mathcal F}(t, f_t)
\\ \nonumber
&\le& (2at - \frac {A(1-\delta)} 2+ Cat^2 + 2Ct^4c +C b t^6) \int | \nabla_v f_t |^2 G^{-(1+\delta)}
\\ \nonumber
&&+ (6bt^5- \frac c 2 t^4+C b t^6)\int |\nabla_x f_t|^2 G^{-(1+\delta)} + (8ct^3 +Cat^2 ) \int |\nabla_{v} f_t | |\nabla_{x} f_t | G^{-(1+\delta)}
\\ \nonumber
&& -( a t^2 \sum\limits_{i=1}^{d} \int |\nabla_v (\partial_{v_i} f_t G^{-1})|^2 G^{1-\delta} +b t^6 \sum\limits_{i=1}^{d} \int |\nabla_v (\partial_{x_i} f_t G^{-1})|^2 G^{1-\delta}
\\ \nonumber
&& + 2 ct^4 \sum\limits_{i=1}^{d} \int \nabla_v(\partial_{v_i} f_t G^{-1}) \cdot \nabla_v(\partial_{x_i} f_t G^{-1}) G^{1-\delta} ) - \frac {\delta(1-\delta)} {2} (at^2 \int v^2 (\nabla_v f)^2 G^{-(1+\delta)}
\\ \nonumber
&& + bt^6\int v^2 (\nabla_x f)^2 G^{-(1+\delta)} + 2c t^4\int v^2 \nabla_v f \cdot \nabla_x f G^{-(1+\delta)} ) +C\int f_t^2 G^{-(1+\delta)},
\end{eqnarray}
for some $C>0$. We observe that
\begin{eqnarray}
\nonumber
&&|2c t^4\int v^2 \nabla_v f \cdot \nabla_x f G^{-(1+\delta)} |
\\ \nonumber
&\le& at^2 \int v^2 (\nabla_v f)^2 G^{-(1+\delta)} + bt^6\int v^2 (\nabla_x f)^2 G^{-(1+\delta)} ,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber
&&| 2 ct^4 \sum\limits_{i=1}^{d} \int \nabla_v(\partial_{v_i}f_tG^{-1}) \cdot \nabla_v(\partial_{x_i} f_tG^{-1}) G^{1-\delta} |
\\ \nonumber
&\le & a t^2 \sum\limits_{i=1}^{d} \int |\nabla_v (\partial_{v_i} f_t G^{-1})|^2 G^{1-\delta} +b t^6 \sum\limits_{i=1}^{d} \int |\nabla_v (\partial_{x_i} f_t G^{-1})|^2 G^{1-\delta}.
\end{eqnarray}
by our choice of $a, b, c$. Thus, taking $A$ large enough and $\eta>0$ small enough ($t \in [0, \eta]$), we obtain
\begin{eqnarray}
\nonumber
\frac d {dt} {\mathcal F}(t, f_t)\le -L(\Vert \nabla_v f_t \Vert_{L^2(G^{-1/2(1+\delta )})}^2 + t^4 \Vert \nabla_x f _t \Vert_{L^2 ( G^{-1/2 (1+\delta)}) }^2) +C\Vert f_t \Vert_{L^2(G^{-1/2(1+\delta )})}^2,
\end{eqnarray}
for some $L, C > 0$, and that ends the proof.\qed \smallskip
\begin{rem}
For the case $F = {\mathcal F}^*$, the only difference in the proof is to change (\ref{E46}) and (\ref{E47}) into
\begin{equation}
\nonumber
\partial_{x_i}{\mathcal L}^* f = {\mathcal L}^* \partial_{x_i} f - \partial_{x_i}(\nabla_x V(x) \cdot \nabla_v f)= {\mathcal L}^* \partial_{x_i} f - \sum\limits_{j=1}^d \partial^2_{x_i x_j} V\partial_{v_j} f,
\end{equation}
and
\begin{equation}
\nonumber
\partial_{v_i}{\mathcal L}^* f={\mathcal L}^* \partial_{v_i} f +\partial_{x_i}f +\partial_{v_i} f.
\end{equation}
\end{rem}
\smallskip
The remaining results of this section hold in both cases.
\begin{lem}\label{L53} For any $0 \le \delta <1$, we have
\begin{eqnarray}
\nonumber
\int |\nabla_{x, v} (f G^{-1/2(1+\delta)}) |^2 \le \int |\nabla_{x, v} f |^2 G^{-(1+\delta)} +C \int f^2 G^{-(1+\delta)}.
\end{eqnarray}
\end{lem}
\noindent {Proof of Lemma \ref {L53}.}
For any weight function $m$ we have
\begin{eqnarray}
\nonumber
\int |\nabla_x (f m) |^2 &=& \int|\nabla_x f m +\nabla_x m f |^2
\\ \nonumber
&=&\int |\nabla_x f |^2 m^2 +\int |\nabla_x m |^2 f^2 +\int 2 f m\, \nabla_x f \cdot \nabla_x m
\\ \nonumber
&=& \int |\nabla_x f |^2 m^2 +\int (|\nabla_x m |^2-\frac 1 2 \Delta_x (m^2)) f^2,
\end{eqnarray}
taking $m = G^{-1/2(1+\delta)}$ we have
\begin{eqnarray}
\nonumber
&&\int |\nabla_x (f G^{-1/2(1+\delta)}) |^2
\\ \nonumber
&=&\int |\nabla_x f |^2 G^{-(1+\delta)} +\int -(\frac {(1+\delta)^2} 4|\nabla_x V(x)|^2 +\frac {1+\delta} 2 \Delta_x V(x)) f^2 G^{-(1+\delta)}
\\ \nonumber
&\le& \int |\nabla_x f |^2 G^{-(1+\delta)} +C \int f^2 G^{-(1+\delta)}.
\end{eqnarray}
Similarly, we have
\begin{eqnarray}
\nonumber
&&\int |\nabla_v (f G^{-1/2(1+\delta)}) |^2
\\ \nonumber
&=&\int |\nabla_v f |^2 G^{-(1+\delta)} +\int -(\frac {(1+\delta)^2} 4v^2 +\frac {1+\delta} 2 d) f^2 G^{-(1+\delta)}
\\ \nonumber
&\le& \int |\nabla_v f |^2 G^{-(1+\delta)}.
\end{eqnarray}
Putting together the two inequalities we obtain the result.
\qed
\begin{lem}\label{L54}
Nash's inequality: for any $f \in L^1(\R^d) \cap H^{1}(\R^d)$, there exists a constant $C_d$ such that
\begin{equation}
\nonumber
\Vert f \Vert_{L^2}^{1+\frac 2 d} \le C_d \Vert f \Vert_{L^1}^{ 2/ d}\Vert \nabla f \Vert_{L^2}.
\end{equation}
\end{lem}
\noindent For the proof of Nash's inequality, we refer to \cite{LL}, Section 8.13 for instance.
\qed
\begin{lem}\label{L55} For any $0 \le \delta <1$ we have
\begin{eqnarray}\label{E48}
\frac d {dt} \int | f | G^{-1/2(1+\delta)} \le d \int |f | G^{-1/2(1+\delta)},
\end{eqnarray}
which implies
\begin{eqnarray}
\nonumber
\int | f_t | G^{-1/2(1+\delta)} \le Ce^{ d t }\int |f_0 | G^{-1/2(1+\delta)}.
\end{eqnarray}
In particular we have
\begin{eqnarray}\label{E49}
\int | f_t | G^{-1/2(1+\delta)} \le C\int |f_0 | G^{-1/2(1+\delta)} , \ \ \ \forall t \in [0,\eta],
\end{eqnarray}
for some constant $C>0$.
\end{lem}
\noindent {Proof of Lemma \ref{L55}.} By Lemma \ref{L51} in the next section, letting $p=1$, we have
\begin{eqnarray}
\nonumber
&&\frac d {dt} \int | f | G^{-1/2(1+\delta)}
\\ \nonumber
&=& \int |f | (\Delta_v G^{-1/2(1+\delta)} -v \cdot \nabla_v G^{-1/2(1+\delta)}
\\ \nonumber
&&+ v \cdot \nabla_x G^{-1/2(1+\delta)} - \nabla V(x) \cdot \nabla_v G^{-1/2(1+\delta)} -K\chi_R G^{-1/2(1+\delta)})
\\ \nonumber
&\le& \int |f |(\frac {1+\delta} 2 d -\frac {(1+\delta) (1-\delta)} 4 v^2) G^{-1/2(1+\delta)} \le \int |f | d G^{-1/2(1+\delta)}.
\end{eqnarray}
Thus (\ref{E48}) is proved. As ${\mathcal T} G^{-1/2(1+\delta)} =0$, the result remains true when $f_t =S_{{\mathcal B}^*}(t) f_0$.
\qed
\smallskip
Now we come to the proof of Theorem \ref{T41}.
\smallskip
\noindent {Proof of Theorem \ref{T41}.} We define
\begin{equation}
\nonumber
{\mathcal G}(t, f_t)=B \Vert f_t \Vert_{L^1(G^{-1/2(1+\delta)})}^2 + t^Z {\mathcal F} (t, f_t),
\end{equation}
with $B, Z > 0 $ to be fixed and ${\mathcal F}$ defined in Lemma \ref{L42}. We choose $t \in [0, \eta]$, with $\eta$ small enough that $(a+b+c) Z\eta^{Z+1} \le \frac 1 2 L \eta^Z$ ($a, b, c, L$ are also defined in Lemma \ref{L42}); by (\ref{E48}) and Lemma \ref{L42} we have
\begin{eqnarray}
\nonumber
\frac d {dt} {\mathcal G}(t, f_t) &\le& dB \Vert f_t \Vert_{L^1(G^{-1/2(1+\delta)})}^2 +Zt^{Z-1} {\mathcal F}(t, f_t)
\\ \nonumber
&&- L t^Z(\Vert \nabla_v f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2 + t^4 \Vert \nabla_x f _t \Vert_{L^2 ( G^{-1/2(1+\delta)} ) }^2 )
\\ \nonumber
&&+Ct^Z \Vert f_t \Vert_{L^2(G^{-1/2(1+\delta )})}^2
\\ \nonumber
&\le& dB \Vert f_t \Vert_{L^1(G^{-1/2(1+\delta)} ) }^2 +Ct^{Z-1}\Vert f_t \Vert_{L^2(G^{-1/2(1+\delta )})}^2
\\ \nonumber
&&- \frac L 2 t^Z(\Vert \nabla_v f_t \Vert_{L^2( G^{-1/2(1+\delta)} )}^2 + t^4 \Vert \nabla_x f _t \Vert_{L^2 ( G^{-1/2(1+\delta)} ) }^2 ).
\end{eqnarray}
Nash's inequality (Lemma \ref{L54}) and Lemma \ref{L53} imply
\begin{eqnarray}
\nonumber
\int f_t^2 G^{-(1+\delta)} &\le& (\int |f_t| G^{-1/2(1+\delta)})^{\frac 4 {d+2}} (\int |\nabla_{x, v}(f_t G^{-1/2(1+\delta)}) |^2 )^{\frac d {d+2}}
\\ \nonumber
&\le& (\int |f_t| G^{-1/2(1+\delta)} )^{\frac 4 {d+2}} (\int |\nabla_{x, v}f_t |^2 G^{-(1+\delta)} +C\int f_t^2 G^{-(1+\delta)})^{\frac d {d+2}}.
\end{eqnarray}
Using Young's inequality, we have
\begin{equation}
\nonumber
\Vert f_t \Vert_{L^2( G^{-1/2(1+\delta)} )}^2 \le C_{\epsilon} t^{-5d} \Vert f_t \Vert_{L^1(G^{-1/2(1+\delta)} )}^2 + \epsilon t^5 (\Vert \nabla_{x, v} f_t \Vert_{L^2( G^{-1/2(1+\delta)} )}^2 + C\Vert f_t \Vert_{L^2(G^{-1/2(1+\delta)} )}^2).
\end{equation}
Taking $\epsilon$ small such that $C \epsilon \eta^5 \le \frac 1 2$, we deduce
\begin{equation}
\nonumber
\Vert f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2 \le 2C_{\epsilon} t^{-5d} \Vert f_t \Vert_{L^1(G^{-1/2(1+\delta)})}^2 + 2\epsilon t^5 \Vert \nabla_{x, v} f_t \Vert_{L^2(G^{-1/2(1+\delta)})}^2 .
\end{equation}
Plugging this into the previous estimate and taking $\epsilon$ small enough, we obtain
\begin{equation}
\nonumber
\frac d {dt} {\mathcal G}(t, f_t) \le dB \Vert f_t \Vert_{L^1(G^{-1/2(1+\delta)})}^2+ C_1t^{Z-1-5d} \Vert f_t \Vert_{L^1(G^{-1/2(1+\delta)})}^2,
\end{equation}
for some $C_1>0$. Choosing $Z=1+5d$, and using (\ref{E49}), we deduce
\begin{equation}
\nonumber
\forall t \in [0, \eta], \ \ \ {\mathcal G}(t, f_t) \le {\mathcal G}(0, f_0) + C_2\Vert f_0 \Vert_{L^1(G^{-1/2(1+\delta)} )}^2 \le C_3 \Vert f_0 \Vert_{L^1(G^{-1/2(1+\delta)})}^2,
\end{equation}
which ends the proof.
\qed
\bigskip
\section{ $S_{\mathcal B}$ decay in larger spaces}
\label{sec:L1}
\setcounter{equation}{0}
\setcounter{theo}{0}
The aim of this section is to prove the following decay estimate for the semigroup $S_{\mathcal B}$ which will be useful in the last section where we will prove Theorem \ref{T11} in full generality.
\begin{theo}\label{T51}
Let $H=1 + x^2+ 2 v \cdot x +3v^2$. For any $\theta \in(0, 1)$ and
for any $l > 0$, we have
\begin{equation}
\nonumber
\Vert {\mathcal S}_{\mathcal B}(t) \Vert_{L^1(H^l) \to L^1(H^{l \theta})}\lesssim (1+t)^{-a},
\end{equation}
where
\begin{equation}
\nonumber
a = \frac {l(1- \theta)} {1- \frac \gamma 2 }.
\end{equation}
\end{theo}
We start with an elementary identity.
\begin{lem}\label{L51}
For the kinetic Fokker-Planck operator ${\mathcal L}$, let $m$ be a weight function. For any $p \in [1, \infty]$ we have
\begin{equation}
\nonumber
\int ({\mathcal L} f)f^{p-1}m^p=-(p-1)\int |\nabla_v(mf)|^2 (mf)^{p-2} + \int f^pm^p \phi ,
\end{equation}
with
\begin{equation}
\nonumber
\phi = \frac {2} {p^{'}} \frac {|\nabla_v m|^2} {m^2} + (\frac 2 p -1)\frac {\Delta_v m} {m} + \frac {d} {p^{'}}- v \cdot \frac {\nabla_v m} {m} -\frac {{\mathcal T} m } {m}.
\end{equation}
In particular when $p=1$, we have
\begin{equation}
\nonumber
\phi = \frac {\Delta_v m} {m}- v \cdot \frac {\nabla_v m} {m} -\frac {{\mathcal T} m } {m}.
\end{equation}
\end{lem}
\noindent {\sl Proof of Lemma \ref{L51}.} We split the integral as
\begin{equation}
\nonumber
\int ({\mathcal L} f)f^{p-1}m^p=\int f^{p-1}{\mathcal S} f m^p +\int f^{p-1}{\mathcal T} f m^p.
\end{equation}
We first compute the contribution of the term with the operator ${\mathcal T}$:
\begin{equation}
\nonumber
\int f^{p-1}{\mathcal T} f m^p =\frac 1 p \int {\mathcal T} (f^p)m^p = -\int f^p m^p \frac {{\mathcal T} m} {m} .
\end{equation}
\smallskip
Concerning the term with operator ${\mathcal S}$, we split it also into two parts
\begin{equation}
\nonumber
\int ({\mathcal S} f)f^{p-1}m^p = \int f^{p-1} m^p (\Delta_v f +\hbox{div}_v(v f)) := C_1+C_2.
\end{equation}
We first compute the $C_2$ term:
\begin{eqnarray}
\nonumber
C_2&=&\int f^{p-1} m^p(d f+ v \cdot \nabla_v f)
\\ \nonumber
&=&\int d f^pm^p - \frac 1 p \int f^p \hbox{div}_v(v m^p)
\\ \nonumber
&=&\int f^p[(1-\frac 1 p )d -v\cdot \frac {\nabla_v m}{m}]m^p.
\end{eqnarray}
We then turn to the $C_1$ term:
\begin{eqnarray}
\nonumber
C_1&=& \int f^{p-1} m^p \Delta_v f =- \int \nabla_v (f^{p-1}m^p) \cdot \nabla_v f
\\ \nonumber
&=&\int - (p-1)|\nabla_v f|^2 f^{p-2} m^{p} - \frac 1 p \int \nabla_v f^p \cdot \nabla_v m^p.
\end{eqnarray}
Using $\nabla_v(m f) =m \nabla_v f + f \nabla_v m$, we deduce
\begin{eqnarray}
\nonumber
C_1&=&-(p-1) \int |\nabla_v(mf)|^2f^{p-2}m^{p-2}+(p-1)\int |\nabla_v m|^2f^p m^{p-2}
\\ \nonumber
&&+ \frac{2(p-1)} {p^2} \int \nabla_v (f^p) \cdot \nabla_v (m^{p}) - \frac 1 p \int \nabla_v (f^p) \cdot \nabla_v (m^p)
\\ \nonumber
&=&-(p-1) \int |\nabla_v(m f)|^2f^{p-2} m^{p-2}+(p-1)\int |\nabla_v m|^2f^p m^{p-2}
\\ \nonumber
&& + \frac {p-2} {p^2} \int f^p \Delta_v m^p.
\end{eqnarray}
Using that $\Delta_v m^p = p \Delta_v m\ m^{p-1} + p(p-1)| \nabla_v m|^2m^{p-2} $, we obtain
\begin{eqnarray}
\nonumber
C_1=-(p-1) \int |\nabla_v(mf)|^2f^{p-2}m^{p-2} + \int f^p m^p[ (\frac 2 p -1) \frac { \Delta_v m} {m} + 2(1-\frac 1 p)\frac {|\nabla_v m|^2} {m^2}].
\end{eqnarray}
We conclude by combining the above equalities.
\qed
\smallskip
\smallskip
\smallskip
\noindent {\sl Proof of Theorem \ref{T51}.} From Lemma \ref{L51}, we have
\begin{eqnarray}\label{E51}
\int ({\mathcal B} f)f^{p-1}m^p &=& \int (({\mathcal L} -M \chi_R) f)\, f^{p-1} m^p
\\ \nonumber
&=& - (p-1)\int |\nabla_v(mf)|^2 (mf)^{p-2} + \int f^pm^p \phi ,
\end{eqnarray}
with
\begin{equation}
\nonumber
\phi = [\frac {2} {p^{'}} \frac {|\nabla_v m|^2} {m^2} + (\frac 2 p -1)\frac {\Delta_v m} {m} + \frac {d} {p^{'}}- v \cdot \frac {\nabla_v m} {m} -\frac {{\mathcal T} m } {m} - M\chi_R].
\end{equation}
When $p=1$, we have
\begin{eqnarray}
\nonumber
\phi = \frac {\Delta_v m} {m} - v \cdot \frac {\nabla_v m} {m} -\frac {{\mathcal T} m } {m} - M\chi_R.
\end{eqnarray}
Let $m=H^k$. We have
\begin{eqnarray}
\nonumber
\frac {\nabla_v m} {m}= k \frac {\nabla_v H} {H}, \quad \frac {\nabla_x m} {m}= k \frac {\nabla_x H} {H} ,
\end{eqnarray}
and
\begin{eqnarray}
\nonumber
\frac {\Delta_v m} {m}= \frac {k\Delta_v H} {H} + \frac {k (k-1)|\nabla_v H|^2} {H^2}.
\end{eqnarray}
Summing up, we have for $\phi$
\begin{eqnarray}
\nonumber
\frac {\phi H } {k} = \Delta_v H + (k-1)\frac {|\nabla_v H|^2} {H} - v \cdot \nabla_v H + v \cdot \nabla_x H -\nabla_x V(x) \cdot \nabla_v H -M \chi_R.
\end{eqnarray}
From the very definition of $H$, we have
\begin{equation}
\nonumber
\nabla_v H= 6 v+ 2 x , \quad \nabla_x H = 2 v + 2 x, \quad \Delta_v H= 6d.
\end{equation}
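These derivatives are elementary; as a quick sanity check, not used anywhere in the argument, they can also be verified symbolically, for instance with the short script below, where the dimension is fixed to $d=3$ purely for illustration:
\begin{verbatim}
# symbolic check of the derivatives of H = 1 + x^2 + 2 v.x + 3 v^2 (here d = 3)
import sympy as sp

d = 3
x = sp.symbols('x1:4')
v = sp.symbols('v1:4')
H = 1 + sum(xi**2 for xi in x) \
      + 2*sum(xi*vi for xi, vi in zip(x, v)) \
      + 3*sum(vi**2 for vi in v)

grad_v = [sp.diff(H, vi) for vi in v]        # expected: 6 v_i + 2 x_i
grad_x = [sp.diff(H, xi) for xi in x]        # expected: 2 v_i + 2 x_i
lap_v = sum(sp.diff(H, vi, 2) for vi in v)   # expected: 6 d

assert all(sp.simplify(g - (6*vi + 2*xi)) == 0
           for g, xi, vi in zip(grad_v, x, v))
assert all(sp.simplify(g - (2*vi + 2*xi)) == 0
           for g, xi, vi in zip(grad_x, x, v))
assert sp.simplify(lap_v - 6*d) == 0
print("derivative checks passed")
\end{verbatim}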
We then compute
\begin{eqnarray}\label{L2C2}
\nonumber
&&\Delta_v H + (k-1)\frac {|\nabla_v H|^2} {H} +v \cdot \nabla_x H - v \cdot \nabla_v H -\nabla_x V(x)\cdot \nabla_v H
\\ \nonumber
&=& 6d+ (k-1)\frac{( 6v+2 x)^2} {H} + 2 v^2+2 x \cdot v - 6v^2
\\ \nonumber
&&- 2 x \cdot v - 6 v \cdot \nabla_x V(x) - 2 x \cdot \nabla_x V(x)
\\ \nonumber
&\le&(2 v^2 +C |v| - 6v^2) - 2 x \cdot \nabla_x V(x) + C
\\ \nonumber
&\le& - C_1 v ^2 - C_2 x \cdot \nabla_x V(x) + C_3
\\ \nonumber
&\le& -C_4 H^{\frac \gamma 2} +K_1\chi_{R_1} ,
\end{eqnarray}
for some $C_i >0$. Taking $M$ and $R$ large enough, we have $ \phi \le -C H^{\frac \gamma 2 - 1}$; using this inequality in equation (\ref{E51}), we deduce
\begin{eqnarray}\label{E52}
\frac d {dt} Y_4(t) :=\frac d {dt} \int |f_{\mathcal B}(t)| H^k &=& \int {\rm sign} (f_{\mathcal B}(t))\, {\mathcal B} f_{\mathcal B}(t) H^k
\\ \nonumber
&\le& -C \int |f_B(t)| H^{k -1+ \frac \gamma 2} ,
\end{eqnarray}
for any $k >1$. In particular for any $l \ge 1$, we can find $M$ and $R$ large enough such that
\begin{equation}
\nonumber
\frac d {dt} \int |f_{\mathcal B}(t)| H^l \le 0,
\end{equation}
which readily implies
\begin{equation}
\nonumber
\int |f_{\mathcal B}(t) |H^l \le \int |f_0| H^l := Y_5.
\end{equation}
Denoting
\begin{equation}
\nonumber
\alpha=\frac {l-k} {l-k+1-\frac \gamma 2} \in (0,1),
\end{equation}
the H\"older's inequality
\begin{equation}
\nonumber
\int |f_B(t)| H^{k } \le (\int |f_B(t)| H^{k -1+\frac \gamma 2} )^{\alpha} (\int |f_B(t)| H^{l} )^{1-\alpha},
\end{equation}
implies
\begin{equation}
\nonumber
(\int |f_B(t)| H^{k} )^{\frac 1 \alpha }(\int |f_B(t)| H^{l} )^{\frac {\alpha-1} {\alpha}} \le \int |f_B(t)| H^{k -1+\frac \gamma 2} .
\end{equation}
From this inequality and (\ref{E52}), we get
\begin{equation}
\nonumber
\frac d {dt} Y_4(t) \le -C (Y_4(t))^{\frac 1 \alpha} Y_5^{\frac {\alpha-1} {\alpha}}.
\end{equation}
Using $Y_4(0) \le Y_5$, after an integration, we deduce
\begin{equation}
\nonumber
Y_4(t) \le C_\alpha \frac 1 {(1+t)^{\frac {\alpha} {1-\alpha}} }Y_5,
\end{equation}
which is nothing but the polynomial decay of $S_{\mathcal B}$
$$\Vert {\mathcal S}_{\mathcal B}(t) \Vert_{L^1(H^l) \to L^1(H^k)}\lesssim (1+t)^{-a},$$
with
$$ a = \frac {l-k} {1- \frac \gamma 2 }, \quad \forall 0 < k < l, \quad 1\le l.$$
We conclude Theorem \ref{T51} by writing $k = l\theta$, $0<\theta<1$. \qed
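\begin{rem}
Two elementary facts were used above. First, the choice of $\alpha$ is dictated by H\"older's inequality: one checks directly that $\alpha (k-1+\frac \gamma 2) + (1-\alpha)\, l = k$ for $\alpha=\frac {l-k} {l-k+1-\frac \gamma 2}$. Second, the integration of the differential inequality is standard: setting $Z(t)=Y_4(t)/Y_5$, which satisfies $Z \le 1$ since $Y_4$ is nonincreasing and $Y_4(0)\le Y_5$, we have $Z' \le -C Z^{1/\alpha}$, hence
\begin{equation}
\nonumber
\frac d {dt}\, Z^{\frac {\alpha-1} {\alpha}} = \frac {\alpha-1} {\alpha}\, Z^{-\frac 1 \alpha}\, Z' \ge C\, \frac {1-\alpha} {\alpha},
\end{equation}
so that $Z(t)^{\frac{\alpha-1}{\alpha}} \ge 1 + C \frac{1-\alpha}{\alpha}\, t$, which gives $Y_4(t) \le C_\alpha (1+t)^{-\frac {\alpha} {1-\alpha}}\, Y_5$.
\end{rem}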
\bigskip
\section{$L^p$ convergence for the KFP model}
\label{sec:Conclude}
\setcounter{equation}{0}
\setcounter{theo}{0}
Before going to the proof of our main theorem, we need a few last auxiliary results.
\begin{lem}\label{L61}
For any $\epsilon > 0$ small enough, we have
\begin{equation}
\nonumber
\Vert {\mathcal A} S_{\mathcal B}(t) \Vert_{L^2(G^{- (\frac 1 2 +\epsilon)}) \to L^2(G^{-(\frac 1 2 +\epsilon)})} \lesssim e^{- a t^{\frac {\gamma} {2- \gamma }} } , \quad \forall t \ge 0,
\end{equation}
and
\begin{equation}
\nonumber
\Vert {\mathcal A} {\mathcal S}_B(t) \Vert_{ L^1(G^{-(\frac 1 2 +\epsilon)}) \to L^1(G^{-(\frac 1 2 +\epsilon)}) }\lesssim e^{-a t^{\frac {\gamma} {2- \gamma }}}, \quad \forall t \ge 0,
\end{equation}
for some $a>0$. Similarly for any $0 < b <\frac {\gamma} {2-\gamma}$ and for any $\epsilon > 0$ small enough, we have
\begin{equation}
\nonumber
\Vert {\mathcal A} S_{\mathcal B}(t) \Vert_{ L^1(G^{-(\frac 1 2 +\epsilon)}) \to L^2(G^{-(\frac 1 2 +\epsilon)}) } \lesssim t^{-\alpha}e^{- a t^{b}}, \quad \forall t \ge 0,
\end{equation}
for $\alpha =\frac {5d+1} {2}$ and some $a > 0$.
\end{lem}
\noindent{\sl Proof of Lemma \ref{L61}.}
The first two inequalities follow directly from Lemma \ref{L31} and the properties of ${\mathcal A}= M\chi_R$. For the third inequality we split the time interval into two parts, $t \in [0, \eta]$ and $t > \eta$, where $\eta$ is defined in Theorem \ref{T41}. When $t \in [0, \eta]$ we have $e^{-at^{\frac {\gamma} {2- \gamma }}} \ge e^{-a \eta^{\frac {\gamma} {2- \gamma }}}$, so by Theorem \ref{T41} we have
\begin{equation}
\nonumber
\Vert {\mathcal A} {\mathcal S}_B(t) \Vert_{ L^1(G^{-(\frac 1 2 +\epsilon)}) \to L^2(G^{-(\frac 1 2 +\epsilon)}) }\lesssim t^{-\alpha} \lesssim t^{-\alpha}e^{-a t^{\frac {\gamma} {2- \gamma }}} ,\quad \forall t \in [0, \eta],
\end{equation}
for some $a > 0$. When $ t \ge \eta$, by Theorem \ref{T41}, we have
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(\eta) \Vert_{ L^1(G^{-(\frac 1 2 +\epsilon)}) \to L^2(G^{-(\frac 1 2 +\epsilon)}) }\lesssim \eta^{-\alpha} \lesssim 1,
\end{equation}
and by Lemma \ref{L31}
\begin{equation}
\nonumber
\Vert {\mathcal S}_{\mathcal B}(t - \eta) \Vert_{L^2(G^{-(\frac 1 2 +\epsilon)}) \to L^2(G^{-\frac 1 2 }) }\lesssim e^{-a (t-\eta)^{\frac {\gamma} {2- \gamma }}} \lesssim e^{-a t^{\frac {\gamma} {2- \gamma }}},
\end{equation}
gathering the two inequalities, we have
\begin{equation}
\nonumber
\Vert {\mathcal A} {\mathcal S}_B(t) \Vert_{L^1(G^{-1/2 (1 + 2\epsilon )}) \to L^2(G^{-1/2( 1+ 2\epsilon)})}\lesssim e^{-at^{\frac {\gamma} {2- \gamma }}} \lesssim t^{-\alpha}e^{-a t^b}, \quad \forall t > \eta,
\end{equation}
for any $0 < b < \frac \gamma {2-\gamma}$. The proof is completed by combining the two cases above.
\qed
\begin{lem}\label{L62}
Similarly to Lemma \ref{L61}, for any $p \in (2,\infty)$, we have
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t) {\mathcal A} \Vert_{L^2(G^{-1/2}) \to L^2(G^{-1/2 })} \lesssim e^{- a t^{\frac {\gamma} {2- \gamma }} } , \quad \forall t \ge 0.
\end{equation}
and
\begin{equation}
\nonumber
\Vert {\mathcal S}_B(t) {\mathcal A} \Vert_{L^p(G^{-1/2 })\to L^p(G^{-1/2
})}\lesssim e^{-a t^{\frac {\gamma} {2- \gamma }}}, \quad \forall t \ge 0.
\end{equation}
for some $a>0$. Moreover, for any $0 < b < \frac \gamma {2-\gamma}$, we have
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t) {\mathcal A} \Vert_{L^2(G^{-1/2 }) \to L^p(G^{-1/2 }) } \lesssim t^{-\beta}e^{- a t^b}, \quad \forall t \ge 0.
\end{equation}
for some $\beta >0$ and some $a > 0$.
\end{lem}
\noindent The proof of Lemma \ref{L62} is similar to the proof of Lemma \ref{L61} and thus skipped.
\begin{lem}\label{L63}
Let $X, Y$ be two Banach spaces and $S(t)$ a semigroup such that, for all $t \ge 0$ and some $a>0$, $0 < b<1$, we have
\begin{equation}
\nonumber
\Vert S(t) \Vert_{X \to X} \le C_X e^{-at^b}, \ \ \Vert S(t) \Vert_{Y \to Y} \le C_Y e^{-at^b},
\end{equation}
and for some $0 < \alpha $, we have
\begin{equation}
\nonumber
\Vert S(t) \Vert_{X \to Y} \le C_{X,Y} t^{-\alpha} e^{-at^b}.
\end{equation}
Then, for every integer $ n > 0$, we have
\begin{equation}
\nonumber
\Vert S^{(*n)}(t) \Vert_{X \to X} \le C_{X,n} t^{n-1} e^{-at^b},
\end{equation}
similarly
\begin{equation}
\nonumber
\Vert S^{(*n)}(t) \Vert_{Y \to Y} \le C_{Y,n} t^{n-1} e^{-at^b},
\end{equation}
and
\begin{equation}
\nonumber
\Vert S^{(*n)}(t) \Vert_{X \to Y} \le C_{X,Y,n} t^{n-\alpha-1} e^{-at^b}.
\end{equation}
In particular, for $n > \alpha+1$ and for any $ b^{*} < b$,
\begin{equation}
\nonumber
\Vert S^{(*n)}(t) \Vert_{X \to Y} \le C_{X,Y,n} e^{-at^{b^*}}.
\end{equation}
\end{lem}
\noindent { \sl Proof of Lemma \ref{L63}.}
The proof is the same as that of Lemma 2.5 in \cite{MQT}, combined with the fact that $t^b \le s^b +(t-s)^b$ for any $0 \le s \le t$, $0 < b < 1$.\qed
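\begin{rem}
The key point is that the inequality $t^b \le s^b+(t-s)^b$ allows one to take the exponential factor out of the convolution integrals. For instance, for the $X \to X$ bound, if $\Vert S^{(*n)}(t)\Vert_{X \to X} \le C_{X,n}\, t^{n-1}e^{-at^b}$ then
\begin{eqnarray}
\nonumber
\Vert S^{(*(n+1))}(t)\Vert_{X \to X} &\le& \int_0^t \Vert S(t-s)\Vert_{X\to X}\, \Vert S^{(*n)}(s)\Vert_{X \to X}\, ds
\\ \nonumber
&\le& C_X\, C_{X,n}\, e^{-at^b} \int_0^t s^{n-1}\, ds = \frac{C_X\, C_{X,n}}{n}\, t^{n}\, e^{-at^b}.
\end{eqnarray}
The $X \to Y$ bound follows in the same way, splitting the convolution integral at $s=t/2$ and using the $X \to Y$ estimate only on the factor whose time argument is at least $t/2$.
\end{rem}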
We now come to the final proof.
\noindent {Proof of Theorem \ref{T11}.}
We only prove the case $m=G^{\frac {p-1} {p}(1+\epsilon)}$, $p\in[1, 2]$; for the other cases one only needs to replace the use of Lemma \ref{L61} in the following proof by Lemma \ref{L62} and Theorem \ref{T41}. We first prove the case $p=1$, for which we need to show
\begin{equation}
\nonumber
\Vert S_{\mathcal L}(I - \Pi)(t) \Vert_{L^{1}(G^{-\epsilon}) \to L^1} \lesssim e^{-at^{b}},
\end{equation}
for any $0< b < \frac {\gamma} {2-\gamma}$, where $I$ is the identity operator and $\Pi$ is a projection operator defined by
\begin{equation}
\nonumber
\Pi (f) ={\mathcal M}(f) G.
\end{equation}
First, iterating Duhamel's formula, we split $S_{\mathcal L}(I-\Pi)$ into three terms
\begin{eqnarray}
\nonumber
S_{\mathcal L}(I-\Pi) &=&(I-\Pi)\{S_{\mathcal B} + \sum_{l=1}^{n-1}( S_{\mathcal B} {\mathcal A})^{(*l)}* (S_{\mathcal B}) \}
\\ \nonumber
&&+\{ (I-\Pi)S_{\mathcal L} \}*({\mathcal A} S_{\mathcal B}(t))^{*n},
\end{eqnarray}
and we will estimate them separately. By Lemma \ref{L31}, we have
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t) \Vert_{L^{1}(G^{-\epsilon}) \to L^1} \lesssim e^{-at^{\frac {\gamma} {2- \gamma }}},
\end{equation}
the first term is thus estimated. For the second term, still using Lemma \ref{L31}, we get
\begin{equation}
\nonumber
\Vert S_{\mathcal B}(t){\mathcal A} \Vert_{L^1(G^{-\epsilon}) \to L^1} \lesssim e^{-at^{\frac {\gamma} {2- \gamma }}},
\end{equation}
by Lemma \ref{L63}, we have
\begin{equation}
\nonumber
\Vert (S_{\mathcal B}(t){\mathcal A})^{*l} \Vert_{L^1(G^{-\epsilon}) \to L^1} \lesssim t^{l-1} e^{-at^{\frac {\gamma} {2- \gamma }}},
\end{equation}
thus the second term is estimated. For the last term, by Lemma \ref{L31},
\begin{equation}
\nonumber
\Vert {\mathcal A} S_{\mathcal B}(t) \Vert_{L^1( G^{-\epsilon}) \to L^1(G^{-(\frac 1 2 + \epsilon)})} \lesssim e^{- a t^{\frac {\gamma} {2- \gamma }}} .
\end{equation}
By Lemmas \ref{L61} and \ref{L63}, for any $0 < b < \frac {\gamma } {2-\gamma}$, we have
\begin{equation}
\nonumber
\Vert ({\mathcal A} S_{\mathcal B})^{(*n-1)}(t) \Vert_{L^1(G^{-(\frac 1 2 + \epsilon)}) \to L^2(G^{-(\frac 1 2 + \epsilon)}) } \lesssim t^{n-\alpha-2}e^{- a t^b} ,
\end{equation}
finally by Theorem \ref{T31}, we have
\begin{equation}
\nonumber
\Vert S_{\mathcal L}(t)(I-\Pi) \Vert_{ L^2(G^{-(\frac 1 2 + \epsilon)}) \to L^2(G^{-1/2}) } \lesssim e^{-at^{\frac {\gamma} {2- \gamma }}}.
\end{equation}
Taking $n > \alpha+2$, the third term is estimated, and thus the proof of the case $p=1$ is concluded by gathering the inequalities above. As the case $p=2$ is already proved in Theorem \ref{T31}, the case $p \in (1, 2)$ follows by interpolation. \qed
One of the most studied semi-classical approaches is the cosmological particle creation (c.p.c.) in which one considers only the interaction of free particles, seen as quantum perturbations, with the gravity of a time-dependent curved background whose evolution remains unaffected by this interaction. This approach consists in separating the particle and antiparticle quantum modes at different epochs of the evolving spacetime, thus determining different bases of the state space, each one corresponding to a specific vacuum. These may be related among themselves through Bogolyubov transformations whose transition coefficients may point out the cosmological particle creation, generating particle or antiparticle thermal baths \cite{P1,P2,GM,Zel,ZelS,DW,GMM,U1,Br,GMM1,U2}. Special attention was paid to the scalar field on the de Sitter spacetime \cite{Nach,CT,BODU,T,Csc,Pascu,Csc1} involved in many studies of c.p.c. \cite{h1,BuD,h2,h3,h4,h5,h6,h7,h8,h9,h10,h11,Ambrus,h12,h13}.
The main task here is the criterion for separating the frequencies defining the particle and antiparticle modes and, implicitly, the current vacuum at a given time. The principal method used so far is to focus mainly on the asymptotic states whose behavior is similar to that of the usual Minkowskian particle and antiparticle mode functions. In this manner one may choose $in$ and $out$ states whose frequencies are separated as in the flat case, defining thus the adiabatic vacua as, for example, the Bunch-Davies one \cite{BuD} largely used in applications.
On the other hand, recently we proposed the rest frame vacuum (r.f.v.) of the massive Dirac field on $(1+3)$-dimensional spatially flat Friedmann-Lema\^ itre-Robertson-Walker (FLRW) spacetimes \cite{Crfv}. We started with the observation that in the rest frame, where the particle momentum vanishes, the solutions of the Dirac equation on any FLRW spacetime have a Minkowskian behavior regardless of the time evolution of the background. Thus we can separate the frequencies as in special relativity, obtaining a time-independent vacuum which, however, is different from the Bunch-Davies one we used so far \cite{CdS,CQED}. Note that the r.f.v. can be defined only for massive particles since the massless ones do not have rest frames.
The next step might be the generalization of the r.f.v. to the Klein-Gordon and Proca fields seen as perturbations on the mentioned manifolds. Unfortunately, here we face a serious difficulty since, in contrast with the Dirac field, the rest mode functions of these bosonic fields do not have the Minkowskian forms we need for defining the r.f.v. in a natural manner. Nevertheless, since our concept of particle and antiparticle comes from the Minkowskian quantum field theory, we are forced to impose the Minkowskian form on the rest mode functions even though this is possible only at a given time, assuming that the time-dependent rest energy represents a {\em dynamical} effective mass.
Another challenge is to solve the ambiguity related to the normalization of these Minkowskian states, which can be done in two manners, either with respect to the scalar product of the curved manifold or by using the Minkowskian scalar product. This difficulty can be avoided since the state spaces are separable Hilbert spaces which are isometric among themselves, such that we may look for an appropriate method of mapping the state spaces produced in different geometries into a unique one where we may calculate the transition coefficients between these states. In what follows we would like to concentrate on these problems, proposing a method of defining well-normalized Minkowskian rest mode functions on any spatially flat FLRW spacetime. These will help us to define at any moment the bosonic time-dependent r.f.v. associated to a time-dependent dynamical mass. When this is time-independent we say that the r.f.v. is {\em stable}. For concrete calculations we restrict ourselves to the massive Klein-Gordon field minimally coupled to the background gravity.
In our approach the r.f.v. of the scalar field is stable only on the FLRW manifolds where the energy is conserved, as in the de Sitter expanding universe. In other FLRW manifolds we meet c.p.c. processes that may be studied by deriving the Bogolyubov coefficients between states whose vacua are defined at two arbitrary moments, obtaining thus information about the time behavior of the c.p.c. in any FLRW geometry. For illustrating how our method works we give two examples: the stable r.f.v. on the de Sitter expanding universe and, for the first time, a time-dependent r.f.v. on a spatially flat FLRW spacetime with a Milne-type scale factor. On this last manifold we study the c.p.c. at finite times, obtaining probabilities and rates which depend exclusively on the moments when the particle is prepared and then measured. Note that our results are different from other attempts of studying c.p.c. at finite times \cite{Ambrus} where the vacuum depends, in addition, on momentum.
We start in the next section by presenting our basic assumptions concerning the scalar quantum modes prepared or measured by a global apparatus on a curved manifold and showing how the state space can be mapped into a Minkowskian one. The next section is devoted to the spatially flat FLRW spacetimes, where we propose a concrete method of defining Minkowskian rest states, correctly normalized at a given arbitrary time, regardless of the time evolution of the background geometry. By using such states we define the r.f.v., showing that these vacua are stable only on the FLRW manifolds where the energy is conserved. Sections IV and V are devoted to the mentioned examples, the stable r.f.v. on the de Sitter expanding universe and the time-dependent one on the Milne-type universe, respectively. In this last section we study the c.p.c. produced by the vacuum instability, deriving the Bogolyubov coefficients between two bases of mode functions whose frequency separation was performed at two different moments. Some physical consequences are discussed based on a brief analytical and graphical analysis. In the last section we present our concluding remarks.
\section{Minkowskian scalar modes}
Respecting {\em ad litteram} the principles of the quantum theory, we assume that the quantum states on any curved manifold, $(M,g)$, are prepared or measured by a global apparatus represented by the algebra of the quantum observables, i. e. the Hermitian operators defined globally as vector fields on the whole manifold or on a portion with an independent physical meaning, as in the case of the de Sitter expanding universe. The operators proportional to the Killing vector fields are conserved, commuting with the operator of the field equation. Our global apparatus prepares quantum modes whose mode functions are common eigenfunctions of a system of commuting conserved operators (s.c.c.o.) $\{{\cal E}_{KG},A,B,...\}$ which includes the operator of the field equation ${\cal E}_{KG}$. In addition, these mode functions are supposed to be normalized with respect to a specific relativistic scalar product on $(M,g)$.
In general, the s.c.c.o. determining the quantum modes is not complete such that the mode functions remain with some integration constants which depend on the separation of the positive and negative frequencies defining the vacuum. Another possible manner of setting these constants is by defining the modes on $(M,g)$ in which one measures the parameters corresponding to another geometry $(\hat M,\hat g)$, according to the method we present in what follows.
We start with the $(1+3)$-dimensional curved manifold $(M,g)$, supposed to be locally Minkowskian, where we consider a local chart $\{x\}$ of coordinates $x^{\mu}$ (labeled by natural indices $\alpha, ...\mu,...=0,1,2,3$) with $x^0=t$ and arbitrary space coordinates. The scalar field, $\Phi: M\to {\Bbb C}$, of mass $m$, minimally coupled to the gravity of $(M,g)$, satisfies the Klein Gordon equation ${\cal E}_{KG}\Phi=m^2 \Phi$ whose operator is defined as
\begin{equation}\label{KG}
{\cal E}_{KG}=-\frac{1}{\sqrt{g}}\,\partial_{\mu} \sqrt{g}\,
g^{\mu\nu}\partial_{\nu}\,,\quad g=|{\rm det} g_{\mu\nu}|\,.
\end{equation}
The solution of this equation may be expanded in terms of the mode functions $f_{\alpha}\equiv f_{a,b,...}$ which satisfy the Klein-Gordon equation and the eigenvalue equations $Af_{a,b...}=a f_{a,b,...},\,Bf_{a,b...}=b f_{a,b,...},...$, determining these functions partially or completely when the s.c.c.o. is complete. When the eigenvalues $\alpha\equiv\{a,b,...\}$ are of continuous spectra this expansion reads
\begin{equation}
\Phi(x)=\int d\alpha\, \left[f_{\alpha}(x){\frak a}(\alpha)+f^*_{\alpha}(x){\frak a}^c(\alpha)^{\dagger}\right]\,,
\end{equation}
where the particle, ${\frak a}, {\frak a}^{\dagger}$, and antiparticle, ${\frak a}^c, {\frak a}^{c\,\dagger}$, field operators must satisfy the canonical bosonic commutation relations \cite{BD}.
The mode functions, $f\in{\cal K}$, behave as tempered distributions or square integrable functions with respect to the indefinite Hermitian form
\begin{eqnarray}\label{SPgen}
\langle f,f'\rangle_M&=&i\int_{\Sigma} d\sigma^{\mu}\sqrt{g}\, f^*\stackrel{\leftrightarrow}{\partial}_{\mu}f' \nonumber\\
&=&i\int_{{\Bbb R}^3} d^3x\, g^{00}\sqrt{g}\, f^*\stackrel{\leftrightarrow}{\partial}_{t}f' \, \in {\Bbb C} \,,
\end{eqnarray}
written with the notation $f\stackrel{\leftrightarrow}{\partial}f'= f \partial f' -f'\partial f$. This is the relativistic scalar product giving the 'squared norms' $\langle f,f\rangle_M$ of the square integrable functions $f\in {\cal H}\subset{\cal K}$ which may have any sign splitting the space ${\cal K}$ as
\begin{equation}
f \in \left\{
\begin{array}{lll}
{\cal H}_+\subset{\cal K}_+& {\rm if}& \langle f,f\rangle_M>0\,,\\
{\cal H}_0\subset{\cal K}_0& {\rm if}& \langle f,f\rangle_M=0\,,\\
{\cal H}_-\subset{\cal K}_-& {\rm if}& \langle f,f\rangle_M<0\,.\\
\end{array}
\right.
\end{equation}
From the physical point of view the mode functions of ${\cal K}_{\pm}$ are of positive/negative frequencies while those of ${\cal K}_0$ do not have a physical meaning. For any $f\in {\cal K}_+$ we have $f^*\in {\cal K}_-$ so that $\langle f^*, f^*\rangle_M =-\langle f, f\rangle_M$ but if $f^*=f$ then $f\in {\cal K}_0$, since $\langle f, f\rangle_M=0$. In fact, ${\cal H}$ is a Krein space while ${\cal K}_{\pm}$ are the spaces of tempered distributions of the Hilbertian triads associated to the Hilbert spaces ${\cal H}_{\pm}$ equipped with the scalar products $\pm \langle~,~\rangle_M$.
A complete system of orthonormal mode functions, $\{f_{\alpha}\}_{\alpha\in I}\subset{\cal K}_+$ forms a (generalized) basis of positive frequencies in ${\cal K}_+$ related to the negative frequencies one, $\{f^*_{\alpha}\}_{\alpha\in I}\subset{\cal K}_-$. In this manner one defines a frequencies separation associated to a specific vacuum state of the Fock space. It is known that two different bases define different vacuum states when these are related among themselves through a non-trivial Bogolyubov transformation that mixes the positive and negative frequency modes. Otherwise the vacuum state remains stable.
Furthermore, we consider another manifold $(\hat M,\hat g)$ whose local chart $\{\hat x\}$ is defined on the {\em same} domain of the flat model as the chart $\{x\}$ of $(M,g)$. This means that there exists the coordinate transformation $\hat x=\chi(x)$ allowing us to relate the set ${\cal K}$ discussed above to the set $\hat{\cal K}$ of the scalar mode functions on $(\hat M, \hat g)$ equipped with the Hermitian form $\langle\,,\,\rangle_{\hat M}$, defined as in Eq. (\ref{SPgen}). We observe that the physical parts of the sets $\hat{\cal K}$ and ${\cal K}$ are separable Hilbert spaces between which we can define the isometry $\mu: {\cal H}_+\to \hat{\cal H}_+$ which satisfies
\begin{equation}
\langle \mu(f), \mu(f')\rangle_{\hat M}=\langle f, f'\rangle_M\,.
\end{equation}
Then for any normalized mode functions $f_{\alpha}\in {\cal H}_+$ and $\hat f_{\beta}\in \hat{\cal H}_+$ which satisfy
\begin{equation}
\langle f_{\alpha}, f_{\alpha}\rangle_{M}=\langle \hat f_{\beta}, \hat f_{\beta}\rangle_{\hat M}=1\,.
\end{equation}
we can construct the amplitude
\begin{equation}
\langle\alpha|\beta\rangle_t=\left.\langle \mu(f_{\alpha}), \hat f_{\beta}\rangle_{\hat M}\right|_t=\left.\langle f_{\alpha}, \mu^{-1}(\hat f_\beta)\rangle_M\right|_t\,,
\end{equation}
which, in general, depends on time. This gives the quantity $|\langle\alpha|\beta\rangle_t|^2$ which can be interpreted as the probability of measuring at the time $t$ the parameters $\beta$ in the state $\alpha$ prepared on $(M,g)$ or, reversely, as the probability of measuring the parameters $\alpha$ in the state $\beta$ prepared on $(\hat M,\hat g)$. For this reason we say that $\mu(f)\in \hat{\cal K}$ is the projection of $f\in {\cal K}$.
The isometry $\mu$ is complicated since it involves the coordinate transformation $\hat x=\chi(x)$, but this can be eliminated by choosing the same coordinates for both manifolds under consideration, taking $\chi=id \to \hat x=x$. Note that this is possible since we assumed that the local charts of $(M,g)$ and $(\hat M,\hat g)$ are included in the same domain of the flat model. With this choice the isometry takes the simple form
\begin{equation}
\mu(f)=\left(\frac{ g^{00}\sqrt{g}}{\hat g^{00}\sqrt{\hat g}}\right)^{\frac{1}{2}}f\,,
\end{equation}
that can be used in applications.
An important particular case is when $(\hat M,\hat g)$ is just the Minkowski spacetime which is the flat model of $(M,g)$. Then we can set at any time $\chi=id$ and, in addition, we get the opportunity of defining in $(M,g)$ states in which one measures exclusively Minkowskian parameters at a given time $t_0$. Thus for any normalized mode function $\hat f \in \hat{\cal K}$ on the Minkowski spacetime we may define the corresponding {\em Minkowskian state} on $(M,g)$ whose normalized mode function $f\in {\cal K}$ is defined such that the functions,
\begin{equation}\label{def}
\mu(f)=\left(g^{00}\sqrt{g}\right)^{\frac{1}{2}}f\,,
\end{equation}
and $\hat f$ have a {\em contact} of order $k$ at the time $t_0$, satisfying the system of $k+1$ algebraic equations,
\begin{eqnarray}
\mu(f)(t_0)&=&\hat f(t_0)\,,\nonumber\\
\frac{d\mu(f)}{dt}(t_0)&=&\frac{d\hat f}{dt}(t_0)\,,\label{Q}\\
&\vdots& \nonumber\\
\frac{d^k\mu(f)}{dt^k}(t_0)&=&\frac{d^k\hat f}{dt^k}(t_0)\,,\nonumber
\end{eqnarray}
able to give all the integration constants of $f$ in terms of the Minkowskian parameters of the function $\hat f$ we chose. Obviously, the number $k+1$ of equations we may use depends on the number of the undetermined integration constants or other parameters we need to find out. With this method we can apply the definitions of Minkowskian particles or antiparticles to any manifold $(M,g)$ but only at a given time since, in general, these states are evolving in time.
\section{Rest frame vacua}
Let us consider now the family of $(1+3)$-dimensional spatially flat FLRW spacetimes for which we use the {\em same} coordinates of the FLRW chart, $\{t,\vec{x}\}$, i. e. the proper (or cosmic) time $t\in D_t$ and the Cartesian space coordinates $\vec{x}=(x^1,x^2,x^3)\in {\Bbb R}^3$. We denote by $M$ the spacetime whose line element depends on the scale factor $a(t)$, which is assumed to be a smooth function on $D_t$, giving the conformal time
\begin{equation}
t_c=\int \frac{dt}{a(t)}\in D_{t_c}\,,\end{equation}
of the conformal chart $\{t_c,\vec{x}\}$. The line elements of these charts are
\begin{equation}
ds^2=dt^2-a(t)^2 d\vec{x}\cdot d\vec{x}=a(t_c)^2\left(dt_c^2- d\vec{x}\cdot d\vec{x}\right)\,,
\end{equation}
where we denoted $a(t_c)=a[t(t_c)]$. The Minkowski spacetime, denoted from now simply as $\hat M$, is the particular case when $a(t)=1$ and $t_c=t$.
In the chart $\{t,\vec{x}\}$ the massive scalar field $\Phi : M\to {\Bbb C}$ of mass $m$ satisfies the Klein-Gordon equation
\begin{equation}
\left(\partial_t^2+\frac{3\dot{a}(t)}{a(t)}\,\partial_t-\frac{1}{a(t)^2}\,\Delta+m^2\right)\Phi(t,\vec{x})=0\,,
\end{equation}
which allows a system of plane wave solutions, i. e. eigenfunctions of the momentum operators ${P}_i=-i\partial _i$ corresponding to the eigenvalues $(p_1,p_2,p_3)$
representing the components of the conserved momentum $\vec{p}$. These mode functions can be written as
\begin{equation}\label{fp}
f_{\vec{p}}(t,\vec{x})=\frac{e^{i \vec{x}\cdot \vec{p}}}{[2\pi a(t)]^{\frac{3}{2}}}{\cal F}_p(t)\,,
\end{equation}
in terms of the time modulation functions ${\cal F}_p: D_t\to {\Bbb C}$ which depend on $p=|\vec{p}|$ satisfying the equation
\begin{equation}\label{KGred}
\left[\frac{d^2}{dt^2}+\frac{p^2}{a(t)^2}+m^2-\frac{3}{2}\frac{\ddot{a}(t)}{a(t)}-\frac{3}{4}\frac{\dot{a}(t)^2}{a(t)^2}\right] {\cal F}_p(t)=0\,.
\end{equation}
This equation does not completely determine the form of the functions ${\cal F}_p$, which remain with integration constants that have to be fixed by supplementary assumptions.
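For the reader's convenience let us sketch how Eq. (\ref{KGred}) arises: inserting (\ref{fp}) into the Klein-Gordon equation and using
\begin{eqnarray}
&&\partial_t^2\left[a(t)^{-\frac{3}{2}}{\cal F}_p\right]=a(t)^{-\frac{3}{2}}\left[\ddot{\cal F}_p-3\,\frac{\dot{a}}{a}\,\dot{\cal F}_p +\left(\frac{15}{4}\,\frac{\dot{a}^2}{a^2}-\frac{3}{2}\,\frac{\ddot{a}}{a}\right){\cal F}_p\right]\,,\nonumber\\
&&3\,\frac{\dot{a}}{a}\,\partial_t\left[a(t)^{-\frac{3}{2}}{\cal F}_p\right]=a(t)^{-\frac{3}{2}}\left[3\,\frac{\dot{a}}{a}\,\dot{\cal F}_p-\frac{9}{2}\,\frac{\dot{a}^2}{a^2}\,{\cal F}_p\right]\,,\nonumber
\end{eqnarray}
we see that the first-order derivatives of ${\cal F}_p$ cancel, while the remaining coefficient of ${\cal F}_p$ is exactly the bracket of Eq. (\ref{KGred}).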
The fundamental solutions (\ref{fp}) form an orthonormal basis with respect to the scalar product (\ref{SPgen}) that now reads
\begin{eqnarray}
\langle f,f'\rangle_M&=&i\int_{{\Bbb R}^3} d^3x\, a(t)^3\, ( f^*\stackrel{\leftrightarrow}{\partial}_{t}f')\nonumber\\
&=&i\int_{{\Bbb R}^3} d^3x\, a(t_c)^2\, ( f^*\stackrel{\leftrightarrow}{\partial}_{t_c}f') \,,\label{SP}
\end{eqnarray}
allowing us to impose the normalization condition
\begin{equation}
\delta^3(\vec{p}-\vec{p}')=\langle f_{\vec{p}},f_{\vec{p}'}\rangle_M=\delta^3(\vec{p}-\vec{p}') i\,{\cal F}_p^*(t)\stackrel{\leftrightarrow}{\partial}_{t}{\cal F}_p(t)\,,
\end{equation}
requiring the time modulation functions to satisfy
\begin{equation}\label{normF}
i\,{\cal F}_p^*(t)\stackrel{\leftrightarrow}{\partial}_{t}{\cal F}_p(t)=1\,.
\end{equation}
Then the Klein-Gordon field can be expanded as
\begin{equation}
\Phi(x)=\int d^3p\, \left[f_{\vec{p}}(x){\frak a}(\vec{p})+f^*_{\vec{p}}(x){\frak a}^c(\vec{p})^{\dagger}\right]\,,
\end{equation}
in terms of particle, ${\frak a}, {\frak a}^{\dagger}$, and antiparticle, ${\frak a}^c, {\frak a}^{c\,\dagger}$, field operators which satisfy the canonical commutation relations
\begin{eqnarray}
\left[{\frak a}(\vec{p}),{\frak a}(\vec{p}')^{\dagger}\right]&=&\delta^3(\vec{p}-\vec{p}')\,,\\ \left[{\frak a}^c(\vec{p}),{\frak a}^c(\vec{p}')^{\dagger}\right]&=&\delta^3(\vec{p}-\vec{p}')\,.
\end{eqnarray}
In the particular case of the Minkowski spacetime $\hat M$ the mode functions of positive frequencies of a scalar field of mass $\hat{m}$
\begin{equation}\label{fp0}
\hat f_{\vec{p}}(t,\vec{x})=\frac{e^{i \vec{x}\cdot \vec{p}}}{[2\pi ]^{\frac{3}{2}}}\,\hat{\cal F}_p(t)\,,\quad \hat{\cal F}_p(t)=\frac{1}{\sqrt{2 E}}\,e^{-i E t}\,,
\end{equation}
are eigenfunctions of the energy operator $i\partial_t$ depending on the conserved energy $E=\sqrt{p^2+\hat{m}^2}$ and satisfying the orthonormalization condition with respect to the scalar product
\begin{equation}\label{SP0}
\langle\hat f,\hat f'\rangle_{\hat M}=i\int_{{\Bbb R}^3} d^3x\, \hat f^*\stackrel{\leftrightarrow}{\partial}_{t}\hat f' \,.
\end{equation}
On the other hand, we have shown that in any FLRW spacetime there exists an {\em energy operator} that in the FLRW chart, $\{t,\vec{x}\}$, has the form \cite{CSchr,CGRG}
\begin{equation}
H=i\partial_t+\frac{\dot a(t)}{a(t)}\,\vec{x}\cdot\vec{P}\,.
\end{equation}
In general, this operator does not commute with the momentum operator $\vec{P}$, but in the rest frames (where $\vec{p}=0$) it coincides with the Minkowskian one, suggesting that we determine the integration constants of the solutions (\ref{fp}) by separating the frequencies precisely in such frames, by using the Minkowskian rest states on $M$ defined in the previous section. Thus we may set the r.f.v. of the Klein-Gordon field on the FLRW manifold under consideration.
Without introducing new notations we suppose now that the mode functions (\ref{fp}) are the Minkowskian states in which one measures in the rest frame, at the time $t_0$, the parameters of the mode functions (\ref{fp0}) for $p\to 0$ but with another rest energy, $\hat{m}\not=m$, which we call here the {\em dynamical mass}. Therefore, we may consider the system (\ref{Q}) with $k=2$ giving the following equations
\begin{eqnarray}
\lim_{p\to 0}\left.\left[{\cal F}_p(t)-\hat{\cal F}_p(t)\right]\right|_{t=t_0}&=&0\,,\label{V1}\\
\lim_{p\to 0}\left.\frac{d}{dt}\left[{\cal F}_p(t)-\hat{\cal F}_p(t)\right]\right|_{t=t_0}&=&0\,,\label{V2}\\
\lim_{p\to 0}\left.\frac{d^2}{dt^2}\left[{\cal F}_p(t)-\hat{\cal F}_p(t)\right]\right|_{t=t_0}&=&0\,.\label{V3}
\end{eqnarray}
which are enough for separating the frequencies in the rest frame and finding the dynamical mass $\hat{m}(t_0)$. Thus the first two equations give the normalized integration constants corresponding to the r.f.v. while the third one helps us to find the associated dynamical mass in the rest frame. All these quantities may depend on the time $t_0$ when we impose the Minkowskian form of the mode functions in the rest frame. This means that, in general, the r.f.v. is dynamic, being associated with a time-dependent dynamical mass. Nevertheless, this vacuum becomes stable on the FLRW manifolds where the energy operator is conserved, i. e. the Minkowski and de Sitter spacetimes, since then the energy operator in the rest frame, $i\partial_t$, {\em commutes} with that of the field equation completing thus the s.c.c.o. but only in the rest frame.
\section{Applications}
For solving concrete examples we may start with a time modulation function of the general form
\begin{equation}\label{Fphi}
{\cal F}_p(t)=c_1\phi_p(t)+c_2\phi^*_p(t)\,,
\end{equation}
where $\phi_p$ is a particular solution satisfying, with respect to the bracket defined by the normalization condition (\ref{normF}),
\begin{equation}
\left(\phi_p, \phi_p\right)=1 ~~\to~~\left(\phi^*_p, \phi^*_p\right)=-1\,.
\end{equation}
The normalized solutions of positive frequency, $f_{\vec{p}}\in {\cal K}_+$, must have time modulation functions which satisfy
\begin{equation}
\left({\cal F}_p , {\cal F}_p\right)=1~~\to~~|c_1|^2-|c_2|^2=1\,.
\end{equation}
In the rest frame (where $p=0$) we denote simply $\phi=\phi_p|_{p=0}$ such that the system (\ref{V1})-(\ref{V3}) can be written as
\begin{eqnarray}
c_1\phi(t_0)+c_2\phi^*(t_0)&=&\frac{1}{\sqrt{2 \hat m}}e^{-i\hat m t_0}\,,\\
c_1\dot{\phi}(t_0)+c_2\dot{\phi}^*(t_0)&=&-i\hat m\frac{1}{\sqrt{2 \hat m}}e^{-i\hat m t_0}\,,\\
c_1\ddot{\phi}(t_0)+c_2\ddot{\phi}^*(t_0)&=&-\hat m^2\frac{1}{\sqrt{2 \hat m}}e^{-i\hat m t_0}\,.
\end{eqnarray}
The first two equations give the normalized integration constants corresponding to the r.f.v.,
\begin{eqnarray}
c_1\to c_1(t_0)&=&\frac{e^{-i\Omega(t_0)t_0}}{\sqrt{2\Omega(t_0)}}\left(\Omega(t_0)\phi^*(t_0)-i\dot \phi^*(t_0)\right)\,,~~~\label{c1t}\\
c_2\to c_2(t_0)&=&\frac{e^{-i\Omega(t_0)t_0}}{\sqrt{2\Omega(t_0)}}\left(-\Omega(t_0)\phi(t_0)+i\dot \phi(t_0)\right)\,,~~~\label{c2t}
\end{eqnarray}
while the third one gives us the associated dynamical mass in the rest frame,
\begin{equation}
\hat m\to \hat m(t_0)=\lim_{p\to 0}\Omega_p(t_0)\equiv \Omega(t_0)\,,
\end{equation}
since $\ddot \phi=-\Omega^2 \phi$, where $\Omega_p(t)^2$ denotes the coefficient of ${\cal F}_p$ in Eq. (\ref{KGred}).
Thus we find that a particle prepared in r.f.v. at the time $t_0$ has the mode function
\begin{equation}\label{fp1}
f_{\vec{p},t_0}(t,\vec{x})=\frac{e^{i \vec{x}\cdot \vec{p}}}{[2\pi a(t)]^{\frac{3}{2}}}{\cal F}_p(t_0,t)\,,
\end{equation}
whose time modulation function
\begin{equation}\label{Fptt}
{\cal F}_p(t_0,t)=c_1(t_0)\phi_p(t)+c_2(t_0)\phi^*_p(t)\,,
\end{equation}
depends on the integration constants (\ref{c1t}) and (\ref{c2t}) which comply with the normalization condition
\begin{equation}
|c_1(t_0)|^2-|c_2(t_0)|^2=\left\{
\begin{array}{lll}
1&{\rm if}& \Omega(t_0)^2>0\\
0&{\rm if}& \Omega(t_0)^2<0
\end{array}\right.\,.
\end{equation}
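This dichotomy can be traced back directly to Eqs. (\ref{c1t}) and (\ref{c2t}). When $\Omega(t_0)$ is real a short computation gives
$$
|c_1(t_0)|^2-|c_2(t_0)|^2=\frac{1}{2\Omega(t_0)}\left[\left|\Omega\phi^*-i\dot\phi^*\right|^2-\left|\Omega\phi-i\dot\phi\right|^2\right]_{t=t_0}=i\left[\phi^*\dot\phi-\dot\phi^*\phi\right]_{t=t_0}\,,
$$
i.e. the conserved Wronskian of $\phi$, which equals unity for correctly normalized modes; for $\Omega(t_0)^2<0$ we have $\Omega(t_0)=i|\Omega(t_0)|$, so that the two moduli above coincide and the difference vanishes.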
The set $\{f_{\vec{p},t_0}| \vec{p}\in {\Bbb R}^3\}$ forms a basis in ${\cal K}_+$ while the set $\{f_{\vec{p},t_0}^*| \vec{p}\in {\Bbb R}^3\}$ is the corresponding basis of ${\cal K}_-$ in the r.f.v. prepared at $t=t_0$.
In general, the r.f.v. is dynamic, being associated with a time-dependent dynamical mass $\hat m(t)=\Omega(t)\in {\Bbb R}$. The time domain $D_t=D_t^+\cup D_t^-$ is split into the tardyonic part $D_t^+=\{t|\Omega(t)^2>0\}$ and the tachyonic one, $D_t^-=\{t|\Omega(t)^2<0\}$. All the tachyonic states with $\Omega(t)=i|\Omega(t)|$ are eliminated as having null norms. Thus in r.f.v. the scalar field survives only on $D_t^+$.
As mentioned the r.f.v. becomes stable only on the Minkowski and de Sitter spacetimes where the energy operator is conserved, satisfying $[H_0,\Omega]=i\partial_t\Omega=0$.
\subsection{de Sitter expanding universe}
Let us consider first an example of stable r.f.v. on the expanding portion of the de Sitter spacetime, $M$, having the scale factor $a(t)=e^{\omega t}$ (where $\omega$ is the Hubble-de Sitter constant in our notation) defined for $t\in(-\infty,\infty)$, giving the conformal time $t_c$ and the function $a(t_c)$ as
\begin{equation}\label{tc}
t_c=-\frac{1}{\omega} e^{-\omega t}\in (-\infty,0]\,, \quad a(t_c)=-\frac{1}{\omega t_c}\,.
\end{equation}
In the conformal chart the Klein-Gordon equation is analytically solvable giving the mode functions of the momentum basis of the form (\ref{fp}) having the time modulation functions
\begin{equation}\label{fdS}
{\cal F}_p(t_c)=\ c_1\phi_p(t) + c_2\phi_p^*(t)\,,\quad \phi_p(t)=\frac{1}{\sqrt{\pi\omega}}K_{\nu}(ipt_c)\,,
\end{equation}
where
\begin{equation}\label{ndS}
\nu=\left\{\begin{array}{lll}
\sqrt{\frac{9}{4}-\mu^2}&{\rm for} & \mu<\frac{3}{2}\\
i\kappa\,,\quad \kappa= \sqrt{\mu^2-\frac{9}{4}}&{\rm for} & \mu>\frac{3}{2}
\end{array} \right. \,, \quad \mu=\frac{m}{\omega}\,.
\end{equation}
By using Eq. (\ref{KuKu}) we find that the normalization condition (\ref{normF}) is fulfilled only if we take
\begin{equation}\label{norC}
\left|c_1\right|^2-\left|c_2\right|^2=1\,.
\end{equation}
We assume first that $m>\frac{3}{2}\,\omega$ and solve the system (\ref{V2}) in the conformal chart $\{t_c,\vec{x}\}$, where the de Sitter time modulation function has the form (\ref{fdS}) with $\nu=i\kappa$, while the Minkowskian one (\ref{fp0}) takes the form
\begin{equation}
\hat{\cal F}[t(t_c)]=\frac{(-\omega t_c)^{\frac{i E}{\omega}}}{\sqrt{2E}}\,.
\end{equation}
Moreover, since in this case the limit $p\to 0$ is delicate, we first solve this system for $p\not= 0$ and only then evaluate the limit. From the first two equations we obtain the integration constants
\begin{eqnarray}
c_1(p)&=&\frac{(-\omega t_c)^{\frac{i E}{\omega}}}{\sqrt{2 \pi\omega E}}\left[\omega p t_c K_{i\kappa+1}(-ipt_c)\right.\nonumber\\
&&\hspace*{14mm}\left.+(E-\kappa\omega)K_{i\kappa}(-ipt_c)\right]\,,\\
c_2(p)&=&-\frac{(-\omega t_c)^{\frac{i E}{\omega}}}{\sqrt{2 \pi\omega E}}\left[\omega p t_c K_{i\kappa+1}(ipt_c)\right.\nonumber\\
&&\hspace*{14mm}\left.+(E-\kappa\omega)K_{i\kappa}(ipt_c)\right]\,,
\end{eqnarray}
while from the last one,
\begin{equation}
\lim_{p\to 0}\left[E^2-\kappa^2\omega^2-\omega^2p^2 t_c^2\right]=(\hat m^2-\omega^2\kappa^2)=0\,,
\end{equation}
gives the expected dynamical mass
\begin{equation}\label{dm}
\hat m=\omega\kappa=\sqrt{m^2-\frac{9}{4}\,\omega^2}\,,
\end{equation}
related to the well-known rest energy \cite{CGRG}. Then for $p\to 0$ we obtain the constants $c_1=\lim_{p\to0}c_1(p)$ and $c_2=\lim_{p\to0}c_2(p)$ which have the absolute values
\begin{eqnarray}
|c_1|&=&\frac{e^{\pi\kappa}}{\sqrt{e^{2\pi\kappa}-1}}\,,\label{C1}\\
|c_2|&=& \frac{1}{\sqrt{e^{2\pi\kappa}-1}}\,,\label{C2}
\end{eqnarray}
resulted from Eqs. (\ref{IK}) and (\ref{I0}). Finally, by substituting these values in Eq. (\ref{fdS}), we obtain the definitive result of the time modulation functions of positive energy in the r.f.v.,
\begin{equation}
{\cal F}_p(t_c)=\sqrt{\frac{\pi}{\omega}}\left(\frac{p}{2\omega}\right)^{-i\kappa}\frac{I_{i\kappa}(ipt_c)}{\sqrt{e^{2\pi\kappa}-1}}\,,
\end{equation}
where the overall phase factor was introduced to ensure the correct limit for $p\to 0$ as given by Eq. (\ref{I0}). These functions are correctly normalized since the integration constants (\ref{C1}) and (\ref{C2}) satisfy the condition (\ref{norC}). Note that these results can be rewritten in terms of the cosmic time $t$ according to Eq. (\ref{tc}).
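The limiting values (\ref{C1}) and (\ref{C2}) can also be checked numerically, by evaluating the exact coefficients $c_1(p)$ and $c_2(p)$ given above at small momenta. A minimal \texttt{mpmath} sketch, with the illustrative choices $\omega=1$, $\kappa=\frac{1}{2}$, $t_c=-1$ and $E=\hat m=\omega\kappa$ made only for this test, reads
\begin{verbatim}
from mpmath import mp, mpf, sqrt, exp, pi, besselk

mp.dps = 25                     # working precision
w, kap = mpf(1), mpf('0.5')     # omega and kappa (illustrative values)
E, tc  = w * kap, mpf(-1)       # rest energy E = hat m, fixed conformal time

def c1(p):
    pref = (-w * tc)**(1j * E / w) / sqrt(2 * pi * w * E)
    return pref * (w * p * tc * besselk(1j * kap + 1, -1j * p * tc)
                   + (E - kap * w) * besselk(1j * kap, -1j * p * tc))

def c2(p):
    pref = (-w * tc)**(1j * E / w) / sqrt(2 * pi * w * E)
    return -pref * (w * p * tc * besselk(1j * kap + 1, 1j * p * tc)
                    + (E - kap * w) * besselk(1j * kap, 1j * p * tc))

for p in [mpf('1e-2'), mpf('1e-4'), mpf('1e-6')]:
    print(p, abs(c1(p)), abs(c2(p)))

print(exp(pi * kap) / sqrt(exp(2 * pi * kap) - 1),    # Eq. (C1)
      1 / sqrt(exp(2 * pi * kap) - 1))                # Eq. (C2)
\end{verbatim}
and the printed moduli converge to the closed forms (\ref{C1}) and (\ref{C2}) as $p\to0$.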
Furthermore, we consider the case of $m<\frac{3}{2}\,\omega$, applying the same method for fixing the r.f.v. We first solve the first two equations of the system (\ref{V2}) for $p\not=0$ and an arbitrary time $t_c$, obtaining
\begin{eqnarray}
c_1(p,t_c)&=&\frac{(-\omega t_c)^{\frac{i E}{\omega}}}{\sqrt{2 \pi\omega E}}\left[(E+i\nu\omega)K_{\nu}(-ipt_c)\right.\nonumber\\
&&\hspace*{18mm}\left. -\omega p t_c K_{\nu+1}(-ipt_c)\right]\,, \\
c_2(p,t_c)&=&\frac{(-\omega t_c)^{\frac{i E}{\omega}}}{\sqrt{2 \pi\omega E}}\left[(E+i\nu\omega)K_{\nu}(ipt_c)\right.\nonumber\\
&&\hspace*{18mm}\left. +\omega p t_c K_{\nu+1}(ipt_c)\right]\,.
\end{eqnarray}
From the third equation we find the expected condition
\begin{equation}
\lim_{p\to 0}\left[E^2+\nu^2\omega^2-\omega^2p^2 t_c^2\right]=(\hat m^2+\omega^2\nu^2)=0\,,
\end{equation}
giving the tachyonic dynamical mass $\hat m=\pm i\nu\omega$. Moreover, we find that in the rest frame we have
\begin{equation}
\lim_{p\to 0} c_1(p,t_c)=\lim_{p\to 0} c_2(p,t_c)=0\,,
\end{equation}
which means that if we set the r.f.v. then the particles with $m<\frac{3}{2}\,\omega$ cannot survive on the de Sitter expanding portion.
The above results can be now gathered in the synthetic form of the mode functions of positive frequency in the conformal chart,
\begin{equation}
f_{\vec{p}}(t_c,\vec{x})=\left(\frac{-\omega t_c}{2\pi}\right)^{\frac{3}{2}}\sqrt{\frac{\pi}{\omega}}\left(\frac{p}{2\omega}\right)^{-\nu}\frac{I_{\nu}(ipt_c)\,e^{i\vec{p}\cdot\vec{x}}}{\sqrt{e^{-2i\pi\nu}-1}}\,,
\end{equation}
that hold for any real or imaginary value of $\nu$, given by Eq. (\ref{ndS}). In the tachyonic case, when $\nu$ takes real values, the squared norm of $f_{\vec{p}}$ vanishes since then we have $ I_{\nu}(-i pt_c) \stackrel{\leftrightarrow}{\partial_{t_c}}I_{\nu}(ipt_c)\propto I_{\nu}(i pt_c) \stackrel{\leftrightarrow}{\partial_{t_c}}I_{\nu}(ipt_c)= 0$.
Thus we have shown that the scalar r.f.v. on the de Sitter expanding universe is stable, corresponding to a time-independent dynamical mass (\ref{dm}) which makes sense only when $m>\frac{3}{2}\,\omega$. In other words, the frequency separation in the rest frames can be done only for the scalar fields which satisfy this condition.
Otherwise we have either to eliminate the scalar fields with $m<\frac{3}{2}\,\omega$ or to resort to another vacuum, such as the adiabatic Bunch-Davies one \cite{BD}, which can be set for particles of any mass by taking $c_1=\frac{1}{\sqrt{\pi\omega}}$ and $c_2=0$.
\subsection{Milne-type spatially flat FLRW spacetime}
Let us now consider an example of a manifold $M$ where we do not have adiabatic vacua, so that we are left only with an unstable r.f.v. corresponding to a time-dependent dynamical mass. This is the $(1+3)$-dimensional spatially flat FLRW manifold with the scale factor $a(t)=\omega t$, which determines the conformal time as
\begin{equation}\label{tt}
t_c=\int \frac{dt}{a(t)}=\frac{1}{\omega} \ln(\omega t)\in (-\infty,\infty)~\to~ a(t_c)=e^{\omega t_c}\,.
\end{equation}
The constant $\omega$, introduced from dimensional considerations, is a useful free parameter which, in the case of the genuine Milne universe (of negative space curvature), must be fixed to $\omega=1$ in order to eliminate the gravitational sources \cite{BD}.
This spacetime $M$ is produced by isotropic gravitational sources, i.e. the density $\rho$ and the pressure $p$, evolving in time as
\begin{equation}
\rho=\frac{3}{8\pi G}\frac{1}{t^2}\,, \quad p=-\frac{1}{8\pi G}\frac{1}{t^2}\,,
\end{equation}
and vanishing for $t\to\infty$. These sources govern the expansion of $M$ that can be better observed in the chart $\{t, \vec{\hat x}\}$, of 'physical' space coordinates $\hat x^i=\omega t x^i$, where the line element
\begin{equation}
ds^2=\left(1-\frac{1}{t^2}\vec{\hat x}\cdot \vec{\hat x}\right)dt^2 + 2 \vec{\hat x}\cdot d\vec{\hat x}\,\frac{dt}{t}-d\vec{\hat x}\cdot d\vec{\hat x}\,,
\end{equation}
exhibits an expanding horizon at $|\vec{\hat x}|=t$ and tends to the Minkowski spacetime when $t\to \infty$ and the gravitational sources vanish.
In the FLRW chart $\{t,\vec{x}\}$ of this spacetime the Klein-Gordon equation is analytically solvable, the fundamental solutions having the time modulation functions
\begin{equation}\label{solM}
{\cal F}_p(t)=c_1\phi_p(t) +c_2\phi_p^*(t)\,,\quad \phi_p(t)=\sqrt{\frac{t}{\pi}}\, K_{\nu}(imt)\,,
\end{equation}
where
\begin{equation}\label{nM}
\nu=\sqrt{1-\frac{p^2}{\omega^2}}\,,
\end{equation}
can take real or pure imaginary values for $p>\omega$.
{ \begin{figure}
\centering
\includegraphics[scale=0.70]{./A}
\caption{The functions $|c_1(t)|$ and $|c_2(t)|$ versus $ct$ for a light particle having the electron mass, $m=m_e$, for which $t_{m_e}\sim 1.1\times 10^{-21}\,{\rm s}$. The plotting domain is $0.6\, t_{m_e}<t< 1.8\, t_{m_e}$. }
\end{figure}}
We observe that here we cannot speak about adiabatic vacua, since the functions (\ref{solM}) are singular at $t=0$. Therefore, we must focus only on the r.f.v., for which the time-dependent integration constants,
\begin{eqnarray}
c_1(t)&=&\frac{e^{-it\hat m}}{2 \sqrt{2\pi t\hat m}}\left[2t mK_0(-imt)\right. \nonumber\\
&&~~~~~~~~~~+\left.(i+2t\hat m)K_1(-imt) \right]\,,\label{CM1}\\
c_2(t)&=&\frac{e^{-it\hat m}}{2 \sqrt{2\pi t\hat m}}\left[2t mK_0(imt)\right.\nonumber\\
&&~~~~~~~~~~-\left.(i+2t\hat m)K_1(imt) \right]\,,\label{CM2}
\end{eqnarray}
result from Eqs. (\ref{c1t}) and (\ref{c2t}).
The corresponding dynamical mass reads
\begin{equation}\label{mM}
\hat m(t)=\sqrt{m^2-\frac{3}{4\,t^2}}\,.
\end{equation}
The functions (\ref{CM1}) and (\ref{CM2}) are singular at $t=0$ and at $t=t_m\equiv\frac{\sqrt{3}}{2 m}$, where $\hat m(t)$ vanishes (as in Fig. 1). From Eq. (\ref{mM}) we see that a particle of mass $m$ has a tachyonic behavior in the domain $D_t^-=(0,t_m)$ and a tardyonic one only if $t\in D_t^+=(t_m, \infty)$. As in the general case, we can verify that
\begin{equation}
|c_1(t)|^2-|c_2(t)|^2=\left\{
\begin{array}{lll}
0&{\rm if}&0< t<t_m\\
{1}&{\rm if}&t>t_m
\end{array}\right.
\end{equation}
showing that on the tachyonic domain the wave function has null norm and thus no physical meaning.
This means that the scalar particles can be prepared only in the tardyonic domain $t>t_m$ where $\hat m(t)$ increases with $t$ such that for $t\to\infty$, when $M$ becomes just the Minkowski spacetime, this tends to $m$. Moreover, in this limit we recover the usual Minkowski scalar modes since the functions $K$ behave as in Eq. (\ref{Km0}) such that
\begin{equation}\label{limc1c2}
\lim_{t\to\infty}|c_1(t)|=1\,, \quad \lim_{t\to\infty}|c_2(t)|=0\,.
\end{equation}
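The behavior encoded in Eqs. (\ref{CM1}), (\ref{CM2}) and (\ref{limc1c2}) is easy to probe numerically. The following \texttt{mpmath} sketch, written in natural units with $m=1$ (so that $t_m=\sqrt{3}/2$), evaluates $|c_1(t)|^2-|c_2(t)|^2$ and $|c_2(t)|$ on the tardyonic domain; the former stays equal to $1$ while the latter decays, in agreement with the limits above:
\begin{verbatim}
from mpmath import mp, mpf, sqrt, exp, pi, besselk

mp.dps = 25
m   = mpf(1)                  # mass in natural units (hbar = c = 1)
t_m = sqrt(3) / (2 * m)       # border of the tachyonic domain

def mhat(t):                  # dynamical mass, Eq. (mM)
    return sqrt(m**2 - 3 / (4 * t**2))

def c1(t):
    mh = mhat(t)
    pref = exp(-1j * t * mh) / (2 * sqrt(2 * pi * t * mh))
    return pref * (2 * t * m * besselk(0, -1j * m * t)
                   + (1j + 2 * t * mh) * besselk(1, -1j * m * t))

def c2(t):
    mh = mhat(t)
    pref = exp(-1j * t * mh) / (2 * sqrt(2 * pi * t * mh))
    return pref * (2 * t * m * besselk(0, 1j * m * t)
                   - (1j + 2 * t * mh) * besselk(1, 1j * m * t))

for t in [mpf('1.2') * t_m, 2 * t_m, 5 * t_m, 20 * t_m]:
    print(t, abs(c1(t))**2 - abs(c2(t))**2, abs(c2(t)))
\end{verbatim}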
All these results can be encapsulated in the definitive form of the mode functions of positive frequency, prepared at the time $t_0>t_m$ and defined for $t>t_0$, that read
\begin{eqnarray}
f_{\vec{p}}(t_0,t,\vec{x})&=&\frac{e^{i \vec{p}\cdot\vec{x}}}{(2\pi\omega t)^{\frac{3}{2}}}\left[c_1(t_0) \sqrt{\frac{t}{\pi}}\, K_{\nu}(imt)\right.\nonumber\\
&&\hspace*{12mm}\left.+c_2(t_0) \sqrt{\frac{t}{\pi}}\, K_{\nu}(-imt)\right]\,,
\end{eqnarray}
where $\nu$ depends on $p$ as in Eq. (\ref{nM}).
{ \begin{figure}
\centering
\includegraphics[scale=0.65]{./B}
\caption{The function $n(t_0,t)$ versus $ct$ in the domain $t_m<t<14\, t_m$ for $m=m_e$ and: $t_0=2.4\, t_m$ (1), $t_0=2.6\, t_m$ (2), $t_0=2.8\, t_m$ (3), $t_0=3\, t_m$ (4), $t_0=3.2\, t_m$ (5).}
\end{figure}}
The instability of the r.f.v. on this expanding manifold gives rise to c.p.c. that can be analyzed thanks to our previous results, which hold for any $t>t_0$. Thus we can study how the particles created at $t_0$ are measured at any moment $t>t_0$ by calculating the Bogolyubov coefficients between the bases $f_{\vec{p}}(t_0)$ and $f_{\vec{p}}(t)$ which, according to Eq. (\ref{KuKu}), read
\begin{eqnarray}
\alpha(\vec{p},t_0;\vec{p}',t)&=&\langle f_{\vec{p}}(t_0), f_{\vec{p}'}(t)\rangle=\delta^3(\vec{p}-\vec{p}') \nonumber\\
&\times&\left[c_1^*(t_0)c_1(t)-c_2^*(t_0)c_2(t) \right]\,, \\
\beta(\vec{p},t_0;\vec{p}',t)&=&\langle f_{\vec{p}}^*(t_0), f_{\vec{p}'}(t)\rangle=\delta^3(\vec{p}-\vec{p}') \nonumber\\
&\times& \left[c_2(t_0)c_1(t)-c_1(t_0)c_2(t) \right]\,,
\end{eqnarray}
Then the density of the new particles or antiparticles created between $t_0$ and $t$ is proportional to,
\begin{equation}\label{dens}
n(t_0,t)\propto\left| c_2(t_0)c_1(t)-c_1(t_0)c_2(t)\right|^2\,.
\end{equation}
\noindent In addition, we observe that the rate of c.p.c. can also be estimated as
\begin{equation}\label{Rate}
R(t_0,t)\propto\frac{d\, n(t_0,t)}{dt}\,.
\end{equation}
Thus we can point out the effects of the dynamic r.f.v., which tends to stability as time increases, since then
\begin{equation}
\lim_{t\to \infty} n(t_0,t)\sim |c_2(t_0)|^2\,, \quad \lim_{t\to \infty} R(t_0,t)=0\,,
\end{equation}
as we deduce from Eqs. (\ref{limc1c2}). Hereby we see that the dynamical effect is visible only for the very old particles, prepared at $t_0<5\,t_m$, since the function $c_2(t_0)$ decreases rapidly to zero as $t_0$ increases and $\hat m(t_0)\to m$. Thus for the younger particles, prepared at $t_0 > (5-10)\, t_m$, the dynamical effect is inhibited and we remain with an apparently stable r.f.v. of the Bunch-Davies type (with $c_1=1$ and $c_2=0$), in which the mode functions can be approximated as
\begin{equation}
f_{\vec{p}}(t,\vec{x})\sim \frac{e^{i \vec{p}\cdot\vec{x}}}{(2\pi\omega t)^{\frac{3}{2}}}\sqrt{\frac{t}{\pi}}\, K_{\nu}(imt)\,,
\end{equation}
independently of the moment $t_0$ at which the particle was prepared.
{ \begin{figure}
\centering
\includegraphics[scale=0.65]{./C}
\caption{The function $R(t_0,t)$ versus $ct$ in the domain $t_m<t<14\, t_m$ for $m=m_e$ and: $t_0=2.4\, t_m$ (1), $t_0=2.6\, t_m$ (2), $t_0=2.8\, t_m$ (3), $t_0=3\, t_m$ (4), $t_0=3.2\, t_m$ (5).}
\end{figure}}
Finally we must specify that, in general, the dynamic effect discussed above is very fast, lasting an extremely short period of time even at the quantum scale, since by definition $t_m=\frac{\sqrt{3}}{2m}$ (or $\frac{\sqrt{3}}{2}\frac{\hbar}{m c^2}$ in SI units) is very small. For example, if we take $m$ to be just the electron mass $m_e$ then $t_{m_e} \sim 1.1\times 10^{-21}\,{\rm s}$, such that for the particles born at cosmic times $t_0>10^{-20}\,{\rm s}$ the r.f.v. is apparently stable. Only the particles prepared at $t_0<10^{-20}\,{\rm s}$ display this effect, as can be seen from Figs. 2 and 3 where we plot the functions (\ref{dens}) and (\ref{Rate}) versus $ct$ instead of $t$ in order to avoid too small numbers. Thus it is obvious that the dynamical effects of the r.f.v. may be of interest only at the quantum scale in the cosmology of the very early Milne-type universe.
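For orientation, the quoted value follows from the standard constants $\hbar\simeq 1.05\times 10^{-34}\,{\rm J\,s}$ and $m_ec^2\simeq 8.19\times 10^{-14}\,{\rm J}$,
$$
t_{m_e}=\frac{\sqrt{3}}{2}\,\frac{\hbar}{m_ec^2}\simeq 0.87\times\frac{1.05\times 10^{-34}\,{\rm J\,s}}{8.19\times 10^{-14}\,{\rm J}}\simeq 1.1\times 10^{-21}\,{\rm s}\,,
$$
so that the whole dynamical stage is confined to times of the order of the Compton time $\hbar/(mc^2)$ of the particle.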
\section{Concluding remarks}
We proposed here a method of projecting the quantum states from a state space of a given geometry into another state space generated by a different geometry, keeping the correct normalization, which is crucial in interpreting the quantum quantities (probabilities, expectation values, transition amplitudes, etc.). This method helped us to define, on any spatially flat FLRW spacetime, the Minkowskian states we need for setting the r.f.v. of the massive scalar field which, in contrast to the Dirac one, does not have a Minkowskian behavior in rest frames on the FLRW manifolds. In this manner, we obtained a stable r.f.v. on the de Sitter expanding universe and, for the first time, we found a dynamical vacuum, corresponding to a time-dependent dynamical mass, on a Milne-type spacetime. In this last case, the dynamic r.f.v. gives rise to a very fast c.p.c. that could be of interest, but only in the very early Milne-type universe. It is remarkable that in the r.f.v. all the possible tachyonic behaviors (e.g. for $m<\frac{3}{2}\,\omega$ in the de Sitter case and $t<t_m$ in the Milne-type universe) are eliminated in a natural manner, the corresponding mode functions turning out to have null norms. These results may improve the study of c.p.c. on the FLRW manifolds by combining the r.f.v. with the other vacua proposed so far.
On the other hand, we must stress that the r.f.v. cannot be defined for the massless fields, which do not have rest frames. In the case of the Maxwell and massless Dirac fields this is not an impediment, since the neutrino and Maxwell equations are conformally covariant, such that in the conformal charts of the FLRW spacetimes one may take over the frequency separation from the flat case. The only problem which remains partially unsolved is the vacuum of the massless scalar field, whose equation is no longer covariant under conformal transformations. This sensitive case is revisited from time to time with the hope of finding a convenient interpretation \cite{coco}.
Another approach is the quantum theory of interacting fields on curved manifolds in which the amplitudes of the quantum transitions can be calculated by using perturbations in terms of free fields \cite{Lot1,Lot2,Lot3,R1,R2,R3,A1,A2} as in our recent de Sitter QED \cite{CQED,Cr1,Cr2}. Even though in this framework only adiabatic vacua were considered so far, we have now the opportunity of using many types of vacua for improving the calculation of the transition amplitudes. Thus, for example, in a collision process we may take the incident beam in the adiabatic vacuum and the target in the r.f.v.. Moreover, for the internal lines of the Feynman diagrams the r.f.v. is the favorite candidate since this can be defined naturally for the massive fields on any spatially flat FLRW spacetime. Thus by using many well-defined vacua we could combine the methods of c.p.c. with those of the perturbative quantum field theory for analysing various quantum effects in evolving universes.
\section{\bf Introduction} \label{secintro}
The present paper is devoted to the study of interior and boundary $L^q$-integrability for the gradient of weak solutions to time independent quasi-linear equations of the $p$-Schr\"odinger type
\begin{equation}\label{maineqm}
-\mathrm{div}\, (|Du|^{p-2}Du) + V |u|^{p-2}u = 0 \ \ \textrm{ in } \ \Omega,
\end{equation}
where $1<p<\infty$, $\Omega\subset \mr^n$($n\geq2$) is open and bounded, and the non-negative potential $V$ is taken in an appropriate class. We notice that if $p=2,$ the equation \eqref{maineqm} becomes
\begin{equation} \label{propeq}
- \Delta u + Vu=0\ \ \ \textrm{in}\ \ \Omega,
\end{equation}
which is the classical (elliptic) Schr\"odinger equation. From the viewpoint of the calculus of variations, the equation \eqref{maineqm} is the Euler-Lagrange equation of the functional
$$
W^{1,p}(\Omega)\ni u \ \ \mapsto\ \ \int_{\Omega} \left[|Du|^p+V|u|^p\right]\, dx,
$$
hence it is one of the natural nonlinear generalizations of the Schr\"odinger equation \eqref{propeq}. Moreover, problems of this type arise in various areas of physics, such as nonlinear quantum field theory, nonlinear optics, plasma physics, condensed matter physics, biophysics, fluid mechanics, etc. We refer to \cite{APT1,BS1, LS1,MF1,SS} for the general physical background of this equation.
Research on the Schr\"odinger type equations which are fundamental ones of quantum mechanics plays a significant role in the fields of mathematical physics. In particular, $L^q$-regularity theory for linear Schr\"odinger equations was first introduced by Shen \cite{Sh1}. He obtained $L^q$-estimates by assuming that $V$ belongs to the $\mathcal B_{\gamma}$ class for some $\gamma\geq \frac n2$ which is a certain reverse H\"older class (see below for the definition of $\mathcal B_\gamma$). More precisely, for the Schr\"odinger equations with non-divergence data of the form $-\Delta u+Vu=f$ in $\mr^n$, he showed $\Vert D^2u\Vert_{L^q(\mr^n)}+\Vert Vu\Vert_{L^q(\mr^n)} \leq c(q)\|f\|_{L^q(\mr^n)}$ for all $1<q\leq \gamma$, and for the equations with divergence data of the form
\begin{equation}\label{divschrodinger}
-\Delta u+Vu=-\textrm{div}\, F\ \ \ \text{in}\ \ \mr^n,
\end{equation}
he also proved that
$$
\Vert Du \Vert_{L^{q}(\mr^n)} +\chi_{\{q\leq 2\gamma\}} \Vert V^{\frac12}u \Vert_{L^{q}(\mr^n)} \leq c(q)\Vert F \Vert_{L^{q}(\mr^n)} , \ \ \textrm{for all}\ \ (\gamma^*)' \leq q \leq \gamma^*,
$$
where $\gamma^*=\frac{n\gamma}{n-\gamma}$ when $\gamma<n$ (if $\gamma\geq n$, then $q$ can be any number in $(1,\infty)$). Here, we remark that the range of $q$ is optimal, see \cite[Section 7]{Sh1}. These results have been recently extended to linear elliptic/parabolic Schr\"odinger equations with discontinuous coefficients on sufficiently smooth domains in several papers for instance \cite{BHS1,BBHV1,PT1}, by using the results in \cite{Sh1} together with the commutator method and the standard flattening and covering arguments. We also refer to \cite{CFG1, D1, FPR, K1, PT1, Sh0, Sh1} for the regularity theory for (elliptic) Schr\"odinger equations.
The general aim of this paper is to establish interior and boundary $L^q$-regularity theory for nonlinear Schr\"odinger equations in non-smooth domains. In particular, as mentioned earlier, we deal with quasi-linear equations of $p$-Laplacian type which are the natural generalizations of the classical Schr\"odinger equation in the divergence setting. Moreover, the domains we consider here might be non-graph domains which are
beyond the class of Lipschitz domains. We point out that the approach used in \cite{Sh1} cannot be applied to the nonlinear setting. Indeed, Shen in \cite{Sh1} derived the decay estimates for the fundamental solution by means of the Fefferman-Phong Lemma in \cite{Fe} by introducing an auxiliary function $m(x,V)$ which is well-defined for $\gamma \geq \frac{n}{2}.$ Furthermore, on the boundary region we cannot make use of the flattening argument since our domain is supposed to be non-smooth. Therefore, an alternative approach must be adopted in order to handle the structures of the nonlinear operators and the non-smooth domains. To the best of our knowledge, the present paper is the first one treating $L^q$-estimates for Schr\"odinger equations in a non-linear setting, and even for linear Schr\"odinger equations on non-smooth domains.
Now let us present our main equations. We are concerned with the Dirichlet problem for the quasi-linear Schr\"odinger equation of the form
\begin{equation}
\label{maineq}
\left\{\begin{array}{rclcc}
-\mathrm{div}\, \mathbf{a}(x,Du) + V |u|^{p-2}u& = & -\mathrm{div}\, (|F|^{p-2}F) & \textrm{ in } & \Omega, \\
u & = & 0 & \textrm{ on } & \partial \Omega,
\end{array}\right.
\end{equation}
where $1<p<\infty$, $\Omega$ is open and bounded in $\mr^{n}$ with $n\geq2$, and $V:\Omega\to \mr$ is non-negative and at least satisfies $V\in L^{n/p}(\Omega)$ if $p<n$ and $V\in L^t(\Omega)$ for some $t>1$ if $p\geq n$.
A given vector valued function $ \mathbf{a} : \mr^n \times \mr^n \rightarrow \mr^n $ is a Carath\'eodory function, that is, $ \mathbf{a}$ is measurable in the $x$-variable and differentiable in the $\xi$-variable. We will always assume that $\mathbf{a}$ satisfies the following growth and ellipticity conditions:
\begin{equation}\label{aas1}
| \mathbf{a}(x,\xi)|+ | D_{\xi} \mathbf{a}(x,\xi)||\xi| \leq L |\xi|^{p-1}
\end{equation}
and
\begin{equation}\label{aas2}
D_{\xi} \mathbf{a}(x,\xi)\, \eta \cdot \eta \geq \nu |\xi|^{p-2}|\eta|^2
\end{equation}
for almost all $x \in \mr^n$ and any $\xi, \eta \in \mr^n$, and for some constants $L, \nu$ with $0< \nu \leq 1 \leq L.$ A prime example of the nonlinearity $\mathbf{a}$ is
$$
\mathbf{a}(x,\xi) = a(x)|\xi|^{p-2}\xi,\ \ \nu\leq a(\cdot)\leq L,
$$
which is the $p$-Laplacian with the coefficient $a(\cdot).$ We also remark that the above condition \eqref{aas2} implies the monotonicity condition:
\begin{equation}\label{mono}
\left( \mathbf{a}(x,\xi) - \mathbf{a}(x,\eta) \right) \cdot (\xi-\eta) \geq c(p,\nu) \left( |\xi|^2 + |\eta|^2 \right)^{\frac{p-2}{2}} |\xi-\eta|^2
\end{equation}
for any $\xi, \eta \in \mr^n$ and a.e. $x \in \mr^n.$
In particular, if $p \geq 2,$ this reduces to
\begin{equation}\label{mono1}
\left( \mathbf{a}(x,\xi) - \mathbf{a}(x,\eta) \right) \cdot (\xi-\eta) \geq c(p,\nu) |\xi-\eta|^p.
\end{equation}
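Indeed, for $p\geq2$ the estimate \eqref{mono1} follows from \eqref{mono} and the elementary inequality $|\xi-\eta|^2\leq 2\left(|\xi|^2+|\eta|^2\right)$, which gives
$$
\left(|\xi|^2+|\eta|^2\right)^{\frac{p-2}{2}}|\xi-\eta|^2\geq 2^{-\frac{p-2}{2}}\,|\xi-\eta|^{p}\,.
$$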
Under the above basic setting, we say that $u \in W^{1,p}_0(\Omega)$ is a weak solution to the problem \eqref{maineq} if
\begin{equation}\label{weakform}
\int_{\Omega} \mathbf{a}(x, Du) \cdot D\varphi \, dx + \int_{\Omega} V |u|^{p-2}u \cdot \varphi \, dx =\int_{\Omega} |F|^{p-2} F \cdot D \varphi\, dx
\end{equation}
holds for any $\varphi \in W_0^{1,p}(\Omega).$ We note that if $u\in W^{1,p}_0(\Omega)$, then $\|Du\|_{L^p(\Omega)}$ and $\|Du\|_{L^p(\Omega)}+\|V^{\frac1p}u\|_{L^p(\Omega)}$ are equivalent by the assumption on the potential $V$ and the Sobolev-Poincar\'e inequality, and that the existence and the uniqueness of the weak solution of \eqref{maineq} (even in the case of a non-zero Dirichlet boundary condition such that $u=g$ on $\partial\Omega$ with $g\in W^{1,p}(\Omega)$) follow from the theory of nonlinear functional analysis, see for instance \cite[Chapter 2]{Sho1}.
For the potential $V:\Omega\to \mr$ considered in the problem \eqref{maineq}, we suppose that
$V$ belongs to $\mathcal{B}_{\gamma}$ for some $ \gamma \in [ \frac{n}{p}, n)$ when $p<n$ and for some $\gamma \in (1,n)$ when $p\geq n.$ We say that $V:\mr^n\to [0,\infty)$ belongs to $\mathcal{B}_\gamma$ for some $ \gamma >1$ if $V\in L^{\gamma}_{loc}(\mr^n)$ and there exists a constant $b_{\gamma}>0$ such that the \textit{reverse H\"older inequality}
\begin{equation}\label{VBqclass}
\left( \frac{1}{|B|} \int_{B} V^{\gamma} \, dx \right)^{\frac{1}{\gamma}} \leq b_\gamma \left( \frac{1}{|B|} \int_{B} V \, dx \right)
\end{equation}
holds for every ball $B$ in $\mr^n.$ This $\mathcal{B}_\gamma$ class, which is a wide class including all nonnegative polynomials, was introduced independently by Muckenhoupt \cite{Mu} and Gehring \cite{Ge} in the study of weighted norm inequalities and quasi-conformal mappings, respectively. One notable element of this class is $ V(x) = |x|^{-n/ \gamma} $
which actually belongs to the $\mathcal{B}_{\tilde{\gamma}}$ class for all $\tilde{\gamma} < \gamma.$ Moreover, the $\mathcal{B}_\gamma$ class is strongly connected to the Muckenhoupt classes, which we will discuss later in Section \ref{Preliminaries}.
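In one space dimension this example is easy to test numerically. The short sketch below, with the illustrative choices $V(x)=|x|^{-1/2}$ (i.e. $n=1$, $\gamma=2$) and the exponent $\tilde\gamma=1.8<\gamma$, evaluates the ratio of the two sides of \eqref{VBqclass} over intervals $(c-r,c+r)$ and finds it bounded (by roughly $1.8$), as the membership $V\in\mathcal B_{\tilde\gamma}$ requires:
\begin{verbatim}
import numpy as np

alpha, tg = 0.5, 1.8          # V(x) = |x|^{-alpha}; test membership in B_{tg}

def avg_pow(c, r, beta):
    # average of |x|^{-beta} over the interval (c - r, c + r), c >= 0, beta < 1
    f = lambda a, b: (b**(1 - beta) - a**(1 - beta)) / (1 - beta)
    lo, hi = c - r, c + r
    total = f(lo, hi) if lo >= 0 else f(0.0, -lo) + f(0.0, hi)
    return total / (2 * r)

def ratio(c, r=1.0):
    return avg_pow(c, r, alpha * tg)**(1 / tg) / avg_pow(c, r, alpha)

# V(-x) = V(x) and the ratio is invariant under x -> r x,
# so centers c >= 0 and r = 1 suffice.
print(max(ratio(c) for c in np.linspace(0.0, 100.0, 401)))
\end{verbatim}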
Our main result is the global integrability of $Du$ and also $V^{\frac1p}u$ for the weak solutions $u$ to the problem \eqref{maineq} with respect to the one of $F$, under a suitable discontinuity condition on the nonlinearity $\mathbf{a}$ and a minimal structure condition on the boundary of the domain $\Omega$ that will be described later in Definition \ref{smallbmo} and \ref{Defreifenberg}, respectively. More precisely, we prove that
\begin{equation}\label{implication1}
F\in L^q\ \Longrightarrow \ Du \in L^q\ \ \text{for each } \left\{\begin{array}{cl}
q\in[p,\gamma^*(p-1))& \text{when }\gamma\in [\frac np,n),\\
q\in[p,\infty)& \text{when } \gamma\in[n,\infty),
\end{array}\right.
\end{equation}
\begin{equation}\label{implication2}
F\in L^q \ \ \Longrightarrow \ \ V^{\frac1p}u \in L^q \ \ \ \text{for each } p\leq q \leq p \gamma,
\end{equation}
by obtaining relevant estimates, see Corollary \ref{maincor} and Remark \ref{mainrmk} in the next section. We would like to emphasize that for the Schr\"odinger equation \eqref{divschrodinger}, that is, the equation \eqref{maineq} when $p=2$ and $\ba(x,\xi)\equiv \xi,$ our results cover the ones in \cite[Corollary 0.10]{Sh1} for $q\geq p=2$. Note that, in this linear case, the validity of the implications \eqref{implication1} and \eqref{implication2} for $ (\gamma^*)'<q<2$ can be achieved via the duality argument, see for instance \cite{Um1}.
For the equation \eqref{maineq} with the null potential, i.e., $V\equiv0$, the $L^q$-estimates, which is sometimes called the (nonlinear) Calder\'on-Zygmund estimates, have been widely studied by many authors.
Iwaniec \cite{Iw1} first obtained the $L^q$-estimates for the $p$-Laplace equations with $p\geq 2$, and then DiBenedetto \& Manfredi \cite{DM1} extended his result to the $p$-Laplace systems with $1<p<\infty$. Later, Caffarelli \& Peral \cite{CP1} considered general equations of the $p$-Laplacian type with discontinuous nonlinearities. Furthermore, Acerbi \& Mingione generalized $L^q$-estimates for the parabolic $p$-Laplace systems with discontinuous coefficients \cite{AM1}. We also refer to \cite{BR1,LO1,MP1,KZ1,Mis1} for problems with $p$-Laplacian type and \cite{AM0,BO1,BOR1,CM1,Ok1} for problems with nonstandard growth.
We briefly discuss the outline of the proof of the $L^q$-estimates. As mentioned earlier, our approach is different from the one used in \cite{Sh1} which is based on the linear operator theory. We adopt a perturbation argument which has turned out to be very useful for the study on the regularity theory for linear and nonlinear PDEs. In particular, we employ the method introduced by Acerbi \& Mingione in \cite{AM1}, see also \cite{Min1} for its origin. To be more concrete, we apply an exit time argument to a nonlinear functional of $Du, V^{\frac1p}|u|$ and $F,$ in order to construct a suitable family of balls which covers the level set for $|Du|+V^{\frac1p}|u|$. Then, on each ball, we compare our equation \eqref{maineq} with the homogeneous equation
$$
-\mathrm{div}\, \ba(x,Dw)+V|w|^{p-2}w=0.
$$
The main part at this step is to find the maximal integrability of $Dw$ and $V^{\frac1p}w$ with corresponding estimates. In view of the classical regularity theory we know the $L^\infty$-boundedness of $w$ (see Lemma~\ref{supvlem}), from which, together with the result in our recent paper \cite{LO1} (see Theorem~\ref{thmDwbdd}), we see that $Dw \in L^{\gamma^*(p-1)}$ and $V^{\frac1p}w\in L^{p\gamma}$ (see Lemma~\ref{lem42}). Here, we point out that the corresponding estimates \eqref{DwDwVwest} and \eqref{DwDwVwest1} are derived in a very delicate way. Especially, at this stage, the $\mathcal B_\gamma$ condition of $V$ plays a crucial role, so that we take advantage of the idea of Fefferman \& Phong in \cite{Fe} to obtain a modified version of the Fefferman-Phong Lemma (see Lemma~\ref{lemrVbddpq}). Then from those corresponding estimates, the $L^q$-estimates for $|Du|+V^{\frac1p}|u|$ are derived by the comparison argument when $q\leq p\gamma$. Furthermore, applying the results in \cite{LO1}, we eventually obtain the $L^q$-estimates for $|Du|$ when $p\gamma<q\leq \gamma^*(p-1).$
The remainder of this paper is organized as follows. In the next section, we state our main results with primary assumptions imposed on the nonlinearlity $\mathbf{a}$ and the domain $\Omega.$ Section~\ref{Preliminaries} deals with the basic properties of $\mathcal{B}_{\gamma}$ class and the auxiliary lemmas to prove the main results. In Section~\ref{sechomo}, we show higher integrability of $Du$ and $V^{\frac1p}u$ for weak solutions $u$ to localized equations of our main problem \eqref{maineq} with $F\equiv 0.$
In Section~\ref{secComestimates}, we obtain the comparison estimates, and finally prove
main results, Theorem~\ref{mainthm} and Corollary \ref{maincor}, in Section~\ref{sec gradient estimates}.
\section{\bf Main result }
\label{secpre}
We start this section with standard notation and definitions. We denote the open ball in $\mr^n$ with center $y\in \mr^n$ and radius $r>0$ by $B_r(y)= \{ x \in \mr^{n} : |x-y|< r \}.$ We also denote $\Omega_r (y)= B_r(y) \cap \Omega$ and $ \partial_w\Omega_{r}(y) = B_r(y) \cap \partial \Omega.$
For the sake of simplicity, we write $B_r=B_r(0),$ $B_r^+=B_r^+(0)$ and $\Omega_{r} = \Omega_{r}(0).$
We shall use the notation
$$ \mint_{U} g \; dx := \frac{1}{|U|} \int_{U} g \;dx. $$
The following two definitions are associated with the main assumptions imposed on the nonlinearity $\mathbf{a}$ and the domain $\Omega.$
\begin{definition} \label{smallbmo} We say that $\mathbf{a}=\mathbf{a}(x,\xi)$ is \textit{$(\delta,R)$-vanishing} if
$$
\sup_{0<\rho\leq R} \ \sup_{y\in\mathbb{R}^n} \mint_{B_{\rho}(y) } \left|\Theta\left( \mathbf{a},B_{\rho}(y) \right)(x) \right| \, dx \leq \delta,
$$
where $$ \Theta\left( \mathbf{a},B_{\rho}(y) \right)(x):= \sup_{\xi \in \mr^n \setminus \{ 0\} } \frac{ \left|\mathbf{a}(x,\xi)-\overline{\mathbf{a}}_{B_{\rho}(y)}(\xi)\right|}{|\xi|^{p-1} }$$
and $$ \overline{\mathbf{a}}_{B_{\rho}(y)}(\xi) := \mint_{B_{\rho}(y)} \mathbf{a}(x,\xi) \;dx.$$
\end{definition}
The above definition implies that the map $x\mapsto \ba(x,\xi)|\xi|^{-(p-1)}$ is a (locally) BMO function with BMO semi-norm less than or equal to $\delta$ for all $\xi\in\mr^n$. Hence we see that the nonlinearity $\ba$ can be discontinuous in the $x$-variable. In particular, if $\ba(x,\xi)=a(x)|\xi|^{p-2}\xi$, then this definition means that $a(\cdot)$ is a BMO function.
\begin{definition}\label{Defreifenberg} Given $\delta \in (0,\frac18)$ and $R>0,$ we say that $\Omega$ is a $(\delta, R)$-Reifenberg flat domain if for every $x \in \partial\Omega$ and every $\rho \in (0, R],$ there exists a coordinate system $\{ y_1, y_2, \dots, y_n\}$ which may depend on $\rho$ and $x,$ such that in this coordinate system $x=0$ and that
$$ B_{\rho}(0) \cap \{ y_n > \delta \rho \} \subset B_{\rho}(0) \cap \Omega \subset B_{\rho}(0) \cap \{ y_n > -\delta \rho \}.
$$
\end{definition}
In the above definition of the Reifenberg flat domain, $\delta$ is usually supposed to be less than $\frac18$. This number comes from the Sobolev embedding, see for instance \cite{To1}. However, it is not important since we will consider $\delta$ sufficiently small. We note that the Lipschitz domains with the Lipschitz constant less than or equal to $\delta$ belong to the class of $(\delta,R)$-Reifenberg flat domains for some $R>0$. In addition, we remark that the $(\delta,R)$-Reifenberg flat domain $\Omega$ has the following measure density conditions:
\begin{equation}\label{dencon}
\sup_{0<\rho\leq R} \sup_{y \in \overline{\Omega}} \frac{\left|B_{\rho}(y)\right|}{\left|\Omega \cap B_{\rho}(y)\right|} \leq \left( \frac{2}{1-\delta} \right)^{n} \leq \left( \frac{16}{7} \right)^n,
\end{equation}
\begin{equation}\label{dencon1}
\inf_{0<\rho\leq R} \inf_{y \in \partial\Omega} \frac{|\Omega\cap B_{\rho}(y) |}{\left|B_{\rho}(y)\right|} \geq \left( \frac{7}{16} \right)^n.
\end{equation}
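Both numerical values correspond to the borderline case $\delta=\frac18$: for $\delta\leq\frac18$ one has $\frac{2}{1-\delta}\leq\frac{2}{1-1/8}=\frac{16}{7}$ in \eqref{dencon}, while \eqref{dencon1} follows since, for $y\in\partial\Omega$, the set $B_{\rho}(y)\cap\{y_n>\delta\rho\}\subset\Omega\cap B_{\rho}(y)$ contains a ball of radius $\frac{(1-\delta)\rho}{2}\geq\frac{7}{16}\,\rho$.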
We refer to \cite{BW1,PS1,Re1,To1} for more details on the Reifenberg flat domains and their applications.
Now let us state the main results in this paper.
\begin{theorem}
\label{mainthm}
Let $u\in W^{1,p}_0(\Omega)$ be a weak solution to \eqref{maineq}. Suppose that $V \in \mathcal{B}_{\gamma}$ for some $ \gamma \in [ \frac{n}{p}, n)$ when $p<n$ and for some $\gamma \in (1,n)$ when $p\geq n.$ For $p\leq q <\gamma^*(p-1)$, there exists a small $\delta = \delta(n, p, L, \nu)>0$ so that if $\mathbf{a}$ is $( \delta, R)$-vanishing and $\Omega$ is a $( \delta, R)$-Reifenberg flat domain for some $R\in(0,1),$ then we have for any $x_0\in \overline{\Omega}$ and $r \in (0, \frac{R}{4}]$ satisfying
$ (4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r}(x_0))} \leq 1,$
\begin{eqnarray}\label{mainlocalest}
\nonumber\left( \mint_{\Omega_{ r}(x_0)} |Du|^q +\chi_{\{q<p\gamma\}} \left[V^{\frac1p}|u|\right]^{q}\, dx\right)^{\frac1q} &&\\
&&\hspace{-6cm}\leq c \left( \mint_{\Omega_{4r}(x_0)} |Du|^p + \left[ V^{\frac1p} |u|\right]^p \,dx \right)^{\frac{1}{p}}+ c \left(\mint_{\Omega_{4 r}(x_0)} \left|F \right|^{q} \, dx\right)^{\frac1q}
\end{eqnarray}
for some $c=c(n,p,q,\gamma,L,\nu,b_{\gamma})>0,$ where $\chi_{\{q<p\gamma\}}:= 1$ if $q<p\gamma$ and $\chi_{\{q<p\gamma\}}:= 0$ if $q\geq p\gamma.$
\end{theorem}
\begin{remark}
Let $\Omega$ be a $(\delta,R)$-Reifenberg flat domain for some small $\delta>0$ and $R>0$, and let $V\in \mathcal B_\gamma$ with $\gamma\geq \frac{n}{p}$ and $p>1$. Define
$$
\rho(y,V):= \sup\left\{r\in(0,R]:r^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{r}(y))} \leq 1 \right\},\quad y\in \overline{\Omega}.
$$
Then by H\"older's inequality, the $\mathcal B_\gamma$ condition of $V$ and \eqref{dencon}, we see that the function $\rho(y,V)$ is comparable to
$$
\tilde\rho(y,V):= \sup\left\{r\in(0,R]:\frac{1}{r^{n-p}} \int_{\Omega_{r}(y)} V \,dx \leq 1 \right\},\quad y\in \overline{\Omega},
$$
i.e. $\frac{1}{c}\tilde\rho(y,V) \leq \rho(y,V)\leq c \tilde\rho(y,V)$ for all $y\in\overline\Omega$ with constant $c$ independent of $y$. When $p=2$, recalling the function $m(y,V)$ defined in \cite[Definition 1.3]{Sh1}, we notice that $\tilde \rho(y,V)$ is a local version of $\frac{1}{m(y,V)}$. In view of this observation, it seems that the restriction $(4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r}(x_0))} \leq 1$ in Theorem~\ref{mainthm} is reasonable.
\end{remark}
\begin{remark}
In Theorem~\ref{mainthm}, we can obtain the estimate \eqref{mainlocalest} uniformly with respect to $x_0$ by taking $r>0$ such that
$$
r \leq \frac{1}{4}\min\left\{R,\Vert V \Vert_{L^{\gamma}(\Omega)}^{-\frac{\gamma}{p\gamma-n}}\right\},
$$
since this together with the fact that $p\gamma>n$ implies
$$ (4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r}(x_0))} \leq (4r)^{p-\frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega)} \leq 1.
$$
\end{remark}
As a consequence of Theorem~\ref{mainthm} and the preceding remark, we obtain the global gradient estimates for solutions to \eqref{maineq}.
\begin{corollary}\label{maincor}
Let $u\in W^{1,p}_0(\Omega)$ be a weak solution to \eqref{maineq}. Suppose that $V \in \mathcal{B}_{\gamma}$ for some $ \gamma \in [ \frac{n}{p}, n)$ when $p<n$ and for some $\gamma \in (1,n)$ when $p\geq n.$ For $p\leq q <\gamma^*(p-1)$, there exists a small $ \delta = \delta(n, p, L, \nu) \in (0,\frac18)$ so that if $\mathbf{a}$ is $( \delta, R)$-vanishing and $\Omega$ is a $( \delta, R)$-Reifenberg flat domain for some $R\in(0,1),$ then we have
\begin{equation}\label{maingloest}
\Vert Du \Vert_{L^{q}(\Omega)} +\chi_{\{q<p\gamma\}} \Vert V^{\frac1p}u \Vert_{L^{q}(\Omega)} \leq c\left(\frac{\mathrm{diam}(\Omega)}{\tilde{R}}\right)^{n\left(\frac{1}{q}-\frac{1}{p}\right)}\Vert F \Vert_{L^{q}(\Omega)}
\end{equation}
for some $c=c(n,p,q,\gamma,L,\nu,b_{\gamma})>0,$ where
$\tilde{R} := \min\left\{ R, \Vert V \Vert_{L^{\gamma}(\Omega)}^{-\frac{1}{p-\frac{n}{\gamma}}}\right\}.$
Here,
$\chi_{\{q<p\gamma\}}:= 1$ if $q<p\gamma$ and $\chi_{\{q<p\gamma\}}:= 0$ if $q\geq p\gamma.$
\end{corollary}
\begin{remark}\label{mainrmk} If $V\in \mathcal B_{\gamma}$, then $V$ belongs to the $\mathcal B_{\gamma+\epsilon}$ class for some small $\epsilon>0$ by virtue of the self improving property of the $\mathcal B_\gamma$ class in Lemma \ref{lemself} below. Therefore, by considering $\gamma+\epsilon$ instead of $\gamma$ in Theorem \ref{mainthm} and Corollary \ref{maincor}, the range of $q$ can be extended to $p\leq q\leq \gamma^*(p-1)$, and the estimates \eqref{mainlocalest} and \eqref{maingloest} can be replaced by
\begin{eqnarray*}
\left( \mint_{\Omega_{ r}(x_0)} |Du|^q +\chi_{\{q\leq p\gamma\}} \left[V^{\frac1p}|u|\right]^{q}\, dx\right)^{\frac1q} &&\\
&&\hspace{-6cm}\leq c \left( \mint_{\Omega_{4r}(x_0)} |Du|^p + \left[ V^{\frac1p} |u|\right]^p \,dx \right)^{\frac{1}{p}}+ c \left(\mint_{\Omega_{4 r}(x_0)} \left|F \right|^{q} \, dx\right)^{\frac1q}
\end{eqnarray*}
and
$$
\Vert Du \Vert_{L^{q}(\Omega)} +\chi_{\{q\leq p\gamma\}} \Vert V^{\frac1p}u \Vert_{L^{q}(\Omega)} \leq c\,\Vert F \Vert_{L^{q}(\Omega)},
$$
respectively.
Finally, if the map $x\mapsto \ba(x,\xi)|\xi|^{-(p-1)}$ is in VMO uniformly for the $\xi$-variable, that is,
$$
\lim_{\rho\to0}\left(\sup_{y\in\mathbb{R}^n} \mint_{B_{\rho}(y) } \left|\Theta\left( \mathbf{a},B_{\rho}(y) \right)(x) \right| \, dx\right)=0,
$$
and the boundary of $\Omega$ is $C^1$, we have the implications \eqref{implication1} and \eqref{implication2} for every $q$ in the ranges stated in there.
\end{remark}
\begin{remark} Under the assumption that $
V\in \mathcal B_{\gamma}\ \ \text{for some }\gamma\in[n,\infty),
$ instead of $\gamma\in(\frac{n}{p},n),$ we see that the results of Theorem \ref{mainthm} and Corollary \ref{maincor} hold for any $q\in[p,\infty).$ Indeed, if $
V\in \mathcal B_{\gamma}$ for some $\gamma\in[n,\infty),
$
it is easily seen that $V\in \mathcal B_{\gamma'}$ for any $\gamma'\in (1,\gamma)$ with the constant $b_{\gamma'}=b_{\gamma}$, by the definition of the $\mathcal B_{\gamma}$ class. Then for any $q\in[p,\infty),$ choosing $\gamma' = \gamma' (n,p,q) \in (\frac{n}{p},n)$ such that
$$
q<(\gamma')^*(p-1),
$$ we consequently obtain the results of Theorem \ref{mainthm} and Corollary \ref{maincor} for any $q\in[p,\infty).$ Hence, we have the implications \eqref{implication1} for $\gamma\in[n,\infty)$ and \eqref{implication2}.
\end{remark}
\section{Preliminaries}\label{Preliminaries}
\subsection{$\mathcal{B}_\gamma$ class}\ \\
In order to introduce the primary features of the $\mathcal{B}_\gamma$ class, let us first recall the Muckenhoupt $A_p$ and $A_\infty$ classes. We say that a nonnegative function $V\in L^1_{loc}(\mr^n)$ is in the $A_p$ class, $V\in A_p$, for some $1< p<\infty$ if and only if
$$
\sup_B \left(\mint_BV\, dx\right)\left(\mint_B V^{-\frac{1}{p-1}}\, dx\right)^{p-1}<\infty
$$
and that $V\in L^1_{loc}(\mr^n)$ is in the $A_\infty$ class, $V\in A_\infty$, if and only if
$$
\sup_B \left(\mint_BV\, dx\right)\exp\left(\mint_B \log V^{-1}\,dx\right)<\infty,
$$
where the supremum is taken over all balls $B \subset \mr^n.$ From the definition of $\mathcal{B}_{\gamma}$ in \eqref{VBqclass}, we notice that $V\in \mathcal{B}_{\gamma}$ for $\gamma\in(1,\infty)$ if and only if
$$
\sup_B\left(\mint_BV\, dx\right)^{-1} \left(\mint_{B}V^\gamma\,dx\right)^{\frac1\gamma}<\infty,
$$
where the supremum is taken over all balls $B \subset \mr^n,$ which is very similar to the conditions defining the $A_p$ and $A_\infty$ classes. Indeed, we have the following equivalent conditions, for whose proof we refer to \cite[Theorem 9.3.3]{G1}.
\begin{lemma}\label{lemequiv}Let $V\in L^1_{loc}(\mr^n)$ be nonnegative. The following are equivalent
\begin{itemize}
\item[(1)] $V\in A_\infty$.
\item[(2)] There exist $\theta,\sigma\in(0,1)$ such that
$$
\left|\left\{x\in B: V(x)\leq \theta \mint_{B}V\, dy \right\}\right|\leq \sigma|B|
$$
for every ball $B$ in $\mr^n$.
\item[(3)] $V\in \mathcal{B}_{\gamma}$ for some $\gamma>1$.
\item[(4)] $V\in A_p$ for some $p>1$.
\end{itemize}
In particular, if $V\in A_\infty$, then there exists $\theta\in(0,1)$ such that
$$
\left|\left\{x\in B: V(x)\leq \theta \mint_{B}V\, dy \right\}\right|\leq \frac12|B|
$$
for every ball $B$ in $\mr^n$, that is, one can choose that $\sigma=\frac12$.
\end{lemma}
From the above equivalent conditions and the self improving property of the $A_p$ classes, one can deduce the self improving property of the $\mathcal{B}_\gamma$ classes as follows.
\begin{lemma}\label{lemself}
If $V\in \mathcal{B}_\gamma$ for some $\gamma>1$, then $V\in \mathcal{B}_{\gamma+\epsilon}$ for some small $\epsilon>0$.
\end{lemma}
\subsection{Gradient estimates for equations with mixed data}\ \\
The next two results are local Calder\'on-Zygmund estimates for elliptic equations of $p$-Laplace type involving mixed data. Let us consider the following problem
\begin{equation}
\label{homoeqfF}
\left\{\begin{array}{rclcc}
-\mathrm{div}\, \mathbf{a}(x,Dw)& = &f - \mathrm{div}\,(|F|^{p-2}F) & \textrm{ in } & \Omega_{2r}(x_0), \\
w & = & 0 & \textrm{ on } & \partial_w\Omega_{2r}(x_0)\ \text{if}\ B_{2r}(x_0)\not\subset\Omega.
\end{array}\right.
\end{equation}
Here, the `mixed data' means $f-\mathrm{div}\,(|F|^{p-2}F)$. We note that if $f\equiv 0$, the Calder\'on-Zygmund estimates have been obtained for instance in \cite{BR1,MP1}, and if $F\equiv 0$ and $2-\frac1n <p<n$, these can be found for instance in \cite{Ph1}. From those results, we can expect a similar result for the mixed problem \eqref{homoeqfF}, and the authors recently obtained the desired one in \cite{LO1}. In view of the Sobolev embedding, we consider two cases: $q>\max\{p,\frac{(p-1)n}{n-1}\}$ with $1<p<\infty$, and $p<q\leq \frac{(p-1)n}{n-1}$ with $p>n$.
\begin{theorem}\label{thmDwbdd}
Let $1<p<\infty$ and $q>\max\{p,\frac{(p-1)n}{n-1}\}$. There exists a small $ \delta = \delta(n, L, \nu, p, q) \in (0,\frac18) $ so that if $\mathbf{a}$ is $( \delta, R)$-vanishing and $\Omega$ is a $(\delta,R)$-Reifenberg flat domain for some $R\in(0,1)$, then for any $x_0\in\overline\Omega$, $r\in(0,\frac{R}{2}]$ and weak solution $w\in W^{1,p}(\Omega_{2r}(x_0))$ of \eqref{homoeqfF} with $F\in L^q(\Omega_{2r}(x_0))$ and $f\in L^{(q/(p-1))_*}(\Omega_{2r}(x_0))$, we have
\begin{eqnarray}\label{homoeqestimate}
\nonumber \left(\mint_{\Omega_r(x_0)} |Dw|^{q} \, dx\right)^{\frac{1}{q}}
\nonumber &\leq& c \left( \mint_{\Omega_{2r}(x_0)} |Dw|^{p}\, dx\right)^{\frac{1}{p}}+c\left(\mint_{\Omega_{2r}(x_0)} |F|^q \,dx\right)^\frac{1}{q} \\
&&+ c \left( \mint_{\Omega_{2r}(x_0)} |r f |^{\left( \frac{q}{p-1}\right)_*} \, dx \right)^{\frac{1}{\left( \frac{q}{p-1}\right)_*(p-1)}}
\end{eqnarray}
for some $c=c(n, L,\nu, p,q)>0.$
\end{theorem}
\begin{theorem}\label{thmDwbdd1}
Let $n<p<\infty$, $p<q\leq \frac{(p-1)n}{n-1}$ and $1<\tilde q <n$. There exists a small $\delta = \delta(n, L, \nu, p, q)\in (0,\frac18) $ so that if $\mathbf{a}$ is $( \delta, R)$-vanishing and $\Omega$ is a $(\delta,R)$-Reifenberg flat domain for some $R\in(0,1)$, then for any $x_0\in\overline\Omega$, $r\in(0,\frac{R}{2}]$ and for any weak solution $w\in W^{1,p}(\Omega_{2r}(x_0))$ of \eqref{homoeqfF} with $F\in L^q(\Omega_{2r}(x_0))$ and $f\in L^{\tilde q}(\Omega_{2r}(x_0))$, we have
\begin{eqnarray*}
\left(\mint_{\Omega_r(x_0)} |Dw|^{q} \, dx\right)^{\frac{1}{q}} &\leq& c \left( \mint_{\Omega_{2r}(x_0)} |Dw|^{p}\, dx\right)^{\frac{1}{p}}+c\left(\mint_{\Omega_{2r}(x_0)} |F|^q \,dx\right)^\frac{1}{q} \\
&&+ c \left( \mint_{\Omega_{2r}(x_0)} |r f |^{\tilde q} \, dx \right)^{\frac{1}{\tilde q(p-1)}}
\end{eqnarray*}
for some constant $c=c(n, L,\nu, p,q, \tilde q)>0.$
\end{theorem}
\subsection{Auxiliary lemmas}\ \\
We first recall the local boundedness (up to boundaries) for weak solutions to the equation \eqref{maineq} with $F\equiv 0$, which is a classical regularity result and we refer to \cite[Chapter 2.5]{LU} and \cite[Chapter 7]{Gi}. We point out that Reifenberg flat domains $\Omega$ considered in this paper have the measure density conditions \eqref{dencon} and \eqref{dencon1}, which are enough to obtain the boundedness for weak solutions.
\begin{lemma}\label{supvlem}
Let $1<p<\infty$ and suppose that the bounded domain $\Omega\subset\mr^n$ is $(\delta,R)$-Reifenberg flat for some $\delta\in(0,1/2)$ and $R>0$. Assume that $\mathbf{a}$ satisfies
\begin{equation}\label{aAss}
|\ba(x,\xi)|\leq L|\xi|^{p-1} \ \ \text{and}\ \ \ba(x,\xi)\cdot \xi\geq \nu |\xi|^p
\end{equation}
for any $x,\xi\in \mr^n$ and for some $0<\nu\leq L$, and that the nonnegative function $V$ satisfies $V \in L^{\gamma}(\Omega)$ for some $\gamma \in (\frac{n}{p}, n)$ when $p<n$ and for some $\gamma>1$ when $p\geq n.$ Then for any ball $B_{2r}(x_0)$ with $x_0\in\overline\Omega$ and $r\in(0,\frac{R}2]$ satisfying $ (2r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{2r}(x_0))} \leq 1,$
and for any weak solution $w\in W^{1,p}(\Omega_{2r}(x_0))$ of
$$
\left\{\begin{array}{rclcc}
-\mathrm{div}\, \mathbf{a}(x,Dw) + V |w|^{p-2}w& = &0 & \textrm{ in } & \Omega_{2r}(x_0), \\
w & = & 0 & \textrm{ on } & \partial_w\Omega_{2r}(x_0)\ \text{if}\ B_{2r}(x_0)\not\subset\Omega,\end{array}\right.
$$
we have that
\begin{equation*}
\|w\|_{L^\infty(\Omega_r (x_0))} \leq c \left( \mint_{\Omega_{2r}(x_0)} |w|^p \, dx \right)^{\frac{1}{p}}
\end{equation*}
for some constant $c = c(n, p, L, \nu,\gamma) >0.$
\end{lemma}
\begin{proof}
Let us define the rescaled maps
$$ \tilde{\mathbf{a}}(x,\xi) = \mathbf{a}(rx, \xi),\ \tilde{w}(x) = \frac{w(rx)}{r}, \ \tilde{V}(x) = r^p V(rx), \text{ and } \tilde{\Omega} = \left\{ \frac{x}{r} : x \in \Omega \right\}.$$
Then one can check that $ \tilde{\mathbf{a}}$ satisfies the assumption \eqref{aAss} with the same constants $L$ and $\nu$, $\tilde{\Omega}$ is $(\delta,\frac{R}{r})$-Reifenberg flat, $\tilde{V} \in L^{\gamma}(\tilde{\Omega})$, and $\tilde{w} \in W^{1,p}(\tilde{\Omega}_{2}(x_0)) $ is a weak solution of
$$
\left\{\begin{array}{rclcc}
-\mathrm{div}\, \tilde{\mathbf{a}}(x,D\tilde{w}) + \tilde{V} |\tilde{w}|^{p-2}\tilde{w}& = &0 & \textrm{ in } & \tilde{\Omega}_{2}(x_0), \\
\tilde{w} & = & 0 & \textrm{ on } & \partial_w\tilde{\Omega}_{2}(x_0)\ \text{if}\ B_{2}(x_0)\not\subset\tilde{\Omega}. \end{array}\right.
$$
By the classical local boundedness result (see, for instance, \cite[Chapter 2.5]{LU} and \cite[Chapter 7]{Gi}),
we see that
\begin{equation*}
\|\tilde{w}\|_{L^\infty(\tilde{\Omega}_1 (x_0))} \leq c \left( \mint_{\tilde{\Omega}_{2}(x_0)} |\tilde{w}|^p \, dx \right)^{\frac{1}{p}}
\end{equation*}
for some constant $c = c(n, p, L, \nu,\gamma, \Vert \tilde{V} \Vert_{L^{\gamma}(\tilde{\Omega}_{2}(x_0))}) >0.$ Here, since the constant $c$ in the above estimate is increasing as a function of $\Vert \tilde{V} \Vert_{L^{\gamma}(\tilde{\Omega}_{2}(x_0))}$ and
$$ \Vert \tilde{V} \Vert_{L^{\gamma}(\tilde{\Omega}_{2}(x_0))} \leq (2r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{2r}(x_0))} \leq 1,$$
$c$ can be replaced by a larger constant independent of $\Vert \tilde{V} \Vert_{L^{\gamma}(\tilde{\Omega}_{2}(x_0))}$.
Therefore, after scaling back, we can arrive at the desired bound of $w$.
\end{proof}
The following is the standard iteration lemma, whose proof can be found in for instance \cite{HL1}.
\begin{lemma}\label{teclem}
Let $ g :[a,b] \to \mr $ be a bounded nonnegative function. Suppose that for any $s_1,s_2$ with $ 0< a \leq s_1 < s_2 \leq b $,
$$
g(s_1) \leq \tau g(s_2) + \frac{A}{(s_2-s_1)^{\beta}}+B
$$
where $A,B \geq 0, \beta >0$ and $0\leq \tau <1$. Then we have
$$
g(s_1) \leq c\left( \frac{A}{(s_2-s_1)^{\beta}}+ B \right)
$$
for some constant $c=c(\beta, \tau) >0$.
\end{lemma}
We end this section by introducing a basic inequality which will be used later. Although its proof is elementary, we shall give it in detail for the sake of readability.
\begin{lemma}\label{lemineq} For any function $g\in W^{1,p}(B_r)$ with any $r>0,$ we have
$$
\frac{1}{r^{n+p}}\int_{B_r} \int_{B_r} \left| g(x)-g(y) \right|^p \,dx dy \leq c \int_{B_r} \left| Dg(x) \right|^p \,dx
$$
for some $c=c(n,p)>0$.
\end{lemma}
\begin{proof}
Without loss of generality, we shall assume that $g\in C^1(B_r).$ Using H\"older's inequality, Fubini's theorem and the fact that $|x-y|\leq 2r,$ we observe that
\begin{eqnarray*}
\int_{B_r} \int_{B_r} \left| g(x)-g(y) \right|^p \,dx dy &=& \int_{B_r} \int_{B_r} \left| \int_0^1Dg(t(x-y)+y)\cdot (x-y)\,dt \right|^p\, dxdy\\
&\leq & (2r)^p\int_0^1 \int_{B_r} \int_{B_r} |Dg(t(x-y)+y)|^p\, dx dy dt.
\end{eqnarray*}
Here we point out that $t(x-y)+y\in B_r$ for any $x,y\in B_r$. Then we use a change of variable with $x=\tilde x+y$ and apply Fubini's theorem to obtain that
\begin{eqnarray*}
\int_{B_r} \int_{B_r} \left| g(x)-g(y) \right|^p \,dx dy &\leq & (2r)^p\int_0^1 \int_{B_r} \int_{B_r(-y)} |Dg(t\tilde x+y)|^p\, d\tilde x dy dt\\
&\leq & (2r)^p\int_0^1 \int_{B_{2r}} \int_{B_r\cap B_r(-\tilde x)} |Dg(t\tilde x+y)|^p\, dy d\tilde x dt.
\end{eqnarray*}
Note that, by convexity, $t\tilde x+y=(1-t)y+t(y+\tilde x)\in B_r$ for any $y\in B_r\cap B_r(-\tilde x)$ and any $t\in (0,1).$ Hence, by letting $\tilde y=t\tilde x+y$, we have
$$
\int_{B_r\cap B_r(-\tilde x)} |Dg(t\tilde x+y)|^p\, dy \leq \int_{B_{r}} |Dg(\tilde y)|^p\, d\tilde y,
$$
which implies that
$$
\int_{B_r} \int_{B_r} \left| g(x)-g(y) \right|^p \,dx\, dy \leq (2r)^{n+p}|B_1|\int_{B_{r}} |Dg(\tilde y)|^p\, d\tilde y.
$$
This completes the proof.
\end{proof}
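As a quick sanity check, the inequality can also be tested numerically for a concrete function. The following Monte Carlo sketch, for the arbitrary test function $g(x)=\sin x_1+x_2^2$ on the unit disc with $n=p=2$ and $r=1$, and with the constant $2^{n+p}|B_1|$ read off from the proof above, confirms the bound:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def sample_disc(N):                       # uniform random points in the unit disc
    pts = rng.uniform(-1, 1, size=(3 * N, 2))
    return pts[np.hypot(pts[:, 0], pts[:, 1]) < 1][:N]

g  = lambda x: np.sin(x[:, 0]) + x[:, 1]**2
Dg = lambda x: np.stack([np.cos(x[:, 0]), 2 * x[:, 1]], axis=1)

N = 200_000
x, y, z = sample_disc(N), sample_disc(N), sample_disc(N)
area = np.pi                              # |B_1| = pi in the plane, and r = 1

lhs = area**2 * np.mean((g(x) - g(y))**2)     # Monte Carlo double integral / r^4
rhs = (2**4 * area) * (area * np.mean(np.sum(Dg(z)**2, axis=1)))
print(lhs, rhs, lhs <= rhs)                   # the lemma guarantees lhs <= rhs
\end{verbatim}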
\section{Gradient estimates for homogenous equations}\label{sechomo}
In this section we obtain gradient estimates for weak solutions to localized equations of \eqref{maineq} with $F\equiv 0$. Let us start with the following lemma, which is in fact a key lemma in our proofs.
\begin{lemma}\label{lemrVbddpq}
Let $1 < p < \infty$ and suppose $V \in \mathcal{B}_{\gamma}$ for some $ \gamma \in [ \frac{n}{p}, n)$ when $p<n$ and for some $\gamma \in (1,n)$ when $p\geq n.$
Then for any function $w \in W^{1,p}(B_{r})$ with $0<r<1,$ we have
\begin{eqnarray}\label{prVbddpq}
\nonumber \left( \mint_{B_{r}}\left| \frac{w}{r}\right|^{p} \, dx\right)^{\frac1p} \left( \mint_{B_{r}} [r^pV]^{\gamma}\, dx\right)^{\frac{1}{p\gamma }} &\leq& c\max\left\{\left( \mint_{B_{r}} [r^pV]^{\gamma}\, dx\right)^{\frac{1}{p\gamma }},1\right\} \\
&&\hspace{-1.5cm}\times\, \left[ \left( \mint_{B_{r}} \left| Dw \right|^p \,dx\right)^{\frac1p} + \left(\mint_{B_{r}} V\left| w \right|^p \,dx\right)^{\frac1p} \right]
\end{eqnarray}
for some constant $c=c\left(n,p, b_\gamma \right)>0.$
\end{lemma}
\begin{proof}
By Lemma \ref{lemineq}, we have
\begin{equation*}
\int_{B_r} \left| Dw(x) \right|^p \,dx \geq \frac{c}{r^{n+p}}\int_{B_r} \int_{B_r} \left| w(x)-w(y) \right|^p \,dxdy.
\end{equation*}
Moreover, we also have
\begin{equation*}
\int_{B_r} V(x) \left| w(x) \right|^p \,dx = \frac{1}{r^{n}|B_1|}\int_{B_r} \int_{B_r} V(y) \left| w(y) \right|^p\,dxdy.
\end{equation*}
Then we have that for any constant $c_0>0$,
\begin{eqnarray}\label{pDwVwc0}
\nonumber&&\int_{B_r} \left| Dw(x) \right|^p \,dx + \int_{B_r} V(x)\left| w(x) \right|^p \,dx \\
\nonumber&&\qquad \geq \frac{c}{\max\{c_0,1\}\,r^{n}} \bigg( \int_{B_r}\int_{B_r} \frac{c_0\left| w(x)-w(y) \right|^p}{r^p}\,dy dx \\
&& \hspace{4.5cm} + \int_{B_r} \int_{B_r} V(y)\left| w(y) \right|^p \,dydx\bigg).
\end{eqnarray}
Note that it is easily seen that
\begin{eqnarray*}
&&\frac{c_0\left| w(x)-w(y) \right|^p}{r^p} + V(y)|w(y)|^p \\
&&\quad \geq \min\left\{\frac{c_0}{r^p},V(y)\right\} \left( \left| w(x)-w(y) \right|^p\ + |w(y)|^p \right) \geq \min\left\{\frac{c_0}{r^p},V(y)\right\} \frac{|w(x)|^p}{2^{p-1}} .
\end{eqnarray*}
Hence, inserting this into \eqref{pDwVwc0}, we obtain
\begin{eqnarray*}
&&\int_{B_r} \left| Dw(x) \right|^p \,dx + \int_{B_r} \left| w(x) \right|^p V(x) \,dx \\
&&\qquad \geq \frac{c}{\max\{ c_0,1\}\,r^n} \int_{B_r} \left(\int_{B_r} \min\left\{ \frac{c_0}{r^p} , V(y)\right\} \,dy\right)\left| w(x)\right|^p\,dx.
\end{eqnarray*}
On the other hand, since $V \in \mathcal{B}_\gamma,$ by Lemma \ref{lemequiv} there exists $\theta>0$ such that
$$\left| \left\{ x \in B : V(x) \geq \theta \mint_{B} V(y)\,dy \right\} \right| \geq \frac12 |B|$$
for every ball $B \subset \mr^n.$ Then we take $$c_0:= \theta\, r^p \mint_{B_r} V(y) dy$$ to discover that
$$ \int_{B_r} \min\left\{ \frac{c_0}{r^p} , V(y)\right\} dy \geq \frac{c_0}{2r^p} |B_r| = \frac{c_0\, r^{n-p}|B_1|}{2}. $$
Therefore we get
\begin{eqnarray*}
\frac{c_0r^{-p}}{\max\{ c_0,1\}} \int_{B_r} |w|^p\,dx &\leq& \frac{c}{\max\{ c_0,1\}\,r^n} \int_{B_r} \int_{B_r} \min\left\{ \frac{c_0}{r^p} , V(y)\right\} \left| w(x)\right|^p \,dydx
\\
&\leq& c\left(\int_{B_r} \left| Dw \right|^p \,dx + \int_{B_r}V \left| w \right|^p \,dx\right),
\end{eqnarray*}
which implies that
\begin{equation}\label{lem41pf1}
\frac{c_0}{\max\{ c_0,1\}} \mint_{B_r} \left|\frac{w}{r} \right|^{p} dx \leq c \left( \mint_{B_r} \left| Dw \right|^p \,dx + \mint_{B_r} V\left| w \right|^p \,dx\right) .
\end{equation}
At this stage, if $c_0 < 1,$ we see that
\begin{eqnarray*}
\nonumber \left( \mint_{B_r} r^p\,V \, dx \right)^{\frac1p} \left( \mint_{B_r} \left| \frac{w}{r}\right|^{p} dx\right)^{\frac1p} &= & \left( c_0\mint_{B_r} \left| \frac{w}{r}\right|^{p} dx\right)^{\frac1p} \\
&& \hspace{-2cm}\leq\ c\left[ \left( \mint_{B_r} \left| Dw \right|^p \,dx\right)^{\frac1p} + \left(\mint_{B_r} V \left| w \right|^p \,dx\right)^{\frac1p} \right].
\end{eqnarray*}
Using this and the fact $V\in \mathcal{B}_\gamma$, we have
\begin{eqnarray}\label{pc0>1}
\nonumber\left( \mint_{B_{r}}\left| \frac{w}{r}\right|^{p} \,dx\right)^{\frac1p} \left( \mint_{B_{r}} [r^pV]^{\gamma}\,dx\right)^{\frac{1}{p\gamma }} &&\\
\nonumber&&\hspace{-6.3cm} \leq b_{\gamma}^{\frac1p} \left( \mint_{B_{r}}\left| \frac{w}{r}\right|^{p} dx \right)^{\frac1p} \left( \mint_{B_{r}}r^p V \, dx \right)^{\frac1p}\\
&&\hspace{-6.3cm} \leq c b_{\gamma}^{\frac1p} \left[ \left( \mint_{B_{r}} \left| Dw \right|^p \,dx\right)^{\frac1p} + \left(\mint_{B_{r}} V \left| w \right|^p \,dx\right)^{\frac1p} \right].
\end{eqnarray}
Otherwise, that is, if $c_0 \geq 1,$ we see from \eqref{lem41pf1} that
\begin{equation}\label{pc0<1}
\left( \mint_{B_r} \left| \frac{w}{r}\right|^{p} dx\right)^{\frac1p} \leq c \left[ \left( \mint_{B_r} \left| Dw \right|^p \,dx\right)^{\frac1p} + \left(\mint_{B_r} V \left| w \right|^p\,dx\right)^{\frac1p}\right].
\end{equation}
Then combining \eqref{pc0>1} and \eqref{pc0<1}, we finally obtain the desired estimate \eqref{prVbddpq}.
\end{proof}
Now, let us consider a weak solution $w\in W^{1,p}(\Omega_{4r}(x_0))$ of
\begin{equation}
\label{homoeq1}
\left\{\begin{array}{rclcc}
-\mathrm{div}\, \mathbf{a}(x,Dw) + V |w|^{p-2}w& = &0 & \textrm{ in } & \Omega_{4r}(x_0), \\
w & = & 0 & \textrm{ on } & \partial_w\Omega_{4r}(x_0)\ \text{if}\ B_{4r}(x_0)\not\subset\Omega,\end{array}\right.
\end{equation}
and then we can obtain its gradient estimates as follows.
\begin{lemma}\label{lem42}
Let $1 < p < \infty$, and suppose that $\ba:\mr^n\times\mr^n\to\mr^n$ satisfies \eqref{aas1} and \eqref{aas2} and $V \in \mathcal{B}_{\gamma}$ for some $ \gamma \in [ \frac{n}{p}, n)$ when $p<n$ and for some $\gamma \in (1,n)$ when $p\geq n.$ There exists a small $\delta = \delta(n, p, \gamma, L, \nu) > 0 $ so that if $\mathbf{a}$ is $( \delta, R)$-vanishing and $\Omega$ is $(\delta,R)$-Reifenberg flat for some $R\in(0,1)$, then for any $x_0\in\overline\Omega$, $r\in(0,\frac{R}{4}]$ satisfying $ (4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r}(x_0))} \leq 1,$ and for any weak solution $w\in W^{1,p}(\Omega_{4r}(x_0))$ of \eqref{homoeq1} we have $|Dw|\in L^{\gamma^*(p-1)}(\Omega_r(x_0))$ with the estimate
\begin{eqnarray}\label{DwDwVwest}
\nonumber&&\left( \mint_{{\Omega}_r(x_0)} \left| D{w}\right|^{\gamma^* (p-1)} \, dx \right)^{\frac{1}{\gamma^* (p-1)}}\\
&&\qquad \leq c \left( \mint_{\Omega_{4r}(x_0)} \left| Dw \right|^p \,dx\right)^{\frac1p} + c \left(\mint_{\Omega_{4r}(x_0)} V\left| w \right|^p \,dx\right)^{\frac1p}.
\end{eqnarray}
Moreover, we have $V^{\frac1p}|w|\in L^{p\gamma }(\Omega_r(x_0))$ with the estimate
\begin{eqnarray}\label{DwDwVwest1}
\nonumber&&\left( \mint_{{\Omega}_r(x_0)} \left[ V^\frac{1}{p}|w|\right]^{p\gamma } \, dx \right)^{\frac{1}{p\gamma }}\\
&&\qquad \leq c\left( \mint_{\Omega_{4r}(x_0)} \left| Dw \right|^p \,dx\right)^{\frac1p} + c \left(\mint_{\Omega_{4r}(x_0)} V\left| w \right|^p \,dx\right)^{\frac1p}.
\end{eqnarray}
Here, the constants $c>0$ in the above estimates depend on $n,p,\gamma,\nu,L$ and $b_\gamma$.
\end{lemma}
\begin{proof}
For simplicity we shall denote $\Omega_{\rho} :=\Omega_{\rho}(x_0)$ and $B_{\rho} := B_{\rho}(x_0)$ for any $\rho>0$ in this proof. We first observe that, in view of Lemma \ref{supvlem} with $r$ replaced by $2r$, \begin{equation}\label{lem42pf1}
\|w\|_{L^\infty(\Omega_{2r})} \leq c \left( \mint_{\Omega_{4r}} |w|^p \, dx \right)^{\frac{1}{p}}.
\end{equation}
Then from the fact $V\in L^\gamma(\Omega),$ we see that $V|w|^{p-2}w\in L^\gamma(\Omega_{2r})$. Therefore, applying Theorem \ref{thmDwbdd} with $q=\gamma^*(p-1)$, $f=V|w|^{p-2}w$ and $F=0$, we have
\begin{eqnarray}\label{DwDwrVwes}
\nonumber&& \left(\mint_{\Omega_r} |Dw|^{\gamma^*(p-1)} \, dx\right)^{\frac{1}{\gamma^*(p-1)}} \\
&&\qquad \leq c \left( \mint_{\Omega_{2r}} |Dw|^{p}\, dx\right)^{\frac{1}{p}}+ c \left( \mint_{\Omega_{2r}} \left[r V|w|^{p-1} \right]^\gamma \, dx\right)^{\frac{1}{\gamma(p-1)}}.
\end{eqnarray}
We now estimate the last term on the right hand side in the previous inequality. Using \eqref{lem42pf1} and \eqref{prVbddpq} with the assumption $ (4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r})} \leq 1,$ we have
\begin{eqnarray}\label{rVwDwVwes}
\nonumber \left( \mint_{\Omega_{2r}} \left[ r V |w|^{p-1}\right]^{\gamma} dx\right)^{\frac{1}{\gamma(p-1)}}
\nonumber& \leq& \frac{ c\,\|w\|_{L^\infty(\Omega_{2r})}}{r} \leq c \left( \mint_{\Omega_{4r}}\left| \frac{w}{r}\right|^{p} \,dx\right)^{\frac1p} \\
& \leq& c \left[ \left( \mint_{\Omega_{4r}} \left| Dw \right|^p \,dx\right)^{\frac1p} + \left(\mint_{\Omega_{4r}} V\left| w \right|^p \,dx\right)^{\frac1p} \right].
\end{eqnarray}
Here, we have extended $w$ by zero to $B_{4r}\setminus \Omega$ and used \eqref{dencon}. Hence, inserting \eqref{rVwDwVwes} into \eqref{DwDwrVwes}, we obtain \eqref{DwDwVwest}. In the same way as \eqref{rVwDwVwes}, we can derive \eqref{DwDwVwest1}.
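For the reader's convenience we spell this out: by \eqref{lem42pf1}, \eqref{dencon} and the assumption $ (4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r})} \leq 1,$ we find
\begin{eqnarray*}
\left( \mint_{{\Omega}_r} \left[ V^\frac{1}{p}|w|\right]^{p\gamma } \, dx \right)^{\frac{1}{p\gamma }} &\leq& \|w\|_{L^\infty(\Omega_{2r})} \left( \mint_{{\Omega}_r} V^{\gamma } \, dx \right)^{\frac{1}{p\gamma }} \leq \frac{c\, \|w\|_{L^\infty(\Omega_{2r})}}{r} \\
&\leq& c \left( \mint_{\Omega_{4r}}\left| \frac{w}{r}\right|^{p} \,dx\right)^{\frac1p},
\end{eqnarray*}
and the right-hand side is estimated by \eqref{prVbddpq} exactly as in \eqref{rVwDwVwes}.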
\end{proof}
\section{Comparison estimates}
\label{secComestimates}
In this section, we shall derive comparison estimates between the weak solution to \eqref{maineq} and weak solutions to localized equations of \eqref{maineq} with $F\equiv0.$
\begin{lemma}\label{comestlem}
Let $1 < p < \infty$, and suppose that $\mathbf{a}:\mr^n\times\mr^n\to\mr^n$ satisfies \eqref{aas1}-\eqref{aas2}.
For any $\epsilon \in (0,1),$ there exists a small $ \delta = \delta( \epsilon, n, p, L, \nu) \in(0,1) $ such that if $u \in W^{1,p}(\Omega)$ is the weak solution to \eqref{maineq} with
\begin{equation}\label{lDuurbdd}
\left( \mint_{\Omega_{4r}} \left[|Du| + V^\frac{1}{p}|u|\right]^p \,dx\right)^{\frac{1}{p}} <\lambda
\end{equation}
and
\begin{equation}\label{lFbdd}
\left( \mint_{\Omega_{4r}} |F|^p \,dx\right)^{\frac{1}{p}}<\delta\, \lambda
\end{equation}
for some $r>0$ and $\lambda>0$, then we have
\begin{equation}\label{lDumDvibdd}
\mint_{\Omega_{4r}} |Du-Dw|^{p} + V |u-w|^{p} \, dx \leq \epsilon \lambda^p,
\end{equation}
where $w \in W^{1,p}(\Omega_{4r})$ is the unique weak solution to
\begin{equation}
\label{lhomoeq}
\left\{\begin{array}{rclcc}
-\mathrm{div}\, \mathbf{a}(x,Dw) + V |w|^{p-2}w& = & 0 & \textrm{ in } & \Omega_{4r}, \\
w & = & u & \textrm{ on } & \partial \Omega_{4r}.
\end{array}\right.
\end{equation}
\end{lemma}
\begin{proof}
We first test the equations \eqref{lhomoeq} with the testing function $\varphi=w-u$ in order to discover
$$
\int_{\Omega_{4r}} \mathbf{a}(x, Dw)\cdot ( Dw-Du)\, dx + \int_{\Omega_{4r}} V |w|^{p-2}w \cdot (w-u)\, dx= 0,
$$
and then, in view of \eqref{aas1} and \eqref{mono}, we obtain
$$
\int_{\Omega_{4r}} |Dw|^p\, dx + \int_{\Omega_{4r}} V |w|^p\, dx\leq c\int_{\Omega_{4r}} |Dw|^{p-1}|Du|\, dx + c \int_{\Omega_{4r}} V |w|^{p-1}|u|\, dx.
$$
Therefore, applying Young's inequality and \eqref{lDuurbdd} we have
\begin{equation}\label{lem51pf1}
\left(\mint_{\Omega_{4r}} |Dw|^p\, dx \right)^{\frac1p}+\left( \mint_{\Omega_{4r}} V |w|^p\, dx\right)^{\frac1p}\leq c\lambda.
\end{equation}
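Here, for instance, Young's inequality gives
\[
c\int_{\Omega_{4r}} |Dw|^{p-1}|Du|\, dx \leq \frac12 \int_{\Omega_{4r}} |Dw|^{p}\, dx + c\int_{\Omega_{4r}} |Du|^{p}\, dx
\]
and similarly for the potential term; absorbing the first terms on the right-hand side into the left-hand side, dividing by $|\Omega_{4r}|$ and applying \eqref{lDuurbdd} then yields \eqref{lem51pf1}.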
We next test the equations \eqref{maineq} and \eqref{lhomoeq} with the testing function $\varphi=u-w$ in order to discover
\begin{eqnarray*}
\nonumber &&\int_{\Omega_{4r}} \left( \mathbf{a}(x, Du) - \mathbf{a}(x, Dw) \right) \cdot ( Du-Dw)\, dx \\
&& \quad+ \int_{\Omega_{4r}} V \left( |u|^{p-2}u - |w|^{p-2}w \right) \cdot (u-w)\, dx= \int_{\Omega_{4r}} |F|^{p-2}F \cdot (Du-Dw)\, dx.
\end{eqnarray*}
By virtue of the monotonicity condition \eqref{mono}, we derive that
\begin{eqnarray*}
&&\mint_{\Omega_{4r}}\left( |Du|^{2}+ |Dw|^2 \right)^{\frac{p-2}{2}} | Du-Dw|^2\, dx \\
&&\qquad +\mint_{\Omega_{4r}}V \left( |u|^{2}+ |w|^2 \right)^{\frac{p-2}{2}} | u-w|^2\, dx \leq c \mint_{\Omega_{4r}} |F|^{p-1}|Du-Dw|\, dx.
\end{eqnarray*}
Note that if $p\geq 2$, by \eqref{mono1} we have
$$
\mint_{\Omega_{4r}}|Du-Dw|^p\, dx +\mint_{\Omega_{4r}}V |u-w|^p\, dx \leq c \mint_{\Omega_{4r}} |F|^{p-1}|Du-Dw|\, dx.
$$
On the other hand, if $1<p<2$, then by Young's inequality we have
\begin{eqnarray*}
|Du-Dw|^p &=& |Du-Dw|^p \left( |Du|^2 + |Dw|^2 \right)^{\frac{p(p-2)}{4}} \left( |Du|^2 + |Dw|^2 \right)^{\frac{p(2-p)}{4}}\\
&\leq& \kappa \left( |Du|^2 + |Dw|^2 \right)^{\frac{p}{2}} +c(\kappa) \left( |Du|^2 + |Dw|^2 \right)^{\frac{p-2}{2}} |Du-Dw|^2
\end{eqnarray*}
and
$$
V |u-w|^p \leq \kappa V \left( |u|^2 + |w|^2 \right)^{\frac{p}{2}} +c(\kappa) V \left( |u|^2 + |w|^2 \right)^{\frac{p-2}{2}} |u-w|^2
$$
for any small $\kappa>0.$ Therefore, combining the above results with \eqref{lDuurbdd} and \eqref{lem51pf1}, we have that
\begin{eqnarray*}
\nonumber&&\mint_{\Omega_{4r}} |Du-Dw|^{p} \, dx+\mint_{\Omega_{4r}} V|u-w|^{p} \, dx \\
\nonumber&&\quad \leq \kappa \mint_{\Omega_{4r}} (|Du|^2+|Dw|^2)^\frac{p}{2} \, dx +\kappa \mint_{\Omega_{4r}} V (|u|^2+|w|^2)^\frac{p}{2} \, dx\\
\nonumber&&\quad \quad+ c(\kappa) \mint_{\Omega_{4r}} \left( |Du|^2 + |Dw|^2 \right)^{\frac{p-2}{2}} |Du-Dw|^{2} \, dx\\
\nonumber&&\quad \quad+ c(\kappa) \mint_{\Omega_{4r}} V\left( |u|^2 + |w|^2 \right)^{\frac{p-2}{2}} |u-w|^{2} \, dx \\
\nonumber&&\quad \leq c_1 \kappa \, \lambda^p+ c(\kappa) \mint_{\Omega_{4 r}} |F|^{p-1}|Du-Dw|\, dx
\end{eqnarray*}
for some $c_1=c_1(n,p,L,\nu)>0$ and $c(\kappa)= c(\kappa,n,p,L,\nu)\geq1$.
Therefore, in any case, we obtain
\begin{eqnarray*}
\nonumber&&\mint_{\Omega_{4r}} |Du-Dw|^{p} \, dx+\mint_{\Omega_{4r}} V \left|u-w\right|^{p} \, dx\\
&& \qquad\leq c_1\kappa \, \lambda^p+ c(\kappa,\tau)\mint_{\Omega_{4r}} |F|^{p}\, dx + c(\kappa)\,\tau \mint_{\Omega_{4r}} |Du-Dw|^p\, dx \\
& & \qquad \leq c_1\kappa \, \lambda^p+ c(\kappa,\tau)\delta^p \,\lambda^p + c(\kappa)\,\tau \mint_{\Omega_{4r}}|Du-Dw|^p\, dx
\end{eqnarray*}
for any small $\kappa,\tau>0$ and for some $c(\kappa,\tau)=c(\kappa,\tau,n,p,L,\nu)\geq 1$. Here, we have used Young's inequality and \eqref{lFbdd}. Taking $\kappa$, $\tau$ and $\delta$ sufficiently small such that
$$
\kappa=\frac{\epsilon}{4c_1},\ \ \ \tau = \frac{1}{2c(\kappa)}\ \ \ \text{and}\ \ \ \delta =\left(\frac{\epsilon}{4c(\kappa,\tau)}\right)^{\frac1p},
$$
we finally obtain \eqref{lDumDvibdd}.
\end{proof}
We notice that $\gamma^*(p-1)>\max\{p,\frac{n(p-1)}{n-1}\}$. Therefore, applying the results in Lemma \ref{lem42} to the weak solution $w$ of \eqref{lhomoeq} in the previous lemma, we obtain the following gradient estimates.
\begin{lemma}\label{comestlem1}
Let $1 < p < \infty$, and suppose that $\mathbf{a}:\mr^n\times\mr^n\to\mr^n$ satisfies \eqref{aas1}-\eqref{aas2} and $V \in \mathcal{B}_\gamma$ for some $\gamma \in [ \frac{n}{p}, n)$ when $p<n$ and for some $\gamma \in (1,n)$ when $p\geq n.$
There exists a small $ \delta = \delta(n, p, \gamma, L, \nu) > 0 $ such that if $\mathbf{a}$ is $( \delta, R)$-vanishing, $\Omega$ is $( \delta, R)$-Reifenberg flat and $u \in W^{1,p}(\Omega)$ is the weak solution to \eqref{maineq} with \eqref{lDuurbdd} and \eqref{lFbdd} for some $r\in(0,\frac R4]$ satisfying $ (4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r})} \leq 1$ and $\lambda>0$, then we have
$$
\left( \mint_{\Omega_{r}} \left| Dw\right|^{\gamma^* (p-1)} \, dx \right)^{\frac{1}{\gamma^* (p-1)}} \leq c\, \lambda
$$
and
$$
\left( \mint_{\Omega_{r}} \left[V^{\frac1p} |w|\right]^{p\gamma } \, dx \right)^{\frac{1}{p\gamma }} \leq c\, \lambda
$$
for some $c=c(n,p, \gamma, L, \nu, b_\gamma)>0,$ where $w$ is the unique weak solution to \eqref{lhomoeq}.
\end{lemma}
\begin{proof}
The estimates above directly follow from \eqref{DwDwVwest}, \eqref{DwDwVwest1} and \eqref{lem51pf1}.
\end{proof}
\section{$L^q$-estimates}
\label{sec gradient estimates}
Now we are ready to prove our main results, Theorem~\ref{mainthm} and Corollary~\ref{maincor}. As mentioned in the introduction, we employ the so-called exit-time argument introduced by Mingione in \cite{AM1,Min1}.
\subsection{Proof of Theorem \ref{mainthm}}\ \\
Assume that $\ba:\mr^n\times \mr^n\to\mr^n$ is $(\delta,R)$-vanishing and $\Omega$ is $(\delta,R)$-Reifenberg flat for some $R\in(0,1),$ where $\delta\in(0,1)$ will be chosen sufficiently small later.
Now, we prove the estimate \eqref{mainlocalest}. Fix any $x_0\in\overline{\Omega}$ and $r>0$ satisfying $r \leq \frac{R}{4}$ and $(4r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{4r}(x_0))} \leq 1.$ Note that
$$
\rho^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{\rho}(y))}\leq 1
$$
for any $B_\rho(y)\subset B_{4r}(x_0)$ with $y\in B_{4r}(x_0)\cap \overline{\Omega}.$
For the sake of simplicity, we shall write $\Omega_{\rho}:=\Omega_{\rho}(x_0)$, $\rho>0$. Also, we define
\begin{equation}\label{Phi}
\Phi(u,V):= |Du|+V^\frac{1}{p}|u|,
\end{equation}
and for $\lambda,\rho>0$
$$
E(\lambda, \rho) := \{x \in \Omega_{\rho} : \Phi(u,V)(x)>\lambda \}.
$$
The proof goes in five steps.
\vspace{0.2cm}\textit{Step 1. Covering argument.}\ \\
Fix any $s_1, s_2$ with $1\leq s_1 < s_2 \leq 2.$ Then we have $ \Omega_{r}\subset \Omega_{s_1 r } \subset \Omega_{s_2 r } \subset \Omega_{2 r } $. We define
\begin{equation}\label{lambda0}
\lambda_0:= \left( \mint_{\Omega_{2r}} \left[\Phi(u,V)\right]^p \,dx\right)^{\frac1p}+\frac{1}{\delta} \left(
\mint_{\Omega_{2r}} |F|^p \,dx \right)^{\frac1p},
\end{equation}
and consider $\lambda>0$ large enough so that
\begin{equation}\label{lambdarg}
\lambda > A\, \lambda_0,\ \ \textrm{where } A : = \left( \frac{16}{7}\right)^{\frac{n}{p}}\left( \frac{40}{s_2 - s_1}\right)^{\frac{n}{p}}.
\end{equation}
We note that
$ \Omega_{\rho}(y) \subset \Omega_{2r}$ for any $y \in E(\lambda, s_1 r)$ and any $ \rho \in \left( 0, (s_2-s_1)\,r \right].$
By virtue of the measure density condition \eqref{dencon} and the definition of $\lambda_0$ in \eqref{lambda0}, we then deduce that
\begin{eqnarray*}
&&\left(\mint_{\Omega_{\rho}(y)} \left[\Phi(u,V)\right]^p \,dx\right)^\frac1p+
\frac{1}{\delta}\left( \mint_{\Omega_{\rho}(y)} |F|^p \,dx\right)^\frac1p\\
&& \leq \left(\frac{|\Omega_{2r}|}{|\Omega_{\rho}(y)|}\right)^{\frac{1}{p}} \lambda_0 \leq \left( \frac{16}{7}\right)^{\frac np} \left( \frac{2r}{\rho}\right)^{\frac np}\, \lambda_0 \leq A\, \lambda_0 < \lambda,
\end{eqnarray*}
provided that
$$ \frac{(s_2 - s_1)\,r}{20} \leq \rho \leq (s_2-s_1)\,r.$$
On the other hand, Lebesgue's differentiation theorem yields that for almost every $y \in E(\lambda, s_1 r),$
$$ \lim_{\rho \rightarrow 0} \left[\left(\mint_{\Omega_{\rho}(y)} \left[\Phi(u,V)\right]^p \,dx \right)^{\frac1p}+\frac{1}{\delta} \left(
\mint_{\Omega_{\rho}(y)} |F|^p \,dx\right)^{\frac1p}\right] > \lambda.$$
Therefore the continuity of the integral implies that for almost every $y \in E(\lambda, s_1 r),$ there exists
$$ \rho_{y} = \rho(y) \in \left( 0, \frac{(s_2-s_1)\,r}{20}\right)$$
such that
$$ \left(\mint_{\Omega_{\rho_y}(y)} \left[\Phi(u,V)\right]^p \,dx \right)^{\frac1p}+\frac{1}{\delta} \left(
\mint_{\Omega_{\rho_y}(y)} |F|^p \,dx\right)^{\frac1p} = \lambda$$
and, for any $ \rho \in ( \rho_y, (s_2-s_1)r],$
$$ \left(\mint_{\Omega_{\rho}(y)} \left[\Phi(u,V)\right]^p \,dx \right)^{\frac1p}+\frac{1}{\delta} \left(
\mint_{\Omega_{\rho}(y)} |F|^p \,dx\right)^{\frac1p} <\lambda.
$$
Applying Vitali's covering theorem, we have the following:
\begin{lemma}\label{coveringlem}
Given $\lambda > A\, \lambda_0,$ there exists a disjoint family $\{ \Omega_{\rho_i}(y^i)\}_{i=1}^{\infty}$ with $y^i \in E(\lambda, s_1 r)$ and $\rho_{i} \in \left(0, \frac{(s_2-s_1)\,r}{20} \right)$ such that
$$E(\lambda, s_1 r) \subset \bigcup_{i=1}^{\infty} \Omega_{5\rho_i}(y^i), $$
\begin{equation}\label{covering1}
\left(\mint_{\Omega_{\rho_i}(y^i)} \left[\Phi(u,V)\right]^p \,dx\right)^{\frac1p} +
\frac{1}{\delta}\left( \mint_{\Omega_{\rho_i}(y^i)} |F|^p \,dx\right)^\frac1p = \lambda,
\end{equation}
and for any $ \rho \in ( \rho_i, (s_2-s_1)\,r]$,
\begin{equation}\label{covering2}
\left(\mint_{\Omega_{\rho}(y^i)} \left[\Phi(u,V)\right]^p \,dx\right)^{\frac1p} + \frac{1}{\delta}\left( \mint_{\Omega_{\rho}(y^i)} |F|^p \,dx\right)^\frac1p<\lambda.
\end{equation}
\end{lemma}
Furthermore, we can deduce from Lemma~\ref{coveringlem}, in particular, \eqref{covering1}, that
\begin{equation}\label{covering3}
\mint_{\Omega_{\rho_i}(y^i)} \left[\Phi(u,V)\right]^p \,dx \geq \left(\frac{\lambda}{2}\right)^p\ \ \ \text{or}\ \ \
\mint_{\Omega_{\rho_i}(y^i)} |F|^p \,dx \geq \left(\frac{\delta\lambda}{2}\right)^p.
\end{equation}
If the first inequality holds, we have
$$
\left|\Omega_{\rho_i}(y^i)\right|\leq \frac{2^p}{\lambda^p}\left( \int_{\Omega_{\rho_i}(y^i)\cap\left\{\Phi(u,V)>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\Phi(u,V)\right]^p \,dx+\frac{\left|\Omega_{\rho_i}(y^i)\right|\lambda^p}{2^{p+1}} \right)
$$
and so
$$
\left|\Omega_{\rho_i}(y^i)\right|\leq \frac{2^{p+1}}{\lambda^p} \int_{\Omega_{\rho_i}(y^i)\cap\left\{\Phi(u,V)>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\Phi(u,V)\right]^p \,dx.
$$
Similarly, if the second inequality in \eqref{covering3} holds, we have
$$
\left|\Omega_{\rho_i}(y^i)\right|\leq \frac{2^{p+1}}{(\delta\lambda)^p} \int_{\Omega_{\rho_i}(y^i)\cap\left\{|F|>\frac{\lambda\delta}{2^{(p+1)/p}} \right\}} |F|^p \,dx.
$$
Therefore, in any case, we have
\begin{eqnarray}\label{omegai}
\nonumber \left|\Omega_{\rho_i}(y^i)\right|&\leq & \frac{2^{p+1}}{\lambda^p} \Bigg(\int_{\Omega_{\rho_i}(y^i)\cap\left\{\Phi(u,V)>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\Phi(u,V)\right]^p \,dx\\
&&\qquad \qquad \quad+ \int_{\Omega_{\rho_i}(y^i)\cap\left\{\frac{|F|}{\delta}>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\frac{|F|}{\delta}\right]^p \,dx\Bigg).
\end{eqnarray}
\vspace{0.2cm}\textit{Step 2. Comparison estimates.}\ \\
From Lemma~\ref{coveringlem}, in particular, \eqref{covering2}, we note that
$$
\left(\mint_{\Omega_{20\rho_i}(y^i)} \left[|Du|+ V^{\frac1p} |u|\right]^p \,dx\right)^\frac1p <\lambda \ \ \ \text{and}\ \ \
\left(\mint_{\Omega_{20\rho_i}(y^i)} |F|^p \,dx\right)^\frac1p <\delta\lambda.
$$
Then applying Lemma~\ref{comestlem} and Lemma~\ref{comestlem1}, for any $\epsilon \in (0,1),$ there exists a small $\delta= \delta( \epsilon, n, p, \gamma, L, \nu)>0$ such that
\begin{equation}\label{tDumDvibdd}
\left( \mint_{\Omega_{20\rho_i}(y^i)} |Du-Dw_i|^{p} \, dx\right)^{\frac1p} +
\left( \mint_{\Omega_{20\rho_i}(y^i)} \left|V^{\frac1p}u-V^{\frac1p}w_i\right|^{p} \, dx\right)^{\frac1p} \leq \epsilon \lambda,
\end{equation}
\begin{equation}\label{tDvihibdd}
\left( \mint_{\Omega_{5\rho_i}(y^i)} \left| Dw_i\right|^{\gamma^* (p-1)} \, dx \right)^{\frac{1}{\gamma^* (p-1)}} \leq c\lambda
\end{equation}
and
\begin{equation}\label{tDvihibdd1}
\left( \mint_{\Omega_{5\rho_i}(y^i)} \left[V^\frac{1}{p}\left| w_i\right|\right]^{p\gamma } \, dx \right)^{\frac{1}{p\gamma }} \leq c\lambda,
\end{equation}
where $w_i \in W^{1,p}(\Omega_{20\rho_i}(y^i))$ is the unique weak solution to
$$
\left\{\begin{array}{rclcc}
-\mathrm{div}\, \mathbf{a}(x,Dw_i) + V |w_i|^{p-2}w_i& = & 0 & \textrm{ in } & \Omega_{20\rho_i}(y^i), \\
w_i & = & u & \textrm{ on } & \partial \Omega_{20\rho_i}(y^i).
\end{array}\right.
$$
Furthermore, recalling the definition of $\Phi$ in \eqref{Phi} and the fact $\gamma^* (p-1)>p\gamma $, we have from \eqref{tDvihibdd} and \eqref{tDvihibdd1} that
\begin{equation}\label{Vvibdd}
\mint_{\Omega_{5\rho_i}(y^i)} \left[\Phi(w_i,V)\right]^{p\gamma } \, dx \leq c\lambda^{p\gamma},
\end{equation}
for some constant $c=c(n,p, \gamma ,L, \nu,b_\gamma)>0.$
\vspace{0.2cm}\textit{Step 3. Estimates for $\Phi(u,V)$.} \ \\
Let $y \in \Omega_{5\rho_i}(y^i)$ such that $\Phi(u,V)(y)> K\lambda,$ where $K\geq 1$ will be chosen later.
We then note that
$$
\Phi(u,V)(y) \leq \Phi(w_i,V)(y)+|Du(y)-Dw_i(y)|+[V(y)]^{\frac1p}|u(y)-w_i(y)|.
$$
Here, we need to consider the two cases:
\begin{eqnarray*}
\textrm{(i)} && \Phi(w_i,V)(y) \leq|Du(y)-Dw_i(y)|+[V(y)]^{\frac1p}|u(y)-w_i(y)|,\\
\textrm{(ii)} && \Phi(w_i,V)(y) > |Du(y)-Dw_i(y)|+[V(y)]^{\frac1p}|u(y)-w_i(y)|.
\end{eqnarray*}
For the case (i), it is clear that
$$
\Phi(u,V)(y) \leq 2\left(|Du(y)-Dw_i(y)|+[V(y)]^{\frac1p}|u(y)-w_i(y)|\right).
$$
For the case (ii), we have that
$$ K\lambda < \Phi(u,V)(y) \leq 2\Phi(w_i,V)(y),$$
from which, it follows that
$$
\Phi(u,V)(y)\leq 2 \Phi(w_i,V)(y) \left[ \frac{2\Phi(w_i,V)(y)}{K\lambda} \right]^{\gamma -1} = \frac{2^\gamma}{(K \lambda)^{\gamma-1}} [\Phi(w_i,V)(y)]^{\gamma} .
$$
In turn, for the both cases (i) and (ii), we have that
\begin{eqnarray*}
[\Phi(u,V)(y)]^p &\leq& 2^{2p-1} \left( |Du(y)-Dw_i(y)|^p + V(y)|u(y)-w_i(y)|^p \right)\\
&& + \frac{2^{p\gamma}}{(K \lambda)^{p\gamma-p}} [\Phi(w_i,V)(y)]^{p\gamma}
\end{eqnarray*}
for any $y \in \Omega_{5\rho_i}(y^i)$ such that $\Phi(u,V)(y)> K\lambda.$
Then applying \eqref{tDumDvibdd}-\eqref{Vvibdd}, we deduce that
\begin{eqnarray*}
\int_{\Omega_{5\rho_i}(y^i) \cap E(K\lambda , s_2 r)} [\Phi(u,V)]^p\, dx &\leq & c \int_{\Omega_{5\rho_i}(y^i)} \left[ |Du - Dw_i|^p +V |u- w_i|^p \right]\, dx\\
&&\quad + \frac{c}{(K \lambda)^{p\gamma-p}} \int_{\Omega_{5\rho_i}(y^i)} [\Phi(w_i,V)(y)]^{p\gamma} \, dx \\
& \leq & c \left(\epsilon \lambda^p + \frac{\lambda^{p\gamma} }{(K \lambda)^{p\gamma-p}}\right) \left| \Omega_{5\rho_i}(y^i)\right|\\
& \leq & c\, \lambda^p \left( \epsilon + \frac{1}{K^{p\gamma-p}} \right) \left| \Omega_{\rho_i}(y^i)\right|\\
& = & c\,\tilde\epsilon \lambda^p \left| \Omega_{\rho_i}(y^i)\right|
\end{eqnarray*}
for some constant $c=c(n,p, \gamma, L, \nu, b_\gamma)>0,$ where
\begin{equation}\label{tepsilon}
\tilde \epsilon:= \epsilon + \frac{1}{K^{p\gamma-p}}.
\end{equation}
Therefore, inserting \eqref{omegai} into the previous estimate, we have that
\begin{eqnarray*}
\int_{\Omega_{5\rho_i}(y^i) \cap E(K\lambda, s_1 r)} [\Phi(u,V)]^p\, dx & \leq& c\tilde \epsilon \Bigg( \int_{\Omega_{\rho_i}(y^i)\cap\left\{\Phi(u,V)>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\Phi(u,V)\right]^p \,dx\\
&&\qquad\quad + \int_{\Omega_{\rho_i}(y^i)\cap\left\{\frac{|F|}{\delta}>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\frac{|F|}{\delta}\right]^p \,dx\Bigg).
\end{eqnarray*}
According to Lemma~\ref{coveringlem}, we note that the sets $\Omega_{\rho_i}(y^i)$ are mutually disjoint and
$$
E(K\lambda, s_1 r) \subset E(\lambda, s_1 r )\subset \bigcup_{i=1}^{\infty} \Omega_{5\rho_i}(y^i) \subset \Omega_{s_2 r} ,
$$
since $K \geq 1.$ Then we have that
\begin{eqnarray}\label{EDubddcal}
\nonumber \int_{ E(K\lambda, s_1 r)} [\Phi(u,V)]^p\, dx &\leq& \sum_{i=1}^{\infty} \int_{\Omega_{5\rho_i}(y^i) \cap E(K\lambda, s_1 r)} [\Phi(u,V)]^p\, dx \\
\nonumber & \leq & c\tilde \epsilon \Bigg( \int_{\Omega_{s_2r}\cap\left\{\Phi(u,V)>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\Phi(u,V)\right]^p \,dx\\
& &\qquad \qquad + \int_{\Omega_{s_2r}\cap\left\{\frac{|F|}{\delta}>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\frac{|F|}{\delta}\right]^p \,dx\Bigg)
\end{eqnarray}
for some constant $c=c(n,p, \gamma, L, \nu, b_\gamma)>0.$
\vspace{0.2cm}\textit{Step 4. Proof of \eqref{mainlocalest} when $q\in(p,p\gamma)$.} \ \\
We shall use a truncation argument. For $k \geq A\lambda_0,$ let us define
$$
\Phi(u,V)_k := \min\left\{ \Phi(u,V), k \right\},
$$
and denote the upper level sets with respect to $\Phi(u,V)_k $ by
$$ E_k ( \tilde \lambda, \rho):= \left\{ y \in \Omega_{\rho} : \Phi(u,V)_k> \tilde \lambda\right\}\ \ \text{for }\tilde \lambda,\rho>0. $$
Then since $E_k ( K\lambda, s_1 r) \subset E ( K\lambda, s_1 r)$ and
$$
\left\{ \Phi(u,V)_k >\frac{\lambda}{2^{(p+1)/p}} \right\} = \left\{ \Phi(u,V)>\frac{\lambda}{2^{(p+1)/p}} \right\},
$$
we see from \eqref{EDubddcal} that
\begin{eqnarray*}
\int_{ E_k(K\lambda, s_1 r)} [\Phi(u,V)]^p\, dx & \leq & c\tilde \epsilon \Bigg( \int_{E_k\left(\frac{\lambda}{2^{(p+1)/p}},s_2r\right)} \left[\Phi(u,V)\right]^p \,dx\\
& &\qquad \quad +\int_{\Omega_{s_2r}\cap\left\{\frac{|F|}{\delta}>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\frac{|F|}{\delta}\right]^p \,dx\Bigg).
\end{eqnarray*}
Then by multiplying both sides by $\lambda^{q-p-1}$ and integrating with respect to $\lambda$ over $(A\lambda_0, \infty)$, we have that
\begin{eqnarray}\label{lamDuEk}
\nonumber I_0 &:=&\int_{A\lambda_0}^\infty \lambda^{q-p-1}\int_{ E_k(K\lambda, s_1 r)} [\Phi(u,V)]^p\, dxd\lambda\\
\nonumber & \leq & c\tilde \epsilon \Bigg( \int_{A\lambda_0}^\infty \lambda^{q-p-1} \int_{E_k\left(\frac{\lambda}{2^{(p+1)/p}},s_2r\right)} \left[\Phi(u,V)\right]^p \,dxd\lambda\\
\nonumber & &\qquad \quad +\int_{A\lambda_0}^\infty \lambda^{q-p-1} \int_{\Omega_{s_2r}\cap\left\{\frac{|F|}{\delta}>\frac{\lambda}{2^{(p+1)/p}} \right\}} \left[\frac{|F|}{\delta}\right]^p \,dxd\lambda\Bigg)\\
&=:& c\tilde\epsilon (I_1+I_2).
\end{eqnarray}
Here, Fubini's theorem allows us to deduce that
\begin{eqnarray*}
I_0 &=& \int_{E_k(KA\lambda_0, s_1 r)} [\Phi(u,V)]^p \left( \int_{A\lambda_0}^{\Phi(u,V)_k(x)/K} \lambda^{q-p-1} \, d\lambda \right) dx\\
&=& \frac{1}{q-p}\, \Bigg\{ \int_{E_k(KA\lambda_0, s_1 r)} [\Phi(u,V)]^p \left[\frac{\Phi(u,V)_k}{K}\right]^{q-p} \, dx \\
&&\qquad\qquad\qquad\qquad -(A\lambda_0)^{q-p}\int_{E_k(KA\lambda_0, s_1 r)} [\Phi(u,V)]^p \,dx \Bigg\}.
\end{eqnarray*}
We also employ Fubini's theorem to discover
\begin{eqnarray*}
I_1&=& \int_{E_k\left(\frac{A\lambda_0}{2^{(p+1)/p}},s_2r\right)} [\Phi(u,V)]^p \left(\int_{A\lambda_0}^{ 2^{(p+1)/p} \Phi(u,V)_k(x)} \lambda^{q-p-1} \,d\lambda\right) dx \\
&\leq& \frac{1}{q-p} \int_{E_k\left(\frac{A\lambda_0}{2^{(p+1)/p}},s_2r\right)} [\Phi(u,V)]^p \left[2^{(p+1)/p} \Phi(u,V)_k\right]^{q-p}\, dx \\
&\leq & \frac{2^{(p+1)(q-p)/p}}{q-p} \int_{\Omega_{s_2 r}} [\Phi(u,V)]^p \left[\Phi(u,V)_k\right]^{q-p} \, dx.
\end{eqnarray*}
Similarly, we obtain that
$$
I_2\leq \frac{2^{(p+1)(q-p)/p}}{q-p} \int_{\Omega_{s_2 r}} \left[\frac{|F|}{\delta}\right]^q \, dx.
$$
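Indeed, by Fubini's theorem as in the estimate of $I_1$,
\begin{eqnarray*}
I_2 &=& \int_{\Omega_{s_2r}\cap\left\{\frac{|F|}{\delta}>\frac{A\lambda_0}{2^{(p+1)/p}}\right\}} \left[\frac{|F|}{\delta}\right]^p \left( \int_{A\lambda_0}^{2^{(p+1)/p}\,|F(x)|/\delta} \lambda^{q-p-1}\, d\lambda\right) dx \\
&\leq& \frac{1}{q-p} \int_{\Omega_{s_2 r}} \left[\frac{|F|}{\delta}\right]^p \left[ 2^{(p+1)/p}\,\frac{|F|}{\delta}\right]^{q-p} dx.
\end{eqnarray*}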
Therefore, inserting the previous estimates for $I_0$, $I_1$, $I_2$ into \eqref{lamDuEk}, we derive
\begin{eqnarray*}
&&\int_{E_k(KA\lambda_0, s_1 r)} [\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx \\
&& \quad \leq (KA\lambda_0)^{q-p}\int_{\Omega_{s_1 r}} [\Phi(u,V)]^p \,dx\\
&&\quad \quad +c\tilde \epsilon K^{q-p} \left( \int_{\Omega_{s_2 r}} [\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx + \int_{\Omega_{s_2 r}} \left[\frac{|F|}{\delta} \right]^{q} \, dx \right).
\end{eqnarray*}
We also notice that
$$
\int_{\Omega_{s_1 r} \setminus E_k(KA\lambda_0, s_1 r)} [\Phi(u,V)]^p[\Phi(u,V)_k]^{q-p}\, dx \leq (KA\lambda_0)^{q-p} \int_{\Omega_{s_1 r}} [\Phi(u,V)]^p \, dx.
$$
Finally, from the last two estimates we have that
\begin{eqnarray*}
\int_{\Omega_{s_1 r}}[\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx & \leq & (KA\lambda_0)^{q-p}\int_{\Omega_{s_1 r}}[\Phi(u,V)]^p\,dx\\
&&\hspace{-4.5cm}+c_2\tilde \epsilon K^{q-p} \left( \int_{\Omega_{s_2 r}} [\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx + \int_{\Omega_{s_2 r}} \left[\frac{|F|}{\delta} \right]^{q} \, dx \right)
\end{eqnarray*}
for some $c_2=c_2(n,p,\gamma,L,\nu,b_\gamma,q)>0$. At this stage, we recall the definition of $\tilde \epsilon$ in \eqref{tepsilon}, and then take large $K>1$ and small $\epsilon\in(0,1)$ depending on $n,p,\gamma,L,\nu,b_\gamma,q$ such that
$$
K\geq (4c_2)^{\frac{1}{p\gamma-q}} \ \ \ \text{and}\ \ \ \epsilon\leq\frac{1}{4c_2K^{q-p}},
$$
so that $c_2\tilde\epsilon K^{q-p}\leq c_2\epsilon K^{q-p}+c_2K^{q-p\gamma}\leq \frac14+\frac14=\frac12$; hence $\delta=\delta(n,p,\gamma,L,\nu,b_\gamma,q)\in(0,1)$ is finally determined. Consequently, recalling the definition of $A$ in \eqref{lambdarg} we have
\begin{eqnarray*}
\int_{\Omega_{s_1 r}}[\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx & \leq & \frac12 \int_{\Omega_{s_2 r}} [\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx \\
&& \hspace{-2cm}+\frac{c\lambda_0^{q-p}}{(s_2-s_1)^{\frac {n(q-p)}{p}}}\int_{\Omega_{2r}}[\Phi(u,V)]^p\,dx+ c \int_{\Omega_{2 r}} |F|^q \, dx .
\end{eqnarray*}
Then applying Lemma~\ref{teclem}, we derive that
$$
\int_{\Omega_{r}}[\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx \leq c\lambda_0^{q-p}\int_{\Omega_{2r}}[\Phi(u,V)]^p\,dx+ c \int_{\Omega_{2 r}} |F|^q \, dx
$$
for any $k>A\lambda_0$. Finally, by Lebesgue's monotone convergence theorem, the definition of $\lambda_0$ in \eqref{lambda0}, H\"older's inequality and Young's inequality, we obtain that
\begin{eqnarray*}
\mint_{\Omega_{r}}[\Phi(u,V)]^q\,dx &= &\lim_{k\to\infty} \mint_{\Omega_{r}}[\Phi(u,V)]^p [\Phi(u,V)_k]^{q-p} \, dx\\
& \leq& c\lambda_0^{q-p}\mint_{\Omega_{2r}}[\Phi(u,V)]^p\,dx+ c \mint_{\Omega_{2 r}} |F|^q \, dx\\
& \leq& c\left(\mint_{\Omega_{2r}}[\Phi(u,V)]^p\,dx\right)^{\frac{q}{p}}+ c \mint_{\Omega_{2 r}} |F|^q \, dx,
\end{eqnarray*}
and so, recalling the definition of $\Phi(u,V)$ in \eqref{Phi},
\begin{eqnarray}\label{mainlocalest1}
\nonumber\left( \mint_{\Omega_{r}} |Du|^q +\left[V^{\frac1p}|u|\right]^{q}\, dx\right)^{\frac1q} &\leq& c \left( \mint_{\Omega_{2r}} |Du|^p + \left[ V^{\frac1p} |u|\right]^p \,dx \right)^{\frac{1}{p}}\\
&&+ c \left(\mint_{\Omega_{2r}} \left|F \right|^{q} \, dx\right)^{\frac1q},
\end{eqnarray}
which gives the estimate \eqref{mainlocalest} for $q\in(p,p\gamma)$.
\vspace{0.2cm}\textit{Step 5. Proof of \eqref{mainlocalest} when $q\in[p\gamma,\gamma^*(p-1))$.} \ \\
Finally, we prove the estimate \eqref{mainlocalest} for the remaining range of $q$. Note that we only consider the gradient of $u$ since $\chi_{\{q<p\gamma\}}=0$.
We first suppose that $q\in[p\gamma,\gamma^*(p-1))$ satisfies
\begin{equation}\label{case1}
\max\left\{p,\frac{n(p-1)}{n-1}\right\}<q.
\end{equation}
Note that if $p\leq n$ we have $\max\{p,\frac{n(p-1)}{n-1}\}=p$ and so the previous inequality is trivial. Then let us set $\tilde q\in (1,\gamma)$ such that
\begin{equation}\label{tq}
q=(p-1)\tilde q^*.
\end{equation}
Then we see from H\"older's inequality that
\begin{eqnarray*}
\left( \mint_{\Omega_{2r}} \left[ r V |u|^{p-1}\right]^{\tilde{q}}\, dx \right)^{\frac{1}{\tilde{q}}} & = &\left( \mint_{\Omega_{2r}} \left[ r V^{\frac1p} \right]^{\tilde{q}} \left[ V^{\frac1p} |u| \right]^{(p-1)\tilde{q}}\, dx \right)^{\frac{1}{\tilde q}} \\
&\leq& \left( \mint_{\Omega_{2r}} [r^pV]^{\tilde{q}}\, dx\right)^{\frac{1}{p\tilde{q}}} \left( \mint_{\Omega_{2r}} \left[V^{\frac{1}{p}}|u|\right]^{p\tilde{q}} \, dx \right)^{\frac{p-1}{p\tilde{q}}} \\
& \leq & c \left( r^{p\gamma-n} \int_{\Omega_{2r}} V^{\gamma}\, dx\right)^{\frac{1}{p\gamma}} \left( \mint_{\Omega_{2r}} \left[V^{\frac{1}{p}}|u|\right]^{p\tilde{q}} \, dx \right)^{\frac{p-1}{p\tilde{q}}}\\
& \leq & c \left( \mint_{\Omega_{2r}} \left[V^{\frac{1}{p}}|u| \right]^{p\tilde{q}} \, dx \right)^{\frac{p-1}{p\tilde{q}}}.
\end{eqnarray*}
Here we have used the facts that $\tilde{q} \in (1,\gamma)$ and that, since $p - \frac{n}{\gamma}\geq 0$, $\left( r^{p\gamma-n} \int_{\Omega_{2r}} V^{\gamma}\, dx\right)^{\frac{1}{p\gamma}} = \left[ r^{p-\frac{n}{\gamma}}\, \Vert V \Vert_{L^{\gamma}(\Omega_{2r})}\right]^{\frac1p} \leq \left[ (2r)^{p - \frac{n}{\gamma}} \Vert V \Vert_{L^{\gamma}(\Omega_{2r})}\right]^{\frac1p}\leq 1.$ Therefore, applying the estimate \eqref{mainlocalest1} with $q$ and $r$ replaced by $p\tilde q$ and $2r$, respectively, we have that $V |u|^{p-2}u\in L^{\tilde q}(\Omega_{2r})$ with the estimate
\begin{eqnarray}\label{mainpf1}
\nonumber \left( \mint_{\Omega_{2r}} \left[ r V |u|^{p-1}\right]^{\tilde{q}}\, dx \right)^{\frac{1}{\tilde{q}}} &\leq& c \left( \mint_{\Omega_{4r}}\left[ |Du| + V^\frac{1}{p}|u|\right]^p \,dx \right)^{\frac{p-1}{p}}\\
&&+c \left(\mint_{\Omega_{4 r}} |F|^{p\tilde{q}} \, dx\right)^{\frac{p-1}{p\tilde{q}}}.
\end{eqnarray}
Finally, by Theorem~\ref{thmDwbdd} with $f=V |u|^{p-2}u$, the previous estimate \eqref{mainpf1} and H\"older's inequality, we have that
\begin{eqnarray}\label{mainpf2}
\nonumber \left(\mint_{\Omega_r} |Du|^{q} \, dx \right)^{\frac{1}{q}} &\leq& c \left( \mint_{\Omega_{2r}} |Du|^{p}\, dx\right)^{\frac{1}{p}} + c \left( \mint_{\Omega_{2r}} \left[ r V |u|^{p-1}\right]^{\tilde{q}} \, dx\right)^{\frac{1}{\tilde{q}(p-1)}} \\
\nonumber && + c \left( \mint_{\Omega_{2r}} |F|^{q} \, dx\right)^{\frac{1}{q}}\\
\nonumber &\leq& c \, \left( \mint_{\Omega_{4r}} \left[ |Du|+ V^{\frac1p} |u|\right]^p \,dx \right)^{\frac{1}{p}} + c \left(\mint_{\Omega_{4 r}} \left|F \right|^{p \tilde{q}} \, dx\right)^{\frac{1}{p \tilde{q}}} \\
\nonumber && + c \left(\mint_{\Omega_{4 r}} \left|F \right|^{q} \, dx\right)^{\frac{1}{q}}\\
&\leq& c \, \left( \mint_{\Omega_{4r}} \left[ |Du|+ V^{\frac1p} |u|\right]^p \,dx \right)^{\frac{1}{p}} + c \left(\mint_{\Omega_{4 r}} \left|F \right|^{q} \, dx\right)^{\frac{1}{q}},
\end{eqnarray}
which proves the estimate \eqref{mainlocalest}.
We next assume that $q\in[p\gamma,\gamma^*(p-1))$ does not satisfy \eqref{case1}, that is,
$$
p \gamma \leq q\leq \max\left\{p,\frac{n(p-1)}{n-1}\right\}.
$$
Note that this happens only for the case that $p>n$ and $1<\gamma\leq \frac{n(p-1)}{p(n-1)}$, and that, in this case, we cannot find $\tilde q\in (1,\gamma)$ satisfying \eqref{tq}. Instead, let us set
$$
\tilde q:= \frac{1+\gamma}{2} \in (1,\gamma).
$$
Then, by the same argument as above, we have the estimate \eqref{mainpf1}. Using this, Theorem~\ref{thmDwbdd1} (instead of Theorem \ref{thmDwbdd}) and H\"older's inequality, we obtain the estimate \eqref{mainpf2}. Hence, \eqref{mainlocalest} holds for the remaining range of $q$. This completes the proof.
\subsection{Proof of Corollary \ref{maincor}}\ \\
We take the test function $\varphi=u$ in the weak formulation \eqref{weakform}, and then use Young's inequality to arrive at
\begin{eqnarray*}
&&c(p,\nu)\int_{\Omega} |Du|^{p}\,dx + \int_{\Omega} V|u|^p \, dx \\
&&\quad \leq \int_{\Omega} \mathbf{a}(x, Du) \cdot Du \, dx + \int_{\Omega} V |u|^{p-2}u \cdot u \, dx = \int_{\Omega} |F|^{p-2} F \cdot D u\, dx \\
&&\quad \leq \int_{\Omega} |F|^{p-1} |Du| \,dx \leq c(\tau) \int_{\Omega} |F|^{p} \, dx + \tau \int_{\Omega} |Du|^{p} \,dx
\end{eqnarray*}
for any small $\tau>0.$ Here we have used the inequality that $\mathbf{a}(x, \xi)\cdot \xi \geq c(p,\nu) |\xi|^p,$ which can be easily obtained from \eqref{aas2}. We choose $\tau>0$ so small that
\begin{equation}\label{energyest}
\int_{\Omega} |Du|^{p}\,dx + \int_{\Omega} V|u|^p \, dx \leq c \int_{\Omega} |F|^{p} \, dx. \end{equation}
On the other hand, from the resulting estimates \eqref{mainlocalest} with $r = \frac{\tilde{R}}{4}$ where $\tilde{R} := \min\Big\{ R, \Vert V \Vert_{L^{\gamma}(\Omega)}^{-\frac{1}{p-\frac{n}{\gamma}}}\Big\}$ and $x_0 \in \overline{\Omega},$ we get that \begin{eqnarray}\label{Duestcov}
\nonumber&&\int_{\Omega_{\tilde{R}/4}(x_0)} |Du|^{q} + \chi_{\{q<p\gamma\}} \left[ V^{\frac1p} |u| \right]^q \, dx \\
&&\qquad \leq \frac{c}{\tilde{R}^{n\left(\frac{q}{p}-1\right)}}\left( \int_{\Omega_{\tilde{R}}(x_0)} |Du|^p + \left[ V^{\frac1p} |u| \right]^p \,dx \right)^{\frac{q}{p}} + c \int_{\Omega_{\tilde{R}}(x_0)} \left|F \right|^{q} \, dx.
\end{eqnarray}
Since $\overline{\Omega}$ is compact, by Vitali's covering lemma, there exist finitely many points $x_0^1, \cdots, x_0^N$ in $\overline{\Omega}$ such that $B_{\tilde{R}/20}(x_0^k)$, $k=1,2,\dots,N$ are mutually disjoint and $\Omega \subseteq \bigcup_{k=1}^{N} B_{\tilde{R}/4}(x_0^k).$ Here we note that $\sum_{k=1}^N\chi_{B_{\tilde{R}}(x_0^k)}\leq c(n)$. Therefore from \eqref{Duestcov}, we deduce that
\begin{eqnarray}\label{glDuestcov}
\nonumber&&\int_{\Omega} |Du|^{q} + \chi_{\{q<p\gamma\}} \left[ V^{\frac1p} |u| \right]^q \, dx \\
\nonumber&&\quad \leq \sum_{k=1}^{N} \int_{\Omega_{\tilde{R}/4}(x_0^k)} |Du|^{q} + \chi_{\{q<p\gamma\}} \left[ V^{\frac1p} |u| \right]^q \, dx \\
\nonumber&&\quad\leq c\,\sum_{k=1}^{N} \left\{\frac{1}{\tilde{R}^{n\left(\frac{q}{p}-1\right)}}\left( \int_{\Omega_{\tilde{R}}(x_0^k)} |Du|^p + \left[ V^{\frac1p} |u| \right]^p \,dx \right)^{\frac{q}{p}} + \int_{\Omega_{\tilde{R}}(x_0^k)} \left|F \right|^{q} \, dx\right\}\\
&&\quad \leq \frac{c}{\tilde{R}^{n\left(\frac{q}{p}-1\right)}} \left( \int_{\Omega} |Du|^p + \left[ V^{\frac1p} |u| \right]^p \,dx \right)^{\frac{q}{p}} + c \int_{\Omega} \left|F \right|^{q} \, dx.
\end{eqnarray}
In turn, inserting \eqref{energyest} into \eqref{glDuestcov} and using H\"older's inequality together with the fact that $\mathrm{diam}(\Omega)> \tilde{R}$, we obtain
\begin{eqnarray}
\nonumber&&\int_{\Omega} |Du|^{q} + \chi_{\{q<p\gamma\}} \left[ V^{\frac1p} |u| \right]^q \, dx \leq c\left(\frac{\mathrm{diam}(\Omega)}{\tilde{R}}\right)^{\frac{n(q-p)}{p}}\int_{\Omega} \left|F \right|^{q} \, dx,
\end{eqnarray}
which implies the desired estimates \eqref{maingloest}.
\bibliographystyle{amsplain}
In their paper \cite{MOY}, Murakami, Ohtsuki, and Yamada gave an interpretation of the quantum $\mathfrak{sl}(N)$ invariant of a knot or link via a state sum model. This is a generalization of the case $N=2$ which is Kauffman's interpretation of the Jones polynomial via the Kauffman bracket. In MOY's work, however, the states of an oriented knot diagram are now planar oriented colored trivalent graphs. Based on this model, Khovanov and Rozansky have given link homology theories \cite{KR1} that categorify the $\mathfrak{sl}(N)$ knot invariant.
There are relationships between these quantum invariants and other invariants of knots and links, such as the knot group, representation spaces of knot groups, or the various Floer homology theories associated to knots and links. Based on observations on $(2,2p+1)$ torus knots, Kronheimer and Mrowka \cite{KM2} have given a relationship between Khovanov homology, which appears as the $N=2$ case of the above-mentioned homology theories, and (singular) instanton knot Floer homology.
In this paper we relate MOY's polynomial $P_N(\Gamma)$ of colored trivalent graphs $\Gamma$ to a certain moduli space of decorations of the graph which is itself a space of representations of the graph complement in $SU(N)$ with meridional conditions on the edges of the graphs. Our moduli space can be considered as a subspace of a product of projective spaces, which leads us to think our moduli space may have some relation with the construction of a categorification of the $\mathfrak{sl}(N)$ polynomial by Cautis and Kamnitzer \cite{CK}.
\\
In the first section we state our result and give context for it in terms of a conjectural picture for higher rank instanton knot Floer homology; the second section contains the proof of the result.
\subsection*{Acknowledgements}
We thank Hans Boden, Sabin Cautis, and Dmitri Panov for useful general discussions, and Saugata Basu for answering some questions of ours on real varieties. We thank CIRM and their Research in Pairs program for providing the environment in which this paper was written.
\section{Background and Results}
\subsection{Colored trivalent graphs and MOY moves}
In \cite{MOY}, Murakami, Ohtsuki, and Yamada give an invariant $P_N(\Gamma)$ of colored trivalent planar graphs $\Gamma$, taking values in the ring $\mathbb{Z}[q, q^{-1}]$ with non-negative coefficients, which determines the quantum invariant of an oriented knot colored with the fundamental representation of $\mathfrak{sl}(N)$ as a state sum via the relationship in Figure~\ref{MOYtoknot}.
\begin{figure}
\centerline{
{
\psfrag{thing1}{$= q^{1-n}$}
\psfrag{thing2}{$-q^{-n}$}
\psfrag{thing3}{$=q^{n-1}$}
\psfrag{thing4}{$-q^n$}
\psfrag{2}{$2$}
\psfrag{x4}{$x_4$}
\psfrag{T+(D)}{$T^+(D)$}
\includegraphics[height=2in,width=3.5in]{MOYtoknot.eps}
}}
\caption{We show how the MOY polynomials associated to colored planar trivalent graphs can be used to compute the quantum $\mathfrak{sl}(N)$ polynomial of a knot. On the left hand side of the local equations are knot diagrams; on the right hand side are trivalent graphs. We have indicated the edges on the right hand side that are colored with $2$; all other edges on the right hand side are colored with $1$.}
\label{MOYtoknot}
\end{figure}
In Figure \ref{MOYtoknot} the `colors' are drawn from the set $\{ 1,2 \}$ and correspond to associating to each edge of the graph either the fundamental $N$-dimensional representation $V$ of $\mathfrak{sl}(N)$ or the representation $V \wedge V$. It is important that we restrict the colorings so that meeting at any vertex there are two edges colored with $1$ and one edge colored with $2$.
More generally, in \cite{MOY}, all colors $\wedge^i V$ for $1 \leq i \leq N-1$ are considered, giving state sum interpretations of the quantum $\mathfrak{sl}(N)$ invariants of knots colored with any antisymmetric representation of $\mathfrak{sl}(N)$. We expect the results of this paper to generalize without too much difficulty to these situations, but here we shall mainly be concerned with the two colors required to give a state sum interpretation of the quantum $\mathfrak{sl}(N)$ polynomial for a knot colored with the standard representation of $\mathfrak{sl}(N)$ as in Figure \ref{MOYtoknot}.
In \cite{MOY} it is important that the graphs $\Gamma$ considered admit a consistent orientation of the edges. Namely at any trivalent vertex of $\Gamma$ the two 1-colored edges either both point in or both point out, and the 2-colored edge does the opposite. Another way of saying this is that the total \emph{flux} into a trivalent point (counting the colors as a quantity of flux) should be zero; this is how the consistency condition generalizes to higher antisymmetric powers. Note that trivalent graphs arising as states of an oriented knot diagram inherit a consistent orientation from the orientation of the knot (see Figure \ref{MOYtoknot}). From now on whenever we write \emph{colored trivalent graph} we will only mean graphs satisfying such a condition.
The polynomial $P_N(\Gamma)$ associated by Murakami, Ohtsuki, and Yamada to a colored trivalent graph $\Gamma$ can be computed by use of the \emph{MOY moves}. These are local relationships that look much like the Reidemeister moves. We give these moves in the next section where we will see that they hold also (up to a shift in some cases or after evaluation at unity in all cases) for the Euler characteristic of a certain moduli space associated to a colored trivalent graph.
\subsection{A moduli problem and the main theorem}
Given a colored trivalent graph $\Gamma$ where each edge is colored with either $1$ or $2$ as in the last section, decorate each edge colored with $1$ by a point of complex projective $(N-1)$-space $\mathbb{P}^{N-1}$, and each edge colored with $2$ by a point of $\mathbb{G}(2,N)$, the Grassmannian of 2-planes in $\mathbb{C}^N$. We call this decoration \emph{admissible} if the edge decorations at any trivalent vertex correspond to two orthogonal lines in $\mathbb{C}^N$ and the plane that they span.
\begin{definition}
\label{admissdef}
The set of all admissible decorations of such a colored planar trivalent graph $\Gamma$ forms a moduli space which we denote $\mathscr{M}(\Gamma)$.
\end{definition}
There is a natural generalization of this definition which associates a moduli space to a colored trivalent graph with colors drawn from the set $\{1,2, \ldots, N-1 \}$ such that at any vertex the three edges are colored by $a$, $b$, and $a+b$. In this situation admissible decorations are decorations of each $a$-colored edge with a point in $\mathbb{G}(a,N)$ such that at any vertex the three decorations consist of two orthogonal subspaces of $\mathbb{C}^N$ and the subspace that they span.
As we are mainly motivated by the invariants of knots colored with the standard representation of $\mathfrak{sl}(N)$, we shall mostly restrict our attention to the case when we draw colors from the set $\{1,2\}$. We expand this to the set $\{1,2,3\}$ when doing so is convenient in Subsection \ref{MOYIVsect}.
\begin{figure}
\centerline{
{
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{thing3}{$=q^{n-1}$}
\psfrag{thing4}{$-q^n$}
\psfrag{2}{$2$}
\psfrag{x4}{$x_4$}
\psfrag{T+(D)}{$T^+(D)$}
\psfrag{c}{$c$}
\psfrag{d}{$d$}
\psfrag{e}{$e$}
\psfrag{f}{$f$}
\psfrag{g}{$g$}
\psfrag{h}{$h$}
\includegraphics[height=2.5in,width=2in]{MOYbraidexample.eps}
}}
\caption{We draw a colored graph $G$. In $G$ the edges colored with $2$ are straight and vertical and have been drawn slightly thicker than the edges colored with $1$. The labels $a,b,c,d,e,f,g,h$ refer to points of projective space $\mathbb{P}^2$ decorating the $1$-colored edges.}
\label{MOYbraidexample}
\end{figure}
In Figure \ref{MOYbraidexample} we draw a colored graph $G$. As an example, we now determine $H_*(\mathscr{M}(G))$ in the case $N=3$. In the discussion we sometimes write the ``span'' of points in projective space: we mean the projectivization of the span of the corresponding subspaces of $\mathbb{C}^N$.
\begin{definition}
Given a space $X$ with homology finitely generated and of bounded degree, we write $\pi(X)$ for the Poincar\'e polynomial of the homology of the space:
\[ \pi(X) = \sum_i q^i \dim{H_i(X)} \in \mathbb{Z}[q] {\rm .} \]
\end{definition}
\begin{prop}
\label{countereg}
For the graph $G$ as drawn in Figure \ref{MOYbraidexample} we have
\[ \pi(\mathscr{M}(G)) = (1+3q^2)(1+q^2)(1+q^2+q^4) {\rm .} \]
\end{prop}
\begin{proof}
If we consider $f \in \mathbb{P}^2$ as fixed while all other decorations are allowed to vary over $\mathbb{P}^2$ we get a moduli space that we call $\mathscr{M}_1$. By evaluation of $f$ we see that $\mathscr{M}(G)$ is a fiber bundle over $\mathbb{P}^2$ with fiber $\mathscr{M}_1$.
With $f$ fixed, we see that $g$ is orthogonal to $f$, and so must lie in a $\mathbb{P}^1$. Writing $\mathscr{M}_2$ for the moduli space in which we fix both $f$ and an orthogonal $g$, we see that $\mathscr{M}_1$ is a fiber bundle over $\mathbb{P}^1$ with fiber $\mathscr{M}_2$.
We focus on $\mathscr{M}_2$. Observe that $h$ has to lie in the $\mathbb{P}^1$ consisting of points orthogonal to $g$. In the case that $h$ is not the unique point of this $\mathbb{P}^1$ that is also orthogonal to $f$, there is a unique admissible decoration of the rest of the graph, namely $a=f$, $b=c=d=g$, $e=h$.
Suppose now that $h$ is orthogonal to both $f$ and $g$. The point $e$ has to lie in the $\mathbb{P}^1$ consisting of points in the span of $g$ and $h$. Suppose that $e \not= h$. Then it is easy to check that there is a unique way to decorate the rest of the graph admissibly. In the case that $e = h$ then any possible choice of $a$ in the $\mathbb{P}^1$ spanned by $f$ and $g$ determines an admissible decoration of $G$.
This shows that, up to homotopy equivalence, $\mathscr{M}_2 \simeq \vee^3 \mathbb{P}^1$. Since this is a space with only even-dimensional homology, as is projective space, all differentials in the Serre spectral sequence computing first $H_*(\mathscr{M}_1)$ and then $H_*(\mathscr{M}(G))$ vanish, hence we have the result.
\end{proof}
The motivating hypothesis for this paper is that for a general colored graph $\Gamma$, the Poincar\'e polynomial of the homology $H_*(\mathscr{M}(\Gamma))$ is related to $P_N(\Gamma)$.
\begin{hypothesis}
\label{dreamthm}
We have
\[ q^C \pi(\mathscr{M}(\Gamma)) = P_N(\Gamma) \rm{,}\]
\noindent for some $C \in \mathbb{Z}$.
\end{hypothesis}
Although this hypothesis is consistent with the first few MOY moves, it fails in general. In fact for the graph $G$ of Proposition \ref{countereg} we have
\[ P_3(G) = (q + q^{-1})^3 (q^{2} + 1 + q^{-2}) \not= q^C \pi(\mathscr{M}(G)) \,\, {\rm for} \,\, {\rm any} \,\, C \in \mathbb{Z} {\rm .} \]
\noindent Note that this also invalidates the hypothesis for `braid-like' trivalent graphs. Instead we prove that a version of Hypothesis \ref{dreamthm} holds when restricting our attention to the Euler characteristic.
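Note on the other hand that, since $H_*(\mathscr{M}(G))$ is concentrated in even degrees, evaluating at $q=1$ gives
\[ \chi(\mathscr{M}(G)) = \pi(\mathscr{M}(G))(1) = 4\cdot 2 \cdot 3 = 24 = (1+1)^3\,(1+1+1) = P_3(G)(1) {\rm ,} \]
in accordance with Theorem \ref{maintheorem} below.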
\begin{theorem}
\label{maintheorem}
For a colored planar trivalent graph $\Gamma$ we have
\[ \chi(\mathscr{M}(\Gamma)) = P_N(\Gamma)(1) {\rm ,} \]
\noindent where $\chi$ denotes the Euler characteristic and $P_N(\Gamma)(1)$ denotes the evaluation of $P_N(\Gamma)$ at $q=1$.
\end{theorem}
\noindent We believe in fact that the homology of $\mathscr{M}(\Gamma)$ is supported in even degrees, but we are not yet able to show this.
\begin{conjecture}
\label{evenness}
For any graph $\Gamma$ and for $i$ odd we have $H_i(\mathscr{M}(\Gamma)) = 0$.
\end{conjecture}
\noindent Clearly this conjecture and Theorem \ref{maintheorem} would together imply that Hypothesis \ref{dreamthm} holds when setting $q=1$.
\subsection{Relation to representation spaces}
The complement of a planar graph $\Gamma$ in $S^3$ is homotopy equivalent to a wedge of $k$ circles, where $k$ is equal to the first Betti number of $\Gamma$. Therefore, its fundamental group $G_\Gamma$ is isomorphic to the $k$-fold free product of $\mathbb{Z}$, and the space of homomorphisms $\text{Hom}(G_\Gamma,SU(N))$ is just the $k$-fold product of $SU(N)$. Nonetheless, for a planar trivalent graph this homotopy equivalence does not suggest the most natural generators. Instead we use the presentation given in the following Lemma.
\begin{lemma}
The group $G_\Gamma = \pi_1(S^3 \setminus \Gamma)$ admits a presentation given by
\[
\langle \, x_1, \dots, x_m \ | \ R_1, \dots, R_c \ \rangle \ ,
\]
where $m$ is the number of edges, $x_i$ represents a positively oriented meridian to the $i^{th}$ edge, where $c$ is the number of trivalent vertices, and where the relations $R_i$ are the obvious relations at each trivalent vertex.
\end{lemma}
\qed
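For instance, at a trivalent vertex where two 1-colored edges with meridians $x_i$ and $x_j$ meet a 2-colored edge with meridian $x_k$, the corresponding relation takes the form (for suitable choices of orientations and of the paths joining the meridians to the basepoint, and up to conjugation and the order of the factors)
\[
x_i \, x_j = x_k \ ,
\]
as one sees by considering a small sphere around the vertex.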
We can think, once and for all, that the basepoint is fixed somewhere {\em above} the plane of the graph, e.g. in the eye of the observer. Ignoring the dependence on the basepoint in the sequel will make everything well-defined up to global conjugation.
\\
We are motivated by the perspective of a potential relationship between the $\mathfrak{sl}(N)$ knot homology theory of Khovanov and Rozansky \cite{KR1} and the instanton Floer homology associated to the group $SU(N)$, as developed by Kronheimer and Mrowka in \cite{KM1}. Therefore, we are only looking at representations for which a certain condition on the conjugacy class of a meridian is satisfied. In \cite[Section 2.5 Example (ii)]{KM1} a coherent condition is that the conjugacy class of a meridian $m$ is sent by a representation $\rho : G_\Gamma \to SU(N)$ to the conjugacy class of the element $\Phi_1$ given by
\[
\Phi_1:= \, \zeta \, \begin{pmatrix} -1 & 0 & 0 & \dots & 0 \\
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & 0 \\
0 & 0 & \dots & 0 & 1
\end{pmatrix} \ ,
\]
where $\zeta$ is a primitive $N^{th}$ root of $-1$, e.g. $\zeta = \exp(i \pi / N)$. We shall also need another special element of $SU(N)$:
\[
\Phi_2:= \, \zeta^2 \, \begin{pmatrix} -1 & 0 & 0 & \dots & 0 \\
0 & -1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \ddots & 0 \\
0 & 0 & \dots & 0 & 1
\end{pmatrix} \ .
\]
Notice that for $N=2$ the corresponding meridional condition is satisfied with either orientation on the edge, in contrast to the situation where $N \geq 3$.
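Notice also that $\Phi_1$ and $\Phi_2$ indeed lie in $SU(N)$: since $\zeta^N = -1$, we have $\det \Phi_1 = -\zeta^{N} = 1$ and $\det \Phi_2 = \zeta^{2N} = 1$.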
\\
For the purpose of this section, we shall give an orientation of the 2-colored edges. We orient the 2-colored edges coherently, so that we can think of each edge as carrying a flux of magnitude given by its color and so that there are no sources or sinks of flux at any point of the graph.
We now define a subspace of $\text{Hom}(G_\Gamma,SU(N))$ that we will relate to the moduli space of decorations considered above.
\begin{definition}
Suppose we are given a trivalent oriented graph with 1-colored and 2-colored oriented edges. We denote by $R_{\Phi_1,\Phi_2}(G_\Gamma;SU(N))$ the space of homomorphisms $\rho: G_\Gamma \to SU(N)$ such that for any oriented meridian $m$ to an oriented 1-colored edge $e$, and to any oriented meridian $n$ to a 2-colored edge $E$ we have
\begin{equation*}
\rho(m) \sim \Phi_1 \, \ \ \text{and} \ \ \, \rho(n) \sim \Phi_2 \ ,
\end{equation*}
i.e.\ $\rho(m)$ is conjugate to $\Phi_1$ inside $SU(N)$, and likewise $\rho(n)$ is conjugate to $\Phi_2$.
\end{definition}
Suppose now that we are given a trivalent oriented graph $\Gamma$ in the plane such that at any trivalent vertex the two adjacent oriented 1-colored edges either both point towards the vertex or both point away from it.
There is a natural map
\begin{equation}
\label{art deco}
D: R_{\Phi_1,\Phi_2}(G_\Gamma;SU(N)) \to \mathscr{M}(\Gamma)
\end{equation}
defined in the following way. To a representation $\rho$ we are going to associate the following decoration of $\Gamma$: Let $e$ be an oriented 1-colored edge and $m$ an oriented meridian of $e$. We decorate $e$ by the 1-dimensional eigenspace of $\rho(m) \in SU(N)$ associated to the eigenvalue $-\zeta$. Let $E$ be a 2-colored edge and $n$ an oriented meridian of $E$. We decorate the edge $E$ by the 2-dimensional eigenspace of $\rho(n) \in SU(N)$ associated to the eigenvalue $-\zeta^2$.
That this map is indeed well-defined is a consequence of the following lemma and the imposed requirement on the orientations of the edges that meet at trivalent vertices.
\begin{lemma}
Let $S, T \in SU(N)$ be two elements that are both conjugate to the element $\Phi_1$ above. Then the composition $ST$ is conjugate to $\Phi_2$ if and only if the $(-\zeta)$ eigenspaces of $S$ and $T$ are orthogonal.
\end{lemma}
\begin{proof}
We just have to prove the `only if' statement as the other direction is trivial. Observe that we have
\[
\text{tr}(\Phi_2) = \zeta^2 (-2 + (N-2)) = \zeta^2 (N-4) \ ,
\]
\noindent and that the trace is clearly an invariant of the conjugacy class of an element in $SU(N)$.
On the other hand it follows from explicit computation that
$\text{tr}(ST)$ is equal to this value only if the two 1-dimensional eigenspaces of $S$ and $T$ are orthogonal to each other.
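Explicitly, writing $S = \zeta\,(1_N - 2\Pi_S)$ and $T = \zeta\,(1_N - 2\Pi_T)$, where $\Pi_S$ and $\Pi_T$ denote the orthogonal projections onto the $(-\zeta)$-eigenspaces of $S$ and $T$, one computes
\[
\text{tr}(ST) = \zeta^2\, \text{tr}\left( 1_N - 2\Pi_S - 2\Pi_T + 4\,\Pi_S \Pi_T \right) = \zeta^2(N-4) + 4\,\zeta^2\, \text{tr}(\Pi_S \Pi_T) \ ,
\]
and $\text{tr}(\Pi_S \Pi_T) = |\langle v, w \rangle|^2$ for unit vectors $v$ and $w$ spanning the two eigenspaces, which vanishes precisely when these eigenspaces are orthogonal.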
\end{proof}
\begin{proposition}
Suppose we are given a trivalent oriented graph $\Gamma$ in the plane such that at any trivalent vertex the two adjacent oriented 1-colored edges either both point towards the vertex or both point away from it. Then the map
\[ D: R_{\Phi_1,\Phi_2}(G_\Gamma;SU(N)) \to \mathscr{M}(\Gamma)
\]
just defined is a homeomorphism.
\end{proposition}
\begin{proof}
It suffices to give a map $P: \mathscr{M}(\Gamma) \to R_{\Phi_1,\Phi_2}(G_\Gamma;SU(N))$ that is inverse to $D$. But the orientations of the edges associate to each element of $\mathscr{M}(\Gamma)$ a representation in $R_{\Phi_1,\Phi_2}(G_\Gamma;SU(N))$, and the two maps are clearly inverses of each other.
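Explicitly, if an admissible decoration assigns the line $\ell$ to a 1-colored edge and the $2$-plane $W$ to a 2-colored edge, one may define $P$ by sending the corresponding positively oriented meridians to $\zeta\,(1_N - 2\Pi_\ell)$ and $\zeta^2(1_N - 2\Pi_W)$ respectively, where $\Pi_\ell$ and $\Pi_W$ denote the orthogonal projections onto $\ell$ and $W$; the relations at the trivalent vertices are then satisfied since, for orthogonal lines $\ell_1, \ell_2$ spanning $W$,
\[
\zeta\,(1_N - 2\Pi_{\ell_1})\cdot \zeta\,(1_N - 2\Pi_{\ell_2}) = \zeta^2\left(1_N - 2\Pi_{\ell_1} - 2\Pi_{\ell_2}\right) = \zeta^2(1_N - 2\Pi_W) \ .
\]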
\end{proof}
\section{Proofs}
In this section we establish Theorem \ref{maintheorem} by considering the MOY moves, so breaking the proof of the theorem into five subsections, one for each move. The first three of these subsections contain results in terms of fiber bundles and are directed towards Hypothesis \ref{dreamthm} and Conjecture \ref{evenness}. Unfortunately it is not possible to prove such a result in the last case (see Proposition \ref{countereg} and the following discussion for a counterexample) so we prove results on the Euler characteristic instead.
\subsection{MOY move 0}
The $0^{th}$ MOY move, MOY0, states that the invariant $P_N(U)$ associated to a circle $U$ in the plane colored with $1$ is
\[ P_N(U) = \frac{q^N - q^{-N}}{q - q^{-1}} {\rm .}\]
\begin{prop}
\label{P1}
We have $\mathscr{M}(U) = \mathbb{P}^{N-1}$.
\end{prop}
The proof is immediate. Note this shows that $U$ satisfies Hypothesis \ref{dreamthm}.
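Indeed, since $\pi(\mathbb{P}^{N-1}) = 1 + q^2 + \cdots + q^{2(N-1)}$, we have
\[ q^{1-N}\, \pi(\mathscr{M}(U)) = q^{1-N} + q^{3-N} + \cdots + q^{N-1} = \frac{q^N - q^{-N}}{q - q^{-1}} = P_N(U) {\rm ,} \]
so that Hypothesis \ref{dreamthm} holds here with $C = 1-N$.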
\begin{figure}
\centerline{
{
\psfrag{thing1}{$= q^{1-n}$}
\psfrag{thing2}{$-q^{-n}$}
\psfrag{thing3}{$=q^{n-1}$}
\psfrag{thing4}{$-q^n$}
\psfrag{2}{$2$}
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{c}{$c$}
\includegraphics[height=1in,width=1.5in]{MOY1.eps}
}}
\caption{We call this subgraph $G$. The three edges labelled $a,b,c$ are colored with $1$; we have indicated the edge colored with $2$.}
\label{MOY1}
\end{figure}
\subsection{MOY move I}
Suppose that $\Gamma$ is a colored trivalent graph containing the subgraph $G$ defined in Figure \ref{MOY1}. The $1^{st}$ MOY move, MOY1, states that if $\Gamma'$ is the result of replacing $G$ in $\Gamma$ with a single $1$-colored edge then
\[ P_N(\Gamma) = \frac{q^{N-1} - q^{1-N}}{q - q^{-1}} P_N(\Gamma') {\rm .} \]
\begin{prop}
For $\Gamma$ and $\Gamma'$ as above we have that $\mathscr{M}(\Gamma)$ is a $\mathbb{P}^{N-2}$-bundle over $\mathscr{M}(\Gamma')$.
\end{prop}
\begin{proof}
Observe that any admissible decoration of the graph $\Gamma$, when restricted to the subgraph $G$, must assign to the edge $b$ the same decoration as to the edge $a$. Also, for a given decoration of $a$, the edge $c$ can be decorated by any line in the $(N-1)$-dimensional subspace orthogonal to the line decorating $a$.
\end{proof}
\noindent Then immediately we see
\begin{corollary}
\label{P2}
For $\Gamma$ and $\Gamma'$ as above we have that
\[ \chi(\mathscr{M}(\Gamma)) = \chi(\mathbb{P}^{N-2})\chi(\mathscr{M}(\Gamma')) = \left. \frac{q^{N-1} - q^{1-N}}{q - q^{-1}}\right\vert_{q=1} \chi(\mathscr{M}(\Gamma')) \rm{.} \] \qed
\end{corollary}
Observe further that with the assumption that $\mathscr{M}(\Gamma')$ has only even-dimensional homology we have
\[ \pi(\mathscr{M}(\Gamma)) = \pi(\mathbb{P}^{N-2}) \pi(\mathscr{M}(\Gamma')) \]
\noindent by the triviality of the differentials in the Serre spectral sequence. Note that this is in the direction of Hypothesis \ref{dreamthm}.
\subsection{MOY move II}
Suppose that $\Gamma$ is a colored trivalent graph containing the subgraph $G$ defined in Figure \ref{MOY2}. Then the move MOY2 states that if $\Gamma'$ is the result of replacing $G$ in $\Gamma$ by a single edge colored by $2$ then
\[ P_N(\Gamma) = (q + q^{-1}) P_N(\Gamma') {\rm .} \]
\begin{figure}
\centerline{
{
\psfrag{thing1}{$= q^{1-n}$}
\psfrag{thing2}{$-q^{-n}$}
\psfrag{thing3}{$=q^{n-1}$}
\psfrag{thing4}{$-q^n$}
\psfrag{2}{$2$}
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{c}{$c$}
\includegraphics[height=2in,width=1in]{MOY2.eps}
}}
\caption{We call this subgraph $G$. The edges labelled $a,b$ are colored with $1$; the other edges are colored with $2$ as indicated.}
\label{MOY2}
\end{figure}
\begin{prop}
For $\Gamma$ and $\Gamma'$ as above we have that $\mathscr{M}(\Gamma)$ is a $\mathbb{P}^1$-bundle over $\mathscr{M}(\Gamma')$.
\end{prop}
\begin{proof}
First observe that any admissible decoration of $\Gamma$ must decorate each of the 2-colored edges of $G$ with the same point of $\mathbb{G}(2,N)$. For such a choice, the decoration $a$ can be any line in the 2-plane decorating the 2-colored edges, and then $b$ must be the unique line orthogonal to $a$ in this plane.
\end{proof}
\noindent Then immediately we have
\begin{corollary}
\label{P3}
For $\Gamma$ and $\Gamma'$ as above
\[ \chi(\mathscr{M}(\Gamma)) = \chi(\mathbb{P}^1)\chi(\mathscr{M}(\Gamma')) = (q + q^{-1})\vert_{q=1} \chi(\mathscr{M}(\Gamma')) {\rm .} \] \qed
\end{corollary}
Observe further that with the assumption that $\mathscr{M}(\Gamma')$ has only even dimensional homology we have that
\[ \pi(\mathscr{M}(\Gamma)) = \pi(\mathbb{P}^1)\pi(\mathscr{M}(\Gamma')) \]
\noindent by the Serre spectral sequence. Again, this is in the direction of Hypothesis \ref{dreamthm}.
\subsection{MOY move III}
Suppose that $\Gamma$ is a colored trivalent graph containing the subgraph $G$ defined in Figure \ref{MOY2opp}. Then the move MOY3 states that if $\Gamma_i$ is the result of replacing $G$ in $\Gamma$ by the subgraph $G_i$ shown in Figure \ref{MOY2opp} for $i = 1,2$ then
\[ P_N(\Gamma) = \frac{q^{N-2} - q^{2-N}}{q - q^{-1}} P_N(\Gamma_1) + P_N(\Gamma_2) {\rm .} \]
\noindent Note that
\[ \left. \frac{q^{N-2} - q^{2-N}}{q - q^{-1}} \right|_{q=1} = N-2 {\rm .} \]
\begin{figure}
\centerline{
{
\psfrag{thing1}{$= q^{1-n}$}
\psfrag{thing2}{$-q^{-n}$}
\psfrag{thing3}{$=q^{n-1}$}
\psfrag{thing4}{$-q^n$}
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{c}{$c$}
\psfrag{d}{$d$}
\psfrag{e}{$e$}
\psfrag{f}{$f$}
\psfrag{2}{$2$}
\psfrag{G}{$G$}
\psfrag{G1}{$G_1$}
\psfrag{G2}{$G_2$}
\includegraphics[height=1.4in,width=4.4in]{MOY2opp.eps}
}}
\caption{We show three subgraphs $G$, $G_1$, $G_2$ of colored trivalent graphs. Each edge decorated with a lower case English letter is 1-colored, those shown with $2$ are 2-colored.}
\label{MOY2opp}
\end{figure}
\begin{prop}
\label{hisonalwhatsup}
For $\Gamma$, $\Gamma_1$, $\Gamma_2$ as described above we have
\[ \chi(\mathscr{M}(\Gamma)) = (N-2) \chi(\mathscr{M}(\Gamma_1)) + \chi(\mathscr{M}(\Gamma_2)) {\rm .}\]
\end{prop}
\begin{proof}
Consider the subgraph $G$ of $\Gamma$ as shown in Figure \ref{MOY2opp}. It is easy to see that for an admissible decoration of $\Gamma$ we must have either ($a=b$ and $c=d$) or ($a=c$ and $b=d$). In the case when $a=c\not=b=d$, there is a $\mathbb{P}^{N-3}$ of choices of decoration for the interior edges of $G$ (this $\mathbb{P}^{N-3}$ corresponds to the projectivization of the $(N-2)$-plane perpendicular to the $2$-plane spanned by the lines $a$ and $b$). In the case when $a=b\not=c=d$, there is a unique choice of decoration for the interior edges of $G$. And finally, in the case when $a=b=c=d$, there is a $\mathbb{P}^{N-2}$ of choices for the interior edges of $G$ (this $\mathbb{P}^{N-2}$ being the projectivization of the orthogonal complement to the line $a$).
Clearly for admissible decorations of $\Gamma_1$ we must have $a=c$ and $b=d$, and for admissible decorations of $\Gamma_2$ we must have $a=b$ and $c=d$.
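As a sanity check, the counts over a fixed decoration of the edges $a,b,c,d$ already match: when $a=b=c=d$, the interior edges of $G$ contribute $\chi(\mathbb{P}^{N-2}) = N-1 = (N-2)\cdot 1 + 1$; when $a=c\not=b=d$, they contribute $\chi(\mathbb{P}^{N-3}) = N-2 = (N-2)\cdot 1 + 0$; and when $a=b\not=c=d$, they contribute $1 = (N-2)\cdot 0 + 1$, where in each case the two terms on the right record the corresponding contributions for $\Gamma_1$ and $\Gamma_2$. The argument below makes this count rigorous.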
Note that our moduli spaces $\mathscr{M}(\Gamma)$, $\mathscr{M}(\Gamma_1)$, $\mathscr{M}(\Gamma_2)$ naturally have the structure of compact real varieties embedded in a product of complex projective spaces (the orthogonality condition at a trivalent vertex is a real polynomial condition not a complex condition).
By evaluation at $a,b,c,d$ we get algebraic maps to $(\mathbb{P}^{N-1})^4$ from each of $\mathscr{M}(\Gamma)$, $\mathscr{M}(\Gamma_1)$, $\mathscr{M}(\Gamma_2)$. We write $V$, $V_1$, $V_2$ for the preimages of the subvariety $\Delta$ of $(\mathbb{P}^{N-1})^4$ given by $a=b=c=d$.
Since we are dealing with an algebraic map of real varieties, for a small enough open set $U \subset (\mathbb{P}^{N-1})^4$ containing $\Delta$ we have that the respective preimages $\widetilde{V}$, $\widetilde{V_1}$, $\widetilde{V_2}$ of $U$ are each homotopy equivalent to $V$, $V_1$, $V_2$ respectively. This is a standard argument using Hardt triviality.
Note also that $V_1 = V_2$ naturally (although we will retain both indices for notational convenience) and $V$ is a $\mathbb{P}^{N-2}$-bundle over $V_1 = V_2$ so that
\[ \chi(\widetilde{V}) = \chi(V) = (N-1) \chi(V_1) = (N-2) \chi(V_1) + \chi(V_2) = (N-2) \chi(\widetilde{V_1}) + \chi(\widetilde{V_2}) {\rm .} \]
\noindent Using the Mayer-Vietoris sequence we see that
\begin{eqnarray*}
\chi(\mathscr{M}(\Gamma)) &=& \chi(\mathscr{M}(\Gamma) \setminus V) + \chi(\widetilde{V}) - \chi((\mathscr{M}(\Gamma) \setminus V)\cap \widetilde{V}) \\
&=& \chi(\mathscr{M}(\Gamma) \setminus V) + (N-2) \chi(\widetilde{V_1}) + \chi(\widetilde{V_2}) - \chi(\widetilde{V} \setminus V) {\rm .}
\end{eqnarray*}
Now note that $\mathscr{M}(\Gamma) \setminus V$ is the disjoint union of $\mathscr{M}(\Gamma_2) \setminus V_2$ and a $\mathbb{P}^{N-3}$-bundle over $\mathscr{M}(\Gamma_1) \setminus V_1$, and $\widetilde{V} \setminus V$ is the disjoint union of $\widetilde{V_2} \setminus V_2$ and a $\mathbb{P}^{N-3}$-bundle over $\widetilde{V_1} \setminus V_1$. Hence we have
\begin{eqnarray*}
\chi(\mathscr{M}(\Gamma)) &=& \chi(\mathscr{M}(\Gamma_2) \setminus V_2) + \chi(\widetilde{V_2}) - \chi(\widetilde{V_2} \setminus V_2) \\
&+& (N-2)\chi(\mathscr{M}(\Gamma_1) \setminus V_1) + (N-2) \chi(\widetilde{V_1}) - (N-2)\chi(\widetilde{V_1} \setminus V_1) \\
&=& (N-2)\chi(\mathscr{M}(\Gamma_1)) + \chi(\mathscr{M}(\Gamma_2)) {\rm .}
\end{eqnarray*}
\end{proof}
\subsection{MOY move IV}
\label{MOYIVsect}
\begin{figure}
\centerline{
{
\psfrag{thing1}{$= q^{1-n}$}
\psfrag{thing2}{$-q^{-n}$}
\psfrag{thing3}{$=q^{n-1}$}
\psfrag{thing4}{$-q^n$}
\psfrag{2}{$2$}
\psfrag{a}{$a$}
\psfrag{b}{$b$}
\psfrag{c}{$c$}
\psfrag{d}{$d$}
\psfrag{e}{$e$}
\psfrag{P}{$P$}
\psfrag{Q}{$Q$}
\psfrag{R}{$R$}
\psfrag{Phi}{$\Phi$}
\psfrag{Gamma}{$G$}
\psfrag{Gamma1}{$G_1$}
\psfrag{Gamma2}{$G_2$}
\includegraphics[height=2in,width=3.5in]{MOY3.eps}
}}
\caption{We show three subgraphs $G$, $G_1$, $G_2$ of colored trivalent graphs. Each edge decorated with a lower case English letter is 1-colored, those decorated with upper case English letters are 2-colored, and that decorated with $\Phi$ is 3-colored.}
\label{MOY3}
\end{figure}
\begin{figure}
\centerline{
{
\psfrag{thing1}{$= q^{1-n}$}
\psfrag{thing2}{$-q^{-n}$}
\psfrag{thing3}{$=q^{n-1}$}
\psfrag{thing4}{$-q^n$}
\psfrag{2}{$2$}
\psfrag{a}{$1$}
\psfrag{b}{$1$}
\psfrag{c}{$1$}
\psfrag{d}{$d$}
\psfrag{e}{$e$}
\psfrag{P}{$2$}
\psfrag{Q}{$2$}
\psfrag{R}{$R$}
\psfrag{Phi}{$3$}
\psfrag{Gamma}{$\Gamma$}
\psfrag{Gamma1}{$\Gamma_1$}
\psfrag{Gamma2}{$\Gamma_2$}
\includegraphics[height=2in,width=2in]{MOYtrick.eps}
}}
\caption{If $\Gamma$ is some colored trivalent graph with the left-hand picture appearing as a subgraph and $\Gamma'$ is the result of replacing the subgraph with the right-hand picture, then clearly $\mathscr{M}(\Gamma) = \mathscr{M}(\Gamma')$.}
\label{MOYtrick}
\end{figure}
We refer to the discussion after Definition \ref{admissdef} for the definition of the moduli space associated to a trivalent graph colored with colors from the set $\{1,2,3\}$.
Suppose that $\Gamma$ is a colored trivalent graph with a subgraph $G$ as indicated in Figure \ref{MOY3}. Form $\Gamma_1$ and $\Gamma_2$ from $\Gamma$ by replacing $G$ with $G_1$ or $G_2$ respectively. Then the move MOY4 states that
\[ P_N(\Gamma) = P_N(\Gamma_1) + P_N(\Gamma_2) {\rm .} \]
\begin{proposition}
\label{corblimey}
For such graphs $\Gamma$, $\Gamma_1$, $\Gamma_2$ as above we have
\[ \chi(\mathscr{M}(\Gamma))= \chi(\mathscr{M}(\Gamma_1)) + \chi(\mathscr{M}(\Gamma_2)) {\rm .} \]
\end{proposition}
At first sight, this proposition does not seem to give a relationship between trivalent graphs colored using only the palette $\{ 1 , 2 \}$. However, using the relationship between moduli spaces shown in Figure \ref{MOYtrick} we get such a result. Murakami, Ohtsuki, and Yamada use an analogous trick in \cite{MOY} to get a relationship between the polynomials $P_N$ of $\{ 1 , 2 \}$-colored graphs.
We note that this proposition is less strong than would be demanded by a proof of Hypothesis \ref{dreamthm}, but the example of Proposition \ref{countereg} shows that Proposition \ref{corblimey} does not admit a lift to the Poincar\'e polynomial in general.
\begin{proof}
By evaluation at the endpoints $a,b,P,Q$ we get algebraic maps from each of $\mathscr{M}(\Gamma)$, $\mathscr{M}(\Gamma_1)$, and $\mathscr{M}(\Gamma_2)$ to $(\mathbb{P}^{N-1})^2 \times \mathbb{G}(2,N)^2$. Let $V$, $V_1$, $V_2$ be the respective preimages of the subvariety $\Delta$ of $(\mathbb{P}^{N-1})^2 \times \mathbb{G}(2,N)^2$ carved out by the three requirements $P=Q$, $a=b$, and $a$ is perpendicular to $P$.
Just as in the proof of Proposition \ref{hisonalwhatsup}, for a small enough open set $U \subset (\mathbb{P}^{N-1})^2 \times \mathbb{G}(2,N)^2$ containing $\Delta$ we have that the respective preimages $\widetilde{V}$, $\widetilde{V_1}$, $\widetilde{V_2}$ of $U$ are each homotopy equivalent to $V$, $V_1$, $V_2$ respectively.
Note that naturally $V_1 = V_2$ but we will retain both indices for notational convenience.
Consider the subgraph $G$ of $\Gamma$. For $a$ not perpendicular to $P$, one can check that there is a unique way in which to choose decorations for the other edges of $G$. It follows also in this case that $P=Q$ and $b = a$.
For $a$ perpendicular to $P$, it follows that $Q$ is a plane in the 3-space spanned by $a$ and $P$ and $b$ is the unique perpendicular line to $Q$ in this 3-space. In this case, when $Q \not= P$ there is a unique choice of decorations for the other edges of $G$, and when $Q = P$ there is a $\mathbb{P}^1$ of choices of decorations for the other edges.
Note that this implies that $V$ has the structure of a $\mathbb{P}^1$-bundle over $V_1 = V_2$ so that
\[ \chi(\widetilde{V}) = \chi(V) = 2\chi(V_1) = \chi(V_1) + \chi(V_2) = \chi(\widetilde{V_1}) + \chi(\widetilde{V_2}) {\rm .} \]
\noindent Hence, using the Mayer-Vietoris sequence, we have
\begin{eqnarray*}
\chi(\mathscr{M}(\Gamma)) &=& \chi(\mathscr{M}(\Gamma) \setminus V) + \chi(\widetilde{V}) - \chi((\mathscr{M}(\Gamma) \setminus V)\cap \widetilde{V}) \\
&=& \chi(\mathscr{M}(\Gamma) \setminus V) + \chi(\widetilde{V_1}) + \chi(\widetilde{V_2}) - \chi(\widetilde{V} \setminus V) {\rm .}
\end{eqnarray*}
Finally note that $\mathscr{M}(\Gamma) \setminus V$ is the disjoint union of $\mathscr{M}(\Gamma_1) \setminus V_1$ and $\mathscr{M}(\Gamma_2) \setminus V_2$, and also $\widetilde{V} \setminus V$ is the disjoint union of $\widetilde{V_1} \setminus V_1$ and $\widetilde{V_2} \setminus V_2$. Thus we have
\begin{eqnarray*}
\chi(\mathscr{M}(\Gamma)) &=& \chi(\mathscr{M}(\Gamma_1) \setminus V_1) + \chi(\widetilde{V_1}) - \chi(\widetilde{V_1} \setminus V_1) \\
&+& \chi(\mathscr{M}(\Gamma_2) \setminus V_2) + \chi(\widetilde{V_2}) - \chi(\widetilde{V_2} \setminus V_2) \\
&=& \chi(\mathscr{M}(\Gamma_1)) + \chi(\mathscr{M}(\Gamma_2)) {\rm .}
\end{eqnarray*}
\end{proof}
\subsection{Proof of main theorem}
\begin{proof}[Proof of Theorem \ref{maintheorem}]
Since for any colored trivalent graph $\Gamma$ the value of the polynomial $P_N(\Gamma) \in \mathbb{Z}[q, q^{-1}]$ is determined by the fact that $P_N$ satisfies the five MOY relations, it is enough to verify that $\chi(\mathscr{M} (\Gamma))$ satisfies the MOY relations after evaluating at $q=1$. But we have verified this in Propositions \ref{P1}, \ref{hisonalwhatsup}, \ref{corblimey} and Corollaries \ref{P2}, \ref{P3}.
\end{proof}
This paper deals with what we have come to refer to as \emph{reconstruction theorems}.
By this we mean a procedure that associates to a theory $T$ (possibly under some hypotheses) a topological group-like object that is a complete bi-interpretation invariant for $T$.
In other words, if $T'$ is bi-interpretable with $T$, then we associate to it the same object (up to an appropriate notion of isomorphism), and conversely, the isomorphism class of this object determines the bi-interpretation class of $T$.
The best-known result of this kind is due to Coquand, and appears in Ahlbrandt \& Ziegler \cite{Ahlbrandt-Ziegler:QuasiFinitelyAxiomatisable}.
It states that if $T$ is an $\aleph_0$-categorical theory (in a countable language), then the topological group $G(T) = \Aut(M)$, where $M$ is the unique countable model, is such an invariant.
This was originally proved for theories in classical (Boolean-valued) logic, and subsequently extended by Kaïchouh and the author \cite{BenYaacov-Kaichouh:Reconstruction} to continuous (real-valued) logic.
In \cite{BenYaacov:ReconstructionGroupoid} we proposed a reconstruction result that also covers some non-$\aleph_0$-categorical theories, using a topological groupoid (rather than a group) as invariant.
The result was presented in two stages, first for classical logic and then for the more general continuous logic.
This was not done for the sake of presentation (do the more familiar case first), but because of a fundamental difference between the two cases.
In classical logic, we have a straightforward construction of a sort of ``codes of models'' (more about this later).
In continuous logic, on the other hand, no such construction exists in general, and we were reduced to assuming that such a sort (satisfying appropriate axioms) existed, and was given to us.
Worse still, we gave an example of a theory for which no such sort existed, and consequently, for which our reconstruction theorem was inapplicable.
In the present paper we seek to remedy this deficiency, proposing a reconstruction theorem that holds for all theories (in a countable language).
This time, we work exclusively in continuous logic, keeping in mind that this contains classical logic as a special case.
In \autoref{sec:Sorts} we provide a few reminders regarding continuous logic in general, and interpretable sorts in particular.
We (re-)define the notions of interpretation and bi-interpretation, in a manner that is particularly appropriate for the use we shall make of them, and that avoids the rather tedious notions of interpretation schemes.
In \autoref{sec:Coding} we discuss various ways in which one sort $E$ can be ``coded'' in another sort $D$, both uniform (e.g., $E$ is interpretable in $D$) and non-uniform (e.g., each $a \in E$ is in the definable closure of some $b \in D$).
We define a \emph{coding sort} $D$ as a sort which codes models.
Every sort is coded in a coding sort in a non-uniform fashion, and therefore in a uniform fashion as well.
In \autoref{sec:Reconstruction} we associate to a coding sort $D$ a topological groupoid $\bG_D(T)$, from which a theory $T_{2D}$, bi-interpretable with $T$, can be recovered.
In particular, $\bG_D(T)$ determines the bi-interpretation class of $T$.
If, in addition, $D$ only depends on the bi-interpretation class of $T$, then so does $\bG_D(T)$, in which case it is a complete bi-interpretation invariant.
We point out, rather briefly, how previous reconstruction theorems fit in this general setting.
In \autoref{sec:StarSpace} and \autoref{sec:StarSort} we define \emph{star spaces} and \emph{star sorts}.
These, by their very nature, require us to work in continuous (rather than classical) logic.
In particular, we define a notion of a \emph{universal star sort}, and show that if it exists, then it is unique up to definable bijection, and only depends on the bi-interpretation class of $T$.
In \autoref{sec:Witnesses} we use the star sort formalism to give a construction that is analogous to, though not a direct generalisation of, the construction of the coding sort for classical theories in \cite{BenYaacov:ReconstructionGroupoid}.
We then prove that the resulting sort is a universal star sort, so one always exists.
Moreover, the construction is independent of the theory: we simply construct, for any countable language $\cL$, a star sort $D^*$ that is universal in any $\cL$-theory, complete or incomplete.
We conclude in \autoref{sec:UniversalStarSort}, showing that the universal star sort must be a coding sort, whence our most general reconstruction theorem: in a countable language, the groupoid $\bG_{D^*}(T)$ is a complete bi-interpretation invariant for $T$.
We also show that the type-space of the sort $D^*$, relative to any complete theory $T$, is the Lelek fan $L$.
Finally, in case $T$ does fall into one of the cases covered by previous results, we show that our last result can be viewed as some kind of generalisation.
More precisely, using the Lelek fan, we can recover the coding sort $D^*$, and therefore the corresponding groupoid $\bG_{D^*}(T)$, from those given by the earlier results.
\section{Sorts and interpretations}
\label{sec:Sorts}
As said in the introduction, we are going to work exclusively in continuous first order logic, and assume that the reader is familiar with it.
For a general exposition, see \cite{BenYaacov-Usvyatsov:CFO,BenYaacov-Berenstein-Henson-Usvyatsov:NewtonMS}.
We allow formulas to take truth values in arbitrary compact subsets of $\bR$, so connectives are arbitrary continuous functions from $\bR^n$ to $\bR$.
For a countable family of connectives, it will suffice to take all rational constants, addition and multiplication, to which we add the absolute value operation.
Closing these under composition yields a (countable) family of functions that is dense among all continuous functions on each compact subset of $\bR^n$.
\begin{ntn}
\label{ntn:MaxMinDotMinus}
Using the absolute value operation we may define maximum and minimum directly (i.e., without passing to a limit).
We shall use infix notation $\vee$ and $\wedge$ for those.
We shall also write $t \dotminus s$ for the \emph{truncated subtraction} $(t-s) \vee 0$.
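Concretely, one may take
\begin{gather*}
t \vee s = \frac{t + s + |t - s|}{2},
\qquad
t \wedge s = \frac{t + s - |t - s|}{2},
\qquad
t \dotminus s = \frac{(t - s) + |t - s|}{2}.
\end{gather*}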
\end{ntn}
We allow the language to be many-sorted.
Some of the time we also require the language to be countable, which means in particular that the set of sorts is countable, although this will not be a requirement for the present section.
We are going to talk quite a bit about sorts and interpretations, so let us begin with a few reminders.
By a \emph{sort} we mean an interpretable sort in the sense of continuous logic, as discussed, for example, in \cite{BenYaacov-Kaichouh:Reconstruction,BenYaacov:ReconstructionGroupoid}.
Sorts are obtained by closing the family of basic sorts (namely, sorts named in the language) by
\begin{itemize}
\item adding the constant sort $\{0,1\}$ (so it is always implicitly interpretable),
\item countable product,
\item quotient by a definable pseudo-distance (in a model that is not saturated, this may also require a passage to the completion), and
\item non-empty definable subset.
\end{itemize}
We follow the convention that natural numbers are coded by sets $n = \{0,\ldots,n-1\} \in \bN$, so $\{0,1\}$ may sometimes be denoted by $2$ (this is especially true of its powers: the Cantor space is $2^\bN$).
Throughout, by \emph{definable} we mean definable by a formula, without parameters (unless parameters are given explicitly).
Any function $\{0,1\} \rightarrow \bR$ is a formula on the sort $\{0,1\}$.
Formulas on a finite product of sorts are constructed in the usual way, using function and predicate symbols, connectives and quantifiers, and closing the lot under uniform limits.
In particular, if $\varphi_i(x)$ are formulas on a sort $D$ for $i < 2^n$, then $\varphi(i,x) = \varphi_i(x)$ is a formula on $2^n \times D$.
Formulas on an infinite product of sorts consist of all formulas on finite sub-products (extended to the whole product through the addition of dummy variables), as well as all uniform limits of such (where the sub-products through which they factor may vary).
If $\overline{d}$ is a definable pseudo-distance on a sort $D$ (defined by a formula on $D \times D$), then formulas on the quotient $(D,\overline{d})$ are formulas on $D$ that are uniformly continuous with respect to $\overline{d}$.
Similarly, for formulas on a product of several quotient sorts.
Finally, we recall that a definable subset of a sort $D$ is a subset $E \subseteq D$, the distance to which is definable (this is significantly more involved than the notion of a definable subset in classical logic).
Equivalently, $E$ is definable if and only if for every formula $\varphi(x,y)$, where $x$ is a variable in $D$ and $y$ is a tuple of variables in arbitrary sorts, the predicate $\sup_{x \in E} \varphi(x,y)$ is definable by a formula $\psi(y)$.
Formulas on a product of definable subsets of sorts are restrictions of formulas on the corresponding product of ambient sorts.
Notice that every compact metric space is a quotient space of $2^\bN$ by a continuous pseudo-distance, and therefore a sort, on which the formulas are the continuous functions.
Conversely, we could have chosen any non-trivial compact metric space as a basic constant sort in place of $\{0,1\}$ (the other obvious candidate being $[0,1]$), and realise $\{0,1\}$ as any two-point set therein.
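For instance, the interval $[0,1]$ is the quotient of $2^\bN$ by the continuous pseudo-distance
\begin{gather*}
\overline{d}(s,t) = \Bigl| \sum_n 2^{-n-1} s_n - \sum_n 2^{-n-1} t_n \Bigr|,
\end{gather*}
i.e., by identifying a binary sequence with the real number it represents, and the formulas on the resulting sort are precisely the continuous functions $[0,1] \rightarrow \bR$.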
\begin{rmk}
\label{rmk:PseudoDistanceFromFormula}
An obvious, yet crucial remark, is that if $\varphi(x,y)$ is an arbitrary formula on $E \times D$, then
\begin{gather*}
d_\varphi(y,y') = \sup_{x \in E} \, |\varphi(x,y) - \varphi(x,y')|
\end{gather*}
defines a pseudo-distance on $D$.
In addition, if $D = E$, and $\varphi$ happens to define a pseudo-distance on $D$, then it agrees with $d_\varphi$.
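For instance, the triangle inequality for $d_\varphi$ is inherited directly from that for the absolute value: for any $y,y',y''$ in $D$,
\begin{gather*}
|\varphi(x,y) - \varphi(x,y'')| \leq |\varphi(x,y) - \varphi(x,y')| + |\varphi(x,y') - \varphi(x,y'')|,
\end{gather*}
and taking the supremum over $x \in E$ yields $d_\varphi(y,y'') \leq d_\varphi(y,y') + d_\varphi(y',y'')$.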
This has numerous useful consequences, let us state two of them explicitly.
First of all, one may be bothered by the fact that a formula $\varphi(x,y)$ defining a pseudo-distance on a sort $D$ may depend on the structure(s) under consideration.
However, we may restrict the ``quotient by a pseudo-distance'' step to pseudo-distances of the form $d_\varphi$, that always define pseudo-distances, without any loss of generality.
A second consequence is that if $E \subseteq D$ are two sorts, then every definable pseudo-distance $d$ on $E$ extends to one on $D$.
Indeed, extend it first in an arbitrary fashion to a formula $\varphi(x,y)$ on $E \times D$.
Then $d_\varphi$ is a pseudo-distance on $D$, and it agrees with $d$ on $E$.
\end{rmk}
\begin{rmk}
\label{rmk:DependenceOnTheory}
A formula $\psi(x)$ defining the distance to a subset is another property that depends on the structure under consideration, or on its theory.
However, we do not know a general construction of definable sets from arbitrary formulas, analogous to that of \autoref{rmk:PseudoDistanceFromFormula}, and have good reason to believe that none such exists.
In other words, as far as we know, the set of interpretable sorts depends in a non-trivial way on the theory.
This makes it all the more noteworthy that our construction of the universal star sort as $D^*_\Phi$ can be carried out in a manner that depends only on the language, and not on the theory.
\end{rmk}
A \emph{definable map} between two sorts $\sigma\colon D \rightarrow E$ is one whose graph is the zero-set of some formula.
Composing a formula with a definable map yields another formula.
A special case of such a composition is the formula $d\bigl( \sigma(x), y\bigr)$, on the product $D \times E$, whose zero-set is indeed the graph of $\sigma$.
Every formula is uniformly continuous in its arguments, and $d\bigl(\sigma(x), y \bigr)$ is no exception.
It follows that every definable map $\sigma\colon D \rightarrow E$ is uniformly continuous.
Two sorts that admit a definable bijection are, for most intents and purposes (in particular, for those of the present paper) one and the same.
Moreover, every sort is in definable bijection with one obtained from the basic sorts by applying each of the operations once, in the given order, so we may pretend that every sort is indeed of this form.
Similarly, we may say that a sort $D$ (which may be a basic sort, or one that has already been obtained through some interpretation procedure) is \emph{interpretable} in a family of sorts $(E_i)$ if we can construct from this family $(E_i)$ a sort $D'$ that admits a definable bijection with $D$.
Consider two languages $\cL \subseteq \cL'$, where $\cL'$ is allowed to add not only symbols, but also sorts.
If $M'$ is an $\cL'$-structure, and $M$ is the $\cL$-structure obtained by dropping the sorts and symbols not present in $\cL$, then $M$ is \emph{the $\cL$-reduct} of $M'$ and $M'$ is \emph{an $\cL'$-expansion} of $M$.
If $T'$ is an $\cL'$-theory and $T$ is the collection of $\cL$-sentences in $T'$, then $T$ is also the theory of all $\cL$-reducts of models of $T'$ (notice, however, that an arbitrary model of $T$ need only admit an elementary extension that is a reduct of a model of $T'$).
In this situation we say that $T$ is \emph{the $\cL$-reduct} of $T'$ and that $T'$ is \emph{an $\cL'$-expansion} of $T$.
One special case of an expansion is a \emph{definitional expansion}, in which $\cL$ and $\cL'$ have the same sorts, and each new symbol of $\cL'$ admits an $\cL$-definition in $T'$.
In this case, $T'$ is entirely determined by $T$ together with these definitions.
A more general case is that of an \emph{interpretational expansion} of $T$, where $T'$ identifies each new sort of $\cL'$ with an interpretable sort of $T$, and gives $\cL$-definitions to all new symbols in $\cL'$ (for this to work we also require $\cL'$ to contain, in particular, those new symbols that allow $T'$ to identify the new sorts with the corresponding interpretable ones).
Again, $T$, together with the list of interpretations of the new sorts and definitions of the new symbols, determine $T'$.
Moreover, unlike the general situation described in the previous paragraph, here every model of $T$ expands to a model of $T'$.
\begin{dfn}
\label{dfn:Interpretation}
Let $T$ and $T'$ be two theories, say in disjoint languages.
We say that $T'$ is \emph{interpretable} in $T$ if $T'$ is a reduct of an interpretational expansion of $T$.
The two theories are \emph{bi-interpretable} if they admit a common interpretational expansion (which is stronger than just each being interpretable in the other).
\end{dfn}
A theory has the same sorts (up to a natural identification) as its interpretational expansions.
Therefore, somewhat informally, we may say that two theories are bi-interpretable if and only if they have the same sorts.
Let us consider a few more possible constructions of sorts that will become useful at later stages, and show that they can be reduced to the basic construction steps that we allow.
\begin{lem}
\label{lem:SortInverseLimit}
Let
\begin{gather*}
D_0 \stackrel{\pi_0}\twoheadleftarrow D_1 \stackrel{\pi_1}\twoheadleftarrow \cdots
\end{gather*}
be an inverse system of sorts with surjective definable maps $\pi_n\colon D_{n+1} \twoheadrightarrow D_n$.
Then the inverse limit $D = \varprojlim D_n \subseteq \prod D_n$ is again a sort, which we may equip with the distance
\begin{gather}
\label{eq:SortInverseLimitDistance}
d(x,y) = \sum_n \, \Bigl(2^{-n} \wedge d(x_n, y_n)\Bigr)
\end{gather}
(or with the restriction of any other definable distance on $\prod D_n$).
\end{lem}
\begin{proof}
Indeed, $D$ is the zero-set in $\prod D_n$ of the formula
\begin{gather*}
\varphi(x) = \sum_n \, \Bigl(2^{-n} \wedge d\bigl( x_n, \pi_n(x_{n+1}) \bigr)\Bigr).
\end{gather*}
Let $\varepsilon > 0$, and choose $N \in \bN$ large enough depending on $\varepsilon$, and $\delta > 0$ small enough depending on both.
Let $a \in \prod D_n$, and assume that $\varphi(a) < \delta$.
Since the maps are surjective, there exists $b \in D$ such that $b_N = a_N$.
This determines $b_n$ for all $n \leq N$, and having chosen $\delta$ small enough, we have $d(a_n,b_n)$ as small as desired for all $n \leq N$.
Having chosen $N$ large enough, this yields $d(a,D) \leq d(a,b) < \varepsilon$.
In other words, we have found a formula $\varphi(x)$ that vanishes on $D$, such that $\varphi(x) < \delta = \delta(\varepsilon)$ implies $d(x,D) < \varepsilon$.
This implies that $D$ is a definable subset (see \cite{BenYaacov-Berenstein-Henson-Usvyatsov:NewtonMS}).
\end{proof}
\begin{prp}
\label{prp:IncreasingLimit}
Assume that $(D_n)$ is a sequence of sorts, equipped with isometric definable embeddings $D_n \hookrightarrow D_{n+1}$.
For convenience, let us pretend these embeddings are the identity map, so $D_0 \subseteq D_1 \subseteq \cdots \subseteq D_n \subseteq \cdots$ is a chain.
Assume moreover that the sequence is Cauchy in the Hausdorff distance.
In other words, assume that if $n$ is large enough and $n \leq m$, then
\begin{gather*}
d^H(D_n,D_m) = \sup_{x \in D_m} \, \inf_{y \in D_n} \, d(x,y)
\end{gather*}
is as small as desired.
Then the completion $E = \widehat{\bigcup D_k}$ is a sort (with definable isometric embedding $D_n \subseteq E$).
If $\varphi(x,y)$ is a formula on $E \times F$, for some sort (or product of sorts) $F$, and $\varphi_n$ is its restriction to $D_n \times F$, then $(\varphi_n)$ is an equicontinuous compatible family (by compatible, we mean that each $\varphi_n$ is the restriction of $\varphi_{n+1}$).
Conversely, every such family arises from a unique formula on $E \times F$.
\end{prp}
\begin{proof}
Assume first that we have a large ambient sort $E_1$ and compatible isometric embeddings $D_n \subseteq E_1$.
Since each $D_n$ is a sort, the distance $d(x,D_n) = \inf_{y \in D_n} \, d(x,y)$ is definable in $E_1$.
By hypothesis, these formulas converge uniformly, and their limit is $d(x,E)$.
Then $E$ is a definable subset of $E_1$, and therefore a sort.
In the general case, we are going to construct $E_1$ as a quotient of $E_0 = \prod D_n$, whose members we may view as sequences in $E$.
We may freely pass to a sub-sequence, and assume that $d^H(D_n,D_{n+1}) < 2^{-n-1}$.
Say that $a \in E_0$ \emph{converges quickly} if $d(a_n,a_m) \leq 2^{-n} + 2^{-m}$, or equivalently, if $d(a_n,b) \leq 2^{-n}$ where $a_n \rightarrow b$ in $E$.
By our hypothesis regarding the rate of convergence of $(D_n)$, every $b \in E$ is the limit of a quickly converging sequence.
Recall the \emph{forced limit} construction from \cite{BenYaacov-Usvyatsov:CFO}.
Formally, it consists of a continuous function $\limF\colon \bR^\bN \rightarrow \bR$, which is monotone, $1$-Lipschitz in the supremum norm on $\bR^\bN$, and most importantly, if $t_n \rightarrow s$ fast enough, say $|t_n - s| \leq 2^{-n}$, then $\limF (t_n : n \in \bN) = s$.
We render the expression $\limF (t_n : n \in \bN)$ as $\limF_{n \rightarrow \infty} t_n$, considering it a limit construct.
Since $\limF$ is continuous, we may apply it to formulas.
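One concrete realisation (a sketch, possibly differing in inessential details from the construction of the cited paper) proceeds by successive truncation: let $s_0 = t_0$ and
\begin{gather*}
s_{n+1} = \bigl( s_n - 2^{-n} \bigr) \vee \Bigl( t_{n+1} \wedge \bigl( s_n + 2^{-n} \bigr) \Bigr),
\qquad
\limF_{n \rightarrow \infty} t_n = \lim_{n \rightarrow \infty} s_n.
\end{gather*}
Since $|s_{n+1} - s_n| \leq 2^{-n}$, the limit always exists and is a uniform limit of the $s_n$, so it is continuous; each $s_n$ is a monotone, $1$-Lipschitz (for the supremum norm) function of $(t_0, \ldots, t_n)$, and these properties pass to the limit; and if $|t_n - s| \leq 2^{-n}$ for all $n$, then $|s_n - s| \leq 2^{-n}$ by induction, so the limit is indeed $s$.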
Let us fix $n$, and define on $D_n \times E_0$ a formula
\begin{gather*}
\rho_n(x,y) = \limF_{m \rightarrow \infty} \, d(x,y_m).
\end{gather*}
If $b \in E_0$ converges quickly to $c \in E$, then $\rho_n(a,b) = d(a, c)$ for every $a \in D_n$.
When $b \in E_0$ does not converge quickly (or possibly, at all), the value $\rho_n(a,b)$ is well defined, but potentially meaningless.
If $n \leq k$, then $\rho_n$ is the restriction of $\rho_k$, so we may just denote all of them by $\rho$.
As in \autoref{rmk:PseudoDistanceFromFormula}, we define pseudo-distances on $E_0$ by
\begin{gather*}
d_{\rho_n}(y,y') = \sup_{x \in D_n} \, |\rho(x,y) - \rho(x,y')|.
\end{gather*}
The sequence of formulas $(d_{\rho_n})$ is increasing.
Moreover, if $x,y \in D_n$ and $z \in E_0$, then
\begin{gather*}
|\rho(x,z) - \rho(y,z)| \leq \sup_m \, |d(x,z_m) - d(y,z_m)| \leq d(x,y),
\end{gather*}
so $d_{\rho_n} \leq d_{\rho_{n+1}} \leq d_{\rho_n} + 2^{-n}$.
Therefore the sequence $(d_{\rho_n})$ converges uniformly to a formula $d_\rho$ on $E_0 \times E_0$, which must define a pseudo-distance as well.
Let $E_1 = (E_0,d_\rho)$ be the quotient sort.
By definition, each $\rho_n(x,y)$ is $1$-Lipschitz in $y$ with respect to $d_\rho$, so it may be viewed as a formula on $D_n \times E_1$.
It is also $1$-Lipschitz in $x$ with respect to the distance on $D_n$.
Consider $a \in D_k$ and $b,c \in E_1$, and assume that $b_n \rightarrow a$ quickly (but $c$ may be quite arbitrary).
We have already observed that $\rho(x,b) = d(x,a)$ for every $x \in D_n$, for every $n$.
Then, for every $n \geq k$:
\begin{gather*}
d_{\rho_n}(b,c)
= \sup_{x \in D_n} \, |\rho(x,b) - \rho(x,c)|
= \sup_{x \in D_n} \, |d(x,a) - \rho(x,c)|
= \rho(a,c),
\end{gather*}
so $d_\rho(b,c) = \rho(a,c) = \rho_k(a,c)$.
It follows that the class of $b$ in $E_1$ only depends on $a$.
Moreover, the map $\sigma_k\colon D_k \rightarrow E_1$, that sends $a$ to the class of any $b \in E_0$ that converges quickly to $a$, is definable, by $d_\rho\bigl( \sigma_k(x), y\bigr) = \rho_k(x,y)$.
If $b,b'\in E_0$ both converge quickly to $a,a' \in D_k$, respectively, then the same reasoning as above yields $d_{\rho_n}(b,b') = d(a,a')$ for every $n \geq k$, and therefore $d_\rho(b,b') = d(a,a')$.
Therefore, $\sigma_k\colon D_k \rightarrow E_1$ is an isometric embedding for each $k$.
Since the $\rho_k$ are restrictions of one another, these embeddings are compatible, and we have successfully reduced to the special case treated in the beginning of the proof.
Regarding formulas, the only thing we need to prove is that any compatible equicontinuous family of formulas $\varphi_n(x,y)$ on $D_n \times F$, is the restriction of a formula on $E \times F$.
Notice that our hypotheses imply that the formulas $\varphi_n$ are uniformly bounded, say $|\varphi_n| \leq M$.
We may now construct an inverse modulus of continuity, namely a continuous function $\Delta^{-1}\colon (0,\infty) \rightarrow (0,\infty)$ such that $|\varphi_n(x,y) - \varphi_n(x',y)| \leq \Delta^{-1} \circ d(x,x')$ (see \cite{BenYaacov-Usvyatsov:CFO}; since the family is equicontinuous, we can do this simultaneously for all $\varphi_n$).
Define on $E \times F$ formulas
\begin{gather*}
\psi_n(x,y) = \inf_{x' \in D_n} \bigl( \varphi_n(x',y) + \Delta^{-1} \circ d(x,x') \bigr).
\end{gather*}
Then $\psi_n$ agrees with $\varphi_n$ on $D_n \times F$, and equicontinuity together with the convergence of $(D_n)$ in $d^H$ implies that $(\psi_n)$ converge uniformly to a formula $\psi(x,y)$ on $E \times F$, that must extend each $\varphi_n$, as claimed.
\end{proof}
It was pointed out by James Hanson that our \autoref{prp:IncreasingLimit} already appeared in his Ph.D.\ thesis \cite[Proposition~3.4.8]{Hanson:PhD}.
Similarly, in \cite[Remark~3.5.7]{Hanson:PhD} he asserts (without proof) something that, to the extent that we understand it (terminology and notation being somewhat non-standard), is related to our \autoref{prp:NonUniformCoding}.
\section{Coding sorts in other sorts}
\label{sec:Coding}
If $a$ and $b$ are two elements in sorts $E$ and $D$ in some structure (model of $T$), then $a$ is \emph{definable} from $b$, or lies in the \emph{definable closure} of $b$, in symbols $a \in \dcl(b)$, if $a$ is the unique realisation of $\tp(a/b)$ in that structure, as well as in any elementary extension.
This implies, and indeed, is equivalent to, the predicate $d(x,a)$ being definable with $b$ as parameter, say by a formula $\varphi(x,b)$ (see \cite{BenYaacov:DefinabilityOfGroups}).
Let us consider two sorts $D$ and $E$.
In what sense(s) can $E$ be coded in $D$?
A fairly \emph{uniform} fashion for this to happen is if $E$ is interpretable in $D$, i.e., if it embeds definably in a quotient of $D^\bN$, or, at the very worst, $D^\bN \times 2^\bN$.
This would imply a \emph{non-uniform} version: for every $a \in E$ there exists $b \in D^\bN$ such that $a \in \dcl(b)$.
In fact, the converse implication holds as well -- this follows fairly easily from \autoref{prp:NonUniformCoding} below, together with the presentation of $\widehat{\bigcup D_n}$ as a subset of a quotient of $\prod D_n$.
In any case, we want to explore a stronger condition of ``non-uniform coding'', by singletons in $D$.
\begin{prp}
\label{prp:NonUniformCoding}
Let $E$ and $D$ be sorts of a theory $T$.
Assume that for every $a \in E$ (in a model of $T$) there exists $b \in D$ (possibly in an elementary extension) such that $a \in \dcl(b)$.
Then $E$ can be embedded in a limit sort of the form $\widehat{\bigcup D_n}$, as per \autoref{prp:IncreasingLimit}, where each $D_n$ is a quotient of $D \times 2^\bN$.
\end{prp}
\begin{proof}
Consider a type $p \in \tS_E(T)$, so $p = \tp(a)$ for some $a \in E$ in a model of $T$.
We may assume that $b \in D$ in the same model is such that $a \in \dcl(b)$, as witnessed by $d(x,a) = \varphi_p(x,b)$.
Let (with $\varepsilon > 0$)
\begin{gather*}
\psi_p(x,y) = \sup_{x'} \, |d(x,x') - \varphi_p(x',y)|,
\\
\chi_{p,\varepsilon}(y) = 1 \dotminus \bigl( \inf_x \, \psi_p(x,y)/\varepsilon \dotminus 1 \bigr).
\end{gather*}
The formula $\psi_p(x,y)$ measures the extent to which $\varphi_p(x',y)$ fails to give us the distance to $x$.
The formula $\chi_{p,\varepsilon}(y)$ tells us whether $x' \mapsto \varphi_p(x',y)$ is close to being the distance to \emph{some} $x \in E$: $\chi_{p,\varepsilon}(y) = 1$ if $y$ codes some $x$ quite well (error less than $\varepsilon$), vanishes if $y$ does not code anything well enough (error at least $2\varepsilon$), and in all cases its value lies in $[0,1]$.
Of course, $\psi_p(a,b) = 0$, so $\inf_y \psi_p(x,y) < \varepsilon$ defines an open neighbourhood of $p$.
Let us fix $\varepsilon > 0$ and let $p$ vary.
Then the conditions $\inf_y \psi_p(x,y) < \varepsilon$ define an open covering of $\tS_E(T)$.
By compactness, there exists a family $(p_i : i < n)$ such that for every $q \in \tS_E(T)$, $\inf_y \psi_{p_i}(q,y) < \varepsilon$ for at least one $i < n$.
Repeating this, with smaller and smaller $\varepsilon$, we may construct a sequence of types $(p_n)$, as well as $\varepsilon_n \rightarrow 0$ such that for every $n_0$, the open conditions $\inf_y \psi_{p_n}(x,y) < \varepsilon_n$ for $n \geq n_0$ cover $\tS_E(T)$.
Let $n \in \bN$.
We may view $n = \{0, \ldots, n-1\}$ as a quotient of $2^\bN$, and similarly for $[0,1]$.
Therefore, $D \times n \times [0,1]$ is a quotient of $D \times 2^\bN$.
For $(x,y,k,t) \in E \times D \times n \times [0,1]$, define
\begin{gather*}
\rho_n(x,y,k,t) = t \cdot \chi_{p_k,\varepsilon_k}(y) \cdot \varphi_{p_k}(x,y).
\end{gather*}
This is indeed a formula, giving rise to a pseudo-distance on $D \times n \times [0,1]$:
\begin{gather*}
d_{\rho_n}(y,k,t,y',k',t') = \sup_{x \in E} \, | \rho_n(x,y,k,t) - \rho_n(x,y',k',t')|.
\end{gather*}
In fact, we may drop $n$ and just write $\rho$ and $d_\rho$: the only role played by $n$ is being greater than $k$.
Let $D_n$ be the quotient $\bigl( D \times n \times [0,1], d_\rho \bigr)$ (which is, in turn, a quotient of $D \times 2^\bN$).
The inclusion $D \times n \times [0,1] \subseteq D \times (n+1) \times [0,1]$ induces an isometric embedding $D_n \hookrightarrow D_{n+1}$.
Therefore, in order to show that the hypotheses of \autoref{prp:IncreasingLimit} are satisfied, all we need to show is that for $n \leq m$ large enough, every member of $D_m$ is close to some member of $D_n$.
Let $\varepsilon > 0$ be given.
Find $n_0$ such that $\varepsilon_n < \varepsilon$ for $n \geq n_0$.
Then, by compactness, find $n_1 > n_0$ such that the open conditions $\inf_y \psi_{p_n}(x,y) < \varepsilon_n$, for $n_0 \leq n < n_1$, cover $\tS_E(T)$.
Assume now that $n_1 \leq m$, and let $[b,k,t]$ be some class in $D_m$.
If $k < n_1$, then $[b,k,t] \in D_{n_1}$.
If $\inf_x \psi_{p_k}(x,b) \geq 2\varepsilon_k$, then $\rho(x,b,k,t) = 0$ regardless of $x$, so $[b,k,t] = [b,0,0] \in D_{n_1}$.
We may therefore assume that $n_1 \leq k < m$ and there exists $a \in E$ such that $\psi_{p_k}(a,b) < 2\varepsilon_k$.
By our hypothesis regarding the covering of $\tS_E(T)$, there exists $n_0 \leq \ell < n_1$ such that $\inf_y \psi_{p_\ell}(a,y) < \varepsilon_\ell$.
Let $c \in D$ be such that $\psi_{p_\ell}(a,c) < \varepsilon_\ell$, and let $s = t \cdot \chi_{p_k,\varepsilon_k}(b)$.
Then
\begin{gather*}
\inf_x \, \psi_{p_\ell}(x,c) < \varepsilon_\ell,
\qquad
\chi_{p_\ell,\varepsilon_\ell}(c) = 1,
\qquad
\rho(x,c,\ell,s) = s \cdot \varphi_{p_\ell}(x,c),
\end{gather*}
so
\begin{align*}
d_\rho(b,k,t,c,\ell,s)
& = s \cdot \sup_x \, \bigl| \varphi_{p_k}(x,b) - \varphi_{p_\ell}(x,c) \bigr|
\\
& \leq \sup_x \, \bigl| \varphi_{p_k}(x,b) - d(x,a) \bigr| + \sup_x \, \bigl| d(x,a) - \varphi_{p_\ell}(x,c) \bigr|
\\
& = \psi_{p_k}(a,b) + \psi_{p_\ell}(a,c) < 2\varepsilon_k + \varepsilon_\ell < 3\varepsilon.
\end{align*}
Then $[c,\ell,s] \in D_{n_1}$ is close enough to $[b,k,t]$.
By \autoref{prp:IncreasingLimit}, a limit sort $F = \widehat{\bigcup D_n}$ exists.
Now let us embed $E \hookrightarrow F$.
We have already constructed a family $(\rho_n)$ of formulas on $E \times D_n$, let us write them as $\rho_n(x,z)$.
Each is $1$-Lipschitz in $z$ by definition of the distance on $D_n$, and they are compatible, so they extend to a formula $\rho(x,z)$ on $E \times F$.
Consider $a \in E$, and let $\varepsilon > 0$.
As above, there exists $\ell$ such that $\varepsilon_\ell < \varepsilon$, and $c \in D$ such that $\psi_{p_\ell}(a,c) < \varepsilon_\ell$.
Let $a' = [c,\ell,1] \in D_{\ell+1} \subseteq F$.
Again, as above, $\chi_{p_\ell,\varepsilon_\ell}(c) = 1$, so $\rho(x,a') = \varphi_{p_\ell}(x,c)$, and
\begin{gather*}
\sup_x \, |d(x,a) - \rho(x,a')|
= \sup_x \, |d(x,a) - \varphi_{p_\ell}(x,c)|
= \psi_{p_\ell}(a,c) < \varepsilon_\ell < \varepsilon.
\end{gather*}
Doing this with $\varepsilon \rightarrow 0$ we obtain a sequence $(a_n)$ in $F$ such that $\rho(x,a_n)$ converges uniformly to $d(x,a)$.
By definition of the distance on $F$ as $d_\rho$, this sequence is Cauchy, with limit $\tilde{a} \in F$, say, and $\rho(x,\tilde{a}) = d(x,a)$.
In particular, for $z \in F$,
\begin{gather*}
d(z,\tilde{a}) = \sup_x | \rho(x,z) - \rho(x,\tilde{a})| = \sup_x \, |\rho(x,z) - d(x,a)|,
\end{gather*}
so $a \mapsto \tilde{a}$ is definable.
By the same reasoning, if $a,a' \in E$, then
\begin{gather*}
d(\tilde{a}, \tilde{a}') = \sup_x | \rho(x,\tilde{a}) - \rho(x,\tilde{a}')| = \sup_x \, |d(x,a) - d(x,a')| = d(a,a'),
\end{gather*}
so the embedding is isometric, completing the proof.
\end{proof}
\begin{rmk}
\label{rmk:NonUniformCoding}
A closer inspection of the proof can yield a necessary and sufficient condition (but we shall not use this):
A sort $E$ can be embedded in a limit sort of the form $\widehat{\bigcup D_n}$, where each $D_n$ is a quotient of $D \times 2^\bN$, if and only if, for every $a \in E$ and $\varepsilon > 0$, there exists $b \in D$ and a formula $\varphi(x,b)$ that approximates $d(x,a)$ with error at most $\varepsilon$.
\end{rmk}
In \autoref{prp:NonUniformCoding}, we cannot replace $D \times 2^\bN$ with just $D$ (if $D$ is a singleton, then any increasing union of quotients of $D$ is a singleton, and yet $E = \{0,1\}$ satisfies the hypothesis of \autoref{prp:NonUniformCoding}).
Instead, let us prove that this does not change much, in the sense that formulas on $D \times 2^\bN$ or on just $D$ are almost the same thing.
\begin{lem}
\label{lem:ConstantSortFormulas}
Let $D$ and $E$ be sorts, and let $\varphi(x,t,y)$ be a formula on $D \times 2^\bN \times E$.
Then $\varphi$ can be expressed as a uniform limit of continuous combinations of formulas on $D \times E$ and on $2^\bN$ separately (where we recall that formulas on $2^\bN$ are just continuous functions $2^\bN \rightarrow \bR$).
\end{lem}
\begin{proof}
For $n \in \bN$ and $k \in 2^n$, let $\delta_{n,k}(t) = 1$ if $t$ extends $k$, and $0$ otherwise.
Let also $\tilde{k} \in 2^\bN$ be the extension of $k$ by zeros, and $\varphi_{n,k}(x,y) = \varphi(x,\tilde{k},y)$.
Then $\varphi_{n,k}$ is a formula on $D \times E$ and $\delta_{n,k}$ is a formula on $2^\bN$, so we may define a formula
\begin{gather*}
\varphi_n(x,t,y) = \sum_{k \in \{0,1\}^n} \delta_{n,k}(t) \varphi_{n,k}(x,y).
\end{gather*}
Since $\varphi(x,t,y)$ is uniformly continuous in $t$, $\varphi_n \rightarrow \varphi$ uniformly.
\end{proof}
\begin{dfn}
\label{dfn:CodingSort}
Let $T$ be a theory, $D$ a sort, and $D^0 \subseteq D$ a definable subset (or even type-definable, namely, the zero-set of a formula).
We say that $D$ is a \emph{coding sort}, with \emph{exceptional set} $D^0$, if the following holds:
\begin{enumerate}
\item
\label{item:CodingSortModel}
\emph{Coding models}: if $M \vDash T$ and $a \in D(M) \setminus D^0(M)$, then there exists $N \preceq M$ such that $\dcl(a) = \dcl(N)$.
We then say that $a$ \emph{codes} $N$.
\item
\label{item:CodingSortDensity}
\emph{Density}: if $M \vDash T$ is separable, then the set of $a \in D(M) \setminus D^0(M)$ that code $M$ is dense in $D(M)$.
\end{enumerate}
We may denote a coding sort by $D$ alone, considering $D^0$ as implicitly given together with $D$.
\end{dfn}
The need for an exceptional set will arise at a later stage -- for the time being, we are simply going to ensure that its presence does not cause any trouble.
\begin{dfn}
\label{dfn:LT2D}
Let $T$ be a theory, say in a language $\cL$, and let $D$ be a coding sort for $T$.
We define a single-sorted language $\cL_{2D}$ to consist of a binary predicate symbol for each formula on $D \times D$ (possibly restricting this to a dense family of such formulas).
We define $T_{2D}$ as the $\cL_{2D}$-theory of $D$ -- namely, the theory of all $D(M)$, viewed naturally as an $\cL_{2D}$-structure, where $M$ varies over models of $T$.
\end{dfn}
Clearly, $T_{2D}$ is interpretable from $T$.
The $2$ is there to remind us that only binary predicates on $D$ are named in the language.
Our aim, in the end, is to recover from a groupoid the theory of some coding sort $D$, and show that it is bi-interpretable with $T$.
In particular we need to recover the definable predicates on $D$ from the groupoid.
In \cite{BenYaacov:ReconstructionGroupoid} we managed to recover predicates of all arities, at the price of some additional work.
In the present paper we choose to follow a different path, recovering only binary predicates (i.e., only $T_{2D}$), and instead show that these suffice.
\begin{prp}
\label{prp:CodingSortBiInterpretation}
Let $T$ be a theory, say in a language $\cL$, and let $D$ be a coding sort for $T$.
Then $T_{2D}$ is bi-interpretable with $T$.
\end{prp}
\begin{proof}
Consider $T'$, obtained from $T$ by adjoining $D$ as a new sort, and naming the full induced structure.
It is, by definition, an interpretational expansion of $T$, and it will suffice to show that it is also an interpretational expansion of $T_{2D}$.
By \autoref{lem:ConstantSortFormulas}, every formula on $\bigl( D \times 2^\bN \bigr) \times \bigl( D \times 2^\bN \bigr)$ is definable in $T_{2D}$.
In particular, every quotient of $D \times 2^\bN$ is interpretable in $T_{2D}$, as is every embedding of one such quotient in another.
Therefore, if $(D_n)$ is an increasing chain of quotients of $D \times 2^\bN$, that converges in the sense of \autoref{prp:IncreasingLimit}, then $E = \widehat{\bigcup D_n}$ is interpretable in $T_{2D}$.
Consider now a sort $E$ of $T$.
Every member of $E$ belongs to a separable model of $T$ and is therefore definable from a member of $D$.
By \autoref{prp:NonUniformCoding}, we may embed $E$ in a sort $\tilde{E}$ which is of the form $\widehat{\bigcup D_n}$, for appropriate quotients of $D \times 2^\bN$, as in the previous paragraph.
This presentation of $E$ need not be unique, so let us just fix one such.
Say $E'$ is another sort of $T$, so $E' \subseteq \tilde{E}' = \widehat{\bigcup D_n'}$ as above.
Any formula on $\tilde{E} \times \tilde{E}'$ is, by \autoref{prp:IncreasingLimit}, coded by a sequence of formulas on $D_n \times D_n'$ (its restrictions), i.e., by formulas on $\bigl(D \times 2^\bN\bigr)^2$.
It is therefore definable in $T_{2D}$.
In particular, the distance to (the copy of) $E$ in $\tilde{E}$ is definable in $T_{2D}$, so each sort $E$ of $T$ can be interpreted in $T_{2D}$ (or at least, some isometric copy of $E$ is interpretable).
Similarly, every formula on $E \times E'$, can be extended to a formula on $\tilde{E} \times \tilde{E}'$, so it is definable in $T_{2D}$ (on the copies of $E$ and $E'$).
Consider now a finite product $E = \prod_{i<n} E_i$ of sorts of $T$.
We have already chosen embeddings $E \subseteq \tilde{E}$ and $E_i \subseteq \tilde{E}_i$ as above.
The projection map $\pi_i\colon E \rightarrow E_i$ can be coded by a formula on $E \times E_i$, namely
\begin{gather*}
\Gamma_{\pi_i}(x,y) = d_{E_i}(x_i,y),
\end{gather*}
where $\Gamma$ stands for ``Graph''.
We have already observed that such a formula is definable in $T_{2D}$.
It follows that the structure of $E$ as a product of the $E_i$ is definable in $T_{2D}$.
Finally, any formula on $E_0 \times \cdots \times E_{n-1}$ can be viewed as a unary formula on the product $E$, which is, again, definable in $T_{2D}$.
In conclusion, we can interpret every sort of $T$ in $T_{2D}$, and recover the full structure on these sorts.
In other words, $T'$ is indeed an interpretational expansion of $T_{2D}$, completing the proof.
\end{proof}
\section{Groupoid constructions and reconstruction strategies}
\label{sec:Reconstruction}
In this section we propose a general framework for ``reconstruction theorems''.
To any coding sort $D$ (see \autoref{dfn:CodingSort}) we associate a topological groupoid $\bG_D(T)$ from which the theory $T_{2D}$ of \autoref{prp:CodingSortBiInterpretation} can be reconstructed.
Since $T$ is bi-interpretable with $T_{2D}$, the groupoid $\bG_D(T)$ determines the bi-interpretation class of $T$.
If the coding sort is moreover determined by the bi-interpretation class of $T$ (up to definable bijection), then the groupoid is a bi-interpretation invariant.
Various previously known constructions fit in this framework, as well as the one towards which aims the present paper.
For a general treatment of topological groupoids, we refer the reader to Mackenzie \cite{Mackenzie:LieGroupoids}, or, for the bare essentials we shall need here, to \cite{BenYaacov:ReconstructionGroupoid}.
We recall that a \emph{groupoid} $\bG$ is defined either as a small category in which all morphisms are invertible, or algebraically, as a single set (of all morphisms), equipped with a partial composition law and a total inversion map, satisfying appropriate axioms.
When viewed as a category, the set of objects can be identified with the set of identity morphisms, and we call it the \emph{basis} $\bB$ of $\bG$.
In the algebraic formalism, which we follow here, the basis is $\bB = \{e \in \bG : e^2 = e\} \subseteq \bG$.
If $g \in \bG$, then $s(g) = g^{-1} g$ and $t(g) = g g^{-1}$ are both defined, and belong to $\bB$, being the \emph{source} and \emph{target} of $g$, respectively.
The domain of the composition law is
\begin{gather*}
\dom(\cdot) = \bigl\{ (g,h) : s(g) = t(h) \bigr\} \subseteq \bG^2.
\end{gather*}
A \emph{topological groupoid} is a groupoid equipped with a topology in which the partial composition law and total inversion map are continuous.
In a topological groupoid the source and target maps $s,t\colon \bG \rightarrow \bB$ are continuous as well, $\bB$ is closed in $\bG$, and $\dom(\cdot)$ closed in $\bG^2$.
A topological groupoid $\bG$ is \emph{open} if, in addition, the composition law $\cdot\colon \dom(\cdot) \rightarrow \bG$ is open, or equivalently, if the source map $s\colon \bG \rightarrow \bB$ (or target map $t\colon \bG \rightarrow \bB$) is open.
A (topological) group is a (topological) groupoid whose basis is a singleton.
Such a topological groupoid is always open.
\begin{dfn}
\label{dfn:CodingSortGroupoid}
Let $T$ be a theory in a countable language, and $D$ a coding sort.
We let $\tS_{D \times D}(T)$ denote the space of types of pairs of elements of $D$.
We define the following two subsets of $\tS_{D \times D}(T)$:
\begin{gather*}
\bG_D^0(T) = \bigl\{ \tp(a,a) : a \in D^0 \bigr\},
\\
\bG_D(T) = \bG_D^0(T) \cup \bigl\{ \tp(a,b) : a, b \in D \setminus D^0 \ \& \ \dcl(a) = \dcl(b) \bigr\},
\end{gather*}
where $a$ and $b$ vary over all members of $D$ (or $D^0$) in models of $T$.
We equip $\bG_D(T)$ with the induced topology, as well as with the following inversion law and partial composition law:
\begin{gather*}
\tp(a,b)^{-1} = \tp(b,a),
\qquad
\tp(a,b) \cdot \tp(b,c) = \tp(a,c).
\end{gather*}
We also write $\bB_D(T)$ for $\tS_D(T)$, and identify $\tp(a) \in \bB_D(T)$ with $\tp(a,a) \in \bG_D(T)$.
This identifies $\bB_D^0(T) = \tS_{D^0}(T)$ with $\bG_D^0(T)$.
\end{dfn}
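For instance, if $g = \tp(a,b) \in \bG_D(T)$, then these laws give $g^{-1} g = \tp(b,a) \cdot \tp(a,b) = \tp(b,b)$, which under the above identification is $\tp(b) \in \bB_D(T)$; similarly $g g^{-1} = \tp(a,a) = \tp(a)$.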
Notice that the density hypothesis in \autoref{dfn:CodingSort} implies that $\bG_D(T)$ is dense in $\tS_{D \times D}(T)$.
\begin{conv}
\label{conv:CodingSortGroupoid}
We usually consider the theory $T$ and the coding sort $D$ to be fixed and drop them from notation, so $\bG = \bG_D(T)$, $\bB = \bB_D(T)$, and so on.
\end{conv}
\begin{lem}
\label{lem:CodingSortGroupoid}
Let $D$ be a coding sort for $T$.
\begin{enumerate}
\item As defined above $\bG = \bG_D(T)$ is a Polish open topological groupoid with basis $\bB = \bB_D(T)$.
\item If $g = \tp(a,b) \in \bG$, then $s(g) = \tp(b) \in \bB$ is its source, and $t(g) = \tp(a) \in \bB$ its target.
\item
\label{item:CodingSortGroupoidBasisNeighbourhoods}
If $d$ is a definable distance on $D$, then the family of sets
\begin{gather*}
U_r = \bigl\{ \tp(a,b) \in \bG : d(a,b) < r \bigr\},
\end{gather*}
for $r > 0$, forms a basis of open neighbourhoods for $\bB$ in $\bG$.
\end{enumerate}
\end{lem}
\begin{proof}
It is easy to check that $\bG$ is a topological groupoid with basis $\bB$ and the stated source and target.
Since the language is countable, the space $\tS_{D \times D}(T)$ is compact metrisable, and therefore Polish.
As a condition on $\tp(a,b)$, the property $\dcl(a) = \dcl(b)$ is $G_\delta$ by \cite[Lemma~5.1]{BenYaacov:ReconstructionGroupoid}, and $a,b \notin D^0$ is open.
Therefore $\bG$ is Polish, as the union of a closed subset and a $G_\delta$ subset of a Polish space.
Each set $U_r$ is open and contains $\bB$.
On the other hand, if $U$ is any open neighbourhood of $\bB$ in $\bG$, then it must be of the form $W \cap \bG$, where $W$ is an open neighbourhood of $\bB$ in $\tS_{D \times D}(T)$.
Since $\bB$ is defined there by the condition $d(x,y) = 0$, and by compactness, $W$ must contain $\bigl[ d(x,y) < r \bigr]$ for some $r > 0$, so $U$ contains $U_r$.
It is left to show that the target map $t\colon \bG \rightarrow \bB$ is open.
First, consider $g \in \bG \setminus \bG^0 \subseteq \tS_{D \times D}(T)$.
Let $[x \in D^0] \subseteq \tS_{D \times D}(T)$ be the set of types $p(x,y)$ that imply $x \in D^0$, and similarly for $y$, observing that $g \notin [x \in D^0] \cup [y \in D^0]$.
Since this union is a closed set, $g$ admits a basis of neighbourhoods in $\tS_{D \times D}(T)$ that are disjoint from $[x \in D^0] \cup [y \in D^0]$.
By Urysohn's Lemma and the identification of formulas with continuous functions on types, $g$ admits a basis of neighbourhoods of the form $\bigl[ \varphi(x,y) > 0 \bigr]$ where $\varphi(x,y)$ vanishes if $x \in D^0$ or $y \in D^0$.
The family of sets $\bigl[ \varphi(x,y) > 0 \bigr] \cap \bG$ for such $\varphi$ is a basis of neighbourhoods for $g$ in $\bG$.
Assume we are given such a neighbourhood $g \in U = \bigl[ \varphi(x,y) > 0 \bigr] \cap \bG$ (so $\varphi(x,y)$ vanishes if $x \in D^0$ or $y \in D^0$).
Let $V = \left[ \sup_y \, \varphi(x,y) > 0 \right] \subseteq \tS_D(T) = \bB$.
Then $V$ is open, and clearly $t(U) \subseteq V$.
Conversely, assume that $\tp(a) \in V$, where $a \in D(M)$ for some $M \vDash T$.
Then there exists $b \in D(M)$ such that $\varphi(a,b) > 0$.
By hypothesis on $\varphi$, it follows that $a,b \notin D^0$.
In particular, $a$ codes a separable $N \preceq M$, and we may assume that $b \in D(N)$.
Now, by the density property and the uniform continuity of $\varphi$, we may assume that $b$ also codes $N$, so $\tp(a,b) \in U$.
This proves that $t(U) = V$.
Now let $g = \tp(a,a) \in \bG^0$.
We have a basis of neighbourhoods of $g$ in $\bG$ consisting of sets of the form
\begin{gather*}
U = \bigl[ \varphi(x) > 0 \bigr] \cap \bigl[ d(x,y) < r \bigr] \cap \bG,
\end{gather*}
where $\varphi(a) > 0$.
It is then easily checked that $t(U) = \bigl[ \varphi(x) > 0 \bigr]$, since we may always take $y = x$ as witness.
This completes the proof.
\end{proof}
\begin{dfn}
\label{dfn:UCC}
Let $\bG$ be a topological groupoid.
Say that a function $\varphi\colon \bG \rightarrow \bR$ is \emph{uniformly continuous and continuous (UCC)} if it is continuous on $\bG$, and in addition satisfies the following uniform continuity condition: for every $\varepsilon > 0$ there exists an open neighbourhood $U$ of the basis $\bB$ such that for every $g \in \bG$,
\begin{gather*}
h \in UgU \quad \Longrightarrow \quad |\varphi(g) - \varphi(h)| < \varepsilon
\end{gather*}
\end{dfn}
Notice that, unlike the situation for groups, the uniform continuity condition does not imply continuity (it may very well happen that $g_n \rightarrow h$ while $h \notin \bG g_n \bG$ for any $n$).
\begin{prp}
\label{prp:UCC}
Assume that $D$ is a coding sort for $T$, and let $\bG = \bG_D(T)$.
Let $\varphi(x,y)$ be a formula on $D \times D$, and let $\varphi_\bG\colon \bG \rightarrow \bR$ be the naturally induced function
\begin{gather*}
g = \tp(a,b) \quad \Longrightarrow \quad \varphi_\bG(g) = \varphi(a,b).
\end{gather*}
Then the map $\varphi \mapsto \varphi_\bG$ defines a bijection between formulas on $D \times D$, up to equivalence, and UCC functions on $\bG$.
\end{prp}
\begin{proof}
Let us first check that if $\varphi$ is a formula, then $\varphi_\bG$ is UCC.
It is clearly continuous.
The uniform continuity condition follows from the fact that $\varphi$ is uniformly continuous in each argument, together with the fact that for any $\delta > 0$ we may choose $U = \bigl[ d(x,y) < \delta \bigr] \cap \bG$.
Conversely, assume that $\psi \colon \bG \rightarrow \bR$ is UCC.
By density, the function $\psi$ admits at most one continuous extension to $\tS_{D \times D}(T)$, and we need to show that one such exists.
In other words, given $p \in \tS_{D \times D}(T)$ and $\varepsilon > 0$, it will suffice to find a neighbourhood $p \in V \subseteq \tS_{D \times D}(T)$ such that $\psi$ varies by less than $\varepsilon$ on $V \cap \bG$.
If $p \in \bG$ this is easy, so we may assume that $p \notin \bG$.
Let us fix $\varepsilon > 0$ first.
By uniform continuity of $\psi$ and \autoref{lem:CodingSortGroupoid}\autoref{item:CodingSortGroupoidBasisNeighbourhoods}, there exists $\delta > 0$ such that $|\psi(g) - \psi(ugv)| < \varepsilon$ whenever $g \in \bG$, $u,v \in \bigl[d(x,y) < \delta\bigr] \cap \bG$, and $ugv$ is defined.
Given $p = \tp(a_0,b_0)$, we may assume that $a_0,b_0 \in D(M)$ for some separable model $M$.
Since $p \notin \bG$, we must have $a_0 \neq b_0$, and (possibly decreasing $\delta$) we may assume that $d(a_0,b_0) > 2\delta$.
By the density property, there exist $a_1,b_1 \in D(M)$ that code $M$, with $d(a_0,a_1) + d(b_0,b_1) < \delta$, so $d(a_1,b_1) > \delta$.
Let $g_1 = \tp(a_1,b_1) \in \bG$.
By continuity, there exists an open neighbourhood $g_1 \in V_1 \subseteq \tS_{D \times D}(T)$ such that $|\psi(g_1) - \psi(h)| < \varepsilon$ for every $h \in V_1 \cap \bG$.
Possibly decreasing $V_1$, we may further assume that $\tp(a,b) \in V_1$ implies $d(a,b) > \delta$.
We may even assume that $V_1$ is of the form $[\chi < \delta]$, where $\chi(x,y) \geq 0$ is a formula and $\chi(a_1,b_1) = \chi(g_1) = 0$.
Define
\begin{gather*}
\chi'(x,y) = \inf_{x',y'} \, \bigl[ d(x,x') + d(y,y') + \chi(x',y') \bigr], \\
V = \bigl[ \chi'(x,y) < \delta \bigr] \subseteq \tS_{D \times D}(T).
\end{gather*}
Then $V$ is open, $p \in V$, and $\tp(a,b) \in V$ implies $a \neq b$ (in other words, $V \cap \bB = \emptyset$).
In order to conclude, consider any $g_2 = \tp(a_2,b_2) \in V \cap \bG$.
Since $a_2 \neq b_2$, they cannot belong to the exceptional set, so both code some separable model $N$.
By definition of $V$, there exist $a_3,b_3 \in D(N)$ such that $\chi(a_3,b_3) + d(a_2,a_3) + d(b_2,b_3) < \delta$.
By the density property, and uniform continuity of $\chi$, we may assume that $a_3$ and $b_3$ code $N$ as well.
Let $g_3 = \tp(a_3,b_3)$, $u = \tp(a_3,a_2)$, $v = \tp(b_2,b_3)$.
Then $g_3 = u g_2 v \in V_1$, so
\begin{gather*}
|\psi(g_2) - \psi(g_1)|
\leq |\psi(g_2) - \psi(g_3)| + |\psi(g_3) - \psi(g_1)|
\leq 2\varepsilon.
\end{gather*}
Therefore $\psi$ varies by less than $4\varepsilon$ on $V \cap \bG$, which is good enough.
\end{proof}
\begin{cor}
\label{cor:BoundedUCC}
Every UCC function on $\bG_D(T)$ is bounded.
\end{cor}
\begin{dfn}
\label{dfn:Norm}
Let $\bG$ be a groupoid.
A \emph{semi-norm} on $\bG$ is a function $\rho\colon \bG \rightarrow \bR^+$ that satisfies
\begin{itemize}
\item $\rho\rest_\bB = 0$, and
\item $\rho(g) = \rho(g^{-1})$, and
\item $\rho(gh) \leq \rho(g) + \rho(h)$, when defined.
\end{itemize}
It is a \emph{norm} if $\rho(g) = 0$ implies $g \in \bB$.
A norm $\rho$ is \emph{compatible} with a topology on $\bG$ if it is continuous, and the sets
\begin{gather*}
\{\rho < r\} = \bigl\{g \in \bG : \rho(g) < r\bigr\},
\end{gather*}
for $r > 0$, form a basis of neighbourhoods for $\bB$.
\end{dfn}
\begin{cor}
\label{cor:CorrespondenceDistanceNormUCC}
The correspondence of \autoref{prp:UCC} restricts to a one-to-one correspondence between definable distances $d$ on $D$ and compatible norms on $\bG = \bG_D(T)$.
\end{cor}
\begin{proof}
Let $d$ be a definable distance on $D \times D$ and $\rho_d$ the corresponding UCC function on $\bG$.
Then $\rho_d$ is clearly a continuous norm, and it is a compatible norm by \autoref{lem:CodingSortGroupoid}\autoref{item:CodingSortGroupoidBasisNeighbourhoods}.
The converse is more delicate.
Let $\rho$ be a compatible norm.
Then it is continuous, and it is easy to see that every continuous semi-norm is UCC, so $\rho = \varphi_\bG$ (in the notations of \autoref{prp:UCC}) for some formula $\varphi(x,y)$.
If $a,b,c \in D$ all code the same separable model, then $\varphi(a,a) = 0$ and $\varphi(a,b) \leq \varphi(a,c) + \varphi(b,c)$.
The set of types of such triplets is dense in $\tS_{D \times D \times D}(T)$, by the density property, so the same holds throughout and $\varphi$ defines a pseudo-distance.
It is left to show that $\varphi$ defines a distance (and not merely a pseudo-distance).
Let $d$ be any definable distance on $D$, say the one distinguished in the language.
We already know that $\rho_d$ is a compatible norm.
Therefore, for every $\varepsilon > 0$ there exists $\delta > 0$ such that $\{\rho < \delta\} \subseteq \{\rho_d < \varepsilon\}$.
As in the previous paragraph, this means that the (closed) condition $\varphi(a,b) < \delta \ \Longrightarrow \ d(a,b) \leq \varepsilon$ holds on a dense set of types, and therefore throughout.
In particular, if $\varphi(a,b) = 0$, then $a = b$, and the proof is complete.
\end{proof}
Let $T$ be a theory, $D$ a coding sort for $T$, and $\bG = \bG_D(T)$.
Then from $\bG$, given as a topological groupoid, we can essentially recover the language $\cL_D$ and the theory $T_{2D}$, as follows.
\begin{enumerate}
\item
\label{item:ReconstructionStepNorm}
We choose, arbitrarily, a compatible norm $\rho$ on $\bG$ (which exists, by \autoref{cor:CorrespondenceDistanceNormUCC}).
\item
\label{item:ReconstructionStepLanguage}
We let $\cL_\bG$ consist of a single sort, also named $D$, together with a binary predicate symbol $P_\psi$ for each UCC function $\psi$ on $\bG$.
We know that $\psi$ is bounded (\autoref{cor:BoundedUCC}), and we impose the same bound on $P_\psi$.
We also know that for every $\varepsilon > 0$ there exists a neighbourhood $U$ of $\bB$ such that $h \in UgU$ implies $|\psi(g) - \psi(h)| < \varepsilon$, and since $\rho$ is compatible, there exists $\delta = \delta_\psi(\varepsilon) > 0$ such that the same holds when $U = \{\rho < \delta\}$.
We then impose the corresponding modulus of uniform continuity on $P_\psi$, namely, requiring that
\begin{gather*}
d(x,x') \vee d(y,y') < \delta_\psi(\varepsilon)
\quad \Longrightarrow \quad
|P_\psi(x,y) - P_\psi(x',y')| \leq \varepsilon.
\end{gather*}
We also use the bound on $\rho$ as bound on the distance predicate.
\item
\label{item:ReconstructionModels}
Let us fix $e \in \bB$, and consider the set
\begin{gather*}
e \bG = \{ g \in \bG : t_g = e \}.
\end{gather*}
If $g,h \in e \bG$, then $g^{-1} h$ is defined, and for any UCC $\psi$ we let:
\begin{gather*}
P_\psi(g,h) = \psi(g^{-1} h).
\end{gather*}
In particular, $d(g,h) = P_\rho(g,h) = \rho(g^{-1} h)$ is a distance function on $e \bG$.
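Indeed, the norm axioms translate directly into the axioms of a distance on $e \bG$: for $f,g,h \in e \bG$,
\begin{gather*}
\rho(g^{-1} g) = 0, \qquad
\rho(g^{-1} h) = \rho(h^{-1} g), \qquad
\rho(g^{-1} h) \leq \rho(g^{-1} f) + \rho(f^{-1} h),
\end{gather*}
and $\rho(g^{-1} h) = 0$ implies that $g^{-1} h$ lies in the basis, i.e., that $g = h$.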
Assume now that $g',h' \in e \bG$ as well, and $d(g,g') \vee d(h,h') < \delta = \delta_\psi(\varepsilon)$.
Let $u = g'^{-1} g$ and $v = h^{-1} h'$.
Then $g'^{-1} h' = u g^{-1} h v$, and $u,v \in \{\rho < \delta\}$, so indeed
\begin{gather*}
|P_\psi(g,h) - P_\psi(g',h')| \leq \varepsilon,
\end{gather*}
as required.
The bounds are also respected, so $e \bG$, equipped with the distance and interpretations of $P_\psi$, is an $\cL_\bG$-pre-structure, and its completion $\widehat{e \bG}$ is an $\cL_\bG$-structure.
\item
\label{item:ReconstructionTheory}
We define $T_\bG$ as the theory of the collection of all $\cL_\bG$-structures of this form:
\begin{gather*}
T_\bG = \Th_{\cL_\bG}\Bigl( \widehat{e \bG} : e \in \bB \Bigr).
\end{gather*}
\end{enumerate}
By ``essentially recover'', we mean the following.
\begin{thm}
\label{thm:Reconstruction}
Let $T$ be a theory, $D$ a coding sort for $T$, and $\bG = \bG_D(T)$.
Let $\cL_\bG$ and $T_\bG$ be constructed as in the preceding discussion.
Then $T_\bG$ and $T_{2D}$ are one and the same, up to renaming the binary predicate symbols, and up to an arbitrary choice of the distance on the sort $D$ (from among all definable distances).
In particular, this procedure allows us to recover from $\bG$ a theory $T_\bG$ that is bi-interpretable with $T$.
\end{thm}
\begin{proof}
By \autoref{cor:CorrespondenceDistanceNormUCC}, step \autoref{item:ReconstructionStepNorm} consists exactly of choosing a definable distance $d$ on $D$, and the corresponding norm $\rho = d_\bG$.
This choice is irremediably arbitrary.
By \autoref{prp:UCC}, in step \autoref{item:ReconstructionStepLanguage} there is a natural bijection between symbols of $\cL_D$ (corresponding to formulas $\varphi(x,y)$ on $D \times D$, up to equivalence) and symbols of $\cL_\bG$: to $\varphi$ we associate the UCC function $\psi_\varphi = \varphi_\bG$, to which in turn we associate the symbol $P_{\psi_\varphi}$.
Finally, let $M \vDash T$ be separable, let $a \in D(M)$ be a code for $M$, and let $e = \tp(a) \in \bB$.
Let $D(M)_1$ denote the set of $b \in D(M)$ that also code $M$.
If $b \in D(M)_1$, then $g_b = \tp(a,b) \in e \bG$.
Moreover, if $b,c \in D(M)_1$ and $\varphi$ is a formula on $D \times D$, then $\tp(b,c) = g_b^{-1} g_c \in \bG$, so
\begin{gather*}
\varphi(b,c) = \psi_\varphi(g_b^{-1} g_c) = P_{\psi_\varphi}(g_b,g_c).
\end{gather*}
In particular, $d(b,c) = d(g_b,g_c)$ (where the first is the distance we chose on $D$, and the second the distance we defined on $e \bG$ in step \autoref{item:ReconstructionModels}).
Thus, up to representing $\varphi$ by the symbol $P_{\psi_\varphi}$, the map $b \mapsto g_b$ defines an isomorphism of the $\cL_D$-pre-structure $D(M)_1$ with the $\cL_\bG$-pre-structure $e \bG$.
This extends to an isomorphism of the respective completions: $D(M) \simeq \widehat{e \bG}$.
It follows that, up to this change of language (and choice of distance), the theory $T_\bG$ defined in step \autoref{item:ReconstructionTheory} is the theory of all separable models of $T_{2D}$.
Since $T$ is in a countable language, $T_{2D}$ is in a ``separable language'', so it is equal to the theory of all its separable models.
By \autoref{prp:CodingSortBiInterpretation}, $T$ is bi-interpretable with $T_{2D}$, and therefore also with $T_\bG$.
\end{proof}
Having achieved this, we are ready to start producing reconstruction theorems: all we need is a coding sort that only depends (up to definable bijection) on the bi-interpretation class of $T$.
\begin{exm}
\label{exm:ReconstructionAleph0Categorical}
Let $T$ be an $\aleph_0$-categorical theory.
Let $M$ be its unique separable model, and let $a$ be any sequence (possibly infinite, but countable), in any sort or sorts, such that $\dcl(a) = \dcl(M)$ (for example, any dense sequence will do).
Let $D_{T,0}$ be the set of realisations of $p = \tp(a)$.
Since $T$ is $\aleph_0$-categorical, $D_{T,0}$ is a definable set, i.e., a sort.
It is easy to check that it is a coding sort (with no exceptional set).
If $b$ is another code for $M$, and $D'_{T,0}$ is the set of realisations of $\tp(b)$, then $\dcl(a) = \dcl(b)$ and $\tp(a,b)$ defines the graph of a definable bijection $D_{T,0} \simeq D'_{T,0}$.
Therefore, $D_{T,0}$ does not depend on the choice of $a$.
Moreover, assume that $T'$ is an interpretational expansion of $T$.
Then it has a model $M'$ that expands $M$ accordingly.
But then $\dcl(M') = \dcl(M) = \dcl(a)$ (as calculated when working in $T'$), so $D_{T',0} = D_{T,0}$.
It follows that $D_{T,0}$ only depends on the bi-interpretation class of $T$.
Since $\tS_{D_{T,0}}(T) = \{p\}$ is a singleton, the groupoid
\begin{gather*}
G(T) = \bG_{D_{T,0}}(T)
\end{gather*}
is in fact a group.
It only depends on the bi-interpretation class of $T$ (since $D_{T,0}$ only depends on it) and by \autoref{thm:Reconstruction}, it is a complete bi-interpretation invariant for $T$.
We leave it to the reader to check that
\begin{gather*}
G(T) \simeq \Aut(M),
\end{gather*}
and that the reconstruction result is just a complicated restatement of those of \cite{Ahlbrandt-Ziegler:QuasiFinitelyAxiomatisable,BenYaacov-Kaichouh:Reconstruction}.
\end{exm}
\begin{exm}
\label{exm:ReconstructionClassical}
Let $T$ be a theory in classical logic.
In \cite{BenYaacov:ReconstructionGroupoid}, using an arbitrary parameter $\Phi$, we gave an explicit construction of a set of infinite sequences $D_\Phi$.
We showed that it is a definable set in the sense of continuous logic, and that its interpretation in models of $T$ only depends on the bi-interpretation class of $T$ (up to a definable bijection).
It also follows from what we showed that it is a coding sort (without exceptional set).
Since it is unique, let us denote it by $D_T$ (in fact, we could also just denote it by $D$: its construction only depends on the language, and then we simply restrict our consideration of it to models of $T$).
We then proved that the groupoid
\begin{gather*}
\bG(T) = \bG_{D_T}(T)
\end{gather*}
is a complete bi-interpretation invariant for $T$.
This is a special case of \autoref{thm:Reconstruction}.
\end{exm}
\begin{exm}
\label{exm:ReconstructionUniversalSkloem}
Let $T$ be a (complete) theory in continuous logic.
In \cite{BenYaacov:ReconstructionGroupoid} we defined when a sort $D_T$ is a \emph{universal Skolem sort}, and proved that if such a sort exists, then it is unique, and only depends on the bi-interpretation class of $T$ (in contrast with the previous example, here we do not have a general construction for such a sort, let alone a uniform one, so it really does depend on $T$).
We proved that if $T$ admits a universal Skolem sort $D_T$, then
\begin{gather*}
\bG(T) = \bG_{D_T}(T)
\end{gather*}
is a complete bi-interpretation invariant for $T$.
Again, we also proved that $D_T$ is a coding sort, so this is a special case of \autoref{thm:Reconstruction}.
\end{exm}
\begin{rmk}
\label{rmk:ReconstructionUniversalSkloemSpecialCases}
\autoref{exm:ReconstructionUniversalSkloem} encompasses the two previous examples in the following sense.
\begin{itemize}
\item If $T$ is classical, then the sort $D_T$ of \autoref{exm:ReconstructionClassical} is a universal Skolem sort, so \autoref{exm:ReconstructionClassical} is a special case of \autoref{exm:ReconstructionUniversalSkloem}.
\item If $T$ is $\aleph_0$-categorical, then $D_T = D_{T,0} \times 2^\bN$ is a universal Skolem sort, so
\begin{gather*}
\bG(T) \simeq 2^\bN \times G(T) \times 2^\bN, \qquad \text{with groupoid law} \qquad (\alpha,g,\beta) \cdot (\beta,h,\gamma) = (\alpha, gh, \gamma).
\end{gather*}
Consequently, $\bB(T) = 2^\bN$, and if $e \in \bB(T)$, then $G(T) \simeq e \bG(T) e$.
Therefore, the reconstruction of \autoref{exm:ReconstructionAleph0Categorical} can be recovered from a special case of \autoref{exm:ReconstructionUniversalSkloem}.
\end{itemize}
In both \autoref{exm:ReconstructionClassical} and \autoref{exm:ReconstructionUniversalSkloem}, the basis $\tS_{D_T}(T)$ is homeomorphic to the Cantor space $2^\bN$.
\end{rmk}
However, in \cite{BenYaacov:ReconstructionGroupoid} we also gave an example of a continuous theory which does not admit a universal Skolem sort.
In particular, the explicit construction of $D_T$ as $D_\Phi$ in the case of a classical theory simply does not extend, as is, to continuous logic.
The rest of this article is dedicated to presenting a modified version of this construction, giving rise to a coding sort that \emph{does} have an exceptional set (a very simple one, consisting of a single point), allowing us to prove a reconstruction theorem for every first order theory in a countable language (in continuous or classical logic).
\section{Star spaces}
\label{sec:StarSpace}
Before we can construct our coding sort, we require a technical detour, where we introduce star sets in general, and, in the model-theoretic context, star sorts.
For the time being, we must ask the reader to bear with us -- the usefulness of these notions for our goal is explained in some detail at the beginning of \autoref{sec:Witnesses}.
\begin{dfn}
\label{dfn:StarSpace}
A \emph{retraction set} is a set $X$ equipped with an action of the multiplicative monoid $[0,1]$.
In particular, $1 \cdot x = x$ for all $x \in X$, and $\alpha (\beta x) = (\alpha \beta) x$ (so this is a little stronger than a homotopy).
It is a \emph{star set} if $0 \cdot x$ does not depend on $x$.
We then denote this common value by $0 \in X$, and call it the \emph{root} of $X$.
A \emph{topological retraction (star) space} is one equipped with a topology making the action $[0,1] \times X \rightarrow X$ continuous.
A \emph{metric star space} is one equipped with a distance function satisfying $d(\alpha x, \alpha y) \leq \alpha d(x,y)$ and $d(\alpha x, \beta x) = |\alpha-\beta| \|x\|$, where $\|x\| = d(x,0)$.
\end{dfn}
Notice that a retraction set $X$ can be fibred over $0 \cdot X$, with each fibre a star set.
We could also define a metric retraction space by putting infinite distance between fibres.
\begin{exm}
\label{exm:StarSpaceLine}
The real half line $\bR^+$ is naturally a topological and metric star space.
The interval $[0,1]$ (or $[0,r]$ for any $r > 0$) is a compact topological and bounded metric star space.
\end{exm}
\begin{exm}
\label{exm:StarSetProduct}
If $X$ and $Y$ are two star sets, then $X \times Y$, equipped with the diagonal action $\alpha(x,y) = (\alpha x, \alpha y)$, is again a star set.
If both are metric star spaces, then equipping the product with the maximum distance makes it a metric star space as well (here the maximum distance is preferable to the sum distance, since it preserves bound hypotheses on the diameter).
\end{exm}
\begin{exm}
\label{exm:Cone}
Let $X$ be a set, and equip $[0,1] \times X$ with the equivalence relation
\begin{gather*}
(\alpha, x) \sim (\beta,y) \qquad \Longleftrightarrow \qquad (\alpha,x) = (\beta,y) \quad \text{or} \quad \alpha = \beta = 0.
\end{gather*}
The \emph{cone} of $X$ is the quotient space
\begin{gather*}
*X = \bigl( [0,1] \times X \bigr) / {\sim}.
\end{gather*}
A member of $*X$ will be denoted $[\alpha,x]$.
We equip it with the action $\alpha \cdot [\beta,x] = [\alpha\beta,x]$.
This makes it a star set, with $[0,x] = 0$ regardless of $x$.
We shall tend to identify $x \in X$ with $[1,x] \in *X$, so $[\alpha,x]$ may also be denoted by $\alpha x$.
When $X$ is a compact Hausdorff space, the relation $\sim$ is closed, $*X$ is again compact and Hausdorff, and the identification $X \subseteq *X$ is a topological embedding.
When $X$ is a bounded metric space, say $\diam(X) \leq 2$, we propose to metrise $*X$ by
\begin{gather}
\label{eq:StarDistance}
d(\alpha x, \beta y) = |\alpha - \beta| + (\alpha \wedge \beta) d(x,y).
\end{gather}
In particular, if either $\alpha$ or $\beta$ vanishes, then the right hand side does not depend on either $x$ or $y$, so $d$ is well defined, and $d(0,x) = 1$ for all $x \in X$.
The only property that is not entirely obvious is the triangle inequality, namely
\begin{gather}
\label{eq:StarDistanceTriangle}
|\alpha-\gamma| + (\alpha \wedge \gamma) d(x,z)
\leq
|\alpha-\beta| + (\alpha \wedge \beta) d(x,y) + |\beta-\gamma| + (\beta \wedge \gamma) d(y,z).
\end{gather}
We may assume that $\alpha \geq \gamma$, so $\alpha \wedge \gamma = \gamma$.
If $\beta \geq \gamma$, then \autoref{eq:StarDistanceTriangle} holds trivially since $\alpha \wedge \beta \geq \gamma = \beta \wedge \gamma$.
If $\beta \leq \gamma$, then the right hand side evaluates to
\begin{gather*}
(\alpha - \gamma) + 2(\gamma - \beta) + \beta d(x,y) + \beta d(y,z).
\end{gather*}
Applying the triangle inequality for $X$ and the hypothesis that $2 \geq d(x,z)$, we obtain \autoref{eq:StarDistanceTriangle} in this case as well.
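Explicitly, since $d(x,z) \leq d(x,y) + d(y,z)$ and $d(x,z) \leq 2$,
\begin{gather*}
\gamma d(x,z) = \beta d(x,z) + (\gamma - \beta) d(x,z) \leq \beta d(x,y) + \beta d(y,z) + 2 (\gamma - \beta),
\end{gather*}
and adding $\alpha - \gamma$ to both sides gives the desired inequality.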
We conclude that $(*X,d)$ is a metric space.
The embedding $X \subseteq *X$ is isometric, and $\diam(*X) = 1 \vee \diam(X)$.
If $X$ is complete, then so is $*X$.
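Moreover, this distance is compatible with the star structure, as required of a metric star space in \autoref{dfn:StarSpace}:
\begin{gather*}
d\bigl( \alpha (\gamma x), \alpha (\delta y) \bigr) = \alpha \Bigl( |\gamma - \delta| + (\gamma \wedge \delta) d(x,y) \Bigr) = \alpha \, d(\gamma x, \delta y), \\
d\bigl( \alpha (\gamma x), \beta (\gamma x) \bigr) = |\alpha - \beta| \, \gamma = |\alpha - \beta| \, \|\gamma x\|.
\end{gather*}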
A special instance of this is the cone of a singleton, which can be identified with the interval $[0,1]$ equipped with the natural star, topological or metric structures.
\end{exm}
\begin{exm}
\label{exm:GeneralisedCone}
More generally, let $S$ be a star set, $X$ an arbitrary set, and define
\begin{gather*}
(s, x) \sim (t,y) \qquad \Longleftrightarrow \qquad (s,x) = (t,y) \quad \text{or} \quad s = t = 0,
\\
S*X = \bigl( S \times X \bigr) / {\sim}.
\end{gather*}
As in the definition of a cone, a member of $S*X$ will be denoted $[s,x]$ or $s * x$ (in analogy with the notation $\alpha x$).
We make $S * X$ into a star set by defining $\alpha \cdot (s * x) = (\alpha s) * x$.
This indeed generalises the cone construction, with $*X = [0,1] * X$.
When $S$ and $X$ are compact Hausdorff spaces, the relation $\sim$ is closed, and $S*X$ is again compact and Hausdorff.
When $S$ and $X$ are bounded metric spaces, say $\diam(X) \leq 2$ and $\|s\| \leq 1$ for all $s \in S$, we equip $S*X$ with the distance function
\begin{gather*}
d(s * x, t * y) = d(s,t) \vee d\bigl( \|s\| x, \|t\| y \bigr),
\end{gather*}
where $d\bigl( \|s\| x, \|t\| y \bigr)$ is calculated in $*X$.
Notice that $\|s * x\| = \|s\|$, and the distance functions on $[0,1] * X$ and $*X$ agree.
\end{exm}
\begin{rmk}
\label{rmk:GeneralisedConeIterated}
The generalised cone construction of \autoref{exm:GeneralisedCone} can be easily iterated: $S * (X \times Y) = (S * X) * Y$, identifying $s* (x,y) = s * x * y$.
In the metric case, assume that $X$ and $Y$ are both of diameter at most two.
Equipping products with the maximum distance, $\diam(X \times Y) \leq 2$ as well, and the obvious map $*(X \times Y) \rightarrow *X \times *Y$ sending $\alpha(x,y) \mapsto (\alpha x,\alpha y)$ is isometric.
It follows that the identification $S * (X \times Y) = (S * X) * Y$ is isometric:
\begin{align*}
d(s * x * y, t * u * v)
& = d(s * x, t * u) \vee d\bigl( \|s * x\| y, \|t * u\| v\bigr)
\\
& = d(s,t) \vee d\bigl( \|s\| x, \|t\| u \bigr) \vee d\bigl( \|s\|y, \|t\| v\bigr)
\\
& = d(s,t) \vee d\bigl( \|s\| (x,y), \|t\| (u,v) \bigr)
\\
& = d\bigl( s * (x,y), t * (u,v) \bigr).
\end{align*}
In particular, $*(X \times Y) = (*X) * Y$.
\end{rmk}
\begin{dfn}
\label{dfn:Homogeneous}
Let $X$ and $Y$ be two retraction (star) spaces.
A map $f\colon X \rightarrow Y$ is \emph{homogeneous} if $f(\alpha x) = \alpha f(x)$.
It is \emph{sub-homogeneous} if $f(\alpha x) = \beta f(x)$ for some $\beta \leq \alpha$.
The latter will be mostly used when $Y = \bR^+$, in which case sub-homogeneity becomes $f(\alpha x) \leq \alpha f(x)$.
\end{dfn}
We may also equip a retraction space with a partial order defined by $\alpha x \leq x$ whenever $\alpha \in [0,1]$.
This induces the usual partial order on $\bR^+$, and sub-homogeneity can be stated as $f(\alpha x) \leq \alpha f(x)$ for arbitrary maps between retraction spaces.
Notice also that our definition of a metric retraction space $X$ simply requires the distance function to be sub-homogeneous on $X \times X$.
\section{Star sorts}
\label{sec:StarSort}
\begin{dfn}
\label{dfn:StarSort}
A \emph{star sort} is a sort equipped with a definable structure of a metric star space.
In particular, this means that the map $(\alpha,x) \mapsto \alpha x$ is definable (and not just $x \mapsto \alpha x$ for each $\alpha$).
Star sorts will usually be denoted by $D^*$, $E^*$, and so on.
\end{dfn}
\begin{dfn}
\label{dfn:SubHomogeneousFormula}
Let $D^*$ be a star sort and $\varphi(u,y)$ a formula on $D^* \times E$.
We say that $\varphi$ is \emph{sub-homogeneous} if it satisfies $\alpha \varphi(u,y) \geq \varphi(\alpha u,y) \geq 0$.
We may specify that it is sub-homogeneous in the variable $u$, especially if $u$ is not the first variable.
More generally, we may say that $\varphi(u,v,\ldots)$ is sub-homogeneous in $(u,v)$ if $\alpha \varphi(u,v,\ldots) \geq \varphi(\alpha u,\alpha v,\ldots) \geq 0$, and similarly for any other tuple of variables.
If it is sub-homogeneous in the tuple of all its variables, we just say that $\varphi$ is \emph{jointly sub-homogeneous}.
\end{dfn}
\begin{exm}
\label{exm:StarSortConstruction}
\begin{itemize}
\item If $D$ is any sort (of diameter at most two), then the cone $*D$, equipped with the distance proposed in \autoref{exm:Cone}, is a star sort.
More generally, if $D^*$ is a star sort and $E$ an arbitrary sort, then $D^* * E$, as per \autoref{exm:GeneralisedCone}, is a star sort.
\item Any finite product of star sorts, equipped with the diagonal action of $[0,1]$ and the maximum or sum distance, is again a star sort.
Similarly, any countable product of star sorts, equipped with $d(u,v) = \sum_n \frac{d_n(u_n,v_n)}{2^n \diam(d_n)}$, is again a star sort, and the same holds with supremum in place of sum.
\item If $D^*$ is a star sort and $d'(u,v)$ a jointly sub-homogeneous definable pseudo-distance on $D^*$, then the quotient $(D^*,d')$ can be equipped with an induced star structure, making it again a star sort.
\item Let $D^*$ be a star sort and $E^* \subseteq D^*$ a definable subset.
Then the distance $d(u,E^*)$ is sub-homogeneous if and only if $E^*$ is closed under multiplication by $\alpha \in [0,1]$, in which case $E^*$ is again a star sort.
\end{itemize}
\end{exm}
Notice that $\varphi(u,y)$ is sub-homogeneous in $u$ if for every fixed parameter $b$, the formula $\varphi(u,b)$ (in $u$ alone) is sub-homogeneous.
For an alternate point of view, notice that a sub-homogeneous formula $\varphi(u,y)$ does not depend on $y$ when $u = 0$.
It can therefore be viewed as a formula $\varphi(u * y)$ in the sort $D^* * E$ (see \autoref{exm:GeneralisedCone}).
Since $\alpha (u * y) = (\alpha u) * y$, a sub-homogeneous (in $u$) formula $\varphi(u,y)$ is the same thing as a sub-homogeneous formula $\varphi(u * y)$ in a single variable from the sort $D^* * E$.
Similarly, a formula $\varphi(u,v)$ on $D^* \times E^*$ is jointly sub-homogeneous if and only if it is sub-homogeneous as a formula on the product star sort.
\begin{qst}
We ordered the clauses of \autoref{exm:StarSortConstruction} in order to reflect the three operations by which we construct sorts in general.
Still, something more probably needs to be said regarding the construction of sub-homogeneous pseudo-distance functions.
In the usual context of plain sorts (and plain pseudo-distances), to every formula $\varphi(x,t)$ on $D \times E$ we can associate a formula on $D \times D$, defined by
\begin{gather*}
d_\varphi(x,y) = \sup_t \, |\varphi(x,t) - \varphi(y,t)|.
\end{gather*}
This is always a definable pseudo-distance on $D$.
Moreover, in the case where $E = D$ and $\varphi$ already defines a pseudo-distance, $d_\varphi$ agrees with $\varphi$.
Can something analogous be done in the present context as well?
\end{qst}
The following essentially asserts that we can retract continuously (with Lipschitz constant one, even) all formulas into sub-homogeneous ones.
The analogous result for a formula in several variables, with respect to joint sub-homogeneity in some of them, follows.
\begin{prp}
\label{prp:SubHomogeneousFromArbitrary}
Let $D^*$ be a star sort and $\varphi(u,y) \geq 0$ a positive formula on $D^* \times E$.
For $k \in \bN$, define
\begin{gather*}
(\SH_k \varphi)(u,y) = \inf_{u',\alpha} \, \Bigl( \alpha \varphi(u',y) + k d(\alpha u',u)\Bigr), \qquad \text{where}\ u' \in D^*, \ \alpha \in [0,1].
\end{gather*}
\begin{enumerate}
\item For any $\varphi \geq 0$ and $k$, the formula $(\SH_k \varphi)(u,y)$ is $k$-Lipschitz and sub-homogeneous in $u$, and $\SH_k \varphi \leq \varphi$.
\item For any two formulas $\varphi, \psi \geq 0$ and $r \geq 0$, if $\varphi \leq \psi + r$, then $\SH_k \varphi \leq (\SH_k \psi) + r$.
Consequently, $|(\SH_k \varphi) - (\SH_k \psi)| \leq |\varphi - \psi|$.
\item If $\varphi$ is sub-homogeneous, then $(\SH_k \varphi) \rightarrow \varphi$ uniformly as $k \rightarrow \infty$, at a rate that only depends on the bound and uniform continuity modulus of $\varphi$.
\end{enumerate}
\end{prp}
\begin{proof}
Clearly, $(\SH_k \varphi)(u,y)$ is $k$-Lipschitz in $u$.
If $(\SH_k \varphi)(u,y) < r$ and $\beta \in [0,1]$, then there exist $u'$ and $\alpha$ such that $\alpha \varphi(u',y) + k d(\alpha u',u) < r$.
Then $\alpha \beta \varphi(u',y) + k d(\alpha \beta u', \beta u) < \beta r$, showing that $(\SH_k \varphi)(\beta u) < \beta r$.
This proves sub-homogeneity.
We also always have $(\SH_k \varphi)(u,y) \leq 1 \cdot \varphi(u,y) + k d(1 \cdot u, u) = \varphi(u,y)$.
The second item is immediate.
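Indeed, if $\varphi \leq \psi + r$, then for all $u'$, $\alpha$ and $y$,
\begin{gather*}
\alpha \varphi(u',y) + k d(\alpha u',u) \leq \alpha \psi(u',y) + k d(\alpha u',u) + \alpha r \leq \alpha \psi(u',y) + k d(\alpha u',u) + r,
\end{gather*}
and taking the infimum over $u'$ and $\alpha$ yields $\SH_k \varphi \leq (\SH_k \psi) + r$.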
For the third item, we assume that $\varphi$ is sub-homogeneous, in which case
\begin{gather*}
(\SH_k \varphi)(u,y) = \inf_{u'} \, \Bigl( \varphi(u',y) + k d(u',u) \Bigr) \leq \varphi(u,y).
\end{gather*}
Say that $|\varphi| \leq M$ and $d(u,u') < \delta$ implies $|\varphi(u,y) - \varphi(u',y)| < \varepsilon$, and let $k > 2M / \delta$.
If $d(u',u) \geq \delta$, then $\varphi(u',y) + k d(u',u) \geq k \delta > 2M \geq \varphi(u,y)$, so such $u'$ may be ignored.
Restricting to those where $d(u',u) < \delta$, we see that $(\SH_k \varphi) \geq \varphi - \varepsilon$.
\end{proof}
\begin{dfn}
\label{dfn:WitnessNormalisedFormula}
We say that a formula $\varphi(x,y)$ is \emph{witness-normalised} (in $x$, unless another variable is specified explicitly) if $\inf_y \varphi = 0$ (equivalently, if $\varphi \geq 0$ and $\sup_x \inf_y \varphi = 0$).
More generally, for $\varepsilon > 0$, we say that $\varphi(x,y)$ is \emph{$\varepsilon$-witness-normalised} (in $x$) if $0 \leq \inf_y \varphi \leq \varepsilon$.
\end{dfn}
Witness-normalised formulas are analogous to formulas $\varphi(x,y)$ in classical logic for which $\exists y \varphi$ is valid: in either case, we require that witnesses exist.
If $\varphi(x,y)$ is any formula, then $\varphi(x,y) - \inf_z \varphi(x,z)$ is witness-normalised (we may say that it is \emph{syntactically} witness normalised), where we subtract a ``normalising'' term.
By definition, a sub-homogeneous or a witness-normalised formula is positive.
If $\varphi$ is witness-normalised in any of its arguments and $\varphi \geq \psi \geq 0$, then so is $\psi$.
This applies in particular to the formulas $\SH_k \varphi$ constructed in \autoref{prp:SubHomogeneousFromArbitrary}, assuming $\varphi$ is witness-normalised.
\begin{dfn}
\label{dfn:StarCorrespondence}
Let $D^*$ and $E^*$ be two star sorts.
A \emph{star correspondence} between $D^*$ and $E^*$ is a formula $\varphi(u,v)$ on $D^* \times E^*$ that is sub-homogeneous in $(u,v)$ and witness-normalised in each of $u$ and $v$.
Similarly, an \emph{$\varepsilon$-star correspondence} is a jointly sub-homogeneous formula that is $\varepsilon$-witness-normalised in each argument.
\end{dfn}
\begin{rmk}
\label{rmk:EpsilonWitnessNormalised}
If $\varphi$ is $\varepsilon$-witness-normalised (in one of its variables), then $\varphi' = \varphi \dotminus \varepsilon$ is witness-normalised (in the same), and $|\varphi - \varphi'| \leq \varepsilon$.
If $\varphi$ is sub-homogeneous, then so is $\varphi \dotminus \varepsilon$.
Therefore, if $\varphi$ is an $\varepsilon$-star correspondence, then $\varphi' = \varphi \dotminus \varepsilon$ is a star correspondence, and $|\varphi - \varphi'| \leq \varepsilon$.
\end{rmk}
Say that a definable map $\sigma\colon D \rightarrow E$ is \emph{densely surjective} if it is surjective in every sufficiently saturated model of the ambient theory, or equivalently, if $\sigma$ has dense image in every model.
Recall that a definable map $\sigma\colon D^* \rightarrow E^*$ between star sorts is \emph{homogeneous} if $\sigma(\alpha u) = \alpha \sigma(u)$.
Notice that a definable map $\sigma \colon D^* \rightarrow E^*$ is homogeneous if and only if the formula $d(\sigma u,v)$ is sub-homogeneous in $(u,v)$, and it is always witness-normalised in $u$.
If $\sigma$ is densely surjective, then it is homogeneous if and only if $d(\sigma u,v)$ is a star correspondence.
If $\sigma$ is bijective, then this is further equivalent to $d(u, \sigma^{-1} v)$ being a star correspondence.
\begin{dfn}
\label{dfn:UniversalStarSort}
Say that a star sort $D^*$ is \emph{universal} (as a star sort) if for every star sort $E^*$, every star correspondence $\varphi$ between $D^*$ and $E^*$, and every $\varepsilon > 0$, there exists a $1/2$-star correspondence $\psi$ such that, in addition, if $\psi(u,v_i) < 1$ for $i = 0,1$, then $\varphi(u,v_i) < \varepsilon$ and $d(v_0,v_1) < \varepsilon$.
\end{dfn}
This just says that condition \autoref{item:UniversalStarSortMapsOne} of \autoref{prp:UniversalStarSortMaps}, which may be easier to parse, holds ``approximately''.
The choice of the constants one half and one is quite arbitrary, and any two constants $0 < r_1 < r_2$ would do just as well (in the proof of \autoref{prp:UniversalStarSortMaps}\autoref{item:UniversalStarSortMapsZero} below, replace $2\psi \dotminus 1$ with $(\psi \dotminus r_1) / (r_2 - r_1)$).
\begin{prp}
\label{prp:UniversalStarSortMaps}
Let $D^*$ and $E^*$ be star sorts, $\varphi(u,v)$ a star correspondence on $D^* \times E^*$, and $\varepsilon > 0$.
\begin{enumerate}
\item
\label{item:UniversalStarSortMapsZero}
If $D^*$ is a universal star sort, then there exists $\psi$ as in \autoref{dfn:UniversalStarSort} that is a star correspondence (rather than a mere $1/2$-star correspondence).
\item
\label{item:UniversalStarSortMapsOne}
If $D^*$ is a universal star sort, then there exists a densely surjective homogeneous definable map $\sigma\colon D^* \rightarrow E^*$ such that $\varphi(u,\sigma u) \leq \varepsilon$.
\item
\label{item:UniversalStarSortMapsTwo}
If $D^*$ and $E^*$ are both universal star sorts, then the same can be achieved with $\sigma$ bijective.
\end{enumerate}
\end{prp}
\begin{proof}
For \autoref{item:UniversalStarSortMapsZero}, let $\psi$ be as in the conclusion of \autoref{dfn:UniversalStarSort}.
Then $2\psi \dotminus 1$ will do.
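Indeed, $2\psi \dotminus 1$ is again jointly sub-homogeneous (as in \autoref{rmk:EpsilonWitnessNormalised}), it is witness-normalised in each argument since $\psi$ is $1/2$-witness-normalised in each argument, and
\begin{gather*}
(2\psi \dotminus 1)(u,v_i) < 1
\quad \Longrightarrow \quad
\psi(u,v_i) < 1
\quad \Longrightarrow \quad
\varphi(u,v_i) < \varepsilon \ \text{and} \ d(v_0,v_1) < \varepsilon.
\end{gather*}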
For \autoref{item:UniversalStarSortMapsOne}, define a sequence of formulas $\varphi_n(u,v)$ as follows.
We start with $\varphi_0 = \varphi$, and we may assume that $0 < \varepsilon < 1$.
Then, assuming that $\varphi_n$ is a star correspondence, we find a star correspondence $\varphi_{n+1}$ such that $\varphi_{n+1}(u,v_i) < 1$ implies $\varphi_n(u,v_i) \leq \varepsilon$ and $d(v_0,v_1) < \varepsilon/2^n$.
Let $X_n \subseteq D^* \times E^*$ be the (type-definable) set defined by $\varphi_n \leq \varepsilon$ and $X = \bigcap X_n$.
By hypothesis, for every $u \in D^*$ and $n$, there exists $v \in E^*$ such that $(u,v) \in X_n$.
We also have $X_{n+1} \subseteq X_n$, so in a sufficiently saturated model there exists $v \in E^*$ such that $(u,v) \in X$.
By the second hypothesis on $\varphi_n$, such $v$ is unique, so $X$ is the graph of a definable map $\sigma$ (and $v$ belongs to any model that contains $u$).
By the same reasoning as above, for every $v \in E^*$ there exists $u \in D^*$ (not necessarily unique, so potentially only in a sufficiently saturated model) such that $(u,v) \in X$, so $\sigma$ is densely surjective.
Assume now that $v = \sigma u$, i.e., $(u,v) \in X$.
Since each $\varphi_n$ is sub-homogeneous, $(\alpha u,\alpha v) \in X$ for every $\alpha \in [0,1]$, i.e., $\alpha v = \sigma(\alpha u)$, and $\sigma$ is homogeneous.
Finally, since $\varphi_0 = \varphi$, we have $(u,\sigma u) \in X \subseteq X_0$, so $\varphi(u, \sigma u) \leq \varepsilon$.
For \autoref{item:UniversalStarSortMapsTwo} we use a back-and-forth version of the previous argument, with the roles of $D^*$ and $E^*$ reversed at odd steps.
\end{proof}
Notice that the zero formula is (trivially) a star correspondence on any two star sorts.
Therefore, if a universal star sort exists, then it is unique, up to a homogeneous definable bijection.
\begin{lem}
\label{lem:UniversalStarSortLimit}
Let $(D^*_n)$ be an inverse system of star sorts, where each $\pi_n\colon D^*_{n+1} \rightarrow D^*_n$ is surjective and homogeneous.
\begin{enumerate}
\item The inverse limit $D^* = \varprojlim D^*_n$ is a star sort, with the natural action $\alpha (u_n) = (\alpha u_n)$ and the distance proposed in \autoref{exm:StarSortConstruction}.
\item A star correspondence between $D^*$ and $E^*$ that factors through $D^*_n \times E^*$ is the same thing as a star correspondence between $D^*_n$ and $E^*$.
\item In order for $D^*$ to be a universal star sort, it is enough for it to satisfy the condition of \autoref{dfn:UniversalStarSort} for star-correspondences $\varphi$ that factor through $D^*_n \times E^*$ for some $n$.
\end{enumerate}
\end{lem}
\begin{proof}
The first two assertions are fairly evident.
In what follows, we are going to identify a formula $\varphi(u_n,v)$ on $D^*_n \times E^*$ with the formula $\varphi\bigl( \pi_n(u), v \bigr)$ on $D^* \times E^*$, which is essentially what the second point says.
For the last one, say that $\varphi$ is a star correspondence between $D^*$ and $E^*$, and let $\varepsilon > 0$.
For $n$ large enough we may find a formula $\varphi_1(u_n,v)$ on $D^*_n \times E^*$ such that $\varphi \geq \varphi_1 \geq \varphi \dotminus \varepsilon$ (with the identification proposed in the previous paragraph).
Since $\varphi$ is jointly sub-homogeneous, so is $\varphi \dotminus \varepsilon$.
Using the construction of \autoref{prp:SubHomogeneousFromArbitrary}, this implies that for large enough $k$ we have
\begin{gather*}
\varphi \geq \SH_k \varphi \geq \SH_k \varphi_1 \geq \SH_k (\varphi\dotminus\varepsilon) \geq \varphi \dotminus 2\varepsilon.
\end{gather*}
Since $\varphi' = \SH_k \varphi_1$ is jointly sub-homogeneous, it is a star correspondence, and it factors through $D^*_n \times E^*$.
Assume now that $\psi(u,v)$ exists, as per \autoref{dfn:UniversalStarSort}, for $\varphi'$ and $\varepsilon$.
In particular, if $\psi(u,v) < 1$, then $\varphi'(u,v) < \varepsilon$, so $\varphi(u,v) < 3\varepsilon$, which is good enough.
\end{proof}
\section{Sorts with witnesses}
\label{sec:Witnesses}
In this section, we provide an explicit construction of a universal star sort.
We follow a path similar to the construction of $D_\Phi$ in \cite{BenYaacov:ReconstructionGroupoid}, seeking a sort that contains ``all witnesses''.
Let us consider first the case of a single formula $\varphi(x,y)$ on $D \times E$, which we assume to be witness-normalised (namely, such that $\inf_y \varphi = 0$, see \autoref{dfn:WitnessNormalisedFormula}).
The sort $D$ is viewed as the sort of \emph{parameters}, and $E$ is the sort of potential \emph{witnesses}.
One may then wish to consider the set of ``parameters with witnesses'', namely the collection of all pairs $(x,y)$ such that $\varphi(x,y) = 0$, but this may be problematic for several reasons.
First of all, in a fixed (non-saturated) structure, for all $a$ there exist $b$ such that $\varphi(a,b)$ is arbitrarily small, but not necessarily such that $\varphi(a,b) = 0$.
This can be overcome by allowing an error, e.g., by considering the solution set of $\varphi(x,y) \leq \varepsilon$ for some $\varepsilon > 0$.
In fact, it is enough to consider the solution set of $\varphi(x,y) \leq 1$: if we want a smaller error, we need only replace $\varphi$ with $\varphi/\varepsilon$.
A second, and more serious issue, is that the resulting set(s) need not be definable.
That is to say that it may happen that $1 < \varphi(a,b) < 1+\varepsilon$ for arbitrarily small $\varepsilon > 0$ without there existing a pair $(a',b')$ close to $(a,b)$ such that $\varphi(a',b') \leq 1$.
We can solve this by allowing a \emph{variable error}, considering triplets $(r,x,y)$ where $r \in \bR$ and $\varphi(x,y) \leq r$.
Now, if $\varphi(x,y) < r + \varepsilon$, then the triplet $(r,x,y)$ is very close to $(r+\varepsilon,x,y)$, which does belong to our set.
This may seem too easy, and raises some new issues.
For example, if we allow errors greater than the bound for $\varphi$, then the condition $\varphi(x,y) \leq r$ becomes vacuous.
This is not, in fact, a real problem, since soon enough we are going to let $\varphi$ vary (or more precisely, consider an infinite family of formulas simultaneously), and any finite bound $r$ will be meaningful for \emph{some} of the formulas under consideration.
However, in order for the previous argument to work, $r$ cannot be bounded (we must always be able to replace it with $r + \varepsilon$).
By compactness, $r = +\infty$ must be allowed as well -- and now there is no way around the fact that $\varphi(x,y) \leq \infty$ is vacuous, regardless of $\varphi$.
We seem to be chasing our own tail, each time shovelling the difficulty underneath a different rug -- indeed, a complete solution is impossible, or else we could construct a universal Skolem sort, which was shown in \cite{BenYaacov:ReconstructionGroupoid} to be impossible in general.
What we propose here is a ``second best'': allow infinite error, but use the formalism of star sorts to identify all instances with infinite error as the distinguished root element.
Thus, at the root, all information regarding the (meaningless) witnesses will be lost, while every point outside the root will involve finite error, and therefore meaningful witnesses.
Since we want the root to be at zero, rather than at infinity, we replace $r \in [1,\infty]$ with $\alpha = 1/r \in [0,1]$.
Let $D^*$ be a star sort, $E$ a sort.
The set $D^* * E = \{ u * y : u \in D^*, \, y \in E\}$, as per \autoref{exm:GeneralisedCone}, is again a star sort, in which $0 * y = 0$ regardless of $y$.
\begin{lem}
\label{lem:OneWitness}
Let $D^*$ be a star sort, $E$ a sort, and let $\varphi(u,y)$ be a formula on $D^* \times E$, witness-normalised and sub-homogeneous in $u$.
Then
\begin{gather*}
D^*_\varphi
= \bigl\{ u * y : u \in D^* \ \text{and} \ \varphi(u,y) \leq 1 \bigr\}
\subseteq D^* * E
\end{gather*}
is again a star sort, and the natural projection map $D^*_\varphi \rightarrow D^*$, sending $u * y \mapsto u$, is surjective.
\end{lem}
\begin{proof}
We may view $\varphi$ as a formula on $D^* * E$, since, by sub-homogeneity, $\varphi(0,y) = 0$ regardless of $y$.
The set $D^*_\varphi$ is the zero-set in $D^* * E$ of the formula $\varphi \dotminus 1$.
Assume now that $a * b \in D^* * E$ and $\varphi(a,b) \dotminus 1 < \delta$.
Then $(1-\delta) a * b \in D^*_\varphi$, and it is as close as desired (given $\delta$ small enough) to $a * b$.
Therefore, $D^*_\varphi$ is definable.
Since $\varphi$ is sub-homogeneous, $D^*_\varphi$ is closed under multiplication by $\alpha \in [0,1]$ and is therefore a star sort.
Since $\varphi$ is witness-normalised, the projection is onto.
\end{proof}
Let us iterate this construction.
Recall from \autoref{rmk:GeneralisedConeIterated} that $(*D) * E = *(D \times E)$, identifying $(\alpha x) * y = \alpha(x,y)$.
Therefore, if $D^* \subseteq *D$ (with the induced star structure), then $D^* * E \subseteq *(D \times E)$.
\begin{dfn}
\label{dfn:StarPhi}
Fix a sort $D$, as well as a sequence of formulas $\Phi = (\varphi_n)$, where each $\varphi_n(x_{<n},y)$ is a witness-normalised formula on $D^n \times D$.
Since $\Phi$ determines the sort $D$, we shall say that $\Phi$ is a sequence \emph{on} $D$.
We then define
\begin{gather*}
D^*_n = \bigl\{ \alpha x_{<n} : \alpha \varphi_k(x_{<k},x_k) \leq 1 \ \text{for all} \ k < n \bigr\} \subseteq *(D^n), \\
D^*_\Phi = \bigl\{ \alpha x : \alpha \varphi_n(x_{<n},x_n) \leq 1 \ \text{for all} \ n \bigr\} \subseteq *(D^\bN).
\end{gather*}
\end{dfn}
In other words,
\begin{gather*}
D^*_0 = [0,1] = *(\text{singleton}), \qquad
D^*_{n+1} = (D^*_n)_{\varphi_n'}, \qquad
D^*_\Phi = \varprojlim D^*_n,
\end{gather*}
where $\varphi_n'(\alpha x_{<n},y) = \alpha \varphi_n(x_{<n},y)$.
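Note that $\varphi_n'$ is well defined (its value at the root is zero, regardless of $y$), sub-homogeneous and witness-normalised in its first argument:
\begin{gather*}
\varphi_n'\bigl( \beta (\alpha x_{<n}), y \bigr) = \beta \alpha \varphi_n(x_{<n},y) = \beta \varphi_n'(\alpha x_{<n},y),
\qquad
\inf_y \, \varphi_n'(\alpha x_{<n},y) = \alpha \inf_y \, \varphi_n(x_{<n},y) = 0.
\end{gather*}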
By \autoref{lem:OneWitness}, each $D^*_n$ is a star sort, and the natural projection $D^*_{n+1} \rightarrow D^*_n$ is onto.
By \autoref{lem:SortInverseLimit}, $D^*_\Phi = \varprojlim D^*_n$ is also a sort, and therefore a star sort by \autoref{lem:UniversalStarSortLimit}.
Notice that any formula in $D^*_n$ can be viewed, implicitly, as a formula in $D^*_k$ for any $k \geq n$, or even in $D^*_\Phi$, via the projections $D^*_k \twoheadrightarrow D^*_n$ or $D^*_\Phi \twoheadrightarrow D^*_n$ (this is, essentially, an addition of dummy variables).
In what follows, variables in $D^*_n$ will be denoted by $u_n$ or $\alpha x_{<n}$ (where $x_{<n} \in D^n$), and similarly, variables in $D^*_\Phi$ will be denoted by $u$ or $\alpha x$.
\begin{dfn}
\label{dfn:RichSequence}
We say that the sequence $\Phi$ on a sort $D$ is \emph{rich} if $D$ admits a definable projection onto any countable product of basic sorts, and for every witness-normalised formula $\varphi(x_{<n},y)$ in $D^n \times D$ and every $\varepsilon > 0$ there exist arbitrarily big $k \geq n$ such that $|\varphi_k(x_{<k},y) - \varphi(x_{<n},y)| < \varepsilon$ (so $\varphi$ is viewed as a formula in $x_{<k},y$ through the addition of dummy variables).
\end{dfn}
\begin{lem}
\label{lem:RichSequenceExists}
Under our standing hypothesis that the language is countable, with countably many basic sorts, there exists a rich sequence $\Phi$ (on an appropriate sort $D$).
Moreover, we may construct $\Phi$ (and $D$) in a manner that only depends on the language and not on the theory of any specific structure.
\end{lem}
\begin{proof}
For $D$ we may take the (countable) product of all infinite countable powers of the basic sorts.
For each $k$ we may choose a countable dense family of formulas on $D^k \times D$, call them $\psi_{k,m}(x_{<k},y)$.
Replacing them with $\chi_{k,m}(x_{<k},y) = \psi_{k,m}(x_{<k},y) - \inf_z \psi_{k,m}(x_{<k},z)$, we obtain a countable dense family of witness-normalised (in $x_{<k}$) formulas on $D^k \times D$.
We may now construct a rich sequence $\Phi$ in which each $\chi_{k,m}$ occurs infinitely often (with additional dummy $x$ variables).
\end{proof}
Let $\Phi = (\varphi_n)$ (and $D$) be fixed, with $\Phi$ rich.
We define a formula on $D^n$ by
\begin{gather*}
\rho_n(x_{<n}) = \frac{1}{1 \vee \bigvee_{k < n} \varphi_k(x_{<k},x_k)}.
\end{gather*}
In other words, $\rho_n(x_{<n})$ is the maximal $\alpha \in [0,1]$ such that $\alpha x_{<n} \in D^*_n$, or equivalently, such that $x_{<n}$ can be extended to $x$ with $\alpha x \in D^*_\Phi$.
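Indeed, for $\alpha \in [0,1]$,
\begin{gather*}
\alpha x_{<n} \in D^*_n
\quad \Longleftrightarrow \quad
\alpha \varphi_k(x_{<k},x_k) \leq 1 \ \text{for all} \ k < n
\quad \Longleftrightarrow \quad
\alpha \leq \rho_n(x_{<n}).
\end{gather*}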
\begin{lem}
\label{lem:StarPhiCorrespondenceRho}
Let $\Phi = (\varphi_n)$ be rich.
Let $E^*$ be another star sort, $\psi(u_n,v)$ a star correspondence on $D^*_\Phi \times E^*$ that factors through $D^*_n \times E^*$, and $\varepsilon > 0$.
Then $\psi$ factors through $D^*_k \times E^*$ for every $k \geq n$, and for every large enough $k$ the formula $\psi_1^k(x_{<k},v) = \psi\bigl(\rho_k(x_{<k}) x_{<n}, v \bigr)$ is $\varepsilon$-witness-normalised in either argument.
\end{lem}
\begin{proof}
If $k \geq n$, then $\rho_k(x_{<k}) \leq \rho_n(x_{<n})$, so $\rho_k(x_{<k}) x_{<n} \in D^*_n$.
Since $\psi(u_n,v)$ is witness-normalised in $u_n$, $\psi_1^k(x_{<k},v)$ is witness-normalised in $x_{<k}$.
It is left to show that for $k$ large enough, it is also $\varepsilon$-witness-normalised in $v$.
Our hypothesis regarding $D$ implies, among other things, that there exists a surjective definable map $\chi\colon D \rightarrow [0,1]$ (namely, a surjective formula).
Therefore, for a constant $C$ that we shall choose later, there exists $m \geq n$ such that $C \chi(y) \geq \varphi_m(x_{<m},y) \geq C \chi(y) - 1/C$.
Assume that $k > m$.
For every possible value of $v \in E^*$, which we consider as fixed, there exists $\alpha x_{<n} \in D^*_n$ such that $\psi(\alpha x_{<n},v) < \varepsilon$.
We can always extend $x_{<n}$ to $x_{<m}$ in such a manner that $\rho_m(x_{<m}) = \rho_n(x_{<n}) \geq \alpha$, so $\alpha x_{<m} \in D^*_m$.
We choose $x_m$ so $\chi(x_m) = (\alpha C \vee 1)^{-1}$, and extend $x_{\leq m}$ to $x_{<k}$ so $\rho_k(x_{<k}) = \rho_{m+1}(x_{\leq m})$.
If $\alpha C \geq 1$, then $1/\alpha \geq \varphi_m(x_{<m},x_m) \geq 1/\alpha - 1/C$, so $\alpha \leq \rho_{m+1}(x_{\leq m}) \leq \alpha (1-\alpha/C)^{-1}$.
Having chosen $C$ large enough, $\rho_k(x_{<k}) = \rho_{m+1}(x_{\leq m})$ is as close to $\alpha$ as desired.
If $\alpha C < 1$, then $0 \leq \alpha \leq 1/C$ and $0 < \rho_{m+1}(x_{\leq m}) \leq 1/(C - 1/C)$, so the same conclusion holds.
Either way, having chosen $C$ large enough, $\psi_1^k(x_{<k},v)$ is as close as desired to $\psi(\alpha x_{<n},v)$, and in particular $\psi_1^k(x_{<k},v) < 2\varepsilon$, which is good enough.
\end{proof}
Given our hypothesis regarding $D$, every sort can be expressed as a definable subset of a quotient of $D$ by a pseudo-distance.
Such a quotient will be denoted $(D,\overline{d})$ (which includes an implicit step of identifying points at $\overline{d}$-distance zero).
\begin{conv}
\label{conv:StarPhiEStar}
From this point, and through the proof of \autoref{lem:StarPhiCorrespondenceUniversal}, we fix a star sort $E^*$.
By the preceding remark, we may assume that $(E^*,d_{E^*}) \subseteq (D,\overline{d})$ isometrically, where $\overline{d}$ is a definable pseudo-distance on $D$ which we also fix.
In particular, the distance on $E^*$ will also be denoted by $\overline{d}$.
If $y \in D$, we denote its image in the quotient $(D,\overline{d})$ by $\overline{y}$.
\end{conv}
It is worthwhile to point out that if $\alpha x \in D^*_\Phi$, then for every $k \in \bN$ and $\delta > 0$,
\begin{gather}
\label{eq:DPhiInequality}
(\alpha \delta/2) \Bigl(\varphi_k(x_{<k},x_k) + 1 \Bigr)
=
(\delta/2) \Bigl(\alpha \varphi_k(x_{<k},x_k) + \alpha \Bigr)
\leq \delta.
\end{gather}
Given $n \leq k$ and $\delta > 0$, let us define for $\alpha x \in D^*_\Phi$, $v \in E^*$ and $y \in D$:
\begin{gather*}
\chi^n(\alpha x,y,v) = \inf_{w \in E^*} \, \Bigl[ \overline{d}\bigl( \alpha \rho_n(x_{<n})^{-1} w, v \bigr) + \alpha \overline{d}(\overline{y},w) \Bigr],
\\
\chi^{n,k}(\alpha x,v) = \chi^n(\alpha x, x_k,v) = \inf_{w \in E^*} \, \Bigl[ \overline{d}\bigl( \alpha \rho_n(x_{<n})^{-1} w, v \bigr) + \alpha \overline{d}(\overline{x_k},w) \Bigr].
\end{gather*}
Let us explain this.
First of all, since $\alpha x \in D^*_\Phi$, we must have $\alpha \leq \rho_n(x_{<n})$, so the expression $\alpha \rho_n(x_{<n})^{-1} w$ makes sense.
Also, if $\alpha = 0$, then $\chi^n(\alpha x,y,v) = \|v\|$ does not depend on $x$, so this is well defined.
Now, let $y \in D$ (possibly, $y = x_k$ for some $k \geq n$, but this will happen later).
We want $v$ to be equal to $\alpha \rho_n(x_{<n})^{-1} \overline{y}$, and in particular, we want $\overline{y}$ to belong to $E^*$.
We may not multiply by $\alpha \rho_n(x_{<n})^{-1}$ outside $E^*$, but we may quantify over $E^*$.
Therefore, we ask for $\overline{y}$ to be very close to some $w \in E^*$, and for $\alpha \rho_n(x_{<n})^{-1} w$, which always makes sense, to be close to $v$.
\begin{lem}
\label{lem:StarPhiCorrespondenceChi}
The formula $\chi^{n,k}(u,v)$ has the following properties:
\begin{enumerate}
\item
\label{item:StarPhiCorrespondenceChiSubHomogeneous}
It is jointly sub-homogeneous in its arguments.
\item
\label{item:StarPhiCorrespondenceChiFunctional}
For every $n,\varepsilon > 0$ there exists $\delta = \delta(n,\varepsilon) > 0$ such that, if $\chi^n(u,y,v_i) \leq \delta$ for $i = 0,1$, then $\overline{d}(v_0,v_1) < \varepsilon$.
In particular, for any $k$, if $\chi^{n,k}(u,v_i) \leq \delta$ for $i = 0,1$, then $\overline{d}(v_0,v_1) < \varepsilon$.
\item
\label{item:StarPhiCorrespondenceChiWitnessNormalised}
Assuming that $\varphi_k(x_{<k},y) \geq 2 \overline{d}(\overline{y},E^*) / \delta - 1$, the formula $\chi^{n,k}(u,v)$ is $\delta$-witness-normalised in $u$.
\end{enumerate}
\end{lem}
\begin{proof}
Item \autoref{item:StarPhiCorrespondenceChiSubHomogeneous} is immediate (among other things, we use the fact that $\overline{d}$ is sub-homogeneous on $E^*$).
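Indeed, for $\beta \in [0,1]$, since $\alpha \rho_n(x_{<n})^{-1} w$ and $v$ both lie in $E^*$,
\begin{gather*}
\overline{d}\bigl( \beta \alpha \rho_n(x_{<n})^{-1} w, \beta v \bigr) + \beta \alpha \overline{d}(\overline{y},w)
\leq
\beta \Bigl[ \overline{d}\bigl( \alpha \rho_n(x_{<n})^{-1} w, v \bigr) + \alpha \overline{d}(\overline{y},w) \Bigr],
\end{gather*}
and taking the infimum over $w \in E^*$ yields $\chi^n(\beta \alpha x, y, \beta v) \leq \beta \chi^n(\alpha x, y, v)$.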
For \autoref{item:StarPhiCorrespondenceChiFunctional}, assume that $\chi^n(\alpha x,y,v_i) = 0$.
Then either $\alpha = 0$, in which case $v_i = 0$, or $\alpha > 0$, in which case we have $\overline{y} \in E^*$ and $v_i = \alpha \rho_n(x_{<n})^{-1} \overline{y}$.
Either way, $v_0 = v_1$, and in particular $\overline{d}(v_0,v_1) < \varepsilon$.
The conclusion follows by compactness.
For \autoref{item:StarPhiCorrespondenceChiWitnessNormalised}, let $u = \alpha x \in D^*_\Phi$.
By \autoref{eq:DPhiInequality} we have $\alpha \overline{d}(\overline{x_k},E^*) \leq \delta$.
Choose $w \in E^*$ such that $\alpha \overline{d}(\overline{x_k},w) \leq \delta$, and let $v = \alpha \rho_n(x_{<n})^{-1} w$.
Then $\chi^{n,k}(u,v) \leq \delta$.
\end{proof}
\begin{lem}
\label{lem:StarPhiCorrespondenceUniversal}
Let $\Phi = (\varphi_n)$ be rich.
Let $E^* \subseteq (D,\overline{d})$ be a star sort, as per \autoref{conv:StarPhiEStar}, $\psi(u,v)$ a star correspondence on $D^*_\Phi \times E^*$, and $\varepsilon > 0$.
Then there exist $n \leq k$ and $\delta > 0$ such that $\chi^{n,k}(u,v)$ is a $\delta$-star correspondence between $D^*_\Phi$ and $E^*$, and in addition, if $\chi^{n,k}(u,v_i) \leq 2\delta$ for $i = 0,1$, then $\psi(u,v_i) \leq \varepsilon$ and $\overline{d}(v_0,v_1) < \varepsilon$.
\end{lem}
\begin{proof}
By \autoref{lem:UniversalStarSortLimit} and \autoref{lem:StarPhiCorrespondenceRho}, for some $n$ (in fact, any $n$ large enough), we may assume that $\psi$ is a star correspondence that factors as $\psi(u_n,v)$ through $D^*_n \times E^*$, and that $\psi_1(x_{<n},v) = \psi\bigl( \rho_n(x_{<n}) x_{<n}, v \bigr)$ is $\varepsilon$-witness-normalised in either argument.
In particular, $\psi_1 \dotminus \varepsilon$ is witness-normalised.
We may extend $\psi_1 \dotminus \varepsilon$ to $D^n \times (D,\overline{d})$, obtaining a formula $\psi_2(x_{<n},y)$ on $D^n \times D$, which is uniformly $\overline{d}$-continuous in $y$.
Since $\psi_1 \geq 0$, we may assume that $\psi_2 \geq 0$, and even that
\begin{gather*}
\psi_2(x_{<n},y) \geq \overline{d}(\overline{y},E^*).
\end{gather*}
Let us choose $\delta > 0$ small enough, based on choices made so far.
Since $\psi_2(x_{<n},y)$ is witness-normalised in $x_{<n}$ (choosing witnesses $\overline{y} \in E^*$), there exists $k \geq n$ such that $|\varphi_k - 2 \psi_2/\delta| \leq 1$.
By \autoref{lem:StarPhiCorrespondenceChi}, having chosen $\delta$ small enough, the formula $\chi^{n,k}(u,v)$ is jointly sub-homogeneous, $\delta$-witness-normalised in $u$, and $\chi^{n,k}(u,v_i) \leq 2 \delta$ implies $\overline{d}(v_0,v_1) < \varepsilon$.
There are two more properties we need to check.
First, we need to check that $\chi^{n,k}(u,v)$ is $\delta$-witness-normalised in $v$.
Indeed, given $v = \overline{y} \in E^*$, we know that there exists a sequence $x_{<n} \in D^n$ such that $\psi_1(x_{<n},v) \dotminus \varepsilon = 0$.
Let $\alpha = \rho_n(x_{<n})$, so $\alpha x_{<n} \in D^*_n$, and extend the sequence $x_{<n}$ to $x_{<k}$ keeping $\alpha x_{<k} \in D^*_k$.
We now choose $x_k = y$, so $\psi_2(x_{<n},x_k) = 0$ and $\varphi_k(x_{<k},x_k) \leq 1$.
Therefore, $\alpha x_{\leq k} \in D^*_{k+1}$, and we may complete the sequence to $x \in D^\bN$ such that $\alpha x \in D^*_\Phi$.
Then $\chi^{n,k}(\alpha x,v) = 0$, as witnessed by $w = v$ (recalling that we chose $\alpha = \rho_n(x_{<n})$).
Second, we need to check that, having chosen $\delta$ appropriately, $\chi^{n,k}(\alpha x,v) \leq 2\delta$ implies $\psi(\alpha x,v) \leq \varepsilon$.
Indeed, following a path similar to the proof of \autoref{lem:StarPhiCorrespondenceChi}\autoref{item:StarPhiCorrespondenceChiFunctional}, assume that
\begin{gather*}
\chi^n(\alpha x,y,v) = \alpha \psi_2(x_{<n},y) = 0.
\end{gather*}
If $\alpha = 0$, then $v = 0$ and $\psi(\alpha x,v) = \psi(0,0) = 0$.
If $\alpha > 0$, then $\overline{y} \in E^*$, and $v = \alpha \rho_n(x_{<n})^{-1} \overline{y}$, and $\psi\bigl( \rho_n(x_{<n}) x,\overline{y} \bigr) \dotminus \varepsilon = \psi_2(x_{<n},y) = 0$.
Since $(\alpha x,v) = \alpha \rho_n(x_{<n})^{-1} \bigl( \rho_n(x_{<n}) x,\overline{y} \bigr)$, it follows that $\psi(\alpha x,v) \leq \varepsilon$ in this case as well.
By compactness, for $\delta$ small enough, if $\chi^n(\alpha x,y,v) \leq 2\delta$ and $\alpha \psi_2(x_{<n},y) \leq \delta$, then $\psi(\alpha x,v) < 2\varepsilon$.
This last argument does not depend on $k$, so we may assume that $\delta$ was chosen small enough to begin with.
By \autoref{eq:DPhiInequality}, the inequality $\alpha \psi_2(x_{<n},x_k) \leq \delta$ is automatic when $\alpha x \in D^*_\Phi$.
If, in addition, we assume that $\chi^{n,k}(\alpha x,v) = \chi^n(\alpha x,x_k,v) \leq 2\delta$, then $\psi(\alpha x,v) < 2\varepsilon$, which is good enough (we could have started with $\varepsilon/2$), and the proof is complete.
\end{proof}
\begin{thm}
\label{thm:RichSequenceStarSortUniversal}
Let $\Phi$ be a rich sequence.
Then $D^*_\Phi$ is universal.
In particular, a universal star sort exists.
\end{thm}
\begin{proof}
Immediate from \autoref{lem:StarPhiCorrespondenceUniversal}, using the formula $\chi^{n,k}/(2\delta)$.
\end{proof}
\section{Further properties of the universal star sort}
\label{sec:UniversalStarSort}
In \autoref{sec:StarSort} we showed that the universal star sort, if it exists, is unique up to a homogeneous definable bijection, and in \autoref{sec:Witnesses} we showed that one exists as $D^*_\Phi$ for any rich sequence $\Phi$.
Let us prove a few additional properties of this special sort.
\begin{conv}
\label{conv:UniversalStarSort}
From now on, $D^*$ denotes any universal star sort.
Since it is unique up to a homogeneous definable bijection, multiplication by $\alpha \in [0,1]$ is well defined regardless of the construction we choose for $D^*$.
In particular, its root is well defined.
Notice that we can construct it as $D^*_\Phi$ in a manner that only depends on the language (and not on $T$): we obtain a universal star sort for $T$ simply by restricting our consideration of this sort to models of $T$.
\end{conv}
The uniqueness of $D^*$ means that we may choose it to be $D^*_\Phi$ for any rich $\Phi$, and in particular, that we are allowed some leverage in choosing a convenient sequence $\Phi$, as in the proof of the following result.
\begin{thm}
\label{thm:UniversalStarSortReconstruction}
The universal star sort $D^*$ is a coding sort for any theory $T$ (see \autoref{dfn:CodingSort}), with the exceptional set being the root $D^0 = \{0\}$.
\end{thm}
\begin{proof}
Being a coding sort (with some exceptional set) is invariant under definable bijections (that preserve the exceptional set).
Therefore, despite the fact that $D^*$ is only well defined up to a homogeneous definable bijection, our statement makes sense.
We may choose a rich sequence $\Phi$ on a sort $D$, as per \autoref{dfn:RichSequence}, and take $D^* = D_\Phi^*$.
Let $M \vDash T$ and $\alpha a \in D^*_\Phi(M) \setminus \{0\}$, and let $N = \dcl(\alpha a) \subseteq M$, necessarily a closed set (if $M$ is multi-sorted, closed in each sort separately).
Then $\alpha \neq 0$, and $N = \dcl(a)$.
In order to show that $N \preceq M$, it will suffice to show that it satisfies the Tarski-Vaught criterion: for every formula $\varphi(x,y)$, where $x$ is in the sort $D^\bN$ and $y$ in one of the basic sorts,
\begin{gather*}
\inf_y \, \varphi(a,y) = \inf_{b \in N} \, \varphi(a,b),
\end{gather*}
where the truth values are calculated in $M$.
Since $D$ projects, by hypothesis, onto any basic sort, we replace $\varphi$ with its pull-back and assume that it is a formula on $D^\bN \times D$.
Replacing $\varphi$ with $\varphi(x,y) - \inf_z \varphi(x,z)$, we may assume that $\varphi$ is witness-normalised and the left hand side vanishes.
Then it is enough to show that for every $\varepsilon > 0$ there exists $b \in N$ such that $\varphi(a,b) < \varepsilon$, and replacing $\varphi$ with an appropriate multiple, it is enough to require $\varphi(a,b) \leq 1 + 1/\alpha$.
Choosing $n$ such that $\varphi_n$ is a good-enough approximation of $\varphi$, it is enough to find $b \in D(N)$ such that $\varphi_n(a_{<n},b) \leq 1/\alpha$.
For this, $b = a_n$ will do.
This proves the coding models property of \autoref{dfn:CodingSort}.
For the density property, assume that $M$ is separable, and let $\alpha a \in D^*_\Phi(M)$.
Assume first that $\alpha > 0$.
We may freely assume that $\varphi_k = 0$ infinitely often.
Let us fix $n_0$, and define a sequence $b \in D^\bN$ as follows.
\begin{itemize}
\item We start with $b_{<n_0} = a_{<n_0}$.
\item Having chosen $b_{<k}$ (for $k \geq n_0$) such that $\alpha b_{<k} \in D^*_k$, we can always choose $b_k \in D(M)$ so $\alpha b_{\leq k} \in D^*_{k+1}$.
\item If $\varphi_k = 0$, then we may choose any $b_k \in D(M)$ that we desire.
Since this happens infinitely often, we may ensure that $\dcl(b) = M$.
\end{itemize}
In the end, $\alpha b \in D^*_\Phi$ and $\dcl(\alpha b) = \dcl(b) = M$, so $\alpha b$ codes $M$.
Taking $n_0$ large enough, $\alpha b$ is as close as desired to $\alpha a$.
This argument shows, in particular, that there exists $\alpha a \in D^*_\Phi(M)$ that codes $M$.
Let $\alpha_n = \alpha / 2^n$.
Then $\alpha_n a \in D^*_\Phi(M)$ codes $M$ for each $n$, and $\alpha_n a \rightarrow 0$, so the root can also be approximated by codes for $M$.
\end{proof}
\begin{dfn}
\label{dfn:StarSortGroupoid}
Let $T$ be any theory in a countable language, and $D^*$ its universal star sort.
View it as a coding sort, as per \autoref{thm:UniversalStarSortReconstruction}, with exceptional set $D^0 = \{0\}$, and define the corresponding groupoid, as per \autoref{dfn:CodingSortGroupoid}:
\begin{gather*}
\bG^*(T) = \bG_{D^*}(T).
\end{gather*}
\end{dfn}
We already know that this is an open Polish topological groupoid, with basis $\bB^*(T) \simeq \tS_{D^*}(T)$.
\begin{thm}
\label{thm:StarSortGroupoid}
The groupoid $\bG^*(T)$ is a complete bi-interpretation invariant for the class of theories in countable languages.
\end{thm}
\begin{proof}
On the one hand, we have seen that $D^*$, and therefore $\bG^*(T)$, only depends on the bi-interpretation class of $T$.
Conversely, by \autoref{thm:Reconstruction}, a theory bi-interpretable with $T$ (namely, the theory $T_{2D^*}$, up to some arbitrary choices of definable distance and symbols for the language) can be recovered from $\bG^*(T)$.
\end{proof}
Our last task is to calculate the basis $\tS_{D^*}(T)$ explicitly, and show how \autoref{thm:StarSortGroupoid} extends previous results, in a style similar to that of \autoref{rmk:ReconstructionUniversalSkloemSpecialCases}.
Let us fix a rich sequence $\Phi$ on a sort $D$, so we may take $D^* = D^*_\Phi$.
We also fix a formula $\chi(y)$ on $D$ that is onto $[0,1]$.
Finally, we may assume that $\varphi_n(x_{<n},y) = n \chi(y)$ for infinitely many $n$.
Let $X = \tS_{D^\bN}(T)$ and $Y = \tS_{D^*_\Phi}(T)$.
We may identify $\tS_{*D^\bN}(T)$ with $*X$, identifying $\tp(\alpha x)$ with $\alpha \tp(x)$ (here we need to assume that $T$ is complete, so there exists a unique possible complete type for $0 \in D^*_\Phi$).
This identifies $Y$ with a subset of $*X$, namely that of all $\alpha p$ where $p(x)$ implies that $\alpha x \in D^*_\Phi$, or equivalently, such that $\alpha \varphi_n(p) \leq 1$ for all $n$.
For $\alpha \in [0,1]$, let
\begin{gather*}
X_\alpha = \{ p \in X : \alpha p \in Y \}.
\end{gather*}
In particular, $X_0 = X$.
Define $\rho\colon X \rightarrow [0,1]$ by
\begin{gather*}
\rho(p) = \sup \, \{ \alpha : \alpha p \in Y \} = \sup \, \{ \alpha : p \in X_\alpha \}.
\end{gather*}
\begin{lem}
\label{lem:UniversalStarSortTypes1}
Let $\alpha > 0$.
Then for every $p \in X$ we have $\alpha \leq \rho(p)$ if and only if $p \in X_\alpha$, and $X_\alpha$ is compact, totally disconnected.
In particular, $\rho \colon X \rightarrow [0,1]$ is upper semi-continuous.
\end{lem}
\begin{proof}
For the first assertion, it is enough to notice that by compactness, the supremum is attained, namely, $p \in X_{\rho(p)}$.
It follows that the condition $\rho(p) \geq \alpha$ is equivalent to $p \in X_\alpha$, so it is closed, and $\rho$ is upper semi-continuous.
Assume that $\alpha q_i \in Y$ and $q_0 \neq q_1$.
Then for some finite $n$, there exists a formula $\psi(x_{<n})$ that separates $q_0$ from $q_1$, say $\psi(q_i) = i$.
We may also find a $[0,1]$-valued formula $\chi(y)$ on $D$ that attains (at least) the values $0$ and $1$.
By Urysohn's Lemma, there exists a formula $\varphi(x_{<n},y) \geq 0$ such that
\begin{gather*}
|\psi(x_{<n}) + \chi(y) - 1| \geq 1/3 \qquad \Longrightarrow \qquad \varphi(x_{<n},y) = 0, \\
|\psi(x_{<n}) + \chi(y) - 1| \leq 1/6 \qquad \Longrightarrow \qquad \varphi(x_{<n},y) = 17/\alpha + 42.
\end{gather*}
Since the formula $\chi$ attains both $0$ and $1$, the formula $\varphi(x_{<n},y)$ is witness-normalised, so there exists $k \geq n$ with $|\varphi - \varphi_k| \leq 1$.
Assume now that $\alpha p \in Y$.
Then $\varphi_k(x_{<k},x_k)^p \leq 1/\alpha$, so $\varphi(x_{<n},x_k)^p \leq 1/\alpha +1 < 17/\alpha + 42$ and $|\psi(x_{<n}) + \chi(x_k) - 1| > 1/6$.
This splits the set $X_\alpha$ into two clopen sets, defined by $\psi(x_{<n}) + \chi(x_k) > 7/6$ and $\psi(x_{<n}) + \chi(x_k) < 5/6$, respectively.
Since $\chi$ is $[0,1]$-valued, $q_0$ must belong to the latter and $q_1$ to the former, so they can be separated in $X_\alpha$ by clopen sets, completing the proof.
\end{proof}
\begin{lem}
\label{lem:UniversalStarSortTypes2}
The set $X_{>0} = \bigl\{ p \in X : \rho(p) > 0 \bigr\} = \bigcup_{\alpha > 0} X_\alpha$ is totally disconnected, admitting a countable family of clopen sets $(U_n : n \in \bN)$ that separates points.
\end{lem}
\begin{proof}
We may write $X_{>0}$ as $\bigcup_k X_{2^{-k}}$.
Each $X_{2^{-k}}$ is compact, totally disconnected, and it is metrisable by countability of the language.
Therefore, it admits a basis of clopen sets.
The inclusion $X_{2^{-k}} \subseteq X_{2^{-k-1}}$ is a topological embedding of compact totally disconnected spaces.
Therefore, if $U \subseteq X_{2^{-k}}$ is clopen, then we may find a clopen $U' \subseteq X_{2^{-k-1}}$ such that $U' \cap X_{2^{-k}} = U$.
Proceeding in this fashion, we may find a clopen $\overline{U} \subseteq X_{>0}$ such that $\overline{U} \cap X_{2^{-k}} = U$.
We can therefore produce a countable family of clopen sets $(U_n : n \in \bN)$ in $X_{>0}$ such that for each $k$, $\bigl( U_n \cap X_{2^{-k}} : n \in \bN \bigr)$ is a basis of clopen sets for $X_{2^{-k}}$, and in particular separates points.
It follows that $(U_n)$ separates points in $X_{>0}$.
\end{proof}
Given this family $(U_n)$, we may define a map $\theta_0\colon X_{>0} \rightarrow 2^\bN$, where $\theta_0(p)_n = 0$ if $p \in U_n$ and $\theta_0(p)_n = 1$ otherwise.
It is continuous by definition, and injective since the sequence $(U_n)$ separates points.
If $\alpha p \in Y$, then either $\alpha = 0$ or $p \in X_{>0}$ (or possibly both), and we may define
\begin{gather*}
\theta(\alpha p) = \alpha \theta_0(p) \in *2^\bN,
\end{gather*}
where $\theta(0) = \theta(0 \cdot p) = 0$ regardless of $p$.
It is clearly continuous at $0$, and at every point of $Y$ (since $\theta_0$ is continuous).
It is also injective on $Y$.
Since $Y$ is compact, $\theta\colon Y \rightarrow *2^\bN$ is a topological embedding.
\begin{lem}
\label{lem:UniversalStarSortTypes3}
The set of $\rho(p) p$ for $p \in X_{>0}$ is dense in $Y$.
\end{lem}
\begin{proof}
We already know that $\rho(p) p \in Y$.
Assume now that $U \subseteq Y$ is open and non-empty, so it must contain some point $\alpha p$ with $\alpha > 0$.
We may assume that
\begin{gather*}
U = \Bigl\{ \beta q \in Y : |\beta - \alpha| < \varepsilon, \ q \in V \Bigr\},
\end{gather*}
where $V$ is an open neighbourhood of $p$ in $X$.
The set $V$ may be taken to be defined by a condition $\psi > 0$, where $\psi(x_{<n})$ only involves finitely many variables.
By hypothesis on $\Phi$, possibly increasing $n$, we may assume that $\varphi_n(x_{<n},y) = n \chi(y)$, and we may further assume that $\alpha > 1/n$.
Choose a realisation $a$ of $p$.
Let $b_{<n} = a_{<n}$ and choose $b_n$ so $\chi(b_n) = 1/(n\alpha)$.
Then $\varphi_n(b_{<n},b_n) = 1/\alpha$, so $\rho_{n+1}(b_{\leq n}) = \alpha$, and we may extend $b_{\leq n}$ to a sequence $b$ such that $\rho\bigl(\tp(b)\bigr) = \alpha$.
In particular, $q = \tp(b) \in V \cap X_{>0}$ and $\alpha q = \rho(q)q \in U$.
\end{proof}
Let us recall from Charatonik \cite{Charatonik:UniqueLelekFan} a few definitions and facts regarding fans.
The \emph{Cantor fan} is the space $*2^\bN$.
It is a connected compact metrisable topological space.
More generally, a \emph{fan} $F$ is a connected compact space that embeds in the Cantor fan.
An \emph{endpoint} of $F$ is a point $x \in F$ such that $F \setminus \{x\}$ is connected (or empty, in the extremely degenerate case where $F$ is reduced to a single point).
If the set of endpoints is dense in $F$, then $F$ is a \emph{Lelek fan}.
By the main theorem of Charatonik \cite{Charatonik:UniqueLelekFan}, the Lelek fan is unique up to homeomorphism.
\begin{prp}
\label{prp:UniversalStarSortTypesLelek}
Let $T$ be a complete theory.
Then $\tS_{D^*}(T)$, the type-space of the universal star sort $D^*$ in $T$, is homeomorphic to the Lelek fan.
\end{prp}
\begin{proof}
By \autoref{lem:UniversalStarSortTypes1} to \autoref{lem:UniversalStarSortTypes3}, the space $\tS_{D^*}(T)$ is a Lelek fan.
\end{proof}
This gives us a hint as to how to relate the universal star sort with previously known coding sorts referred to in the examples of \autoref{sec:Reconstruction}.
\begin{thm}
\label{thm:UniversalStarUninversalSkolem}
Assume $T$ admits a universal Skolem sort $D$ in the sense of \cite{BenYaacov:ReconstructionGroupoid}, and let $L$ denote the Lelek fan.
Then $L * D$ is a universal star sort.
\end{thm}
\begin{proof}
We may assume that $L \subseteq *2^\bN$, and moreover, that for every non-empty open subset $U \subseteq 2^\bN$ there exists $\alpha > 0$ and $t \in U$ such that $\alpha t \in L$ (otherwise, we may replace $2^\bN$ with the intersection of all clopen subsets for which this is true).
For each $n \in \bN$ there is a natural initial projection $2^\bN \rightarrow 2^n$.
This induces in turn a projection $*2^\bN \rightarrow *2^n$.
Let $L_n \subseteq *2^n$ be the image of $L$ under this projection, so $L = \varprojlim L_n$.
Consequently, $L * D = \varprojlim \, (L_n * D)$.
Our hypotheses regarding $L$ imply that the endpoints of $L_n$ can be enumerated as $\{ \alpha_t t : t \in 2^n \}$, with $\alpha_t > 0$.
If $m \geq n$, then we have a natural projection $L_m \rightarrow L_n$.
If $t \in 2^n$, $s \in 2^{m-n}$, and $ts \in 2^m$ is the concatenation, then $\alpha_{ts} ts$ gets sent to $\alpha_{ts} t \in L_n$, so $\alpha_{ts} \leq \alpha_t$, and $\alpha_{ts} = \alpha_t$ for at least one $s$.
For any $\delta > 0$, we may always choose $m$ large enough such that for every $t \in 2^n$, the set $\{ \alpha_{ts} : s \in 2^{m-n}\}$ is $\delta$-dense in the interval $[0,\alpha_t]$.
Let $\varphi(u,v)$ be a star correspondence between $L_n * D$ and some other star sort $E^*$, and let $\varepsilon > 0$.
Choose $\delta > 0$ appropriately, and a corresponding $m$ as in the previous paragraph.
Define a formula on $2^n \times 2^{m-n} \times D \times E^*$ by
\begin{gather*}
\varphi'(ts,x,v) = \varphi(\alpha_{ts} t * x,v).
\end{gather*}
On the one hand, since $\varphi$ is witness-normalised in the first argument, $\varphi'$ is witness-normalised in $(ts,x)$.
On the other hand, if $v \in E^*$, then there exist $\alpha t \in L_n$ (so $\alpha \leq \alpha_t$) and $x \in D$ (possibly in an elementary extension) such that $\varphi(\alpha t * x,v) = 0$.
Having chosen $\delta$ small enough to begin with, and $m$ large enough accordingly, we may now find $s \in 2^{m-n}$ such that $\alpha_{ts}$ is close to $\alpha$, sufficiently so that $\varphi'(ts,x,v) = \varphi(\alpha_{ts} t * x,v) < \varepsilon$.
It follows that $\varphi' \dotminus \varepsilon$ is witness-normalised in either $(ts,x)$ or $v$.
Let us now evoke a few black boxes from \cite{BenYaacov:ReconstructionGroupoid}.
First, $2^m \times D$ is again a universal Skolem sort (and therefore stands in definable bijection with $D$).
Second, since $\varphi' \dotminus \varepsilon$ is witness-normalised in either group of arguments, there exists a surjective definable function $\sigma\colon 2^m \times D \rightarrow E^*$ that satisfies $(\varphi' \dotminus \varepsilon) \bigl( ts,x, \sigma(ts,x)\bigr) \leq \varepsilon$, i.e., $\varphi' \bigl( ts,x, \sigma(ts,x)\bigr) \leq 2\varepsilon$.
Define on $L_m * D \times E^*$ (keeping in mind that if $\alpha ts \in L_m$, then $\alpha \leq \alpha_{ts}$):
\begin{gather*}
\psi( \alpha ts * x, v) = d\bigl( v, \alpha \alpha_{ts}^{-1} \sigma(ts,x) \bigr).
\end{gather*}
This formula is jointly sub-homogeneous (since $d$ is, on $E^*$).
It is also witness-normalised in $\alpha ts * x$ (just choose $v = \alpha \alpha_{ts}^{-1} \sigma(ts,x)$), and in $v$ (since $\sigma$ is surjective, and we may always choose $\alpha = \alpha_{ts}$).
By construction, $\varphi\bigl( \alpha_{ts} t * x, \sigma(ts,x) \bigr) \leq 2\varepsilon$, so multiplying all arguments by $\alpha \alpha_{ts}^{-1}$:
\begin{gather*}
\varphi\bigl( \alpha t * x, \alpha \alpha_{ts}^{-1} \sigma(ts,x) \bigr) \leq 2 \varepsilon.
\end{gather*}
Therefore, if $\psi(\alpha ts * x, v)$ is small enough, $\varphi\bigl( \alpha t * x, v \bigr) \leq 3\varepsilon$, and by definition, if $\psi(\alpha ts * x, v_i)$ is small for $i = 0,1$, then $d(v_0,v_1)$ is small.
Replacing $\psi$ with a multiple, we may replace ``small enough'' with ``smaller than one'', and now, by \autoref{lem:UniversalStarSortLimit}, $L * D$ is a universal star sort.
\end{proof}
\begin{cor}
\label{cor:UniversalStarSeparablyCategorical}
Assume that $T$ is $\aleph_0$-categorical and let $D_0$ be as in \autoref{exm:ReconstructionAleph0Categorical}.
In other words, let $M \vDash T$ be the separable model, $a \in M^\bN$ a dense sequence, and $D_0$ the collection of realisations of $\tp(a)$.
Then $D_0$ is a definable set, i.e., a sort, and $L * D_0$ is a universal star sort.
\end{cor}
\begin{proof}
In an $\aleph_0$-categorical theory, every type-definable set is definable.
By \cite[Proposition~4.17]{BenYaacov:ReconstructionGroupoid}, $2^\bN \times D_0$ is a universal Skolem sort.
Now, $L * 2^\bN \subseteq (*2^\bN) * 2^\bN = *(2^\bN \times 2^\bN)$ is easily checked to be a fan, whose set of endpoints is dense, so it is homeomorphic to $L$.
Therefore
\begin{gather*}
L * (2^\bN \times D_0) = (L * 2^\bN) * D_0 \simeq L * D_0.
\end{gather*}
By \autoref{thm:UniversalStarUninversalSkolem}, this is a universal star sort.
\end{proof}
Define $L^{(2)} \subseteq L^2$ as the set of pairs $(x,y)$ such that either both $x = y = 0$, or both are non-zero.
This is a Polish, albeit non-compact, star space, with root $(0,0)$.
When $\bG$ is a topological groupoid, we may equip $L^{(2)} * \bG$ with a groupoid composition law
\begin{gather*}
[x,y,g] \cdot [y,z,h] = [x,z,gh].
\end{gather*}
If $\bB$ is the basis of $\bG$, then $L * \bB$ is the basis of $L^{(2)} * \bG$.
\begin{cor}
\label{cor:StarGroupoidSpecialCases}
Let $T$ be a continuous theory admitting a universal Skolem sort $D$, and let $\bG(T) = \bG_D(T)$, as in \autoref{exm:ReconstructionUniversalSkloem}.
Then $\bG^*(T) \simeq L^{(2)} * \bG(T)$.
If $T$ is $\aleph_0$-categorical, and $G(T)$ is the automorphism group of its unique separable model, then $\bG^*(T) \simeq L^{(2)} * G(T)$.
\end{cor}
\begin{proof}
Just put the identities $D^* = L * D$ and $D^* = L * D_0$ through the groupoid construction.
\end{proof}
\bibliographystyle{begnac}
|
1,116,691,498,591 | arxiv | \section{Introduction}
Epidemic diffusion on complex networks \cite{newman2003structure,pastor2014epidemic,boccaletti2006complex} is a general paradigm to describe a large variety of real world outbreaks of infections, ranging from the strictly
biological case to malware diffusion as well as opinion propagation \cite{bornholdt2006handbook}.
A central issue is the design of efficient immunization strategies able to
prevent or control the epidemic spreading \cite{pastor2014epidemic}.
In this context, numerical simulations are a flexible and well-controlled
framework to study epidemic dynamics. In particular, they allow one to understand the effectiveness of
vaccination strategies that we shall broadly classify as preventive {\em vs.} reactive schemes.
{\it Preventive} immunization strategies aim to strengthen the network against
epidemics using information about the {healthy} configuration, {\it i.e.}
identifying the nodes to be immunized according to some {\it score}
before the epidemic event. The score may require local or global knowledge about the network
topological structure.
An important example of the {preventive approach}
is the {\it Targeted Immunization} scheme (TI) \cite{pastor2002immunization} (see also \cite{chen2008finding, hebert2013global, yan2015global}),
originally designed for scale-free networks.
The idea is to target nodes with high connectivity degree because they act as hubs in the infection spreading.
A similar degree-based approach, but exploiting only local information, is the {\it Acquaintance Immunization} (AI) \cite{cohen2003efficient}.
Some variations and improvements are discussed in \cite{stauffer2006dissemination,hu2012immunization}.
Instead,
{\it reactive} immunization strategies start with the network undergoing a propagating infection
and take into account dynamical aspects of the network and of the epidemic itself to identify the best sites to vaccinate.
Several scores have been designed considering, for instance, personal awareness about the epidemics \cite{ruan2012epidemic},
message-passing interactions \cite{altarelli2014containing}, dynamical reaction of the networks \cite{liu2014controlling,perra2012activity}, information from previous infections \cite{yan2014dynamical}, finite time for the vaccination to become effective \cite{pereira2015control}, {\it etc}.
A remarkably simple example of reactive protocols is the so-called {\it High-Risk Immunization} (HRI) \cite{nian2010efficient}, where
the healthy neighbors of infected nodes are vaccinated.\par
In this paper, we propose a modification of the TI scheme that exploits a refined score based on a mixed local-global strategy. Specifically, the modified score is designed to treat both hubs and individuals at risk of contagion as relevant to the epidemic spreading.
In other words, we attempt to use the infection itself as a source of information and as a probe of
how the network reacts to the disease. On a regular network the infection may display a well defined
propagating front; in that case, a good strategy is to vaccinate in a neighborhood of the front. It is not clear whether this strategy
makes sense on a complex network, and this is precisely the question we try to answer.
The effectiveness of our strategy is tested by a Monte Carlo implementation of the SIR model \cite{kermack1927contribution,may1979population} on a variety of complex theoretical and real
networks{, and by systematically comparing our proposal with some standard immunization strategies \cite{pastor2002immunization,cohen2003efficient,nian2010efficient}}.
\section{Epidemics modeling and reactive immunizations: a new score}
The SIR model is a simple compartmental model of disease spreading \cite{kermack1927contribution}.
Individuals are divided in three classes: susceptible ($S$), infected ($I$) and recovered ($R$).
The epidemic evolution is then modeled by the transitions $S\rightarrow I$ and $I\rightarrow R$.
In more details, it starts with a single (patient zero) infected node.
Then, at each step of the Monte Carlo process, a randomly chosen infected individual can recover with probability $p_{\text{\tiny{SIR}}}$.
Otherwise, one of its first neighbors is randomly selected and, if susceptible, gets infected. The reactive immunization takes place when a fraction $f$ (the {\it epidemic threshold}) of the population is infected.\footnote{{Due to the stochastic nature of the process, epidemic may die out before reaching the threshold $f$
and immunization does not take place in these cases. The relation between the quantities $g$ and $\langle d_V\rangle$ is $\langle d_V\rangle = P_f g$, where $P_f$ is the probability that the infection reaches the threshold (which of course depends on the network and the threshold itself). We choose to take into account
these events because they give an information about the exposure of a given network to a pandemic outbreak without vaccination. Given the value of $p_{\text{\tiny{SIR}}}$, the non-spreading events are relatively rare, for example for a BA[2] the probability to reach the lower threshold is roughly 90\%.}
}
The vaccination is a single-step process in which a fraction $g$ of susceptible individuals is immunized according to some score.
The finite size of the network ensures that the system always reaches a steady final state without infected individuals. The density $d_R$ of recovered individuals in this state is clearly related to the spreading strength of the epidemic on the network. A good immunization strategy would therefore reduce the final density $d_R$ at the cost of a relatively low vaccinated density $d_V$. The average values $\langle d_R\rangle$ and $\langle d_V\rangle$ are computed by repeating the SIR evolution with vaccination a large number of times. \par
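For concreteness, one Monte Carlo realisation of the process just described can be sketched as follows (a minimal Python sketch with our own variable names; the network is assumed to be given as an adjacency list, the threshold is read as the fraction of currently infected nodes, and $g$ is read as a fraction of the whole population):
\begin{verbatim}
import random

def run_sir_with_vaccination(adj, p_sir=0.1, f=0.05, g=0.1,
                             score=None, rng=random):
    # adj: dict node -> list of neighbours; score(node, state, adj) -> float
    n = len(adj)
    state = {u: 'S' for u in adj}          # 'S', 'I', 'R' or 'V'
    patient_zero = rng.choice(list(adj))
    state[patient_zero] = 'I'
    infected = {patient_zero}
    vaccinated = set()
    done_vaccination = False
    while infected:
        # single-step reactive vaccination at the epidemic threshold f
        if (not done_vaccination and score is not None
                and len(infected) >= f * n):
            ranked = sorted((u for u in adj if state[u] == 'S'),
                            key=lambda u: score(u, state, adj), reverse=True)
            for u in ranked[:int(g * n)]:
                state[u] = 'V'
                vaccinated.add(u)
            done_vaccination = True
        u = rng.choice(list(infected))
        if rng.random() < p_sir:           # recovery
            state[u] = 'R'
            infected.remove(u)
        elif adj[u]:                       # try to infect a random neighbour
            w = rng.choice(adj[u])
            if state[w] == 'S':
                state[w] = 'I'
                infected.add(w)
    d_r = sum(1 for s in state.values() if s == 'R') / n
    return d_r, len(vaccinated) / n        # final densities d_R and d_V
\end{verbatim}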
We propose a novel strategy of vaccination which interpolates between preventive and reactive immunizations. In doing so, we take into account both static information (like the network geometry) and dynamical information (like the pattern of a specific infection). To this aim, we consider the score
\begin{equation}
\label{1.2}
\mathcal{S}_{i} = d_{i}+\,\sum_{j\in N_{i}}\bigg[
\beta \frac{\delta_{j, I}}{(d_{j})^{1/2}}
+\gamma\,\frac{\delta_{j, S}}{d_{i}}\,
\frac{d_{i}-d_{j}}{d_{i}+d_{j}}\bigg],
\end{equation}
where $N_{i}$ denotes the set of
neighbors of the $i$-th node, $d_i$ its degree ({\it i.e.} the number of links pointing to it), $\delta_{j,I}$ and $\delta_{j,S}$ are the Kronecker deltas which select only infected or suspectible neighbors
and $\beta,\gamma$ are free parameters.
We call our proposal
{\it Locally-Modified Targeted Immunization} (LMTI$_{\beta, \gamma}$).
For $\beta=\gamma=0$, the score reduces to that of Targeted Immunization \cite{pastor2002immunization}.
The $\beta$-term in the r.h.s. of (\ref{1.2}) favors the immunization of individuals {\it near} the epidemic front.
The damping factor $(d_{j})^{-1/2}$ selects neighbors with lower connectivity, which constitute
bottlenecks for the epidemic diffusion. It is therefore possible to reduce the contagion by cutting them off.
The $\gamma$-term is a further improvement involving the so-called {\it leverage centrality} \cite{joyce2010new}
restricted to the susceptible neighbors. It measures the reciprocal influence of the $i$-th node and its neighbors in the epidemic diffusion. {In fact, leverage centrality is a natural metric quantifying the local influence of a node on its neighbors and therefore it gives complementary local information with respect to the common (local) clustering coefficient.
}\par
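Under the same conventions as the sketch above, the score (\ref{1.2}) translates directly into code (a guard for isolated nodes is omitted):
\begin{verbatim}
def lmti_score(i, state, adj, beta=20.0, gamma=10.0):
    # direct transcription of the LMTI score S_i for node i
    d_i = len(adj[i])
    s = float(d_i)
    for j in adj[i]:
        d_j = len(adj[j])
        if state[j] == 'I':    # beta-term: neighbours on the epidemic front
            s += beta / d_j ** 0.5
        elif state[j] == 'S':  # gamma-term: restricted leverage centrality
            s += gamma * (d_i - d_j) / (d_i * (d_i + d_j))
    return s
\end{verbatim}
Passing this function as the \texttt{score} argument of the sketch above reproduces the LMTI vaccination step.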
We test the effectiveness of the score (\ref{1.2}) against the following benchmark immunization strategies
\begin{itemize}
\item{\textbf{Targeted Immunization (TI).}} Our implementation of TI follows the original idea: nodes are vaccinated according to their degree.
The only modification is that the immunization is performed as a reactive process when the epidemic reaches the threshold $f$.
Only nodes yet susceptible at the vaccination time are protected.
\item{\textbf{Acquaintance Immunization (AI).}} As in the previous case, AI \cite{cohen2003efficient} is implemented as a reactive process.
The choice of the nodes to be vaccinated follows the original proposal.
Random first neighbors of randomly selected nodes are vaccinated (if susceptible) according to the desired immunized fraction $g$.
\item{\textbf{High Risk Immunization (HRI).}} Our implementation retains the idea of \cite{nian2010efficient} to vaccinate neighbors of infected nodes,
but the process is instantaneous and permanent.
We test this strategy by immunizing up to the $99\%$ of the first neighbors of the infected nodes at the vaccination time.
\end{itemize}
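For completeness, the node-selection step of these benchmark schemes can be sketched as follows (our reading of the descriptions above; in particular, the HRI coverage fraction is omitted for brevity):
\begin{verbatim}
import random

def select_for_vaccination(strategy, state, adj, g, rng=random,
                           max_tries=10**6):
    n = len(adj)
    susceptible = [u for u in adj if state[u] == 'S']
    if strategy == 'TI':    # highest-degree susceptible nodes first
        ranked = sorted(susceptible, key=lambda u: len(adj[u]), reverse=True)
        return ranked[:int(g * n)]
    if strategy == 'AI':    # random neighbours of randomly chosen nodes
        chosen, tries = set(), 0
        while len(chosen) < int(g * n) and tries < max_tries:
            tries += 1
            u = rng.choice(list(adj))
            if adj[u]:
                w = rng.choice(adj[u])
                if state[w] == 'S':
                    chosen.add(w)
        return list(chosen)
    if strategy == 'HRI':   # susceptible neighbours of infected nodes
        return list({w for u in adj if state[u] == 'I'
                     for w in adj[u] if state[w] == 'S'})
    raise ValueError(strategy)
\end{verbatim}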
\section{Benchmark complex networks}
We test the effectiveness of our protocol on a variety of networks ranging from theoretical models to a selection of real networks.
In the first class, we consider the classical examples of Barab\`{a}si-Albert (BA) and Watts-Strogatz (WS) models.
The first one is the prototype of scale-free networks \cite{albert2002statistical,bornholdt2006handbook} and it is based on a growth algorithm
with preferential attachment. We denote with BA[$Q$] the network built adding $Q$ new links at each step of the algorithm.
The second one is the prototype of small-world networks \cite{albert2002statistical,bornholdt2006handbook,j1998collective}. WS graphs are built starting from regular ones with $\mathcal{N}$ nodes (each one connected to $2Q$ consecutive sites) and then rewiring the links with probability $\theta$.
Here, we consider WS[$Q$] networks with $Q=2,3$ and $\theta=0.1$, $0.5$.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{all_the_nets}\vspace{-0.5cm}
\caption[]
{\label{fig:villi} Some examples of randomly connected BA networks. The first line shows $5$ BAs connected with 100, 500, and 2000 random extra links. The second and third lines
show examples of the networks obtained starting with 10 and 20 centers.
}
\end{figure}
We also propose two modifications of BA model.
The first one is based on a partial randomization procedure. We start with a standard BA[$Q$] network with $\mathcal N$ nodes, then we randomly rewire $\mathcal R$ links.
In our tests, we consider $Q=2$, $\mathcal{N}=1000$ and $\mathcal R=100,500,1000,2000$.
The second variant is realized starting with $m$ disconnected BA[2] {\it centers}, further connected
adding $k$ random links between nodes belonging to different BAs.
Here, we consider a starting network with $\mathcal N=5000$ nodes, equally distributed in $m=5,10,20$ initial clusters, and $k=100,500,2000$.
This variant can be thought as a toy model for the epidemic spreading in clustered communities with relatively loose links.
Some examples of the resulting networks are shown in Fig. \ref{fig:villi}.
\par
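One possible construction of these two BA variants, using the networkx package, is sketched below (function names and the handling of rewiring collisions are our own choices):
\begin{verbatim}
import random
import networkx as nx

def rewired_ba(n=1000, q=2, n_rewire=500, rng=random):
    # BA[q] graph in which n_rewire links are randomly rewired
    g = nx.barabasi_albert_graph(n, q)
    edges = list(g.edges())
    rng.shuffle(edges)
    for (u, v) in edges[:n_rewire]:
        w = rng.randrange(n)
        if w != u and not g.has_edge(u, w):   # collisions are simply skipped
            g.remove_edge(u, v)
            g.add_edge(u, w)
    return g

def connected_ba_clusters(n_total=5000, m=5, q=2, k=100, rng=random):
    # m disconnected BA[q] centres joined by k random inter-cluster links
    size = n_total // m
    n_nodes = size * m
    g = nx.Graph()
    for _ in range(m):
        g = nx.disjoint_union(g, nx.barabasi_albert_graph(size, q))
    added = 0
    while added < k:
        u, v = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if u // size != v // size and not g.has_edge(u, v):
            g.add_edge(u, v)
            added += 1
    return g
\end{verbatim}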
Besides these theoretical models, we also consider the epidemic spreading
in the following real networks:
\begin{enumerate}
\item { {\sf {Internet\_AS}}, 11174 nodes, 23408 links.} It describes the undirected unweighted Internet Network\footnote{\url{https://sites.google.com/site/cxnets/research222}} \cite{colizza2006detecting}
at the Autonomous System level. The data were collected by the Oregon Route Views Project \url{http://www.routeviews.org/} in May 2001.
Nodes represent Internet service providers and edges connections among them.
\item { {\sf {AA}}, 1057 nodes, 2502 links.}
It describes the interactions between the metabolites of E. coli in the course of
the metabolic cycle\footnote{\url{http://www3.nd.edu/~networks/resources/metabolic/}} \cite{jeong2000large}.
We consider the {\sf {AA}} case.
\item { {\sf {CA-HepTh-pruned}}, 8638 nodes, 24836 links.}
The Arxiv HEP-TH (High Energy Physics - Theory) collaboration network\footnote{\url{http://snap.stanford.edu/data/ca-HepTh.html}}
from the e-print arXiv.
A paper generates a completely connected subgraph in which nodes represent its authors.
\item { {\sf {p2p-Gnutella08}}, 6300 nodes, 20776 links.}
It is a sequence of snapshots of the Gnutella peer-to-peer file sharing network from August 2002.\footnote{\url{http://snap.stanford.edu/data/p2p-Gnutella08.html}}
Nodes represent hosts in the Gnutella network and edges are connections among them.
\item {{\sf {ProteinYeast}}, 1870 nodes, 2350 links.}
It is the Protein Interaction Network\footnote{\url{http://www3.nd.edu/~networks/resources/protein/bo.dat.gz} } \cite{jeong2001lethality}.
\end{enumerate}
To provide some additional information, in Tab. \ref{tabella} we report the global clustering coefficients and mean distances among the nodes for the above real networks, and a comparison
with the same quantities computed for random networks.
For BA and WS models, we consider $50$ different realizations for each network and perform $10^5$ Monte Carlo runs with different initial conditions for each of them.
For the BA variants, we consider $20$ different realizations of each graph and average $10^4$ runs for each one.
Finally, for real networks the statistics varies from $10^4$ to $10^5$ runs, depending on their size.
{With such a choice, we keep the statistical error on the final recovered density $\langle d_R \rangle$ under control (for instance, it is of the order of 0.1\% in theoretical models).}\footnote{{Statistical fluctuations are mainly determined by the simulation length, \textit{i.e.} by the number of MC steps, while the dependence on the particular network realization is rather weak due to self-averaging.}} In all cases, we fix the recovering probability to \mbox{$p_{\text{\tiny{SIR}}}=0.1$} and consider two epidemic thresholds
$f=0.05$ or \mbox{$f=0.15$}.\footnote{By comparison, in a regular square lattice the epidemic threshold is $p_{c,\text{\tiny{SIR}}}=0.1765$ \cite{sirtome}, so
$p_{\text{\tiny{SIR}}}=0.1$ would be in the spreading phase. {In this work, our main goal is a comparison of the relative effectiveness of the various vaccination strategies. A change in $p_{\text{\tiny{SIR}}}$ will surely affect the final balance of the epidemic, but, from the point of view of the comparison of the strategies, the dependence on $p_{\text{\tiny{SIR}}}$ is not crucial. Provided that $p_{\text{\tiny{SIR}}}$ is low enough to give a spreading epidemic, a change of the value of the recovering probability results in an overall shift of all the curves, but does not change the relative performances.}}
\section{Results}
In this section, we report the main results of our Monte Carlo simulations. In particular, we compare the various immunization strategies according to their ability in reducing the epidemic prevalence $\langle d_R\rangle$ by $50\%$ and $75\%$ (the horizontal dotted lines in the plots) and in reaching the epidemic threshold (red solid line in the plots).\par
\begin{figure}[!hbt]
\centering
\includegraphics[width=1.0\columnwidth]{bab}\vspace{-0.5cm}
\caption{The recovered mean final density $\langle d_R \rangle$ as a function of the mean fraction of vaccinated $\langle d_V \rangle$, for BA[2] with $\mathcal N=1000$ nodes.
The LMTI scheme is compared to TI, AI and HR immunization strategies. The horizontal red solid line is the epidemic threshold, $f=0.05$ (a), $0.15$ (b), while the horizontal dotted lines are $25\%$ and $50\%$ of the mean final density of recovered without any vaccination.}\label{ba}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{wsb}\vspace{-0.5cm}
\caption{The recovered mean final density $\langle d_R \rangle$ as a function of the mean fraction of vaccinated $\langle d_V \rangle$, for WS[2] with $\mathcal N=1000$ nodes and the rewiring probability $\theta=0.1$.
The LMTI scheme is compared to TI, AI and HR immunization strategies. The horizontal red solid line is the epidemic threshold, $f=0.05$ (a), $0.15$ (b), while the horizontal dotted lines are $25\%$ and $50\%$ of the mean final density of recovered without any vaccination.}\label{ws}
\end{figure}
Fig. \ref{ba} collects the results for BA[2] for the two different epidemic thresholds. As can be expected, degree-based schemes are the most efficient in the pure BA setting. In particular, TI is the best choice in reducing the epidemic prevalence
$\langle d_R \rangle$ by $50\%$. Our strategy (with the optimal choice $\beta =20$ and $\gamma=10$) performs very similarly at low $\langle d_V \rangle$ for both values of the epidemic threshold. However, if we want to reduce the prevalence to $25\%$, a fast response to the outbreak is crucial, {\it i.e.} $f=0.05$. In this case, TI and LMTI are the most indicated strategies, as they require a vaccinated fraction around $10\%$. Moreover, LMTI can further reduce the epidemic prevalence for lower $\langle d_V \rangle$ than TI. On the other hand, a late reaction to the epidemic ($f=0.15$) makes it difficult to control the spreading, so a massive vaccination process is needed. In fact, LMTI (which is the best choice in this eventuality) requires the vaccination of at least $25\%$ of the entire population. Instead, TI fails for $\langle d_V \rangle < 0.4$. A similar behaviour also holds in the BA[3] case, so we do not give further details on it.
\par
\begin{figure}[!hbt]
\centering
\includegraphics[width=1.0\columnwidth]{ws15}\vspace{-0.5cm}
\caption{{The recovered mean final density $\langle d_R \rangle$ as a function of the mean fraction of vaccinated $\langle d_V \rangle$, for WS[2] with $\mathcal N=1000$ nodes and the rewiring probability $\theta=0.5$.
The LMTI scheme is compared to TI, AI and HR immunization strategies. The horizontal red solid line is the epidemic threshold, $f=0.05$ (a), $0.15$ (b), while the horizontal dotted lines are $25\%$ and $50\%$ of the mean final density of recovered without any vaccination.}}\label{fig:ws05}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{rewb}\vspace{-0.5cm}
\caption{The recovered mean final density $\langle d_R \rangle$ as a function of the mean fraction of vaccinated $\langle d_V \rangle$, for randomly rewired BA[2] with $\mathcal N=1000$ nodes and \mbox{$\mathcal R$=100 (a), 2000 (b)} rewiring events.
The LMTI scheme is compared to TI immunization strategy. The horizontal red solid line is the epidemic threshold $f=0.05$, while the horizontal dotted lines are $25\%$ and $50\%$ of the mean final density of recovered without any vaccination.}\label{rew}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth]{compositeb}\vspace{-0.5cm}
\caption{The recovered mean final density $\langle d_R \rangle$ as a function of the mean fraction of vaccinated $\langle d_V \rangle$, for randomly connected BA[2] with $\mathcal N=5000$ total nodes, $m=20$ equally populated clusters and \mbox{$ k=100$ (a), $2000$ (b)} new links.
The LMTI scheme is compared to TI immunization strategy. The horizontal red solid line is the epidemic threshold $f=0.05$, while the horizontal dotted lines are $25\%$ and $50\%$ of the mean final density of recovered without any vaccination.
\label{composite}}
\end{figure}
In the WS setting, results are radically different, see Fig. \ref{ws} for the WS[2] and $\theta =0.1$ case. Here, TI immunization is a poor strategy when compared to LMTI and HRI. This is a consequence of the absence of nodes acting as hubs for the epidemic spreading. However, both LMTI and HRI allow to reduce the prevalence by $50\%$ for a very small number of vaccinations ($\langle d_V \rangle \lesssim 0.05$ for both values of the epidemic threshold). Most remarkably, our strategy can reduce it to $25\%$ for both values of the epidemic threshold with a vaccinated fraction lower than $10\%$ of the entire population (for comparison, AI has the same effect for $\langle d_V \rangle= 0.2 \div 0.4$). Therefore, a prompt reaction has the only effect of lowering the vaccination coverage needed to reach the aim. WS networks with different $Q$ and $\theta$ present analogous features, with the only difference that HRI dramatically worsens as the rewiring probability increases{, see Fig. \ref{fig:ws05}}. In both BA and WS cases, our strategy allows to reach the epidemic threshold and to effectively stop the epidemic.
\par
The importance of local terms in (\ref{1.2}) can be better appreciated in the BA variants. Figs. \ref{rew} and \ref{composite} collects the results for these models with the epidemic threshold $f=0.05$. In this case, we compare only TI and LMTI, the best performers in the original BA setting.\par
For partially randomized BA[2] with $\mathcal R =100$, the network keeps an approximate BA structure, so the results are very similar to the pure case. As the randomization increases, TI gradually becomes inefficient (except for small $\langle d_V \rangle$ values), so it is convenient to vaccinate nodes near the epidemic front. This is clear in the $\mathcal R=2000$ case.\par
Now, we consider randomly connected BAs with a highly clustered structure ($m=20$). If these clusters are poorly connected ($k=100$), TI and LMTI give approximately the same performance, with the only difference that our scheme allows one to stop the epidemic with a much smaller vaccinated fraction ($\langle d_V \rangle\sim 0.10$) than TI. For a much larger number of connections between the clusters ($k=2000$), the situation radically changes. In fact, the reduction of the prevalence by $50\%$ is better accomplished with the TI scheme. For LMTI, increasing the importance of the local terms worsens the efficiency at low $\langle d_V \rangle$, but drastically improves the performance for a larger number of vaccinations.\par
This behaviour has a simple explanation. When the networks or their clusters have an approximately BA structure, nodes acting as hubs are still present. Therefore, in this case it is convenient to vaccinate nodes with higher degree. As the original structure is lost (increasing the randomization or the number of new links between the original clusters), the importance of hubs in the epidemic spreading is drastically downsized. Once the highest-degree nodes are immunized, it is better to give much more importance to individuals near the epidemic front. This also explains the faster decay of the LMTI curves for increasing $\beta$ values.
\par
Finally, in Fig. \ref{real} we report the results for real networks. In order to halve the epidemic prevalence, we note again that TI and LMTI are the most indicated strategies and their performances are almost equivalent. In particular, TI performs slightly better only in {\sf {CA-HepTh-pruned}} and {\sf {AA}}. However, if we want to further reduce the epidemic prevalence down to $25\%$, LMTI is always the best choice. Moreover, it allows one to effectively stop the epidemic for a smaller vaccinated fraction than TI. Remarkably, HRI is a rather inefficient choice also in {\sf {ProteinYeast}} and {\sf {Internet\_AS}} networks, which show a great structural resistance to the epidemics (even without any vaccination, the average size of an infection is relatively small). When compared to HRI, AI seems to be stronger, but it is comparable in efficiency to TI and LMTI only in
{\sf {p2p-Gnutella08}}, in which it is more difficult to control the epidemic spreading (without immunization, the average size of an infection is about $65\%$ of the entire population). This feature can be explained by noting that this network is highly and uniformly connected, since it has the highest mean degree and the lowest mean vertex eccentricity.
\begin{figure}[!htb]
\centering
\includegraphics[width=1\columnwidth]{realb}\vspace{-0.5cm}
\caption{The recovered mean final density $\langle d_R \rangle$ as a function of the mean fraction of vaccinated $\langle d_V \rangle$ for a set of real networks (a-e). The LMTI scheme is compared to TI immunization strategy. The horizontal red solid line is the epidemic threshold $f$, while the horizontal dotted lines are $25\%$ and $50\%$ of the mean final density of recovered without any vaccination. For {\sf {Internet\_AS}} (a), the horizontal dotted line is the $50\%$ of the mean final density of recovered without any vaccination.}\label{real}
\end{figure}
{\footnotesize
\begin{table}[h]
\centering
\begin{tabular}{|c|cc|cc|}
\hline
& $C$ & $\ell$ & $C_{\text{R}}$ & $\ell_{\text{R}}$ \\%& $C_{\text{\tiny{BA}}}$ & $\ell_{\text{\tiny{BA}}}$ \\ \hline
\hline
\text{{\sf\small {CA-HepTh-pruned}}} & 0.28 & 5.9 & 0.0007 & 5.4 \\%& 0.0017 & 4.9 \\ \hline
\hline
\text{{\sf\small {p2p-Gnutella08}}} & 0.020 & 4.6 & 0.0010 & 4.8 \\%& 0.0023 & 4.8 \\ \hline
\hline
\text{{\sf\small {AA}}} & 0. & 4.4 & 0.0044 & 4.6 \\% & 0.0099 & 4.1 \\ \hline
\hline
\text{{\sf\small {Internet\_AS}}} & 0.0096 & 3.6 & 0.00039 & 6.6 \\%& 0.0014 & 5.1 \\ \hline
\hline
\text{{\sf\small {ProteinYeast}}} & 0.079& 6.8 & 0.0017 & 6.4 \\%& 0.0075 & 4.2 \\ \hline
\hline
%
\end{tabular}
\caption[]{
\label{tabella}
We report the global clustering coefficient $C$ and the mean distance among the nodes $\ell$ for the five real networks.
The last two columns show, as comparison, the same quantities computed for a random graph with the same number of nodes and links.}
\end{table} }
\begin{comment}
Figs. \ref{fig:BA} and \ref{fig:WS_1} collect the results of the various strategies on BA and WS networks. Their relative performances are radically
different in the two contexts. The only exception is LMTI, which is the strategy giving good results in both BA and WS settings.
As it can be expected, {\it degree based} strategies are the most efficient in the BA setting.
While at very low $\langle d_V \rangle$ TI is the best performer and LMTI (with the optimal choice of $\beta=20$) gives similar results,
the latter
becomes the most efficient for slightly larger values of $\langle d_V \rangle$. The leverage term improves
the efficiency of LMTI at low values of $\langle d_V \rangle$.
Most remarkably, LMTI allows to effectively stop the epidemic for a much smaller vaccinated fraction than TI.
Both AI and HRI immunizations are highly inefficient protocols for the BA network.
On the other hand, in WS networks TI is an inefficient choice, due to the absence of nodes playing the role of hubs for the contagion.
In this setting, LMTI is by far the best performer.
HRI is a competitive choice only for small values of the rewiring probability ($\theta= 0.1$): in an almost regular network
it is easy to surround and block the epidemics vaccinating a low number of exposed individuals.
Increasing the rewiring probability to $\theta =0.5$, HRI loses its efficiency,
while the LMTI shows enough flexibility to stop the epidemic spreading
with a relatively low fraction of vaccinated individuals.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{BA_for_draft}
\caption[]{\label{fig:BA} Results for the final density of the recovered $\langle d_R \rangle$ in the BA[2] and BA[3] networks for $f=0.05$ (first line) and $f=0.15$ (second line).
The plots show the final density of recovered individuals $\langle d_R \rangle$ as a function of the fraction of vaccinated $\langle d_V \rangle$ for the tested immunization schemes.
The red dashed line is the threshold $f$.
}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{WS1_for_draft}
\includegraphics[width=0.5\textwidth]{WS2_for_draft}
\caption[]{\label{fig:WS_1} Results for the final density of the recovered $\langle d_R \rangle$ in the WS[2] and WS[3] networks for $f=0.05$ (first line) and $f=0.15$ (second line).
The plots show the final density of recovered individuals $\langle d_R \rangle$ as a function of the fraction of vaccinated $\langle d_V \rangle$.
The red dashed line is the threshold $f$.
}
\end{figure}
The results obtained for the BA variations are collected in Fig. \ref{BA_rew_1} and closely resemble those of the pure BA setting.
In these cases, we compare our score only with TI strategy,
as they are the best performers in the original BA model.
The better flexibility of our score is again clear.
The local $\beta$-term becomes more and more important as the original BA structure
is lost (increasing the randomization or the number of new links among the original clusters).
The freedom to tune the $\beta$ and $\gamma$ parameters is fundamental to have a good reaction to the epidemic.\\
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{rewiring_for_draft}
\includegraphics[width=0.5\textwidth]{BA_MC_for_draft_corretto}
\caption[]
{\label{BA_rew_1} Results for the final density of the recovered $\langle d_R \rangle$ in the randomized BA[2] graphs for different rewiring $\mathcal R = 100, 500, 1000, 2000$ (first panel) and randomly connected BA clusters (second panel). The epidemic threshold (red dashed line) is $f=0.05$. The plots show the final density of recovered individuals as function of the fraction of vaccinated $\langle d_V \rangle$.}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{REAL_for_draft_res300}
\caption[]
{\label{reali_dr}Results for the final density of the recovered $\langle d_R \rangle$ ifor real networks, with epidemic threshold for $f=0.05$. The plot show the final density of recovered individuals as function of the fraction of vaccinated $\langle d_V \rangle$.
}
\end{figure}
The results of the vaccination strategies on real networks are shown in Fig. \ref{reali_dr}.
The networks {\sf {ProteinYeast}} and {\sf {Internet\_AS}} show a great "structural" resistance to the epidemic outbreak. In fact, even without any vaccination,
the average size of an epidemic is relatively small ($\sim 30$\%
of the population). The $k$-shell decomposition of these graphs
\cite{kshell} shows that the first two shells $k=1,2$ contain a very large fraction of the nodes, respectively the 78\% and the 86\%
for {\sf {Internet\_AS}} and {\sf {ProteinYeast}}. For comparison, in the other networks this fraction ranges from the 30\% to the 40\%.
This suggests that both networks have a large number of "peripheral" nodes, poorly connected to a small center.
This feature gives to the networks a natural resistance to epidemics.
In both cases, degree-based immunizations are the most efficient and our score
is the best performer.\\
The other networks are much more exposed to the risk of a "pandemic" outbreak.
Without any vaccination, the final recovered fraction is about the 60\% of the population.
The network {\sf {p2p-Gnutella08}} is the case in which it is more difficult to control the
spreading. This network has the highest mean degree and the lowest mean vertex eccentricity,
suggesting that it is highly and uniformly connected.
TI and LMTI (again, with $\beta=20$) perform similarly on {\sf {p2p-Gnutella08}} and {\sf {CA-HepTh-pruned}} networks,
while HRI is a highly inefficient choice.
In fact, the latter fails not only to stop the epidemic but also to significantly reduce the outbreak prevalence, even vaccinating the 99\% of exposed individuals.
Remarkably, AI is particularly efficient in the {\sf {p2p-Gnutella08}} network, with results close to TI and LMTI ones.
Finally, the results for the {\sf {AA}} network confirm the general pattern: our score, when compared to the TI,
gives the same performances for small values of $\langle d_V \rangle$, but it does better
for slightly larger values of the vaccinated fraction, allowing to stop the epidemic spreading.
\end{comment}
\section{Conclusions}
In this work, we have proposed a new reactive immunization strategy based on a local modification of the Targeted Immunization protocol. The aim of the local term is to actively take into account the presence of the epidemic outbreak and design the reactive vaccination by exploiting the infection itself as a probe
of the complex network. Our proposal fits in the framework of widely
appreciated techniques using local knowledge about complex systems; see for instance the Hebbian learning rule \cite{citeulike:500649} as an exemplary model for neural networks {and \cite{0295-5075-94-1-10002} for a detailed analysis.}
By means of explicit simulations we have compared our immunization scheme with other immunization strategies. We have shown that our protocol is a very efficient choice in all the considered cases,
allowing one to stop the epidemic with a relatively small vaccinated fraction. The addition of a
local term sensing the infection was motivated by a naive picture of the infection diffusion
valid for a regular network. Nevertheless, it is relevant also on a broad set of complex
networks that are far from being regular. {We did not find a way to predict \textit{a priori} the best choice for the parameters of our score. In a purely phenomenological approach, the best free parameters are chosen empirically by looking at the performance of our scheme as $\beta$ and $\gamma$ are changed. Hopefully, a deeper investigation or the application of our score in simpler models could help to
settle this issue.}\footnote{{Since the output of the score is an ordering of the nodes which gives the priority for the immunization, the result turns out to be robust with respect small changes of the two free parameters.}}\par
Several extensions of our work can be foreseen. On the theoretical side, one can explore
other classes of ideal networks with good theoretical control,
like weighted or directed graphs. From the point of view of applications,
it could be important to apply our scheme to actual specific diseases, {\em e.g.} Xylella fastidiosa, TBC and Ebola outbreaks. This will require a more realistic propagation model like the delayed SIR
considered in \cite{agliari2013application}, and a detailed cost benefit analysis taking into account the finite resources available for
a real vaccination programme, {see for instance \cite{doi:10.1142/S0217984915501808}}. {Finally,
we remark that our immunization scheme is clearly information-demanding, as it requires the full knowledge of the neighborhood of each node and the pattern of the epidemic at the vaccination time.
This is rather unlikely in real situations, and another natural evolution of the present work would be the study of an immunization strategy accounting for the possibility of partial or corrupted information about the system. }
\bibliographystyle{JHEP}
|
1,116,691,498,592 | arxiv | \section{Introduction}
The time-dependence of a background metric, caused by the motion of a gravitational system, usually exerts a correctional effect on the propagation of an electromagnetic signal or celestial body. This kind of kinematical effect, which is called ``velocity effect" or ``motion effect", has been investigated in detail. In contrast to most previous works
(see, Refs.~\cite{PB1993,CLPS1999}, and references therein), which were focused on the effects in the first-order velocity (\emph{FOV}) and first-order deflection (\emph{FOD}) approximations, Kopeikin and Sch$\ddot{a}$fer~\cite{KopeiSch1999} studied light propagation in the gravitational field of an arbitrarily moving N-body system analytically. Their calculations were performed in the first post-Minkowskian (\emph{1PM}) approximation (weak-field approximation without limiting to low velocity), and were generalized in Ref.~\cite{KopeiMash2002} to consider the spin-dependent gravitomagnetic effects on the propagation of light. In 2003, the solutions of Li\'{e}nard-Wiechert potential \cite{KopeiSch1999,KopeiMash2002} were confirmed in Ref.~\cite{Klioner2003} via exerting a Lorentz boosting on the propagation equations of light in the field of a static source. In order to extend and apply the analytical results achieved in Ref.~\cite{Klioner2003} to astrometric observations, Klioner and Peip~\cite{KlioPeip2003} performed high-resolution numerical simulations for the trajectory of light in the field of moving point masses, in the first post-Newtonian approximation as well as \emph{1PM} approximation. In 2004, Wucknitz and Sperhake~\cite{WuckSperh2004} studied the velocity effects on the first-order deflection of light and massive particles, based on the Lorentz transformation of harmonic Schwarzschild metric. The method of coordinate transformation \cite{PB1993,Klioner2003,WuckSperh2004} was also employed to discuss the \emph{1PM} motion effects due to both the source of light and the lens in microlensing events~\cite{Heyrovsky2005}. There are many other works devoted to the motion effects appeared in the \emph{1PM} equations of light propagation,
e.g., Refs.~\cite{KopeiMaka2007,ZKS2013,HBL2014b,SH2015}.
These velocity effects, especially relativistic velocity effects, are very likely to be detected nowadays, because almost all their magnitudes are larger than the accuracy of the high-resolution ($\mu$as level) astronomical programs such as \emph{GAIA mission}~\cite{Perryman2001,Linet2007} and \emph{Japanese Astrometry Satellite Mission for Infrared Exploration (JASMINE)}~\cite{GTKNYM2002,Gouda2004}. Actually, considering the kinematical effects on higher-order gravitational deflection, not limited to first-order deflection, also becomes important. One reason is that the rapid developments of sub-$\mu$as astrometric surveys have been in progress. For example, the planned \emph{Nearby Earth Astrometric Telescope mission (NEAT)}~\cite{MLS2012,MBLJ2014a,Malbet2014b}, which has absorbed some key techniques of the high-accuracy project \emph{Space Interferometry Mission (SIM)}~\cite{Laskin2006,SN2009}, aims at achieving an unprecedented accuracy of $0.05\mu$as. Within the capability of \emph{NEAT}, the nonrelativistic motion effects on the second-order deflection might be detected. When the lens moves quickly with a relativistic velocity, these correctional effects will become so obvious that $\mu$as-level telescope \emph{GAIA} or even \emph{JASMINE} may also detect them. Therefore, it is deserved to investigate the kinematical corrections to the gravitational deflection of test particles up to second post-Minkowskian order (\emph{2PM}). Notice that the discussions of the explicit post-linear equations of motion and light deflection in the field of the two-body system in Ref.~\cite{Brugmann2005} are limited to the low-velocity case.
In the present paper, we investigate the gravitational deflection up to second order of light and (relativistic) neutral massive particles caused by a radially moving Kerr-Newman (KN) black hole with constant velocity, based on high-accuracy numerical simulations. We restrict our discussion to the weak-field, small-angle, and thin-lens approximations. We focus on the kinematical correction effects induced by the motion of the source on the second-order (leading higher-order) deflection. The paper is organized as follows. In Section~\ref{motion-eq}, we start with the harmonic \emph{2PM} metric of the moving KN black hole, and derive the \emph{2PM} equations of motion for test particles by calculating the Christoffel symbols. These equations are verified by the Euler-Lagrange method. In Section~\ref{velocityeffects}, the second-order gravitational deflection of light and massive particles is discussed in detail, with the help of numerical calculations. In Section~\ref{application}, we analyze the possibilities of detecting the motion effects on the second-order deflection. A summary is given in Section~\ref{conclusion}. We use units where $G=c=1$ throughout the paper.
\section{Second post-Minkowskian equations of motion for test particles} \label{motion-eq}
Let $\{\bm{e}_1,~\bm{e}_2,~\bm{e}_3\}$ denote the orthonormal basis of a three-dimensional Cartesian coordinate system. We consider a Kerr-Newman black hole with rest mass $M$, electric charge $Q$ and angular momentum $\bm{J}\,(=J\bm{e}_3)$, moving along the positive $x-$axis with a constant velocity vector $\bm{v}~(=v_1\bm{e}_1\equiv v\bm{e}_1)$ (here we only investigate the effects of radial motion of the gravitational source). We denote the rest frame of the background spacetime and the comoving frame of the gravitational source to be $(t,~x,~y,~z)$ and $(X_0,~X_1,~X_2,~X_3)$, respectively. The \emph{2PM} harmonic metric for this moving Kerr-Newman black hole can be written in the coordinate frame $(t,~x,~y,~z)$ as follows~\cite{LinHe2015,LinJiang2014}
{\small\begin{eqnarray}
\hspace*{-70pt}g_{00}=-1+\frac{2\,(1+v^2)\gamma^2M}{R}-\frac{(1+\gamma^2)M^2}{R^2}-\frac{\gamma^2Q^2}{R^2}-\frac{4\,v\gamma^2 a M X_2}{R^3}+\frac{v^2\gamma^2(M^2-Q^2)X_1^2}{R^4}~,
~\label{g00mKN2} \\
\nonumber \hspace*{-70pt}g_{0i}=\,\gamma\,\zeta_i+v\,\gamma^2\left(-\,\frac{4\,M}{R}+\frac{M^2+Q^2}{R^2}\,\right)\delta_{i1}
-\frac{v\,\gamma\,(\,M^2-Q^2\,)\,X_1\,\left[\,X_i+(\,\gamma-1\,)\,X_1\,\delta_{i1}\,\right]}{R^4} \\
\hspace*{-42pt} +\,\frac{2\left(\gamma^2+v^2\gamma^2-\gamma\right)\,a\,M\,X_2\, \delta_{i1}}{R^3}~,~~~~\label{g0imKN2} \\
\nonumber \hspace*{-70pt}g_{ij}=\!\left(\!1\!+\!\frac{M}{R}\right)^2\delta_{ij}\!+\!v^2\gamma^2\left(\frac{4M}{R}\!-\!\frac{M^2\!+\!Q^2}{R^2}\right)\delta_{i1}\delta_{j1}
\!-\!v\gamma\left[\zeta_i\,\delta_{j1}\!+\!\zeta_j\,\delta_{i1}\!+\!\frac{4(\gamma\!-\!1)\,aM X_2}{R^3}\,\delta_{i1}\delta_{j1}\right] \\
\hspace*{-42pt}+\,\frac{\left(\,M^2-Q^2\,\right)\left[\,X_i+(\gamma-1)\,X_1\,\delta_{i1}\,\right]\left[\,X_j+(\gamma-1)\,X_1\,\delta_{j1}\,\right]}{R^4}~, \label{gijmKN2}
\end{eqnarray}}
where $i,~j = 1,~2,~$or $3$, $\gamma= (1-v^2)^{-\scriptstyle \frac{1}{2}}$ is the Lorentz factor, and $\delta_{ij}$ denotes the Kronecker delta. $\Phi=-\frac{M}{R}$ represents the Newtonian gravitational potential, where $R$ satisfies $\frac{X_1^2+X_2^2}{R^2+a^2}+\frac{X_3^2}{R^2}=1$ and $\mathbf{X}\!\cdot\! d\mathbf{X}\equiv X_1dX_1\!+\!X_2dX_2\!+\!X_3dX_3$.
The symbol $\bm{\zeta}\equiv \frac{2aM}{R^3}\left(\bm{X} \times \bm{e_3}\right)=(\zeta_1,~\zeta_2,~0)$ is a vector potential~\cite{Weinberg1972} and $a \equiv \frac{J}{M}$ is the angular momentum per unit mass. We assume the relation $a^2+Q^2\leq M^2$ to avoid a naked singularity for the black hole. In order to calculate the gravitational deflection of test particles up to second order, we only need the following components of the inverse metric up to \emph{1PM} order
\begin{eqnarray}
g^{tt}=-1-\frac{2(1+v^2)\gamma^2M}{R}~, \label{g^tt} \\
g^{xx}=1-\frac{2(1+v^2)\gamma^2M}{R}~, \label{g^xx} \\
g^{yy}=g^{zz}=1-\frac{2M}{R}~, \label{g^yyzz} \\
g^{tx}=g^{xt}=-\frac{4v\gamma^2M}{R}~. \label{g^xt}
\end{eqnarray}
Note that the coordinates $X_0,~X_1,~X_2$, and $X_3$ in Eqs.~(\ref{g00mKN2}) - (\ref{gijmKN2}) are related to the coordinates $t,~x,~y$, and $z$ by the common Lorentz transformation
\begin{eqnarray}
X_0=T=\gamma(t-v x)~, \label{LorentzTran-t} \\
X_1=X=\gamma(x-v t)~, \label{LorentzTran-x} \\
X_2=Y=y~, \label{LorentzTran-y} \\
X_3=Z=z~. \label{LorentzTran-z}
\end{eqnarray}
Thus, the partial derivatives of $R$ with respect to $t,~x,~y$, and $z$ can be expressed as
\begin{eqnarray}
\frac{\partial R}{\partial t}=\frac{-v(x-vt)\gamma^2R}{2R^2+a^2-\left[\gamma^2(x-vt)^2+y^2+z^2\right]}~, \label{partial-t} \\
\frac{\partial R}{\partial x}=\frac{(x-vt)\gamma^2R}{2R^2+a^2-\left[\gamma^2(x-vt)^2+y^2+z^2\right]}~, \label{partial-x}
\end{eqnarray}
\begin{eqnarray}
\frac{\partial R}{\partial y}=\frac{yR}{2R^2+a^2-\left[\gamma^2(x-vt)^2+y^2+z^2\right]}~, \label{partial-y} \\
\frac{\partial R}{\partial z}=\frac{z(R^2+a^2)}{R\left\{2R^2+a^2-\left[\gamma^2(x-vt)^2+y^2+z^2\right]\right\}}~. \label{partial-z}
\end{eqnarray}
For simplicity, we consider only the propagation of test particles confined to the equatorial plane $(z=0)$ of the gravitational lens, and therefore there is one Killing vector field $\left(\frac{\partial }{\partial z}=0\right)$ in the moving KN geometry. After a tedious but straightforward calculation of the nonvanishing components of the affine connection, as shown in~\ref{A}, we obtain the explicit geodesic equations up to \emph{2PM} order as follows
{\small\begin{eqnarray}
\hspace*{-70pt}\nn0=\ddot{t}+\frac{v\,\gamma^3\,\dot{t}^{\hspace*{1pt} 2}\,X}{R^2}\left[-\frac{(1+v^2)\,M}{R}+\frac{(v^2-4)\,M^2+Q^2}{R^2}
+\frac{v^2(M^2-Q^2)\,(y^2-X^2)}{R^4}+\frac{6\,v\,aMy}{R^3}\right] \\
\hspace*{-70pt}\nonumber+\frac{\gamma^3\,\dot{x}^2X}{R^2}\left\{\frac{v\,(\,v^2\!-\!3\,)\,M}{R}+\frac{v\,[\,(1\!-\!4v^2)\,M^2\!+\!(\,2\!-\!v^2\,)\,Q^2\,]}{R^2}
\!+\!\frac{v\,(M^2\!-\!Q^2)\,(y^2\!-\!X^2)}{R^4}\!+\!\frac{6\,a\,M\,y}{R^3}\right\} \\
\hspace*{-70pt}\nonumber+\frac{2\gamma^3\,\dot{t}\,\dot{x}X}{R^2}\!\left[\frac{(1\!+\!v^2)M}{R}\!+\!\frac{3v^2M^2\!-\!Q^2}{R^2}
\!-\!\frac{v^2(M^2\!-\!Q^2)(y^2\!-\!X^2)}{R^4}\!-\!\frac{6\,v a M y}{R^3}\right]\!+\!\frac{2(1\!+\!v^2)\gamma^2M\dot{t}\,\dot{y}\,y}{R^3} \\
\hspace*{-70pt}-\frac{4\,v\gamma^2\,M\,\dot{x}\,\dot{y}\,y}{R^3}~,\label{geodesic-t}
\end{eqnarray}
\begin{eqnarray}
\hspace*{-70pt}\nn0=\ddot{x}\!+\!\frac{\gamma^3\hspace*{1.5pt}\dot{t}^{\hspace*{1pt} 2}X}{R^2}\!\left[\frac{(1\!-\!3v^2)M}{R}\!+\!\frac{(v^2-4)M^2\!+\!(2v^2\!-\!1)Q^2}{R^2}
+\frac{v^2(M^2\!-\!Q^2)\,(y^2\!-\!X^2)}{R^4}\!+\!\frac{6v^3aMy}{R^3} \right] \\
\hspace*{-70pt}\nonumber +\frac{\gamma^3\hspace*{1.5pt}\dot{x}^2X}{R^2}\!\!\left[-\frac{(1\!+\!v^2)M}{R}\!+\!\frac{(1\!-\!4v^2)M^2\!+\!v^2Q^2}{R^2}
\!+\!\frac{(M^2\!-\!Q^2)\,(y^2\!-\!X^2)}{R^4}\!+\!\frac{6vaMy}{R^3}\right]\!+\!\frac{2\,v\,\gamma^3\,\dot{t}\,\dot{x}X}{R^2}\!\times\! \\
\hspace*{-70pt} \left[\frac{(1\!+\!v^2)\,M}{R}\!+\!\frac{3M^2\!-\!v^2\,Q^2}{R^2}\!-\!\frac{(M^2\!-\!Q^2)\,(\hspace*{1.5pt}y^2\!-\!X^2\hspace*{1.5pt})}{R^4}
\!-\!\frac{6\,v\,aMy}{R^3}\right]+\frac{2\,\gamma\,M\,(v\dot{X}_0\!-\!\dot{X})\,\dot{y}\,y}{R^3} ~,~ \label{geodesic-x}
\end{eqnarray}
\begin{eqnarray}
\hspace*{-70pt}\nn0=\ddot{y}+\dot{t}^{\hspace*{1pt} 2}
\left\{\frac{\gamma^2\,\,y}{R^2}\left[\frac{(1+v^2\,)\,M}{R}\!-\!\frac{(4+v^2\,)\,M^2+Q^2}{R^2}\!+\!\frac{v^2\,(M^2\!-\!Q^2\,)\,\,(y^2\!-\!X^2)}{R^4}\right]
\!-\!\frac{2\,v\,\gamma^2\,aM}{R^3}\right\} \\
\hspace*{-70pt}\nonumber+\,\,\dot{x}^2\left\{\,\frac{\gamma^2\,\,y}{R^2}\left[\,\frac{(\,1+v^2\,)\,M}{R}\!-\!\frac{(\,1+4\,v^2\,)\,M^2+v^2\,Q^2}{R^2}
\!+\!\frac{(\,M^2\hspace*{-1.2pt}-\hspace*{-0.8pt}Q^2\,)\,\,(\,y^2\!-\!X^2\,)}{R^4}\,\right]\!-\!\frac{2\,v\,\gamma^2\,aM}{R^3}\,\right\} \\
\hspace*{-70pt} +\, 2\,\gamma^2\,\dot{t}\,\dot{x}\!\left[-\frac{2vMy}{R^3}\!+\!\frac{v(5M^2\!+Q^2)y}{R^4}
\!-\!\frac{v(M^2\!-\!Q^2)(y^2\!-\!X^2)y}{R^6}\!+\!\frac{(1\!+\!v^2)aM}{R^3}\right]\!-\!\frac{2M\dot{X}\dot{y}X}{R^3}~, \label{geodesic-y}
\end{eqnarray}}
where dots denote derivatives with respect to $p$, a parameter describing the trajectory, and $\dot{y}$ has been assumed to be of the order of $\Phi$. Note that Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y}) correspond respectively to the $t$-, $x$-, and $y$-components of the geodesic equations and that the motion is restricted to the equatorial plane. Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y}) can also be obtained via the Euler-Lagrange method, as shown in~\ref{B}.
For the case with $v=0$, Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y}) reduce to the geodesic equations of test particles in the field of a non-moving KN black hole
{\small
\begin{eqnarray}
\hspace*{-70pt}0=\ddot{t}+\frac{2\,\dot{t}\,\left[\,(MR-Q^2)\,\dot{x}\,x+M\,R\,\dot{y}\,y\,\right]}{R^4}+\frac{6\,a\,M\,\dot{x}^2\,x\,y}{R^5}~, \label{geodesic-t-SKN} \\
\hspace*{-70pt}0=\ddot{x}+\frac{\dot{t}^{\hspace*{1pt} 2}x}{R^2}\left(\frac{M}{R}\!-\!\frac{4M^2\!+\!Q^2}{R^2}\right)
\!+\!\frac{\dot{x}^2x}{R^2}\!\left[-\frac{M}{R}\!+\!\frac{M^2}{R^2}\!+\!\frac{(M^2\!-\!Q^2)\,(y^2\!-\!x^2)}{R^4}\right]
\!-\!\frac{2\,M\dot{x}\,\dot{y}\,y}{R^3}~,~\label{geodesic-x-SKN} \\
\nonumber\hspace*{-70pt}0=\ddot{y}+\frac{\dot{t}^{\hspace*{1pt} 2}y}{R^2}\left(\frac{M}{R}\!-\!\frac{4M^2\!+\!Q^2}{R^2}\right)
+\frac{\dot{x}^2y}{R^2}\left[\frac{M}{R}\!-\!\frac{M^2}{R^2}\!+\!\frac{(M^2\!-\!Q^2)\,(y^2\!-\!x^2)}{R^4}\right]
\!-\!\frac{2M\dot{x}\dot{y}x}{R^3}\!+\!\frac{2aM\dot{t}\dot{x}}{R^3}~, \\ \label{geodesic-y-SKN}
\end{eqnarray}}
where $R$ reduces to $\sqrt{x^2+y^2-a^2}$ and can be approximated by $\sqrt{x^2+y^2}$ in the computation of the deflection up to second order.
For a Schwarzschild black hole as the gravitational source $(v=a=Q=0)$, Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y}) simplify to
{\small
\begin{eqnarray}
\hspace*{-70pt}0=\ddot{t}+\frac{2M\,\dot{t}\,(x\,\dot{x}+y\,\dot{y})}{R^3}~, \label{geodesic-t-SQKN} \\
\hspace*{-70pt}0=\ddot{x}+\frac{M\,x\!\left[\,(R-4M)\,\dot{t}^{\hspace*{1pt} 2}\!-\!(R\!-\!M)\,\dot{x}^2\,\right]}{R^4}\!-\!\frac{2\,M\,\dot{x}\,\dot{y}\,y}{R^3}
+\frac{M^2\,\dot{x}^2\,(y^2-x^2)\,x}{R^6}~,~~~~~~\label{geodesic-x-SQKN} \\
\hspace*{-70pt}0=\ddot{y}+\frac{M\,y \left[(R-4\,M)\,\dot{t}^{\hspace*{1pt} 2}+(R\!-\!M)\,\dot{x}^2\right]}{R^4}\!-\!\frac{2\,M\,\dot{x}\,\dot{y}\,x}{R^3}
\!+\!\frac{M^2\,\dot{x}^2\,(y^2-x^2)\,y}{R^6}~. \label{geodesic-y-SQKN}
\end{eqnarray}}
\section{Gravitational deflection of test particles due to a moving Kerr-Newman black hole} \label{velocityeffects}
In this section, we numerically study the influence of the motion of the Kerr-Newman black hole, especially relativistic velocity effects, on the propagation of test particles including light. We will concentrate on the kinematical corrections to the second-order deflection, since the \emph{1PM} gravitational deflection has been investigated in detail~\cite{WuckSperh2004}.
\subsection{Notations and basics of numerical simulations} \label{notation}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{Figure1.eps}
\caption{Schematic diagram for gravitational deflection of test particles by a radially moving KN black hole with a constant velocity $\bm{v}=v\bm{e}_1$.
The starting position of a test particle at the time $t=0$ is assumed to be $(-\infty,-b,~0)$. In the numerical calculations, we replace the infinity $-\infty$ by $-x_{max}$, where $x_{max}~(>0)$ denotes the finite maximum value that $p$ takes in the simulations (for sufficiently large $x_{max}$).
The thick blue line represents the propagation path of a test particle coming from $p=-\infty$ with the initial velocity $\bm{w}|_{p\rightarrow-\infty}~(\approx \bm{w}|_{p\rightarrow-x_{max}})~=w\bm{e}_1~$ ($0<w\leq1$ and $v < w$). As mentioned above, the angular momentum vector $\bm{J}$ of the moving source is along the positive $z-$axis $(a>0)$, and thus the test particle undergoes prograde motion relative to the source's spin. The deflection angle $\alpha$ (greatly exaggerated), detected in the background's rest frame $(t,~x,~y,~z)$, is positive since $Y|_{p\rightarrow-\infty}~(\approx Y|_{p\rightarrow-x_{max}})~=-b<0$. } \label{Figure1}
\end{center}
\end{figure*}
The initial velocity of a test particle is assumed to be $\bm{w}$, and the impact parameter is denoted by $b$. The schematic diagram for gravitational deflection of test particles caused by the moving KN black hole is shown in Fig.~\ref{Figure1}. In order to investigate the kinematical correction effects, we assume the general form for the gravitational deflection angle of test particles including light up to second order due to the moving KN black hole as
\begin{equation}
\hspace*{-70pt}\alpha(v, \,w)=N_1(v, \,w)\frac{4M}{b}\!+\!N_2(v, \,w)\frac{15\pi}{4}\frac{M^2}{b^2}
\!-\!N_3(v, \,w)\frac{4Ma}{b^2}\!-\!N_4(v, \,w)\frac{3\pi}{4}\frac{Q^2}{b^2} ~, \label{TDAngle0}
\end{equation}
which is based on the analytical formulae in the previous works~\cite{ChakSen2014,ERT2002,ERTprivate,Bhadra2003,EpstShapi1980,EderGod2006}. Here the two-variable function
$N_i(v, \,w)~(i=1,~2,~3,$ or $4)$ represents the kinematical coefficient to characterize the effects of velocities of both the gravitational source and test particle, similar to the definitions in the previous works \cite{WuckSperh2004,LinHe2014}. Notice that for the case of light deflection by a stationary KN black hole ($v=0$ and $w=1$),
$N_1(0, \,1)=N_2(0, \,1)=N_3(0, \,1)=N_4(0, \,1)=1$, and Eq.~(\ref{TDAngle0}) reduces to ~\cite{ChakSen2014}
\begin{equation}
\alpha(0, \,1)=\frac{4M}{b}+\frac{15\pi}{4}\frac{M^2}{b^2}-\frac{4Ma}{b^2}-\frac{3\pi}{4}\frac{Q^2}{b^2} ~. \label{TDAngle1}
\end{equation}
In the numerical simulations, there are six boundary conditions (or starting conditions)
\begin{eqnarray}
\hspace*{-70pt}\left.t(p)\right|_{p=-x_{max}}=-\frac{x_{max}}{w}~,~~~~~~~~\left.x(p)\right|_{p=-x_{max}}=-x_{max}~,~~~~~~~~\left.y(p)\right|_{p=-x_{max}}=-b ~,~~~~ \label{initial-condition-1} \\
\hspace*{-70pt}\left.\dot{t}(p)\right|_{p=-x_{max}}=\frac{1}{w}~,~\hspace*{42pt}~\left.\dot{x}(p)\right|_{p=-x_{max}}=1~,
~\hspace*{46pt}~\left.\dot{y}(p)\right|_{p=-x_{max}}=0~. \label{initial-condition-2}
\end{eqnarray}
The computation domain for the trajectory parameter is set as $p \in [-x_{max},~x_{max}]$. The value of the parameter $b$ is chosen as $1.0\times10^{5}M$ to guarantee a weak field, and $x_{max}$ is much larger than $b$ and chosen as $1.0\times10^{10}M~(=1.0\times10^{5}b)$. The mass $M$ of the gravitational source is set as
$2.5\times10^{6}M_{\odot}~(\sim 3.6875\times10^6 km)$ which is close to the mass of Sagittarius A$^*$ at the Galactic center~\cite{BS1999,HRRTCM1996,EckaGen1997,Narayan1998}, where $M_{\odot}$ is the mass of the sun. Notice that the conclusions below are independent of this specifically chosen value of the gravitational mass. The deflection angle of a test particle can be numerically calculated by integrating the geodesic equations (i.e., Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y})) as follows
\begin{equation}
\alpha(v,~w)_{N}=\left(\arctan{\dot{y}}\right)_{p\rightarrow x_{max}}=\left.\arctan{\frac{dy}{dp}}\right|_{p\rightarrow x_{max}}~. \label{Numerical-Angle}
\end{equation}
Here and hereafter, a quantity with the subscript $N$ denotes the value obtained by numerical calculation. We employ Mathematica for all calculations,
choosing $AccuracyGoal=39$ and $PrecisionGoal=13$ and using the routines {\tt \emph{NDSolve}} and {\tt \emph{ParametricNDSolve}} to solve the equations of motion.
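As a rough illustration of this procedure (and not the Mathematica code actually used for the results below), the following Python sketch integrates the static Schwarzschild limit of the geodesic equations, Eqs.~(\ref{geodesic-t-SQKN}) - (\ref{geodesic-y-SQKN}), with initial data of the form of Eqs.~(\ref{initial-condition-1}) and (\ref{initial-condition-2}), and reads off the deflection angle via Eq.~(\ref{Numerical-Angle}); the cutoff $x_{max}$ and the tolerances are reduced relative to the values quoted above purely to keep the run short, and all numerical settings here are illustrative.
\begin{verbatim}
# Illustrative sketch (not the authors' code): integrate the static Schwarzschild
# geodesic equations for a light ray (w = 1) and compare the resulting deflection
# with the second-order analytical benchmark.  All parameter values are examples.
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0            # lens mass in geometric units (G = c = 1)
b = 1.0e5 * M      # impact parameter
x_max = 1.0e8 * M  # finite stand-in for infinity (the paper uses 1e10 M)
w = 1.0            # initial particle speed (w = 1 for light)

def rhs(p, u):
    """u = (t, x, y, tdot, xdot, ydot); du/dp from the geodesic equations."""
    t, x, y, td, xd, yd = u
    R = np.hypot(x, y)
    tdd = -2.0 * M * td * (x * xd + y * yd) / R**3
    xdd = (-M * x * ((R - 4*M) * td**2 - (R - M) * xd**2) / R**4
           + 2.0 * M * xd * yd * y / R**3
           - M**2 * xd**2 * (y**2 - x**2) * x / R**6)
    ydd = (-M * y * ((R - 4*M) * td**2 + (R - M) * xd**2) / R**4
           + 2.0 * M * xd * yd * x / R**3
           - M**2 * xd**2 * (y**2 - x**2) * y / R**6)
    return [td, xd, yd, tdd, xdd, ydd]

u0 = [-x_max / w, -x_max, -b, 1.0 / w, 1.0, 0.0]   # starting conditions
sol = solve_ivp(rhs, (-x_max, x_max), u0, rtol=1e-10, atol=1e-12)

alpha_num = np.arctan(sol.y[5, -1])        # alpha = arctan(ydot) at p -> x_max
alpha_th = 4*M/b + 15*np.pi/4 * (M/b)**2   # static Schwarzschild benchmark
print(alpha_num, alpha_th)
\end{verbatim}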
\subsection{Gravitational deflection of light up to second order}
For the light deflection, the coefficient $N_i(v, \,w)$ in Eq.~(\ref{TDAngle0}) reduces to $N_i(v, \,1)$, and the analytical forms for the first and third coefficients have been given in previous works~\cite{WuckSperh2004,LinHe2014,Sereno2005}: $N_1(v, \,1)=N_3(v, \,1)=(1-v)\gamma$. Therefore, we concentrate on $N_2(v, \,1)$ and $N_4(v, \,1)$, corresponding to the contributions by the second-order moving-Schwarzschild deflection and the charge-induced deflection respectively.
\subsubsection{Determination of kinematical coefficient $N_2(v, \,1)$} \label{N2}
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{Figure2.eps}
\caption{Comparison between the numerical result (long dashed red line) of $N_2(v, \,1)$ and the analytical coefficient $N_1(v, \,1)=(1-v)\gamma$ (short dashed blue line). } \label{Figure2}
\end{center}
\end{figure*}
The second-order Schwarzschild deflection $\frac{15\pi}{4}\frac{M^2}{b^2}$ is larger than the second-order Kerr term $\frac{4Ma}{b^2}$, hence the kinematical correction to the former is likely to be more pronounced than that to the latter. In order to determine $N_2(v, \,1)$, we can numerically integrate
Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y}) with $a=Q=0$, utilize the explicit form of $N_1(v, \,1)$, and finally express $N_2(v, \,1)_N$ as follows
\begin{equation}
N_2(v, \,1)_N=\frac{\alpha(v, \,1)_{N-SS}-\frac{4(1-v)\gamma M}{b}}{\frac{15\pi M^2}{4b^2}}~, \label{N2-1}
\end{equation}
where $\alpha(v, \,1)_{N-SS}$ denotes the numerical result of light deflection angle up to second order due to a moving-Schwarzschild source.
Fig.~\ref{Figure2} presents the numerical result of $N_2(v, \,1)$ for various velocities $v$. For comparison, $N_1(v, \,1)$ is also plotted in the figure. Surprisingly, we find that
$N_2(v, \,1)_N$ is consistent with $N_1(v, \,1)$, and the relative error is less than $0.01\%$. In other words, the kinematical coefficient in the second-order moving-Schwarzschild contribution is the same as that in the first-order term, i.e., $N_2(v, \,1)=N_1(v,~1)=(1-v)\gamma$.
\subsubsection{Determination of kinematical coefficient $N_4(v, \,1)$} \label{N4}
\begin{figure*}[t]
\setlength{\unitlength}{1cm}
\begin{center}
\begin{minipage}[b]{12.1cm}
\centering
\includegraphics[width=12.1cm]{Figure3.eps}
\end{minipage}
\caption{$N_4(v, \,1)_N$ (long dashed red line) plotted to compare with $N_1(v, \,1)=N_2(v, \,1)=(1-v)\gamma$ (short dashed blue line), with $Q=0.99M$. Notice that here $Q$ should be chosen as large as possible to reduce the computational error, though any value in the range $(0,M)$ can be chosen theoretically. } \label{Figure3}
\end{center}
\end{figure*}
The charge $Q$ of the black hole can also induce a gravitational deflection of test particle~\cite{ERT2002}. Similarly, we determine $N_4(v, \,1)$ by numerically solving the \emph{2PM} geodesic equations of light in the field of a moving Reissner-Nordstr\"{o}m (RN) black hole, i.e., integrating Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y}) with $a=0$. Based on the explicit forms of $N_1(v, \,1)$ and $N_2(v, \,1)$, the numerical result of $N_4(v, \,1)$ can be expressed as
\begin{equation}
N_4(v,~\!1)_N=\frac{\alpha(v,~1)_{N-RN}-(1-v)\gamma\left(\frac{4M}{b}+\frac{15\pi}{4}\frac{M^2}{b^2}\right)}{-\frac{3\pi Q^2}{4b^2}}~, \label{N4-1}
\end{equation}
where $\alpha(v,~\!1)_{N-RN}$ denotes the numerical result of light deflection angle up to the second order caused by the moving RN source.
Considering the fact that $N_1(v, \,1)=N_2(v, \,1)=(1-v)\gamma$, we conjecture $N_4(v, \,1)$ might also be $(1-v)\gamma$. Fig.~\ref{Figure3} shows the comparison between $N_4(v, \,1)_N$ and $(1-v)\gamma$, and it can be seen that they match with each other perfectly. Therefore we have $N_4(v, \,1)=(1-v)\gamma$.
\subsubsection{Light deflection angle up to second order}
\begin{table}[t]
\scriptsize
\begin{center}
\begin{tabular}{ccccc}
\hline
$v$ & $\alpha(v, \,1)_N~(rad)$ & $\alpha(v, \,1)~(rad)$ & $\xi_{total} (\%)$ \\
\hline
0.9 & 0.000009176840208 & 0.000009176840232 & 2.62$\times10^{-7}$ \\
0.5 & 0.000023094541442 & 0.000023094541464 & 9.22$\times10^{-8}$ \\
0.1 & 0.000036182192760 & 0.000036182192790 & 8.39$\times10^{-8}$ \\
0.01 & 0.000039602890160 & 0.000039602890194 & 8.77$\times10^{-8}$ \\
0.001 & 0.000039960938218 & 0.000039960938254 & 8.84$\times10^{-8}$ \\
$0.00001$ & 0.000040000519150 & 0.000040000519185 & 8.84$\times10^{-8}$ \\
0 & 0.000040000919157 & 0.000040000919192 & 8.82$\times10^{-8}$ \\
$-0.00001$ & 0.000040001319168 & 0.000040001319204 & 8.84$\times10^{-8}$ \\
$-$0.001 & 0.000040040940096 & 0.000040040940132 & 8.88$\times10^{-8}$ \\
$-$0.01 & 0.000040402948546 & 0.000040402948582 & 8.94$\times10^{-8}$ \\
$-$0.1 & 0.000044222680035 & 0.000044222680077 & 9.57$\times10^{-8}$ \\
$-$0.5 & 0.000069283624270 & 0.000069283624391 & 1.75$\times10^{-7}$ \\
$-$0.9 & 0.000174359962643 & 0.000174359964408 & 1.01$\times10^{-6}$ \\
\hline
\end{tabular}\par
\caption{The relative difference between the analytical and numerical results for the light deflection angle. $\alpha(v, \,1)_N$ is defined in Eq.~(\ref{Numerical-Angle}). $\xi_{total}$ denotes the relative difference (or relative error) between them. As an example, here we set $a=Q=0.5M$ in the numerical simulation.} \label{Table1}
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=12.72cm]{Figure4.eps}
\caption{The trajectories of light in the time-dependent gravitational field of the moving KN black hole for various $v$, with $a=Q=0.5M$ (as an example).} \label{Figure4}
\end{center}
\end{figure*}
From the discussions above, the light deflection angle up to second order due to a constantly radially moving Kerr-Newman black hole can be written as
\begin{equation}
\alpha(v, \,1)=(1-v)\gamma\left(\frac{4M}{b}+\frac{15\pi}{4}\frac{M^2}{b^2}-\frac{4Ma}{b^2}-\frac{3\pi}{4}\frac{Q^2}{b^2}\right)~. \label{TDAngle2}
\end{equation}
It should be emphasized that this formula is obtained based on numerical calculations and still needs to be confirmed by an analytical calculation. Table~\ref{Table1} gives the comparison between the analytical and numerical results for the moving KN deflection angle of light, and we can see that their difference is very small (the average difference is about $0.01\mu$as). Fig.~\ref{Figure4} shows the propagation paths of light in the time-dependent field of the moving KN black hole for various $v$. It can be seen that the correction effects become distinguishable when the velocity $v$ of the moving source is relativistic, such as $|v|\gtrsim 0.02$, and become very pronounced for highly relativistic motion (e.g., $|v|>0.5$).
Eq.~(\ref{TDAngle2}) indicates that the kinematical correction factor $(1-v)\gamma$ applies not only to the first-order gravito-electric deflection~\cite{WuckSperh2004},
but also to the second-order gravitational deflection, including the second-order gravito-electric, gravito-magnetic, and charge-induced deflections. In the limit of
low velocity ($|v|\ll1$), Eq.~(\ref{TDAngle2}) reduces to
\begin{equation}
\hspace*{-70pt}\alpha(v, \,1)=\frac{4M}{b}\!+\!\frac{15\pi}{4}\frac{M^2}{b^2}\!-\!\frac{4Ma}{b^2}\!-\!\frac{3\pi}{4}\frac{Q^2}{b^2}
\!-v\left(\frac{4M}{b}\!+\!\frac{15\pi}{4}\frac{M^2}{b^2}\!-\!\frac{4Ma}{b^2}\!-\!\frac{3\pi}{4}\frac{Q^2}{b^2}\right)~,~\label{TDAngle2-2}
\end{equation}
which extends the previous kinematical correction result obtained in the \emph{FOV} and \emph{FOD} approximations~\cite{PB1993,WuckSperh2004,FKN2002,MB2003}
\begin{equation}
\alpha(v, \,1)=\frac{4M}{b}-\frac{4vM}{b}~. \label{TDAngle2-3}
\end{equation}
\subsection{Gravitational deflection of massive particle up to second order}
In this section, we investigate the gravitational deflection of massive particles up to second order, and discuss the kinematical effects on the massive particle deflection due to the moving KN black hole. The small-angle and weak-field approximations are used to restrict the initial velocity $w\in[w_{min},~1)$, where $w_{min} (>0)$ denotes the lower limit of $w$ and depends on the impact parameter $b$. For example, $w$ should satisfy the condition $w\gtrsim 0.36$, supposing ${b}$ and the small deflection angle $\alpha$ are set to be $1.0\times10^{5}M$ and $0.01^{\circ}$, respectively, since to leading order $\alpha\approx 2\left(1+1/w^2\right)M/b$.
\subsubsection{Massive particle deflection by a non-moving Schwarzschild black hole}
In the literature, there exist two different analytical formulae for the Schwarzschild deflection of a massive particle up to second order, as follows~\cite{AccRagu2002,BSN2007}
\begin{eqnarray}
\hspace*{-70pt}\alpha(0, \,w)_{AR}=2\left(1+\frac{1}{w^2}\right)\frac{M}{b}+3\pi\left(\frac{1}{4}+\frac{1}{w^2}\right)\frac{M^2}{b^2}~,\\
\hspace*{-70pt}\alpha(0, \,w)_{BSN}=2\left(1+\frac{1}{w^2}\right)\frac{M}{b}+\left[3\pi\left(\frac{1}{4}+\frac{1}{w^2}\right)+2\left(1-\frac{1}{w^4}\right)\right]\frac{M^2}{b^2}~,
\end{eqnarray}
where $\alpha(0, \,w)_{AR}$ and $\alpha(0, \,w)_{BSN}$ denote the analytical formulations given in Refs.~\cite{AccRagu2002} and~\cite{BSN2007}, respectively. The difference is in the second-order terms. Here we can numerically solve the \emph{2PM} geodesic equations of a massive particle in the Schwarzschild spacetime, i.e., Eqs.~(\ref{geodesic-t-SQKN}) - (\ref{geodesic-y-SQKN}), to examine the reported formulae.
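Before turning to the numerics, the two formulae can also be evaluated directly; the following minimal Python snippet (illustrative only, with $M/b=10^{-5}$ as in our simulations) reproduces the analytical columns of Table~\ref{Table2}.
\begin{verbatim}
# Minimal check (illustrative, not the authors' code) of the two competing
# second-order Schwarzschild formulae for a massive particle; M/b = 1e-5.
import numpy as np

def alpha_AR(w, M_over_b=1.0e-5):
    return (2*(1 + 1/w**2) * M_over_b
            + 3*np.pi*(0.25 + 1/w**2) * M_over_b**2)

def alpha_BSN(w, M_over_b=1.0e-5):
    return alpha_AR(w, M_over_b) + 2*(1 - 1/w**4) * M_over_b**2

for w in (0.99, 0.9, 0.5, 0.36):
    print(w, alpha_AR(w), alpha_BSN(w))   # cf. the analytical columns of Table 2
\end{verbatim}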
\begin{table}[t]
\scriptsize
\begin{center}
\begin{tabular}{cccccc}
\hline
w & $\alpha(0, \,w)_{N}$ & $\alpha(0, \,w)_{AR}$ & $\alpha(0, \,w)_{BSN}$ & $\xi_{AR}$ (\%) & $\xi_{BSN}$ (\%) \\
\hline
0.99 & 0.000040407278237 & 0.000040407278246 & 0.000040407270041 & 2.12$\times10^{-8}$ & 2.03$\times10^{-5}$ \\
0.9 & 0.000044692757184 & 0.000044692757197 & 0.000044692652365 & 2.96$\times10^{-8}$ & 2.35$\times10^{-4}$ \\
0.5 & 0.000100004005252 & 0.000100004005531 & 0.000100001005531 & 2.79$\times10^{-7}$ & 3.00$\times10^{-3}$ \\
0.36 & 0.000174328493448 & 0.000174328495479 & 0.000174316787995 & 1.16$\times10^{-6}$ & 6.72$\times10^{-3}$ \\
\hline
\end{tabular}\par
\caption{The comparison among numerical and theoretical Schwarzschild deflection angles of a massive particle up to second order. $\xi_{AR}$ and $\xi_{BSN}$ denote the relative errors of $\alpha(0, \,w)_{AR}$ and $\alpha(0, \,w)_{BSN}$ with respect to the numerical result $\alpha(0, \,w)_{N}$, respectively. } \label{Table2}
\end{center}
\end{table}
Table~\ref{Table2} presents the comparison among $\alpha(0, \,w)_{AR}$, $\alpha(0, \,w)_{BSN}$, and the numerical result $\alpha(0, \,w)_N$ for various velocities $w$ of the massive particle. We can see that $\alpha(0, \,w)_{AR}$ agrees with $\alpha(0, \,w)_{N}$ much better than $\alpha(0, \,w)_{BSN}$ does. The difference is small since the first-order terms are dominant. Fig.~\ref{Figure5} shows the comparison among the second-order contributions given by $\alpha(0, \,w)_{AR}$, $\alpha(0, \,w)_{BSN}$, and $\alpha(0, \,w)_{N}$, respectively. We can see that $\alpha(0, \,w)_N$ matches $\alpha(0, \,w)_{AR}$ perfectly, and differs from $\alpha(0, \,w)_{BSN}$. Note that the numerical result is based on the harmonic Schwarzschild metric, which is different from the approaches in the previous works.
\begin{figure*}
\begin{center}
\includegraphics[width=12.65cm]{Figure5.eps}
\caption{ The comparison among the second-order contributions given by $\alpha(0, \,w)_{AR}$, $\alpha(0, \,w)_{BSN}$, and $\alpha(0, \,w)_{N}$, respectively.} \label{Figure5}
\end{center}
\end{figure*}
\subsubsection{Massive particle deflection by a non-moving Kerr black hole}
Based on the analytical formulations of Schwarzschild deflection of massive particle~\cite{AccRagu2002} and the second-order Kerr contribution~\cite{LinHe2014}, we can write down the deflection angle of a massive particle up to second order due to a stationary Kerr black hole as
\begin{equation}
\alpha(0, \,w)=2\left(1+\frac{1}{w^2}\right)\frac{M}{b}+3\pi\left(\frac{1}{4}+\frac{1}{w^2}\right)\frac{M^2}{b^2}-\frac{1}{w}\frac{4Ma}{b^2}~.\label{KERR-mass}
\end{equation}
Fig.~\ref{Figure6} presents the comparison between the analytical coefficient $N_3(0, \,w)=1/w$ and its numerical computation which is defined as
\begin{equation}
\hspace*{-70pt}N_3(0, \,w)_N=\frac{\left.\arctan{\frac{\partial y(w,~p)}{\partial p}}\right|_{p\rightarrow x_{max}} -2\left(1+\frac{1}{w^2}\right)\frac{M}{b}-3\pi\left(\frac{1}{4}+\frac{1}{w^2}\right)\frac{M^2}{b^2}}{-\frac{4Ma}{b^2}}~. \label{N-staticKerr-N3}
\end{equation}
It can be seen that the theoretical value of $N_3(0, \,w)$ matches with the numerical result very well.
\begin{figure*}
\begin{center}
\includegraphics[width=11.93cm]{Figure6.eps}
\caption{The comparison between the numerical result (long dashed red line) of the coefficient $N_3(0, \,w)$ and its theoretical value (short dashed blue line), with $a=0.99M$ as an example. } \label{Figure6}
\end{center}
\end{figure*}
\subsubsection{Massive particle deflection by a non-moving KN black hole}
The second-order charge-induced contribution to the gravitational deflection of a massive particle is characterized by the term $N_4(0, \,w)\frac{3\pi}{4}\frac{Q^2}{b^2}$. We can solve Eqs.~(\ref{geodesic-t-SKN}) - (\ref{geodesic-y-SKN}) to obtain the numerical value of the coefficient $N_4(0, \,w)$. Fig.~\ref{Figure7} shows $N_4(0, \,w)_N$ as a function of $w$. For comparison, $N_2(0, \,w)=\frac{1}{5}\left(1+\frac{4}{w^2}\right)$ and $N_3(0, \,w)=\frac{1}{w}$ are also given.
\begin{figure*}
\begin{center}
\includegraphics[width=11.6cm]{Figure7.eps}
\caption{$N_4(0, \,w)_N$ (thick green line) plotted to compare with the analytical coefficients $N_2(0, \,w)$ (dashed blue line) and $N_3(0, \,w)$ (thin black line). As an example, here we set $a=0.1M$ and $Q=0.99M$. } \label{Figure7}
\end{center}
\end{figure*}
\subsubsection{Massive particle deflection by a moving KN black hole}
For a general case $(a\neq0,~Q\neq0,~-1<v<1,~w_{min}\leq w<1)$, we can also numerically solve Eqs.~(\ref{geodesic-t}) - (\ref{geodesic-y}) and utilize Eq.~(\ref{Numerical-Angle}) to calculate the gravitational deflection angle $\alpha(v, \,w)_N$ up to second order. Fig.~\ref{Figure8} presents $\alpha(v, \,w)_N$ as a function of both $v$ and $w$ for the moving Kerr-Newman black hole with $a=Q=0.5M$ as an example.
\begin{figure*}
\begin{center}
\includegraphics[width=14cm]{Figure8.eps}
\caption{$\alpha(v, \,w)_N$ plotted as the function of two variables $v$ and $w$, with $a=Q=0.5M$.
Here we set the range of $v$ to be $-0.999\leq v\leq0.999~(v<w)$ in the simulations. } \label{Figure8}
\end{center}
\end{figure*}
\vspace{20pt}
Up to now, we have discussed the gravitational deflection of test particles up to second order by the moving KN source, with the help of numerical simulations. In the next section, we will analyze the possibility of detecting the kinematical corrections to the second-order deflection in astronomical observations.
\section{Possibilities to detect the kinematically correctional effects} \label{application}
The techniques of high-accuracy angle measurement at the level of $\mu$as have been achieved in present-day astronomical projects. The accuracy of ESA's telescope \emph{GAIA}~\cite{Gaia2015} is about $7\mu$as and $25\mu$as for V magnitude $=12$ and $15$, respectively~\cite{MCM2007}. In contrast to \emph{GAIA}, the proposed project \emph{SIM} was designed to achieve a higher accuracy of $\sim1\mu$as for narrow-angle measurements, though it was cancelled. As mentioned above, \emph{NEAT} plans to achieve a much higher accuracy than \emph{SIM}. These high-accuracy astronomical surveys greatly promote the theoretical investigation of the detectable kinematical effects (especially relativistic motion effects) which appear in the leading high-order terms of classic tests of general relativity, such as the time delay~\cite{HBL2014a} and frequency shift~\cite{HBL2014a,HBL2012} of electromagnetic waves. The theoretical model established in this work shows the possibility of observing the velocity effects on the second-order gravitational deflection.
We first consider light deflection. Table~\ref{Table3} gives the magnitude of the kinematical correction
$\Delta(v, \,a, \,Q)=\left[1-(1\!-\!v)\gamma\right]\left(\frac{15\pi}{4}\frac{M^2}{b^2}\!-\!\frac{4Ma}{b^2}\!-\!\frac{3\pi}{4}\frac{Q^2}{b^2}\right)$ to the second-order deflection given in Eq.~(\ref{TDAngle2}). It is found that the correction effect $\Delta(v, \,a, \,Q)$ on the second-order deflection may be larger than the accuracy of \emph{NEAT}, even when the source is nonrelativistic (not to mention the relativistic case). For example, when the velocity of a moving Schwarzschild source ($a=Q=0$) is about $2.058\times10^{-4}$ ($\sim61.7~km/s$), the kinematical correction to the second-order deflection angle will reach $0.05\mu$as. This velocity is lower than the velocities of many celestial bodies, such as the star $\mu$ Cas (space velocity $\sim145 km/s$)~\cite{KK1953}, the X-ray point source RX J$0822-4300$ (recoil velocity $>500 km/s$)~\cite{HB2006,BPWP2012}, and the pulsars B$2224+65$ (transverse velocity $\geq800 km/s$)~\cite{CRL1993} and B$1508+55$ (transverse velocity $\sim1083^{+103}_{-90} km/s$)~\cite{CVB2005}. The heliocentric radial velocity ($\sim620 km/s$) of the first hypervelocity star in the \emph{Large Sky Area Multi-Object Fiber Spectroscopic Telescope} \emph{(LAMOST)} survey~\cite{ZZCP2012,CZC2012,Zheng2014} is much larger than this velocity. Even the circular velocity ($\sim220 km/s$) of the Sun around the Galactic center exceeds it. Therefore, there is a good possibility of detecting the nonrelativistic kinematical effects on the second-order light deflection with high-accuracy telescopes such as \emph{NEAT}.
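As a simple cross-check of these estimates (illustrative code, not used to obtain the results above), the correction $\Delta(v, \,a, \,Q)$ and the threshold velocity at which it reaches the assumed \emph{NEAT} accuracy of $0.05\mu$as can be evaluated as follows; the radian-to-$\mu$as conversion and the root finder are the only ingredients added here.
\begin{verbatim}
# Hedged numerical check of Delta(v, a, Q) for the second-order light deflection
# and of the source velocity at which it reaches 0.05 muas; M/b = 1e-5.
import numpy as np
from scipy.optimize import brentq

RAD_TO_MUAS = 180.0 / np.pi * 3600.0 * 1.0e6   # radians -> microarcseconds

def delta_correction(v, a_over_M, Q_over_M, M_over_b=1.0e-5):
    """[1 - (1-v)*gamma] * (15pi/4 M^2/b^2 - 4Ma/b^2 - 3pi/4 Q^2/b^2), in muas."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    second_order = (15*np.pi/4 * M_over_b**2
                    - 4 * a_over_M * M_over_b**2
                    - 3*np.pi/4 * Q_over_M**2 * M_over_b**2)
    return (1.0 - (1.0 - v) * gamma) * second_order * RAD_TO_MUAS

print(delta_correction(0.9, 0.0, 0.0))    # ~187 muas, cf. Table 3
# velocity at which the correction reaches 0.05 muas for a Schwarzschild source
v_min = brentq(lambda v: delta_correction(v, 0.0, 0.0) - 0.05, 1e-8, 0.9)
print(v_min)                              # ~2.06e-4, i.e. ~62 km/s
\end{verbatim}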
\begin{table}[t]
\scriptsize
\begin{center}
\begin{tabular}{cccccccc}
\hline
v~$\setminus$~(\,a,~Q\,) & (\,0.99M,~0\,) & (\,0.99M,~0.1M\,) & (\,0.5M,~0.5M\,) & (\,0.1M,~0.99M\,) & (\,0,~0.99M\,) & (\,0,~~0\,) \\
\hline
0.9 & 124.31 & 123.94 & 146.10 & 144.19 & 150.55 & 187.25 \\
0.1 & 15.40 & 15.35 & 18.10 & 17.86 & 18.65 & 23.20 \\
0.01 & 1.61 & 1.60 & 1.89 & 1.86 & 1.94 & 2.42 \\
0.001 & 0.16 & 0.16 & 0.19 & 0.19 & 0.20 & 0.24 \\
0.0001 & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ & $\star$ \\
\hline
\end{tabular}\par
\caption{The magnitude ($\mu$as) of the kinematical correction $\Delta(v, \,a, \,Q)$ to the second-order light deflection for various $v$. Several combinations of $a$ and $Q$ are listed as examples. The star ``$\star$'' denotes a value less than $0.05\mu$as (the accuracy of \emph{NEAT}). Notice that here we present the cases with a high-magnitude charge mainly for illustration, since in most cases any original charge of a black hole in the Universe is expected to have been neutralized or to have become very small by now. } \label{Table3}
\end{center}
\end{table}
We then consider a massive particle with a relativistic initial velocity as the test particle, such as a high-speed neutron in secondary cosmic rays. We can numerically estimate the magnitude of the kinematical correction $\Delta(v, \,w, \,a, \,Q)$ to the second-order massive particle deflection. We take a neutron with $w\!=\!0.5$ and set $a\!=\!Q\!=\!0.5M$ as an example. It is found that $\Delta(v, \,0.5, \,a, \,Q)$ is about $94.68 \mu$as,~$0.70 \mu$as,~$0.07 \mu$as ($>0.05 \mu$as) for $v=0.1,~0.001,~0.0001$ ($\sim 30~km/s$), respectively. We conclude that the prospects for detecting the nonrelativistic correction effects on the second-order deflection of massive particles with these high-accuracy telescopes are also good.
\section{Summary} \label{conclusion}
In this paper, starting from the \emph{2PM} harmonic metric of the radially moving KN black hole, we have derived the explicit equations of motion and investigated the gravitational deflection of test particles including light up to second order, based on high-accuracy numerical calculations. We focus on discussing the detectable kinematical effects
(including both relativistic and nonrelativistic correction effects) on the second-order deflection.
The main results are summarized as follows. Firstly, we obtain the analytical form for the gravitational deflection angle of light up to second order due to the moving KN source (see Eq.~(\ref{TDAngle2})). Secondly, our numerical calculations verify the analytical formula (given in Ref.~\cite{AccRagu2002}) for the Schwarzschild deflection of a massive particle up to second order. Thirdly, the analytical massive-particle deflection angle up to second order in the Kerr geometry is obtained (see Eq.~(\ref{KERR-mass})). Fourthly, our numerical approach can be used to calculate the deflection angle of a massive particle up to second order due to the moving KN source. Finally, the possibilities of detecting the kinematical effects with high-resolution astronomical surveys such as \emph{NEAT} are also discussed.
\section*{ACKNOWLEDGEMENT}
We would like to thank the anonymous reviewers for their constructive comments and suggestions on improving the quality of this paper. This work was supported in part by the National Natural Science Foundation of China (Grant No. 11547311), the National Basic Research Program of China (Grant No. 2013CB328904) and the Fundamental Research Funds for the Central Universities (No. 2682014ZT32).
|
1,116,691,498,593 | arxiv | \section{Introduction}
ATLAS and CMS collaborations recently announced a small enhancement over smooth
background of two photon events with invariant mass
750~GeV~\cite{atlas-750gg,cms-750gg}. Though statistical significance of this
enhancement is not large (within 3 standard deviations), it induced a whole
bunch of theoretical papers devoted to its interpretation. The reason
for this explosive activity is clear: maybe the Standard Model of Particle
Physics is changed at one TeV scale, and we are witnessing the first sign of
this change.
Let us suppose that the observed enhancement is due to the $\gamma \gamma$ decay
of a new particle. Then it should be a boson with spin different from one; the
simplest possibility is a scalar particle $S$ with $m_S = 750$~GeV. Since it
decays to two photons, it should be an $SU(3)_\text{c}$ singlet, and in
$pp$-collisions at the LHC it can be produced in gluon-gluon fusion through the
loop of colored particles and in photon-photon fusion through the loop of
charged particles. Let us suppose that particles propagating in the loops are
heavy, and $S$ decays to them are kinematically forbidden.\footnote{
In the opposite case $\mathrm{Br}(S \to \gamma \gamma)$ reduces significantly which
makes $S \to \gamma \gamma$ decays unobservable at the LHC.
}
The production cross section is evidently larger in the case of gluon fusion;
however, the $S \to \gamma \gamma$ branching ratio is suppressed in this case since
the $S \to g g$ decay dominates.
We suppose that the particles propagating in the loop are Dirac fermions, so
they have tree level masses, and that they are $SU(2)_\text{L}$ singlets.
Nonzero hypercharges provide couplings of these particles with photon and
$Z$-boson. These particles can be quark(s) (color triplets) $T_i$ or lepton(s)
(color singlets) $L_i$. They couple with $S$ by Yukawa interactions with
coupling constants $\lambda^i_T$ and $\lambda^i_L$ correspondingly.
In Section~\ref{s:quarkophilic} we will consider $S$ production and decay in the
model with extra heavy quark(s), in which gluon fusion dominates $S$ production;
in Section~\ref{s:leptophilic} we will consider the model with extra heavy
lepton(s), where $S$ production occurs in photon fusion, and the $S \to \gamma
\gamma$ decay dominates.
\section{Quarkophilic $S$}
\label{s:quarkophilic}
In the case of one heavy quark $T$ the following terms should be added to the
Standard Model lagrangian:
\begin{equation}
\Delta \mathcal{L}
= \tfrac12 (\partial_\mu S)^2
- \tfrac12 m_S^2 S^2
+ \bar T \gamma_\mu (
\partial_\mu
- \tfrac{i}{2} g_s A_\mu^i \lambda_i
- i g' \tfrac{Y_T}{2} B_\mu
) T
+ m_T \bar T T
+ \lambda_T \bar T T S,
\label{lagrangian-t}
\end{equation}
where $A_\mu^i$ and $B_\mu$ are gluon and $U(1)$ gauge fields respectively, and
$\lambda_i$ are Gell-Mann matrices. $S$ coupling with gluons is generated by
the $T$-quark loop:
\begin{equation}
M_{gg}
= \frac{\alpha_s}{6 \pi} \frac{\lambda_T}{m_T} F(\beta)
G_{\mu \nu}^{(1)} G_{\mu \nu}^{(2)} S,
\label{S->gg-amplitude}
\end{equation}
where $\beta = (2 m_T / m_S)^2$,
\begin{equation}
F(\beta)
= \frac32 \beta
\left[ 1 - (\beta - 1) \arctan^2 \frac{1}{\sqrt{\beta - 1}} \right],
\end{equation}
and $F(\beta) \to 1$ for $m_T \gg m_S$.
The inclusive cross section of $S$ production in $pp$ collisions at the LHC through gluon
fusion is given by:
\begin{equation}
\sigma_{pp \to SX}
= \frac{\alpha_s^2}{576 \pi}
\left( \frac{\lambda_T}{m_T} \right)^2
\lvert F(\beta) \rvert^2
m_S^2
\left. \frac{dL_{gg}}{d \hat s} \right\rvert_{\hat s = m_S^2},
\label{pp->SX-gluons}
\end{equation}
where the so-called gluon-gluon luminosity is given by the integral over gluon
distributions:
\begin{equation}
\frac{dL_{gg}}{d \hat s}
= \frac{1}{s}
\int\limits_{\ln \sqrt{\tau_0}}^{-\ln \sqrt{\tau_0}}
g(\sqrt{\tau_0} \mathrm{e}^y, Q^2)
g(\sqrt{\tau_0} \mathrm{e}^{-y}, Q^2)
d y,
\label{gg-luminosity}
\end{equation}
$\tau_0 = \hat s / s$, $s = (13 \text{ TeV})^2$, and we use $Q^2 = m_S^2$. In
Fig.~\ref{fig:S-production} the corresponding Feynman diagram is shown.
Integrating gluon distributions from~\cite{mmht} for $\sqrt{\hat s} = 750\text{
GeV}$, $\sqrt{s} = 13\text{ TeV}$, we get $dL_{gg} / d \hat s \approx 4.0$~nb,
$m_S^2 \; dL_{gg} / d \hat s \approx (1 / 0.69\text{ nb}) \cdot 4.0 \text{ nb}
\approx 5.8$. At $\sqrt{s} = 8$~TeV for $\sqrt{\hat s} = 750$~GeV the
luminosity $dL_{gg} / d \hat s$, and therefore cross
section~\eqref{pp->SX-gluons}, is $4.6$ times smaller. In order to take into
account gluon loop corrections,~\eqref{pp->SX-gluons} should be multiplied by
the so-called $K$-factor which is close to 2 for $\sqrt{s} = 13$~TeV, according
to~\cite{djouadi} (see also Fig. 2 in~\cite{harlander}).
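For orientation, the luminosity integral~\eqref{gg-luminosity} can be evaluated with a few lines of Python, assuming the LHAPDF bindings and an MMHT2014 grid are available (the set name below is our assumption for the PDFs of~\cite{mmht}; this is an illustrative sketch, not the code used for the numbers quoted here).
\begin{verbatim}
# Illustrative sketch of the gluon-gluon luminosity integral.  Assumes the LHAPDF
# python module and an MMHT2014 PDF grid are installed; not the authors' code.
import math
import lhapdf                       # assumption: LHAPDF python bindings available
from scipy.integrate import quad

GEV2_TO_NB = 0.3894e6               # 1 GeV^-2 = 0.3894 mb = 3.894e5 nb

pdf = lhapdf.mkPDF("MMHT2014nlo68cl", 0)   # assumed set name for the PDFs of [mmht]

def dLgg_dshat(shat, s, Q2):
    """(1/s) * integral over y of g(x1,Q2)*g(x2,Q2), x_{1,2} = sqrt(tau)*e^{+-y}."""
    tau = shat / s
    ymax = -0.5 * math.log(tau)
    def integrand(y):
        x1, x2 = math.sqrt(tau) * math.exp(y), math.sqrt(tau) * math.exp(-y)
        # xfxQ2 returns x*f(x,Q2); divide by x to get the gluon number densities
        return pdf.xfxQ2(21, x1, Q2) / x1 * pdf.xfxQ2(21, x2, Q2) / x2
    return quad(integrand, -ymax, ymax)[0] / s

s13, mS = 13000.0**2, 750.0
print(dLgg_dshat(mS**2, s13, mS**2) * GEV2_TO_NB)   # should be of order 4 nb
\end{verbatim}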
\begin{figure}
\centering
\includegraphics{S-production}
\caption{Feynman diagram of $S$ production.}
\label{fig:S-production}
\end{figure}
In this way for $m_T = m_S$ and $\lambda_T = 1$, substituting $\alpha_s(m_S) =
0.090$, we obtain:
\begin{equation}
\sigma_{pp \to SX} \approx 41\text{ fb},
\label{pp->SX-xsection-value}
\end{equation}
which should be multiplied by $\mathrm{Br}(S \to \gamma \gamma)$ in order to be compared with
experimental observations~\cite{atlas-750gg,cms-750gg}. The total width of $S$ is
dominated by the $S \to gg$ decay, and from~\eqref{S->gg-amplitude} we get:
\begin{equation}
\Gamma_{S \to gg}
= \left( \frac{\alpha_s}{6 \pi} \right)^2
\cdot 8 \frac{m_S^3 \lambda_T^2}{16 \pi m_T^2} \lvert F(\beta) \rvert^2
\approx 3.1\text{ MeV},
\end{equation}
four orders of magnitude smaller than the 45~GeV width which (maybe) follows
from the preliminary ATLAS data. Thus we conclude that for the models we
consider, the $S$ width should be much smaller than 45~GeV. Let us note that the CMS
data prefer a narrow $S$; see also~\cite{buckley}.
\begin{figure}[b]
\centering
\includegraphics{S-decay}
\caption{Feynman diagram of $S \to \gamma \gamma$ decay.}
\label{fig:S-decay}
\end{figure}
$T$-quark loop contributes to $S \to \gamma \gamma$ decay as well (see
Fig.~\ref{fig:S-decay}). The corresponding matrix element equals
\begin{equation}
M_{\gamma \gamma}
= \frac{\alpha}{3 \pi} \frac{\lambda_T}{m_T} F(\beta)
F_{\mu \nu}^{(1)} F_{\mu \nu}^{(2)}
\cdot 3_\text{c} Q_T^2,
\end{equation}
where the factor $3_c$ corresponds to the three colors, and $Q_T$ is the
$T$-quark electric charge. For $\gamma \gamma$ width we get:
\begin{equation}
\Gamma_{S \to \gamma \gamma}
= \left( \frac{\alpha}{3 \pi} \right)^2
(3_\text{c} Q_T^2)^2
\frac{m_S^3 \lambda_T^2}{16 \pi m_T^2} \lvert F(\beta) \rvert^2
\approx 22\text{ keV},
\label{S->2gamma-width}
\end{equation}
and
\begin{equation}
\mathrm{Br}(S \to \gamma \gamma)
\approx \left( \frac{\alpha}{\alpha_s} \right)^2
\frac{(3_\text{c} Q_T^2)^2}{2}
\approx 0.0070,
\label{S->2gamma-branching}
\end{equation}
where we substituted $Q_T = 2/3$ and $\alpha = 1/125$.\footnote{
Fine structure constant should be substituted by its running value at $q^2 =
m_S^2$, $\alpha(m_S^2) = 1/125$.
}
Finally, from~\eqref{S->2gamma-branching} and~\eqref{pp->SX-xsection-value} we
obtain:
\begin{equation}
\sigma_{pp \to SX} \cdot \mathrm{Br}(S \to \gamma \gamma) \approx 0.28\text{ fb}.
\label{pp->SX*branching-value}
\end{equation}
Experimental data provide a value approximately 36 times larger:
\begin{equation}
[\sigma_{pp \to SX} \cdot \mathrm{Br}(S \to \gamma \gamma)]_\text{exp}
\approx 10\text{ fb},
\label{pp->SX-experimental-value}
\end{equation}
since with the $3\text{ fb}^{-1}$ luminosity collected by each collaboration at
13~TeV and an efficiency of $\gamma \gamma$ registration $\varepsilon \approx
0.5$~\cite{atlas-750gg}, they see about 15 events each.
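The chain of estimates above can be reproduced with straightforward arithmetic; a hedged Python transcription (using only the inputs quoted above: $\alpha_s = 0.090$, $\alpha = 1/125$, $m_T = m_S$, $\lambda_T = 1$, $m_S^2\, dL_{gg}/d\hat s \approx 5.8$ and $K \approx 2$) is given below for the reader's convenience.
\begin{verbatim}
# Back-of-the-envelope check (illustrative only) of F(beta), Gamma(S->gg),
# Gamma(S->gamma gamma), Br and sigma*Br for the quoted inputs.
import math

m_S, m_T, lam = 750.0, 750.0, 1.0          # GeV, GeV, Yukawa coupling
alpha_s, alpha, Q_T = 0.090, 1.0/125.0, 2.0/3.0
GEV2_TO_NB = 0.3894e6                      # 1 GeV^-2 in nb

beta = (2*m_T/m_S)**2
F = 1.5*beta*(1 - (beta - 1)*math.atan(1/math.sqrt(beta - 1))**2)

width_gg = (alpha_s/(6*math.pi))**2 * 8 * m_S**3 * lam**2 / (16*math.pi*m_T**2) * F**2
width_aa = (alpha/(3*math.pi))**2 * (3*Q_T**2)**2 * m_S**3 * lam**2 \
           / (16*math.pi*m_T**2) * F**2
Br = (alpha/alpha_s)**2 * (3*Q_T**2)**2 / 2

# sigma in nb, including m_S^2 dL/dshat = 5.8 and a K-factor of 2
sigma = alpha_s**2/(576*math.pi) * (lam/m_T)**2 * F**2 * 5.8 * GEV2_TO_NB * 2
print(width_gg*1e3, width_aa*1e6, Br, sigma*1e6, sigma*Br*1e6)
# expected: ~3.1 MeV, ~22 keV, ~0.007, ~41 fb, ~0.28 fb
\end{verbatim}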
In order to reproduce the experimental result~\eqref{pp->SX-experimental-value} we
should suppose that six $T$-quarks exist. In this case $\Gamma_{S \to gg} = 36
\cdot 3.1\text{ MeV} \approx 110\text{ MeV}$, $\mathrm{Br}(S \to \gamma \gamma)$ remains
the same, while the cross section of $S$ production~\eqref{pp->SX-xsection-value}
should be multiplied by the same factor 36,
and~\eqref{pp->SX-experimental-value} is reproduced.\footnote{
If at one TeV scale we have a ``mirror image'' of the Standard Model with three
vector-like generations of quarks and leptons, then experimental
result~\eqref{pp->SX-experimental-value} will be reproduced.
}
However, this unappealing multiplication of the number of $T$-quarks can be avoided. For $m_T =
400$~GeV we have $F(\beta) = 1.36$ and $\sigma_{pp \to SX} \cdot \mathrm{Br}(S \to
\gamma \gamma)$ is $5.7$ times larger than what is given
in~\eqref{pp->SX*branching-value}. Thus for $\lambda_T = 2.5$ we reproduce
the experimental number.\footnote{
Since $\lambda_T^2 / 4 \pi$ is the expansion parameter of perturbation theory, this value
of $\lambda_T$ is close to the maximum allowed for
perturbation theory to make sense.
} In Figure~\ref{fig:pp->SX} isolines of the product
$\sigma_{pp \to SX} \cdot \mathrm{Br}(S \to \gamma \gamma)$ are shown on $(\lambda_T,
m_T)$ plot.
\begin{figure}
\centering
\includegraphics{pp-SX-xsection}
\caption{Contour plot of $\sigma_{pp \to SX} \cdot \mathrm{Br}(S \to \gamma
\gamma)$.}
\label{fig:pp->SX}
\end{figure}
In the following we consider the model with one additional quark $T$ and
\begin{equation}
m_T = 400\text{ GeV}, \ \lambda_T = 2.5.
\label{quark-parameters}
\end{equation}
$S$ can mix with the Standard Model Higgs boson due to renormalizable
interaction term $\mu \Phi^\dagger \Phi S$, where $\Phi$ is the Higgs
isodoublet. Such an extension of the Standard Model was studied in our recent
paper~\cite{isosinglet}. Doublet admixture in the 750~GeV boson wave function
results in tree level decays $S \to WW$, $ZZ$, $t\bar t$ and $hh$, where $h$ is
the 125~GeV Higgs boson. According to Eqs.~(16)--(20) from~\cite{isosinglet},
the sum of these widths equals approximately $\sin^2 \alpha \cdot m_S^3 / 8 \pi
v_\Phi^2 \approx \sin^2 \alpha \cdot 300$~GeV, where $\alpha$ is the mixing
angle, and $v_\Phi = 246$~GeV is the Higgs boson vacuum expectation value. Ratio
of partial widths at small $\alpha$ is
\begin{equation}
\Gamma_{S \to WW} : \Gamma_{S \to ZZ} : \Gamma_{S \to hh}
\approx 2 : 1 : 1.
\end{equation}
As a result, the $S$ width grows and $\mathrm{Br}(S \to \gamma \gamma)$ diminishes
correspondingly. Thus, the experimental result~\eqref{pp->SX-experimental-value}
would not be reproduced. To reduce this effect we should make the mixing angle
$\alpha$ small enough. For example, for $\sin \alpha < 1/150$ we obtain at most
a 12~MeV (or 11\%) increase of the width of $S$, which is acceptable. According
to Eq.~(7) from~\cite{isosinglet},
\begin{equation}
\sin \alpha \approx \frac{\lvert\mu\rvert v_\Phi}{m_S^2},
\end{equation}
and it is less than $1/150$ for $\lvert \mu \rvert$ below 15~GeV.
Let us check that $S \to ZZ$ decays do not exceed the experimental bounds on their
relative probability obtained at 13 and 8~TeV at the LHC. Since $\mathrm{Br}(S \to ZZ)$
is below $2.3 \cdot 10^{-2}$, we obtain
\begin{equation}
[ \sigma_{pp \to SX} \cdot \mathrm{Br}(S \to ZZ) ]_{13\text{ TeV}} < 33\text{ fb},
\end{equation}
well below the experimental upper bound which, according to Fig.~11
from~\cite{atlas-4l}, equals $4\text{ fb} / (\mathrm{Br}(Z \to 4 \ell))^2 = 400\text{ fb}$
at $2 \sigma$ (see also~\cite{franceschini}). Gluon-gluon luminosity is $4.6$
times smaller at 8~TeV, so we get
\begin{equation}
[ \sigma_{pp \to SX} \cdot \mathrm{Br}(S \to ZZ)]_{8\text{ TeV}} < 9.0 \text{ fb},
\end{equation}
which should be compared with 60~fb experimental upper bound (Fig.~12
from~\cite{atlas-zz}).
A more stringent upper bound comes from the search for $S \to hh$
decays~\cite{atlas-hh} and equals 40~fb, while in our case the cross section
equals 10~fb.
Since, as noted above, at $\sqrt{s} = 8$~TeV the gluon-gluon
luminosity is $4.6$ times smaller than at $\sqrt{s} = 13$~TeV, the CMS bound from
Run~1~\cite{cms-8TeV-bound}
\begin{equation}
[\sigma_{pp \to SX} \mathrm{Br}(S \to \gamma \gamma)]_{8\text{ TeV}} < 1.5\text{ fb}
\end{equation}
is (almost) not violated in the model considered.
It is natural to suppose that the $T$-quark mixes with the $u$-, $c$-, and $t$-quarks,
which makes it unstable. To avoid the LHC Run~1 bounds on $m_T$ following from
searches for the decays $T \to W b$, $T \to Z t$ and $T \to H
t$~\cite{tt-bound-1,tt-bound-2,tt-bound-3}, which exclude a $T$-quark with mass
below 700~GeV, we suppose that the $T$--$t$ mixing is small, and $T$-quark mixing with
$u$- and $c$-quarks dominates. In this case the bounds~\cite{tt-bound-1, tt-bound-2,
tt-bound-3} are avoided~\cite{buchkremer}.
Concerning $S$ decays, let us note that the dominant $S \to gg$ decay is
hidden by the two-jet background produced by strong interactions. At the 8~TeV LHC
energy the following upper bound was obtained~\cite{dijet-background}:
\begin{equation}
[\sigma_{pp \to SX} \cdot \mathrm{Br}(S \to gg)]_{8\text{ TeV}}^\text{exp}
< 30 \text{ pb}.
\label{quark-8TeV-bound}
\end{equation}
In our model $\mathrm{Br}(S \to gg) \approx 1$. From Eq.~\eqref{pp->SX-gluons}, using
gluon-gluon luminosity at $\sqrt{s} = 8$~TeV, parameters from
Eq.~\eqref{quark-parameters}, and $K$-factor $2.5$~\cite{djouadi},
\cite{harlander}, we get
\begin{equation}
[\sigma_{pp \to SX}]^\text{theor} \approx 0.39\text{ pb},
\
\mathrm{Br}(S \to gg) \approx 1,
\end{equation}
two orders of magnitude smaller than the upper bound~\eqref{quark-8TeV-bound}.
Three decay modes of $S$ to neutral vector bosons exist, with the
following hierarchy:
\begin{equation}
\Gamma_{S \to \gamma \gamma} : \Gamma_{S \to Z \gamma} : \Gamma_{S \to ZZ}
= 1 : 2 (s_W / c_W)^2 : (s_W / c_W)^4,
\label{widths-ratio}
\end{equation}
where $s_W$ $(c_W)$ is the sine (cosine) of the electroweak mixing angle.\footnote{
In~\eqref{widths-ratio} we suppose that mixing of $S$ with Higgs doublet is
negligible; in the opposite case $\Gamma_{S \to ZZ}$ can exceed $\Gamma_{S \to
\gamma \gamma}$.
} Thus, if $S \to \gamma \gamma$ decays are observed in future Run~2 data, $S
\to \gamma Z$ and $S \to ZZ$ decays should also be looked for.
If the existence of $S$ is confirmed with larger statistics at the LHC,
then it can be studied at $e^+ e^-$-colliders as well. For the cross section of
two-photon $S$ production in the reaction $e^+ e^- \to e^+ e^- S$, according
to~\cite[Eq.~(48.47)]{pdg},~\cite{ginzburg}, we have:
\begin{equation}
\sigma_{ee \to eeS} (s)
= \frac{8 \alpha^2}{m_S^3} \Gamma_{S \to \gamma \gamma}
\left[
f \left( \frac{m_S^2}{s} \right)
\left( \ln \left( \frac{m_T^2 s}{m_e^2 m_S^2} \right) - 1 \right)^2
- \frac13 \ln^3 \left( \frac{s}{m_S^2} \right)
\right],
\label{ee->eeS-xsection}
\end{equation}
where
\begin{equation}
f(z) = \left( 1 + \tfrac12 z \right)^2 \ln \tfrac{1}{z}
- \tfrac12 (1 - z) (3 + z),
\label{ee->eeS-f}
\end{equation}
and $\Gamma_{S \to \gamma \gamma}$ is given in Eq.~\eqref{S->2gamma-width}. For
the $e^+ e^-$ collider CLIC with $s = (3\text{ TeV})^2$, substituting in
Eqs.~\eqref{S->2gamma-width}, \eqref{ee->eeS-xsection} $\lambda_T = 2.5$, $m_T =
400$~GeV, $\alpha(m_S^2) = 1/125$, and $F(\beta) = 1.36$ we obtain:
\begin{equation}
\sigma_{ee \to eeS}^\text{CLIC} \approx 0.46\text{ fb}.
\end{equation}
With projected CLIC luminosity $L = 6 \cdot 10^{34} / (\text{cm}^2 \cdot
\text{sec})$~\cite[p.~393]{pdg}, during one accelerator year ($t = 10^7$~sec)
about 300 $S$ resonances should be produced.
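This estimate is easy to reproduce; the following illustrative Python snippet evaluates Eqs.~\eqref{S->2gamma-width} and~\eqref{ee->eeS-xsection} for the CLIC parameters quoted above and multiplies by the projected yearly integrated luminosity (the unit conversions are the only added ingredients).
\begin{verbatim}
# Hedged numerical evaluation (not from the paper) of sigma(ee -> ee S) at CLIC
# and of the expected yearly event count, for lambda_T = 2.5, m_T = 400 GeV.
import math

alpha, m_e = 1.0/125.0, 0.511e-3          # GeV
m_S, m_T, lam = 750.0, 400.0, 2.5         # GeV
s = 3000.0**2                             # CLIC, (3 TeV)^2
GEV2_TO_FB = 0.3894e12                    # 1 GeV^-2 in fb

beta = (2*m_T/m_S)**2
F = 1.5*beta*(1 - (beta - 1)*math.atan(1/math.sqrt(beta - 1))**2)
width_aa = (alpha/(3*math.pi))**2 * (3*(2.0/3.0)**2)**2 \
           * m_S**3 * lam**2 / (16*math.pi*m_T**2) * F**2   # Gamma(S -> gamma gamma)

def f(z):
    return (1 + 0.5*z)**2 * math.log(1/z) - 0.5*(1 - z)*(3 + z)

z = m_S**2 / s
bracket = f(z)*(math.log(m_T**2*s/(m_e**2*m_S**2)) - 1)**2 - math.log(s/m_S**2)**3/3
sigma = 8*alpha**2/m_S**3 * width_aa * bracket * GEV2_TO_FB    # fb

lumi_per_year = 6.0e34 * 1.0e7 / 1.0e39                        # fb^-1 per 10^7 s
print(sigma, sigma*lumi_per_year)                              # ~0.46 fb, ~3e2 events
\end{verbatim}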
\section{Leptophilic $S$}
\label{s:leptophilic}
Let us suppose that heavy leptons $L_i$ which couple to $S$ have electric
charges $Q_L$, and there are $N$ such degenerate leptons. The lagrangian is
similar to that of the heavy quarks case~\eqref{lagrangian-t}:
\begin{equation}
\Delta \mathcal{L}
= \tfrac12 (\partial_\mu S)^2
- \tfrac12 m_S^2 S^2
+ \bar L_i \gamma_\mu (\partial_\mu - i g' \tfrac{Y_L}{2} B_\mu) L_i
+ m_L \bar L_i L_i
+ \lambda_L \bar L_i L_i S,
\end{equation}
where we assume equal lepton masses and $S$ couplings. For $S \to \gamma
\gamma$ width we obtain:
\begin{equation}
\Gamma_{S \to \gamma \gamma}
= \left( \frac{\alpha}{3 \pi} \right)^2
(N Q_L^2)^2
\frac{m_S^3 \lambda_L^2}{16 \pi m_L^2}
\lvert F(\beta) \rvert^2, \
\beta = \left( \frac{2 m_L}{m_S} \right)^2.
\end{equation}
Production of $S$ at the LHC occurs through fusion of two virtual photons
emitted by quarks which reside in the colliding protons. Let us estimate
the production cross section. For the partonic cross section we get:
\begin{equation}
\sigma_{q_1 q_2 \to q_1 q_2 S}^{(\gamma \gamma)}(\hat s)
= \frac{8 \alpha^2}{m_S^3} e_1^2 e_2^2 \, \Gamma_{S \to \gamma \gamma}
\left[
f \left( \frac{m_S^2}{\hat s} \right)
\left(
\ln \left( \frac{m_L^2 \hat s}{\Lambda_\text{QCD}^2 m_S^2}
\right)
- 1 \right)^2
- \frac13 \ln^3 \left( \frac{\hat s}{m_S^2} \right)
\right],
\label{qq->qqS-xsection}
\end{equation}
where $e_1$ and $e_2$ are charges of the colliding quarks, $\hat s = x_1 x_2 s
\equiv \tau s$ is the invariant mass of the colliding quarks, and $f(z)$ is
given by~\eqref{ee->eeS-f}. We should multiply~\eqref{qq->qqS-xsection} by
quark distribution functions and integrate over $x_1$ and $x_2$:
\begin{equation}
\sigma_{pp \to SX}^{(\gamma \gamma)}(s)
= \sum\limits_{q_1, q_2}
\;
\int\limits_{m_S^2 / s}^1
\sigma_{q_1 q_2 \to q_1 q_2 S}^{(\gamma \gamma)}(\tau s)
d \tau
\cdot s \cdot \frac{dL_{q_1 q_2}}{d \hat s}(Q^2, \tau),
\end{equation}
where the sum should be performed over valence $uu$, $ud$, $du$, and $dd$ quark collisions,
and sea quarks should be taken into account as well.\footnote{
$uu$ contribution constitutes 50\% of the cross section at $\sqrt{s} = 13$~TeV
with another 24\% coming from $ud$ and $\bar u u$.
} Quark luminosity equals:
\begin{equation}
\frac{d L_{q_1 q_2}}{d \hat s}(Q^2, \tau)
= \frac{1}{s}
\int\limits_{\ln \sqrt{\tau}}^{-\ln \sqrt{\tau}}
q_1(x_1, Q^2) q_2(x_2, Q^2)
d y,
\label{qq-luminosity}
\end{equation}
$x_1 = \sqrt{\tau} \mathrm{e}^y$, $x_2 = \sqrt{\tau} \mathrm{e}^{-y}$. We take $Q^2 = m_S^2$ and
use quark distributions from~\cite{mmht}. Quark and gluon luminosity functions
for $s = 13$~TeV and $s = 8$~TeV are shown in Fig.~\ref{fig:luminosity}.
\begin{figure}
\centering
\subfloat[Luminosities for $\sqrt{s} = 13$~TeV.]{\includegraphics{luminosity}}
\\
\subfloat[Luminosities for $\sqrt{s} = 8$~TeV.]{\includegraphics{luminosity-8}}
\caption{Luminosities~\eqref{gg-luminosity}, \eqref{qq-luminosity} for
gluon-gluon, $uu$, $ud$, $dd$ and $u \bar u$ collisions at $Q^2 = (750\text{
GeV})^2$.}
\label{fig:luminosity}
\end{figure}
Cross sections in the case of one heavy lepton with charge $Q_L = 1$, Yukawa
coupling constant $\lambda_L = 2$ and mass $m_L = 400$~GeV are shown in
Table~\ref{lepton-table}. For $\Lambda_\text{QCD} = 300$~MeV and $\sqrt{s} =
13$~TeV we get $\sigma_{pp \to SX}^{(\gamma \gamma)} \approx 11$~ab,\footnote{
According to Eq.~(12) from the recent paper~\cite{ryskin}, this cross section
equals 25~ab.
} while the experimental result~\eqref{pp->SX-experimental-value} is three
orders of magnitude larger. We come to the conclusion that $\sum N Q_L^2 \approx
30$ is needed: we need either 30 leptons with unit charges, or one lepton with
charge 6, or several multicharged leptons.\footnote{
If $\sigma_{pp \to SX}^{(\gamma \gamma)} = 25$~ab, then 30 should be replaced
with 20.
}
It is natural to suppose that leptons with charge one mix with the Standard
Model leptons and become unstable. Search for such particles was performed at
the LHC, and the lower bound $m_L > 170$~GeV was
obtained~\cite{atlas-heavy-lepton}. See also~\cite{ellis}, where bounds on
masses and mixings of $L$ are discussed. For masses above 200~GeV the existence
of $L$ is still relatively unconstrained.
Cross section for quasielastic $S$ production can be estimated with the help of
the following equation:
\begin{equation}
\sigma_{pp \to ppS}
= \frac{8 \alpha^2}{m_S^3} \Gamma_{S \to \gamma \gamma}
\left[
f \left( \frac{m_S^2}{s} \right)
\left( \ln \left( \frac{s}{m_S^2} \right) - 1 \right)^2
- \frac13 \ln^3 \left( \frac{s}{m_S^2} \right)
\right].
\end{equation}
For $\sqrt{s} = 13$~TeV, $\lambda_L = 2$ and $m_L = 400$~GeV it equals
4.1~ab.\footnote{
According to Eq.~(24) from~\cite{ryskin}, quasielastic cross section is two times
smaller.
}
\renewcommand{\arraystretch}{1.5}
\begin{table}[h]
\caption{Cross sections (in ab) for $S$ production via photon fusion in the leptophilic
model for different values of $\Lambda_\text{QCD}$ and proton collision
energies.}
\label{lepton-table}
\centering
\begin{tabular}{|cc|ccc|}
\cline{3-5}
\multicolumn{1}{c}{}
&& \multicolumn{3}{c|}{$\Lambda_\text{QCD}$, GeV} \\
\multicolumn{1}{c}{}
& & $ 0.1$ & $ 0.3$ & $ 1.0$ \\ \hline
\multirow{3}{*}{\rotatebox{90}{$\sqrt{s}$, TeV}}
& 7 & $ 2.5$ & $ 1.9$ & $ 1.3$ \\
& 8 & $ 3.8$ & $ 2.9$ & $ 2.0$ \\
&13 & $ 15 $ & $ 11 $ & $ 7.8$ \\
\hline
\end{tabular}
\end{table}
\renewcommand{\arraystretch}{1.0}
\section{Conclusions}
We analyze the possibility that the enhancement at 750~GeV in the diphoton invariant mass
observed by ATLAS and CMS is due to decays of a new scalar $S$. We found that
production of $S$ in gluon fusion in a minimal model with one additional heavy
Dirac quark $T$ can yield a value of $\sigma_{pp \to SX} \cdot \mathrm{Br}(S \to \gamma
\gamma)$ compatible with data. An upper bound on the mixing of $S$ with $h(125)$
is obtained. If heavy leptons $L$ are introduced instead of
$T$, then $S$ can be produced at LHC in photon fusion. However, in order to
reproduce experimental data many leptons $L_i$ are needed and\slash{}or they
should be multicharged. If the existence of $S$ will be confirmed by future data
then production of heavy vector-like quarks and\slash{}or leptons at the LHC
should be looked for. The search for $S \to Z \gamma, ZZ, WW$ and $hh$ would be
also of great importance.
S.~G., M.~V. and E.~Zh. are partially supported under the grants RFBR No.
14-02-00995 and 16-02-00342, and by the Russian Federation Government under the
grant NSh-6792.2016.2. S.~G. and E.~Zh. are also supported by MK-4234.2015.2 and
16-32-00241. In addition, S.~G. is supported by RFBR under grants 16-32-60115,
by Dynasty Foundation and by the Russian Federation Government under Grant No.
11.G34.31.0047.
\section{Introduction}
\label{sec:intro}
In recent years, anomalous microwave emission~(AME) has been established as a new Galactic emission mechanism. Identified as an excess of emission between~$\sim$~10~--~100~GHz, AME detections require observations at frequencies both above and below this range to determine the level of the other Galactic emission mechanisms. At frequencies below 10~GHz, emission from the interactions between the free electrons and ions in ionized gas and emission from the acceleration of relativistic electrons in the Galactic magnetic fields dominate, while emission above 100~GHz is due to the thermal emission from big, interstellar dust grains in thermal equilibrium with the exciting radiation field.
Although AME has been found in numerous Galactic objects~\citep[e.g.][]{Casassus:08, Ami:09, Scaife:09, Dickinson:10, Tibbs:10, Planck_Dickinson:11, Tibbs:12} and in diffuse environments at high Galactic latitudes~\citep[e.g.][]{Ghosh:12, Peel:12}, there is still much to learn about this enigmatic emission mechanism. AME has been found to be highly spatially correlated with the dust emitting at infrared~(IR) wavelengths, indicating a direct association with interstellar dust grains, and at present there are two viable explanations for the AME: 1) electric dipole emission due to the rotation of small dust grains characterized by an electric dipole moment~\citep{DaL:98, Ali-Haimoud:09, Hoang:10, Ysard:10, Hoang:11, Silsbee:11}; and 2) magnetic dipole emission due to fluctuations in the magnetism of dust grains containing magnetic materials~\citep{DaL:99, Draine:13}. Of these two emission mechanisms, electric dipole emission from spinning dust grains, commonly referred to as spinning dust emission, is the explanation currently favored by observations.
In this work we focus on the Perseus molecular cloud, which has previously been studied in detail and found to be a source of AME~\citep{Watson:05, Tibbs:10, Planck_Dickinson:11, Tibbs:13}. AME was first detected in this cloud by~\citet{Watson:05}, who combined observations performed with the COSMOSOMAS Experiment with data from low frequency radio surveys, \textit{WMAP} and DIRBE, to produce a complete spectrum for the cloud from the radio to the IR on angular scales of~$\sim$~1~degree. This spectrum exhibited a clear excess of emission between~$\sim$~10~--~60~GHz, that was well fitted by spinning dust models. Follow-up observations of this region performed at 33~GHz with the Very Small Array~(VSA) interferometer by~\citet{Tibbs:10} found excess emission in five features on angular scales of~$\sim~$10~--~40~arcmin. The authors found that the total emission observed with the VSA in these five features accounted for only~$\sim$~10~\% of the emission detected on degree angular scales by~\citet{Watson:05}. In their analysis,~\citet{Tibbs:10} used the GB6 all-sky survey at 4.85~GHz~\citep{Condon:89} to constrain the low frequency emission in the five features as these were the only suitable observations available. However, as pointed out in that analysis, the GB6 data have been filtered to remove emission on angular scales greater than~$\sim$~20~arcmin. Therefore, the VSA data had to be filtered to match the range of angular scales to those which the GB6 observations were sensitive, before the level of the low frequency emission was determined. Here we present new observations of the five AME features with the Robert C. Byrd Green Bank Telescope~(GBT) at 1.4 and 5~GHz. These new observations allow us to directly investigate the level of emission at 1.4 and 5~GHz on the full range of angular scales observed with the VSA. With these observations, we investigate the spatial structure of the low frequency~(1.4 and 5~GHz) emission and how it compares to the five AME features observed with the VSA at 33~GHz.
The layout of this paper is as follows. In Section~\ref{sec:obs} we describe the GBT observations and data reduction. In Section~\ref{sec:discuss} we investigate the level of the low frequency emission with respect to the 33~GHz emission, and in Section~\ref{sec:con} we present our conclusions.
\section{GBT Observations}
\label{sec:obs}
Given the size of the AME emitting region in the Perseus molecular cloud~\citep[$\sim$~2~degrees $\times$ 2~degrees;][]{Watson:05}, it was not feasible to observe the entire region with the GBT. Therefore, we decided to observe three strips~(Strip 1, Strip 2, and Strip 3) across the region. These strips, displayed in Figure~\ref{Fig:MIPS24_VSA_GBT}, were chosen to coincide with the five AME features~(A1, A2, A3, B, and C) observed with the VSA~\citep{Tibbs:10}, while simultaneously providing enough off-source observations to allow for accurate baseline fitting. We observed the three strips with both the GBT L-Band~(1.4~GHz) and C-Band~(5~GHz) receivers during three days in June 2009 for a total observing time of 14~hrs. Including overheads and calibration observations, the observing time was split with $\sim$~5~hrs for L-Band and~$\sim$~9~hrs for C-Band.
The observations were performed using the Digital Continuum Receiver~(DCR) and the scanning was performed in On-The-Fly mapping mode with a sampling rate of 5~Hz. On-The-Fly mapping involves slewing the telescope across the sky and is the standard method for mapping, or in this case, simply scanning along a single strip, for the GBT. Each of the strips was observed multiple times to increase the total integration time and decrease the noise. Full details of the GBT set-up and observations are listed in Tables~\ref{Table:Summary_GBT_Obs} and~\ref{Table:Summary_GBT_Scans}.
During the observations, a noise diode was repeatedly switched on and off to inject a known level of noise into the system. This was used to convert the raw data to antenna temperature, $T_{ant}$, using
\begin{equation}
T_{ant} = \left \langle \frac{T_{cal}}{P_{cal_{on}} - P_{cal_{off}}} \right \rangle \cdot \frac{(P_{cal_{on}} + P_{cal_{off}})}{2}~\mathrm{K}
\label{equ:gbt_cal}
\end{equation}
\noindent
from~\citet{Maddalena:02}, where $T_{cal}$ is the equivalent temperature of the noise diode in K and $P_{cal_{on}}$ and $P_{cal_{off}}$ are the data observed with the noise diode being switched on and off, respectively.
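Schematically, the calibration of Equation~(\ref{equ:gbt_cal}) amounts to the following operation on the raw counts; the array names are ours and the snippet is only an illustration of this reduction step, not the actual pipeline code.
\begin{verbatim}
import numpy as np

def antenna_temperature(p_cal_on, p_cal_off, t_cal):
    # p_cal_on, p_cal_off: raw DCR power with the noise diode on/off
    # t_cal: equivalent temperature of the noise diode (K)
    gain = np.mean(t_cal / (p_cal_on - p_cal_off))  # the <...> term
    return gain * (p_cal_on + p_cal_off) / 2.0      # T_ant in K
\end{verbatim}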
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.48, viewport=10 0 600 600]{Fig1.pdf} \\
\caption{MIPS 24~$\mu$m image~\citep{Tibbs:11} overlaid with the VSA contours and the location of the three strips~(Strip 1, Strip 2, and Strip 3) observed with the GBT illustrating the coverage of the GBT observations with respect to the AME features~(A1, A2, A3, B, and C) observed with the VSA. The contours correspond to 10, 20, 30, 40, 50, 60, 70, 80, and 90~\% of the peak VSA intensity, which is 200~mJy~beam$^{-1}$.}
\label{Fig:MIPS24_VSA_GBT}
\end{center}
\end{figure}
The GBT 1.4 and 5~GHz receiver systems both have two linear polarizations per beam~(XX and YY), which we combined to produce the total power for each band. After converting the data to antenna temperature and combining the two polarizations, the data were converted into flux density units. To do this we used our observations of the calibration source 3C123 that were interspersed with the target observations. These observations involved scanning across 3C123 and were also used to optimize the telescope pointing and focus. We fitted a Gaussian and baseline offset to the observations of 3C123 to obtain antenna temperatures of $T_{ant}$~=~82.45~$\pm$~0.14~K and $T_{ant}$~=~31.61~$\pm$~0.22~K at 1.4 and 5~GHz, respectively. Figure~\ref{Fig:Peak_Scan_3C123} displays one of the scans of 3C123 at both 1.4 and 5~GHz, and the corresponding fit to the data.
\begin{table}
\begin{center}
\caption{GBT Specification for the L-band and C-band Observations}
\begin{tabular}{l c c}
\tableline
\tableline
Parameter & L-Band & C-Band \\
\tableline
Receiver & Gregorian L-Band & Gregorian C-Band \\
Back End & DCR & DCR \\
Observing Mode & On-The-Fly & On-The-Fly \\
Central Frequency & 1.4~GHz & 5.0~GHz \\
Bandwidth & 650~MHz & 2000~MHz \\
Beam (FWHM) & 9~arcmin & 2.5~arcmin \\
Scan Speed & 2~arcmin~s$^{-1}$ & 1~arcmin~s$^{-1}$ \\
Sampling Rate & 5~Hz & 5~Hz \\
Theoretical Noise & $\approx$~0.5~mJy~s$^{1/2}$ & $\approx$~0.3~mJy~s$^{1/2}$ \\
R.M.S. Confusion Level & $\approx$~20~mJy & $\approx$~0.7~mJy \\
\tableline
\end{tabular}
\label{Table:Summary_GBT_Obs}
\end{center}
\end{table}
\begin{table*}
\begin{center}
\caption{Summary of the Targeted Strips}
\begin{tabular}{c c c c c c c c c}
\tableline
\tableline
Target & Central RA & Central Dec & Position Angle & Scan Length & \multicolumn{2}{c}{Number of Scans\tablenotemark{a}}$^{}$ & \multicolumn{2}{c}{Noise Levels\tablenotemark{b}} \\
& (J2000) & (J2000) & (degrees) & (degrees) & \multicolumn{2}{c}{} & \multicolumn{2}{c}{(mJy~beam$^{-1}$)} \\
& & & & & L-Band & C-Band & L-Band & C-Band \\
\tableline
Strip 1 & 03:44:33.2 & +32:10:59.5 & 180.0 & 0.93 & 24 (90) & 58 (81) & 23.9 (0.31) & 2.7 (0.044) \\
Strip 2 & 03:43:16.7 & +31:55:25.6 &189.8 & 0.93 & 35 (90) & 34 (75) & 12.5 (0.24) & 2.4 (0.057) \\
Strip 3 & 03:38:59.3 & +31:22:10.6 & 287.5 & 1.55 & 20 (75) & 47 (90) & 19.5 (0.28) & 3.0 (0.033) \\
\tableline
\end{tabular}
\label{Table:Summary_GBT_Scans}
\\
\vspace{0mm}
\begin{flushleft}
\hspace{15mm} \footnotesize{$^{a}$Listed are the number of scans used in this analysis along with the total number of scans observed in parentheses.}
\\
\hspace{15mm} \footnotesize{$^{b}$Listed are the r.m.s. noise levels along with the thermal noise levels in parentheses.}
\end{flushleft}
\end{center}
\end{table*}
Based on the flux density calibration observations of~\citet{Ott:94}, we adopted a flux density of 48.01 and 15.95~Jy for 3C123 at 1.4 and 5~GHz, respectively. Therefore, combining the measured antenna temperature of 3C123 with the known flux density, we computed a calibration factor of 0.58~$\pm$~0.01~Jy~K$^{-1}$ at 1.4~GHz and 0.50~$\pm$~0.01~Jy~K$^{-1}$ at 5~GHz. These calibration factors were then applied to the data to convert from antenna temperature to flux density.
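As an illustration, the peak fitting and flux conversion can be sketched as follows; this is a minimal example using scipy, with variable names and initial guesses of our own choosing.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss_offset(x, amp, x0, width, offset):
    return amp * np.exp(-0.5 * ((x - x0) / width)**2) + offset

def jy_per_k(scan_pos, scan_tant, s_cal_jy):
    # scan_pos, scan_tant: position and antenna temperature (K) along
    # a calibration scan of 3C123; s_cal_jy: adopted flux density (Jy)
    p0 = [scan_tant.max() - np.median(scan_tant),
          scan_pos[np.argmax(scan_tant)], 0.1, np.median(scan_tant)]
    popt, _ = curve_fit(gauss_offset, scan_pos, scan_tant, p0=p0)
    return s_cal_jy / popt[0]   # calibration factor in Jy/K

# e.g. jy_per_k(pos, tant, 48.01) gives ~0.58 Jy/K at 1.4 GHz
\end{verbatim}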
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.48]{Fig2a.pdf} \\
\includegraphics[angle=0,scale=0.48]{Fig2b.pdf} \\
\caption{Scans of the calibration source 3C123 at L-Band (top) and C-Band (bottom). The data (open diamonds) were fitted with a Gaussian with a baseline offset (solid line). The observations of 3C123 were used to calibrate the data and convert from an antenna temperature scale to a flux density scale~(see Section~\ref{sec:obs} for details).}
\label{Fig:Peak_Scan_3C123}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[angle=0,scale=0.95]{Fig3.pdf} \\
\caption{The final GBT L-Band (left) and C-Band (right) scans for Strips 1~(top), 2~(middle), and 3~(bottom). Also plotted is the 3$\sigma$ upper limit~(dashed line) of the data for each scan. These scans show that we do not detect any significant extended emission. It is also possible to identify the two point sources, NVSS~J034433+321255 and NVSS~J034439+314523, in Strip 1.}
\label{Fig:Final_Scans}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=0,scale=0.95]{Fig4.pdf} \\
\caption{Histograms of the final GBT L-Band (left) and C-Band (right) scans for Strips 1~(top), 2~(middle), and 3~(bottom). Also displayed is the 3$\sigma$ limit of each distribution~(dashed line) which was computed ignoring the point sources present in Strip 1. By comparing the distributions and the 3$\sigma$ limit it is possible to see that for Strip 1 there is signal present, while for Strips 2 and 3 the data are consistent with noise. Also displayed on the plots is the skewness. Only the L-band and C-band observations of Strip 1 have a skewness $>$ 1, which again suggests that the other strips are dominated by noise.}
\label{Fig:Final_Histograms}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.48]{Fig5.pdf} \\
\caption{Spectrum of NVSS J034433+321255. The data from the literature~(open diamonds) have been fitted with a power-law~(dashed line) and the measurements from the GBT data at 1.4 and 5~GHz are overplotted~(filled squares). The consistency between the expected flux densities computed from the fit and the measured values from the GBT data confirm the accuracy of the calibrated data.}
\label{Fig:Source_Cal}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.48]{Fig6a.pdf} \\
\includegraphics[angle=0,scale=0.48]{Fig6b.pdf} \\
\includegraphics[angle=0,scale=0.48]{Fig6c.pdf} \\
\caption{Comparison of the GBT scans with the VSA observations for Strips 1~(top), 2~(middle), and 3~(bottom). The VSA emission clearly dominates in both Strips 2 and 3, while in Strip 1, the point source NVSS~J034433+321255 appears to dominate. However, when this point source is scaled from 1.4~GHz to 33~GHz, as shown in Figure~\ref{Fig:Source_Cal}, the flux density is 11.22~$\pm$~1.11~mJy, which is below the level of the 33~GHz emission. It is also apparent that the spatial structure of the low frequency emission is not comparable to the emission observed at 33~GHz for the three strips.}
\label{Fig:GBT_VSA_Scans}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=0.48]{Fig7a.pdf} \\
\includegraphics[angle=0,scale=0.48]{Fig7b.pdf} \\
\includegraphics[angle=0,scale=0.48]{Fig7c.pdf} \\
\caption{The distribution of the 3$\sigma$ upper limit of the fraction of free-free emission~(f$_{\mathrm{free-free}}$) at 33~GHz using the L-band and C-Band data for Strips 1~(top), 2~(middle), and 3~(bottom).}
\label{Fig:Fraction_ff}
\end{center}
\end{figure}
Total power observations can be severely affected by the atmospheric opacity. However, at frequencies below 5~GHz, typical values of the zenith opacity are~$\le$~0.01 nepers, which corresponds to an atmospheric attenuation of the order of 1~\% for the elevation of our observations~($\sim$~50~--~80~degrees). However, atmospheric effects are not the only contaminant for total power observations, radio frequency interference~(RFI) and gain variations need to be mitigated. To overcome issues with RFI, all the data were visually inspected and any contaminated data scans were flagged. To help deal with the effects of gain variations, we produced a power spectrum for each data scan. We fitted the power spectrum for the knee frequency, $\nu_{knee}$, above which the data are dominated by white noise only, and the level of the white noise. Since the sampling rate for our GBT observations was 5~Hz, we flagged all data scans for which $\nu_{knee}$~$>$~5~Hz. Finally, to remove the effects of offsets in the data, a baseline subtraction was performed. To determine an accurate baseline level, we binned the data along each scan. The bin sizes were chosen to be approximately equal to the FWHM of the beam, however, we investigated the effects of varying the bin size, and found that the effect was of the order of a few percent. Therefore, we conservatively include a 5~\% uncertainty in the data to include the uncertainty due to the baseline fitting. The median value of the data within each bin was computed, and then the median of all the medians was calculated. This resulted in a median level of the baseline, and we then fitted a first order polynomial to the data within $\pm$~3$\sigma$ of this median value. We only fitted a first order polynomial because a higher order polynomial would potentially remove the structure in which we are interested, while applying the $\pm$~3$\sigma$ cut ensures that any bright sources do not bias the baseline level. The resulting fit was then subtracted from the data. All the data scans for each strip were then combined and the final data scan for each strip was produced by computing the median of the scans. To estimate the thermal noise level in the final data scans, we computed the median of the white noise estimates obtained from fitting the power spectrum, and divided this by the square root of the number of times each strip was observed. For Strips 1, 2, and 3, we estimated a thermal noise level of 0.31, 0.24, and 0.28~mJy~beam$^{-1}$ at L-Band, and 0.044, 0.057, and 0.033 mJy~beam$^{-1}$ for C-Band. These noise levels are consistent with the noise levels obtained by fitting to the power spectrum of the final data scans, confirming that we have been able to reduce the noise level by observing the strips multiple times. However, as we will discuss, the thermal noise is not the dominant source of noise present in the data scans. The final data scans at both L-band and C-band are displayed in Figure~\ref{Fig:Final_Scans} and the final uncertainties on these scans were estimated by combining, in quadrature, a 2~\% uncertainty in the flux density calibration, a 1~\% uncertainty due to the atmospheric opacity, and a 5~\% uncertainty due to the baseline fitting and subtraction.
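The baseline removal described above can be summarised by the following sketch; the bin width, array names, and the sigma estimate are simplifications of the actual procedure.
\begin{verbatim}
import numpy as np

def subtract_baseline(pos, flux, beam_fwhm):
    # pos, flux: coordinate and flux along one GBT scan
    # beam_fwhm: bin size (about one beam width), same units as pos
    nbins = max(int((pos.max() - pos.min()) / beam_fwhm), 1)
    edges = np.linspace(pos.min(), pos.max(), nbins + 1)
    idx = np.digitize(pos, edges[1:-1])
    medians = [np.median(flux[idx == i]) for i in range(nbins)
               if np.any(idx == i)]
    level = np.median(medians)                # median of the bin medians
    keep = np.abs(flux - level) < 3.0 * np.std(flux)  # drop bright sources
    coeff = np.polyfit(pos[keep], flux[keep], 1)  # first-order polynomial
    return flux - np.polyval(coeff, pos)
\end{verbatim}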
In Figure~\ref{Fig:Final_Scans} there are two plots for each strip, one at 1.4~GHz and one at 5~GHz. Looking at these plots it is possible to identify point sources and some extended structures. To determine the significance of these point sources and extended structure we investigated the distribution of the data as shown in the histograms displayed in Figure~\ref{Fig:Final_Histograms}. These histograms show that for all of the scans, the peak of the distribution appears to occur around a flux of 0~Jy~beam$^{-1}$. We computed the skewness for each scan and found that all the scans have a positive skewness, although only the L-Band and C-Band observations of Strip 1 have a skewness $>$ 1, which is a result of the point sources that are clearly present in these scans~(see Figure~\ref{Fig:Final_Scans}). The other strips all have a skewness of $<$ 1 and this, combined with the fact that the distributions peak around 0~Jy~beam$^{-1}$, suggests that these data scans are dominated by noise. It is known that for continuum observations with most GBT receivers, gain fluctuations in the receiver and electronics can considerably degrade the sensitivity, in some cases by more than an order of magnitude~(GBT Proposers Guide July 2012\footnote{https://science.nrao.edu/facilities/gbt/proposing/GBTpg.pdf}). Therefore, to obtain an estimate of the noise, we computed the r.m.s. for each data scan. For Strips 1, 2, and 3 we computed an r.m.s. of 23.9, 12.5, and 19.5~mJy~beam$^{-1}$ for L-Band and 2.7, 2.4, and 3.0~mJy~beam$^{-1}$ for C-band. Note that for Strip 1 we excluded the data corresponding to the point sources from the r.m.s. calculation. It is clear that these noise values are much larger than the thermal noise estimates~(see Table~\ref{Table:Summary_GBT_Scans}) and hence the data scans are not dominated by thermal noise. By comparing the distribution of the data to the 3$\sigma$ limit displayed in Figure~\ref{Fig:Final_Histograms} it is evident that for Strip 1 there is significant signal present, while for Strips 2 and 3 the data are consistent with noise and some non-significant emission. The 3$\sigma$ limit for each scan is also displayed as a dashed horizontal line in Figure~\ref{Fig:Final_Scans}.
In Strip 1, we can see that there is a bright point source at a declination of~$\approx$~32.2~degrees that is visible at both L-Band and C-Band, and there is a less bright point source at a declination of~$\approx$~31.75~degrees that is only visible in C-Band. Both of these point sources are detected at greater than 3$\sigma$. In Strip 2 there is some extended structure present but this is very low-level emission with no emission detected at greater than 3$\sigma$. As in Strip 2, in Strip 3 there is some low-level, non-significant extended structure. There is also a possible hint of a point source at declination~$\approx$~31.48~degrees, however, like the extended structure, this is not a significant detection. Based on a search of the NASA Extragalactic Database\footnote{http://ned.ipac.caltech.edu}~(NED), we believe that the brighter point source in Strip 1 is NVSS~J034433+321255, the weaker point source in Strip 1 is NVSS~J034439+314523, and the hint of a point source in Strip 3 is NVSS~J033727+312808.
\begin{table}
\begin{center}
\caption{Flux densities from the literature for NVSS~J034433+321255}
\begin{tabular}{c c c}
\tableline
\tableline
Frequency & S$_{\nu}$ & Reference \\
(GHz) & (mJy) & \\
\tableline
0.074 & 3020~$\pm$~360 & \citet{Cohen:07} \\
0.365 & 706~$\pm$~43.0 & \citet{Douglas:96} \\
0.408 & 640~$\pm$~50.0 & \citet{Colla:70} \\
0.750 & 440~$\pm$~210 & \citet{Pauliny-Toth:66} \\
1.4 & 224.5~$\pm$~7.9 & \citet{Condon:98} \\
4.85 & 53.0~$\pm$~8.0 & \citet{Becker:91} \\
4.85 & 54.0~$\pm$~8.0 & \citet{Gregory:91} \\
\tableline
\end{tabular}
\label{Table:NED_Data}
\end{center}
\end{table}
Although we are interested in the extended emission and how it compares to the 33~GHz emission~(see Section~\ref{sec:discuss}), the significant detection of point sources in Strip 1 allows us to check the calibration levels. Therefore, for both point sources in Strip 1, we simultaneously fitted a Gaussian and baseline offset to the data. For NVSS~J034439+314523, we obtained a flux density of 20.17~$\pm$~1.11~mJy in C-Band, although this flux density may be slightly affected by the fact that the source appears at the very edge of the scan. This source is not seen in the L-Band data and this is likely due to a lack of sensitivity. For NVSS~J034433+321255 we observed a flux density of 194.86~$\pm$~10.68~mJy at 1.4~GHz and 59.65~$\pm$~3.29~mJy at 5~GHz. Based on data obtained from NED, which spanned a frequency range from 74~MHz to 4.85~GHz~(see Table~\ref{Table:NED_Data}), we produced a spectrum for NVSS~J034433+321255, which is displayed in Figure~\ref{Fig:Source_Cal}. We fitted the data from the literature with a power-law of the form S$_{\nu}$~$\propto$~$\nu^{\alpha}$ and found a spectral index of $\alpha$~=~$-$0.93~$\pm$~0.01. This fit results in an expected flux density of 210.46~$\pm$~18.02~mJy at 1.4~GHz and 64.61~$\pm$~5.87~mJy at 5~GHz, and these values are consistent with the flux densities measured from the GBT observations. In Figure~\ref{Fig:Source_Cal}, the GBT data at 1.4 and 5~GHz are overplotted on the spectrum and are consistent with the fit to the data from the literature. The consistency between the GBT observations and the values from the literature confirms the accuracy of the GBT data.
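The fit can be reproduced directly from the flux densities listed in Table~\ref{Table:NED_Data}, for example with a weighted linear fit in log--log space; the following is a minimal sketch, and the actual fit may treat the uncertainties slightly differently.
\begin{verbatim}
import numpy as np

# literature flux densities of NVSS J034433+321255 (values listed above)
nu    = np.array([0.074, 0.365, 0.408, 0.750, 1.4, 4.85, 4.85])  # GHz
s     = np.array([3020., 706., 640., 440., 224.5, 53.0, 54.0])   # mJy
s_err = np.array([360., 43., 50., 210., 7.9, 8.0, 8.0])          # mJy

w = s / s_err   # ~ 1/sigma in log space (up to a constant factor)
alpha, const = np.polyfit(np.log10(nu), np.log10(s), 1, w=w)
print(alpha)                        # close to -0.93
print(10**const * 1.4**alpha)       # ~210 mJy at 1.4 GHz
print(10**const * 5.0**alpha)       # ~65 mJy at 5 GHz
\end{verbatim}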
\section{Comparing the GBT Observations with the VSA Observations}
\label{sec:discuss}
Now that we have processed the GBT data and have a measure of the 1.4 and 5~GHz emission in each of the three strips~(Figure~\ref{Fig:Final_Scans}), we wish to compare the level of this low frequency emission with the emission observed at 33~GHz. The 33~GHz emission was observed with the VSA at an angular resolution of~$\sim$~7~arcmin, with a total of 11 individual pointings to cover the entire cloud. A map for each of the pointings was produced using the standard \textsc{aips} routines to perform both the CLEANing and deconvolution. The final map was produced by creating a mosaic of the individual maps, and is sensitive to angular scales of~$\sim$~10~--~40~arcmin~\citep[for further details see][]{Tibbs:10}.
To compare interferometric data and single dish data, the single dish data is usually resampled with the sampling distribution of the interferometer to account for the incomplete sampling in the \textit{u,v} plane. However, since we only have one dimensional GBT scans, this method is not feasible as the result would be completely contaminated by edge effects due to the Fourier transform. Nonetheless, it is still possible to perform a comparison between the GBT and VSA data. Since we fitted a first order polynomial baseline to the GBT scans, the observations are not sensitive to angular scales greater than the length of the strip. As listed in Table~\ref{Table:Summary_GBT_Scans}, the length of these strips is~$\sim$~55~--~90~arcmin and this means that the GBT observations cover the range of angular scales to which the VSA is sensitive. Therefore, comparing the GBT data with the VSA data allows us to determine the extent of the correlation between the emission at 1.4 and 5~GHz and 33~GHz.
To perform the comparison between the VSA data and the GBT data, we convolved both the C-Band GBT scans and the VSA map~\citep{Tibbs:10} to 9~arcmin to match the angular resolution of the GBT L-Band observations. We then extracted the flux from the convolved VSA map along the three strips observed with the GBT. These data scans were then compared with the GBT data scans, and the results are displayed in Figure~\ref{Fig:GBT_VSA_Scans}. These plots show the 33~GHz emission observed along the three strips with the VSA, over plotted with both the L-Band and C-Band data at 1.4 and 5~GHz, respectively. From Figure~\ref{Fig:GBT_VSA_Scans} it is apparent that the emission at 33~GHz dominates the emission at 1.4 and 5~GHz for Strips 2 and 3. In Strip 1, the point source NVSS~J034433+321255 at 1.4~GHz appears to dominate the VSA emission. However, when the flux density of this source is scaled to 33~GHz, as shown in the spectrum displayed in Figure~\ref{Fig:Source_Cal}, the level of the emission is much less than that observed by the VSA~--~the flux density of the point source at 33~GHz is 11.22~$\pm$~1.11~mJy. There is a possibility that NVSS~J034433+321255 could be a gigahertz peaked source, with a rising spectrum at frequencies greater than 5~GHz. However, looking at Figure~\ref{Fig:GBT_VSA_Scans} it is clear that the spatial structure of the emission at 1.4 and 5~GHz is not similar to the emission observed at 33~GHz for any of the strips. In Strip 1, the VSA detects an extended structure while the GBT only detects the point source NVSS~J034433+321255, implying that even if the point source is a gigahertz peaked source, it is not dominating the VSA emission. Similarly, in Strips 2 and 3, the low-level non-significant emission at 1.4 and 5~GHz does not match the emission observed with the VSA. Therefore, this confirms that the emission observed with the GBT at 1.4 and 5~GHz is much weaker than the emission observed at 33~GHz with the VSA. It should also be noted that that GBT data displayed in Figure~\ref{Fig:GBT_VSA_Scans} have not been scaled to 33~GHz. Assuming a canonical spectral index for free-free emission of~$\alpha$~=~$-$0.12~\citep[e.g.][]{Dickinson:03}, this implies that the expected level of the GBT emission at 33~GHz will be lower than the level plotted in Figure~\ref{Fig:GBT_VSA_Scans}. We also note that the although the GBT observations cover the range of angular scales to which the VSA is sensitive, the emission observed by the VSA on this range of angular scales is not uniformly sampled due to the incomplete sampling of the \textit{u,v} plane. This is not true for the GBT emission, and therefore the VSA flux displayed in Figure~\ref{Fig:GBT_VSA_Scans} can actually be regarded as a lower limit. Additionally, the comparison between the convolved C-Band data and the data extracted from the convolved VSA map is not quite accurate because the convolved C-Band data lacks information in the direction perpendicular to the scan on angular scales greater than its original angular resolution. Therefore, to try and characterize this effect, we performed simulations using the GB6 map of the region~\citep{Condon:89}. We extracted data scans from the GB6 map, convolved them to 9~arcmin and then compared them with the identical data scan extracted from the convolved GB6 map. This comparison allows us to determine the effect of convolving a single scan versus convolving a map, and to perform this comparison we computed the distribution of the ratio of the two simulated scans. 
We found that this effect is strongly dependent on the position of the selected strip, and based on the location of the three strips in this analysis, the median effect for Strip 1, Strip 2, and Strip 3, was found to be a factor of 1.49, 1.09, and 1.36, respectively. Since the angular resolution of the GB6 data~(3.5~arcmin) and the C-Band data~(2.5~arcmin) are not identical, this effect may be slightly stronger. We note that this issue does not apply to the L-band data.
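This simulation can be sketched as follows; the Gaussian kernels and names are illustrative, and the factors quoted above come from the actual GB6 map.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def convolution_bias(gb6_map, row, fwhm_native, fwhm_target, pix_scale):
    # Ratio between a scan extracted from the 2-D convolved map and the
    # same scan convolved in 1-D only (per-pixel distribution).
    sigma = np.sqrt(fwhm_target**2 - fwhm_native**2) / 2.3548 / pix_scale
    scan_1d = gaussian_filter1d(gb6_map[row, :], sigma)  # scan only
    scan_2d = gaussian_filter(gb6_map, sigma)[row, :]    # full map
    return scan_2d / scan_1d
\end{verbatim}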
Therefore, given the fact that we detected no significant extended structure in the three strips, and to account for the issue regarding the convolution of the C-Band data, we conservatively decided to use the 3$\sigma$ upper limits to estimate the fraction of free-free emission at 33~GHz. For each strip we scaled the 3$\sigma$ upper limit to 33~GHz assuming a typical free-free spectral index of $-$0.12 and compared this to the VSA emission. The results of this analysis are plotted as histograms in Figure~\ref{Fig:Fraction_ff}. For each strip, the histogram displays the distribution of the fraction of free-free emission at 33~GHz based on the L-Band and C-Band 3$\sigma$ upper limits. Since we are interested in constraining the fraction of free-free emission in the AME features observed with the VSA at 33~GHz, this analysis was restricted to the regions along each strip in which the 33~GHz emission is greater than the 3$\sigma$ upper limits. Regions in which the 33~GHz emission is less than the 3$\sigma$ upper limits are regions in which the 3$\sigma$ upper limits are an over estimate of the free-free emission. As an estimate of the fraction of free-free emission at 33~GHz, we computed the median of the entire distribution~(both the L-Band and C-Band distributions) for each strip. We find that the conservative 3$\sigma$ upper limit on the fraction of free-free emission at 33~GHz is 27~\%, 12~\%, and 18~\% for Strip 1, Strip 2, and Strip 3, respectively. We tested the robustness of this result by integrating the 33~GHz flux along the scan and comparing it with the integrated 3$\sigma$ upper limits scaled to 33~GHz, and found results consistent with those displayed in Figure~\ref{Fig:Fraction_ff}.
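For reference, the scaling applied here is simply the following (a minimal numerical illustration):
\begin{verbatim}
# scale a 1.4 or 5 GHz 3-sigma upper limit to 33 GHz assuming a
# free-free spectral index of -0.12
alpha_ff = -0.12
scale_L = (33.0 / 1.4)**alpha_ff    # ~0.68
scale_C = (33.0 / 5.0)**alpha_ff    # ~0.80
# fraction of free-free emission at 33 GHz along a strip:
# f_freefree = 3*sigma_GBT*scale / S_VSA(33 GHz)
\end{verbatim}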
Therefore, from the plots in Figures~\ref{Fig:GBT_VSA_Scans} and~\ref{Fig:Fraction_ff} we conclude that the low frequency emission, extrapolated to 33~GHz, is much fainter than the emission observed at 33~GHz with the VSA. Even if we ignore the C-Band data completely and just use the L-Band data, the 3$\sigma$ upper limit accounts for only 49~\%, 24~\%, and 27~\% of the emission at 33~GHz for Strip 1, Strip 2, and Strip 3, respectively. This confirms that the emission observed with the VSA at 33~GHz is in excess over the free-free emission, and hence is clearly AME. The results of this analysis are in agreement with the analysis performed by~\citet{Tibbs:10} who found that the free-free emission accounted for~$\sim$~20~--~25~\% of the 33~GHz emission. It is also consistent with the analyses performed on much larger angular scales by~\citet{Watson:05} and the~\citet{Planck_Dickinson:11}, who detected the presence of an AME component with a free-free emission fraction of~$\sim$~15~--~20~\% at 33~GHz. These works explained this excess emission as a result of spinning dust emission, which is also consistent with the results of a recent analysis by~\citet{Tibbs:11}, who performed a detailed analysis of the dust properties in this environment, and concluded that the emission could be explained in terms of the spinning dust hypothesis.
\section{Conclusions}
\label{sec:con}
We have used the GBT to observe three strips at 1.4 and 5~GHz that intersect the five regions of AME in the Perseus molecular cloud, which were detected with the VSA~\citep{Tibbs:10}. The data were processed to remove scans affected by RFI and gain variations and the remaining scans were baseline subtracted using a first order polynomial. The scans were then stacked and the median scan was computed. The final data scans were compared with the emission observed in the corresponding strips of the VSA map at 9~arcmin angular resolution, and we found that neither the level of the emission, nor the spatial structure of the emission at 1.4 and 5~GHz, was comparable to the 33~GHz emission. We computed conservative 3$\sigma$ upper limits of the fraction of free-free emission at 33~GHz of 27~\%, 12~\%, and 18~\% for Strip 1, Strip 2, and Strip 3, respectively. Although this analysis is based solely on one dimensional scans, the results are consistent with previous analyses of this region, confirming the low level of free-free emission and the existence of AME in the Perseus molecular cloud.
\section*{Acknowledgments}
We thank A. Noriega-Crespo and S. Carey for stimulating discussions. We also thank the referee for providing detailed comments that have improved the content of this paper. This work has been performed within the framework of a NASA/ADP ROSES-2009 grant, no. 09-ADP09-0059. CD acknowledges support from an STFC Advanced Fellowship and an EU Marie-Curie IRG grant under the FP7. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research has made use of the NASA/IPAC Extragalactic Database~(NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction}
Full-disc photographs of the Sun in the resonance K line of the singly-ionized calcium, Ca II, at 3933.67 \,\AA~ were first obtained in the early 1890's by Henri Alexandre Deslandres and George Ellery Hale \citep{hale_solar_1893} with the spectroheliographs developed at the Paris and Kenwood observatories, respectively.
Since then, regular observations with similar instruments have been performed at various sites around the globe, \textit{e.g.} at the Kodaikanal (since 1904), Mt. Wilson (since 1915), Mitaka (since 1917), Coimbra (since 1926), and Arcetri (since 1931) observatories.
Ca~{\sc II}~K observations with interference filters started later, \textit{e.g.} at the Rome (since 1964), Kandilli (since 1968), and Big Bear (since 1981) observatories.
These observations are one of the main sources of information on the long-term changes in the lower solar chromosphere, that is, the first thousand kilometres above the temperature minimum.
Furthermore, the Ca~{\sc II}~K line provides information about solar magnetism.
This is due to the large increase in the intensity of the line core when sampling bright magnetic regions \citep[plage; see \textit{e.g.}][]{skumanich_sun_1984}.
The potential of the full-disc Ca~{\sc II}~K observations to serve as a proxy of the magnetic field \citep[][and references therein]{schrijver_relations_1989,loukitcheva_relationship_2009,chatzistergos_recovering_2019} makes them very valuable for studies of the evolution of solar activity, as well as for analyses of the magnetic activity of stars other than the Sun.
Also, irradiance reconstructions and Earth's climate-variability studies can significantly benefit from analyses of Ca~{\sc II}~K observations.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_01}
\caption{Examples of the images from the two digitisations of the photographic Ca {\sc II} K observations of the Kodaikanal observatory taken on 02 January 1936. Left: 8-bit digitisation from data set 1 (DS1 for short); Right: 16-bit digitisation from DS2.
The images are shown over their full range of values and were not compensated for the ephemeris. The DS1 image is shown to scale with the DS2 image (i.e. equal number of pixels per cm), while the DS2 image has been cropped to a width of $2R+R/6$, where $R$ is the solar radius in pixels.}
\label{fig:rawimgexample_19360102}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_02}
\caption{Number of images per year (\textit{top panel}) and annual coverage (\textit{bottom panel}) of the Kodaikanal data in the two digitised series. DS1 is shown in dotted red, DS2 in solid blue, while the annual coverage by the two archives together is shown in dashed black.}
\label{fig:ndataused}
\end{figure}
Most of the Ca {\sc II} K observations performed so far are stored in photographic archives, a number of which have recently been digitised \citep[see][]{chatzistergos_analysis_2017,chatzistergos_analysis_2019,chatzistergos_historical_2019}.
For example, the photographic archives from the Arcetri (\,1931--\,1974), Kodaikanal (\,1904--\,2007), Kyoto (\,1926--\,1969), McMath-Hulbert (\,1948--\,1979), Meudon (\,1893--\,2002), Mitaka (\,1917--\,1974), Mt. Wilson (\,1915--\,1985), Rome (\,1964--\,1979), and Sacramento Peak (\,1960--\,2002) observatories have been made available in digital form.
The availability of the Ca~{\sc II}~K series as digital data has initiated their extensive exploitation for studies of the long-term variation of the chromospheric magnetic field and for a variety of retrospective analyses.
For example, \cite{foukal_behavior_1996}, \cite{ermolli_comparison_2009}, \cite{tlatov_new_2009}, \cite{chatterjee_butterfly_2016}, and \cite{priyal_long-term_2017} presented plage-area time series derived from the analysis of different archives with distinct image-processing methods.
\cite{harvey_cyclic_1992}, \cite{ermolli_digitized_2009}, and \cite{chatterjee_butterfly_2016} produced butterfly diagrams of plage regions from the Mt. Wilson, Arcetri, and Kodaikanal observations, respectively.
Moreover, \cite{sheeley_carrington_2011}, \cite{chatterjee_butterfly_2016}, and \cite{pevtsov_reconstructing_2016} produced Carrington maps from Ca~{\sc II}~K observations of the Mt. Wilson and Kodaikanal archives.
Recently, \cite{chatterjee_variation_2017} analysed the supergranulation scale variation in historical Kodaikanal Ca {\sc II} K data following a previous similar analysis of modern data by \cite{ermolli_measure_2003} with Rome Precision Solar Photometric Telescope (Rome/PSPT, hereafter) observations.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_03}
\caption{Selected processing steps applied to images from the two digitisations of Kodaikanal observations taken on 02 January 1936 and shown in Figure \ref{fig:rawimgexample_19360102}. The DS2 (16-bit) image is shown in the \textit{upper row}, while the DS1 (8-bit) one in the \textit{lower row}. The columns show the original density image, photometrically calibrated and limb darkening corrected image, and segmentation mask, respectively. The raw images are shown over the entire range of values found within the solar disc, while the limb-darkening-compensated images are saturated at contrast values of [-1,1]. The segmentation masks show the QS regions in blue and the plage regions in orange.}
\label{fig:processedimagessamedayflat} \end{figure}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{fig_04}
\caption{Magnified sections from the raw DS1 and DS2 negative images (\textit{first two columns}) and calibrated and limb-darkening compensated images (\textit{last two columns}) of the Kodaikanal observation taken on 02 January 1936 (see Figures \ref{fig:rawimgexample_19360102} and \ref{fig:processedimagessamedayflat}) displaying a quiet Sun region (\textit{top 2 rows}) and a plage region with a sunspot (\textit{bottom 2 rows}). The image from DS2 is shown in \textit{rows 1 and 3}, while the one from DS1 is shown in \textit{rows 2 and 4}. The sections in \textit{columns 1+3} and \textit{2+4} have widths of $260''$ and $65''$, respectively, which correspond to 200 and 50 pixels for the DS1 image, respectively. The images are shown over their full range of values. \textit{Over-plotted contours} show the plage regions identified with the method applied in our study.}
\label{fig:rawimgexample_zoomin_193601020}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_05}
\caption{Comparison of observations from DS2 (\textit{upper row}) and Rome/PSPT (\textit{lower row}), both taken on 27 May 2000. Shown are the raw images (\textit{left column}), processed and limb-darkening compensated images (\textit{middle column}), and the segmentation masks (\textit{right column}). The raw images are shown over the entire range of values found within the solar disc, while the limb-darkening compensated images are saturated at [-0.5,0.5]. In the segmentation masks the QS regions are coloured blue and the plage regions orange.}
\label{fig:comparisonpspt}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_06}
\caption{Raw density images from DS1 (\textit{left}), DS2 (\textit{middle}), and Rome/PSPT (\textit{right}) taken on 03 April 1999 illustrating saturated plage regions in the Kodaikanal data.}
\label{fig:comparisonpsptsatregions}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_07}
\caption{Enlargement of observations displayed in Figure \ref{fig:comparisonpspt} from DS2 (\textit{left}) and Rome/PSPT (\textit{right}) showing an active region. The images are saturated at [-0.5,0.9]. The contours mark the location of sunspots as identified in the Rome/PSPT image. See Section \ref{sec:imagecomparison} for more details.}
\label{fig:200027-5-0-0kodaplagezoomin}
\end{figure}
Most studies performed on historical Ca~{\sc II}~K data confirm some well-known characteristics of past cycles, \textit{e.g.} they all report that Solar Cycle 19 showed remarkably high plage coverage and a broad latitudinal distribution of active regions. They also agree on the overall increase of solar activity over the first half of the 20th century and the decrease over the last decades.
However, they also show significant inconsistencies, such as large differences in absolute values and in the detected short- and long-term trends \citep{ermolli_potential_2018,chatzistergos_analysis_2019}.
Such differences are seen both, between the results derived from different archives, and between the same archives but processed and analysed with different methodologies.
Some of these discrepancies can be ascribed to the varying quality of the analysed data partly arising from the digitisation.
Most of the series were digitised independently at different periods and using various set-ups.
Furthermore, some series underwent multiple digitisations, \textit{e.g.} the Kodaikanal, Meudon, Mitaka, and Mt. Wilson archives.
The two main reasons for the re-digitisation of the various series were: i) the availability of higher quality digitising devices than those employed in the older digitisations and ii) various problems identified in some of the earlier digitised series.
Among the photographic archives of Ca {\sc II} K observations, the one from the Kodaikanal observatory has probably the largest collection of images, covering more than a century with nearly daily observations between 1904 and 2007. It is thus a particularly important archive.
The Kodaikanal photographic archive underwent three digitisations over the last three decades \citep{kariyappa_variability_1994,makarov_22-years_2004,priyal_long_2014}, although only the last two digitisations included the entire solar disc.
In \cite{chatzistergos_analysis_2019} we consistently analysed with the same method the data from the former digitisation of the Kodaikanal archive along with the archives from Arcetri, McMath-Hulbert, Meudon, Mitaka, Mt. Wilson, Schauinsland, and Wendelstein to derive plage areas. We used the Kodaikanal and Mt. Wilson series as reference to construct two composites of plage areas over the entire 20th century. The two composites showed different absolute levels and discrepancies over different Solar Cycles. To understand whether some of these differences are coming from the digitisation and can potentially be resolved with the new digitisation of the Kodaikanal data and to best exploit the potential of one of the most important Ca~{\sc II}~K series, we here analyse the quality and intrinsic differences in the images derived from the two most recent digitizations of the full-disc Ca~{\sc II}~K observations from the Kodaikanal observatory.
We process the data from both digitisations consistently to derive plage areas and to compare the results to each other, as well as to those presented in the literature.
This article is structured as follows:
in Section \ref{sec:data} we give an overview of the data employed in this study and describe the methods used to process them.
We compare the images from the two digitisations in Section \ref{sec:characteristicsdatasets} where we also discuss characteristics of the series.
We present our results of plage areas from the last digitisation of the Kodaikanal data in Section \ref{sec:plageareas} and compare them to other series reported in the literature.
Finally we draw our conclusions in Section \ref{sec:conclusions}.
\section{Data and Processing}
\label{sec:data}
\subsection{Data}
We analyse the data from the two most recent digitisations of the photographic full-disc Ca~{\sc II}~K Kodaikanal observations.
These data were taken with a spectroheliograph having a nominal bandwidth of 0.5\,\AA.
The first dataset (DS1, hereafter) was obtained by \cite{makarov_22-years_2004} by scanning 22,158 photographic observations taken from 1907 to 1999 with a linear array of 900 pixels.
The data were stored in JPG format as 1800$\times$1800 pixel$^2$ images, with 8-bit accuracy, and an average pixel-scale of $1.3''$pixel$^{-1}$.
The second dataset (DS2, hereafter) was derived by \cite{priyal_long_2014} by scanning 48,928 photographic observations taken from 1904 to 2007 with a CCD camera with a resolution of 4096$\times$4096 pixel$^2$.
The data were stored in FITS format with 16-bit accuracy and an average pixel scale of $0.9''$pixel$^{-1}$.
Figure \ref{fig:rawimgexample_19360102} shows the two digitisations of a single Kodaikanal photographic plate taken on 02 January, 1936.
The raw negative images are displayed over their full range of values, while the image from DS1 is shown to scale with the DS2 one, i.e. with an equal number of pixels per cm.
Figure \ref{fig:ndataused} shows the number of images per year in DS1 and DS2.
Only a sub-sample of the available photographic plates were scanned for DS1, while almost all of the existing plates have been scanned for DS2.
We find only 13,835 images in each set that refer to the same solar observations, i.e. that are scans of the same photographic plate.
The fraction of days within each year with at least one observation from either Kodaikanal series is also shown in Figure \ref{fig:ndataused}.
We find 21,405 days with at least one observation from both archives (out of 21,746 and 26,492 days in DS1 and DS2, respectively), as demonstrated by the mismatch between the coverage by DS2 and the total coverage by DS1 and DS2 together. Besides, we notice that there are seven years (1907, 1909, 1911, 1913, 1955, 1973, and 1993) over which DS1 has a better coverage than DS2.
These discrepancies can, at least partly, be explained by errors in reporting both the date and timing of the observation in the digitised files.
Such errors in the digitised series are not surprising considering the large number of plates and should most likely affect images from both digitisations. We find that the errors in the dates are limited to only a few images, while the errors in the time of the observations affect a considerable amount of data. In this regard, it is worth noting that images from DS2 include almost the entire plate, thus allowing us to compare the date on the plate to the one passed in the meta-data of the file. This is not the case for the images from DS1, which have been cropped to include the solar disc and only a small area outside of the disc.
Furthermore, there is an inconsistency in both datasets in the time format. Time is given as Indian Standard Time up to the 1960's and as Coordinated Universal Time afterwards. For our study, all times from both datasets were corrected to be in Indian Standard Time. Note that this choice does not affect the results presented in the following, since the analysis was done on daily mean values.
The radius of the solar disc varies with time in both series, but it has an average value of 685 and 1095 pixel for DS1 and DS2, respectively.
A heliostat was employed at the Kodaikanal Observatory, which allows determining the orientation of the solar disc based on the date and time of the observation.
However, the placement of the plates during the digitisation introduced an additional rotation angle that needs to be taken into account to orient the images. This angle is usually small and in general random.
Markings have been introduced on the plates before the scanning \citep{priyal_long_2014} to identify the north and south poles of the solar disc.
However, the information about which side corresponds to the east or west is lost.
The information from the regions of the original plates lying outside the solar disc seems to have been maintained in DS2.
In DS1, however, there are images where these regions have been saturated (\textit{e.g.} Figure \ref{fig:rawimgexample_19360102}).
Various artefacts, such as scratches and emulsion holes, appear in both digitisations.
Many other artefacts, such as dust or hairs, are found at different locations in images of the two digitisations.
One should also note that the DS2 series was generated more than a decade after the DS1 one, so natural degradation of the photographic plates is also partly responsible for this discrepancy.
Moreover, since the 8-bit digitisation was performed with a linear array, a few images have rows that are offset in the $x$-direction and hence show distorted discs.
This issue is resolved in the 16-bit digitisation, where a CCD camera was used.
Both sets of digitised images require instrumental calibration of the digitisation camera before any further processing.
Such a calibration has not been applied to the DS1 \citep{ermolli_comparison_2009} data, but it has partially been applied to the DS2 data and the standard data for instrumental calibrations were stored.
In particular, the dark current was removed with a built-in program of the scanning device, while a lab-sphere illuminating a white surface was used to measure the flat-field of the CCD.
The same surface was used to support the photographic plates during their scanning.
Inspection of the available data shows that this surface does not always cover the same area as the plate. As a result, information outside the disc is lost when dividing the digitised observation by the corresponding flat-field image.
Furthermore, the exposure time of the flat was not constant and was different from the one used for scanning the plates.
Since the same CCD recorded the image with different gain in four quadrants, small errors in the photometric calibration of the data could affect the calibrated images showing variations over the quadrants and other residual inhomogeneities. In addition, flat-field images are expected to change over the course of the digitisation, and it is important to have such images created close to the scanning time of each image.
However, flat-field images are missing from several folders in the archive or the included flat image does not always account completely for the difference in the quadrants.
We also use modern CCD-based observations taken with the Rome/PSPT as a comparison because there is partial overlap with both digitised Kodaikanal series.
PSPT is located at Monte Porzio Catone and is operated by the INAF Osservatorio Astronomico di Roma \citep{ermolli_prototype_1998,ermolli_photometric_2007}.
Observations with Rome/PSPT started in May 1996 and continue to the present.
The observations used here are taken with an interference filter centred at the Ca {\sc II} K line with a bandwidth of 2.5\,\AA.
The images have dimensions of 2048$\times 2048$ pixel$^2$ and are stored in FITS files after the standard instrumental calibration \citep[][]{ermolli_photometric_2007}.
Therefore, such data can be used to investigate the differences between photographic and CCD observations, as well as between observations taken with a spectroheliograph and an intereference filter.
\subsection{Methods}
In our study we analysed raw DS1 images without calibration of the digitising device, raw DS2 data divided by the flat image taken closest in time to the scanning time, and calibrated Rome/PSPT data.
Some key characteristics of these archives are listed in Table \ref{tab:characteristiscs}.
We used the DS1 and Rome/PSPT data already processed with the technique described by \cite{chatzistergos_analysis_2018,chatzistergos_ca_2018,chatzistergos_analysis_2019} and in the present study we applied exactly the same processing to the data from DS2.
In brief, the images were converted to density images and then photometrically calibrated with a calibration curve (CC, hereafter). The CC was derived by relating the centre-to-limb variation (CLV, hereafter) measured in quiet-Sun regions (QS, hereafter) on the historical observations to a standard reference of QS CLV as measured in modern Rome/PSPT Ca {\sc II} K observations, and linearly extrapolated to the non-QS regions.
Contrast images were constructed by removing the limb darkening, which was determined in an iterative process.
This includes application of a running-window median filter and polynomial fitting along rows, columns, and radial locations after the bright features had been removed.
The contrast images were then segmented with a multiplicative factor to the standard deviation of the QS intensity values. This factor was determined with a method based on the approach of \cite{nesme-ribes_fractal_1996}.
The multiplicative factor for identifying plage was chosen to be $8.5$.
This choice resulted in plage areas from the Rome/PSPT that are comparable to those from the SRPM segmentation scheme \citep{fontenla_semiempirical_2009} applied to the same Rome/PSPT data.
We emphasize that for this work the precise value of the segmentation parameter is of no importance, since the main goal here was to process all series consistently to understand some of the difference between the results presented in the literature.
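Schematically, this segmentation step amounts to the following thresholding; this is a simplified sketch, since in the actual method the QS statistics are derived iteratively together with the limb-darkening compensation.
\begin{verbatim}
import numpy as np

def plage_mask(contrast, disc_mask, factor=8.5):
    # contrast : limb-darkening-compensated contrast image
    # disc_mask: boolean array selecting on-disc pixels
    # factor   : multiplicative factor applied to the QS standard deviation
    sigma_qs = np.std(contrast[disc_mask])  # QS scatter (bright features
                                            # should first be excluded)
    return disc_mask & (contrast > factor * sigma_qs)
\end{verbatim}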
Figure \ref{fig:processedimagessamedayflat} shows examples of the density images and the processed images (calibrated and limb-darkening compensated images) as well as the segmentation masks for the observation shown in Figure \ref{fig:rawimgexample_19360102}.
Images with severe artefacts (\textit{e.g.} missing parts of disc) were excluded from analysis as was done by \cite{chatzistergos_analysis_2018,chatzistergos_analysis_2019}.
\section{Characteristics of Datasets}
\label{sec:characteristicsdatasets}
\subsection{Image Comparison}
\label{sec:imagecomparison}
Here we compare the images from DS1 and DS2 to study the differences due to the various digitisation set-ups. As an example, Figure \ref{fig:processedimagessamedayflat} shows the same solar observations in the two datasets.
The full-disc images appear rather similar; however, a more detailed inspection reveals significant differences.
Figure \ref{fig:rawimgexample_zoomin_193601020} displays enlarged areas of the observation shown in Figure \ref{fig:rawimgexample_19360102} including a QS and a plage region with a sunspot.
The different magnifications correspond to regions with a width of 200 and 50 pixels, respectively, in the DS1 image.
Due to the lower spatial resolution of the data of DS1 compared to those of DS2, the pixel size in DS1 is slightly larger than in DS2 when projecting the solar disc in the same physical dimensions.
There are evident compression effects in the DS1 images, manifested as smoothed $\approx8\times8$ pixel$^2$ regions.
The images from DS2 reveal much finer solar structures.
Furthermore, the effect of compression seems to affect the QS regions in DS1 more than the plage area.
Figure \ref{fig:rawimgexample_zoomin_193601020} shows also the enlarged areas of the calibrated and limb-darkening-compensated images.
Contours outline the regions that were identified as plage in each image.
The regions identified as plage appear larger and coarser in the DS1 data compared to the DS2 ones.
We now compare the images from DS2 to those from the Rome/PSPT.
Figure \ref{fig:comparisonpspt} shows an image from DS2 and Rome/PSPT before and after selected processing steps.
The DS2 and Rome/PSPT datasets overlap during the period \,1996--\,2007.
Over that period, we generally notice saturated regions in the DS2 data, while the QS CLV in DS2 data is stronger than in the Rome/PSPT images. One example of an image with saturated regions in the DS2 data is shown in Figure \ref{fig:comparisonpsptsatregions} along with the image from the DS1 and Rome/PSPT of the same day. The images from both DS1 and DS2 display saturated regions, while the DS2 image is blurred compared to the DS1 one, suggesting that it might have been taken out of focus.
The contrast in the calibrated and CLV-compensated DS2 image is lower than that in the Rome/PSPT one, which is contrary to the expectation due to the narrower bandwidth of the Kodaikanal observations. This suggests that Kodaikanal observations over that period suffer from severe stray-light effects.
The identified plage regions from both images appear very similar, yet with some differences.
More small regions can be identified in the DS2 image compared to the Rome/PSPT one, while large plage regions appear more extended in the Rome/PSPT image than in the DS2 one.
We note, however, that the regions appear different to some degree due to the time difference between the observations of DS2 and Rome/PSPT, which were taken with a time difference of about six hours on average, with the Kodaikanal one preceding the Rome/PSPT observation.
Figure \ref{fig:200027-5-0-0kodaplagezoomin} shows a magnified section of a plage region from the processed images displayed in Figure \ref{fig:comparisonpspt}.
This sub-array was extracted close to the limb and the plage region is smaller but more squeezed in DS2 than in the Rome/PSPT observation.
We also marked with contours the regions identified as sunspots in the Rome/PSPT image, by picking the regions that lie below $\bar{C}-3\sigma$, where $\bar{C}$ is the mean contrast over the disc and $\sigma$ is the standard deviation of contrast values within the disc.
The same contours are over-plotted in the DS2 image, scaled using the radius ratio of the two images.
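The dark-feature thresholding used for these contours amounts to the following sketch (Python/NumPy; the disc mask and array names are assumptions):
\begin{verbatim}
import numpy as np

def sunspot_mask(contrast, disc_mask, k=3.0):
    # Pixels darker than the disc-mean contrast by more than
    # k standard deviations, evaluated over the solar disc only.
    c = contrast[disc_mask]
    return disc_mask & (contrast < c.mean() - k * c.std())
\end{verbatim}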
The sunspot regions cover roughly the same area, despite the difference in nominal bandwidth used by the two observatories (0.5\,\AA~and 2.5\,\AA~for Kodaikanal and Rome/PSPT data, respectively).
This suggests that the Kodaikanal observations, at least the ones overlapping with Rome/PSPT, might have been taken with a broader bandwidth than the nominal one \citep[see also][]{chatzistergos_analysis_2019}.
However, these could also be, at least partly, differences due to the use of a filter and a spectroheliograph in the Rome/PSPT and DS2 data, respectively. The exact effect of the bandwidth on the sunspot areas would require further investigation.
Furthermore, the contrast of the sunspot regions in the DS2 is reduced compared to that in the Rome/PSPT images. This might be due to stray-light or underexposure over the very dark regions of the disc.
\subsection{Characteristics of Time Series}
\label{sec:characteristicsseries}
\begin{table*}
\caption{Characteristics of the analysed archives. See Section \ref{sec:characteristicsseries} for more details.}
\centering
\begin{tabular}{lccc}
\hline
& DS1 & DS2 & Rome/PSPT \\
\hline
Period covered & \,1907--\,1999 &\,1904--\,2007 & \,1996--\,2018\\
Number of images &22158 &48928 &3292\\
Number of images used &19291 &45519 &3292\\
Number of days &21746 &26492 &3287\\
Number of days used &18963 &25781 &3287\\
Spectral bandwidth [\,\AA] &0.5 &0.5 &2.5\\
Pixel scale [$''$pixel$^{-1}$] &1.3 &0.9 &2\\
Disc eccentricity & $0.10\pm0.05$ & $0.11\pm0.06$ & $0.05\pm0.02$\\
Spatial resolution [$''$] & $3.5\pm1.6$ & $3.3\pm1.2$ & $5.3\pm0.7$\\
Inhomogeneities & $0.07\pm0.02$ & $0.07\pm0.03$& $0.03\pm0.02$\\
$\sigma_D$ raw images & $0.09\pm0.04$ & $0.23\pm0.06$& -\\
$\sigma_C$ processed images & $0.06\pm0.02$ & $0.05\pm0.02$& $0.06\pm0.02$\\
$\sigma_C^{\mathrm{QS}}$ processed images& $0.020\pm0.008$ & $0.019\pm0.010$ & $0.021\pm0.003$\\
Reduced $\chi^2$ &$0.002\pm0.002$ &$0.002\pm0.002$ & -\\
\hline
\end{tabular}
\label{tab:characteristiscs}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_08}
\caption{Solar disc eccentricity (\textit{top panel}) and spatial resolution (\textit{bottom panel}) computed for DS1 (red), DS2 (blue), and Rome/PSPT (black) data as a function of time. Shown are annual mean values (solid lines) along with the asymmetric $1\sigma$ interval (shaded surfaces). The horizontal dotted lines in the \textit{lower panel} indicate the values for the average pixel scale of the data in the archives.}
\label{fig:eccentricities}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_09}
\caption{The maximum and the minimum values of transmittance (\textit{top panel}) and the standard deviation of density values, $\sigma_D$, (\textit{bottom panel}) within the solar disc for the DS1 (red) and DS2 (blue) data as a function of time. Shown are annual mean values (solid lines) along with the asymmetric $1\sigma$ interval (shaded surfaces). These quantities are not defined for the Rome/PSPT data, which are given directly in units of intensity.}
\label{fig:density_raw}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_10}
\caption{Identified large scale inhomogeneities (\textit{top panel}) and reduced $\chi^2$ of the fit to the curve obtained by relating the measured density QS CLV from the Kodaikanal data to a reference QS CLV from Rome/PSPT data (\textit{bottom panel}) for DS1 (red), DS2 (blue), and Rome/PSPT (black, only in the top panel) data as a function of time. Shown are annual mean values (solid lines) along with the asymmetric $1\sigma$ interval (shaded surfaces). The $\chi^2$ is not applicable to the Rome/PSPT data.}
\label{fig:inhomogeneities}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_11}
\caption{The standard deviation of the contrast values, [$\sigma_C$], over the whole solar disc (\textit{top panel}) and in the quiet Sun only, [$\sigma_C^{\mathrm{QS}}$], (\textit{bottom panel}) for the calibrated and limb-darkening compensated DS1 (red), DS2 (blue), and Rome/PSPT (black) data as a function of time. Shown are annual mean values (solid lines) along with the asymmetric $1\sigma$ interval (shaded surfaces).}
\label{fig:density_flat}
\end{figure}
To assess the quality of the two Kodaikanal series and to compare it to that of modern data, we consider different image attributes that allow us to describe both global and local properties of the digital data.
In particular, following \cite{ermolli_comparison_2009} we study the eccentricity of the solar disc, the spatial resolution, the large scale inhomogeneities, similarity of QS CLV to that from the Rome/PSPT, as well as density/intensity characteristics.
Table \ref{tab:characteristiscs} summarizes the results obtained, which are discussed in more detail in the following.
We computed the eccentricity of the recorded solar disc as $e=\sqrt{1-(R_{\mathrm{min}}/R_{\mathrm{max}})}$, where $R_{\mathrm{min}}$ and $R_{\mathrm{max}}$ are the smallest and largest measured radii.
However, the disc in the historical data does not always have an elliptical shape, rather an irregularly distorted shape due to problems with the drive of the spectroheliograph (affecting data from both DS1 and DS2) or with the digitising device (mainly affecting data from DS1).
It is important to have an estimate of the eccentricity of the solar disc, since \cite{chatzistergos_analysis_2019} demonstrated that the uncertainty in the derived plage areas increases with the disc eccentricity.
The eccentricity of the solar disc from the DS1 and DS2 images is shown in Figure \ref{fig:eccentricities}\textbf{a)}.
Annual averages are plotted along with the asymmetric 1$\sigma$ interval.
We get an average value of $\bar{e}=0.10\pm0.05$ for the DS1 and $\bar{e}=0.11\pm0.06$ for the DS2 series.
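For reference, under the above definition an eccentricity of $e=0.10$ corresponds to $R_{\mathrm{min}}/R_{\mathrm{max}} = 1 - e^{2} = 0.99$, i.e. a difference of only about 1\,\% between the smallest and largest measured radii.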
For both DS1 and DS2, $e$ and $\sigma_e$ increase with time.
The value of $e$ is relatively low prior to 1940 (DS1) and 1930 (DS2), but it is higher and more variable afterwards.
\cite{ermolli_comparison_2009} had also studied the eccentricity of the disc in the DS1 and reported qualitatively the same results.
In particular, they also found an increase in $e$ and $\sigma_e$ with time, although the jumps in the results in the early period were less pronounced in their case.
They found a mean eccentricity of $\bar{e}=0.12\pm0.06$, which is slightly higher than derived here.
The eccentricity we derive for DS1 and DS2 is larger than that for the Rome/PSPT data, which is on average $\bar{e}=0.05\pm0.02$.
This value is consistent with the value of $\bar{e}=0.04\pm0.03$ reported by \cite{ermolli_comparison_2009}. The maximum (RMS) error in the plage areas for the average disc eccentricity found for Kodaikanal data is 0.013 (0.0005), while for Rome/PSPT it is 0.002 (0.0003) as reported by \cite{chatzistergos_analysis_2019}.
The eccentricities for the two Kodaikanal series are similar, although the one for DS2 is higher most of the time, as can be seen from Figure \ref{fig:eccentricities}\textbf{a)}. That panel shows all data, but the same behaviour is seen when restricting the comparison to observations taken on the same date and time. In principle this can be due to a small inclination of the plate when it was digitised, or to issues with the code used to identify the limb, which might have been affected by artefacts.
To study the spatial resolution of the images, we evaluate the frequency at which 98\,\% of the power spectral density is taken into account.
The computation was performed on $64\times64$ disc sub-arrays of quiet-Sun regions.
We randomly positioned 100 such segments within the inner $R$/3 of the disc, and the average value from all the segments was adopted.
This method is similar to the approach used by \cite{ermolli_comparison_2009}.
However, we can potentially obtain a better estimate of the average spatial resolution of the data than \cite{ermolli_comparison_2009}, because the solar observation was not recorded instantaneously, but rather in strips with variable spatial resolution, over which our random sampling averages.
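The essence of this enclosed-power estimate is sketched below (Python/NumPy; the mean subtraction, the radial ordering of frequencies, and the conversion of the 98\,\% frequency into an angular scale are simplifying assumptions that differ in detail from the actual implementation):
\begin{verbatim}
import numpy as np

def resolution_estimate(subarray, pixel_scale, power_fraction=0.98):
    # subarray: 64x64 quiet-Sun patch; pixel_scale in arcsec per pixel
    sub = subarray - subarray.mean()
    psd = np.abs(np.fft.fftshift(np.fft.fft2(sub)))**2
    n = sub.shape[0]
    y, x = np.indices(psd.shape)
    k = np.hypot(x - n // 2, y - n // 2)        # radial frequency index
    order = np.argsort(k.ravel())
    cum = np.cumsum(psd.ravel()[order])
    k98 = k.ravel()[order][np.searchsorted(cum, power_fraction * cum[-1])]
    freq = k98 / (n * pixel_scale)              # cycles per arcsec
    return 1.0 / freq                           # angular scale in arcsec
\end{verbatim}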
The derived spatial resolution is shown in Figure \ref{fig:eccentricities}\textbf{b)}.
We find the resolution for DS1 and DS2 in general to be roughly the same and slowly getting worse with time.
Around 1980, the spatial resolution of DS1 degrades sharply, reaching an average value of $15''$.
For the data from DS2, the resolution remains between 3 and $6''$.
The average values over the whole period are $3.5\pm1.6''$ and $3.3\pm1.2''$ for DS1 and DS2, respectively.
The spatial resolution of DS2 after 1990 is at similar levels as that of the Rome/PSPT data, but it is considerably worse for DS1 data. This is most likely due to an issue with the digitisation of DS1 data.
The spatial resolution of the Rome/PSPT data is $5.3\pm0.7''$ and roughly constant over the whole period.
These values are consistent with the $3.5\pm1.6''$ and $5.3\pm0.7''$ derived here, being close to the $3.3\pm0.1''$ and $5.0\pm0.4''$ for DS1 and Rome/PSPT, respectively, reported by \cite{ermolli_comparison_2009}.
To compare the dynamic range of the data in the two digitisations, Figure \ref{fig:density_raw}\textbf{a)} shows the maximum and minimum values of the raw negative images, in units of transmittance, over the solar disc in the DS1 and DS2 data.
The values from each dataset were normalised to the maximum value from the respective digitisation.
On average, DS2 is much more stable, with the exception of two periods around 1959 and 1984 during which the maximum values decrease.
DS1 shows a larger variation with time, with abrupt jumps in the transmittance values and gaps. There are periods where the maximum transmittance value is found within the solar disc of DS1 data, hinting at saturation of the low-density regions.
This suggests that DS2 has been digitised in a more consistent manner than DS1.
Figure \ref{fig:density_raw}\textbf{b)} shows the standard deviation of the density values, [$\sigma_D$], over the solar disc from both digitisations.
The standard deviation in DS2 is consistently higher than that in DS1.
In both datasets a slight increase of the standard deviation with time is observed, which could be due to an increase in the CLV, e.g. because the employed bandwidth became broader than before or because the observations were not centred at the core of the line.
Next, we assess whether and how strongly the images are affected by the large-scale inhomogeneities and artefacts. For this, we compute the relative difference of the image background calculated by the image processing to the QS CLV.
The image background is a 2D surface map that includes the QS CLV as well as all identified large-scale inhomogeneities and artefacts. This is determined with the iterative process described by \cite{chatzistergos_analysis_2018}.
To make the results from the different datasets comparable, for this test only we rescaled all images (using cubic interpolation) to the same dimensions, such that the radius is always 350 pixels.
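Schematically, the inhomogeneity level can be thought of as the following quantity (Python/NumPy sketch; the exact statistic and masking are assumptions):
\begin{verbatim}
import numpy as np

def inhomogeneity_level(background, qs_clv, disc_mask):
    # background: fitted 2D surface (QS CLV plus large-scale artefacts)
    # qs_clv:     pure QS centre-to-limb variation surface
    rel = np.abs(background[disc_mask] - qs_clv[disc_mask]) / qs_clv[disc_mask]
    return np.mean(rel)
\end{verbatim}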
Figure \ref{fig:inhomogeneities}\textbf{a)} shows the values of the relative difference of the background to the QS CLV we get for DS1, DS2, and Rome/PSPT data.
The level of inhomogeneities increases with time for both the DS1 and DS2 data.
The level of inhomogeneities in the Rome/PSPT decreases with time and at $0.03\pm0.02$ is lower than the values from both Kodaikanal series.
From the data produced during the image processing, we also analysed the $\chi^2$ of a linear fit to the curve obtained by relating the measured density QS CLV to the logarithm of the reference intensity QS CLV from Rome/PSPT data.
The reduced $\chi^2$ from this fit can be considered an indication of instrumental changes, \textit{e.g.} in the bandwidth or central wavelength \cite[see][for a discussion on this]{chatzistergos_analysis_2019}.
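A minimal sketch of this diagnostic is the following (Python/NumPy; the absence of measurement-error weighting in the $\chi^2$ is an assumption):
\begin{verbatim}
import numpy as np

def reduced_chi2(density_clv, log_ref_intensity_clv):
    # Straight-line fit D ~ a + b*log10(I_ref) between the measured density
    # CLV and the logarithm of the reference intensity CLV from Rome/PSPT.
    b, a = np.polyfit(log_ref_intensity_clv, density_clv, 1)
    resid = density_clv - (a + b * log_ref_intensity_clv)
    return np.sum(resid**2) / (len(density_clv) - 2)
\end{verbatim}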
Figure \ref{fig:inhomogeneities}\textbf{b)} shows the derived reduced $\chi^2$ from the fit for images from DS1 and DS2.
There are no significant differences between the resulting values and their variation with time in the two series.
The $\chi^2$ of the fit increases with time for both DS1 and DS2, hinting at a gradual degradation of the quality of the original images rather than an effect of the digitisation.
We have also analysed the contrast values in the calibrated and limb-darkening compensated images.
Figure \ref{fig:density_flat}\textbf{a)} shows the standard deviation of contrast values over the entire solar disc: $\sigma_C$, for DS1, DS2, and Rome/PSPT.
Unsurprisingly, $\sigma_C$ over the disc qualitatively follows the solar-cycle variability. However, it is less pronounced before 1920, while the amplitude of $\sigma_C$ over Solar Cycles 18, 19, 21, and 22 is lower than that of the other cycles, suggesting some variations in the instrument parameters or data quality.
The values from the Rome/PSPT data are at similar levels to those derived for both DS1 and DS2 for most Solar Cycles.
Figure \ref{fig:density_flat}\textbf{b)} shows the same but only for the QS regions: $\sigma_C^{\mathrm{QS}}$.
The standard deviation of the QS decreases with time in both DS1 and DS2 series.
This can be due to increased underexposure with time or broadening of the bandwidth, if the observation is centred more towards the wing of the line than its core, or due to observational and instrumental effects such as worsening seeing at the observing site, degradation of the spectroheliograph, and changes in the quality of the plates.
The value for the standard deviation of the QS for the Rome/PSPT is always higher than for both Kodaikanal series. This again lends support to the argument that the effective bandwidth of the Kodaikanal over the overlapping period might be broader than the one of the Rome/PSPT or that the Kodaikanal observations were off-centre over that period.
However, we note that this is not a conclusive test since any of the instrumental or observational issues mentioned above can also contribute to the change of the standard deviation of the QS with time.
Overall, we find both DS1 and DS2 series to exhibit almost the same characteristics and temporal behaviour, suggesting that the studied characteristics are intrinsic to the original Kodaikanal data and are not artefacts of the digitisation.
These include the worsening of the spatial resolution, the increase of the disc eccentricity, the enhancement of the large-scale inhomogeneities, and the change of the CLV with time.
Hence, these should be ascribed to the original photographic observations and not to image issues introduced by the digitisation.
DS2 shows a more consistent distribution of transparency values, suggesting that the digitisation was performed more consistently than for DS1. The data after 1990 show an improved spatial resolution in DS2 data compared to those in DS1.
\section{Plage Areas}
\label{sec:plageareas}
\subsection{Results from DS1, DS2, and Rome/PSPT}
We have derived plage areas from DS1 and DS2 processed in the same way.
The results are shown in Figure \ref{fig:1discfractionplage} along with plage areas from the Rome/PSPT data and the sunspot areas by \cite{balmaceda_homogeneous_2009}\footnote{The series has been extended to 31 May 2017 and is available at \url{www2.mps.mpg.de/projects/sun-climate/data.html}}.
We give a tentative RMS error in the derived plage areas from the Kodaikanal and Rome/PSPT data of 0.0025 and 0.0003, respectively, in fraction of the disc area. The value for Kodaikanal data is the sum of the RMS errors due to the disc ellipticity and errors of the processing of the images to perform the photometric calibration as evaluated by \cite{chatzistergos_analysis_2018,chatzistergos_analysis_2019}.
The value for Rome/PSPT data is acquired by considering only the effect of the disc ellipticity.
We note, however, that this is not a strictly defined, formal error for the derived plage areas, but our best estimate based on our analysis of synthetic data with our method.
The derived plage areas for DS2 data are distinctly higher than those from DS1 for Solar Cycles 15, 18, and 20 and slightly larger in Cycle 17, while in Cycles 21 and 22 the opposite is the case.
The two series have RMS difference of 0.006 and Pearson coefficient of 0.95 when daily values are considered (shown in Table \ref{tab:agreement}).
However, when considering only the observations that have been taken at the same day and same time the RMS difference becomes 0.005 and the Pearson coefficient 0.98.
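The agreement statistics quoted here and in Table \ref{tab:agreement} correspond to the following simple computation on day-matched series (sketch; the matching of the two series by date is assumed to have been done beforehand):
\begin{verbatim}
import numpy as np

def compare_series(a, b):
    # a, b: plage areas (in disc fractions) on common days, in the same order
    rms = np.sqrt(np.mean((a - b)**2))
    pearson = np.corrcoef(a, b)[0, 1]
    return rms, pearson
\end{verbatim}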
In Figure \ref{fig:1discfractionplage} we also show the annual plage areas for the common days in DS1 and DS2.
The RMS differences between the annual values for all data and only the common days in the two series are 0.0004 and 0.003 for DS1 and DS2, respectively.
The smaller effect on the results for DS1 comes as no surprise, considering that it is the series with fewer observations.
The agreement between the areas from the two datasets becomes worse for Solar Cycles 20 and 21 when considering only data taken on the same day and time, while it slightly improved for Cycle 22 and remained unchanged for all other cycles.
Hence, the differences in the computed plage areas do not stem from differences in sampling of the original photographs.
We note, however, that potential errors in the dating of the images could affect our results.
Comparing DS1 and DS2 to the sunspot areas, we find that Solar Cycles 21 and 22 in the plage areas differ from expectation, being too high relative to the previous cycles. Solar Cycle 18 in DS2 lies between Cycles 17 and 19, which is in agreement with the ranking of the Solar Cycles in the sunspot areas.
This is not the case for Solar Cycle 18 in DS1.
We notice that the results for Rome/PSPT are in relatively good agreement with those from both the DS1 and DS2 series within the period of overlap.
The areas from DS2 over the maximum of Solar Cycle 23 are slightly lower, by 0.003, than those from the Rome/PSPT, while we also notice an increase in the plage areas of DS2 in 2004, something not seen in the areas from Rome/PSPT or the sunspot areas.
We found merely 26 days with data from all three datasets, DS1, DS2, and the Rome/PSPT in the period 19 May 1997 to 29 May 1999.
Figure \ref{fig:dfcommondayspspt} shows the areas in disc fractions derived from all three datasets, by considering only the days available in all three archives.
For that period, the areas from both DS1 and DS2 lie mostly below those from Rome/PSPT, except for two and four days for which DS1 and DS2, respectively, give greater plage areas.
The RMS difference between the areas derived from DS1 and DS2 to Rome/PSPT is 0.011 and 0.008, respectively, while the maximum absolute difference is 0.033 and 0.019, respectively.
Restricting the comparison between DS2 and Rome/PSPT gives 757 days of overlap over the period 23 April 1997 to 10 September 2007.
The difference between the plage areas derived from DS2 to that from Rome/PSPT is shown in the lower panel of Figure \ref{fig:dfcommondayspspt}.
We get an average RMS difference of 0.01, while the maximum absolute difference reaches up to 0.08.
We notice an annual variation in the differences between the series, with Rome/PSPT giving higher plage areas during Winter periods than DS2, while the opposite occurs during the Summer.
Figure \ref{fig:scatterplotspspt} shows scatter plots between the derived plage areas from DS2 and those from DS1 and Rome/PSPT.
We find a good agreement between all series, with linear correlation of 0.97 and 0.94 for DS1 and Rome/PSPT, respectively.
\subsection{Comparison to Other Results}
Figure \ref{fig:scatterplots} shows scatter plots between the plage areas derived by us from the DS2 data and the various published series obtained from Kodaikanal observations.
In particular we consider the series by \cite{kuriyan_long-term_1983} derived from the physical photographs, the series by \cite{ermolli_comparison_2009} and \cite{tlatov_new_2009} from the DS1 data, and the series by \cite{chatterjee_butterfly_2016}, \cite{priyal_long-term_2017}, and \cite{singh_variations_2018} from the DS2 data.
Note that the series by \cite{kuriyan_long-term_1983} and \cite{tlatov_new_2009} are only available as annual values and most likely include different selections of observations.
For all the other series, we consider only the days common with the data from DS2 we use here.
Besides comparing the daily values, we also compute and compare annual median values.
However, we note that potential errors in the dates and times of the different original archives, as well as in the copies of them used by the respective authors, may affect our results.
The scatter between the various series and ours is rather significant.
Table \ref{tab:agreement} lists the RMS difference and the Pearson coefficient between the various series when daily values are used.
The best agreement is found between our results and those by \cite{chatterjee_butterfly_2016} and \cite{singh_variations_2018}, although when annual values are used the agreement is better with the series by \cite{ermolli_comparison_2009} and \cite{tlatov_new_2009}. However, there are hints of a non-linearity between our plage areas and those by \cite{tlatov_new_2009}.
In \cite{chatzistergos_analysis_2019} we did a similar comparison, but considering the above series and our results derived from analysis of DS1.
Comparing the scatter plots in Figure \ref{fig:scatterplots} and those of \cite{chatzistergos_analysis_2019} we find an improvement in the match between our series and that from \cite{chatterjee_butterfly_2016} with linear correlation factor increasing to 0.94 compared to 0.9 for the annual values.
Our results for the plage areas from DS2 data show worse agreement with the series by \cite{ermolli_comparison_2009} and \cite{tlatov_new_2009} from analysis of DS1 data than our results from DS1 data.
However, we also notice a worse agreement between our results from DS2 data and those by \cite{priyal_long-term_2017} and \cite{singh_variations_2018} from analysis of DS2 data than for our results from DS1 data.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_12}
\caption{Fractional disc coverage by plage (Panels \textbf{a} and \textbf{b}) as a function of time, derived with the same processing and segmentation parameters from DS1 (red), DS2 (blue), and Rome/PSPT (black) images. Panel \textbf{a} shows daily values, while panel \textbf{b} displays annual mean values (solid lines). The dashed lines in panel \textbf{b} show the annual values for DS1 (orange) and DS2 (light blue) when only the common days in DS1 and DS2 are considered. Also shown (Panel \textbf{c}) are daily (dots) and annual (solid line) values of sunspot areas from \cite{balmaceda_homogeneous_2009}. }
\label{fig:1discfractionplage}
\end{figure}
\begin{table*}
\caption{Quantification of the agreement between different plage area series. The values above the diagonal are the RMS differences, while those below the diagonal are the Pearson coefficients, both computed for the common days. The number of common days is given within the brackets. The abbreviations CEA16, EEA09, PEA17, and SEA18 refer to the \cite{chatterjee_butterfly_2016}, \cite{ermolli_comparison_2009}, \cite{priyal_long-term_2017}, and \cite{singh_variations_2018} series, respectively.}
\centering
\begin{tabular}{lcccccc}
\hline
&DS1&DS2&CEA16&EEA09&PEA17&SEA18\\
DS1&-&0.006 [18387]&0.013 [12657]&0.012 [18454]&0.012 [10992]&0.009 [14370]\\
DS2&0.955 [18387]&-&0.011 [15479]&0.015 [20385]&0.015 [13112]&0.010 [16357]\\
CEA16&0.886 [12657]&0.896 [15479]&-&0.020 [13492]&0.019 [12499]&0.014 [12940]\\
EEA09&0.879 [18454]&0.838 [20385]&0.845 [13492]&-&0.013 [11593]&0.013 [14511]\\
PEA17&0.827 [10992]&0.784 [13112]&0.777 [12499]&0.780 [11593]&-&0.011 [12063]\\
SEA18&0.885 [14370]&0.867 [16357]&0.857 [12940]&0.827 [14511]&0.854 [12063]&-\\
\hline
\end{tabular}
\label{tab:agreement}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_13}
\caption{\textit{Top: }Plage areas in disc fraction for 26 images taken on the same days found in DS1 (dotted red), DS2 (solid blue), and the Rome/PSPT (dashed black) series plotted against the number of the image. The error bars denote the RMS error in the derived plage areas due to the disc ellipticity and the processing to photometrically calibrate the images (the latter is applicable only to the Kodaikanal data) as found by \cite{chatzistergos_analysis_2019}. \textit{Bottom: } Difference of plage areas derived from the 757 images common to DS2 and Rome/PSPT series plotted against the number of the image. Blue plus signs denote individual values, while the red solid line is for the annual mean value. The dashed vertical lines separate the years, which are written at the top of each panel.}
\label{fig:dfcommondayspspt}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{fig_14}
\caption{Scatter plots between the plage area values derived from images of DS2 ($x$-axis) and those from images ($y$-axis) of DS1 (Panel \textbf{a}) and Rome/PSPT (Panel \textbf{b}). Blue asterisks (orange dots) show the annual (daily) values. The solid black lines have a slope of unity and represent the expectation value. The dashed (dotted) red lines are linear fits to the annual (daily) data. Also shown are the corresponding parameters of the linear fits to the annual values and the linear correlation coefficients of the annual values.}
\label{fig:scatterplotspspt}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{fig_15}
\caption{Plage areas presented in the literature ($y$-axis) versus the ones derived here from DS2 ($x$-axis): (\textbf{a}) \citet[][]{chatterjee_butterfly_2016} from DS2; (\textbf{b}) \citet[][]{ermolli_comparison_2009} from DS1;
(\textbf{c}) \citet{kuriyan_long-term_1983} from the actual photographs; (\textbf{d}) \citet[][]{priyal_long-term_2017} from DS2;
(\textbf{e}) \citet[][]{tlatov_new_2009} from DS1; (\textbf{f}) \citet[][]{singh_variations_2018} from DS2. Blue asterisks (orange dots) show the annual (daily) values. The solid-black lines have a slope of unity. The dashed (dotted) red lines are linear fits to the annual (daily) data. Also shown are the corresponding parameters of the linear fits to the annual values and the linear correlation coefficients of the annual values.}
\label{fig:scatterplots}
\end{figure*}
We now discuss various factors that are responsible for part of the differences.
Such factors are the definition of the centre coordinates and radius, the fraction of the disc used for the normalisation of the plage areas, as well as the processing techniques including the photometric calibration.
For example, different definitions of radius and centre coordinates affect the results of all series.
\cite{priyal_long_2014} defined the radius and centre coordinates for the DS2 by manually selecting three points at the limb.
This information was not stored, but rather was used to crop and centre the images.
These cropped images were then used by \cite{chatterjee_butterfly_2016}, \cite{priyal_long-term_2017}, and \cite{singh_variations_2018} to derive their plage areas.
\cite{ermolli_comparison_2009} and \cite{tlatov_new_2009} defined the radius and centre coordinates for DS1 independently.
\cite{chatzistergos_analysis_2019} used the radius estimates for DS1 made by \cite{ermolli_comparison_2009}, but we corrected a few errors.
Here for DS2 we determined the radius from scratch by using the method described by \cite{chatzistergos_analysis_2019}.
Studies in the literature also differ in the fraction of the disc area that was used to normalise the identified plage areas.
For our study we used the disc area reaching out to 0.98$R$ (i.e., 96\,\% of the total area),
while \cite{ermolli_comparison_2009}, \cite{chatterjee_butterfly_2016}, and \cite{priyal_long-term_2017} used the disc up to 0.97$R$ (94\,\% of the total area), and \cite{priyal_long_2014} used the disc up to 0.985$R$ (97\,\% of the total area).
Figure \ref{fig:comparisonchatterjee} shows an example observation from Kodaikanal where the regions considered in the various studies have been marked. Notice that the various studies defined the centre of the solar disc differently.
To get an error estimate for using a slightly different normalising area, we repeated the segmentation of DS1 data by considering the area of the disc up to 0.97$R$.
We found relative differences of $0.01\pm0.04$ in the derived plage areas between considering the disc up to 0.97$R$ and up to 0.98$R$.
Processing artefacts contribute more to the systematic differences in the various results presented in the literature.
For example, Figure \ref{fig:comparisonchatterjee2} shows the observation from Figure \ref{fig:comparisonchatterjee} calibrated with our method and with those used by \cite{ermolli_comparison_2009,priyal_long_2014,chatterjee_butterfly_2016}.
The QS regions in the image processed with our method are more uniform compared to the others.
The images processed by \cite{ermolli_comparison_2009} and \cite{priyal_long_2014} show remaining large-scale inhomogeneities that can affect the plage area determination with these methods.
In the image processed by \cite{chatterjee_butterfly_2016} the large-scale inhomogeneities have been accounted for, but the contrast of the plage regions has been suppressed, causing the immediately surrounding areas of large plage regions to become much darker. Thus a smaller part of the large plage areas will be considered as plage, while some network elements might be counted as plage.
\begin{figure}
\centering
\begin{overpic}[width=1\textwidth]{fig_19380120T0737_raw_3} \end{overpic}
\caption{Raw Kodaikanal observation taken on 20 January 1938. Circles enclose the areas considered by \citet[][dashed green]{chatterjee_butterfly_2016}, \citet[][dashed green]{priyal_long-term_2017}, \citet[][dotted yellow]{priyal_long_2014}, and in this work (solid blue).}
\label{fig:comparisonchatterjee}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{fig_17}
\caption{Calibrated images of Kodaikanal observation taken on 20 January 1938 (shown in Figure \ref{fig:comparisonchatterjee}): (\textbf{a}) with our method, and with the methods by (\textbf{b}) \citet[][]{chatterjee_butterfly_2016}, (\textbf{c}) \citet[][]{priyal_long_2014}, and (\textbf{d}) \citet[][]{ermolli_comparison_2009}. The images processed with our method and by \citet{ermolli_comparison_2009} are given as contrast values and are saturated at the same level [-0.02,0.02]. The image processed by \cite{chatterjee_butterfly_2016} is in arbitrary units, while the one by \cite{priyal_long_2014} was provided as a JPG image. Therefore, these images are saturated such that the plage regions visually appear similar to the saturated image with our method. }
\label{fig:comparisonchatterjee2} \end{figure}
\section{Conclusions}
\label{sec:conclusions}
The photographic archive of full-disc Ca {\sc II} K observations of the Kodaikanal observatory is a valuable source of information on past solar activity.
We have compared the two more recent digitisations of this archive to understand if there are any differences in the results and if these differences are responsible for the partly conflicting results presented in the literature.
For this, we have processed images from the two digitisations consistently, applying the same technique.
Thus, we have applied the methods developed and tested by \cite{chatzistergos_analysis_2018} to the 16-bit data series as well.
We applied the same processing on modern CCD-based data from Rome/PSPT.
The plage areas derived from DS1 and DS2 and their variation with time are rather similar to each other. Many of the issues previously reported about the varying quality of the Kodaikanal data are found to apply to the data from the new digitisation too, implying that they are intrinsic characteristics of the physical archive.
These are an increase with time in the disc eccentricity coupled with a worsening spatial resolution, growing large-scale inhomogeneities, and change of the QS CLV with time.
However, we found the quality of the DS1 data after 1990 to deteriorate more than that of the DS2 data. This can introduce significant errors when cross-calibrating plage area series from different archives or for irradiance reconstructions.
Furthermore, both digitisations of Kodaikanal data seem to suffer from errors in the meta-data, especially concerning the observational date and time of the plates.
This issue plagues all analyses from these data.
We find a good match between the plage areas derived from the Kodaikanal archive and those from the Rome/PSPT, with the Kodaikanal areas being slightly lower.
This is in agreement with a drift in the quality of the Kodaikanal data and with the fact that Rome/PSPT has a nominal bandwidth five times broader than that of the Kodaikanal observatory.
The plage areas presented in the literature from the various digitisations of Kodaikanal data show significant differences. We suggest that the diverse employed methods to calibrate the data as well as the different definitions of the recorded solar radius are the main reasons for the discrepancies among the various published results.
Overall, we found the new digitisation of the Kodaikanal archive to offer an improvement in the image quality over the 8-bit series. It also more than doubles the number of available images.
With over 48,000 images, the new Kodaikanal series is possibly the richest currently available Ca {\sc II} K archive, and it has great potential to improve our understanding of solar activity, especially once the various issues affecting the series have been properly addressed.
\begin{acks}
The authors thank the Kodaikanal and the Rome Solar Groups.
The newly digitized Kodaikanal data as presented here (DS2) are available at \url{kso.iiap.res.in/}.
D. Banerjee. and I. Ermolli thank the International Space Science Institute (Bern, Switzerland) for supporting the International Team 420 "Reconstructing solar and heliospheric magnetic field evolution over the past century".
T. Chatzistergos acknowledges a postgraduate fellowship of the International Max Planck Research School on Physical Processes in the Solar System and Beyond.
This work was supported by grants COST Action ES1005 "TOSCA", FP7 SOLID, 01LG1909C from the German Federal Ministry of Education and Research, and by the BK21 plus program through the National Research Foundation (NRF) funded by the Ministry of Education of Korea.
This research has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 824135 (SOLARNET).
We thank the anonymous referee for the valuable comments which helped to improve this manuscript.
This research has made use of NASA's Astrophysics Data System.
\end{acks}
\textbf{Disclosure of Potential Conflicts of Interest}\\
The authors declare that they have no conflicts of interest.
\bibliographystyle{spr-mp-sola}
iPTF\,14gqr was discovered by the intermediate Palomar Transient Factory (iPTF; \cite{Cao2016, Masci2017}) on 2014 October 14.18 UT (Coordinated Universal Time) at a $g$-band optical magnitude of $\approx 20.2$ mag. The source was not detected in the previous observation on 2014 October 13.32 (0.86 days before discovery), with a limiting magnitude of $g\geq$ 21.5\,mag. The transient was found in the outskirts (at a projected offset of $\approx 29$ kpc from the center) of a tidally interacting spiral galaxy (IV Zw 155) at a redshift $z = 0.063$ and luminosity distance $D = 284.5$ Megaparsecs (Figure 1). We obtained rapid ultraviolet (UV), optical and near-infrared (NIR) follow-up observations of the source, including a sequence of four spectra within 24 hours from the first detection \cite{SuppMat}. \\
We also obtained multi-epoch X-ray and radio observations and found that the source remained undetected at these wavelengths \cite{SuppMat}. These upper limits rule out luminous non-thermal emission, such as typically seen in relativistic and gamma-ray burst (GRB) associated SNe, but are not stringent enough to constrain the environment of the progenitor (Figure \ref{fig:14gqr_radioLCcompare}, Figure \ref{fig:14gqr_radioLCModel}).\\
Our photometric follow-up indicated that the source rapidly faded within a day of detection, followed by re-brightening to a second peak on a longer timescale (rising over $\approx 7$ days; \cite{SuppMat}) (Figure 2). The early decline was detected in all optical and UV photometric bands, and characterized by a blackbody spectrum which cooled rapidly from a temperature $T > 32000$ K near first detection to $T \sim 10000$ K at one day after discovery (Figure 3; Figure 4). Our early spectra also exhibit blackbody continua with temperatures consistent with those inferred from the photometry, superimposed with intermediate width emission lines of He~\textsc{ii}, C~\textsc{iii} and C~\textsc{iv}. Such high ionization lines, which are typically associated with elevated pre-explosion mass loss episodes in massive stars, have not been seen in early spectra of previously observed hydrogen-poor SNe. Although similar features are present in the early spectra of some hydrogen-rich core-collapse SNe \cite{GalYam2014, Khazov2016, Yaron2017} (Figure \ref{fig:14gqr_compareFlashSpec}), the relatively large widths of the lines [Full Width at Half Maximum (FWHM) $\sim$ 2000 - 4000 km s$^{-1}$] as well as the rapid evolution of the 4686~\AA\, emission feature (Figure 3) are not.\\
Spectra obtained near the second peak are dominated by emission from the expanding photosphere and exhibit relatively blue continua, with broad absorption features reminiscent of normal stripped envelope SNe of Type Ic, that do not exhibit absorption lines of H or He in the spectra \cite{Gal-Yam2017} (Figure \ref{fig:14gqr_comparePeakSpec}). We find associated absorption velocities of $\sim$ 10,000 km s$^{-1}$ \cite{SuppMat}. The photometric properties of the second peak are broadly consistent with a number of previously observed fast Type Ic events (Figure \ref{fig:14gqr_compareLC}, Figure \ref{fig:14gqr_colorEvol}), but the rapidly declining first peak and the fast rise time to the second peak are unlike previously observed events. The source quickly faded after the second peak, declining at a rate of 0.21 mag day$^{-1}$ in the $g$ band \cite{SuppMat}. Our final spectrum taken at $\approx 34$ days after explosion shows that the source exhibited an early transition to the nebular phase, and on a timescale faster than previously observed core-collapse SNe. The nebular phase spectrum exhibits prominent [Ca~\textsc{ii}] emission similar to several other Type Ic SNe (Figure \ref{fig:14gqr_compareNebSpec}).\\
Multi-color photometry at multiple epochs allow us to trace the evolution of the optical / UV Spectral Energy Distribution (SED), which we use to construct bolometric light curves that contain flux integrated over all wavelengths (Figure 3; Figure 4; \cite{SuppMat}). We fit the pseudo-bolometric light curve of iPTF\,14gqr with a simple Arnett model \cite{Arnett1982} to estimate the explosion parameters. Allowing the explosion time to vary as a free parameter, we estimate an ejecta mass $M_{\textrm{ej}} \approx$ 0.15 -- 0.30 solar masses (M$_{\odot}$), an explosion kinetic energy $E_K \approx$ (1.0 -- 1.9) $\times 10^{50}$ ergs and synthesized Ni mass $M_{\textrm{Ni}} \approx 0.05$ M$_{\odot}$ \cite{SuppMat} (Figure 4, Figure \ref{fig:14gqr_arnettCorner}). The inferred ejecta mass is lower than known core-collapse Type Ic SNe \cite{Drout2011, Lyman2016, Taddia2017}, which have ejecta masses in the higher range of $\sim 0.7 - 15$ M$_{\odot}$, and with a mean of $2 - 3$ M$_{\odot}$ over a sample of $\approx$ 20 SNe. However, the parameters of iPTF\,14gqr are similar to those inferred for the rapidly evolving Type I SNe SN 2005ek \cite{Drout2013} and 2010X \cite{Kasliwal2010}, whose physical origins remain a matter of debate.\\
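For orientation, these numbers can be related through the standard Arnett scalings (assuming a constant, illustrative opacity $\kappa \approx 0.07$ cm$^{2}$ g$^{-1}$ and uniform-density ejecta; this is a simplified restatement rather than the exact fit performed here): the light curve timescale is $\tau_m \simeq [2\kappa M_{\textrm{ej}}/(\beta c\, v_{\textrm{ph}})]^{1/2}$ with $\beta \approx 13.8$, and the kinetic energy is $E_K \simeq (3/10)\, M_{\textrm{ej}} v_{\textrm{ph}}^{2}$. For $M_{\textrm{ej}} \approx 0.2$ M$_{\odot}$ and $v_{\textrm{ph}} \approx 10^{4}$ km s$^{-1}$ these give $\tau_m \approx 4$ days and $E_K \approx 1.2 \times 10^{50}$ ergs, of the order of the observed rise time and consistent with the values quoted above.\\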
The rapid decline of the first peak observed in iPTF\,14gqr is reminiscent of shock cooling emission from the outer layers of a progenitor after the core-collapse SN shock breaks out \cite{Nakar2014,Sapir2017} (Figure \ref{fig:14gqr_firstPeak}). We consider alternative explanations \cite{SuppMat} and find them to be inconsistent with the data. In particular, the observed double-peaked light curve in the redder optical bands requires the presence of an extended low mass envelope around the progenitor \cite{Piro2015, Sapir2017}. To constrain the properties of such an envelope, we use models \cite{Piro2015} to construct multi-color light curves for a range of masses and radii of the envelope ($M_e$ and $R_e$ respectively). We find a best-fitting model of $M_e \sim 8 \times 10^{-3}$ M$_{\odot}$ and $R_e \sim 3 \times 10^{13}$ cm ($\sim 450$ solar radii (R$_{\odot}$)) \cite{SuppMat} (Figure 2 \& Figure \ref{fig:14gqr_shockCorner}). Even though the model considered here is simplified (e.g. it ignores the density structure of the envelope), we expect the estimated parameters to be accurate within an order of magnitude \cite{Piro2017}, leading us to conclude that the progenitor was surrounded by an extended envelope with a mass of $\sim 0.01$ M$_{\odot}$ at a radius of $\sim 500$ R$_{\odot}$. \\
We constrain the composition of the outer envelope using the early spectra. The emission lines observed in the early spectra of iPTF\,14gqr can be understood as arising from recombination in the outer regions of the extended circumstellar material (CSM), which was ionized by the high energy radiation produced in the shock breakout (e.g. \cite{GalYam2014, Yaron2017}; Figure \ref{fig:14gqr_compareFlashSpec}). We estimate the location and mass of the emitting He~\textsc{ii} from the luminosity of the early 4686~\AA\, line, and assuming a CSM density profile that varies with radius $r$ as $\propto r^{-2}$ \cite{Yaron2017}. We find the emitting region to be located at $r \sim 6 \times 10^{14} \tau^{-2}$ cm, and contain a helium mass $M_{\textrm{He}} \sim 0.01 \tau^{-3}$ M$_{\odot}$, where $\tau$ is the optical depth of the region \cite{SuppMat}. The absence of prominent Lorentzian scattering profiles in the lines suggest that the optical depth is small and assuming $\tau \approx 1$, we find $r \sim 6 \times 10^{14}$~cm ($8.5 \times 10^{3}$ R$_{\odot}$) and $M_{\textrm{He}} \sim 0.01$ M$_{\odot}$. Because our calculations are based on fitting a simple two-component Gaussian profile to the 4686~\AA\,emission line (to estimate the unknown contamination of C III at 4650~\AA), these estimates are uncertain by a factor of a few. \\
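These scalings follow from the assumed wind-like profile: for $\rho \propto r^{-2}$ the recombination luminosity of the material outside a radius $r$ scales as $\propto \rho^{2}(r)\,r^{3}$, while the optical depth scales as $\tau \propto \rho(r)\,r$; holding the observed line luminosity fixed then gives $r \propto \tau^{-2}$ and an enclosed mass $M \propto \rho(r)\,r^{3} \propto \tau^{-3}$ (a schematic argument that ignores factors of order unity).\\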
Using the C IV 5801 \AA\,lines and similar methods as above, we estimate a CSM carbon mass of $\sim 4 \times 10^{-3}$ M$_{\odot}$, while the hydrogen mass is constrained to be $< 10^{-3}$ M$_{\odot}$. Additional constraints based on light travel time arguments also suggest that the envelope was located at $r \leq 6 \times 10^{15}$ cm from the progenitor \cite{SuppMat}. The flash-ionized emission lines exhibit complex asymmetric profiles (Figure 3) that we attribute to light travel time effects, given the large size of the envelope and the high inferred wind velocities \cite{Grafener2016,SuppMat}.\\
\section*{An ultra-stripped progenitor}
The low ejecta mass and explosion energy, as well as the presence of an extended He-rich envelope, indicate an unusual progenitor channel for iPTF\,14gqr. The detection of the early shock cooling emission indicates a core-collapse origin of the explosion, while the bright radioactivity powered emission suggests that this explosion is associated with the class of iron core-collapse explosions. The low ejecta mass together with the small remaining amount of He in the progenitor rule out models of single star evolution as well as a non-degenerate massive star companion for the progenitor of iPTF\,14gqr \cite{SuppMat}, leaving only the most compact companions (such as a NS, WD or BH) as possible explanations of the highly stripped (or `ultra-stripped') progenitor. \\
Ultra-stripped explosions have been modeled in the case of He star - NS binaries, where stripping of the He star by a NS in a close orbit leads to the subsequent collapse of an ultra-stripped He star \cite{Tauris2013, Tauris2015, Moriya2017}. Hence, we compare theoretical bolometric light curves for ultra-stripped explosions \cite{Moriya2017} to those of iPTF\,14gqr in Figure 5, for a model with $M_{\textrm{ej}} = 0.2$ M$_{\odot}$, $M_{\textrm{Ni}} = 0.05$ M$_{\odot}$ and $E_K = 2 \times 10^{50}$ ergs. To account for the early declining emission, we also add a component corresponding to shock cooling of an extended envelope, for $M_e = 0.01$ M$_{\odot}$ and $R_e = 6 \times 10^{13}$ cm. The two component light curve matches the light curve data. We also compare the spectroscopic properties of iPTF\,14gqr to those of ultra-stripped SN models in Figure 5. The models \cite{Moriya2017} assumed fully mixed ejecta that led to the production of strong line blanketing features below $4000$ \AA\,, unlike this source. Thus, we re-calculated the models for ejecta with no mixing (as with the light curve calculations), and were able to match to the spectra of iPTF\,14gqr near the second peak (Figure 5, Figure \ref{fig:14gqr_simSpecModels}).\\
Our observations indicate the presence of an extended He-rich envelope around the progenitor at the time of collapse, thus providing insight into the terminal evolution of the progenitors of ultra-stripped SNe, and more broadly, the lowest mass progenitors of core-collapse SNe. Using the line widths in our early spectra, we estimate that the emitting envelope was expanding with a velocity of $\sim 1000 - 2000$ km s$^{-1}$ at the time of collapse, consistent with the escape velocity from a compact He star \cite{SuppMat}. When considered with the inferred size of the envelope (at least $\sim$ 500 R$_{\odot}$), the velocities suggest that the envelope was ejected $\sim 8 - 20$ days prior to the explosion. \\
The temporal coincidence of the ejection with the final SN suggests that the envelope was likely associated with an intense pre-SN mass loss episode of the progenitor \cite{SuppMat}. Despite the close stripping, ultra-stripped progenitors are expected to retain a small amount of He ($\sim 0.01$ M$_{\odot}$) in their outer layers. The prominent He and C lines in the early spectra are consistent with eruptive mass loss when considering the expected surface compositions of ultra-stripped progenitors \cite{Tauris2015}. The timescale of the ejection is similar to that expected for silicon flashes ($\sim 2$ weeks before explosion) in the terminal evolution of low mass metal cores \cite{Woosley2015}, that have been suggested to lead to elevated mass loss episodes prior to the explosion. Such mass loss episodes are relevant to ultra-stripped progenitors as well \cite{Woosley2015,Moriya2017,Muller2018}. \\
iPTF\,14gqr exhibits a projected offset of $\sim 15$ kpc from the nearest spiral arms of its star forming host galaxy \cite{SuppMat}, which is puzzling when compared to the expected locations of ultra-stripped SNe \cite{Tauris2015}. While we do not find evidence of an underlying stellar association or of galaxy emission features in late-time imaging and spectroscopy, the limits are not sensitive enough to rule out the presence of a dwarf galaxy or a star forming H-II region (characterized by its H$\alpha$ emission) at or near the transient location \cite{SuppMat}. Nonetheless, the tidally interacting environment of the host galaxy suggests that outlying star formation in collisional debris is likely in this system \cite{Boquien2009,SuppMat}, which could harbor young stellar systems (with ages of $\sim$ 5 - 100 Myrs) in the faint tidal tails (Figure \ref{fig:14gqr_lateDeep}). Hence, the discovery of a core-collapse SN in these outskirts is consistent with our interpretation.\\
While a number of previously observed fast Type Ic SNe (e.g. SN\,2005ek \cite{Drout2013} and SN\,2010X \cite{Kasliwal2010}) were suggested to be members of the ultra-stripped SN class, it has been difficult to confirm a core-collapse origin for these explosions because these events were discovered only near maximum of the radioactively powered peak. Specifically, without early photometry and spectroscopy that can reveal the presence of a shock cooling component, these fast transients are also consistent with variants of models involving thermonuclear detonations on white dwarfs (e.g. \cite{Shen2010,Metzger2012,Darbha2010}). The early discovery and prompt follow-up of iPTF\,14gqr establish the presence of a shock cooling emission component that requires an extended progenitor consistent with a core-collapse explosion. In the probable scenario that iPTF\,14gqr formed a NS in the explosion (we find a BH remnant to be unlikely given the observed properties of the SN \cite{SuppMat}), the low ejecta mass in the system suggests that the SN results in the formation of a bound and compact NS binary system \cite{SuppMat}. \\
\section*{Implications for formation of compact NS binaries}
Our interpretation of iPTF\,14gqr as an ultra-stripped SN has implications in the wider context of stellar evolution. Compact NS binary systems evolve from binary massive stars that undergo several phases of mass transfer over their lifetime (Figure 6). The initial phases of such evolution, in which two massive stars evolve into interacting binaries consisting of a compact object in orbit around a massive star (X-ray binaries), have been observed in several systems in the local Universe \cite{Benvenuto2017,Walter2015}. However, the subsequent phases that lead to the formation of compact NS binary systems have not been observed. This is due to the low occurrence rates of such systems, the short lifetimes ($\sim 10^{6}$ years) of the final stages and observational selection effects disfavoring their detection \cite{Tauris2015,Gotberg2017,Zapartas2017}.\\
Binary evolution models suggest that the subsequent evolution proceeds via a common envelope phase, during which the loss of angular momentum via dynamical friction leads to the formation of a close He star - compact object binary \cite{Bhattacharya1991,Tauris2006,Tauris2017}. An additional phase of close gravitational stripping by the compact companion then leads to the formation of an ultra-stripped SN progenitor \cite{Tauris2017}, with properties which can be inferred from our observations of iPTF\,14gqr. The measured orbital properties of known double NS systems suggest that the second NSs were created in weak and low ejecta mass explosions that impart a small natal kick to the newborn NS \cite{Ferdman2013, Beniamini2016a}.\\
The presence of the extended He-rich envelope in iPTF\,14gqr along with the lack of He in the low mass of ejecta suggest that the progenitor was highly stripped by a compact companion, such that only a thin He layer was retained on its surface. This He layer was then ejected in an intense pre-SN mass loss episode, as shown by the high velocity of the envelope. Taken together, these observations provide evidence of the terminal evolution of a post common envelope He star - compact object binary leading to the formation of a compact NS binary system (Figure 6). \\
While wide binaries containing a NS and another compact object may be formed in non-interacting systems of binary massive stars, ultra-stripped SNe have been suggested to precede the formation of almost all compact NS binary systems \cite{Tauris2015}. Thus, these explosions likely represent the only channel to forming NS-NS and NS-BH systems that are compact enough to merge within the age of the universe and produce observable merger signals for joint gravitational wave (e.g. \cite{Abbott2017a}) and electromagnetic (e.g. \cite{Abbott2017b,Pian2017,Kasen2017}) observations \cite{Voss2003,Tauris2015,DNSDynamical}. Given that only a fraction of the systems produced by these explosions will merge within that time, the rates of ultra-stripped explosions must be higher than the rates of their mergers. \\\\
\noindent
\bibliographystyle{Science.bst}
The theory of classifying spaces for principal bundles has a long history
in topology~\cite{Mi,Se,St}
and its importance is well-established.
In this paper, we develop the analogous theory for spaces with a smooth structure.
In brief, given a smooth group $G$, we put a smooth structure on $EG$ and $BG$,
define a smooth principal $G$-bundle $EG \to BG$, and show that this bundle is
universal in an appropriate sense.
We show how these results can be used to classify smooth fiber bundles as
well as smooth vector bundles, laying the foundation for future work on
smooth characteristic classes and smooth $K$-theory.
The framework we use to formulate these results is that of diffeological spaces.
A diffeological space is a set $X$ along with a chosen collection of functions
$U \to X$ (called plots), where $U$ runs over open subsets of Euclidean spaces.
The plots are subject to three simple axioms (see Definition~\ref{de:diffeological-space}).
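(In brief, and anticipating that definition: every constant map $U \to X$ is a plot, the precomposition of a plot with a smooth map between open subsets of Euclidean spaces is again a plot, and a map that is locally a plot is a plot.)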
The category of diffeological spaces and smooth maps between them is a
convenient category in which to make constructions. It includes smooth manifolds
as a full subcategory, as well as function spaces, diffeomorphism groups and
singular spaces such as manifolds with corners and all quotients.
The geometry and homotopy theory of diffeological spaces is well-developed
(see~\cite{CSW,CW1,CW2,I1,I2,So1,So2,Wu} and references therein),
giving a solid framework in which to develop the present theory.
When classifying principal bundles in topology, one has to either
restrict to base spaces that are paracompact or (more generally)
consider only numerable bundles.
The issue is that it is not true in general that if $\pi : E' \to B'$
is a principal bundle and $f, g : B \to B'$ are homotopic, then the
pullback bundles $f^*(\pi)$ and $g^*(\pi)$ over $B$ are isomorphic.
The same issue arises in the smooth setting.
We use results on the homological algebra of diffeological vector spaces~\cite{Wu}
to give examples of this phenomenon that are unique to the smooth setting.
In these new examples, the group is a diffeological vector space.
We also show that a topological example~\cite{M1,M2} adapts to diffeological
spaces.
In all of these cases, the approach is to give a non-trivial principal
bundle $\pi$ over a smoothly contractible space $X$.
It then follows that the identity map $X \to X$ and a constant map $X \to X$
are smoothly homotopic but that the pullbacks of $\pi$ along these maps are not isomorphic.
It also follows that there is no classifying space for such principal bundles.
Because of this, we focus on what we call $D$-numerable bundles, the smooth
analogs of numerable bundles. These are bundles for which one can choose
a smooth partition of unity on the base space subordinate to a trivializing open cover.
Our first substantial result is the following:
\theoremstyle{plain}
\newtheorem*{cor:homotopy-pullback}{Corollary \ref{cor:homotopy-pullback}}
\begin{cor:homotopy-pullback}
If $\pi : E' \to B'$ is a $D$-numerable principal $G$-bundle,
and $f$ and $g$ are smoothly homotopic maps $B \to B'$,
then the pullbacks $f^*(\pi)$ and $g^*(\pi)$ are isomorphic as principal $G$-bundles over $B$.
\end{cor:homotopy-pullback}
Here $G$ is a diffeological group, which is a generalization of a Lie group.
Our method of proof follows~\cite{Hu} in outline, but requires many
changes in the details due to the smoothness requirement.
The main technical difficulty is surmounted using the following
result,\footnote{We thank Chengjie Yu for a sketch of the proof of
Proposition~\ref{prop:functional-testing-zero}, and
Gord Sinnamon and Willie Wong for ideas that led towards this result.}
which may be of independent interest:
\newtheorem*{prop:functional-testing-zero}{Proposition \ref{prop:functional-testing-zero}}
\begin{prop:functional-testing-zero}
There exists a smooth map $F:C^\infty(\mathbb{R},\mathbb{R}^{\geq 0}) \to \mathbb{R}^{\geq 0}$ such that
$F(f)=0$ if and only if $f(x)=0$ for some $x \in [0,1]$.
\end{prop:functional-testing-zero}
This function is used in place of the function sending $f$ to $\min_{x \in [0,1]} f(x)$,
which is not smooth.
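For instance, along the smooth path $t \mapsto f_t$ in $C^\infty(\mathbb{R},\mathbb{R}^{\geq 0})$ given by $f_t(x) = (x-t)^2$, one has
\[
\min_{x \in [0,1]} f_t(x) =
\begin{cases}
t^2, & \text{if } t < 0, \\
0, & \text{if } 0 \leq t \leq 1, \\
(1-t)^2, & \text{if } t > 1,
\end{cases}
\]
which fails to be twice differentiable at $t = 0$ and $t = 1$.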
Next, given a diffeological group $G$,
we define a principal $G$-bundle $EG \to BG$, show that it is $D$-numerable,
and prove our main result:
\newtheorem*{thm:classify-principal}{Theorem \ref{thm:classify-principal}}
\begin{thm:classify-principal}
For any diffeological space $B$ and any diffeological group $G$, the pullback operation
gives a bijection $[B,BG] \to \Prin_G^D(B)$ which is natural in $B$.
\end{thm:classify-principal}
Here $[B, BG]$ denotes the set of smooth homotopy classes of maps,
and $\Prin_G^D(B)$ denotes the set of isomorphism classes of $D$-numerable
principal bundles over $B$.
Our $EG$ and $BG$ are set-theoretically the same as those of~\cite{Mi}
and~\cite{Hu}, but are endowed with diffeologies.
When the underlying topological group $D(G)$ is locally compact Hausdorff,
we show that the space $D(BG)$ is homeomorphic to the usual classifying
space of $D(G)$.
Note that because $\pi : EG \to BG$ is itself $D$-numerable and the base $B$ is
not constrained, $\pi$ is truly universal and therefore $BG$ is the unique
diffeological space up to smooth homotopy equivalence that classifies
$D$-numerable principal $G$-bundles.
We go on to develop the theory of associated bundles, showing
in Theorem~\ref{thm:classify-diff-bundles} that for any diffeological
space $F$, $B{\mathfrak{D}\mathrm{iff}}(F)$ classifies $D$-numerable diffeological bundles
with fiber $F$.
Here ${\mathfrak{D}\mathrm{iff}}(F)$ is the diffeological group of diffeomorphisms from $F$
to itself, and $D$-numerable diffeological bundles are the smooth
analog of numerable fiber bundles.
We show in Theorem~\ref{thm:naturality-G} that the bijection in
Theorem~\ref{thm:classify-principal} is natural in $G$.
Finally, we define diffeological vector bundles and show
in Theorem~\ref{thm:classify-vb} that for any diffeological
vector space $V$, $B \GL(V)$ classifies the $D$-numerable vector bundles
with fiber $V$.
Here $\GL(V)$ is the diffeological group of smooth linear isomorphisms
from $V$ to $V$.
\smallskip
As mentioned above, many of our arguments follow the topological arguments
in their overall strategy, but differ in the details.
We also adapt a topological result from~\cite{B} in order to correct some
minor gaps in the arguments of~\cite{Hu}.
\smallskip
\textbf{Future work}:
This paper is intended to provide a foundation for future work.
For example, the theory of characteristic classes for bundles over
a smooth manifold $M$ has two incarnations. One can use Chern-Weil theory
to construct explicit de Rham forms on $M$ using invariant polynomials
and a connection on the bundle.
Alternatively, one can study the singular cohomology of the classifying
space $BG$, and pull back singular cohomology classes along the classifying map
$M \to BG$.
By the results of the present paper, $BG$ is a diffeological space, and
so one can work directly with de Rham forms on $BG$.
We intend to explore the theory of connections on diffeological bundles,
and use this to apply Chern-Weil theory to the universal case, thereby
bringing the geometric and topological approaches to characteristic
classes closer together.
(See also the remarks below about~\cite{Mo}.)
We also expect the results of this paper to be useful in the study
of smooth tangent bundles and smooth $K$-theory.
\smallskip
\textbf{Relationship to other work}:
In~\cite{Mo}, Mostow defined smooth versions of classifying spaces of
Lie groups, using a framework called differentiable spaces.
His focus was on studying the cohomology of such classifying spaces,
and so he did not prove analogs of our results showing that these
spaces do indeed classify certain bundles.
His results are related to the ideas described under the \emph{Future Work}
heading above.
We expect to get cleaner and more general results by working with diffeological
spaces, since the theory of diffeological spaces is better developed.
Moreover, the results of the present paper, which show that the universal
bundle is truly universal, would then complete the circle, giving a
close relationship between the Chern-Weil approach to classifying spaces
and the topological approach.
In~\cite{MW}, Magnot and Watts have independently worked on smooth
classifying spaces using diffeological spaces,
and some comments comparing the approaches are in order.
As sets, our $EG$ and $BG$ are the same as the sets defined by Magnot and Watts,
which we'll denote $EG_{MW}$ and $BG_{MW}$.
However, the diffeologies we use have fewer plots, which leads to better properties.
First, our universal bundle is $D$-numerable, while the MW universal
bundle is only weakly $D$-numerable (see~\cite[Definition 2.17]{MW}).
Moreover, our universal bundle classifies $D$-numerable principal bundles over all diffeological spaces,
while $EG_{MW} \to BG_{MW}$ only classifies $D$-numerable principal bundles over
diffeological spaces that are Hausdorff, second-countable and smoothly paracompact.
This greater generality is useful in practice, as one of the aims of diffeological
spaces is to encompass mapping spaces and quotients,
and also means that our classifying
space is uniquely determined up to smooth homotopy equivalence, while $BG_{MW}$ is not.
We also obtain stronger results about the classification of fiber bundles.
Magnot and Watts cover many topics we do not, such as connections and various applications.
\smallskip
\textbf{Organization}:
In Section~\ref{se:background}, we review diffeological spaces,
diffeological groups, diffeological bundles and principal bundles.
In Section~\ref{se:no-classifying}, we give examples of non-trivial
principal bundles over smoothly contractible base spaces,
motivating our focus on $D$-numerable bundles.
In Section~\ref{se:D-numerable}, we develop the theory of smooth partitions of unity,
$D$-numerable diffeological bundles, and $D$-numerable principal bundles,
and prove that, for $D$-numerable bundles, homotopic maps give isomorphic pullbacks.
In Section~\ref{se:classify-Dpb}, we define $EG \to BG$ and prove that it is
a universal $D$-numerable bundle, our main result,
using many of the tools from Section~\ref{se:D-numerable}.
In Sections~\ref{se:classify-bundle} and~\ref{se:classify-vb},
we develop the theories of associated bundles and diffeological vector bundles, respectively.
In Appendix~\ref{se:zeros}, we prove various results in analysis, including
Proposition~\ref{prop:functional-testing-zero}.
\smallskip
\textbf{Conventions}:
Every manifold is assumed to be finite-dimensional, smooth, second-countable, Hausdorff and without
boundary. Every manifold is equipped with the standard diffeology when viewed as a diffeological space.
Every product of diffeological spaces is equipped with the product diffeology.
\section{Background on diffeological spaces and bundles}
\label{se:background}
\subsection{Diffeological spaces}
\begin{de}[\cite{So2}]\label{de:diffeological-space}
A \dfn{diffeological space} is a set $X$
together with a specified set of functions $U \to X$ (called \dfn{plots})
for each open set $U$ in $\mathbb{R}^n$ and each $n \in \mathbb{N}$,
such that for all open subsets $U \subseteq \mathbb{R}^n$ and $V \subseteq \mathbb{R}^m$:
\begin{enumerate}
\item (Covering) Every constant function $U \to X$ is a plot.
\item (Smooth Compatibility) If $U \to X$ is a plot and $V \to U$ is smooth,
then the composite $V \to U \to X$ is also a plot.
\item (Sheaf Condition) If $U=\cup_i U_i$ is an open cover
and $U \to X$ is a function such that each restriction $U_i \to X$ is a plot,
then $U \to X$ is a plot.
\end{enumerate}
A function $f:X \rightarrow Y$ between diffeological spaces is
\dfn{smooth} if for every plot $p:U \to X$ of $X$,
the composite $f \circ p$ is a plot of $Y$.
\end{de}
An isomorphism in the category ${\mathfrak{D}\mathrm{iff}}$ of diffeological spaces and smooth maps will be called a \dfn{diffeomorphism}.
Every manifold $M$ is canonically a diffeological space with the
plots taken to be all smooth maps $U \to M$ in the usual sense.
We call this the \dfn{standard diffeology} on $M$.
It is easy to see that smooth maps in the usual sense between
manifolds coincide with smooth maps between them with the standard diffeology.
For a diffeological space $X$ with an equivalence relation~$\sim$,
the \dfn{quotient diffeology} on $X/{\sim}$ consists of all functions
$U \to X/{\sim}$ that locally factor through the quotient map $X \to X/{\sim}$ via plots of $X$.
A \dfn{subduction} is a map diffeomorphic to a quotient map.
That is, it is a map $X \to Y$ such that the plots in $Y$
are the functions that locally lift to $X$ as plots in $X$.
For a diffeological space $Y$ and a subset $A$ of $Y$,
the \dfn{sub-diffeology} consists of all functions $U \to A$ such that
$U \to A \hookrightarrow Y$ is a plot of $Y$.
An \dfn{induction} is an injective smooth map $A \to Y$ such that a
function $U \to A$ is a plot of $A$ if and only if $U \to A \to Y$ is a plot of $Y$.
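For example, the quotient map $\mathbb{R} \to \mathbb{R}/\mathbb{Z}$ is a subduction, and $\mathbb{R}/\mathbb{Z}$ with the quotient diffeology is diffeomorphic to the circle $S^1$ with its standard diffeology; similarly, the inclusion of an embedded submanifold into a manifold is an induction.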
More generally, we have the following convenient properties of the category of diffeological spaces:
\begin{thm}
The category ${\mathfrak{D}\mathrm{iff}}$ is complete, cocomplete and cartesian closed.
\end{thm}
For more details, see~\cite[Section~2]{CSW}. The (co)limit of a diagram of diffeological
spaces has as its underlying set the (co)limit of the underlying sets
of the diffeological spaces in the diagram.
Given diffeological spaces $X$ and $Y$,
the set $C^\infty(X,Y)$ of all smooth maps $X \to Y$ has a canonical diffeology so that the exponential
law holds.
Every diffeological space has a canonical topology:
\begin{de}[\cite{I1}]
Given a diffeological space $X$, a subset $A \subseteq X$ is \dfn{$D$-open} if
$p^{-1}(A)$ is open in $U$ for each plot $p : U \to X$.
The $D$-open sets form a topology on $X$ called the \dfn{$D$-topology},
and we write $D(X)$ for the set $X$ equipped with this topology.
\end{de}
\begin{ex}
The $D$-topology of a manifold with the standard diffeology is the usual topology.
\end{ex}
\begin{rem}\label{rem:disjoint-union}
If $X$ is a disjoint union of $D$-open subsets $U_i$, then $X$ is the coproduct of
the $U_i$ in the category of diffeological spaces.
\end{rem}
\subsection{Diffeological bundles}
\begin{de}\label{de:bundles}
Let $F$ be a diffeological space.
A smooth map $\pi:E \to B$ between two diffeological spaces
is \dfn{trivial of fiber type $F$}
if there exists a diffeomorphism $h$
making the following diagram commute:
\[
\xymatrix@C5pt{E \ar[dr]_\pi \ar[rr]^-h && B \times F\ \ar[dl]^{p_1} \\ & B,}
\]
where $p_1$ is the projection.
The map $\pi$ is \dfn{locally trivial of fiber type $F$}
if there exists a $D$-open cover $\{ B_i \}$ of $B$ such that
$\pi|_{B_i}:\pi^{-1}(B_i) \to B_i$ is trivial of fiber type $F$ for each $i$.
The map $\pi$ is a \dfn{diffeological bundle of fiber type $F$} if the pullback of $\pi$ along any plot
of $B$ is locally trivial of fiber type $F$.
In all of these cases, we call $F$ the \dfn{fiber} of $\pi$, $E$ the \dfn{total space}, and $B$ the \dfn{base space}.
Two diffeological bundles $\pi:E \to B$ and $\pi':E' \to B$ are \dfn{isomorphic} if there exists a diffeomorphism
$h:E \to E'$ such that $\pi = \pi' \circ h$.
\end{de}
Here is an equivalent characterization of diffeological bundles:
\begin{thm}[{\cite[8.19]{I2}}]
A smooth map $\pi:E \to B$ between two diffeological spaces
is a diffeological bundle of fiber type $F$ if and only if
the pullback of $\pi$ along any global plot of $B$
(that is, a plot of the form $\mathbb{R}^n \to B$) is trivial of fiber type $F$.
\end{thm}
Note that every locally trivial bundle is a diffeological bundle, but that
the converse fails in general.
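For example, the quotient map $T^2 \to T^2_\theta$ of Example~\ref{ex:irrational-torus} below is a diffeological bundle which is not locally trivial: the $D$-topology on $T^2_\theta$ is indiscrete, so local triviality would force the bundle to be trivial, and it is a standard fact that this bundle is not trivial.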
\begin{ex}
If $B$ is a manifold and $\pi : E \to B$ is smooth, then $\pi$ is locally trivial
of fiber type $F$ if and only if it is a diffeological bundle of fiber type $F$.
Moreover, if the fiber $F$ is a manifold, then it is also equivalent for $\pi$
to be a smooth fiber bundle in the usual sense.
\end{ex}
\begin{lem}\label{lem:disjoint-union}
If $B$ is a disjoint union of $D$-open sets $B_i$ and $\pi: E \to B$ is a diffeological
bundle which is trivial over each $B_i$, then $\pi$ is trivial.
\end{lem}
\begin{proof}
$E$ is the disjoint union of the open sets $E_i := \pi^{-1}(B_i)$, so
by Remark~\ref{rem:disjoint-union}, $E \to B$ is the coproduct of the
trivial bundles $E_i \to B_i$ and hence is trivial.
\end{proof}
\subsection{Principal bundles}
\begin{de}[\cite{So1}]
A \dfn{diffeological group} is a group object in ${\mathfrak{D}\mathrm{iff}}$.
That is, a diffeological group is both a diffeological space and a group
such that the group operations are smooth maps.
\end{de}
\begin{ex}
Every subgroup of a diffeological group equipped with the sub-diffeology is a diffeological group.
\end{ex}
\begin{ex}
Given a diffeological space $X$, write ${\mathfrak{D}\mathrm{iff}}(X)$ for the set of all diffeomorphisms
$X \to X$. Define $p:U \to {\mathfrak{D}\mathrm{iff}}(X)$ to be a plot if the maps $U \times X \to X$ given by
$(u,x) \mapsto p(u)(x)$ and $(u,x) \mapsto (p(u))^{-1}(x)$ are both smooth. These plots
form a diffeology on ${\mathfrak{D}\mathrm{iff}}(X)$ making it a diffeological group.
We always equip ${\mathfrak{D}\mathrm{iff}}(X)$ with this diffeology.
\end{ex}
Here is a $G$-equivariant version of Definition~\ref{de:bundles}:
\begin{de}
Let $G$ be a diffeological group and let $\pi : E \to B$ be a smooth map
between diffeological spaces.
Assume that $G$ has a \dfn{smooth right action} on $E$, i.e., $E$ has a
right $G$ action and the action map
$E \times G \to E$ is smooth. Also assume that $\pi(x \cdot g) = \pi(x)$
for all $x \in E$ and $g \in G$.
We say that $\pi$ is a \dfn{trivial principal $G$-bundle} if there is a $G$-equivariant
diffeomorphism $h$ making the following diagram commute:
\[
\xymatrix@C5pt{E \ar[dr]_\pi \ar[rr]^-h && B \times G\ \ar[dl]^{p_1} \\ & B.}
\]
Here the action of $G$ on $B \times G$ is defined by $(b, g) \cdot g' = (b, g g')$.
We say that $\pi$ is a \dfn{locally trivial principal $G$-bundle}
if there exists a $D$-open cover $\{ B_i \}$ of $B$ such that
$\pi|_{B_i}:\pi^{-1}(B_i) \to B_i$ is a trivial principal $G$-bundle for each $i$.
The map $\pi$ is a \dfn{(diffeological) principal $G$-bundle} if the pullback of $\pi$ along any plot
of $B$ is a locally trivial principal $G$-bundle.
Two principal $G$-bundles $\pi:E \to B$ and $\pi':E' \to B$ are \dfn{isomorphic} if there exists a $G$-equivariant
diffeomorphism $h:E \to E'$ such that $\pi = \pi' \circ h$.
\end{de}
Here is an equivalent characterization of principal bundles which will be used frequently later:
\begin{thm}[{\cite[8.11, 8.13]{I2}}]\label{thm:principal}
If $E \to B$ is a principal $G$-bundle, then the smooth map
$a : E \times G \to E \times E$ given by $(x,g) \mapsto (x,x \cdot g)$ is an induction
and there is a diffeomorphism $B \cong E/G$ commuting with the maps from $E$.
Conversely, if $E \times G \to E$ is a smooth action of a diffeological group $G$ on $E$
and the map $a$ is an induction, then the quotient map $E \to E/G$ is a principal $G$-bundle.
\end{thm}
\begin{rem}\label{rem:trivial-pb}
It follows from the above theorem that a principal bundle is trivial if and only if it has a smooth global section~\cite[8.12]{I2}.
Therefore, a principal bundle is locally trivial as a principal bundle if and only if
it is locally trivial as a diffeological bundle.
\end{rem}
As another application of the above theorem, we have:
\begin{prop}[{\cite[8.15]{I2}}]\label{prop:G->G/H-principal-bundle}
Let $G$ be a diffeological group, and let $H$ be a subgroup of $G$ with the sub-diffeology.
Then $G \to G/H$ is a principal $H$-bundle,
where $G/H$ is the set of left cosets of $H$ in $G$, with the quotient diffeology.
\end{prop}
Note that we are \emph{not} requiring the subgroup $H$ to be closed.
In particular, we have the following interesting example:
\begin{ex}[{\cite[8.38]{I2}}]\label{ex:irrational-torus}
Let $T^2=\mathbb{R}^2/\mathbb{Z}^2$ be the usual $2$-torus,
and let $\mathbb{R}_\theta$ be the image of the line $\{ y = \theta x \}$
under the quotient map $\mathbb{R}^2 \to T^2$, with $\theta$ a fixed irrational number.
Note that $T^2$ is an abelian Lie group,
and $\mathbb{R}_\theta$ is a dense subgroup which is diffeomorphic to $\mathbb{R}$.
The quotient group $T^2_\theta := T^2/\mathbb{R}_\theta$ with the quotient diffeology
is called the \dfn{irrational torus of slope $\theta$}, and
by the above proposition,
the quotient map $T^2 \to T^2_\theta$ is a principal $\mathbb{R}$-bundle.
\end{ex}
\begin{prop}[{\cite[8.10, 8.12]{I2}}]\label{prop:pullback}
If $f:B' \to B$ is a smooth map and $E \to B$ is a diffeological
bundle of fiber type $F$ (resp.\ a principal $G$-bundle),
then so is the pullback $f^*(E) \to B'$.
Moreover, pullback preserves triviality and local triviality.
\end{prop}
The following result follows immediately from~\cite[8.13 Note 2]{I2}
and will be useful later:
\begin{prop}\label{prop:commsq=>pullbackdiff}
Let
\[
\xymatrix{E' \ar[r]^f \ar[d]_{\pi'} & E \ar[d]^{\pi} \\ B' \ar[r]_g & B}
\]
be a commutative square in ${\mathfrak{D}\mathrm{iff}}$, where $\pi'$ and $\pi$ are principal $G$-bundles
and $f$ is $G$-equivariant. Then $\pi'$ is isomorphic to $g^*(\pi)$ as principal $G$-bundles over $B'$.
\end{prop}
\section{There is no classifying space for all diffeological principal bundles}
\label{se:no-classifying}
This section motivates our focus on $D$-numerable bundles in later sections.
We first recall the situation in topology.
Let $G$ be a topological group.
We would like to have a space $BG$ such that for any topological space $X$,
the set of isomorphism classes of principal $G$-bundles over $X$ naturally
bijects with the set of homotopy classes of maps from $X$ to $BG$.
This is possible when the space $X$ is restricted to being paracompact,
or, more generally, if one considers only numerable bundles, but is
not possible in general.
One way to show that it is not possible is as follows.
First observe that $BG$ must be path-connected, by taking the case where $X$ is a point.
Next, one shows that there is a non-trivial
principal $G$-bundle $\pi$ over a contractible space $X$. Then there are at
least two non-isomorphic principal $G$-bundles over $X$, but only one homotopy
class of maps $X \to BG$ for any path-connected space $BG$.
In addition, such an example shows that, in general, homotopic maps do not
have isomorphic pullbacks: the pullback of $\pi$ along the identity map
$X \to X$ is $\pi$, while the pullback of $\pi$ along a constant map is trivial.
Analogous results hold in the diffeological context, and the same technique is used.
In the first part of this section, we give a family of examples of non-trivial
diffeological principal bundles over smoothly contractible base spaces, using the
theory of diffeological vector spaces from~\cite{Wu}.
These examples are new, and we are not aware of similar topological examples.
Then, in Example~\ref{ex:Goodwillie}, we give another example of a non-trivial
principal bundle over a smoothly contractible base space.
This example is even locally trivial, and so shows that restricting to this
subclass of diffeological principal bundles does not solve the problem.
This example is a straightforward adaptation of an example from topology~\cite{M1,M2}.
We begin by recalling the concept of smooth homotopy~\cite[Section 3.1]{CW1}:
\begin{de}
Given diffeological spaces $X$ and $Y$, two smooth maps $f,g:X \to Y$ are called \dfn{smoothly homotopic}
if there exists a smooth map $F: X \times \mathbb{R} \to Y$ such that $F(x, 0)=f(x)$ and $F(x, 1)=g(x)$ for each $x$ in $X$.
A diffeological space $X$ is \dfn{smoothly contractible} if the identity map
is smoothly homotopic to a constant map.
\end{de}
Given diffeological spaces $X$ and $Y$, the relation of smooth homotopy on $C^\infty(X,Y)$ is an equivalence relation,
and we denote the quotient set by $[X,Y]$.
\begin{de}
A \dfn{diffeological vector space} $V$ is both a diffeological space and an $\mathbb{R}$-vector space such that addition $V \times V \to V$
and scalar multiplication $\mathbb{R} \times V \to V$ are both smooth.
\end{de}
Observe that every diffeological vector space $V$ is smoothly contractible
via the smooth homotopy sending $(v, t)$ to $t v$.
A \dfn{short exact sequence} in the category ${\mathfrak{D}\mathrm{Vect}}$ of diffeological vector spaces and smooth linear maps
is a diagram
\begin{equation}\label{ses}
\xymatrix{0 \ar[r] & V_1 \ar[r]^i & V_2 \ar[r]^j & V_3 \ar[r] & 0}
\end{equation}
which is a short exact sequence of vector spaces such that $i$ is a linear induction
and $j$ is a linear subduction. For any such short exact sequence, we have a commutative triangle
\[
\xymatrix@C5pt{& V_2 \ar[dl]_j \ar[dr]^\pi \\ V_3 && V_2/V_1, \ar[ll]}
\]
where the horizontal map is an isomorphism of diffeological vector spaces.
Hence, by Proposition~\ref{prop:G->G/H-principal-bundle}, $j$ is a diffeological principal $V_1$-bundle.
This bundle is trivial if and only if the short exact sequence~\eqref{ses} splits smoothly (see~\cite[Theorem~3.16]{Wu}).
In particular, it follows that if~\eqref{ses} does not split smoothly,
then there is no classifying space for principal $V_1$-bundles.
\begin{ex}\label{ex:Borel}
Let $j : C^\infty(\mathbb{R}, \mathbb{R}) \to \prod_{\omega} \mathbb{R}$ be defined by $j(f)_n := f^{(n)}(0)$,
and let $K$ be the kernel. It is shown in~\cite[Example~4.3]{Wu} that this is
a short exact sequence of diffeological vector spaces that does not split smoothly.
Therefore, there is no classifying space for principal $K$-bundles.
\end{ex}
\medskip
We now give additional examples of this flavour, using some results
from~\cite{Wu}.
\begin{de}
A diffeological vector space $P$ is called \dfn{projective} if for any linear subduction $\pi:W_1 \to W_2$ and any
smooth linear map $f:P \to W_2$, there exists a smooth linear map $g:P \to W_1$ such that $f = \pi \circ g$.
\end{de}
\begin{prop}[{\cite[Theorem~6.13]{Wu}}]
For every diffeological vector space $V$, there exists a projective
diffeological vector space $P$ with a linear subduction $P \to V$.
\end{prop}
\begin{thm}\label{thm:non-projective}
Let $V$ be a non-projective diffeological vector space.
Then there exists a diffeological vector space $W$ and
a non-trivial diffeological principal $W$-bundle over $V$.
This implies that there is no classifying space for principal $W$-bundles.
\end{thm}
\begin{proof}
Let $W$ be the kernel of a linear subduction $P \to V$ with $P$ projective.
Since projectives are closed under summands (\cite[Proposition~6.11(3)]{Wu}),
the sequence does not split smoothly.
Thus there is a non-trivial principal $W$-bundle over $V$.
\end{proof}
\begin{ex}
We saw in Example~\ref{ex:Borel} that $\prod_{\omega} \mathbb{R}$ is not projective.
It is shown in~\cite[Example~6.9]{Wu} that $\mathbb{R}$ with the indiscrete\footnote{An
indiscrete diffeological space has all possible functions as plots, and hence has the indiscrete $D$-topology.}
diffeology is also not projective.
\end{ex}
To obtain further examples, including examples where the base space is
not a diffeological vector space, we make use of the following construction.
\begin{prop}[{\cite[Proposition~3.5]{Wu}}]
For every diffeological space $X$, there is a diffeological vector space $F(X)$ together with a smooth map
$i:X \to F(X)$ such that the following universal property holds: for any diffeological vector space $V$ and any smooth
map $f:X \to V$, there exists a unique smooth linear map $g:F(X) \to V$ satisfying $f = g \circ i$.
\end{prop}
We call $F(X)$ the \dfn{free diffeological vector space generated by $X$}.
\begin{ex}
Not every free diffeological vector space is projective.
For example, it is shown in~\cite[Example~6.9]{Wu} that $F(T^2_\theta)$
is not projective, where $T^2_\theta$ is the irrational torus from
Example~\ref{ex:irrational-torus}.
\end{ex}
Some necessary conditions for a free diffeological vector space to be projective have been found
in~\cite[Corollary~3.15]{CW2}.
\begin{cor}
Let $X$ be a diffeological space such that $F(X)$ is not projective. Then there exists a non-trivial diffeological
principal $W$-bundle over $X$, where $W$ is a diffeological vector space.
\end{cor}
\begin{proof}
By Theorem~\ref{thm:non-projective}, there is a non-trivial diffeological principal $W$-bundle
$\pi:P \to F(X)$ with $W$ a diffeological vector space.
Consider its pullback $p : E \to X$ along $i: X \to F(X)$.
We claim that $p$ is not trivial.
Suppose it is.
Then, by Remark~\ref{rem:trivial-pb}, $p$ has a smooth section, which implies
that there exists a smooth map $f:X \to P$ such that $i = \pi \circ f$.
The universal property then implies that $\pi$ has a smooth section over $F(X)$,
which means that $\pi$ is trivial, a contradiction.
\end{proof}
\begin{ex}
If $X$ is an indiscrete diffeological space with more than one point,
then $X$ is smoothly contractible and $F(X)$ is not projective (\cite[Corollary~3.15]{CW2}).
So there exists a non-trivial diffeological principal bundle $\pi : E \to X$.
\end{ex}
The above examples are diffeological principal bundles which may not be locally trivial,
and thus have no direct analog in topology.
We now show that even locally trivial diffeological principal bundles do not have
a classifying space.
\begin{ex}\label{ex:Goodwillie}
The following is adapted from a topological example~\cite{M1,M2}.
Consider the diffeological space $B := (\mathbb{R} \times \{0,1\})/{\sim}$, where $(x,0) \sim (x,1)$ if
$x \in \mathbb{R}^{>0}$. Write $r:\mathbb{R} \times \{0,1\} \to B$ for the quotient map and let $U_i := r(\mathbb{R} \times \{i\})$ for $i=0,1$.
Then each $U_i$ is $D$-open in $B$ and canonically diffeomorphic to $\mathbb{R}$,
and $U_0 \cap U_1$ is canonically diffeomorphic to $\mathbb{R}^{>0}$. Define $E$ to be the pushout of
\[
\xymatrix{U_1 \times \mathbb{R}^{>0} & (U_0 \cap U_1) \times \mathbb{R}^{>0}\ \ar[l] \ar@{^{(}->}[r] & U_0 \times \mathbb{R}^{>0},}
\]
where the first map is given by $(r(x,i),g) \mapsto (r(x,1),xg)$.
The projections $U_i \times \mathbb{R}^{>0} \to U_i \hookrightarrow B$ induce a smooth map $p:E \to B$
which is a locally trivial principal $\mathbb{R}^{>0}$-bundle.
Here we are regarding $\mathbb{R}^{>0}$ as a diffeological group under multiplication.
Consider the smooth map ${(\mathbb{R} \times \{0,1\}) \times \mathbb{R}} \to \mathbb{R} \times \{0,1\}$ defined by
$(x,i,t) \mapsto (\rho(t)+(1-\rho(t))x,\, i)$ for $i=0,1$, where $\rho:\mathbb{R} \to \mathbb{R}$ is a smooth function
with $\rho(0)=0$, $\rho(1)=1$ and $\im(\rho)=[0,1]$.
This induces a smooth homotopy $B \times \mathbb{R} \to B$ between the identity map and a constant map,
which shows that $B$ is smoothly contractible.
Now if $p:E \to B$ were trivial, we would have an $\mathbb{R}^{>0}$-equivariant trivialization
$E \cong B \times \mathbb{R}^{>0}$ over $B$. Restricting to each $U_i \times \mathbb{R}^{>0}$ would give maps
$U_i \times \mathbb{R}^{>0} \to B \times \mathbb{R}^{>0}$ sending $(r(x,i),g)$ to $(r(x,i),\alpha_i(x) g)$
for some smooth functions $\alpha_i : \mathbb{R} \to \mathbb{R}^{>0}$.
Since these restrictions must agree on $(U_0 \cap U_1) \times \mathbb{R}^{>0}$,
we must have that $\alpha_0(x) = \alpha_1(x) x$ for $x > 0$.
But then the identity map $\mathbb{R}^{>0} \to \mathbb{R}^{>0}$ would have a smooth extension $\mathbb{R} \to \mathbb{R}^{>0}$
sending $x$ to $\alpha_0(x)/\alpha_1(x)$,
which is clearly impossible by continuity.
\end{ex}
\section{Partitions of unity and $D$-numerable bundles}
\label{se:D-numerable}
In the previous section, we saw that in general there is no classifying space for all principal $G$-bundles.
Because of this, we restrict our attention to a special class of principal bundles called $D$-numerable principal bundles.
In the next section, we will show that there is a classifying space for $D$-numerable principal bundles over an arbitrary diffeological space.
\subsection{Partitions of unity}
We first recall the concept of smooth partition of unity in the framework of diffeology:
\begin{de}
Let $X$ be a diffeological space.
A collection $\{ U_i \}_{i \in I}$ of subsets of $X$ is \dfn{locally finite} if every point in $X$
has a $D$-open neighbourhood that intersects $U_i$ for only finitely many $i$.
Note that $\{ U_i \}$ is locally finite if and only if
the collection $\{ \bar{U_i} \}$ of $D$-closures is locally finite.
A family of smooth functions $\{\mu_i:X \to \mathbb{R}\}_{i \in I}$ is a
\dfn{smooth partition of unity} if it satisfies the following conditions:
\begin{enumerate}
\item $0 \leq \mu_i(x)$ for each $i \in I$ and $x \in X$;
\item $\{\supp(\mu_i)\}_{i \in I}$ is locally finite;
\item the sum $\sum_{i \in I} \mu_i(x)$, which makes sense because of (2), is equal to $1$ for all $x \in X$.
\end{enumerate}
Here $\supp(\mu_i)$ is the closure of $\mu_i^{-1}((0, \infty))$ in the $D$-topology,
and (2) is equivalent to requiring that $\{ \mu_i^{-1}((0, \infty)) \}$ is locally finite.
If $\mathfrak{X}=\{X_i\}_{i \in I}$ is a collection of subsets of $X$ indexed by the
same indexing set, we say that our partition of unity is \dfn{subordinate to $\mathfrak{X}$} if
$\supp(\mu_i) \subseteq X_i$ for each $i \in I$.
\end{de}
If instead of (3), we have that $\sum_{i \in I} \mu_i(x)$ is nonzero for each $x \in X$,
or equivalently that the sets $\mu_i^{-1}((0, \infty))$ form a cover of $X$, then one
can scale the functions to obtain a smooth partition of unity.
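Explicitly, one replaces each $\mu_i$ by
\[
\tilde{\mu}_i(x) := \frac{\mu_i(x)}{\sum_{j \in I} \mu_j(x)},
\]
where the denominator is smooth, being locally a finite sum of smooth functions, and is nowhere zero; the functions $\tilde{\mu}_i$ then satisfy (1)--(3) and have the same supports as the $\mu_i$.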
The following is a smooth version of a result that can be found in~\cite[Section 4]{B}.
It tells us how to adjust a partition of unity to reduce the supports,
allowing us to fill some minor gaps in the arguments of~\cite{Hu}.
\begin{lem}\label{lem:Bourbaki}
Let $X$ be a diffeological space.
If $\{\rho_i:X \to \mathbb{R}\}_{i \in I}$ is a smooth partition of unity,
then there is a smooth partition of unity $\{ \mu_{i}:X \to \mathbb{R} \}_{i \in I}$ subordinate to $\{\rho_i^{-1}((0,\infty))\}_{i \in I}$.
\end{lem}
\begin{proof}
Define $\sigma : X \to \mathbb{R}$ by $\sigma(x) = \sum_i \rho_i(x)^2$.
Note that $\sigma$ is smooth, nowhere zero and
\[
\sigma(x) \leq (\sup_i \rho_i(x)) \sum_i \rho_i(x) = \sup_i \rho_i(x)
\]
for each $x$.
Let $\phi$ be a smooth function such that $\phi(t) = 0$ for $t \leq 0$ and $\phi(t) > 0$ for $t > 0$,
and define a smooth function $\mu_i : X \to \mathbb{R}$ by $\mu_i(x) = \phi(\rho_i(x) - \sigma(x)/2)$ for each $i$.
We will show that $\supp(\mu_i) \subseteq \rho_i^{-1}((0,\infty))$, which then implies that $\{\supp(\mu_i)\}_{i \in I}$ is
locally finite.
Suppose that $\rho_i(y) = 0$.
Then there is a $D$-open neighbourhood $V$ of $y$ such that $\rho_i(x) - \sigma(x)/2 < 0$
for $x$ in $V$. That is, $\mu_i(x) = 0$ for each $x$ in $V$.
Therefore, $y \not\in \supp(\mu_i)$, as required.
Since $\{ \supp(\mu_i) \}$ is locally finite, $\sum_i \mu_i(x)$ is well-defined.
Note that for each $x$ there is a $j$ such that
$\rho_j(x) = \sup_i \rho_i(x) \geq \sigma(x) > \sigma(x)/2$.
For this $j$, $\mu_j(x) \neq 0$, and so $\sum_i \mu_i(x)$ is nowhere zero.
Therefore the functions
$\mu_i$ can be scaled to form a smooth partition of unity subordinate to $\{\rho_i^{-1}((0,\infty))\}_{i \in I}$.
\end{proof}
Our next lemma shows that one can replace any partition of unity with a
related countable one.
\begin{lem}\label{lem:countable}
Let $B$ be a diffeological space and
let $\{\rho_i:B \to \mathbb{R}\}_{i \in I}$ be a smooth partition of unity.
Then there exists a countable smooth partition of unity $\{\tau_n:B \to \mathbb{R}\}_{n \in \mathbb{N}}$
such that each $\tau_n^{-1}((0,\infty))$ is a disjoint union of $D$-open sets each of which
is contained in $\rho_i^{-1}((0,\infty))$ for some $i \in I$.
\end{lem}
\begin{proof}
Fix a smooth function
$\phi:\mathbb{R} \to \mathbb{R}$ with $\phi(t)=0$ if $t \leq 0$ and $\phi(t) > 0$ if $t > 0$.
For any non-empty finite subset $J$ of the indexing set $I$, define $\sigma_J:B \to \mathbb{R}$ by
$\sigma_J(b) = \prod_{j \in J} \phi(\rho_j(b) - \sum_{k \in I \setminus J} \rho_k(b))$.
By local finiteness of $\{\supp(\rho_i)\}_{i \in I}$, it is straightforward to check that $\sigma_J$ is well-defined
and smooth. Write $B_J := \sigma_J^{-1}((0,\infty))$.
Since each $b \in B$ is in $B_J$, where $J = \{ j \in I \mid \rho_j(b) \neq 0 \}$,
we have that $\cup_J \, B_J = B$. Moreover, each $B_J \subseteq \rho_j^{-1}((0,\infty))$ for
any $j \in J$.
Write $|J|$ for the cardinality of the set $J$. Then for any $J \neq J'$ with $|J| = |J'|$,
we have $B_J \cap B_{J'} = \emptyset$. Otherwise, let $b \in B_J \cap B_{J'}$,
and choose $j \in J \setminus J'$ and $j' \in J' \setminus J$.
Since $b \in B_J$, we have that $\rho_j(b) - \sum_{k \in I \setminus J} \rho_k(b) > 0$,
which implies that $\rho_j(b) > \rho_{j'}(b)$.
But $b \in B_{J'}$ implies that $\rho_j(b) < \rho_{j'}(b)$, a contradiction.
For $n \in \mathbb{N}^{>0}$, define $\tau_n:B \to \mathbb{R}$ by $\tau_n(b) = \sum_{J \subseteq I, |J|=n} \sigma_J(b)$.
Then $B_n := \tau_n^{-1}((0,\infty)) = \cup_{J \subseteq I, |J|=n} \, B_J$ is a disjoint union
of sets $B_J$ each of which is contained in some $\rho_j^{-1}((0,\infty))$.
(Also define $\tau_0$ to be the zero function, with $B_0 = \emptyset$.)
By local finiteness of $\{\rho_i^{-1}((0,\infty))\}_{i \in I}$, one sees that $\{B_n\}_{n \in \mathbb{N}}$ is locally finite,
and therefore that $\{ \supp(\tau_n) \}_{n \in \mathbb{N}}$ is locally finite.
The result then follows by normalizing the $\tau_n$'s.
\end{proof}
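To illustrate the construction in the proof, consider the simplest case $I = \{1,2\}$: then $B_{\{1\}} = \{\rho_1 > \rho_2\}$, $B_{\{2\}} = \{\rho_2 > \rho_1\}$ and $B_{\{1,2\}} = \rho_1^{-1}((0,\infty)) \cap \rho_2^{-1}((0,\infty))$, so that $\tau_1^{-1}((0,\infty))$ is the disjoint union $B_{\{1\}} \sqcup B_{\{2\}}$ and $\tau_2^{-1}((0,\infty)) = B_{\{1,2\}}$.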
\subsection{$D$-numerable diffeological bundles}
\begin{de}
Let $F$ be a diffeological space. A smooth map $\pi:E \to B$ is called a \dfn{$D$-numerable diffeological bundle of fiber type $F$} if there
exists a smooth partition of unity $\{\mu_i:B \to \mathbb{R}\}_{i \in I}$ subordinate to a $D$-open cover $\{B_i\}_{i \in I}$ of $B$ such that each
$\pi|_{B_i}$ is trivial of fiber type $F$.
\end{de}
Clearly,
\[
\text{trivial} \implies \text{$D$-numerable} \implies \text{locally trivial} \implies \text{diffeological bundle}.
\]
By Lemma~\ref{lem:Bourbaki}, our definition of $D$-numerable agrees with that of~\cite{MW}.
\begin{ex}
If $B$ is a manifold, then the following concepts (over $B$) coincide:
\begin{enumerate}
\item $D$-numerable diffeological bundle;
\item locally trivial bundle;
\item diffeological bundle.
\end{enumerate}
\end{ex}
\begin{ex}
If a diffeological space $B$ has indiscrete $D$-topology, then the only $D$-numerable diffeological bundle over $B$ is the trivial
bundle. In particular, the only $D$-numerable diffeological bundle over an irrational torus or an indiscrete diffeological space is trivial.
\end{ex}
\begin{lem}\label{lem:pullback-D-numerable}
The pullback of a $D$-numerable diffeological bundle of fiber type $F$ is again $D$-numerable of fiber type $F$.
\end{lem}
\begin{proof}
This is straightforward.
\end{proof}
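For definiteness: if $f : B' \to B$ is smooth and $\{\mu_i\}_{i \in I}$ is a smooth partition of unity subordinate to a $D$-open cover $\{B_i\}_{i \in I}$ of $B$ over which the bundle is trivial, then $\{\mu_i \circ f\}_{i \in I}$ is a smooth partition of unity subordinate to the $D$-open cover $\{f^{-1}(B_i)\}_{i \in I}$ of $B'$, and the pullback bundle is trivial over each $f^{-1}(B_i)$ by Proposition~\ref{prop:pullback}.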
We now show that one can assume that the indexing set is countable.
\begin{prop}\label{prop:countable-diff}
Let $\pi:E \to B$ be a $D$-numerable diffeological bundle.
Then there exists a countable smooth partition of unity $\{\mu_n:B \to \mathbb{R}\}_{n \in \mathbb{N}}$ subordinate to
a locally finite $D$-open cover $\{B_n\}_{n \in \mathbb{N}}$ of $B$ such that
$\pi|_{B_n}:\pi^{-1}(B_n) \to B_n$ is trivial for each $n$.
\end{prop}
\begin{proof}
Let $\{\rho_i:B \to \mathbb{R}\}_{i \in I}$ be a smooth partition of unity subordinate to a
$D$-open cover $\{U_i\}_{i \in I}$ of $B$ such that each
$\pi|_{U_i}$ is a trivial diffeological bundle.
By Lemma~\ref{lem:countable}, there is a countable smooth partition of unity
$\{ \tau_n : B \to \mathbb{R} \}_{n \in \mathbb{N}}$ such that each $B_n := \tau_n^{-1}((0, \infty))$ is a disjoint
union of $D$-open sets each of which is contained in $\rho_i^{-1}((0, \infty))$ for some $i$.
It follows from Lemma~\ref{lem:disjoint-union} that $\pi|_{B_n} : \pi^{-1}(B_n) \to B_n$ is trivial for each $n$.
By Lemma~\ref{lem:Bourbaki}, we can find another countable smooth partition of unity
$\{ \mu_n : B \to \mathbb{R} \}_{n \in \mathbb{N}}$ subordinate to $\{ B_n \}_{n \in \mathbb{N}}$,
which completes the argument.
\end{proof}
\subsection{$D$-numerable principal bundles}
\begin{de}
Let $G$ be a diffeological group. A principal $G$-bundle $\pi:E \to B$ is \dfn{$D$-numerable} if there
exists a smooth partition of unity $\{\mu_i:B \to \mathbb{R}\}_{i \in I}$ subordinate to a $D$-open cover $\{B_i\}_{i \in I}$ of $B$ such that each
$\pi|_{B_i}$ is a trivial principal $G$-bundle.
\end{de}
By Remark~\ref{rem:trivial-pb}, it is equivalent to require that $\pi$ is
$D$-numerable as a diffeological bundle.
Just as for diffeological bundles, we can assume that the indexing set is countable.
This will be used in the proof of Proposition~\ref{prop:theta-surjective}.
\begin{prop}\label{prop:countable}
Let $\pi:E \to B$ be a $D$-numerable principal $G$-bundle.
Then there exists a countable smooth partition of unity $\{\mu_n:B \to \mathbb{R}\}_{n \in \mathbb{N}}$ subordinate to
a locally finite $D$-open cover $\{B_n\}_{n \in \mathbb{N}}$ of $B$ such that
$\pi|_{B_n}:\pi^{-1}(B_n) \to B_n$ is trivial for each $n$.
\end{prop}
\begin{proof}
This follows from Proposition~\ref{prop:countable-diff}.
\end{proof}
Our next goal is to show that pulling back a $D$-numerable principal bundle along
homotopic maps gives isomorphic bundles.
While the general argument follows existing approaches from topology,
several key steps need novel proofs in order to work in the smooth setting.
\begin{lem}\label{lem:pullback-D-numerable-G}
The pullback of a $D$-numerable principal $G$-bundle is a $D$-numerable principal $G$-bundle.
\end{lem}
\begin{proof}
This is straightforward.
\end{proof}
\begin{prop}\label{prop:Dpb-on-product}
For every $D$-numerable principal $G$-bundle $\pi:E \to B \times \mathbb{R}$, there exists a $D$-open cover
$\{B_k\}_{k \in K}$ of $B$ together with a smooth partition of unity subordinate to it such that
$\pi|_{B_k \times [0,1]}:\pi^{-1}(B_k \times [0,1]) \to B_k \times [0,1]$ is trivial for each $k \in K$.
\end{prop}
This proof is based on the proof of~\cite[Lemma~4.9.5]{Hu}, with
the function $F$ from Proposition~\ref{prop:functional-testing-zero}
playing the role of the $\min$ function.
\begin{proof}
Let $\{\rho_i:B \times \mathbb{R} \to \mathbb{R}\}_{i \in I}$ be a smooth partition of unity
such that $\pi$ is trivial over each set $\rho_i^{-1}((0, \infty))$.
By Proposition~\ref{prop:functional-testing-zero}, there exists
a smooth map $F:C^\infty(\mathbb{R},\mathbb{R}^{\geq 0}) \to \mathbb{R}^{\geq 0}$ such that
$F(f)=0$ if and only if $f(s)=0$ for some $s \in [0,1]$.
For every $n \in \mathbb{Z}^{>0}$ and $k=(k(1),\ldots,k(n)) \in I^n$, define
$\hat{\rho}_k:B \to \mathbb{R}$ by $b \mapsto \prod_{i=1}^n F(\tilde{\rho}_{k(i)}(b))$, where
$\tilde{\rho}_{k(i)}:B \to C^\infty(\mathbb{R},\mathbb{R}^{\geq 0})$ is defined by
$\tilde{\rho}_{k(i)}(b)(s)=\rho_{k(i)}(b,\frac{2s+i-3/2}{n})$, using cartesian closedness of ${\mathfrak{D}\mathrm{iff}}$.
Write $B_k := \hat{\rho}_k^{-1}((0,\infty))$, which is $D$-open in $B$ since $\hat{\rho}_k$ is smooth.
Then
$b \in B_k$ if and only if $\{b\} \times [\frac{i-3/2}{n},\frac{i+1/2}{n}] \subseteq \rho_{k(i)}^{-1}((0, \infty))$
for each $i \in \{1,2,\ldots,n\}$,
which implies that $\pi$ is trivial on each $B_k \times (\frac{i-3/2}{n},\frac{i+1/2}{n})$; note that these $n$ open intervals successively overlap and cover $[0,1]$.
By~\cite[Lemma~1 in 8.19]{I2}, we see that $\pi|_{B_k \times [0,1]}:\pi^{-1}(B_k \times [0,1]) \to B_k \times [0,1]$ is
trivial.
Let $K = \cup_n I^n$ and
write $\mathfrak{B}=\{B_k\}_{k \in K}$. Since $[0,1]$ is compact, it is easy to see that for every
$b \in B$, there exists $l \in K$ such that $b \in B_l$, i.e., $\mathfrak{B}$ is a $D$-open cover of $B$.
By~\cite[Lemma~4.1]{CSW}, the $D$-topology on $B \times \mathbb{R}$ coincides with the product topology.
Fix $b \in B$ and $n \in \mathbb{N}$.
For $i = 1, \ldots, n$, there exist $D$-open sets $U_i \subseteq B$ and $V_i \subseteq \mathbb{R}$
such that $(b, i/n) \in U_i \times V_i$ and $U_i \times V_i$ intersects only finitely many
of the sets $\rho_j^{-1}((0,\infty))$ for $j \in I$.
Let $U := \cap_{i=1}^n \, U_i$, so the same properties hold for each $U \times V_i$.
For $k \in I^n$, $b \in B_k$ implies that $(b, i/n) \in
\{ b \} \times [\frac{i-3/2}{n},\frac{i+1/2}{n}] \subseteq \rho_{k(i)}^{-1}((0, \infty))$
for each $i \in \{1,2,\ldots,n\}$,
and so there are only finitely many $k \in I^n$ so that $U$ intersects $B_k$.
We next tweak the functions in order to make their supports locally finite as $n$ varies as well.
For each $r \in \mathbb{N}^{>1}$, write $\tau_r$ for the sum of all $\hat{\rho}_{k'}$ with $k' \in I^n$ and $n<r$,
and write $\tau_0 = \tau_1 = 0$.
Each $\tau_r : B \to \mathbb{R}$ is smooth, by the previous paragraph.
Fix a smooth function $\phi:\mathbb{R} \to \mathbb{R}$ with $\phi(t)=0$ for all $t \leq 0$ and
$\phi(t)>0$ for all $t>0$. For $k \in I^r$, define $\sigma_k:B \to \mathbb{R}$ by
$\sigma_k(b)=\phi(\hat{\rho}_k(b) - r \tau_r(b))$.
For fixed $b \in B$, we have a $\bar{k} \in I^{\bar{r}}$ with $\bar{r}$ minimal with respect to
the property that $\hat{\rho}_{\bar{k}}(b)>0$. From this, one obtains that
$\sigma_{\bar{k}}(b)=\phi(\hat{\rho}_{\bar{k}}(b) - \bar{r} \tau_{\bar{r}}(b)) = \phi(\hat{\rho}_{\bar{k}}(b)) > 0$.
On the other hand, let $m \in \mathbb{N}$ be such that $m > \bar{r}$ and $\hat{\rho}_{\bar{k}}(b) > 1/m$.
Since $\hat{\rho}_{\bar{k}}:B \to \mathbb{R}$ is smooth, there exists a $D$-open neighborhood $V$ of $b$ such that
for every $x \in V$, $\hat{\rho}_{\bar{k}}(x) > 1/m$. Then for any $l \geq m$, we have $l \tau_l(x) \geq m \tau_m(x) \geq m \hat{\rho}_{\bar{k}}(x) > 1$
for all $x \in V$,
i.e., $\sigma_k(x)=0$ for all $k \in I^l$ and $x \in V$.
Therefore, $\{\sigma_k^{-1}((0,\infty))\}_{k \in K}$ is locally finite.
Therefore, after scaling, the conditions in Lemma~\ref{lem:Bourbaki} hold for $\{\sigma_k\}_k$, and we get a smooth partition of unity
subordinate to $\{\sigma_k^{-1}((0,\infty))\}_k$. It is easy to check that $\sigma_k^{-1}((0,\infty)) \subseteq B_k$,
so we are done.
\end{proof}
\begin{prop}
Let $\pi : E \to B \times \mathbb{R}$ be a $D$-numerable principal $G$-bundle.
Define $p$ to be the pullback
\[
\xymatrix{E_1 \ar[r] \ar[d]_p & E \ar[d]^\pi \\ B \ar[r]_-i & B \times \mathbb{R}}
\]
where $i(b) = (b,1)$.
Then there exists an isomorphism of principal $G$-bundles:
\[
\xymatrix@C5pt{\pi^{-1}(B \times [0,1]) \ar[rr]^-\alpha \ar[dr]_{\pi|_{B \times [0,1]}} && E_1 \times [0,1] \ar[dl]^{p \times 1_{[0,1]}} \\
& B \times [0,1].}
\]
\end{prop}
\begin{proof}
We first show that there is a commutative diagram in ${\mathfrak{D}\mathrm{iff}}$
\begin{equation}\label{eq:goal1}
\cxymatrix{\pi^{-1}(B \times [0,1]) \ar[d]_{\pi|_{B \times [0,1]}} \ar[r]^f & \pi^{-1}(B \times [0,1]) \ar[d]^{\pi|_{B \times [0,1]}} \\
B \times [0,1] \ar[r]_r & B \times [0,1] ,}
\end{equation}
where $f$ is $G$-equivariant and $r(b,t) = (b,1)$.
By the previous proposition, there is a smooth partition of unity $\{\rho_k:B \to \mathbb{R}\}_{k \in K}$
subordinate to a $D$-open cover $\{B_k\}_{k \in K}$ of $B$
such that $\pi$ is trivial over $B_k \times [0,1]$ for each $k$.
As in the proof of Lemma~\ref{lem:Bourbaki},
define $\sigma : B \to \mathbb{R}$ by $\sigma(b) = \sum_k \rho_k(b)^2$.
Note that $\sigma$ is smooth, nowhere zero and
$\sigma(b) \leq \sup_k \rho_k(b)$.
Let $u_k(b) = \phi(\rho_k(b)/\sigma(b))$, where $\phi:\mathbb{R} \to \mathbb{R}$ is a smooth function
such that $\phi(t) = 0$ for $t \leq 0$, $\phi(t) = 1$ for $t \geq 1$ and $\im(\phi)=[0,1]$.
Then $\sup_k u_k(b) = 1$ for each $b$ and $\supp(u_k) \subseteq B_k$.
For each $k$, define $r_k : B \times [0,1] \to B \times [0,1]$ by
$r_k(b,t) = (b, H(u_k(b), t))$, where $H : [0,1] \times [0,1] \to [0,1]$ is
defined by $H(s,t) = (1-t) s + t$.
Note that if $u_k(b) = 0$, $r_k(b,t) = (b,t)$, so for any given $b$,
only finitely many $r_k$'s are not the identity.
Also, if $u_k(b) = 1$, then $r_k(b,t) = (b, 1)$.
Now choose a $G$-equivariant trivialization $h_k : B_k \times [0,1] \times G \to \pi^{-1}(B_k \times [0,1])$
and define a function $f_k : \pi^{-1}(B \times [0,1]) \to \pi^{-1}(B \times [0,1])$ over $r_k$ by setting
$f_k(h_k(b,t,g)) = h_k(r_k(b,t), g)$ for $b$ in $B_k$ and $f_k(x) = x$ otherwise. Then $f_k$ is $G$-equivariant.
Since $r_k$ is the identity outside of the support of $u_k$, $f_k$ is smooth.
Choose a well-ordering of the indexing set $K$.
Define $f : \pi^{-1}(B \times [0,1]) \to \pi^{-1}(B \times [0,1])$ to be the composite $f_{k_n} \circ \cdots \circ f_{k_1}$ on $\pi^{-1}(\{b\} \times [0,1])$,
where $\{k_1, \ldots, k_n\} = \{ k \in K \mid u_k(b) \neq 0 \}$ and $k_1 < \cdots < k_n$.
This respects the $G$-action, and lies over $r_{k_n} \circ \cdots \circ r_{k_1}$.
The latter composite sends $(b,t)$ to $(b,1)$, since at least one $r_{k_i}$ does,
and every $r_k$ sends $(b,1)$ to $(b,1)$.
It remains to show that $f$ is smooth, and it suffices to check this on an open cover.
For each $b$ in $B$, choose a $D$-open neighbourhood $U$ of $b$
so that $\{ k \in K \mid U \cap B_k \neq \emptyset \}$ is finite, enumerated as
$\{j_1, \ldots, j_n\}$ with $j_1 < \cdots < j_n$.
Then, on $\pi^{-1}(U \times [0,1])$, we have that $f$ is equal to the composite
$f_{j_n} \cdots f_{j_1}$,
since a map $f_j$ is the identity over $\{b\} \times [0,1]$ if $u_j(b) = 0$.
This shows that $f$ is locally smooth and therefore smooth.
Thus, we have the required diagram~\eqref{eq:goal1}.
Since $r$ factors through $i : B \to B \times [0,1]$ and $p$ is a pullback, we get
a commutative square
\[
\xymatrix{\pi^{-1}(B \times [0,1]) \ar[d]_{\pi|_{B \times [0,1]}} \ar[r] & E_1 \ar[d]^p \\
B \times [0,1] \ar[r]_-{p_1} & B ,}
\]
where $p_1$ is the projection.
By Proposition~\ref{prop:commsq=>pullbackdiff}, $\pi|_{B \times [0,1]}$ is isomorphic to the
pullback of $p$ along $p_1$, which is the product $p \times 1_{[0,1]}$, as required.
\end{proof}
\begin{cor}\label{cor:homotopy-pullback}
If $\pi : E' \to B'$ is a $D$-numerable principal $G$-bundle,
and $f$ and $g$ are smoothly homotopic maps $B \to B'$,
then the pullbacks $f^*(\pi)$ and $g^*(\pi)$ are isomorphic as principal $G$-bundles over $B$.
\end{cor}
\begin{proof}
Let $F:B \times \mathbb{R} \to B'$ be a smooth homotopy between $f$ and $g$.
Then $F^*(\pi)$ is a $D$-numerable principal $G$-bundle over $B \times \mathbb{R}$ by
Lemma~\ref{lem:pullback-D-numerable-G}.
By the previous proposition, $F^*(\pi)$ is isomorphic
to a product $E_1 \times [0,1] \to B \times [0,1]$ for a certain principal $G$-bundle
$p : E_1 \to B$.
Thus the restrictions to $B \times \{0\}$ and $B \times \{1\}$ are both isomorphic to $p$.
\end{proof}
Recall that we saw in Section~\ref{se:no-classifying} that this property does not
hold for an arbitrary principal $G$-bundle.
\begin{cor}
If $\pi : E' \to B'$ is a $D$-numerable diffeological bundle,
and $f$ and $g$ are smoothly homotopic maps $B \to B'$,
then the pullbacks $f^*(\pi)$ and $g^*(\pi)$ are isomorphic as diffeological bundles over $B$.
\end{cor}
\begin{proof}
This follows from~\cite[8.16]{I2} (see Section~\ref{se:classify-bundle}) and
Corollary~\ref{cor:homotopy-pullback}.
\end{proof}
\section{Classifying $D$-numerable principal bundles}
\label{se:classify-Dpb}
In this section, which forms the heart of the paper,
we construct a classifying space for all $D$-numerable principal bundles.
Let $G$ be a diffeological group with identity $e$. Consider the infinite simplex
\[
\Delta^\omega := \{(t_0,t_1,\ldots) \in \oplus_\omega \mathbb{R} \,\mid\, \sum_{i=0}^\infty t_i = 1 \text{ and } t_i \geq 0 \text{ for each } i\},
\]
equipped with the sub-diffeology of $\oplus_\omega \mathbb{R}$,
where $\oplus_\omega \mathbb{R}$ is the coproduct of countably many
copies of $\mathbb{R}$ in ${\mathfrak{D}\mathrm{Vect}}$ (see~\cite[Proposition~3.2]{Wu}).
Explicitly, a function $t : U \to \Delta^{\omega}$ is a plot if and only if each
component function $t_i : U \to \mathbb{R}$ is smooth
and for each $u \in U$ there are an open neighbourhood $V$ of $u$ and $n \in \mathbb{N}$
such that $t_i(v) = 0$ for all $v \in V$ and $i > n$.
Put another way, any such plot $t$ locally lands in
\[
\Delta^n := \{(t_0,t_1,\ldots,t_n) \in \mathbb{R}^{n+1} \,\mid\, \sum_{i=0}^n t_i = 1 \text{ and } t_i \geq 0 \text{ for each } i\}
\]
for some $n$, where $\Delta^n$ has the sub-diffeology of $\mathbb{R}^{n+1}$, and is
also naturally a diffeological subspace of $\Delta^{\omega}$.
On $\Delta^\omega \times \prod_\omega G$,
define $(t_i,g_i) \sim (t_i',g_i')$ if the following conditions are satisfied:
\begin{enumerate}
\item $t_i=t_i'$ for each $i \in \omega$;
\item if $t_i = t_i' \neq 0$, then $g_i=g_i'$.
\end{enumerate}
This is an equivalence relation on $\Delta^\omega \times \prod_\omega G$, and we write $EG$ for the
quotient diffeological space and $[t_i, g_i]_{EG}$ or simply $[t_i, g_i]$ for an equivalence class.
Now we consider group actions. Define $(\Delta^\omega \times \prod_\omega G) \times G \to \Delta^\omega \times \prod_\omega G$
by $((t_i,g_i),g) \mapsto (t_i,g_i g)$. Note that this is smooth and compatible with the equivalence relation $\sim$, and hence induces a
smooth right action $EG \times G \to EG$. It is easy to see that this action is free, i.e., $[t_i,g_i] \cdot g = [t_i,g_i]$
implies that $g=e$. We write $BG$ for the corresponding orbit space with the quotient diffeology
and write elements in $BG$ as $[t_i,g_i]_{BG}$
or simply $[t_i,g_i]$ if no confusion will occur.
Both $E$ and $B$ are functors from the
category of diffeological groups and smooth group homomorphisms to ${\mathfrak{D}\mathrm{iff}}$.
Our first goal is to show that the quotient map $\pi : EG \to BG$ is a $D$-numerable
principal $G$-bundle. This requires a lemma that we will use implicitly in various
places, and a remark.
\begin{lem}
The function $f_i : BG \to \mathbb{R}$ sending $[t_j,g_j]$ to $t_i$ is smooth for each $i$.
\end{lem}
\begin{proof}
It suffices to show that the composite $\Delta^{\omega} \times \prod_{\omega} G \to EG \to BG \to \mathbb{R}$ is smooth,
where the first two maps are the quotient maps and the third map is $f_i$.
This composite is equal to the composite
$\Delta^{\omega} \times \prod_{\omega} G \to \Delta^{\omega} \hookrightarrow \oplus_{\omega} \mathbb{R} \to \mathbb{R}$,
where the first map is the projection, the second is the inclusion, and the third is projection onto the
$i^{th}$ summand, all of which are smooth.
\end{proof}
\begin{rem}\label{rem:EG_n}
Any plot $p : U \to EG$ locally factors through the quotient map
$\Delta^{\omega} \times \prod_{\omega} G \to EG$.
Therefore, by the description of the diffeology on $\Delta^{\omega}$, it
locally lands in $\Delta^n \times \prod_{\omega} G$ for some $n$.
This lift can be adjusted so that its values $(t_i, g_i)$ have
$g_i = e$ for $i > n$, which means that it factors through the natural map
from $\Delta^n \times G^{n+1}$.
In particular, if we let $EG_n \subseteq EG$ consist of those points $[t_i, g_i]$ with $t_i = 0$ for all $i > n$,
then $p$ locally factors through $EG_n$ for some $n$.
Similarly, a plot $p : U \to BG$ locally factors through $\Delta^n \times G^{n+1}$ for some $n$.
In particular, if we define $BG_n \subseteq BG$ analogously, $p$ locally factors through
$BG_n$ for some $n$.
These facts can also be phrased as saying that $EG = \colim EG_n$ and $BG = \colim BG_n$,
where the colimits are in the category of diffeological spaces.
It follows from this and~\cite[Lemmas~3.17 and~4.1]{CSW} that if $D(G)$ is locally compact
Hausdorff, then $D(BG) \cong B_{\textrm{Top}}(D(G))$, where the right-hand-side denotes the
usual classifying space construction applied to the topological group $D(G)$.
\end{rem}
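For example, in the lowest case, $EG_0$ is diffeomorphic to $G$ via $g \mapsto [(1,0,0,\ldots),(g,e,e,\ldots)]$, and $BG_0$ is a single point.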
\begin{thm}\label{thm:EG->BG-D-numerable-principal}
The quotient map $\pi:EG \to BG$ is a $D$-numerable principal $G$-bundle.
\end{thm}
\begin{proof}
We begin by showing that $\pi$ is a principal $G$-bundle.
By Theorem~\ref{thm:principal}, it is enough to show that $EG \times G \to EG \times EG$ defined by
$([t_i,g_i],g) \mapsto ([t_i,g_i],[t_i,g_ig])$ is an induction. It is easy to see that it is injective. Assume that we have a
commutative triangle in ${\mathfrak{S}\mathrm{et}}$
\[
\xymatrix{& U \ar[dl]_{(\theta,\tau)} \ar[d]^{(\alpha,\beta)} \\ EG \times G \ar[r] & EG \times EG}
\]
with $\alpha$ and $\beta$ smooth. Then $\theta = \alpha$ and $\beta(u) = \alpha(u) \cdot \tau(u)$ for each $u \in U$. We
are left to show that $\tau:U \to G$ is smooth. By working locally, we may assume that
$\alpha(u) = [t(u),g^\alpha(u)]$ and $\beta(u) = [t(u),g^\beta(u)]$ for smooth maps
$t: U \to \Delta^\omega$ and $g^\alpha,g^\beta:U \to \prod_{\omega} G$.
Whenever $t_i(u) \neq 0$ we have $g^\alpha_i(u) \cdot \tau(u) = g^\beta_i(u)$.
Note that $\{U_i := \{u \in U \mid t_i(u) \neq 0\}\}_{i \in \omega}$ is an open cover of $U$, and
$\tau|_{U_i}:U_i \to G$ satisfies $\tau|_{U_i}(u) = (g^\alpha_i(u))^{-1} \cdot g^\beta_i(u)$ and hence is smooth.
Therefore, $\tau:U \to G$ is smooth.
Now we show that $\pi$ is $D$-numerable.
Let $B_i := \{[t_j,g_j] \in BG \mid t_i > 1/2^{i+2}\}$. Then $B_i$ is $D$-open in $BG$. Since
$\sum_{i=0}^\infty 1/2^{i+2} = 1/2 < 1$, $\cup_{i=0}^\infty \, B_i = BG$.
We claim that this $D$-open cover is locally finite. For any $[t_j,g_j] \in BG$, choose $N$ so that $t_i=0$ for all $i>N$.
Let $B := \{[t_s,g_s] \in BG \mid t_i < 1/2^{i+2} \text{ for all } i>N\}$. Then $[t_j,g_j] \in B$ and $B$ only intersects
finitely many $B_i$'s. We are left to show that $B$ is $D$-open. Let $p:U \to BG$ be a plot.
By Remark~\ref{rem:EG_n}, we can replace $U$ by a smaller open subset so
that there exist $n \in \mathbb{N}$ and a smooth map $U \to \Delta^n \times G^{n+1}$ such that the following
diagram commutes:
\[
\xymatrix{U \ar[d] \ar[dr]^p \\ \Delta^n \times G^{n+1} \ar[r] & BG.}
\]
Since the preimage of $B$ in $\Delta^n \times G^{n+1}$ under the horizontal map in the above diagram
is $D$-open, as it is a finite intersection of $D$-open subsets, $p^{-1}(B)$ is open in $U$. Hence $B$ is $D$-open in $BG$.
Fix any smooth function $\rho:\mathbb{R} \to \mathbb{R}$ such that $\rho(t)=0$ for all $t \leq 0$ and $\rho$ is strictly increasing on
$(0,\infty)$. Let $\rho_i:\mathbb{R} \to \mathbb{R}$ be defined by $\rho_i(t)=\rho(t-1/2^{i+1})$. Define $\tau_i:BG \to \mathbb{R}$ by
\[
\tau_i([t_j,g_j])=\frac{\rho_i(t_i)}{\sum_{j=0}^\infty \rho_j(t_j)}.
\]
Every plot $q:W \to BG$ locally lands in some $BG_n$, and so the denominator above
is locally a finite sum.
Since $\sum_{i=0}^n 1/2^{i+1} < 1$ for each $n$, the denominator is never zero and
$\tau_i$ is smooth. Moreover, $\sum_{i=0}^\infty \tau_i=1$ and
$\supp(\tau_i) \subseteq \{[t_j,g_j] \in BG \mid t_i \geq 1/2^{i+1}\} \subseteq B_i$. Since $\{B_i\}_{i \in \omega}$ is locally finite,
so is $\{\supp(\tau_i)\}_{i \in \omega}$. So we get a smooth partition of unity $\{\tau_i:BG \to \mathbb{R}\}$ subordinate to the open cover
$\{B_i\}_{i \in \omega}$ of $BG$.
Define $s : B_i \to EG$ by sending $[t_j, g_j]$ to $[t_j, g_j g_i^{-1}]$.
It is straightforward to see that $s$ is a well-defined smooth section
of $\pi$ over $B_i$ and so it follows from Remark~\ref{rem:trivial-pb}
that $\pi|_{B_i}$ is trivial for each $i$.
Therefore, $\pi:EG \to BG$ is $D$-numerable.
\end{proof}
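Concretely, as in Remark~\ref{rem:trivial-pb}, the section $s$ corresponds to the $G$-equivariant trivialization $B_i \times G \to \pi^{-1}(B_i)$ sending $([t_j,g_j], g)$ to $[t_j, g_j g_i^{-1} g]$, which is well-defined because $t_i > 0$ on $B_i$.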
The next result will imply that $EG$ is contractible and is a key
step in proving that $\pi : EG \to BG$ is a universal $D$-numerable bundle.
\begin{prop}\label{prop:EG-subterminal}
Let $E$ be any diffeological space with a right $G$ action,
and let $h_0, h_1 : E \to EG$ be $G$-equivariant maps.
Then there is a smooth $G$-equivariant homotopy $h_0 \simeq h_1$.
\end{prop}
By a $G$-equivariant homotopy, we mean a homotopy through $G$-equivariant maps.
\begin{proof}
Fix a smooth function $\rho:\mathbb{R} \to \mathbb{R}$ such that there exists $\epsilon >0$ with $\rho(t)=0$ if $t < \epsilon$,
$\rho(t)=1$ if $t > 1 - \epsilon$, and $\im(\rho) = [0,1]$.
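For concreteness, one such function is
\[
\rho(t) = \frac{\phi(t-\epsilon)}{\phi(t-\epsilon)+\phi(1-\epsilon-t)}, \qquad \text{where } \phi(t) = \begin{cases} e^{-1/t}, & \text{if $t>0$,} \\ 0, & \text{if $t\leq 0$,} \end{cases}
\]
for any fixed $0<\epsilon<1/2$; the particular choice plays no role in what follows.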
Define $H^{\od} :EG \times \mathbb{R} \to EG$ by sending $([t_i, g_i], t)$ to $[t_i', g_i']$
defined as follows.
If $t \leq 0$, then $[t_i', g_i'] = [t_i, g_i]$.
If $t$ is in the interval $\left[ \frac{1}{n+1}, \frac{1}{n} \right)$ for $n \in \mathbb{N}^{>0}$,
then
\begin{align*}
t_i' &= \begin{cases}
\hfill t_i,\hspace{12pt} & \text{if $i < n$,} \\
(1 - \alpha(t)) t_{n+j}, & \text{if $i = n + 2j$ for $j \in \mathbb{N}$,} \\
\hfill \alpha(t) \, t_{n+j}, & \text{if $i = n + 2j + 1$ for $j \in \mathbb{N}$,} \\
\end{cases}\\
\intertext{where}
\alpha(t) &= \rho \! \left( \frac{t-\frac{1}{n+1}}{\frac{1}{n}-\frac{1}{n+1}} \right), \\
\intertext{and}
g_i' &= \begin{cases}
g_i, & \text{if $i < n$,} \\
g_{n+j}, & \text{if $i = n + 2j$ for $j \in \mathbb{N}$,} \\
g_{n+j}, & \text{if $i = n + 2j + 1$ for $j \in \mathbb{N}$.} \\
\end{cases}
\end{align*}
If $t \geq 1$, then $t_{2j}' = t_j$, $t_{2j+1}' = 0$, $g_{2j}' = g_j$ and $g_{2j+1}' = e$ for $j \in \mathbb{N}$.
Although $g_i$ is not well-defined when $t_i = 0$, $H^{\od}$ is well-defined.
Also, $H^{\od}|_{t=0} = 1_{EG}$ and $H^{\od}|_{t=1}$ lands in the
subset $EG^{\od} := \{ [t_i, g_i] \in EG \mid t_i = 0 \text{ for $i$ odd}\}$.
One can see that $H^{\od}$ is smooth, using that
every plot of $EG$ locally factors through $\Delta^n \times G^{n+1}$ for some $n$
(Remark~\ref{rem:EG_n}).
Also, $H^{\od}$ is a homotopy through $G$-equivariant maps.
It follows that $h_0$ is $G$-equivariantly homotopic to a map $h_0'$ landing in $EG^{\od}$.
Similarly, we can show that $h_1$ is $G$-equivariantly homotopic to a map $h_1'$ landing
in $EG^{\ev} := \{ [t_i, g_i] \in EG \mid t_i = 0 \text{ for $i$ even}\}$.
Now define $H : E \times \mathbb{R} \to EG$ as follows. Given $(x,t) \in E \times \mathbb{R}$,
suppose $h_s'(x) = [t^s_i, g^s_i]$ for $s = 0, 1$.
Define $H(x,t)$ to be $[t_i, g_i]$, where
\begin{align*}
t_i &= \begin{cases}
(1 - \rho(t)) t^0_i, & \text{if $i$ is even,} \\
\hfill \rho(t) \, t^1_i, & \text{if $i$ is odd,} \\
\end{cases}\\
g_i &= \begin{cases}
\hspace*{44pt} g^0_i, & \text{if $i$ is even,} \\
\hspace*{44pt} g^1_i, & \text{if $i$ is odd.} \\
\end{cases}
\end{align*}
Although $g^s_i$ is not well-defined when $t^s_i = 0$, $H(x,t)$ is well-defined.
In fact, by Remark~\ref{rem:EG_n}, we can locally make smooth choices of
representatives $g^s_i$, which shows that $H$ is smooth.
Since $h_0'$ and $h_1'$ are $G$-equivariant, so is $H$.
And clearly $H$ is a homotopy between $h_0'$ and $h_1'$,
which shows that $h_0$ and $h_1$ are smoothly $G$-equivariantly homotopic.
\end{proof}
\begin{cor}\label{cor:EG-contractible}
For any diffeological group $G$, $EG$ is smoothly contractible.
\end{cor}
\begin{proof}
Let $B$ be any diffeological space. Then smooth maps $B \to EG$ biject
with $G$-equivariant maps $B \times G \to EG$.
Given two smooth maps $f_0, f_1 : B \to EG$, the associated maps
$B \times G \to EG$ are smoothly homotopic, by Proposition~\ref{prop:EG-subterminal}.
Restricting to $e \in G$ gives a smooth homotopy $f_0 \simeq f_1$.
Therefore, $EG$ is smoothly contractible.
\end{proof}
\begin{rem}
Since every diffeological group is fibrant (\cite[Proposition~4.30]{CW1}),
and every diffeological bundle with fibrant fiber is a fibration (\cite[Proposition~4.28]{CW1}),
we know that $\pi:EG \to BG$ is always a fibration.
Also, by the long exact sequence of smooth homotopy groups of a diffeological bundle (\cite[8.21]{I2}) together with
Corollary~\ref{cor:EG-contractible}, we have a group isomorphism $\pi_{n+1}^D(BG,b) \cong \pi_n^D(G,e)$ for every
$n \in \mathbb{N}$ and $b \in BG$.
In addition, $BG$ is path-connected. Indeed, given a point $[t_i, g_i]$ in $BG$,
choose a path in the infinite simplex from $(t_i)$ to $(1, 0, 0, \ldots)$.
This gives a path in $BG$ from $[t_i, g_i]$ to $[(1, 0, 0, \ldots), (g_0, g_1, \ldots)] = [(1, 0, 0, \ldots), (e, e, \ldots)]$.
\end{rem}
\begin{de}
Let $G$ be a diffeological group and let $B$ be a diffeological space.
Write $\Prin_G(B)$ (resp.\ $\Prin_G^D(B)$) for the set of all (resp.\ $D$-numerable) principal $G$-bundles over $B$
modulo isomorphism of principal $G$-bundles.
Let
\[
\theta:[B,BG] \to \Prin_G^D(B)
\]
be defined by $[f] \mapsto f^*(\pi:EG \to BG)$.
This is well-defined by Corollary~\ref{cor:homotopy-pullback}.
\end{de}
The final goal of this section is to prove that $\theta$ is a bijection for every $B$.
We break the proof into two propositions.
\begin{prop}\label{prop:theta-injective}
The map $\theta:[B,BG] \to \Prin_G^D(B)$ is injective.
\end{prop}
\begin{proof}
Let $f_0,f_1:B \to BG$ be smooth maps such that $f_0^*(\pi:EG \to BG)$ and $f_1^*(\pi:EG \to BG)$ are isomorphic principal
$G$-bundles over $B$.
Say they are isomorphic to the principal $G$-bundle $p:E \to B$.
Then there exist smooth maps $h_0,h_1:E \to EG$ making the following diagrams commutative:
\[
\begin{minipage}[b]{0.5\linewidth}
\xymatrix{E \ar[r]^-{h_0} \ar[d]_p & EG \ar[d]^\pi \\ B \ar[r]_-{f_0} & BG}
\end{minipage}
\hspace{2cm}
\begin{minipage}[b]{0.5\linewidth}
\xymatrix{E \ar[r]^-{h_1} \ar[d]_p & EG \ar[d]^\pi \\ B \ar[r]_-{f_1} & BG.}
\end{minipage}
\]
By Proposition~\ref{prop:EG-subterminal}, there is a smooth $G$-equivariant homotopy $H$
between $h_0$ and $h_1$.
By $G$-equivariance, $H$ induces a smooth homotopy $f_0 \simeq f_1$.
\end{proof}
\begin{prop}\label{prop:theta-surjective}
The map $\theta:[B,BG] \to \Prin_G^D(B)$ is surjective.
\end{prop}
\begin{proof}
Let $p:E \to B$ be a $D$-numerable principal $G$-bundle. By Proposition~\ref{prop:commsq=>pullbackdiff}, it is enough
to show that there exist a $G$-equivariant smooth map $f:E \to EG$ and a smooth map $g:B \to BG$ making the following diagram
commutative:
\[
\xymatrix{E \ar[d]_p \ar[r]^f & EG \ar[d]^\pi \\ B \ar[r]_g & BG.}
\]
By Proposition~\ref{prop:countable}, there exists a countable smooth partition of unity $\{\tau_n:B \to \mathbb{R}\}_{n \in \mathbb{N}}$
subordinate to a locally finite $D$-open cover $\{B_n\}_{n \in \mathbb{N}}$ of $B$ such that
$p:p^{-1}(B_n) \to B_n$ is trivial for each $n$.
Let $h_n:B_n \times G \to p^{-1}(B_n)$ be a $G$-equivariant trivialization over $B_n$,
and let $q_n:B_n \times G \to G$ be the projection.
Define $f:E \to EG$ by $x \mapsto [\tau_i(p(x)),q_i(h_i^{-1}(x))]$.
Note that whenever $h_i^{-1}(x)$ is undefined, $\tau_i(p(x)) = 0$, and we define $q_i(h_i^{-1}(x)) = e$.
Hence, $f$ is well-defined. It is easy to check that $f$ is $G$-equivariant, and therefore induces a function
$g:B \to BG$ making the required square commutative. So we are left to show that $f$ is smooth.
Since $\{p^{-1}(B_n)\}_{n \in \mathbb{N}}$ is a locally finite $D$-open cover of $E$, for every $x \in E$,
there exists a $D$-open subset $V$ of $x$ in $E$ such that $V$ only intersects $p^{-1}(B_{i_1}),\ldots,p^{-1}(B_{i_s})$
for a finite subset $I_V := \{i_1,\ldots,i_s\} \subset \mathbb{N}$. Then $I_V$ is a disjoint union of $I_V'$ and $I_V''$ with
$x \in p^{-1}(B_i)$ for every $i \in I_V'$ and $x \notin p^{-1}(B_j)$ for every $j \in I_V''$.
Then $E_x := V \cap (\cap_{i \in I_V'} p^{-1}(B_i)) \cap (\cap_{j \in I_V''} E \setminus p^{-1}(\supp(\tau_j)))$ is a
$D$-open neighborhood of $x$ in $E$. By definition of $f$ and $EG$, it is clear that $f|_{E_x}$ is smooth,
and therefore $f$ is smooth.
\end{proof}
In summary, we have proved:
\begin{thm}\label{thm:classify-principal}
For any diffeological space $B$ and any diffeological group $G$, there is a bijection $\theta : [B,BG] \to \Prin_G^D(B)$
which is natural in $B$.
\end{thm}
The naturality of $\theta$ with respect to $G$ will be explained in Theorem~\ref{thm:naturality-G} in the next section.
\begin{ex}
For any smoothly contractible diffeological space $B$, the only $D$-numerable principal bundle over $B$ is the trivial bundle.
For example, this applies when $B$ is an indiscrete diffeological space or a diffeological vector space.
\end{ex}
As an immediate consequence of the above theorem, we have:
\begin{cor}
Classifying spaces are unique up to smooth homotopy, in the sense that if a diffeological space $X$ has the property that
there is a bijection $[B,X] \to \Prin_G^D(B)$ which is natural in $B$, then $X$ is smoothly homotopy equivalent to $BG$.
\end{cor}
Note that this corollary uses the fact that we classify certain bundles over \emph{all} diffeological spaces.
We use this to calculate some examples of $BG$:
\begin{prop}
Let $V$ be a diffeological vector space, and let $G$ be an additive subgroup. Assume that the principal bundle
$V \to V/G$ is $D$-numerable. Then $BG$ is smoothly homotopy equivalent to $V/G$. In particular,
$B V$ is smoothly contractible and
$B \mathbb{Z}^n$ is smoothly homotopy equivalent to $T^n = (S^1)^n$.
\end{prop}
\begin{proof}
By the universality of $EG \to BG$ (Theorem~\ref{thm:classify-principal}), we get a $G$-equivariant smooth map $f:V \to EG$.
On the other hand, we can define $g:EG \to V$ by sending $[t_i,g_i]$ to $\sum_i t_i g_i$.
It is straightforward to see that $g$ is well-defined, smooth, and $G$-equivariant.
By Proposition~\ref{prop:EG-subterminal}, we know that $f \circ g$ is $G$-equivariantly smoothly homotopic to $1_{EG}$.
Since every $G$-equivariant smooth map $h:V \to V$ is $G$-equivariantly smoothly homotopic to $1_V$ via the affine homotopy
$F(v,t) := t h(v) + (1-t) v$, we know that $g \circ f$ is $G$-equivariantly smoothly homotopic to $1_V$.
Therefore, $EG$ is $G$-equivariantly smoothly homotopy equivalent to $V$.
It follows that $BG$ is smoothly homotopy equivalent to $V/G$.
Taking $G = V$, we have that $V \to V/G = *$ is a $D$-numerable principal $V$-bundle
and so $B V$ is smoothly homotopy equivalent to a point.
To see that $B \mathbb{Z}^n$ is smoothly homotopy equivalent to $T^n$,
take $V = \mathbb{R}^n$ and observe that we have a $D$-numerable principal
$\mathbb{Z}^n$-bundle $\mathbb{R}^n \to \mathbb{R}^n/\mathbb{Z}^n \cong T^n$.
\end{proof}
On $\oplus_\omega \mathbb{R}$, we have a smooth inner product defined by $\langle (x_i),(y_i) \rangle = \sum_i x_i y_i$.
Let $S^\infty$ be the subspace of $\oplus_\omega \mathbb{R}$ consisting of the elements of norm $1$.
The discrete multiplicative group $\mathbb{Z}/2 = \{\pm 1\}$ acts on $S^\infty$ by $(x_i) \cdot (-1) = (-x_i)$.
Write $\mathbb{R} P^\infty$ for the orbit space.
Identifying $\oplus_{\omega} \mathbb{R}$ with $\oplus_{\omega} \mathbb{C}$, $S^{\infty}$ can also be thought of
as the unit vectors in $\oplus_{\omega} \mathbb{C}$.
Therefore, the Lie group $S^1$ acts on $S^\infty$ by pointwise multiplication.
Write $\mathbb{C} P^\infty$ for the orbit space.
\begin{prop}\
\begin{enumerate}
\item $B \mathbb{Z}/2$ is smoothly homotopy equivalent to $\mathbb{R} P^\infty$.
\item $B S^1$ is smoothly homotopy equivalent to $\mathbb{C} P^\infty$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) We first show that the quotient map $p:S^\infty \to \mathbb{R} P^\infty$ is a $D$-numerable principal $\mathbb{Z}/2$-bundle.
Let $U_j := \{[x_i] \in \mathbb{R} P^\infty \mid |x_j| > 1/(2j+2)\}$. Then $\{U_j\}_{j \in \omega}$ is a $D$-open cover
of $\mathbb{R} P^\infty$. Define $\mu_j:\mathbb{R} P^\infty \to \mathbb{R}$ by
\[
\mu_j([x_i]) = \begin{cases} \exp(\frac{-1}{|x_j| - 1/(2j+2)}), & \textrm{if $|x_j| > 1/(2j+2)$,} \\ 0, & \textrm{else.} \end{cases}
\]
Then $\mu_j$ is smooth, and $\mu_j^{-1}((0,\infty)) = U_j$.
By an argument similar to the proof of Theorem~\ref{thm:EG->BG-D-numerable-principal},
one can show that $\{U_j\}_{j \in \omega}$ is locally finite.
By Lemma~\ref{lem:Bourbaki}, there is a smooth partition of unity subordinate to $\{U_j\}_{j \in \omega}$.
It is straightforward to check that $p|_{U_j}$ is trivial for each $j$.
Therefore, $p:S^\infty \to \mathbb{R} P^\infty$ is a $D$-numerable principal $\mathbb{Z}/2$-bundle.
By the universality of $E \mathbb{Z}/2 \to B \mathbb{Z}/2$, we have a $\mathbb{Z}/2$-equivariant smooth map $f:S^\infty \to E \mathbb{Z}/2$.
Now define $g:E \mathbb{Z}/2 \to S^\infty$ by $g([t_i,g_i]) = (\frac{ g_i t_i }{ \sqrt{\sum_i t_i^2}})$.
It is smooth and $\mathbb{Z}/2$-equivariant. By Proposition~\ref{prop:EG-subterminal}, we know that $f \circ g$ is
$\mathbb{Z}/2$-equivariantly smoothly homotopic to $1_{E(\mathbb{Z}/2)}$.
Next we show that $1_{S^\infty}$ is $\mathbb{Z}/2$-equivariantly smoothly homotopic to both $i_{\ev}$ and $i_{\od}$.
Here $i_{\ev}:S^\infty \to S^\infty$ sends $(x_i)$ to $(y_i)$ with $y_{2i} = x_i$ and $y_{2i+1}=0$,
and similarly, $i_{\od}:S^\infty \to S^\infty$ sends $(x_i)$ to $(y_i)$ with $y_{2i+1} = x_i$ and $y_{2i}=0$.
We show $1_{S^\infty} \simeq_{\mathbb{Z}/2} i_{\ev}$ below, and the other case is similar.
Fix a smooth function $\rho:\mathbb{R} \to \mathbb{R}$ such that there exists $\epsilon > 0$ with $\rho(t) = 0$ if $t < \epsilon$,
$\rho(t) = 1$ if $t > 1 - \epsilon$, and $\im(\rho) = [0,1]$.
Define $H:S^\infty \times \mathbb{R} \to S^\infty$ by sending $((x_i),t)$ to $(y_i)$.
When $t \leq 0$, $y_i = x_i$. When $t \in [1/(n+1),1/n)$,
\[
y_i = \begin{cases} x_i, & \textrm{if $i < n$,} \\ \cos(2 \pi \alpha(t)) \, x_{n+j}, & \textrm{if $i = n+2j$ for $j \in \mathbb{N}$,} \\
\sin(2 \pi \alpha(t)) \, x_{n+j}, & \textrm{if $i = n+2j+1$ for $j \in \mathbb{N}$,} \end{cases}
\]
where
\[
\alpha(t) = \rho \left( \frac{t - \frac{1}{n+1}}{\frac{1}{n} - \frac{1}{n+1}} \right).
\]
When $t \geq 1$, $y_{2i} = x_i$ and $y_{2i+1} = 0$. Since $H$ is smooth and $\mathbb{Z}/2$-equivariant,
we have $1_{S^\infty} \simeq_{\mathbb{Z}/2} i_{\ev}$.
Given any $\mathbb{Z}/2$-equivariant smooth map $h:S^\infty \to S^\infty$, define $K:S^\infty \times \mathbb{R} \to S^\infty$
by sending $(x,t)$ to $\cos(2 \pi \alpha(t)) \, i_{\ev}(h(x)) + \sin(2 \pi \alpha(t)) \, i_{\od}(x)$.
Since $K$ is smooth and $\mathbb{Z}/2$-equivariant, we have $i_{\ev} \circ h \simeq_{\mathbb{Z}/2} i_{\od}$.
So we have $h \simeq_{\mathbb{Z}/2} i_{\ev} \circ h \simeq_{\mathbb{Z}/2} i_{\od} \simeq_{\mathbb{Z}/2} 1_{S^\infty}$.
Hence, $g \circ f$ is $\mathbb{Z}/2$-equivariantly smoothly homotopic to $1_{S^\infty}$.
Therefore, $B \mathbb{Z}/2$ is smoothly homotopy equivalent to $\mathbb{R} P^\infty$.
(2) This can be proved similarly, by considering the $D$-numerable principal $S^1$-bundle $S^\infty \to \mathbb{C} P^\infty$.
\end{proof}
We also have:
\begin{prop}
Let $G$ and $H$ be diffeological groups.
Then $B(G \times H)$ and $BG \times BH$ are smoothly homotopy equivalent.
\end{prop}
\begin{proof}
There is a natural $(G \times H)$-equivariant smooth map $g : E(G \times H) \to EG \times EH$
defined by sending $[t_i, (g_i, h_i)]$ to $([t_i, g_i], [t_i, h_i])$.
The $D$-topology of a product is not the same as the product of the $D$-topologies
in general. Nevertheless, if $U$ is $D$-open in $BG$ and $V$ is $D$-open in $BH$,
then $U \times V$ is $D$-open in $BG \times BH$.
Moreover, if $\{ \sigma_i \}_{i \in I}$ and $\{ \tau_j \}_{j \in J}$ are
smooth partitions of unity for $BG$ and $BH$ respectively, then
$\{ \rho_{ij} \}_{(i,j) \in I \times J}$ is a partition of unity for $BG \times BH$,
where $\rho_{ij}(x,y) := \sigma_i(x) \tau_j(y)$.
It follows that $EG \times EH \to BG \times BH$ is a $D$-numerable $G \times H$
principal bundle.
Therefore, we have a $(G \times H)$-equivariant smooth map $f : EG \times EH \to E(G \times H)$.
By Proposition~\ref{prop:EG-subterminal}, we know that $f \circ g$ is
$(G \times H)$-equivariantly smoothly homotopic to $1_{E(G \times H)}$.
If $X$ has a $(G \times H)$-action, then a $(G \times H)$-equivariant map
$X \to EG \times EH$ is the same as a $G$-equivariant map $X \to EG$
and an $H$-equivariant map $X \to EH$.
Therefore, using Proposition~\ref{prop:EG-subterminal} on each factor,
we conclude that any two $(G \times H)$-equivariant maps $X \to EG \times EH$
are $(G \times H)$-equivariantly smoothly homotopic to each other.
In particular, $g \circ f$ is $(G \times H)$-equivariantly smoothly homotopic
to $1_{EG \times EH}$.
The claim follows.
\end{proof}
\section{Classifying $D$-numerable diffeological bundles}
\label{se:classify-bundle}
For diffeological spaces $B$ and $F$, write $\Bun_F(B)$ (resp.\ $\Bun_F^D(B)$) for the set of isomorphism classes of
all (resp.\ $D$-numerable) diffeological bundles over $B$ with fiber $F$.
This is a functor of $B$ under pullback of bundles.
It was shown in~\cite[8.16]{I2} that given a principal $G$-bundle $r:E \to B$ and a diffeological
space $F$ with a left $G$-action, we can form an associated diffeological bundle $t:E \times_G F \to B$ with fiber $F$.
Here $E \times_G F := (E \times F)/{\sim}$, where $(y,f) \sim (y \cdot g,\, g^{-1} \cdot f)$ for all $g \in G$, and $t([y,f]) = r(y)$.
Moreover, if $r$ is trivial (as a principal $G$-bundle), then so is $t$ (as a diffeological bundle).
This gives a natural transformation $\assoc : \Prin_G(B) \to \Bun_F(B)$ that depends on $F$ and the $G$-action,
and sends $D$-numerable bundles to $D$-numerable bundles.
On the other hand, it was shown in~\cite[8.14]{I2} that given a diffeological bundle $\pi:E \to B$ with fiber $F$,
there exists an associated principal ${\mathfrak{D}\mathrm{iff}}(F)$-bundle $s:E' \to B$ which we call the \dfn{frame bundle}.
As a set, $E' = \coprod_{b \in B} {\mathfrak{D}\mathrm{iff}}(F_b,F)$, where $F_b = \pi^{-1}(b)$ and ${\mathfrak{D}\mathrm{iff}}(F_b,F)$ consists of all diffeomorphisms $F_b \to F$.
The map $s$ sends $f:F_b \to F$ to $b$.
We equip $E'$ with the following diffeology: $p:U \to E'$ is a plot if and only if
all of the following conditions hold:
\begin{enumerate}
\item $s \circ p:U \to B$ is smooth;
\item $\{(u,y) \in U \times E \mid s(p(u)) = \pi(y)\} \to F$ sending $(u,y)$ to $p(u)(y)$ is smooth;
\item $U \times F \to E$ sending $(u,x)$ to $(p(u))^{-1}(x)$ is smooth.
\end{enumerate}
The action $E' \times {\mathfrak{D}\mathrm{iff}}(F) \to E'$ is given by $(f,g) \mapsto g^{-1} \circ f$.
Moreover, by~\cite[8.16]{I2}, if $\pi$ is trivial (as a diffeological bundle), then so is $s$ (as a principal
${\mathfrak{D}\mathrm{iff}}(F)$-bundle).
This gives a natural transformation $\frme : \Bun_F(B) \to \Prin_{{\mathfrak{D}\mathrm{iff}}(F)}(B)$ that
sends $D$-numerable bundles to $D$-numerable bundles.
In~\cite[8.16]{I2} it is shown that $\assoc \circ \frme$ is the identity, where
to define $\assoc$ we use the natural action of ${\mathfrak{D}\mathrm{iff}}(F)$ on $F$.
That is, if we start with a diffeological bundle $\pi:E \to B$ with fiber $F$,
form the associated principal ${\mathfrak{D}\mathrm{iff}}(F)$-bundle, and then take the associated $F$-bundle,
we get a bundle isomorphic to $\pi$.
In fact, these operations are inverse to each other:
\begin{thm}\label{thm:bijection-bundlevsprincipal}
We have a natural isomorphism $\assoc : \Prin_{{\mathfrak{D}\mathrm{iff}}(F)}(B) \to \Bun_F(B)$
which restricts to a natural isomorphism $\assoc : \Prin_{{\mathfrak{D}\mathrm{iff}}(F)}^D(B) \to \Bun_F^D(B)$.
\end{thm}
\begin{proof}
We saw that $\assoc \circ \frme$ is the identity.
To check that $\frme \circ \assoc$ is the identity, we start with a principal ${\mathfrak{D}\mathrm{iff}}(F)$-bundle $r:E \to B$
and show it is isomorphic to the frame bundle $s: E' \to B$ of
the associated bundle $t : E \times_{{\mathfrak{D}\mathrm{iff}}(F)} F \to B$.
It is enough to construct a ${\mathfrak{D}\mathrm{iff}}(F)$-equivariant smooth map $\alpha:E \to E'$ such that
$s \circ \alpha = r$.
For $y \in E$ we define $\alpha(y) : t^{-1}(r(y)) \to F$ by sending $[x,f]$ in $E \times_{{\mathfrak{D}\mathrm{iff}}(F)} F$
to $\theta(f)$, where $\theta$ is the unique element of ${\mathfrak{D}\mathrm{iff}}(F)$ such that $x = y \cdot \theta$.
Such a $\theta$ exists because $r(x) = r(y)$, and $\alpha(y)$ is well-defined because
$[x \cdot \phi,\, \phi^{-1}(f)]$ is sent to $(\theta \phi)(\phi^{-1}(f)) = \theta(f)$ as well.
It is then not hard to check that $\alpha(y)$
is a diffeomorphism for each $y \in E$, and that $\alpha$ is ${\mathfrak{D}\mathrm{iff}}(F)$-equivariant and smooth.
The last claim follows from the fact that both $\assoc$ and $\frme$ preserve $D$-numerable bundles.
\end{proof}
Combining this result with Theorem~\ref{thm:classify-principal}, we get:
\begin{thm}\label{thm:classify-diff-bundles}
There is a bijection $[B,B{\mathfrak{D}\mathrm{iff}}(F)] \to \Bun_F^D(B)$ which is natural in $B$.
\end{thm}
Using the techniques from this section, we can also show that the bijection
in Theorem~\ref{thm:classify-principal} is natural with respect to the diffeological group.
\begin{de}[Functoriality of $\Prin_G(B)$]
Let $h:G \to G'$ be a smooth homomorphism between diffeological groups.
Define a left action of $G$ on $G'$ by $g \cdot g' := h(g) g'$.
Given a principal $G$-bundle $E \to B$, we can form the associated diffeological
bundle $E \times_G G' \to B$ with fiber $G'$.
We can define a right action of $G'$ on $E \times_G G'$ by $[x, g'] \cdot \hat{g}' := [x, g' \hat{g}']$.
One can check that this is a principal $G'$-bundle, and that this defines
a function $h_* : \Prin_G(B) \to \Prin_{G'}(B)$ making $\Prin_G(B)$ into a functor of $G$.
Moreover, if $E \to B$ is $D$-numerable, then so is $E' \to B$,
so we see that $\Prin_G^D(B)$ is also functorial in $G$.
\end{de}
\begin{thm}\label{thm:naturality-G}
The bijection $\theta : [B, BG] \to \Prin_G^D(B)$ from Theorem~\ref{thm:classify-principal}
is natural in $G$.
That is, for any smooth homomorphism $h:G \to G'$ between diffeological groups,
the following diagram commutes:
\[
\xymatrix{[B,BG] \ar[r]^-\theta \ar[d]_{Bh_*} & \Prin_G^D(B) \ar[d]^{h_*} \\ [B,BG'] \ar[r]_-\theta & \Prin_{G'}^D(B).}
\]
\end{thm}
\begin{proof}
We first consider the universal case, where $B = BG$ and we start with
the identity map $BG \to BG$.
Define a map $EG \times G' \to EG'$ by sending $([t_i, g_i], g')$
to $[t_i, h(g_i) g']$, and notice that this is well-defined on the
associated principal $G'$-bundle $EG \times_G G'$.
It is also $G'$-equivariant, and makes the square
\[
\xymatrix{
EG \times_G G' \ar[d] \ar[r] & EG' \ar[d] \\
BG \ar[r]_{Bh} & BG'
}
\]
commute. Thus, by Proposition~\ref{prop:commsq=>pullbackdiff}, it is a pullback
square, as required.
Now, given a map $f : B \to BG$, we compute the pullback of $EG'$ along
the composite $B \to BG \to BG'$ as
\begin{align*}
B \times_{BG'} EG' &\cong B \times_{BG} (BG \times_{BG'} EG') & \text{(by functoriality of pullback)}\\
&\cong B \times_{BG} (EG \times_G G') & \text{(by the previous paragraph)} \\
&\cong (B \times_{BG} EG) \times_G G' & \text{(by naturality of $\assoc$)\rlap{,}}
\end{align*}
which shows that the square commutes.
\end{proof}
\section{Classifying $D$-numerable vector bundles}
\label{se:classify-vb}
We first recall the following definition from~\cite{CW2}:
\begin{de}
Let $B$ be a diffeological space. A \dfn{diffeological vector space over $B$} is a diffeological space $E$, a smooth
map $\pi:E \to B$ and a vector space structure on each of the fibers $\pi^{-1}(b)$ such that the addition
$E \times_B E \to E$, the scalar multiplication $\mathbb{R} \times E \to E$ and the zero section $B \to E$ are all smooth.
\end{de}
In the case when $B$ is a point, we recover the concept of diffeological vector space.
More generally, for any $b \in B$, $\pi^{-1}(b)$ equipped with the sub-diffeology of $E$ is a diffeological vector space.
\begin{lem}
Let $\pi:E \to B$ be a diffeological vector space over $B$, and let $f:B' \to B$ be a smooth map.
Then the pullback $f^*(\pi)$ is a diffeological vector space over $B'$.
\end{lem}
\begin{proof}
This is straightforward.
\end{proof}
\begin{de}
Let $V$ be a diffeological vector space. A diffeological vector space $\pi:E \to B$ over $B$ is called \dfn{trivial of fiber type $V$} if there exists
a diffeomorphism $h:E \to B \times V$ over $B$, such that for every $b \in B$, the restriction $h|_b:\pi^{-1}(b) \to V$ is an isomorphism of
diffeological vector spaces.
A diffeological vector space $\pi:E \to B$ over $B$ is called \dfn{locally trivial of fiber type $V$} if there exists a $D$-open cover $\{B_i\}$ of $B$
such that each restriction $\pi|_{B_i}:\pi^{-1}(B_i) \to B_i$ is trivial of fiber type $V$.
A diffeological vector space $\pi:E \to B$ over $B$ is called a \dfn{vector bundle of fiber type $V$} if the pullback along every plot of $B$ is
locally trivial of fiber type $V$.
\end{de}
\begin{de}
Let $V$ be a diffeological vector space. A vector bundle $\pi:E \to B$ of fiber type $V$ is called \dfn{$D$-numerable} if
there exists a smooth partition of unity subordinate to a $D$-open cover $\{B_i\}_{i \in I}$
of $B$ such that each $\pi|_{B_i}$ is trivial of fiber type $V$.
\end{de}
Let $V$ be a diffeological vector space and let $\GL(V)$ be the set of all linear isomorphisms $V \to V$ equipped with the
sub-diffeology of ${\mathfrak{D}\mathrm{iff}}(V)$. Then $\GL(V)$ is a diffeological group. Let $G$ be a diffeological group. A \dfn{(left) linear $G$-action on $V$}
is a smooth group homomorphism $G \to \GL(V)$. Given a principal $G$-bundle $r:E \to B$ and a linear $G$-action on $V$,
we have an associated diffeological bundle $t:E \times_G V \to B$.
\begin{lem}
Under the above assumptions, $t:E \times_G V \to B$ is a vector bundle of fiber type $V$.
\end{lem}
\begin{proof}
We make $E \times_G V \to B$ into a diffeological vector space over $B$ using the following maps.
The addition map
\[
(E \times_G V) \times_B (E \times_G V) \to E \times_G V
\]
sends $([x,v],[x',v'])$ to $[x,v + g \cdot v']$,
where $g \in G$ is chosen so that $x' = x \cdot g$, which is possible since $r(x)=r(x')$.
The scalar multiplication map
\[
\mathbb{R} \times (E \times_G V) \to E \times_G V
\]
sends $(\alpha, [x, v])$ to $[x, \alpha v]$.
And the zero section
\[
B \to E \times_G V
\]
sends $b$ to $[x, 0]$, where $x$ is any element of $r^{-1}(b)$.
It is straightforward to check that these maps are all smooth and make
$t:E \times_G V \to B$ into a diffeological vector space over $B$,
and that $t$ is a vector bundle.
\end{proof}
Write $\VB_V(B)$ (resp.\ $\VB_V^D(B)$) for the set of isomorphism classes
of (resp.\ $D$-numerable) vector bundles over $B$. Therefore, we have a
natural transformation $\assoc:\Prin_G(B) \to \VB_V(B)$ that depends on the diffeological
vector space $V$ and the linear $G$-action, and sends $D$-numerable bundles to
$D$-numerable bundles.
On the other hand, given a vector bundle $\pi:E \to B$ of fiber type $V$,
let $E''=\coprod_{b \in B} \Isom(\pi^{-1}(b),V)$ be equipped with the sub-diffeology of $E'$ defined in
Section~\ref{se:classify-bundle}, where $\Isom(\pi^{-1}(b),V)$ denotes the set of all isomorphisms
$\pi^{-1}(b) \to V$ of diffeological vector spaces. So we have a composite of smooth maps
$E'' \hookrightarrow E' \to B$, denoted by $s$, which sends each $f : \pi^{-1}(b) \to V$ to $b$.
\begin{lem}
Under the above assumptions, $s:E'' \to B$ is a principal $\GL(V)$-bundle.
\end{lem}
\begin{proof}
It is easy to see that there is a commutative square
\[
\xymatrix{E'' \times \GL(V) \ar[d] \ar[r]^-{a''} & E'' \times E'' \ar[d] \\ E' \times {\mathfrak{D}\mathrm{iff}}(V) \ar[r]_-{a'} & E' \times E',}
\]
where the vertical maps are inclusions and the horizontal ones are the action maps
as in Theorem~\ref{thm:principal}. Since all the other maps in the square are inductions,
so is $a''$. Therefore, we have a commutative triangle
\[
\xymatrix@C5pt{& E'' \ar[dl]_q \ar[dr]^s \\ X \ar[rr] && B,}
\]
where $X$ is the orbit space of $E''$ under the $\GL(V)$-action, the quotient map $q$ is a principal bundle,
and the horizontal map is a smooth bijection.
We will show that this horizontal map is a diffeomorphism, and for this
it is enough to show that $s:E'' \to B$ is a subduction. Let $p:U \to B$ be an arbitrary plot.
Since $\pi:E \to B$ is a vector bundle, without loss of generality, we may assume that there is a diffeomorphism $\alpha:U \times V \to
\{(u,x) \in U \times E \mid p(u)=\pi(x)\}$ over $U$ such that for each $u \in U$, the restriction
$\alpha_u:V \to \pi^{-1}(p(u))$ is an isomorphism of diffeological vector spaces.
It is then easy to check that $\hat{\alpha}:U \to E''$ defined by $\hat{\alpha}(u) := \alpha_u^{-1}$ is smooth.
This gives a commutative triangle
\[
\xymatrix{& E'' \ar[d]^s \\ U \ar[ur]^{\hat{\alpha}} \ar[r]_p & B,}
\]
which implies that $s$ is a subduction.
\end{proof}
Therefore, we have a natural transformation $\frme:\VB_V(B) \to \Prin_{\GL(V)}(B)$ that sends
$D$-numerable bundles to $D$-numerable bundles.
\begin{thm}\label{thm:classify-vb}
We have a natural isomorphism $\assoc:\Prin_{\GL(V)}(B) \to \VB_V(B)$ which restricts to a natural isomorphism
$\assoc:\Prin_{\GL(V)}^D(B) \to \VB_V^D(B)$.
\end{thm}
\begin{proof}
The proof that $\frme \circ \assoc$ is the identity is the same as that of Theorem~\ref{thm:bijection-bundlevsprincipal}.
Now we show that $\assoc \circ \frme$ is the identity. Let $\pi:E \to B$ be a vector bundle with fiber $V$.
We need to show that $E'' \times_{\GL(V)} V \to B$ and $\pi$ are isomorphic vector bundles over $B$.
It is straightforward to check that $\alpha:E'' \times V \to E$ defined by $\alpha(f,v) := f^{-1}(v)$ is smooth.
Therefore, $\alpha$ induces a smooth bijection $\bar{\alpha}$ making the triangle
\[
\xymatrix@C5pt{E'' \times_{\GL(V)} V \ar[rr]^-{\bar{\alpha}} \ar[dr] && E \ar[dl]^\pi \\ & B}
\]
commute. So we are left to show that $\alpha$ is a subduction.
This follows from the argument used in the proof of the previous lemma.
The last claim follows from the fact that both $\assoc$ and $\frme$ preserve $D$-numerable bundles.
\end{proof}
Combining this result with Theorem~\ref{thm:classify-principal}, we get:
\begin{thm}
There is a bijection $[B,B\GL(V)] \to \VB_V^D(B)$ which is natural in $B$.
\end{thm}
In this article, we are interested in variational Bayes (VB), which is widely used as a computationally effective method for approximating the posterior distribution of a Bayesian problem. Let $y^*$ be the observed data and $\theta\in\mathbb{R}^p$ be the parameter of interest. The posterior distribution satisfies $p(\theta|y^*)\propto p(\theta)p(y^*|\theta)$, where $p(\theta)$ is the prior and $p(y^*|\theta)$ is the likelihood function. VB approximates the posterior by a tractable distribution $q(\theta)$ within certain distribution families, chosen to minimize the Kullback-Leibler (KL) divergence between the VB distribution $q(\theta)$ and the posterior $p(\theta|y^*)$. The optimization problem is usually solved by using the stochastic gradient descent (SGD) algorithm \cite{DJ:2018}. It calls for computing the gradient of the KL divergence. A difficulty with SGD is that plain Monte Carlo (MC) sampling to estimate the gradient can be error prone or inefficient. Some variance reduction methods have been adopted to improve SGD \cite{MF:2017,PBJ:2012}. On the other hand, randomized quasi-Monte Carlo (RQMC) methods have been used to improve SGD in the VB setting \cite{BWM:2018}. Recently, Liu and Owen \cite{LO:2021} combined RQMC with a second-order limited-memory method known as L-BFGS for VB. RQMC methods such as scrambled digital nets proposed by \cite{Owen1995} are known to provide a favorable rate of convergence in numerical integration \cite{owen1997a}. Improved sampling accuracy translates directly to improved
optimization as shown in \cite{BWM:2018,LO:2021}.
A second difficulty with SGD is due to the absence of the likelihood function $p(y^*|\theta)$.
In many applications, the likelihood function is intractable, making it difficult to obtain an unbiased gradient estimator of the KL divergence. For example, in state-space models the likelihood is an intractable high-dimensional integral over the state variables governed by a Markov process \cite{durbin:2012}. More examples can be found in the context of approximate Bayesian computation (ABC). ABC methods provide a way of approximating the posterior $p(\theta|y^*)$ when the likelihood function is difficult to compute but it is possible to simulate data from the model \cite{peters2012,tavare1997}.
Likelihood-free inference is an active area in Bayesian computation, and there has been some progress on using VB in the likelihood-free context. Barthelm{\'e} and Chopin \cite{BC:2014} used a
variational approximation algorithm known as expectation propagation in approximating ABC posteriors. Tran et al. \cite{tran:2017} developed a new VB with intractable likelihood (VBIL) method, which can be applied to commonly used statistical models without requiring an analytical solution to model-based expectations. Ong et al. \cite{ong:2018} modified the VBIL method to work with unbiased log-likelihood estimates in the synthetic likelihood framework, resulting in the VB synthetic likelihood (VBSL) method.
We focus on problems in which the likelihood is formulated as an intractable expectation. The KL divergence then turns out to be a nested expectation, and so does its gradient. It is natural to use nested simulation for estimating these quantities. However, the plain nested estimator is biased. It is critical to develop unbiased gradient estimators for stochastic gradient-based optimization algorithms. To this end, we use the unbiased multilevel Monte Carlo (MLMC) method proposed by \cite{Rhee2015} in the framework of nested simulation. MLMC is a sophisticated variance reduction technique introduced by \cite{Hein1998} for parametric integration and by \cite{Giles2008} for the estimation of expectations arising from stochastic differential equations. Nowadays MLMC methods have been extended extensively. For a thorough review of MLMC methods, we refer to \cite{Giles2015}.
Nested simulation combined with the MLMC method has been widely studied in the literature due to its broad applicability \cite{Bujok:2015,giles:2018b,GG2019,GHI:2020}.
In this paper, we develop an unbiased nested MLMC-based VB method to deal with intractable likelihoods. Our work is related to \cite{goda:2020}, who developed an unbiased MLMC stochastic gradient-based optimization method for Bayesian experimental designs.
Our proposed VB algorithm finds a better parameter value and a larger evidence lower bound (ELBO) thanks to unbiased gradient and ELBO estimators. This leads to a better estimate of the marginal likelihood $p(y^*)$ compared to the VBIL method, which is an important factor in model selection. We also incorporate RQMC sampling within the gradient and ELBO estimators, which reduces the computational complexity effectively. Goda et al. \cite{goda:2020} worked with MC sampling rather than RQMC. We provide some numerical analysis for both the MC and RQMC settings.
The rest of this paper is organized as follows. In \Cref{se:VBIL}, we review some VB methods with intractable likelihoods, such as VBIL and VBSL, and illuminate their limitations. In \Cref{se:UMLMC}, we provide our unbiased MLMC methods for VB and discuss two different estimators of gradient, which are the score function gradient and re-parameterization gradient. In \Cref{Gaussian}, we provide the details of our algorithms when using Gaussian variational family in VB. In \Cref{se:RQMC}, we improve the algorithms by incorporating RQMC and do some numerical analysis. Finally, in \Cref{sec:num}, some numerical experiments are conducted to support the advantages of our proposed methods. Section~\ref{sec:concl} concludes this paper.
\section{Variational Bayes with an intractable likelihood}\label{se:VBIL}
Recall that our target is to estimate the posterior distribution
\begin{equation}
p(\theta|y^*)= \frac{p(\theta)p(y^*|\theta)}{p(y^*)},\label{eq:post}
\end{equation}
where $p(y^*) = \int p(\theta)p(y^*|\theta) \mathrm{d} \theta$ is usually an unknown constant (called the marginal likelihood or evidence).
In many applications such as state-space models and ABC, the likelihood is analytically intractable.
For these cases, the likelihood $p(y^*|\theta)$ is usually formulated as an expectation
\begin{equation}
p(y^*|\theta)=\mathbb{E}[f(x;y^*)|\theta],\label{eq:intrlik}
\end{equation}
where $x\sim p(x|\theta)$ is the latent variable.
Suppose that there exists an unbiased estimator $\hat{p}_N(y^*|\theta)$ for the intractable likelihood $p(y^*|\theta)$ for given $\theta$, where $N$ is an algorithmic parameter relating to the precision in estimating the likelihood. For estimating \cref{eq:intrlik}, one can take the sample-mean estimator
\begin{equation}
\hat{p}_N(y^*|\theta) = \frac 1N\sum_{i=1}^N f(x_i;y^*) ,\label{eq:samplemean}
\end{equation}
where $x_i$ are iid copies of $x$ for a given $\theta$.
In this paper, we restrict our attention to the sample-mean estimator \cref{eq:samplemean}. We should note that for state-space models, the likelihood can be unbiasedly estimated by an importance sampling estimator \cite{DK:1997} or by a particle filter estimator \cite{PSGK:2012}. The latter case does not fit into our framework.
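As a minimal illustration of \cref{eq:samplemean} (with hypothetical routines \texttt{f} and \texttt{sample\_x} standing in for $f(\cdot\,;y^*)$ and for sampling $x\sim p(x|\theta)$, not code from our experiments):
\begin{verbatim}
import numpy as np

def p_hat(theta, N, f, sample_x, rng=np.random.default_rng()):
    # sample-mean estimator of the intractable likelihood p(y*|theta)
    return np.mean([f(sample_x(theta, rng)) for _ in range(N)])
\end{verbatim}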
VB approximates the posterior distribution $p(\theta|y^*)$ by a tractable density $q_\lambda(\theta)$ with a variational parameter $\lambda$, chosen to minimize the KL divergence from $q_\lambda(\theta)$ to $p(\theta|y^*)$, which is defined by
\begin{equation*}
\mathrm{KL}(\lambda) = \mathrm{KL}(q_\lambda(\theta)||p(\theta|y^*)) = \mathbb{E}_{q_\lambda(\theta)}[\log q_\lambda(\theta)-\log p(\theta|y^*)].\label{eq:kl}
\end{equation*}
Using \cref{eq:post}, we have
$$\log p(y^*) = \mathrm{KL}(\lambda)+ L(\lambda),$$
where $L(\lambda)$ is defined by
\begin{align*}
L(\lambda) &= \mathbb{E}_{q_\lambda(\theta)}[\log p(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta)].
\end{align*}
Since $\mathrm{KL}(\lambda)\ge 0$, $L(\lambda)$, called the ELBO, is a lower bound of the log-evidence $\log p(y^*)$. The minimization
of the KL divergence is equivalent to the maximization of the ELBO since the marginal likelihood $p(y^*)$ is fixed. The problem is then to solve
$$\lambda^* = \arg\max_{\lambda\in\Lambda} L(\lambda),$$
where $\Lambda$ is the feasible region of $\lambda$.
The stochastic gradient method and its variants are widely used to solve such a problem.
They use a sequence of steps $$\lambda^{(t+1)}= \lambda^{(t)}+\rho_t \nabla_\lambda L(\lambda^{(t)}),$$ where
$\nabla_\lambda L(\lambda)$ is the gradient of the ELBO and $\rho_t>0$ is the learning rate satisfying the Robbins-Monro
conditions: $\sum_{t=0}^\infty \rho_t=\infty$ and $\sum_{t=0}^\infty \rho_t^2<\infty$.
A simple choice is $\rho_t = a/(t+b)$ for some constants $a,b>0$. Some adaptive methods for choosing the learning rate $\rho_t$ were proposed in the literature, notably AdaGrad \cite{DHS:2011} and Adam \cite{KB:2014}.
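As an illustration, a minimal Python sketch of such a stochastic gradient ascent loop is given below; \texttt{elbo\_grad\_estimator} is a hypothetical placeholder returning an unbiased estimate of $\nabla_\lambda L(\lambda)$ (for example, one of the MLMC estimators developed in \Cref{se:UMLMC}).
\begin{verbatim}
import numpy as np

def sga(elbo_grad_estimator, lam0, a=1.0, b=10.0, n_iter=1000):
    # stochastic gradient ascent on the ELBO with rho_t = a / (t + b)
    lam = np.asarray(lam0, dtype=float)
    for t in range(n_iter):
        lam = lam + a / (t + b) * elbo_grad_estimator(lam)
    return lam
\end{verbatim}
In practice, $\lambda$ may also need to be projected back onto the feasible region $\Lambda$ after each step.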
The key in stochastic gradient methods is to estimate the gradient $\nabla_\lambda L(\lambda)$ unbiasedly. In the literature, the re-parameterization (RP) trick \cite{KW:2013} and the score function (SF) method are two popular ways to derive unbiased gradient estimators. Allowing the interchange of differentiation and expectation as required in the SF method, we have
\begin{align*}
\nabla_\lambda L(\lambda) &=\nabla_\lambda \mathbb{E}_{q_\lambda(\theta)}[\log p(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta)]\notag\\
&= \mathbb{E}_{q_\lambda(\theta)}[\nabla_\lambda \log q_\lambda(\theta) (\log p(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta))],
\end{align*}
where we used the fact that $\mathbb{E}_{q_\lambda(\theta)}[\nabla_\lambda \log q_\lambda(\theta)]=0$. If the likelihood function $p(y^*|\theta)$ is known, it is straightforward to derive an unbiased estimator for $\nabla_\lambda L(\lambda)$ by sampling $\theta\sim q_\lambda(\theta)$ repeatedly. However, in our setting, $\log p(y^*|\theta)$ is intractable. The question is how to use the unbiased estimator $\hat{p}_N(y^*|\theta)$ of the likelihood to construct an unbiased SF estimator for $\nabla_\lambda L(\lambda)$.
On the other hand, for applying the RP trick, we assume that there exists a transformation $\theta = \Gamma(\bm u;\lambda)\sim q_\lambda(\theta)$, where the random variate $\bm u\sim p_1(\bm u)$ independently of $\lambda$. Allowing the interchange of differentiation and expectation again, we have
\begin{align}
\nabla_\lambda L(\lambda) &=\nabla_\lambda \mathbb{E}_{q_\lambda(\theta)}[\log p(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta)]\notag\\
&=\nabla_\lambda \mathbb{E}_{\bm u}[\log p(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta)]\notag\\
&=\mathbb{E}_{\bm u}[\nabla_\lambda\Gamma(\bm u;\lambda)\cdot (\nabla_\theta\log p(y^*|\theta)+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta))],\label{eq:rep}
\end{align}
where $\nabla_\lambda\Gamma(\bm u;\lambda)$ is the Jacobian matrix with entries $[\nabla_\lambda\Gamma(\bm u;\lambda)]_{ij}=\partial\Gamma_j(\bm u;\lambda)/\partial\lambda_i$.
The RP gradient is much more complicated than the SF gradient. In \cref{eq:rep}, one needs to estimate the intractable gradient of the log-likelihood $\nabla_\theta\log p(y^*|\theta)$ unbiasedly. Due to the absence of the likelihood, the SF and RP methods for traditional VB cannot be applied directly.
The VBIL method proposed by \cite{tran:2017} works with the augmented space $(\theta,z)$, where $z=\log \hat{p}_N(y^*|\theta)-\log p(\theta|y^*)$. Let $g_N(z|\theta)$ be the distribution of $z$ given $\theta$. Tran et al. \cite{tran:2017} applied the variational inference for the target distribution
$$p_N(\theta,z) = p(\theta|y^*)\exp(z)g_N(z|\theta)$$
with a family of distributions of the form $q_\lambda(\theta,z)= q_\lambda(\theta)g_N(z|\theta)$.
The KL divergence in the augmented space is
$$\widetilde{\mathrm{KL}}(\lambda) = \mathrm{KL}(q_\lambda(\theta,z)||p_N(\theta,z)) = \mathbb{E}_{q_\lambda(\theta,z)}[\log q_\lambda(\theta)-\log p(\theta|y^*)-z].$$
The ELBO in the augmented space is
\begin{align}
\tilde L(\lambda) &= \mathbb{E}_{q_\lambda(\theta,z)}[\log \hat p_N(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta)]\label{eq:elbo_aug}\\
&=L(\lambda) + \mathbb{E}_{q_\lambda(\theta,z)}[z].\notag
\end{align}
Note that
$$\mathbb{E}[z|\theta] = \mathbb{E}[\log \hat p_N(y^*|\theta)]-\log p(y^*|\theta)\le \log\mathbb{E}[\hat p_N(y^*|\theta)]-\log p(y^*|\theta)= 0$$
by using Jensen's inequality. As a result, $L(\lambda)\ge \tilde{L}(\lambda)$. The equality holds if and only if $\hat p_N(y^*|\theta)$ is a constant with probability 1 (w.p.1).
Generally, the maximization of $\tilde L(\lambda)$ is not the same as the maximization of $L(\lambda)$ unless $\mathbb{E}_{q_\lambda(\theta,z)}[z]$ is independent of $\lambda$. Tran et al. \cite{tran:2017} made an attempt to choose $N$ as a function of $\theta$ such that $\mathbb{E}[z|\theta]\equiv\tau$ does not depend on $\theta$. By doing so, $\mathbb{E}_{q_\lambda(\theta,z)}[z]=\tau$ does not depend on $\lambda$. Hence, in practice, one needs to adapt $N$ so that the variance of the log-likelihood estimator is approximately constant with $\theta$. Ong et al. \cite{ong:2018} suggested setting some minimum value $N'$ for initially estimating the likelihood. Then, if some target value for the log-likelihood variance is exceeded based on an empirical estimate, an additional number of samples is repeatedly simulated until the target accuracy is achieved. Although the two ELBOs have the same maximizer, there is a gap (i.e., $\tau$) between the maxima of the two ELBOs. The smaller the target accuracy is, the more work is required in estimating the likelihood. Actually, $L(\lambda)$ is a locally marginalized version of $\tilde{L}(\lambda)$ and is the tighter lower bound. This can help to approximate the evidence better. Furthermore, this tighter lower bound can potentially help to compute criteria for model selection such as the perplexity used in topic modeling.
In fact, if we use an unbiased estimator of $\log p(y^*|\theta)$ to replace $\log \hat p_N(y^*|\theta)$ in \cref{eq:elbo_aug}, then the resulting ELBO corresponds to the original ELBO $L(\lambda)$. However, an unbiased estimator of $\log p(y^*|\theta)$ is not trivial. To overcome this, \cite{ong:2018} proposed to use a synthetic likelihood. Suppose we have a summary statistic $\mathcal{S}=\mathcal{S}(y^*)$ of dimension $d\ge p$ and the inference is based on the observed value $s$ of the summary statistic $\mathcal{S}$, which is thought to be informative about $\theta$. Assume that the statistic $\mathcal{S}$ is exactly Gaussian conditional on each value of $\theta$, that is $p(s|\theta)=\phi(s;\mu(\theta),\Sigma(\theta))$, where $\phi$ is the density of multivariate normal with $\mu(\theta)=\mathbb{E}[\mathcal{S}|\theta]$ and $\Sigma(\theta) = \mathrm{Cov}(\mathcal{S}|\theta)$. Now the posterior density is given by
$$
p(\theta|s) \propto p(\theta)p(s|\theta) = p(\theta)\phi(s;\mu(\theta),\Sigma(\theta)).
$$
For a given $\theta$, we may simulate summary statistics $\mathcal{S}_1,\dots,\mathcal{S}_N$ under the model given $\theta$. The mean vector $\mu(\theta)$ and the covariance matrix $\Sigma(\theta)$ are then estimated by
\begin{align*}
\hat{\mu}(\theta)&=\frac 1 N \sum_{i=1}^{N} \mathcal{S}_i,\\
\hat\Sigma(\theta)&=\frac 1 {N-1} \sum_{i=1}^N (\mathcal{S}_i-\hat{\mu}(\theta))(\mathcal{S}_i-\hat{\mu}(\theta))^\top,
\end{align*}
respectively. Then an unbiased estimate of the log-synthetic likelihood $\log p(s|\theta)$ is given by
\begin{align*}
\hat\ell_N (s|\theta) = &-\frac{d}{2}\log(2\pi) - \frac{1}{2}\left\lbrace\log \abs{\hat\Sigma(\theta)}+d\log \left(\frac{N-1}{2}\right)-\sum_{i=1}^d\psi\left(\frac{N-i}{2}\right)\right\rbrace\\
&-\frac{1}{2}\left\lbrace\frac{N-d-2}{N-1}(s-\hat\mu(\theta))^\top\hat\Sigma(\theta)^{-1}(s-\hat\mu(\theta))-\frac{d}{N} \right\rbrace,
\end{align*}
where $\psi(t)= \Gamma'(t)/\Gamma(t)$ denotes the digamma function and $N>d+2$.
If we replace $\log \hat p_N(y^*|\theta)$ with $\hat\ell_N (s|\theta)$, then $\tilde{L}(\lambda)=L(\lambda)$. However, it should be noted that the unbiasedness of $\hat\ell_N (s|\theta)$ relies heavily on the assumption of the normality of $\mathcal{S}|\theta$, and that the inference is based on the information in the summary statistic $s$ rather than the full data $y^*$.
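For concreteness, $\hat\ell_N(s|\theta)$ can be computed as in the following minimal Python sketch (our own illustration, not code from \cite{ong:2018}); \texttt{simulate\_summaries} is a hypothetical user-supplied routine returning an $N\times d$ array of summary statistics simulated from the model given $\theta$, and $N>d+2$ is required.
\begin{verbatim}
import numpy as np
from scipy.special import digamma

def log_synthetic_likelihood(s, theta, N, simulate_summaries,
                             rng=np.random.default_rng()):
    # unbiased estimator of the log synthetic likelihood; requires N > d + 2
    S = simulate_summaries(theta, N, rng)   # (N, d) array of summaries, d >= 2
    d = S.shape[1]
    mu_hat = S.mean(axis=0)
    Sigma_hat = np.cov(S, rowvar=False)     # divides by N - 1
    _, logdet = np.linalg.slogdet(Sigma_hat)
    quad = (s - mu_hat) @ np.linalg.solve(Sigma_hat, s - mu_hat)
    term1 = logdet + d * np.log((N - 1) / 2) \
        - digamma((N - np.arange(1, d + 1)) / 2).sum()
    term2 = (N - d - 2) / (N - 1) * quad - d / N
    return -0.5 * d * np.log(2 * np.pi) - 0.5 * term1 - 0.5 * term2
\end{verbatim}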
\section{Unbiased MLMC for variational Bayes}\label{se:UMLMC}
To fix our idea, we work on the likelihood \cref{eq:intrlik} with an unbiased estimate \cref{eq:samplemean}. Now the ELBO is a nested expectation
$$
L(\lambda) = \mathbb{E}_{q_\lambda(\theta)}[\log \mathbb{E}[f(x;y^*)|\theta]+ \log p(\theta)-\log q_\lambda(\theta)].
$$
\subsection{Score function gradient}
Applying the SF method, we reformulate the gradient as
\begin{equation*}
\nabla_\lambda L(\lambda) = \mathbb{E}_{q_\lambda(\theta)}[\nabla_\lambda \log q_\lambda(\theta) (\log \mathbb{E}[f(x;y^*)|\theta]+ \log p(\theta)-\log q_\lambda(\theta))],
\end{equation*}
which is a nested expectation. Define
\begin{equation}
\mathrm{SF}_{N}(\lambda) = \nabla_\lambda \log q_\lambda(\theta) [\log \hat{p}_N(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta)],\label{eq:nested}
\end{equation}
where $\hat{p}_N(y^*|\theta)$ is given by \cref{eq:samplemean}, and $(\theta,x)\sim q_\lambda(\theta)p(x|\theta)$. Although $\hat{p}_N(y^*|\theta)$ is an unbiased likelihood estimator, $\mathrm{SF}_{N}(\lambda)$ is generally biased for estimating the gradient $\nabla_\lambda L(\lambda)$. We next show how to find an unbiased estimator for the log-likelihood by using unbiased MLMC. Let $\psi_{\theta,N}=\log \hat{p}_N(y^*|\theta)$. It is clear that
$$\lim_{N\to\infty}\mathbb{E}[\psi_{\theta,N}|\theta]=\log p(y^*|\theta).$$
Consider an increasing sequence $0<M_0<M_1<\cdots$ such that $M_\ell\to \infty$ as $\ell\to \infty$.
Then the following telescoping sum holds,
$$\log p(y^*|\theta) = \mathbb{E}[ \psi_{\theta,M_0}|\theta]+\sum_{\ell= 1}^\infty\mathbb{E}[ \psi_{\theta,M_\ell}- \psi_{\theta,M_{\ell-1}}|\theta].$$
More generally, if we have a sequence of correction random variables $\Delta \psi_{\theta,\ell}$, $\ell\ge 0$ such that $\mathbb{E}[\Delta \psi_{\theta,0}|\theta]=\mathbb{E}[\psi_{\theta,M_0}|\theta]$ and for $\ell>0$,
$$\mathbb{E}[\Delta \psi_{\theta,\ell}|\theta]=\mathbb{E}[\psi_{\theta,M_\ell}-\psi_{\theta,M_{\ell-1}}|\theta],$$
then it follows that
$$\log p(y^*|\theta) = \sum_{\ell= 0}^\infty \mathbb{E}[\Delta \psi_{\theta,\ell}|\theta].$$
Let $w_\ell>0$ satisfying $\sum_{\ell=0}^\infty w_\ell =1$, and let $I$ be an independent discrete random variable with $\mathbb{P}(I=\ell) = w_\ell$. We then have
$$\log p(y^*|\theta) = \mathbb{E}\left[\frac{\Delta \psi_{\theta,I}}{w_I}\bigg|\theta\right].$$
Define
\begin{equation}
\mathrm{SF}_{\text{MLMC}}(\lambda) = \nabla_\lambda \log q_\lambda(\theta) \left[\frac{\Delta \psi_{\theta,I}}{w_I}+ \log p(\theta)-\log q_\lambda(\theta)\right],\label{eq:SF}
\end{equation}
which is unbiased for the gradient $\nabla_\lambda L(\lambda)$.
For any number of outer samples $S\ge 1$, the following gradient estimator,
\begin{equation}
\widehat{\nabla_\lambda L}^{\mathrm{SF}}(\lambda) =\frac{1}{S} \sum_{i=1}^S \mathrm{SF}_{\text{MLMC}}^{(i)}(\lambda),\label{eq:SF_esti}
\end{equation}
is unbiased, where $\mathrm{SF}_{\text{MLMC}}^{(i)}(\lambda)$ are iid copies of $\mathrm{SF}_{\text{MLMC}}(\lambda)$ under MC sampling.
Now
$$\psi_{\theta,M_\ell} =\log \hat{p}_{M_\ell}(y^*|\theta)= \log \left(\frac 1 {M_\ell}\sum_{i=1}^{M_\ell} f(x_i;y^*)\right),$$
where $x_i\sim p(x|\theta)$ independently.
We take $\Delta \psi_{\theta,0}=\psi_{\theta,M_0}$. For $\ell\ge 1$, we take an antithetic coupling estimator $$\Delta \psi_{\theta,\ell}=\psi_{\theta,M_\ell}-\frac{1}{2}\left(\psi_{\theta,M_{\ell-1}}^{(a)}+\psi_{\theta,M_{\ell-1}}^{(b)}\right),$$ where
$$\psi_{\theta,M_{\ell-1}}^{(a)} = \log \left(\frac 1 {M_{\ell-1}}\sum_{i=1}^{M_{\ell-1}} f(x_i;y^*)\right),\ \psi_{\theta,M_{\ell-1}}^{(b)} = \log \left(\frac 1 {M_{\ell-1}}\sum_{i=M_{\ell-1}+1}^{M_\ell} f(x_i;y^*)\right).$$
The strategy of antithetic coupling is widely used in the MLMC literature \cite{GZ:2014,GHI:2020} and yields a better rate of convergence for smooth functions. Denote by $C_\ell$ the expected cost of computing $\Delta \psi_{\theta,\ell}$, which is proportional to $M_\ell$. To ensure a finite variance and a finite expected computational cost of $\mathrm{SF}_{\text{MLMC}}(\lambda)$, it is required that
\begin{equation}\label{eq:finiteCondition}
\sum_{\ell=0}^\infty \frac{\mathbb{E}[\Delta \psi_{\theta,\ell}^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2]}{w_\ell}<\infty\text{ and }\sum_{\ell=0}^{\infty}C_\ell w_\ell<\infty.
\end{equation}
In this paper, we take $M_\ell=M_02^\ell$ for some $M_0\ge 1$ and all $\ell\ge 0$, implying $C_\ell =O(2^\ell)$. Assume that $\mathbb{E}[\Delta \psi_{\theta,\ell}^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2] = O(2^{-r\ell})$ for some $r>1$. Let $w_\ell = w_0 2^{-\alpha\ell}$ for $w_0=1-2^{-\alpha}$ and $\alpha>0$. Then \cref{eq:finiteCondition} holds if we take $\alpha\in (1,r)$. The expected computational cost is then proportional to
\begin{equation}
C(\alpha,M_0)= \sum_{\ell=0}^\infty M_\ell w_\ell = \sum_{\ell=0}^\infty M_0w_02^{(1-\alpha)\ell}=\left(1+\frac{1}{2^{\alpha}-2}\right)M_0. \label{eq:cost}
\end{equation}
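To illustrate how a single draw of $\mathrm{SF}_{\text{MLMC}}(\lambda)$ can be implemented, we give a minimal Python sketch below (not code used in our experiments). It assumes hypothetical user-supplied routines \texttt{sample\_theta} (drawing $\theta\sim q_\lambda$), \texttt{grad\_log\_q} (returning $\nabla_\lambda\log q_\lambda(\theta)$), \texttt{log\_prior}, \texttt{log\_q}, \texttt{sample\_x} (drawing $x\sim p(x|\theta)$) and \texttt{f} (evaluating $f(x;y^*)$), and uses $M_\ell=M_02^\ell$ with $w_\ell=(1-2^{-\alpha})2^{-\alpha\ell}$; for instance, with $\alpha=1.5$ the expected number of inner samples per draw is about $2.21M_0$ by \cref{eq:cost}. Averaging $S$ independent returns of this routine gives the estimator \cref{eq:SF_esti}.
\begin{verbatim}
import numpy as np

def sf_mlmc(lam, sample_theta, grad_log_q, log_prior, log_q, f, sample_x,
            M0=8, alpha=1.5, rng=np.random.default_rng()):
    # one draw of the single-term unbiased MLMC score-function estimator
    theta = sample_theta(lam, rng)
    # sample the level I with P(I = l) = (1 - 2^(-alpha)) * 2^(-alpha*l)
    ell = rng.geometric(1.0 - 2.0 ** (-alpha)) - 1
    w_ell = (1.0 - 2.0 ** (-alpha)) * 2.0 ** (-alpha * ell)
    M = M0 * 2 ** ell
    fx = np.array([f(sample_x(theta, rng)) for _ in range(M)])
    if ell == 0:
        delta_psi = np.log(fx.mean())
    else:  # antithetic coupling of the two halves of the inner sample
        half = M // 2
        delta_psi = np.log(fx.mean()) \
            - 0.5 * (np.log(fx[:half].mean()) + np.log(fx[half:].mean()))
    return grad_log_q(lam, theta) * (delta_psi / w_ell
                                     + log_prior(theta) - log_q(lam, theta))
\end{verbatim}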
\begin{lem}\label{lem:1}
Let $X$ be a random variable with zero mean, and let $\bar{X}_N$ be an average of $N$ iid samples of $X$. If $\mathbb{E}[\abs{X}^p]<\infty$ for $p>2$, then there exists a constant $C_p$ depending only on $p$ such that
$$\mathbb{E}[\abs{\bar{X}_N}^p]\le C_p \frac{\mathbb{E}[\abs{X}^p]}{N^{p/2}}.$$
\end{lem}
\Cref{lem:1} is stated as Lemma~1 in \cite{GG2019}, with which we have the following theorem.
\begin{theorem}\label{thm:sfmc}
Suppose that $f(x;y^*)>0$ w.p.1, and there exist $p,q>2$ with $(p-2)(q-2)>4$ such that
$$\mathbb{E}\left[\abs{\frac{f(x;y^*)}{p(y^*|\theta)}}^p\right]<\infty \text{ and }\mathbb{E}\left[\left(1+\abs{\log \frac{f(x;y^*)}{p(y^*|\theta)}}^q\right)||\nabla_\lambda \log q_\lambda(\theta)||_2^q\right]<\infty,$$
where the expectations are taken with respect to $(\theta,x)\sim q_\lambda(\theta)p(x|\theta)$, then $$\mathbb{E}[\Delta \psi_{\theta,\ell}^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2] = O(2^{-r\ell}) \text{ with } r=\min\left(\frac{p(q-2)}{2q},2\right)\in(1,2].$$
\end{theorem}
\begin{proof}
This proof is in line with Theorem 2 of \cite{GHI:2020}, which developed MLMC for a nested expectation of the form $\mathbb{E}_{X,Y}[\log[g(X,Y)|Y]]$. Let
$$R = \frac 1 {M_{\ell}}\sum_{i=1}^{M_{\ell}} \frac{f(x_i;y^*)}{p(y^*|\theta)},$$
$$R^{(a)} = \frac 1 {M_{\ell-1}}\sum_{i=1}^{M_{\ell-1}} \frac{f(x_i;y^*)}{p(y^*|\theta)},\ R^{(b)} =\frac 1 {M_{\ell-1}}\sum_{i=M_{\ell-1}+1}^{M_\ell} \frac{f(x_i;y^*)}{p(y^*|\theta)}.$$
We then have
$$\Delta \psi_{\theta,\ell} = (\log R-R+1)-\frac{1}{2}\left[(\log R^{(a)}-R^{(a)}+1)+(\log R^{(b)}-R^{(b)}+1)\right].$$
Applying Jensen's inequality gives
$$\Delta \psi_{\theta,\ell}^2\le 2(\log R-R+1)^2+(\log R^{(a)}-R^{(a)}+1)^2+(\log R^{(b)}-R^{(b)}+1)^2.$$
Note that $|\log x-x+1|\le |x-1|^r\max(-\log x,1)$ for any $x>0$ and any $1< r\le 2$. By Holder's inequality, we have
\begin{align*}
\mathbb{E}[(\log R-R+1)^2&||\nabla_\lambda \log q_\lambda(\theta)||_2^2]\le \mathbb{E}[(R-1)^{2r}\max(-\log R,1)^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2]\\
&\le \mathbb{E}[(R-1)^{2rs}]^{1/s}\mathbb{E}[\max(-\log R,1)^{2t}||\nabla_\lambda \log q_\lambda(\theta)||_2^{2t}]^{1/t}
\end{align*}
for any $s,t\ge 1$ satisfying $1/s+1/t=1$.
Note that $\mathbb{E}[R-1]=0$. Hence, if $2rs\le p$, then it follows from \Cref{lem:1} that
$$\mathbb{E}[(R-1)^{2rs}]\le \frac{C_{2sr}}{M_\ell^{sr}}\mathbb{E}\left[|f(x;y^*)/p(y^*|\theta)-1|^{2rs}\right],$$
where $\mathbb{E}\left[|f(x;y^*)/p(y^*|\theta)-1|^{2rs}\right]<\infty$.
Notice that the function $\max(-\log x,1)^{2t}$ is convex for $x>0$. Thus, applying Jensen's inequality and using $f(x_i;y^*)>0$, we have
\begin{align*}
\max(-\log R,1)^{2t}&=\max\left(-\log \frac 1 {M_{\ell}}\sum_{i=1}^{M_{\ell}} \frac{f(x_i;y^*)}{p(y^*|\theta)},1\right)^{2t}\\
&\le\frac 1 {M_{\ell}}\sum_{i=1}^{M_{\ell}} \max\left(-\log \frac{f(x_i;y^*)}{p(y^*|\theta)},1\right)^{2t}\\
&\le 1+\frac 1 {M_{\ell}}\sum_{i=1}^{M_{\ell}}\abs{\log \frac{f(x_i;y^*)}{p(y^*|\theta)}}^{2t}.
\end{align*}
As a result, as long as $2t\le q$, we have
\begin{equation}\label{eq:Jensen}
\begin{aligned}
&\mathbb{E}[\max(-\log R,1)^{2t}||\nabla_\lambda \log q_\lambda(\theta)||_2^{2t}]\\
&\le \mathbb{E}\left[\left(1+\abs{\log \frac{f(x;y^*)}{p(y^*|\theta)}}^{2t}\right)||\nabla_\lambda \log q_\lambda(\theta)||_2^{2t}\right]<\infty.
\end{aligned}
\end{equation}
Particularly, we take $s=q/(q-2)$, $t=q/2$ and $r=\min(p(q-2)/(2q),2)$. Since $(p-2)(q-2)> 4$, $r> 1$. Therefore, $\mathbb{E}[(\log R-R+1)^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2]=O(M_\ell^{-r})$. This argument holds also by replacing $R$ with $R^{(a)}$ or $R^{(b)}$. We thus have $\mathbb{E}[\Delta \psi_{\theta,\ell}^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2] = O(M_\ell^{-r}) = O(2^{-r\ell})$.
\end{proof}
It should be noted that \Cref{thm:sfmc} requires $f(x;y^*)>0$ w.p.1; otherwise, the inequalities in \cref{eq:Jensen} do not hold. This implies that our result rules out the case of indicator functions in formulating likelihoods.
\subsection{Re-parameterization gradient}
Assume that there exists a transformation $x= \Lambda(\bm v;\theta)\sim p(x|\theta)$, where $\bm v\sim p_2(\bm v)$ independently of $\theta$ and $\nabla_{\theta} \Lambda(\bm v;\theta)$ exists. Using $\theta=\Gamma(\bm u;\lambda)$ as before gives $x=\Lambda(\bm v;\Gamma(\bm u;\lambda))$.
Allowing the interchange of expectation and differentiation, the gradient \cref{eq:rep} is then rewritten as
\begin{align*}
&\nabla_\lambda L(\lambda) =\mathbb{E}_{\bm u}[\nabla_\lambda \Gamma(\bm u;\lambda)\cdot (\nabla_\theta\log \mathbb{E}_{x}[f(x;y^*)]+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta))]\notag\\
&=\mathbb{E}_{\bm u}[\nabla_\lambda \Gamma(\bm u;\lambda)\cdot (\nabla_\theta\log \mathbb{E}_{\bm v}[f(x;y^*)]+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta))]\notag\\
&=\mathbb{E}_{\bm u}\left[\nabla_\lambda \Gamma(\bm u;\lambda)\cdot \left(\frac{\mathbb{E}_{\bm v}[\nabla_\theta f(x;y^*)]}{\mathbb{E}_{\bm v}[f(x;y^*)]}+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta)\right)\right]\notag\\
&=\mathbb{E}_{\bm u}\left[\nabla_\lambda \Gamma(\bm u;\lambda)\cdot \left(\frac{\mathbb{E}_{\bm v}[\nabla_\theta \Lambda(\bm v;\theta)\nabla_xf(x;y^*)]}{\mathbb{E}_{\bm v}[f(x;y^*)]}+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta)\right)\right],\label{eq:RP}
\end{align*}
where $\nabla_\lambda\Gamma(\bm u;\lambda)$ is the Jacobian matrix with entries $[\nabla_\lambda\Gamma(\bm u;\lambda)]_{ij}=\partial\Gamma_j(\bm u;\lambda)/\partial\lambda_i$.
Define
\begin{equation}
\mathrm{RP}_{N}(\lambda) =\nabla_\lambda \Gamma(\bm u;\lambda)\cdot \left(\frac{\nabla_\theta \hat{p}_N(y^*|\theta)}{\hat{p}_N(y^*|\theta)}+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta)\right),\label{eq:nestedRP}
\end{equation}
where
$$\hat{p}_N(y^*|\theta) = \frac{1}{N}\sum_{i=1}^Nf(x_i;y^*)\text{ with }x_i=\Lambda(\bm v_i;\theta),$$
$$\nabla_\theta\hat{p}_N(y^*|\theta) = \frac{1}{N}\sum_{i=1}^N\nabla_\theta f(x_i;y^*)=\frac{1}{N}\sum_{i=1}^N\nabla_\theta \Lambda(\bm v_i;\theta)\nabla_x f(x_i;y^*),$$
with $[\nabla_\theta \Lambda(\bm v;\theta)]_{ij}=\partial \Lambda_j(\bm v;\theta)/\partial \theta_i$ and $\bm v_i\sim p_2(\bm v)$ independently.
The estimator \eqref{eq:nestedRP} is also biased. Now we take
$$\tilde{\psi}_{\theta,M_\ell}=\frac{\nabla_\theta\hat{p}_{M_\ell}(y^*|\theta)}{\hat{p}_{M_\ell}(y^*|\theta)}=\frac{\sum_{i=1}^{M_\ell}\nabla_\theta \Lambda(\bm v_i;\theta)\nabla_x f(x_i;y^*)}{\sum_{i=1}^{M_\ell}f(x_i;y^*)},$$
which differs from $\psi_{\theta,M_\ell}$ used in the SF method.
Analogously, we take $\Delta \tilde \psi_{\theta,0}=\tilde\psi_{\theta,M_0}$. For $\ell\ge 1$, we use an antithetic coupling estimator again
\begin{equation}\label{eq:RPdelta}
\Delta \tilde\psi_{\theta,\ell}=\tilde\psi_{\theta,M_\ell}-\frac{1}{2}\left(\tilde\psi_{\theta,M_{\ell-1}}^{(a)}+\tilde\psi_{\theta,M_{\ell-1}}^{(b)}\right),
\end{equation}
where
\begin{align*}
&\tilde\psi_{\theta,M_{\ell-1}}^{(a)} = \frac{\sum_{i=1}^{M_{\ell-1}}\nabla_\theta \Lambda(\bm v_i;\theta)\nabla_x f(x_i;y^*)}{\sum_{i=1}^{M_{\ell-1}}f(x_i;y^*)},\\ &\tilde\psi_{\theta,M_{\ell-1}}^{(b)} = \frac{\sum_{i=M_{\ell-1}+1}^{M_{\ell}}\nabla_\theta \Lambda(\bm v_i;\theta)\nabla_x f(x_i;y^*)}{\sum_{i=M_{\ell-1}+1}^{M_{\ell}}f(x_i;y^*)}.
\end{align*}
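For concreteness, the following Python sketch illustrates the antithetic correction \cref{eq:RPdelta} (a minimal sketch with hypothetical helper names, not the implementation used in our experiments; it assumes the inner quantities $f(x_i;y^*)$ and $\nabla_\theta \Lambda(\bm v_i;\theta)\nabla_x f(x_i;y^*)$ have already been evaluated and stored row-wise):
\begin{verbatim}
import numpy as np

def rp_correction(f_vals, grad_f_vals):
    # f_vals:      shape (M_ell,),   values f(x_i; y*)
    # grad_f_vals: shape (M_ell, p), rows grad_theta Lambda(v_i;theta) grad_x f(x_i;y*)
    # Returns Delta psi~_{theta,ell}: fine ratio minus average of the two half ratios.
    M = len(f_vals)            # M_ell = 2 * M_{ell-1}
    half = M // 2
    ratio = lambda num, den: num.sum(axis=0) / den.sum()
    fine = ratio(grad_f_vals, f_vals)
    a = ratio(grad_f_vals[:half], f_vals[:half])    # psi~^(a) on the first half
    b = ratio(grad_f_vals[half:], f_vals[half:])    # psi~^(b) on the second half
    return fine - 0.5 * (a + b)
\end{verbatim}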
Define
\begin{equation}\label{eq:rp}
\mathrm{RP}_{\text{MLMC}}(\lambda) = \nabla_\lambda \Gamma(\bm u;\lambda)\cdot \left(\frac{\Delta \tilde\psi_{\theta,I}}{w_I}+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta)\right),
\end{equation}
where $\theta = \Gamma(\bm u;\lambda)$ and $w_I$ is defined as in the SF method. For any number of outer samples $S\ge 1$, the gradient estimator
$$\widehat{\nabla_\lambda L}^{\mathrm{RP}}(\lambda) =\frac{1}{S} \sum_{i=1}^S \mathrm{RP}_{\text{MLMC}}^{(i)}(\lambda),$$
is unbiased, where the $\mathrm{RP}_{\text{MLMC}}^{(i)}(\lambda)$ are iid copies of $\mathrm{RP}_{\text{MLMC}}(\lambda)$.
Similarly, to ensure a finite variance and finite expected computational cost of $\mathrm{RP}_{\text{MLMC}}(\lambda)$, it suffices to show $\mathbb{E}[||\nabla_\lambda \Gamma(\bm u;\lambda)\cdot\Delta \tilde \psi_{\theta,\ell}||_2^2] = O(2^{-r\ell})$ for some $r>1$. This can be achieved as shown in the following theorem.
\begin{theorem}\label{thm:remc}
If
$$\sup_{x} ||\nabla_\lambda \log f(x;y^*)||_\infty<\infty,$$
where $x=\Lambda(\bm v;\Gamma(\bm u;\lambda))$, and there exists $p>2$ such that
$$\mathbb{E}\left[\abs{\frac{f(x;y^*)}{p(y^*|\theta)}}^p\right]<\infty,$$
then $$\mathbb{E}[||\nabla_\lambda \Gamma(\bm u;\lambda)\Delta \tilde \psi_{\theta,\ell}||_2^2] = O(2^{-r\ell}) \text{ with } r=\min(p/2,2)\in(1,2].$$
\end{theorem}
\begin{proof}
The proof follows an argument similar to Theorem 3.1 in \cite{goda:2020}, which considered a nested expectation involving a ratio of two inner conditional expectations.
\end{proof}
\section{Parameterizations in Gaussian variational family}\label{Gaussian}
Throughout this paper, we use the Gaussian family $N(\mu,\Sigma)$ as the variational family. For the SF method, we take the variational parameters as $\lambda = (\mu,\mathrm{vech}(C))$, where $C$ is the Cholesky decomposition (lower triangular) of $\Sigma^{-1}$ and $\mathrm{vech}(C)$ denotes a
vector obtained by stacking the
lower triangular elements of $C$. The number of variational parameters is $d_\lambda = p+p(p+1)/2$. Since $\log q_\lambda(\theta) = \log |\det(C)|-\frac 1 2(\theta-\mu)^\top CC^\top(\theta-\mu)$, we have $\nabla_\lambda \log q_\lambda(\theta)=(\nabla_\mu \log q_\lambda(\theta),\nabla_{\mathrm{vech}(C)} \log q_\lambda(\theta))$ with
\begin{align*}
\nabla_\mu \log q_\lambda(\theta) &= CC^\top(\theta-\mu),\\
\nabla_{\mathrm{vech}(C)} \log q_\lambda(\theta) &= \mathrm{vech}(\diag{1/C}-(\theta-\mu)(\theta-\mu)^\top C),
\end{align*}
where $\diag{1/C}$ denotes the diagonal matrix with the same dimensions as $C$ and $i$th diagonal entry $1/C_{ii}$. Note that the score function $\nabla_\lambda \log q_\lambda(\theta)$ is model-free. The SF estimator $\mathrm{SF}_{N}(\lambda)$ can be easily obtained by \cref{eq:nested}. It is common to use control variates (CV) to reduce the noise in estimating the gradient \cite{MF:2017,PBJ:2012}.
Note that $\mathbb{E}[\nabla_\lambda\log q_\lambda(\theta)]=0$. For any constant vector $c=(c_1,\dots,c_{d_\lambda})\in\mathbb{R}^{d_\lambda}$, the following estimator is also unbiased for the gradient:
\begin{equation*}
\mathrm{SF}^{\text{CV}}_{\text{MLMC}}(\lambda,c) = \nabla_\lambda \log q_\lambda(\theta) \left[\frac{\Delta \psi_{\theta,I}}{w_I}+ \log p(\theta)-\log q_\lambda(\theta)-c\right].
\end{equation*}
We can take an optimal $c_i$ to minimize the variance of the $i$th entry of $\mathrm{SF}^{\text{CV}}_{\text{MLMC}}(\lambda,c)$. Solving
$$c^*_i = \arg \min_{c_i\in \mathbb{R}} \var{\mathrm{SF}^{\text{CV}}_{\text{MLMC},i}(\lambda,c_i)}$$
gives
\begin{equation}
c^*_i = \frac{\mathbb{E}[\left(\nabla_{\lambda_i} \log q_\lambda(\theta)\right)^2\xi]}{\mathbb{E}[\left(\nabla_{\lambda_i} \log q_\lambda(\theta)\right)^2]}=\frac{\cov{\nabla_{\lambda_i} \log q_\lambda(\theta),\nabla_{\lambda_i} \log q_\lambda(\theta)\xi}}{\var{\nabla_{\lambda_i} \log q_\lambda(\theta)}},\label{eq:cv}
\end{equation}
where $\xi=\frac{\Delta \psi_{\theta,I}}{w_I}+ \log p(\theta)-\log q_\lambda(\theta)$.
In practice, $c^*_i$ ($i=1,\dots,d_\lambda$) are estimated using the samples from the previous iteration.
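As an illustration, the following Python sketch (hypothetical function names and one possible stacking convention for $\mathrm{vech}$; not the code used in the experiments) computes the score function for $\lambda=(\mu,\mathrm{vech}(C))$ and estimates the optimal CV coefficients \cref{eq:cv} from a batch of samples:
\begin{verbatim}
import numpy as np

def score_gaussian(theta, mu, C):
    # nabla_lambda log q_lambda(theta) for lambda = (mu, vech(C)),
    # where C is the lower-triangular Cholesky factor of Sigma^{-1}.
    d = theta - mu
    grad_mu = C @ C.T @ d
    grad_C = np.diag(1.0 / np.diag(C)) - np.outer(d, d) @ C
    tril = np.tril_indices_from(C)          # one stacking order for vech
    return np.concatenate([grad_mu, grad_C[tril]])

def cv_coefficients(scores, xi):
    # Sample estimate of c*_i: scores has shape (m, d_lambda), xi has shape (m,).
    num = np.mean(scores**2 * xi[:, None], axis=0)
    den = np.mean(scores**2, axis=0)
    return num / den
\end{verbatim}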
The whole procedure is summarized in \Cref{alg:em1}.
\begin{algorithm}
\caption{Unbiased MLMC with the SF gradient estimator\label{alg:em1}}
\begin{algorithmic}[1]
\STATE Initialize $\lambda^{(0)} = (\mu^{(0)},\mathrm{vech}(C^{(0)}))$, $t=0$, $m$ the number of outer samples, $M_0$ the number of inner samples at level $0$, $\alpha\in(1,r)$ and $w_\ell\propto 2^{-\alpha \ell}$ such that $\sum_{\ell=0}^\infty w_\ell =1$ and all $w_\ell>0$.
\STATE Repeat
(a) Generate $\theta^{(t)}_1,\dots,\theta^{(t)}_m\sim N(\mu^{(t)},(C^{(t)}{C^{(t)}}^\top)^{-1})$ independently and $I_1^{(t)},\dots,I_m^{(t)}$ independently and randomly with probability $w_\ell$.
(b) Let $n_i = M_02^{I_i^{(t)}}$. For $i=1,\dots,m$, generate $x_{i1}^{(t)},\dots,x_{in_i}^{(t)}\sim p(x|\theta^{(t)}_i)$ independently. Compute the associated samples of the correction $\Delta \psi_{\theta,I}$, denoted by $\Delta \psi^{(t)}_i$, $i=1,\dots,m$.
(c) Estimate $c^*$ defined by \eqref{eq:cv} by the samples $\theta^{(t)}_i,\ I_i^{(t)},\ \Delta \psi^{(t)}_i$, $i=1,\dots,m$, resulting in $c^{(t)}$.
(d) If $t>0$, compute the gradient estimator
\begin{align*}
&\widehat{\nabla_\lambda L}^{\mathrm{SF}}(\lambda^{(t)}) \\
= &\frac{1}{m}\sum_{i=1}^m \nabla_\lambda \log q_\lambda(\theta^{(t)}_i) \left[\frac{ \Delta \psi^{(t)}_i}{w_{I_i^{(t)}}}+ \log p(\theta^{(t)}_i)-\log q_{\lambda^{(t)}}(\theta^{(t)}_i)-c^{(t-1)}\right],
\end{align*}
and the ELBO estimator
$$\mathrm{LB}(\lambda^{(t)}) = \frac{1}{m}\sum_{i=1}^m \left[\frac{ \Delta \psi^{(t)}_i}{w_{I_i^{(t)}}}+ \log p(\theta^{(t)}_i)-\log q_{\lambda^{(t)}}(\theta^{(t)}_i)\right].$$
Update the VB parameter:
$$\lambda^{(t+1)} = \lambda^{(t)}+\rho_t \widehat{\nabla_\lambda L}^{\mathrm{SF}}(\lambda^{(t)}).$$
If $t=0$, then set $\lambda^{(t+1)} = \lambda^{(t)}$. Note that this step is used to initialize $c^*$ rather than updating the VB parameter.
(e) $t = t+1$
until some stopping rule is satisfied.
\end{algorithmic}
\end{algorithm}
Using the RP method, we take the variational parameter as $\lambda = (\mu,\mathrm{vech}(L))$, where $L$ is the Cholesky decomposition of $\Sigma$, which is different from the parameterizations in the SF method. For this case, $\theta = \Gamma(\bm u;\lambda) = \mu+L\bm u \sim N(\mu,\Sigma)$, where $\bm u\in \mathbb{R}^{p\times 1}$ is a standard normal. Let
$$G = \frac{\Delta \tilde\psi_{\theta,I}}{w_I}+ \nabla_\theta\log p(\theta)-\nabla_\theta\log q_\lambda(\theta)\in \mathbb{R}^{p\times 1},$$
where $\Delta \tilde\psi_{\theta,\ell}$ is given by \cref{eq:RPdelta} and $\nabla_\theta\log q_\lambda(\theta)=-\Sigma^{-1}(\theta-\mu)=-(LL^\top)^{-1}(\theta-\mu)$. Then the RP estimator is given by $$\mathrm{RP}_{\text{MLMC}}(\lambda)=(G,\mathrm{vech}(G\bm u^\top))\in \mathbb{R}^{d_\lambda\times 1}.$$
The second term $\nabla_\theta\log p(\theta)$ in $G$ depends on the prior. Particularly, if the prior is normally distributed, say, $N(\mu_0,\Sigma_0)$, then $\nabla_\theta\log p(\theta)=-\Sigma_0^{-1}(\theta-\mu_0)$. It is crucial to work out the term $\nabla_\theta \Lambda(\bm v;\theta)\nabla_x f(x;y^*)$ used in $\Delta \tilde\psi_{\theta,\ell}$, which is model-specific. The whole procedure for the RP method is summarized in \Cref{alg:em2}.
\begin{algorithm}
\caption{Unbiased MLMC with the RP estimator\label{alg:em2}}
\begin{algorithmic}[1]
\STATE Initialize $\lambda^{(0)} = (\mu^{(0)},\mathrm{vech}(L^{(0)}))$, $t=0$, $m$ the number of outer samples, $M_0$ the number of inner samples at level $0$, $\alpha\in(1,r)$ and $w_\ell\propto 2^{-\alpha \ell}$ such that $\sum_{\ell=0}^\infty w_\ell =1$ and all $w_\ell>0$.
\STATE Repeat
(a) Generate $\bm{u}_1^{(t)},\dots, \bm{u}_m^{(t)}\sim N(0,I_p)$ independently and set $\theta^{(t)}_i = \mu^{(t)}+L^{(t)}\bm u_i^{(t)}$. Generate $I_1^{(t)},\dots,I_m^{(t)}$ independently and randomly with probability $w_\ell$.
(b) Let $n_i = M_02^{I_i^{(t)}}$. For $i=1,\dots,m$, generate
$\bm{v}_{i1}^{(t)},\dots,\bm{v}_{in_i}^{(t)}\sim p_2(\bm v)$ independently
and set $x_{ij}^{(t)} = \Lambda(\bm{v}_{ij}^{(t)};\theta^{(t)}_i)$, $j=1,\dots,n_i$. Compute the associated samples of the corrections $\Delta \psi_{\theta,I}$ and $\Delta \tilde\psi_{\theta,I}$, denoted by $\Delta \psi^{(t)}_i$ and $\Delta \tilde\psi^{(t)}_i$, respectively.
(c) Compute the gradient estimator
$$\widehat{\nabla_\lambda L}^{\mathrm{RP}}(\lambda^{(t)}) = \frac{1}{m}\sum_{i=1}^m (G_i^{(t)},\mathrm{vech}(G_i^{(t)}{\bm u_i^{(t)}}^\top)),$$
where $$G_i^{(t)}= \frac{\Delta \tilde\psi^{(t)}_i}{w_{I_i^{(t)}}}+ \nabla_\theta\log p(\theta^{(t)}_i)-\nabla_\theta\log q_{\lambda^{(t)}}(\theta^{(t)}_i),$$
and compute the ELBO estimator
$$\mathrm{LB}(\lambda^{(t)}) = \frac{1}{m}\sum_{i=1}^m \left[\frac{ \Delta \psi^{(t)}_i}{w_{I_i^{(t)}}}+ \log p(\theta^{(t)}_i)-\log q_{\lambda^{(t)}}(\theta^{(t)}_i)\right].$$
Update the VB parameter:
$$\lambda^{(t+1)} = \lambda^{(t)}+\rho_t \widehat{\nabla_\lambda L}^{\mathrm{RP}}(\lambda^{(t)}).$$
(d) $t = t+1$
until some stopping rule is satisfied.
\end{algorithmic}
\end{algorithm}
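To make the assembly of a single $\mathrm{RP}_{\text{MLMC}}$ sample in step (c) of \Cref{alg:em2} explicit, a minimal Python sketch is given below (an illustration with hypothetical names; the vector $G$ is assumed to have been computed already):
\begin{verbatim}
import numpy as np

def rp_gradient_sample(G, u):
    # One RP_MLMC sample for the Gaussian family with lambda = (mu, vech(L)).
    # G: length-p vector Delta psi~/w_I + grad log p(theta) - grad log q(theta).
    # u: length-p standard normal vector used in theta = mu + L u.
    # Returns a vector of length d_lambda = p + p(p+1)/2.
    outer = np.outer(G, u)                   # G u^T: gradient block w.r.t. L
    tril = np.tril_indices_from(outer)
    return np.concatenate([G, outer[tril]])  # (G, vech(G u^T))
\end{verbatim}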
Notice that not only the gradient estimators but also the ELBO estimators are unbiased in \Cref{alg:em1,alg:em2}. The unbiased MLMC methods can therefore be expected to estimate the ELBO more accurately.
\section{Incorporating RQMC}\label{se:RQMC}
We now incorporate RQMC sampling based on scrambled $(t,s)$-sequences within the MLMC estimators. Quasi-Monte Carlo (QMC) is designed for computing expectations of $f(\bm v)$ for $\bm v\sim U[0,1]^s$. We should note that in our present context, the underlying distributions are not uniform. To apply QMC in practice, one must transform the base distribution $U[0,1]^s$ to the underlying distributions. Suppose that there exists a transformation $\psi(\cdot)$ such that $\psi(\bm v)\sim p$, where $p$ is the underlying distribution. Below we subsume any such transformation $\psi(\cdot)$ into the definition of $f$.
To estimate $\mu = \int_{[0,1]^s}f(\bm v)\mathrm{d} \bm v$, QMC methods use a sample-mean estimator
$$\hat \mu =\frac{1}{N}\sum_{i=1}^N f(\bm v_i),$$
where $\bm v_1,\dots,\bm v_N$ are the first $N$ points of a low discrepancy sequence. By the Koksma-Hlawka inequality, we have
$$\abs{\hat \mu-\mu}\le V_{\mathrm{HK}}(f)D^*(\bm v_1,\dots,\bm v_N),$$
where $V_{\mathrm{HK}}(f)$ is the variation of the integrand $f(\cdot)$ in the sense of Hardy and Krause, and $D^*(\bm v_1,\dots,\bm v_N)$ is the star discrepancy of the point set $\{\bm v_1,\dots,\bm v_N\}$. For $(t,s)$-sequences, we have $$D^*(\bm v_1,\dots,\bm v_N)=O(N^{-1}(\log N)^{s})=O(N^{-1+\epsilon}),$$ where we use an arbitrarily small $\epsilon>0$ for hiding the logarithm term throughout this paper. If $f$ is of bounded variation in the sense of Hardy and Krause (BVHK), one gets a QMC error of $O(N^{-1+\epsilon})$.
To get a practical error estimate, RQMC methods were introduced; see \cite{LEcuyer2005} for a review. In this paper, we use the scrambling technique proposed by \cite{Owen1995} to randomize $(t,s)$-sequences. In RQMC, each $\bm v_i\sim U[0,1]^s$ marginally, implying that $\hat \mu$ is unbiased for $\mu$. More importantly, a scrambled $(t,s)$-sequence remains a $(t,s)$-sequence w.p.1. This leads to
$$\var{\hat \mu}=\mathbb{E}[(\hat \mu-\mu)^2]\le V_{\mathrm{HK}}(f)^2D^*(\bm v_1,\dots,\bm v_N)^2,$$
where the expectation is taken with respect to the randomness of the scrambling. Clearly, the RQMC variance is of $O(N^{-2+\epsilon})$ if $f$ is of BVHK.
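For illustration, the following Python sketch (a minimal sketch assuming SciPy's \texttt{scipy.stats.qmc} module is available; the integrand \texttt{f} is a placeholder) estimates an integral over $[0,1]^s$ with a scrambled Sobol' sequence, which is a $(t,s)$-sequence in base 2, and obtains a practical error estimate from independent scramblings:
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def rqmc_estimate(f, s, n, n_rep=20, seed=0):
    # RQMC estimate of int_{[0,1]^s} f(v) dv with an error estimate
    # based on n_rep independent scramblings.
    rng = np.random.default_rng(seed)
    means = []
    for _ in range(n_rep):
        v = qmc.Sobol(d=s, scramble=True, seed=rng).random(n)
        means.append(np.mean([f(vi) for vi in v]))
    means = np.asarray(means)
    return means.mean(), means.std(ddof=1) / np.sqrt(n_rep)

# Example with a smooth integrand of bounded variation:
est, err = rqmc_estimate(lambda v: np.prod(np.cos(v)), s=4, n=2**10)
\end{verbatim}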
Now we focus on how to incorporate RQMC within the MLMC estimators. In fact, for both the SF and RP estimators, one needs to sample $\theta\sim q_\lambda(\theta)$, $x_1,\dots,x_{M_I}\sim p(x|\theta,I)$ and $I$ from a discrete distribution with $P(I=i)=w_i$ as stated above. For each realization, the number of random variables depends on $I$, which takes values in $\mathbb{N}$. It is not possible to use a scrambled $(t,s)$-sequence to sample all random variables in a single run because we need to determine the dimension $s$ in advance. Instead, we use hybrid sequences within the MLMC estimators. Specifically, we still use MC to sample $\theta$ and $I$, but use RQMC in the inner simulation. That is, $x_1,\dots,x_{M_I}$ are based on a scrambled $(t,s)$-sequence. To this end, we assume that there exists a transformation $\Lambda$ such that
$$x=\Lambda(\bm v;\theta)\sim p(x|\theta),$$
where $\bm v\sim U[0,1]^s$. We then take $x_i = \Lambda(\bm v_i;\theta)$ in the inner simulation, where $\bm v_1,\dots,\bm v_{M_I}$ are the first $M_I$ points of a scrambled $(t,s)$-sequence. Since RQMC estimates are unbiased, replacing MC with RQMC does not change the unbiasedness of the gradient estimators.
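A sketch of this hybrid scheme is given below (hypothetical helper names; the transformation \texttt{Lambda} is problem-specific and only a placeholder): the level $I$ and the outer variable $\theta$ are drawn by plain MC, while the inner samples are generated from a scrambled Sobol' sequence.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def sample_level(w, rng):
    # I ~ P(I = ell) = w_ell by plain Monte Carlo
    return rng.choice(len(w), p=w)

def inner_rqmc_samples(theta, M, s, Lambda):
    # x_i = Lambda(v_i; theta), with v_1, ..., v_M the first M points
    # of a scrambled Sobol' sequence in [0,1]^s
    v = qmc.Sobol(d=s, scramble=True).random(M)
    return np.array([Lambda(vi, theta) for vi in v])
\end{verbatim}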
We are ready to establish an RQMC version of \Cref{thm:sfmc} for the SF gradient. We should note that \Cref{thm:sfmc} may not extend to the RQMC setting since \Cref{lem:1} holds only for iid samples. Recently, in proving a strong law of large numbers for scrambled net integration, \cite{OR:2021} showed that $\mathbb{E}[\abs{\bar{X}_N}^p]\le C_pN^{1-p}$ for $p\in(1,2)$ via the Riesz-Thorin interpolation theorem, where $\bar{X}_N$ is an average of $N$ RQMC samples of $X$ with $\mathbb{E}[X]=0$. However, this result does not cover the case $p>2$ required in \Cref{lem:1}. It is not clear whether an RQMC version of \Cref{lem:1} holds; this is left for future research. The theorem we provide below is quite different from \Cref{thm:sfmc}, and its proof does not depend on \Cref{lem:1}.
\begin{theorem}\label{thm:sfqmc}
Suppose that samples $x_i = \Lambda(\bm v_i;\theta),i=1,\dots,M_\ell$ are used in the SF estimator \cref{eq:SF}, where $\bm v_i\in [0,1]^s$ are the first $M_\ell$ points of a scrambled $(t,s)$-sequence. If
$$\mathbb{E}\left[\frac{V_{\mathrm{HK}}(f_\theta)^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2}{f_\theta(\bm v)^2}\right]<\infty,$$
where $\bm v\sim U[0,1]^s$, $f_\theta(\bm v) = f(\Lambda(\bm v;\theta);y^*)$, and $\Lambda(\bm v;\theta)\sim p(x|\theta)$,
then we have
$$\mathbb{E}[\Delta \psi_{\theta,\ell}^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2] = O(2^{-r\ell}) \text{ with } r=2-\epsilon$$
for arbitrarily small $\epsilon>0$.
\end{theorem}
\begin{proof}
Note that $p(y^*|\theta) =\mathbb{E}[f_\theta(\bm v)|\theta]$. Let $$P_\ell = \frac{1}{M_\ell} \sum_{i=1}^{M_{\ell}} f(x_i;y^*)=\frac{1}{M_\ell} \sum_{i=1}^{M_{\ell}} f_\theta(\bm v_i),$$
with $P_{\ell-1}^{(a)} = \frac{1}{M_{\ell-1}}\sum_{i=1}^{M_{\ell-1}}f_\theta(\bm v_i)$, and $P_{\ell-1}^{(b)} = \frac{1}{M_{\ell-1}}\sum_{i=M_{\ell-1}+1}^{M_{\ell}} f_\theta(\bm v_i)$.
All of them are RQMC estimators for $p(y^*|\theta)$. We have
$$\Delta \psi_{\theta,\ell} = [\log P_\ell-\log p(y^*|\theta)]-\frac{1}{2}[(\log P_\ell^{(a)}-\log p(y^*|\theta))+(\log P_\ell^{(b)}-\log p(y^*|\theta))].$$
Applying Jensen's inequality gives
$$\Delta \psi_{\theta,\ell}^2\le 2(\log P_\ell-\log p(y^*|\theta))^2+(\log P_\ell^{(a)}-\log p(y^*|\theta))^2+(\log P_\ell^{(b)}-\log p(y^*|\theta))^2.$$
Note that $\abs{\log t}\le \max(1,1/t)\abs{t-1}\le (1+1/t)\abs{t-1}$ for any $t>0$. We thus have
$$\abs{\log P_\ell-\log p(y^*|\theta)}\le (1/p(y^*|\theta)+ 1/P_\ell)\abs{P_\ell-p(y^*|\theta)}.$$
By the Koksma-Hlawka inequality, we have
$$\abs{P_\ell-p(y^*|\theta)}\le V_{\mathrm{HK}}(f_\theta)D_{\ell},$$
where $D_{\ell}=D^*(\bm v_1,\dots,\bm v_{M_\ell})$.
This implies that
$$(\log P_\ell-\log p(y^*|\theta))^2\le 2V_{\mathrm{HK}}(f_\theta)^2D_\ell^2\left(\frac 1{p(y^*|\theta)^2}+ \frac 1{P_\ell^2}\right).$$
Let $H(\theta)=V_{\mathrm{HK}}(f_\theta)||\nabla_\lambda \log q_\lambda(\theta)||_2$. We then have
\begin{align*}
\mathbb{E}[(\log P_\ell-\log p(y^*|\theta))^2&||\nabla_\lambda \log q_\lambda(\theta)||_2^2] \le 2D_{\ell}^2\left(\mathbb{E}\left[\frac {H(\theta)^2}{p(y^*|\theta)^2}\right]+ \mathbb{E}\left[\frac {H(\theta)^2}{P_\ell^2}\right]\right).
\end{align*}
By Jensen's inequality, we have
\begin{equation}\label{eq:pl}
\frac{1}{P_\ell^2}=\left(\frac{1}{\frac{1}{M_\ell} \sum_{i=1}^{M_{\ell}} f_\theta(\bm v_i) }\right)^2\le \frac{1}{M_\ell} \sum_{i=1}^{M_{\ell}} \frac{1}{ f_\theta(\bm v_i)^2}.
\end{equation}
By the unbiasedness of RQMC estimators and the law of total expectation,
\begin{align*}
\mathbb{E}\left[\frac {H(\theta)^2}{P_\ell^2}\right]&\le \mathbb{E}\left[\frac{H(\theta)^2}{M_\ell} \sum_{i=1}^{M_{\ell}} \frac{1}{f(x_i;y^*)^2}\right]\\&=\mathbb{E}\left[H(\theta)^2\mathbb{E}\left[\frac{1}{M_\ell} \sum_{i=1}^{M_{\ell}} \frac{1}{f(x_i;y^*)^2}\bigg|\theta\right]\right]\\&=\mathbb{E}\left[H(\theta)^2\mathbb{E}\left[\frac {1}{f(x;y^*)^2}\bigg |\theta\right]\right]=\mathbb{E}\left[\frac {H(\theta)^2}{f(x;y^*)^2}\right]<\infty.
\end{align*}
On the other hand, by using Jensen's inequality and the law of total expectation again,
\begin{align*}
&\mathbb{E}\left[\frac {H(\theta)^2}{p(y^*|\theta)^2}\right]=\mathbb{E}\left[\frac {H(\theta)^2}{\mathbb{E}[f(x;y^*)|\theta]^2}\right]\\
\le &\mathbb{E}\left[ H(\theta)^2\mathbb{E}\left[\frac{1}{f(x;y^*)^2}\bigg|\theta\right]\right]=\mathbb{E}\left[ \frac{H(\theta)^2}{f(x;y^*)^2}\right]<\infty.
\end{align*}
We therefore have
$$\mathbb{E}[(\log P_\ell-\log p(y^*|\theta))^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2]=O(D_\ell^2) = O(M_\ell^{-2+\epsilon})=O(2^{-r\ell})$$
for $r=2-\epsilon$ and any $\epsilon>0$. This argument holds also by replacing $P_\ell$ with $P_\ell^{(a)}$ or $P_\ell^{(b)}$. We thus have $\mathbb{E}[\Delta \psi_{\theta,\ell}^2||\nabla_\lambda \log q_\lambda(\theta)||_2^2] = O(2^{-r\ell})$.
\end{proof}
We next establish an RQMC version of \Cref{thm:remc} for the RP gradient. \Cref{thm:remc} cannot be extended to the RQMC setting since its proof depends on \Cref{lem:1} as well.
\begin{theorem}\label{thm:rpqmc}
Suppose that samples $x_i = \Lambda(\bm v_i;\theta),\ i=1,\dots,M_\ell$ in the RP estimator \cref{eq:rp}, where $\bm v_i\in [0,1]^s$ are the first $M_\ell$ points of a scrambled $(t,s)$-sequence. If
$$\mathbb{E}\left[\frac{||\nabla_\lambda \Gamma(\bm u;\lambda)||_{\max}^2}{p(y^*|\theta)^2}\left(\frac{(\norm{\nabla p(y^*|\theta)}_2^2+\norm{V_{\mathrm{HK}}(\nabla_{\theta} f_\theta)}_2^2)V_{\mathrm{HK}}(f_\theta)^2}{f_\theta(\bm v)^2}+\norm{V_{\mathrm{HK}}(\nabla_{\theta} f_\theta)}_2^2\right) \right]$$
is finite, where $\bm v\sim U[0,1]^s$, $f_\theta(\bm v) = f(\Lambda(\bm v;\theta);y^*)$, $\Lambda(\bm v;\theta)\sim p(x|\theta)$, $\theta =\Gamma(\bm u;\lambda)\sim q_\lambda(\theta)$, $V_{\mathrm{HK}}(\nabla_{\theta} f_\theta)$ denotes a vector of $V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)$, and $||A||_{\max}$ denotes the largest absolute value of the entries of the matrix $A$, then we have
$$\mathbb{E}[||\nabla_\lambda \Gamma(\bm u;\lambda)\Delta \tilde \psi_{\theta,\ell}||_2^2] = O(2^{-r\ell}) \text{ with } r=2-\epsilon$$
for arbitrarily small $\epsilon>0$.
\end{theorem}
\begin{proof}
We use the notations $P_\ell$, $P_{\ell-1}^{(a)}$ and $P_{\ell-1}^{(b)}$ defined in the proof of \Cref{thm:sfqmc}, and define
$$\mathcal{N}_\ell=\frac{1}{M_\ell}\sum_{i=1}^{M_\ell}\nabla_\theta \Lambda(\bm v_i;\theta)\nabla_x f(x_i;y^*)=\frac{1}{M_\ell}\sum_{i=1}^{M_\ell}\nabla_{\theta} f_\theta(\bm v_i),$$
with $\mathcal{N}_{\ell-1}^{(a)} = \frac{1}{M_{\ell-1}}\sum_{i=1}^{M_{\ell-1}}\nabla_{\theta} f_\theta(\bm v_i)$, and $\mathcal{N}_{\ell-1}^{(b)} = \frac{1}{M_{\ell-1}}\sum_{i=M_{\ell-1}+1}^{M_{\ell}}\nabla_{\theta} f_\theta(\bm v_i)$.
It is clear that $\mathbb{E}[\mathcal{N}_\ell|\theta]=\mathbb{E}[\mathcal{N}_{\ell-1}^{(a)}|\theta]=\mathbb{E}[\mathcal{N}_{\ell-1}^{(b)}|\theta]=\nabla_\theta p(y^*|\theta)$, and $\mathbb{E}[P_\ell|\theta]=\mathbb{E}[P_{\ell-1}^{(a)}|\theta]=\mathbb{E}[P_{\ell-1}^{(b)}|\theta]= p(y^*|\theta)$. Note that
\begin{align*}
\Delta \tilde \psi_{\theta,\ell} &= \frac{\mathcal{N}_\ell}{P_\ell}-\frac{1}{2}\left(\frac{\mathcal{N}_{\ell-1}^{(a)}}{P_{\ell-1}^{(a)}}+\frac{\mathcal{N}_{\ell-1}^{(b)}}{P_{\ell-1}^{(b)}}\right)\\
&=\left[\frac{\mathcal{N}_\ell}{P_\ell}-\frac{\nabla_\theta p(y^*|\theta)}{p(y^*|\theta)}\right] -\frac{1}{2}\left[\frac{\mathcal{N}_{\ell-1}^{(a)}}{P_{\ell-1}^{(a)}}-\frac{\nabla_\theta p(y^*|\theta)}{p(y^*|\theta)}\right]-\frac{1}{2}\left[\frac{\mathcal{N}_{\ell-1}^{(b)}}{P_{\ell-1}^{(b)}}-\frac{\nabla_\theta p(y^*|\theta)}{p(y^*|\theta)}\right].
\end{align*}
Let $\mathcal{N}_{\ell,i}$ be the $i$th entry of $\mathcal{N}_\ell$, which is an unbiased estimator for $\partial_{\theta_i} p(y^*|\theta)$. By the triangle inequality, we find that
\begin{align}
\left(\frac{\mathcal{N}_{\ell,i}}{P_\ell}-\frac{\partial_{\theta_i} p(y^*|\theta)}{p(y^*|\theta)}\right)^2&=\left(\frac{\mathcal{N}_{\ell,i}}{P_\ell}-\frac{\mathcal{N}_{\ell,i}}{p(y^*|\theta)}+\frac{\mathcal{N}_{\ell,i}}{p(y^*|\theta)}-\frac{\partial_{\theta_i} p(y^*|\theta)}{p(y^*|\theta)}\right)^2\notag\\
&\le \frac{2}{p(y^*|\theta)^2}\left[\frac{\mathcal{N}_{\ell,i}^2}{P_\ell^2}(P_\ell-p(y^*|\theta))^2+(\mathcal{N}_{\ell,i}-\partial_{\theta_i} p(y^*|\theta))^2 \right].\label{eq:delta}
\end{align}
By the Koksma-Hlawka inequality, we have
$$\abs{P_\ell-p(y^*|\theta)}\le V_{\mathrm{HK}}(f_\theta)D_{\ell},$$
$$\abs{\mathcal{N}_{\ell,i}-\partial_{\theta_i} p(y^*|\theta)}\le V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)D_{\ell},$$
where $D_{\ell}=D^*(\bm v_1,\dots,\bm v_{M_\ell})$.
For large enough $\ell$, it is reasonable to assume that $D_\ell<1$.
Together with \cref{eq:pl} and \cref{eq:delta}, we then have
\begin{align*}
&\left(\frac{\mathcal{N}_{\ell,i}}{P_\ell}-\frac{\partial_{\theta_i} p(y^*|\theta)}{p(y^*|\theta)}\right)^2\\
\le& \frac{2D_{\ell}^2}{p(y^*|\theta)^2}\left[\frac{\mathcal{N}_{\ell,i}^2}{P_\ell^2}V_{\mathrm{HK}}(f_\theta)^2+V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)^2 \right]\\
\le& \frac{4D_{\ell}^2}{p(y^*|\theta)^2}\left[\frac{\partial_{\theta_i} p(y^*|\theta)^2+V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)^2}{P_\ell^2}V_{\mathrm{HK}}(f_\theta)^2+V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)^2 \right]\\
\le& \frac{4D_{\ell}^2}{p(y^*|\theta)^2}\left[\frac{(\partial_{\theta_i} p(y^*|\theta)^2+V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)^2)V_{\mathrm{HK}}(f_\theta)^2}{M_\ell} \sum_{j=1}^{M_{\ell}} \frac{1}{f_\theta(\bm v_j)^2}+V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)^2 \right].
\end{align*}
Let $n_r$ and $n_c$ be the number of rows and columns of the Jacobian matrix $\nabla_\lambda \Gamma(\bm u;\lambda)$, respectively, and $M_\lambda=||\nabla_\lambda \Gamma(\bm u;\lambda)||_{\max}$. As a result,
\begin{align*}
&\mathbb{E}\left[\norm{\nabla_\lambda \Gamma(\bm u;\lambda)\cdot\left(\frac{\mathcal{N}_\ell}{P_\ell}-\frac{\nabla_\theta p(y^*|\theta)}{p(y^*|\theta)}\right)}_2^2\right]\\
\le& n_cn_r\mathbb{E}\left[\sum_{i=1}^{n_r}M_\lambda^2\left(\frac{\mathcal{N}_{\ell,i}}{P_\ell}-\frac{\partial_{\theta_i} p(y^*|\theta)}{p(y^*|\theta)}\right)^2\right]\\
\le& C_{\ell}\mathbb{E}\left[\frac{M_\lambda^2}{p(y^*|\theta)^2}\sum_{i=1}^{n_r}\left(\frac{(\partial_{\theta_i} p(y^*|\theta)^2+V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)^2)V_{\mathrm{HK}}(f_\theta)^2}{f_\theta(\bm v)^2}+V_{\mathrm{HK}}(\partial_{\theta_i} f_\theta)^2\right) \right]\\
=&C_{\ell}\mathbb{E}\left[\frac{M_\lambda^2}{p(y^*|\theta)^2}\left(\frac{(\norm{\nabla p(y^*|\theta)}_2^2+\norm{V_{\mathrm{HK}}(\nabla_{\theta} f_\theta)}_2^2)V_{\mathrm{HK}}(f_\theta)^2}{f_\theta(\bm v)^2}+\norm{V_{\mathrm{HK}}(\nabla_{\theta} f_\theta)}_2^2\right) \right]\\
=&O(M_\ell^{-2+\epsilon})=O(2^{-r\ell})
\end{align*}
with $C_\ell=4n_cn_rD_{\ell}^2$ for $r=2-\epsilon$ and any $\epsilon>0$. By a similar argument in the proof of \Cref{thm:sfqmc}, we have $\mathbb{E}[||\nabla_\lambda \Gamma(\bm u;\lambda)\Delta \tilde \psi_{\theta,\ell}||_2^2] = O(2^{-r\ell})$.
\end{proof}
In \Cref{thm:sfqmc,thm:rpqmc}, the integrands in the RQMC quadratures need to be BVHK. For practical problems, it may be very hard to verify such a condition. In particular, if the integrands are not smooth enough, the BVHK condition does not hold. For such cases, one may get a lower rate $r$. For any integrand in $L^2[0,1]^s$, scrambled nets have variance $o(1/N)$ without requiring the BVHK condition \cite{owen1997a}. Additionally, for any fixed $N$, the scrambled net variance is no worse than a constant times the MC variance. From this point of view, under the same conditions as in \Cref{thm:sfmc,thm:remc}, we can expect that the rate $r$ for RQMC is no worse than that of MC. Finally, we should note that the rates established in \Cref{thm:sfqmc,thm:rpqmc} do not benefit from the antithetic coupling, implying that the results also hold for the usual way of coupling. One might obtain a better rate by taking the form of the antithetic coupling into account.
There are some other ways to incorporate RQMC in MLMC. For example, one can use RQMC in the outer simulation. That is, the samples of $\theta$ are based on a scrambled $(t,s')$-sequence while the inner samples $x_i$ and the samples of $I$ are based on MC. To this end, assuming $\theta = \Gamma_\lambda(\bm u)\sim q_\lambda(\theta)$ with $\bm u\sim U[0,1]^{s'}$, we take
$$\theta_i = \Gamma_\lambda(\bm u_i),\ i=1,\dots,S,$$
where $\bm u_1,\dots,\bm u_S$ are the first $S$ points of a scrambled $(t,s')$ sequence.
Taking the SF gradient estimator \cref{eq:SF_esti} for instance, we have
\begin{align}
\var{\widehat{\nabla_\lambda L}^{\mathrm{SF}}(\lambda)} &=\mathbb{E}\left[\var{\widehat{\nabla_\lambda L}^{\mathrm{SF}}(\lambda)|\theta_{\{1:S\}}}\right] + \var{\mathbb{E}[\widehat{\nabla_\lambda L}^{\mathrm{SF}}(\lambda)|\theta_{\{1:S\}}]}\notag\\
&=\frac{1}{S} \mathbb{E}[\var{\mathrm{SF}_{\text{MLMC}}(\lambda)|\theta}] +\var{\frac{1}{S}\sum_{i=1}^S \mathbb{E}[\mathrm{SF}_{\text{MLMC}}^{(i)}(\lambda)|\theta_i]}\notag\\
&=\frac{1}{S} \mathbb{E}[\var{\mathrm{SF}_{\text{MLMC}}(\lambda)|\theta}] +\var{\frac{1}{S}\sum_{i=1}^S H(\theta_i)},\label{eq:decom}
\end{align}
where $H(\theta):=\nabla_\lambda \log q_\lambda(\theta) \left[\log p(y^*|\theta)+ \log p(\theta)-\log q_\lambda(\theta)\right]$, $\theta_{\{1:S\}}=\{\theta_1,\dots,\theta_S\}$ and $\var{\cdot}$ and $\mathbb{E}[\cdot]$ are applied component-wise.
The second term in \cref{eq:decom} is $O(1/S)$ when the $\theta_i$'s are generated using MC, while it should be $o(1/S)$ when the $\theta_i$'s are generated using RQMC, or even better $O(S^{-2+\epsilon})$ if $H\circ \Gamma_\lambda$ is of BVHK.
The first term in \cref{eq:decom} is $O(1/S)$ for both cases. As a result, this strategy helps to reduce the variance in the outer sampling. Buchholz and Chopin \cite{BC:2019} applied this strategy in ABC. They found that the resulting ABC estimate has a lower variance than the MC counterpart. However, the rate of convergence cannot be improved due to the first term in \cref{eq:decom}. This strategy cannot improve the rates $r$ in \Cref{thm:sfmc,thm:remc} either.
On the other hand, we can also use a two-stage RQMC strategy: in the outer sampling, we use a scrambled $(t,s')$-sequence to simulate $\theta$, while in each inner simulation we use an independent scrambled $(t,s)$-sequence to sample $x_i$. This two-stage RQMC strategy helps to reduce the noise in both the inner and the outer simulations. In our numerical experiments, we compare the effects of the three ways of using RQMC in MLMC.
\section{Numerical experiments}\label{sec:num}
\subsection{Approximate Bayesian computation}
The ABC method is a generic tool for likelihood-free inference, provided that it is easy to generate $y\sim p(y|\theta)$. However, ABC methods do not target the exact posterior, but only an approximation of it. More specifically, let $\mathcal{S}(\cdot):\mathbb{R}^n\to \mathbb{R}^d$ be a vector of summary statistics, and $K_h(\cdot,\cdot)$ be a $d$-dimensional kernel density with bandwidth $h>0$.
The ABC posterior density of $\theta$ is given by
\begin{equation*}
p_{\mathrm{ABC}}(\theta|y^*)\propto p(\theta)\tilde{p}(y^*|\theta),
\end{equation*}
where the intractable likelihood is given by
\begin{equation}
\tilde{p}(y^*|\theta)=\int K_h(\mathcal{S}(y),\mathcal{S}(y^*))p(y|\theta) \mathrm{d} y=\mathbb{E}_{p(y|\theta)}[K_h(\mathcal{S}(y),\mathcal{S}(y^*))].\label{eq:abc}
\end{equation}
To fit the form \cref{eq:intrlik}, one gets $f(y;y^*):=K_h(\mathcal{S}(y),\mathcal{S}(y^*))$, in which the latent variable $x$ is replaced by $y$. To ensure $f(y;y^*)>0$, we particularly take the Gaussian kernel
\begin{equation*}
K_h(s,s^*)=(2\pi h)^{-d/2}\exp\left\lbrace-\frac{(s-s^*)^\top(s-s^*)}{2h}\right\rbrace,
\end{equation*}
where $d$ denotes the dimension of the summary statistics $\mathcal{S}(y)$.
If $\mathcal{S}(y^*)$ is a sufficient statistic, then $p_{\mathrm{ABC}}(\theta|y^*)$ converges to the exact posterior $p (\theta |y^*)$ as $h\to 0$. Otherwise, $p_{\mathrm{ABC}}(\theta|y^*)$ converges to the posterior $p(\theta |\mathcal{S}(y^*))$ as $h\to 0$, and there is a gap between $p (\theta |\mathcal{S}(y^*))$ and $p (\theta |y^*)$.
To apply the SF method, it suffices to provide the sample-mean likelihood estimator
\begin{equation*}
\hat{p}_N(y^*|\theta) = \frac{1}{N}\sum_{i=1}^N K_h(\mathcal{S}(y^{[i]}),\mathcal{S}(y^*)),
\end{equation*}
where the $y^{[i]}$ are iid samples from $p(y|\theta)$. To apply the RP methods, we need to find mappings such that
$$\theta=\Gamma(\bm u;\lambda)\sim q_\lambda(\theta)\text{ and }y =\Lambda(\bm v;\theta)\sim p(y|\theta),$$
where the distributions of $\bm u,\bm v$ do not depend on $\lambda$ and $\theta$, respectively. We also require the closed forms of $\nabla_yf(y;y^*)$, $\nabla_\theta \Lambda(\bm v;\theta)$, and $\nabla_\lambda\Gamma(\bm u;\lambda)$. Note that
\begin{align*}
\frac{\partial f(y;y^*)}{\partial y_i} &= \sum_{j=1}^d \frac{\partial K_h(\mathcal{S}(y),\mathcal{S}(y^*))}{\partial \mathcal{S}_j} \frac{\partial \mathcal{S}_j(y)}{\partial y_i}\\
&=\frac{K_h(\mathcal{S}(y),\mathcal{S}(y^*))}{h}\sum_{j=1}^d [\mathcal{S}_j(y^*)-\mathcal{S}_j(y)]\frac{\partial \mathcal{S}_j(y)}{\partial y_i}.
\end{align*}
As a result, $$\nabla_yf(y;y^*) =\frac{K_h(\mathcal{S}(y),\mathcal{S}(y^*))\nabla_y\mathcal{S}(y)[\mathcal{S}(y^*)-\mathcal{S}(y)]}{h}.$$
It then remains to verify the existence of, and to compute, the Jacobian matrix $\nabla_y\mathcal{S}(y)$. If we take the entire data set as the summary statistics, then $\nabla_y\mathcal{S}(y)$ is an identity matrix. If the summary statistics $\mathcal{S}(y)$ are sample moments, $\nabla_y\mathcal{S}(y)$ can be easily computed. However, if the summary statistics $\mathcal{S}(y)$ are functions of sample quantiles, $\nabla_y\mathcal{S}(y)$ does not exist. So the SF method has a wider scope than the RP method.
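For instance, when the entire data set is used as the summary statistic, i.e., $\mathcal{S}(y)=y$, the kernel and the gradient $\nabla_yf(y;y^*)$ take a particularly simple form, as in the following minimal Python sketch (an illustration only; array inputs assumed):
\begin{verbatim}
import numpy as np

def gauss_kernel(s, s_star, h):
    # Gaussian kernel K_h(s, s*) with bandwidth h
    d = len(s)
    diff = s - s_star
    return (2 * np.pi * h) ** (-d / 2) * np.exp(-diff @ diff / (2 * h))

def grad_f_identity_summary(y, y_star, h):
    # nabla_y f(y; y*) when S(y) = y, so nabla_y S(y) is the identity
    return gauss_kernel(y, y_star, h) * (y_star - y) / h
\end{verbatim}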
\subsubsection{A toy example}
To show the unbiasedness of our methods visually, we consider a toy ABC example investigated in \cite{ong:2018}. Let the data $y_1,\dots,y_n$ be from a Gaussian distribution with unknown mean $\theta$ and unit variance. We further assume that the prior of $\theta$ is a standard normal distribution $N(0,1)$. Under this setting, the posterior distribution is actually tractable, namely $\theta|y^*\sim N(n\bar{y}^*/(1+n),1/(1+n))$, where $\bar{y}^*$ is the sample mean, but we still approximate the posterior distribution by VB methods for comparison. Naturally, we take the variational distribution $q(\theta)$ to be a normal $N(\mu,\sigma^2)$.
We take the entire data set $y^*$ as the summary statistics (i.e., $\mathcal{S}(y)=y$) to compare the VBIL, VBSL and MLMC methods. The distribution of the summary statistic is normal, so VBSL yields an unbiased estimator and serves as a benchmark.
With the Gaussian kernel, the ABC likelihood \cref{eq:abc} can actually be calculated analytically, which gives guidance on choosing a proper $h$; the details are given in \cite{ong:2018}. We take $h=0.1$ for the kernel function $K_h$ to guarantee the accuracy of the kernel approximation to the true posterior.
We test the SF and RP methods under the MC framework. In all simulations, we consider $d=n=4$ and set the number of outer samples $S=100$ and the number of inner samples $N=100$ for all methods. We set the learning rate $\rho_t=1/(5+t)$, and take $\alpha=1.3$ for the SF methods and $\alpha=1.1$ for the RP methods. We initialize $q(\theta)$ as $N(\bar{y}^*,1)$ and set $y^*=(0,\dots,0)$.
Figure \ref{fig:NLM} illustrates the variational posterior approximations of $\theta$ and the corresponding ELBOs of VBSL, VBIL and the unbiased MLMC method under the SF and RP frameworks, respectively. As observed from the left panel of Figure \ref{fig:NLM}, the estimated densities of the MLMC methods and the benchmark method (VBSL) overlap considerably. In contrast, the VBIL methods yield inaccurate densities and lower ELBOs. The ELBOs of the unbiased MLMC methods are more volatile than those of the other methods; a possible explanation is that, although the MLMC method eliminates bias, it may introduce more randomness. Nevertheless, it is apparent that the MLMC methods find better variational parameters, benefiting from the unbiasedness of the gradient estimators.
\begin{figure}[htbp]
\centering
\caption{Comparison of VBIL, VBSL and unbiased MLMC.\label{fig:NLM}}
\subfigure[posterior distribution of $\theta$]{\includegraphics[width=4.5cm]{thetadistribution}}
\subfigure[ELBO]{\includegraphics[width=7.5cm]{NLMLB}}
\end{figure}
\subsubsection{The g-and-k model}
The univariate $g$-and-$k$ distribution is a flexible unimodal distribution that
is able to describe data with significant amounts of skewness and kurtosis \cite{RM:2002}. Its density function has no closed form, but
is alternatively defined through its quantile function as:
$$Q(q|\theta)=A+B\left[1+0.8\frac{1-\exp\{-gz(q)\}}{1+\exp\{-gz(q)\}}\right](1+z(q)^2)^kz(q),$$
where $\theta=(A,B,g,k)$, $B>0, k>-1/2$, and $z(q)=\Phi^{-1}(q)$ denotes the inverse CDF of $N(0,1)$. If $g=k=0$, it reduces to a normal distribution. As shown in \cite{AKM:2009}, ABC is a good candidate for handling this model.
Suppose that the observations $y^*$ of length $T=1000$ are independently generated from the $g$-and-$k$ distribution with parameter $\theta_0=(3,1,2,0.5)$. We use the unconstrained parameter
$\tilde{\theta}=(A,\log B,g,\log(k+1/2))$ in the VB and take the prior density for $\tilde{\theta}$ as $N(0,4\cdot I_4)$. As suggested in \cite{DP:2011}, we take the summary statistics $\mathcal{S}(y)=(\mathcal{S}_A,\mathcal{S}_B,\mathcal{S}_g,\mathcal{S}_k)$ with
\begin{align*}
\mathcal{S}_A &= E_4,\\
\mathcal{S}_B &= E_6-E_2,\\
\mathcal{S}_g &= (E_6+E_2-2E_4)/\mathcal{S}_B,\\
\mathcal{S}_k &= (E_7-E_5+E_3-E_1)/\mathcal{S}_B,
\end{align*}
where $E_1\le E_2\le\dots\le E_7$ are the octiles of $y$.
Note that $\mathcal{S}(y)$ is not differentiable, and thus the RP method cannot be applied. The observed summary statistics are $\mathcal{S}(y^*)=(3.05,1.63,1.58,0.42)$.
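A minimal Python sketch of the data-generating step and of the octile-based summary statistics is given below (an illustration using SciPy's normal quantile function, not the code used in the experiments):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def gk_sample(theta, T, rng):
    # Draw T observations from the g-and-k distribution via its quantile function
    A, B, g, k = theta
    z = norm.ppf(rng.uniform(size=T))          # z(q) = Phi^{-1}(q), q ~ U(0,1)
    return (A + B * (1 + 0.8 * (1 - np.exp(-g * z)) / (1 + np.exp(-g * z)))
              * (1 + z**2) ** k * z)

def gk_summaries(y):
    # Octile-based summary statistics (S_A, S_B, S_g, S_k)
    E = np.quantile(y, np.arange(1, 8) / 8)    # octiles E_1 <= ... <= E_7
    S_A = E[3]
    S_B = E[5] - E[1]
    S_g = (E[5] + E[1] - 2 * E[3]) / S_B
    S_k = (E[6] - E[4] + E[2] - E[0]) / S_B
    return np.array([S_A, S_B, S_g, S_k])
\end{verbatim}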
We compare MLMC and VBIL for a large bandwidth ($h=5$) and a small bandwidth ($h=0.5$), and look at the effect of the bandwidth. The benchmark is a set of ABC acceptance-rejection (ABC-AR) samples of size $10^4$. When $h=5$, the acceptance rate of ABC sampling is about $18\%$, while when $h=0.5$ it reduces to $1\%$. We take $\alpha=1.3$ when $h=5$ and $\alpha=1.1$ when $h=0.5$, since a smaller $h$ affects the smoothness of the inner function.
\begin{figure}[htbp]
\centering
\caption{Comparison of marginal posterior distributions.\label{fig:g-kdistribution}}
\subfigure[$h=5$]{\includegraphics[width=13cm]{g-kh5distribution}}
\subfigure[$h=0.5$]{\includegraphics[width=13cm]{g-kh05distribution}}
\end{figure}
\Cref{fig:g-kdistribution} shows the variational posterior distributions of VBIL and unbiased MLMC. As we can see, unbiased MLMC-based VB approximates the ABC posterior well, particularly for the marginal distributions of $A$ and $g$. Again, as shown in \Cref{fig:g-kLB}, unbiased MLMC leads to a larger ELBO.
\begin{figure}[htbp]
\centering
\caption{Comparison of ELBOs. \label{fig:g-kLB}}
\subfigure[$h=5$]{\includegraphics[width=6.2cm]{g-kh5LB}}
\subfigure[$h=0.5$]{\includegraphics[width=6.2cm]{g-kh05LB}}
\end{figure}
The benefit of using RQMC in MLMC is minor for this example (the results are similar to \Cref{fig:g-kdistribution,fig:g-kLB}, and are thus omitted to save space). The reason is two-fold. First, the inner simulation requires $1000$-dimensional RQMC points, which is a rather large dimension. Second, the summary statistics are functions of sample quantiles, which are not smooth enough. Due to the high dimensionality and the lack of smoothness of the integrands, RQMC may not perform as well as expected. To overcome this, one may design dimension reduction techniques for handling the integrand in \cref{eq:abc}.
\subsection{Generalized linear mixed models}
Generalized linear mixed models (GLMMs) use a vector of random effects $\alpha_i$ to account for the dependence between the observations $y_i=\{y_{ij},j=1,\dots,n_i\}$ measured on the same individual $i$. The joint likelihood function of the model parameters $\theta$ and the random effects $\alpha=(\alpha_1,\dots,\alpha_n)$ is $p(y^*,\alpha|\theta)=\prod_{i=1}^n p(\alpha_i|\theta)p(y_i|\theta,\alpha_i)$, which is tractable. However, the likelihood function $p(y^*|\theta)=\prod_{i=1}^n p(y_i|\theta)$ with
$$p(y_i|\theta)=\int p(y_i|\theta,\alpha_i)p(\alpha_i|\theta)\mathrm{d} \alpha_i$$
is analytically intractable in most cases, while it can be easily estimated unbiasedly with importance sampling. Suppose $h_i(\alpha_i|y^*,\theta)$ is an importance density for $\alpha_i$, then the likelihood $p(y_i|\theta)$ is estimated unbiasedly by
$$\hat p_{N_i}(y_i|\theta)=\frac{1}{N_i}
\sum_{j=1}^{N_i}\frac{p(y_i|\alpha_i^{(j)},\theta)p(\alpha_i^{(j)}|\theta)}{h_i(\alpha_i^{(j)}|y^*,\theta)},$$
with $\alpha_i^{(j)}\stackrel{iid}{\sim} h_i(\cdot|y^*,\theta)$.
We now compare the VBIL method and the unbiased MLMC methods using the Six City data in \cite{Fitz:1993}. The data consist of binary responses $y_{ij}$ giving the wheezing status (1 if wheezing, 0 if not wheezing) of the $i$th child at time-point $j$, where $i=1,\dots,537$ indexes the $537$ children and $j=1,2,3,4$ denotes ages $7,8,9,10$ years, centered at 9 years. The covariates are $A_{ij}$, the age of the $i$th child at time-point $j$, and $S_i$, the maternal smoking status of the $i$th child (0 or 1). We consider the logistic regression model with a random intercept $y_{ij}|\beta,\alpha\sim \text{Binomial}(1,p_{ij})$, where $\text{logit}(p_{ij})=\beta_1+\beta_2A_{ij}+\beta_3S_i+\alpha_i$ with $\alpha_i\sim N(0,\tau^2)$. The parameters of this model are $\theta=(\beta,\tau^2)$. Then the likelihood function is given by
$$p(y^*|\theta)=\prod_{i=1}^{537}\int\prod_{j=1}^4 \frac{\exp\{y_{ij}(\beta_1+\beta_2A_{ij}+\beta_3S_i+\alpha_i)\}}{1+\exp\{\beta_1+\beta_2A_{ij}+\beta_3S_i+\alpha_i\}}
\cdot \frac{1}{\sqrt{2\pi\tau^2}}\exp\{-\frac{\alpha_i^2}{2\tau^2}\}\mathrm{d}\alpha_i.$$
A normal prior $N(0,50I_3)$ is taken for $\beta$ with a $\text{Gamma}(1,0.1)$ prior for $\tau$, the square root of $\tau^2$.
We set the variational distribution $q_\lambda(\theta)$ to be a 4-dimensional normal $N(\mu,\Sigma)$, where we let $(\theta_1,\theta_2,\theta_3,\theta_4)$ denote $(\beta_1,\beta_2,\beta_3,\log\tau^2)$, which means that the variational distribution of $\beta$ is a 3-dimensional normal distribution and that of $\tau^2$ is log-normal. This example was also investigated in \cite{tran:2017}. We focus on the RP method in this example because there is overwhelming empirical evidence in the literature showing the superiority of RP over SF. Some theoretical explanation can be found in \cite{Xu:2019}.
In the RP method, we take $\theta=(\beta,\log\tau^2)=\mu+L\bm u$, where $\bm u\sim N(0,I_4)$.
In the inner simulation, we take $x_i=(x_{i1},\dots,x_{i4})=(z_{i1},\dots,z_{i4})+\sqrt{\tau^2}\bm v_i\cdot1_4$, where $z_{ij}=\beta_1+\beta_2A_{ij}+\beta_3S_i$, $\bm v_i\sim N(0,1)$ and $1_4$ denotes the vector $(1,1,1,1)$.
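As an illustration of the inner simulation just described (sampling the random intercept from its prior rather than from an importance density), the following Python sketch computes the Monte Carlo estimate of one likelihood factor $p(y_i|\theta)$; the argument names are hypothetical:
\begin{verbatim}
import numpy as np

def lik_factor(y_i, A_i, S_i, beta, tau2, v):
    # y_i, A_i: length-4 arrays (responses and ages); S_i: smoking status (0/1)
    # v: array of N inner draws v_j ~ N(0,1), giving alpha = sqrt(tau2) * v_j
    z = beta[0] + beta[1] * A_i + beta[2] * S_i          # fixed-effect part z_{ij}
    eta = z[None, :] + np.sqrt(tau2) * v[:, None]        # N x 4 linear predictors
    p = 1.0 / (1.0 + np.exp(-eta))                       # logistic link
    lik = np.prod(p**y_i * (1 - p)**(1 - y_i), axis=1)   # Bernoulli product per draw
    return lik.mean()                                    # estimate of p(y_i | theta)
\end{verbatim}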
First, we examine the decay rates of $\mathbb{E}[\|\nabla_\lambda \Gamma(\bm u;\lambda)\Delta \tilde \psi_{\theta,\ell}\|_2^2]$ for the MLMC-based gradient estimator and of $\mathbb{E}[|\Delta \psi_{\theta,\ell}|^2]$ for the MLMC-based ELBO estimator. We run the algorithms starting with $\mu=(0,0,0,0)^T,\Sigma=I_4$ and $M_0=16$, and compare the cases of using MC and RQMC in the inner simulation. To get accurate estimates of these quantities, we use RQMC in the outer sampling. As shown in \Cref{fig:norm}, we find that $r=1.52$ for the gradient estimator when RQMC is used in the inner simulation, while $r=1.43$ when MC is used. Also, RQMC leads to a larger $r=1.96$ for the ELBO estimator. When MC is used in the inner simulation, we take $\alpha=1.4$ to determine the probability distribution $w_\ell$, while we take $\alpha=1.5$ when RQMC is used. A larger $\alpha$ speeds up the VB algorithm. According to \cref{eq:cost}, RQMC reduces the cost by $16\%$ compared to MC.
\begin{figure}[htbp]
\centering
\caption{Tests of the decrease rates.\label{fig:norm}}
\subfigure[Gradient of ELBO]{\includegraphics[width=4.2cm]{L2norm}}
\subfigure[ELBO]{\includegraphics[width=4.2cm]{LBrate}}
\end{figure}
\begin{table}[htbp]
\scriptsize
\centering
\caption{Variances of unbiased MLMC-based gradient estimators for the initial variational parameters. `I' is short for `Inner', `O' for `Outer', `M' for `MC' and `Q' for `RQMC'. }
\label{tab:variance}
\begin{tabular}{ccccc ccccc ccccc}
\toprule
I/O&$\beta_1$&$\beta_2$&$\beta_3$&$\tau^2$&$L_{11}$&$L_{21}$&$L_{31}$&$L_{41}$&$L_{22}$&$L_{23}$&$L_{24}$&$L_{33}$&$L_{34}$&$L_{44}$\\\hline
M/M&152&164&30&41&253&182&54&118&226&28&74&55&99&32 \\
M/Q&69&97&11&30&171&142&26&99&215&18&89&29&87&30 \\
Q/M&111&84&17&22&260&148&34&49&162&31&47&27&33&17 \\
Q/Q&82&80&13&40&170&146&30&93&161&24&86&20&30&11\\
\bottomrule
\end{tabular}
\end{table}
The results in \Cref{fig:norm} show that RQMC can improve the sampling accuracy in the inner simulation, yielding a larger $r$, but the effect of using RQMC in the outer simulation is still unclear. To this end, we estimate the variance of the unbiased MLMC-based gradient estimator for the initial variational parameters based on $50$ repetitions. The empirical variances are shown in \Cref{tab:variance}. It can be seen that using RQMC in either the inner or the outer simulation reduces the variances for most parameters. The variance reduction of the gradient estimates should help to improve VB.
\begin{figure}[htbp]
\centering
\caption{Comparison of VBIL and four unbiased MLMC methods: MC+MC, MC+RQMC, RQMC+MC and RQMC+RQMC.\label{fig:RPmethod}}
\subfigure[Marginal posterior distributions]{\includegraphics[width=13cm]{RPall}}
\subfigure[ELBO]{\includegraphics[width=13cm]{RPLB}}
\end{figure}
Finally, we compare VBIL with four unbiased MLMC methods: MC+MC, MC+RQMC, RQMC+MC and RQMC+RQMC, where, for example, MC+RQMC means that MC is used in the outer simulation and RQMC in the inner simulation, and so on. We take $M_0=8$ for the unbiased MLMC methods and $N=16$ for VBIL. The R package `rstanarm' is used to sample from $p(\theta|y^*)$ as a benchmark; it performs posterior analysis for models with dependent data such as GLMMs. As shown in Figure \ref{fig:RPmethod}, the unbiased MLMC-based methods show great consistency with the benchmark distribution (labeled as RS). On the other hand, all unbiased MLMC methods lead to larger ELBOs than VBIL.
\section{Concluding remarks}\label{sec:concl}
In this paper, we developed a general method to deal with VB problems with intractable likelihoods. The central point is to find an unbiased gradient estimator in stochastic gradient-based optimization. We achieve this goal by designing unbiased nested MLMC estimators for both the SF and RP gradients. Compared to VBIL, our proposed methods find a better fit of the posterior distribution and a tighter estimate of the marginal likelihood. Compared to VBSL, our methods work with general distributions of summary statistics. To improve the sampling efficiency, we incorporated RQMC in the inner and the outer simulations. Using RQMC in the inner simulation can reduce the average cost of unbiased MLMC. Using RQMC in the outer simulation can reduce the variance of the gradient estimator.
Both aspects speed up the VB algorithm.
The fast and reliable discrimination of chaotic and ordered orbits
of conservative dynamical systems is of crucial interest in many
problems of nonlinear science. By the term conservative we
characterize here systems which preserve phase space volume (or some
other positive function of the phase space variables) during time
evolution. Important examples in this class are $N$ -- degree of
freedom (dof) Hamiltonian systems and $2N$ -- dimensional symplectic
maps. As is well known, in such systems chaotic and regular orbits
are distributed in phase space in very complicated ways, which often
makes it very difficult to distinguish between them. In recent
years, several methods have been developed and applied to various
problems of physical interest in an effort to distinguish
efficiently between ordered and chaotic dynamics. Their
discrimination abilities and overall performance, however, varies
significantly, making some of them more preferable than others in
certain situations.
One of the most common approaches is to extract information about
the nature of a given orbit from the dynamics of small deviations,
evaluating the maximal Lyapunov Characteristic Exponent (LCE)
$\sigma_1$. If $\sigma_1 > 0$ the orbit is characterized as chaotic.
The theory of Lyapunov exponents was first applied to characterize
chaotic orbits by Oseledec \cite{O68}, while the connection between
Lyapunov exponents and exponential divergence of nearby orbits was
given in \cite{BGS76,P77}. Benettin et al. \cite{BGGS80a} studied
the problem of the computation of all LCEs theoretically and
proposed in \cite{BGGS80b} an algorithm for their efficient
numerical computation. In particular, $\sigma_1$ is computed as the
limit for $t \rightarrow \infty$ of the quantity:
\begin{equation}
L_1(t)=\frac{1}{t}\, \ln \frac{\|\vec{w}(t)\|}{\|\vec{w}(0)\|}\,
,\, \mbox{i.e.}\,\, \sigma_1 = \lim_{t\rightarrow \infty} L_1 (t) \,
, \label{eq:lyap1_def}
\end{equation}
where $\vec{w}(0)$, $\vec{w}(t)$ are deviation vectors from a given
orbit, at times $t=0$ and $t>0$ respectively. It has been shown that
the above limit is finite, independent of the choice of the metric
for the phase space and converges to $\sigma_1$ for almost all
initial vectors $\vec{w}(0)$ \cite{O68,BGGS80a,BGGS80b}. Similarly,
all other LCEs, $\sigma_2$, $\sigma_3$ etc. are computed as limits
for $t \rightarrow \infty$ of some appropriate quantities, $L_2(t)$,
$L_3(t)$ etc. (see for example \cite{BGGS80b}). We note here that
throughout the paper, whenever we need to compute the values of the
maximal LCE or of several LCEs we apply respectively the algorithms
proposed by Benettin et al. \cite{BGS76,BGGS80b}.
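As an illustration of this standard procedure, a minimal Python sketch of the finite-time estimate $L_1(t)$ with periodic renormalization of a single deviation vector is given below (the one-step propagator of the orbit and its tangent dynamics is problem-specific and assumed to be given):
\begin{verbatim}
import numpy as np

def max_lce(step, x0, w0, dt, n_steps):
    # Finite-time estimate L_1(t) of the maximal Lyapunov exponent.
    # step(x, w, dt) must advance the orbit x and the deviation vector w
    # by one time step dt (problem-specific and assumed given).
    x, w = np.array(x0, float), np.array(w0, float)
    s = 0.0
    for _ in range(n_steps):
        x, w = step(x, w, dt)
        norm = np.linalg.norm(w)
        s += np.log(norm)        # accumulate logarithms of the growth factors
        w /= norm                # renormalize to avoid numerical overflow
    return s / (n_steps * dt)    # L_1(t) with t = n_steps * dt
\end{verbatim}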
Over the years, several variants of this approach have been
introduced to distinguish between order and chaos such as: The Fast
Lyapunov Indicator (FLI) \cite{FLG97,FGL97,FLFF02,GLF02,B05}, the
Mean Exponential Growth of Nearby Orbits (MEGNO) \cite{CS00,CGS03},
the Smaller Alignment Index (SALI)
\cite{S01,Skokos_et._al_PTPS,Skokos_et._al_JPA}, the Relative
Lyapunov Indicator (RLI) \cite{SESF04}, as well as methods based on
the study of power spectra of deviation vectors \cite{VVT00} and
spectra of quantities related to these vectors
\cite{FFL93,LFD93,VC94}.
Recently, the SALI method was generalized to yield a much more
comprehensive approach to study chaos and order in $2N$ --
dimensional conservative systems, called the GALI$_m$ indices
\cite{Skokos_et._al_GALI,AThesis}. These indices represent the
volume elements formed by $m$ deviation vectors ($2\leq m \leq 2N$)
about any reference orbit and have been shown to: (a) Distinguish
the regular or chaotic nature of the orbit faster than other
methods, (b) identify the dimensionality of the space of regular
motion and (c) predict the slow (chaotic) diffusion of orbits, long
before it is observed in the actual oscillations.
In the present paper, we improve the GALI method by introducing the
Linear Dependence Indices (LDI$_m$). The new indices retain the
advantages of the GALI$_m$ and display the same values as GALI, in
regular as well as chaotic cases. More importantly, however, the
computation of the LDI$_m$ is much faster in CPU time, especially if
the dimensionality of phase space becomes large ($N\gg10$). The main
purpose of this paper, therefore, is to strongly advocate the use of
LDI, for the most rapid and efficient study of the dynamics of multi
-- dimensional conservative systems.
For the computation of the LDI$_m$ we use information from the
evolution of $m\geq 2$ deviation vectors from the reference orbit,
as GALI does. However, while GALI requires the computation of many
$m\times m$ determinants at every time step
\cite{Skokos_et._al_GALI,AThesis} in order to evaluate the norm of
the corresponding wedge product, LDI achieves the same purpose
simply by applying Singular Value Decomposition (SVD) to the
$2N\times m$ matrix formed by the deviation vectors. LDI is then
computed as the product of the corresponding singular values of the
above matrix. This not only provides the same numerical values as
the corresponding GALI$_m$, it also requires much less CPU time.
The paper is organized as follows: In section \ref{definition_LDI}
we introduce the new index, explain in detail its computation and
justify its validity theoretically. In section \ref{FPU_example}, we
demonstrate the usefulness of the LDI method, by applying it to the
famous Fermi -- Pasta -- Ulam (FPU) lattice model of $N$ dof, for
small and large $N$. Finally, in section \ref{conclusions} we
present our conclusions, highlighting especially the advantages of
the new index.
\section{Definition of the Linear Dependence Index (LDI)}\label{definition_LDI}
Let us consider the $2N$ -- dimensional phase space of a Hamiltonian
system:
\begin{equation}\label{1}
H\equiv H(q_1(t),\ldots,q_N(t),p_1(t),\ldots,p_N(t))=E
\end{equation}
where $q_i(t),\;i=1,\ldots,N$ are the canonical coordinates,
$p_i(t),\;i=1,\ldots,N$ are the corresponding conjugate momenta and
$E$ is the total energy. The time evolution of an orbit $\vec{x}(t)$
of (\ref{1}) associated with the initial condition:
$$\vec{x}(t_{0})=(q_1(t_{0}),\ldots,q_N(t_{0}),p_1(t_{0}),\ldots,p_N(t_{0}))$$ at initial time $t_0$ is
defined as the solution of the system of $2N$ first order
differential equations (ODE):
\begin{equation}
\frac{dq_i(t)}{dt}=\frac{\partial H}{\partial
p_i(t)},\quad\frac{dp_i(t)}{dt}=-\frac{\partial H}{\partial
q_i(t)},\;i=1,\ldots,N.\label{2}
\end{equation}
Eqs. (\ref{2}) are known as Hamilton's equations of motion and the
reference orbit under study is the solution $\vec{x}(t)$ which
passes by the initial condition $\vec{x}(t_{0})$.
In order to define the Linear Dependence Index (LDI) we need to
introduce the variational equations. These are the corresponding
linearized equations of the ODE (\ref{2}), about the reference orbit
$\vec{x}(t)$ defined by the relation:
\begin{equation}\label{3}
\frac{d\vec{\upsilon_i}(t)}{dt}=\mathcal{J}(\vec{x}(t))\cdot\vec{\upsilon_i}(t),\;i=1,\ldots,2N
\end{equation}
where $\mathcal{J}(\vec{x}(t))$ is the Jacobian of the right hand
side of the system of ODEs (\ref{2}) calculated about the orbit
$\vec{x}(t)$. Vectors
$\vec{\upsilon_i}(t)=(\upsilon_{i,1}(t),\ldots,\upsilon_{i,2N}(t)),\;i=1,\ldots,2N$
are known as deviation vectors and belong to the tangent space of
the reference orbit at every time $t$.
We then choose $m\in[2,2N]$ initially linearly independent deviation
vectors $\vec{\upsilon_m}(0)$ and integrate equation (\ref{3})
together with the equations of motion (\ref{2}). These vectors form
the columns of a $2N\times m$ matrix $\mathcal{A}(t)$ and are taken
to lie along the orthogonal axes of a unit ball in the tangent space
of the orbit $\vec{x}(t)$ so that $\vec{\upsilon}_m(0)$ are
orthonormal. Thus, at every time step, we check the linear
dependence of the deviation vectors by performing Singular Value
Decomposition on $\mathcal{A}(t)$, decomposing it as follows:
\begin{eqnarray}\label{SVD_based_theorem}
\mathcal{A}(t)=U(t)\cdot W(t)\cdot V(t)^{\top},
\end{eqnarray}
where $U(t)$ is a $2N\times m$ matrix, $V(t)$ is an $m\times m$ matrix,
and $W(t)$ is a diagonal $m\times m$ matrix, whose entries
$w_1(t),\ldots,w_m(t)$ are zero or positive real numbers. They are
called the {\it singular values} of $\mathcal{A}(t)$. The columns of
$U(t)$ are orthonormal and $V(t)$ is orthogonal, so that $U^{\top}(t)\cdot
U(t)=V^{\top}(t)\cdot V(t)=I$, where $I$ is the
$m\times m$ identity matrix.
We next define the generalized Linear Dependence Index of order $m$
or $\mathrm{LDI}_m$ as the function:
\begin{eqnarray}\label{LDI_definition}
\mathrm{LDI}_m(t)=\prod_{j=1}^m w_j(t)
\end{eqnarray}
with $m=2,3,\ldots,2N$, where $N$ is the number of dof of (\ref{1}).
The reason for defining $\mathrm{LDI}$ through relation
(\ref{LDI_definition}) is the following: According to
\cite{Skokos_et._al_GALI} it is possible to determine whether an
orbit is chaotic or lies on a $d$ -- dimensional torus by choosing
$m$ deviation vectors and computing the GALI$_m$ index. If GALI$_m
\approx\mbox{const.}$ for $m=2,3,\ldots,d$ and decays by a power law
for $m>d$, the motion lies on a $d$ -- dimensional torus. If, on the
other hand, all GALI$_m$ indices decay exponentially the motion is
chaotic. Thus, to characterize orbits we often have to compute
GALI$_m$ indices for $m$ as high as $N$ or higher.
A serious limitation appears, of course, in the case of Hamiltonian
systems of large $N$, where GALI$_N(t)$ involves the computation of
$\begin{pmatrix}
2N \\
N \\
\end{pmatrix}=\frac{(2N)!}{(N!)^2}$
determinants at every time step. For example, in a Hamiltonian
system of $N=15$ dof, $\mathrm{GALI}_{15}(t)$ requires, for a given
orbit, the computation of $155117520$ determinants at every time
step while $\mathrm{LDI(t)=LDI_{15}}(t)$ requires only the
application of the SVD method for a $30\times 15$ matrix
$\mathcal{A}(t)$!
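In practice, the computation of $\mathrm{LDI}_m$ amounts to a single call to a standard SVD routine. A minimal Python sketch reads (an illustration, not the code used for the numerical results below):
\begin{verbatim}
import numpy as np

def ldi(deviation_vectors):
    # LDI_m = product of the singular values of the 2N x m matrix A(t)
    # whose columns are the m deviation vectors.
    A = np.column_stack(deviation_vectors)
    w = np.linalg.svd(A, compute_uv=False)   # singular values w_1 >= ... >= w_m
    return np.prod(w)
\end{verbatim}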
Clearly, at every point of the orbit $\vec{x}(t)$ the $2\leq
m\leq2N$ deviation vectors span a subspace of the $2N$ --
dimensional tangent space of the orbit, which is isomorphic to the
Euclidean $2N$ -- dimensional phase space of the Hamiltonian system
(\ref{2}). Thus, if $k$ of the $m$ singular values
$w_i(t),\;i=1,\ldots,m,$ are equal to zero, then $k$ columns of the matrix
$\mathcal{A}(t)$ of deviation vectors are linearly dependent on
the remaining ones and the subspace spanned by the column vectors of
$\mathcal{A}(t)$ is $d\,(=m-k)$ -- dimensional.
From a more geometrical point of view, let us note that the $m$
variational equations (\ref{3}) combined with the equations of
motion (\ref{2}) describe the evolution of an initial $m$ --
dimensional unit ball into an $m$ (or less) dimensional ellipsoid in
the tangent space of the Hamiltonian flow. Now, the deviation
vectors $\vec{\upsilon_i}(t)$ forming the columns of
$\mathcal{A}(t)$ do not necessarily coincide with the ellipsoid's
principal axes. On the other hand, in the case of a chaotic orbit,
every generically chosen initial deviation vector has a component in
the direction of the maximum (positive) Lyapunov exponent, so that
all initial tangent vectors will, in the long run, be aligned with
the longest principal axis of the ellipsoid. The key idea behind the
LDI method is to take advantage of this fact to overcome the costly
calculation of the many determinants arising in the GALI$_m$ method
and characterize a reference orbit as chaotic or not, via the trends
of the stretching and shrinking of the $m$ principal axes of the
ellipsoid.
Thus, LDI solves the problem of orbit characterization by finding
new orthogonal axes for the ellipsoid at every time step and taking
advantage of the SVD method. Since the matrix $V$ in (\ref{SVD_based_theorem}) is
orthogonal, we have $V^{\top}=V^{-1}$, so that equation
(\ref{SVD_based_theorem}) gives:
\begin{equation}\label{7}
\mathcal{A}_{2N\times m}\cdot V_{m\times m}=U_{2N\times m}\cdot
W_{m\times m}
\end{equation}
at every time step. Geometrically, Eq. (\ref{7}) implies that
$\mathcal{A}(t)$ maps the column vectors of matrix $V$ onto the principal
semi-axes of an ellipsoid, whose $i^{\mbox{th}}$ principal axis in the
tangent space of the reference orbit is given by:
\begin{equation}
w_i\cdot \textbf{$u_i$}
\end{equation}
where $w_i$ are the singular values of matrix $\mathcal{A}(t)$ and
$\textbf{$u_i$}$ is the $i^{\mbox{th}}$ column of matrix $U(t)$.
This is, in fact, the content of a famous theorem stating that:
\begin{theorem}(\cite{Alligood})
Let $\mathcal{A}$ be a $2N\times m$ matrix, and let $U$ and $W$ be
matrices resulting from the SVD of $\mathcal{A}$. Then, the columns
of $\mathcal{A}$ span an ellipsoid whose $i^{\mbox{th}}$ principal
axis is $w_i\cdot \textbf{$u_i$}$, where
$W=diag(w_1,w_2,\ldots,w_m)$ (singular values) and
$\{\textbf{$u_i$}\}_{i=1}^{m}$ are the columns of $U$.
\end{theorem}
According to this theorem, the principal axes of the ellipsoid
created by the time evolution of equation (\ref{3}) in the tangent
space of the reference orbit $\vec{x}(t)$ are, at every time $t$,
stretched or shrunk, according to whether the corresponding singular value satisfies $w_i>1$
or $w_i<1$ respectively, for $i=1,\ldots,m$.
If it so happens that $k$ of the singular values $w_i$ become zero as $t$
grows, then the corresponding principal axes of the ellipsoid vanish
and the ellipsoid is less than $m$ -- dimensional in the tangent
space of the reference orbit because the corresponding deviation
vectors of matrix $\mathcal{A}$ have become linearly dependent.
Thus, two distinct cases exist depending on whether the reference
orbit $\vec{x}(t)$ is chaotic or ordered:
\begin{enumerate}
\item{If the orbit is chaotic, the $m$ deviation vectors become
linearly dependent so that $\mathrm{GALI_m}(t)\rightarrow 0$
exponentially \cite{Skokos_et._al_GALI}. Consequently, at least
one of the singular values $w_i(t),i=2,\ldots,m$ becomes zero and
$\mathrm{LDI}_m(t)=\prod_{j=1}^m w_j(t)\rightarrow 0$ (also
$\mathrm{LDI}(t)\rightarrow 0$) for all $m\geq i$}.
\item{If the orbit is ordered (i.e. quasiperiodic) lying on a $d$ -- dimensional torus,
there is no reason \cite{Skokos_et._al_GALI,Skokos_et._al_PTPS} for
the $m$ deviation vectors to become linearly dependent, as long as
$m\leq d$. No principal axis of the ellipsoid is eliminated, since
all singular values $w_i,\;i=1,\ldots,m$ are nonzero and
$\mathrm{LDI_m}(t)$ fluctuates around nonzero positive values. On
the other hand, for $m> d$, the singular values
$w_i,\;i=d+1,\ldots,m$ tend to zero following a power law
\cite{Skokos_et._al_GALI}, since $m-d$ deviations will eventually
become linearly dependent on those spanning the $d$ -- dimensional
tangent space of the torus
\cite{Skokos_et._al_GALI,Skokos_et._al_PTPS} (see the numerical sketch after this list)}
\end{enumerate}
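As a rough illustration of how this dichotomy is exploited in practice, the hypothetical helper below (not part of the original computations) scans a time series of $\mathrm{LDI}_m(t)$ and flags the orbit as chaotic once the index drops below a small cut-off; the value $10^{-8}$ is the threshold used in Section \ref{FPU_example}. For an ordered orbit one would additionally verify that any decay observed for $m>d$ is consistent with a power law rather than an exponential.
\begin{verbatim}
import numpy as np

def classify_orbit(times, ldi_values, threshold=1e-8):
    # times, ldi_values: arrays of equal length produced by the integration.
    # Chaotic orbits drive LDI_m(t) exponentially to numerical zero;
    # ordered orbits keep it fluctuating around a positive value (m <= d).
    ldi_values = np.asarray(ldi_values)
    below = np.nonzero(ldi_values < threshold)[0]
    if below.size > 0:
        return "chaotic", times[below[0]]   # first time the cut-off is reached
    return "ordered so far", times[-1]
\end{verbatim}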
In the remainder of the paper, we apply the LDI indices and
numerically demonstrate that:
\begin{equation}\label{LDI_m}
\mathrm{LDI}_m=\mathrm{GALI}_m,\quad m=2,\ldots,2N
\end{equation}
for the same choice of $m$ initially linearly independent deviation
vectors $\vec{\upsilon_i}(0),\;i=1,\ldots,m$. In particular, we
present evidence that supports the validity of relation
(\ref{LDI_m}) and exploit it to identify rapidly and reliably
ordered and chaotic orbits in a $1$ -- dimensional, $N$ degree of
freedom Fermi -- Pasta -- Ulam lattice under fixed and periodic
boundary conditions
\cite{Antonopoulos_et_al_IJBC,Antonopoulos_et._al_PRE}. We propose
that the validity of (\ref{LDI_m}) is due to the fact that both
quantities measure the volume of the same ellipsoid, the difference
being that, in the case of the LDI, the principal axes of the
ellipsoid are orthogonal. As we have not proved it, however, this is
a point to which we intend to return in a future publication.
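At the level of linear algebra, the identity (\ref{LDI_m}) is easy to test numerically: if GALI$_m$ is computed as the norm of the wedge product of the normalized deviation vectors (cf.\ \cite{Skokos_et._al_GALI}), it equals $\sqrt{\det(\mathcal{A}^{\top}\mathcal{A})}$ when $\mathcal{A}$ has those normalized vectors as columns, while $\mathrm{LDI}_m$ is the product of the singular values of $\mathcal{A}$, and the two expressions coincide for any such matrix. A minimal NumPy check (again only an illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, m = 15, 8
A = rng.standard_normal((2 * N, m))
A /= np.linalg.norm(A, axis=0)          # normalize the deviation vectors

gali = np.sqrt(np.linalg.det(A.T @ A))  # norm of the wedge product of the columns
ldi = np.prod(np.linalg.svd(A, compute_uv=False))
print(abs(gali - ldi))                  # agrees up to round-off error
\end{verbatim}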
\section{Application to the FPU Hamiltonian
System}\label{FPU_example}
In this section, we apply the LDI method to the case of a
multidimensional Hamiltonian system. Our aim is to compare
its performance and effectiveness in distinguishing between ordered
and chaotic behavior with those of the Lyapunov exponents as well as the
SALI and GALI methods.
We shall use the $N$ dof Hamiltonian system of the $1$D lattice of
the Fermi -- Pasta -- Ulam (FPU) $\beta$ -- model. The system is
described by a Hamiltonian function containing quadratic and quartic
nearest neighbor interactions:
\begin{equation}\label{FPU_Hamiltonian}
H_N=\frac{1}{2}\sum_{j=1}^{N}\dot{x}_{j}^{2}+\sum_{j=0}^{N}\biggl
(\frac{1}{2}(x_{j+1}-x_{j})^2+\frac{1}{4}\beta(x_{j+1}-x_{j})^4\biggr)=E
\end{equation}
where $x_{j}$ is the displacement of the $j^{\mbox{th}}$ particle
from its equilibrium position, $\dot{x}_{j}$ is the corresponding
conjugate momentum, $\beta$ is a positive real constant and $E$ is
the constant energy of the system.
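For concreteness, the equations of motion obtained from (\ref{FPU_Hamiltonian}) are $\ddot x_j=(x_{j+1}-2x_j+x_{j-1})+\beta\bigl[(x_{j+1}-x_j)^3-(x_j-x_{j-1})^3\bigr]$. The sketch below (hypothetical helpers \texttt{fpu\_acceleration} and \texttt{fpu\_energy}, not our actual code) encodes them for the periodic chain, assuming the convention $x_{j\pm N}=x_j$; for fixed boundary conditions the end particles are instead coupled to the walls $x_0=x_{N+1}=0$. The variational equations for the deviation vectors are integrated alongside in the same manner.
\begin{verbatim}
import numpy as np

def fpu_acceleration(x, beta=1.0):
    # Accelerations of the FPU beta-chain with periodic boundary conditions:
    # xddot_j = (x_{j+1} - 2 x_j + x_{j-1})
    #           + beta*((x_{j+1} - x_j)**3 - (x_j - x_{j-1})**3)
    xp = np.roll(x, -1)   # x_{j+1}
    xm = np.roll(x, 1)    # x_{j-1}
    return (xp - 2.0 * x + xm) + beta * ((xp - x) ** 3 - (x - xm) ** 3)

def fpu_energy(x, xdot, beta=1.0):
    # Total energy of the periodic chain, used to fix E in the experiments.
    dx = np.roll(x, -1) - x
    return 0.5 * np.sum(xdot ** 2) + np.sum(0.5 * dx ** 2 + 0.25 * beta * dx ** 4)
\end{verbatim}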
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.85]{LE_N=4_nrg=2.EPS}
\includegraphics[scale=0.85]{SALI_GALI2_LDI2_N=4_nrg=2.EPS}
\includegraphics[scale=0.85]{GALI3_4_LDI3_4_N=4_nrg=2.EPS}
\includegraphics[scale=0.841]{GALI5_6_7_8_LDI5_6_7_8_N=4_nrg=2.EPS}
\end{center}
\caption{The case of an ordered orbit: (a) The time evolution of the
three maximal positive Lyapunov exponents. (b) The time evolution of GALI$_2$ and
LDI$_2$. (c) The time evolution of GALI$_3$, LDI$_3$ and GALI$_4$,
LDI$_4$. (d) The time evolution of GALI$_m$, LDI$_m$ with
$m=5,\ldots,8$ and the corresponding slopes of the fall to zero. In
all panels we have chosen a neighboring orbit at a distance
$\approx2.1$ from the OPM of the FPU Hamiltonian
(\ref{FPU_Hamiltonian}) with periodic boundary conditions for $N=4$
and $E=2$. All axes are logarithmic.} \label{fig_2}
\end{figure}
We start by focusing on an ordered case choosing a neighboring orbit
of the stable out of phase mode (OPM)
\cite{Budinsky,Poggi,Antonopoulos_et_al_IJBC}, which is a simple
periodic orbit of the FPU Hamiltonian (\ref{FPU_Hamiltonian}). This
solution exists for every $N$, for fixed as well as periodic
boundary conditions (PBC):
\begin{equation}\label{FPU_periodic_boundary_conditions_OPM}
x_{N+1}(t)=x_{1}(t),\;\forall t
\end{equation}
and is given by:
\begin{equation}\label{FPU_non_lin_mode_periodic_boundary_conditions_OPM}
x_{j}(t)=-x_{j+1}(t),\;\dot{x}_j(t)=0,\;j=1,\ldots,N,\forall t.
\end{equation}
In \cite{Budinsky,Antonopoulos_et_al_IJBC} the stability properties
of the OPM mode with periodic boundary conditions were determined
using Floquet theory and monodromy matrix analysis and the energy
range $0\leq E(N)\leq E_{c}^{\mathrm{OPM}}(N)$ over which it is
linearly stable was studied in detail.
It is known that for $N=4$ and $\beta=1$, the solution
(\ref{FPU_non_lin_mode_periodic_boundary_conditions_OPM}) with
the periodic boundary condition
(\ref{FPU_periodic_boundary_conditions_OPM}) is
destabilized for the first time at the critical energy
$E_c^{\mathrm{OPM}}\approx4.51$. Below this critical energy, the OPM
is linearly stable and is surrounded by a sizable island of
stability. By contrast, for $E>E_c^{\mathrm{OPM}}$, the OPM is
linearly unstable with no island of stability around it.
In Fig. \ref{fig_2}(a), we have calculated the three maximal
Lyapunov exponents of a neighboring orbit located at distance
$\approx2.1$ away from the OPM at $E=2<E_c^{\mathrm{OPM}}$. At this
energy, the OPM is linearly stable and thus all Lyapunov exponents
tend to zero following a simple power law. Next, in Fig.
\ref{fig_2}(b), we compute GALI$_2$ and LDI$_2$ for a final
integration time $t=8\times10^4$ and observe that GALI$_2$ and
LDI$_2$ practically coincide fluctuating around non zero values
indicating the ordered nature of the orbit. GALI$_2$ needs $558$
seconds of computation time while LDI$_2$ takes about $912$ seconds
on a Pentium 4 3.2GHz computer.
In Fig. \ref{fig_2}(c), we compute GALI$_3$, LDI$_3$ and GALI$_4$,
LDI$_4$ for the same energy and initial condition. We see once more
that GALI$_3$, LDI$_3$ and GALI$_4$, LDI$_4$ coincide fluctuating
around non zero values. The GALI$_3$ computation now takes about
$1044$ seconds, LDI$_3$ about $838$ seconds, GALI$_4$ needs $898$
seconds and LDI$_4$ $753$ seconds.
Finally, in Fig. \ref{fig_2}(d), we present GALI$_m$, LDI$_m$ with
$m=5,\ldots,8$ as a function of time for the same energy and initial
condition. We observe again that GALI$_m$ and LDI$_m$ with
$m=5,\ldots,8$, have the same values and tend to zero following a
power law of the form $t^{-2(m-N)}$. All these results are in
accordance with the formulae reported in \cite{Skokos_et._al_GALI}
and suggest that the torus on which the orbit lies is 4 --
dimensional, as expected from the fact that the number of dof of the
system is $N=4$.
In \cite{Antonopoulos_et_al_IJBC} we also studied the stability
properties of a different simple periodic orbit of FPU called the
SPO1 mode with fixed boundary conditions (FBC). Using monodromy
matrix analysis we found that for $N=5$ and $\beta=1.04$, the SPO1
mode with FBC is destabilized for the first time at the critical
energy $E_c^{\mathrm{SPO1}}\approx6.4932$.
Thus, in order to study a chaotic case where things are different,
we choose an initial condition at a distance of
$\approx1.27\times10^{-4}$ from the SPO1 orbit at the energy $E=11$,
where it is unstable.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.85]{LE_N=5_nrg=11.EPS}
\includegraphics[scale=0.85]{SALI_GALI2_LDI2_N=5_nrg=11.EPS}
\includegraphics[scale=0.85]{GALI5_LDI5_N=5_nrg=11.EPS}
\end{center}
\caption{The case of a chaotic orbit: (a) The time evolution of the
four maximal Lyapunov exponents. (b) The time evolution of GALI$_2$, LDI$_2$ follows
the approximate formula $e^{-(\sigma_1-\sigma_2)t}$ where
$\sigma_1\approx0.124$ (solid straight line) and
$\sigma_2\approx0.056$ for $t=71$. (c) The time evolution of
GALI$_5$, LDI$_5$ follows the approximate formula $\propto
e^{-[(\sigma_1-\sigma_2)+(\sigma_1-\sigma_3)+(\sigma_1-\sigma_4)+(\sigma_1-\sigma_5)]t}\approx
e^{-0.069t}$ (solid straight line) where $\sigma_1\approx0.197$,
$\sigma_2\approx0.095$, $\sigma_3\approx0.047$,
$\sigma_4\approx0.026$ and $\sigma_5\approx0.022$ for $t\approx44$.
We have used, in all figures, the same orbit of a distance of
$1.27\times10^{-4}$ from the SPO1 of the FPU Hamiltonian system
(\ref{FPU_Hamiltonian}) with fixed boundary conditions for $N=5$ and
$E=11$.} \label{fig_3}
\end{figure}
In Fig. \ref{fig_3}(a), we calculate Lyapunov exponents~of the above-mentioned
orbit and find that the four maximal Lyapunov exponents~tend to positive values.
This is strong evidence that the nature of the orbit is chaotic.
Next, in Fig. \ref{fig_3}(b) we calculate GALI$_2$ and LDI$_2$ up to
$t=1200$. We see the indices again coincide and tend to zero as
$\propto e^{-(\sigma_1-\sigma_2)t}$ (solid straight line), as
predicted by our theory \cite{Skokos_et._al_JPA,Skokos_et._al_GALI}.
In this figure, we find $\sigma_1\approx0.124$ and
$\sigma_2\approx0.056$ for time $t=71$. The corresponding CPU time
required for the calculation of all indices does not differ
significantly, as they become quite small in magnitude, rather
quickly.
Nevertheless, LDI$_2$ requires less CPU time than GALI$_2$. In Fig.
\ref{fig_3}(c), we calculate GALI$_5$ and LDI$_5$ for the same energy
and initial condition as in the previous panels. We observe now that
GALI$_5$ and LDI$_5$ coincide falling to zero as GALI$_5\propto
e^{-[(\sigma_1-\sigma_2)+(\sigma_1-\sigma_3)+(\sigma_1-\sigma_4)+(\sigma_1-\sigma_5)]t}$
(solid straight line) where $\sigma_1\approx0.197$,
$\sigma_2\approx0.095$, $\sigma_3\approx0.047$,
$\sigma_4\approx0.026$ and $\sigma_5\approx0.022$ for $t\approx44$.
Clearly, GALI$_5$ and LDI$_5$ distinguish the chaotic character of
the orbit faster than GALI$_2$ or LDI$_2$. This is so, because
GALI$_2$ or LDI$_2$ reaches the threshold $10^{-8}$
\cite{Skokos_et._al_PTPS,Skokos_et._al_JPA,Skokos_et._al_GALI} for
$t\approx750$ while GALI$_5$ and LDI$_5$ for $t\approx35$! The CPU
times required for the calculation of GALI$_5$ and LDI$_5$ up to
$t=80$ are approximately $1.5$ seconds each.
Thus, we conclude from these results that the LDI method performs at
least as well as the GALI, predicting correctly the ordered or
chaotic nature of orbits in Hamiltonian systems for low dimensions,
i.e. at 2, 4 and 5 degrees of freedom. However, in higher
dimensional cases, GALI indices become very impractical as they
demand the computations of millions of determinants at every time
step making the LDI method much more useful.
In order to show the advantages of the LDI method concerning the CPU
time, we repeat the above analysis for the same Hamiltonian system
(\ref{FPU_Hamiltonian}), but now for $N=15$ and energy $E=2$, and
for an initial condition very close to the unstable SPO1
\cite{Antonopoulos_et_al_IJBC}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.845]{LE_N=15_nrg=2.EPS}
\includegraphics[scale=0.845]{GALI8_LDI8_N=15_nrg=2.EPS}
\end{center}
\caption{(a) The time evolution of the five positive Lyapunov exponents. (b) The
time evolution of the GALI$_8$, LDI$_8$ follows the approximate
formula $\propto
e^{-[(\sigma_1-\sigma_2)+(\sigma_1-\sigma_3)+\ldots+(\sigma_1-\sigma_8)]t}\approx
e^{-0.385t}$ (solid straight line) where $\sigma_1\approx0.061$,
$\sigma_2\approx0.011$, $\sigma_3\approx0.006$,
$\sigma_4\approx0.005$, $\sigma_5\approx0.005$,
$\sigma_6\approx0.004$, $\sigma_7\approx0.004$ and
$\sigma_8\approx0.004$ for time $t\approx141$. In all panels we have
used initial conditions at a distance of $9\times10^{-5}$ from the
SPO1 orbit of Hamiltonian system (\ref{FPU_Hamiltonian}) with fixed
boundary conditions, $N=15$ and $E=2$.} \label{fig_4_LDI}
\end{figure}
In \cite{Antonopoulos_et_al_IJBC} it has also been shown that for
$N=15$ and $\beta=1.04$, the SPO1 with fixed boundary conditions
destabilizes at the critical energy $E_c\approx1.55$. Thus, for
energies smaller than $E_c$, SPO1 is linearly stable, while for
$E>E_c$ it is unstable and is surrounded by a chaotic region.
In Fig. \ref{fig_4_LDI}(a) we depict the time evolution of the five
maximal Lyapunov exponents~which converge to positive values for high enough $t$
suggesting that the neighboring orbit is chaotic. In the second
panel of the same figure we present the evolution of GALI$_8$ and
LDI$_8$ together with the approximate exponential law. We remark
once more that the values of the corresponding indices coincide
until they become numerically zero. More interestingly, the CPU time
required for the calculation of GALI$_8$ up to $t\approx100$ is about
$186$ seconds while for the LDI$_8$ it takes only one second! This
difference is very important, showing why LDI is preferable compared
to the corresponding GALI~index in Hamiltonian systems of many
degrees of freedom.
\section{Conclusions}\label{conclusions}
In this paper we have introduced a new method for distinguishing
quickly and reliably between ordered and chaotic orbits of
multidimensional Hamiltonian systems and argued about its validity
justifying it in the ordered and chaotic case. It is based on the
recently introduced theory of the Generalized Alignment Indices
(GALI). Following this theory, the key point in the distinction
between order and chaos is the linear dependence (or independence)
of deviation vectors from a reference orbit. Consequently, the
method of LDI takes advantage of this property and analyzes $m$
deviation vectors using Singular Value Decomposition to decide
whether the reference orbit is chaotic or ordered. If the orbit
under consideration is chaotic then the deviation vectors are
aligned with the direction of the maximal Lyapunov exponent and thus
become linearly dependent. On the other hand, if the reference orbit
is ordered then there is no unstable direction and any $m=1,2,...,d\leq
N$ deviation vectors remain linearly independent. As a consequence, the
LDI of order $m$ (LDI$_m$) either becomes zero if the reference
orbit is chaotic, or fluctuates around non zero values if the
orbit is ordered and $m\leq d$.
After introducing the new method, we presented strong numerical
evidence about its validity and efficiency in the interesting case
of multidimensional Hamiltonian systems. A first main result is
that GALI$_m$ and LDI$_m$ coincide numerically for the same number $m$
of deviation vectors and for the same reference orbit.
Moreover, it follows that it is preferable to use the LDI method
rather than the equivalent GALI method especially in the
multidimensional case of Hamiltonian systems, since the LDI needs
considerably less CPU time than the corresponding GALI method for
the same number of deviation vectors.
\section{Acknowledgements}
This work was partially supported by the European Social Fund (ESF),
Operational Program for Educational and Vocational Training II
(EPEAEK II) and particularly the Program PYTHAGORAS II. We thank Dr.
Charalambos Skokos and Miss Eleni Christodoulidi for very fruitful
discussions on the comparison between the GALI and LDI indices.
\subsection{Main results}
This paper is the follow-up of \cite{Two-Green-interior}, in which we proved the existence of {the} two-curve Green's function for $2$-SLE$_\kappa$ at an interior point, and obtained the formula for the Green's function up to a multiplicative constant. In the present paper, we will study the case when the interior point is replaced by a boundary point.
As a particular case of multiple SLE$_\kappa$, a $2$-SLE$_\kappa$ consists of two random curves in a simply connected domain connecting two pairs of boundary points (more precisely, prime ends), which satisfy the property that, when any one curve is given, the conditional law of the other curve is that of a chordal SLE$_\kappa$ in a complement{ary} domain of the first curve.
The two-curve Green's function of a $2$-SLE$_\kappa$ is {defined to be} the rescaled limit of the probability that the two curves in the $2$-SLE$_\kappa$ both approach a marked point in $\overline D$. More specifically, it was proved in \cite{Two-Green-interior} that, for any $\kappa\in(0,8)$, if $(\eta_1,\eta_2)$ is a $2$-SLE$_\kappa$ in $D$, and $z_0\in D$, then the limit
\begin{equation} G(z_0):=\lim_{r\to 0^+} r^{-\alpha}\mathbb{P}[\dist(\eta_j,z_0)<r,j=1,2] \label{Gz0}\end{equation}
{is} a positive number, where the exponent $\alpha$ is $\alpha_0:=\frac{(12-\kappa)(\kappa+4)}{8\kappa}$. The limit $G(z_0)$ is called the (interior) two-curve Green's function for $(\eta_1,\eta_2)$. The paper \cite{Two-Green-interior} also derived the convergence rate and the exact formula for $G(z_0)$ up to an unknown constant.
In this paper we study the limit in the case that $z_0\in\partial D$ assuming that $\partial D$ is analytic near $z_0${\footnote{By saying that $\partial D$ is analytic near $z_0$, we mean that there is a conformal map $f$ defined on $\{|z|<1\}$ such that $f(0)=z_0$, $f(\{|z|<1,\mbox{Im}\, z>0\})=f(\{|z|<1\})\cap D$, and $f(\{|z|<1,\mbox{Im}\, z\le 0\})=f(\{|z|<1\})\cap D^c$.}}. Below is our main theorem.
\begin{Theorem}
Let $\kappa\in(0,8)$. Let $(\eta_1,\eta_2)$ be a $2$-SLE$_\kappa$ in a simply connected domain $D$. Let $z_0\in\partial D$. Suppose $\partial D$ is analytic near $z_0$. We have the following results in two cases.
\begin{enumerate}
\item [(A)] If $z_0$ is not {an} endpoint of $\eta_1$ or $\eta_2$, then the limit in (\ref{Gz0}) exists and lies in $(0,\infty)$ for $\alpha= \alpha_1= \alpha_2:=2(\frac{12}{\kappa}-1)$.
\item [(B)] If $z_0$ is one of the endpoints of $\eta_1$ {or} $\eta_2$, then the limit in (\ref{Gz0}) exists and lies in $(0,\infty)$ for $\alpha=\alpha_3:= \frac{12}{\kappa}-1$.
\end{enumerate}
Moreover, in each case we may compute $G_D(z_0)$ up to some constant $C>0$ as follows. Let $F$ denote the hypergeometric function $_2F_1(\frac 4\kappa,1-\frac 4\kappa;\frac 8\kappa,\cdot)$.
Let $f$ map $D$ conformally onto $\mathbb{H}$ such that $f(z_0)=\infty$. Let $J$ denote the map $z\mapsto -1/z$.
\begin{enumerate}
\item [(A1)] Suppose Case (A) happens and {neither} $\eta_1$ {nor} $\eta_2$ separates $z_0$ from the other curve. We label the $f$-images of the four endpoints of $\eta_1$ and $\eta_2$ by $v_-<w_-<w_+<v_+$. Then
$$G_D(z_0)= C_1 |(J\circ f)'(z_0)|^{\alpha_1} G_{1 }(\underline w;\underline v),$$
where $C_1>0$ is a constant depending only on $\kappa$, and
\begin{equation} G_1(\underline w;\underline v):=\prod_{\sigma\in\{+,-\}}( |w_\sigma-v_\sigma|^{\frac 8\kappa-1} |w_\sigma-v_{-\sigma}|^{\frac 4\kappa} ) F\Big(\frac{(w_+-w_-)(v_+-v_-)}{(w_+-v_-)(v_+-w_-)}\Big)^{-1}.\label{G1(w,v)}\end{equation}
\item [(A2)] Suppose Case (A) happens and one of $\eta_1$ and $\eta_2$ separates $z_0$ from the other curve. We label the $f$-images of the four endpoints of $\eta_1$ and $\eta_2$ by $v_-<w_-<w_+<v_+$. Then
$$G_D(z_0)= C_2 |(J\circ f)'(z_0)|^{\alpha_2} G_{2}(\underline w;\underline v)$$
where $C_2>0$ is a constant depending only on $\kappa$, and
\begin{equation} G_2(\underline w;\underline v):=\prod_{u\in\{w,v\}} |u_+-u_-|^{\frac 8\kappa -1} \prod_{\sigma\in\{+,-\}} |w_\sigma-v_{-\sigma}|^{\frac {4}\kappa} F\Big(\frac{(v_+-w_+)(w_--v_-)}{(w_+-v_-)(v_+-w_-)}\Big)^{-1}. \label{G2(w,v)}\end{equation}
\item [(B)] Suppose Case (B) happens. We label the $f$-images of the other three endpoints of $\eta_1$ and $\eta_2$ by $w_+,w_-,v_+$, such that $f^{-1}(v_+)$ and $z_0$ are endpoints of the same curve, and $w_+,v_+$ lie on the same side of $w_-$. Then
$$G_D(z_0)= C_3 |(J\circ f)'(z_0)|^{\alpha_3} G_{3}(\underline w;v_+ ),$$
where $C_3>0$ is a constant depending only on $\kappa$, and
\begin{equation} G_3(\underline w;v_+)=|w_+-w_-|^{\frac 8\kappa -1} |v_+-w_{-}|^{\frac {4}\kappa} F\Big(\frac{ v_+-w_+ }{v_+-w_- }\Big)^{-1}. \label{G3(w,v)}\end{equation}
\end{enumerate}
{In each case, the function $G_D$ does not depend on the choice of $f$ because $G_1$ and $G_2$ are homogeneous of degree $2(\frac 8\kappa -1+\frac 4\kappa)=\alpha_1=\alpha_2$, and $G_3$ is homogeneous of degree $\frac 8\kappa -1+\frac 4\kappa=\alpha_3$.}
\label{main-Thm1}
\end{Theorem}
Our long-term goal is to prove the existence of Minkowski content of double points of chordal SLE$_\kappa$ for $\kappa\in(4,8)$, which may be transformed into the existence of Minkowski content of the intersection of the two curves in a $2$-SLE$_\kappa$. Following the approach in \cite{LR}, we need to prove the existence of two-curve two-point Green's function for $2$-SLE$_\kappa$, where Theorem \ref{main-Thm1} is expected to serve as the boundary estimate in the proof.
{The paper uses a two-curve technique introduced in \cite{Two-Green-interior}, which will be described in Section \ref{Strategy}. Besides \cite{Two-Green-interior} and this paper, the technique was recently used in \cite{Green-cut} to study the Green's function for the cut points of chordal SLE$_\kappa$, $\kappa\in(4,8)$. One future application of the technique is the interior two-curve Green's function for a commuting pair of SLE$_\kappa(2,\underline\rho)$ curves (which may arise as a commuting pair of flow lines in the imaginary geometry theory (\cite{MS1}) in the case $\kappa\ne 4$, or a commuting pair of Gaussian free field level lines (\cite{WW-level}) in the case $\kappa =4$, cf.\ Section \ref{section-commuting-SLE-kappa-rho}). The result is expected to lead to the existence of the Minkowski content of the intersection of the two curves (subject to the existence of the two-curve two-point Green's function for this commuting pair).
}
\subsection{Strategy} \label{Strategy}
\begin{figure}
\begin{center}
\includegraphics[width=1.9in]{Case1.png} \quad
\includegraphics[width=1.9in]{Case2.png}\quad
\includegraphics[width=1.9in]{Case3.png}
\end{center}
\caption{{The three figures above respectively illustrate Case (A1), Case (A2), and Case (B).}} \label{fig-1}
\end{figure}
{We now briefly explain how the two-curve technique works for the boundary two-curve Green's function.}
By conformal invariance of $2$-SLE$_\kappa$, we may assume that $D=\mathbb{H}:=\{z\in\mathbb{C}:\mbox{Im}\, z>0\}$, and $z_0=\infty$. It suffices to consider the limit
\begin{equation} \lim_{L\to \infty} L^{\alpha} \mathbb{P}[\eta_j\cap \{|z|>L\}\ne\emptyset,j=1,2].\label{limit-L}\end{equation} In Case (A) of Theorem \ref{main-Thm1}, we label the four endpoints of $\eta_1$ and $\eta_2$ by $v_+>w_+>w_->v_-$. There are two possible link patterns: $(w_+\leftrightarrow v_+;w_-\leftrightarrow v_-)$ and $(w_+\leftrightarrow w_-;v_+\leftrightarrow v_-)$, which respectively correspond to Case (A1) and Case (A2) of Theorem \ref{main-Thm1}. {See Figure \ref{fig-1} for illustrations of the three cases of Theorem \ref{main-Thm1}.}
For the first link pattern, we label the two curves by $\eta_+$ and $\eta_-$. By translation and dilation, we may assume that $v_\pm=\pm1$. Additionally, we assume that $\frac{v_++v_-}2\in [w_-,w_+]$, which under this normalization reads $0\in[w_-,w_+]$. Let $v_0=0$. {We orient $\eta_+$ and $\eta_-$ from $w_+$ and $w_-$ to $v_+$ and $v_-$, and grow the two curves simultaneously with some speeds to be described later. The growing process stops at the time $T^u$ when either curve reaches its target, or separates $v_+$ or $v_-$ from $\infty$.} For each $t$ in the lifespan $[0,T^u)$, let $H_t$ denote the unbounded connected component of $\mathbb{H}\setminus (\eta_+[0,t]\cup \eta_-[0,t])$. During the lifespan $[0,T^u)$ of the process, the speeds of $\eta_+$ and $\eta_-$ are controlled by two factors:
$0\in[w_-,w_+]$. Let $v_0=0$. {We orient $\eta_+$ and $\eta_-$ from $w_+$ and $w_-$ to $v_+$ and $v_-$, and grow the two curves simultaneously with some speeds to be described later. The growing process stops at the time $T^u$ when either curve reaches its target, or separates $v_+$ or $v_-$ from $\infty$.} For each $t$ in the lifespan $[0,T^u)$, let $H_t$ denote the unbounded connected component of $\mathbb{H}\setminus (\eta_+[0,t]\cup \eta_-[0,t])$. During the lifespan $[0,T^u)$ of the process, the speeds of $\eta_+$ and $\eta_-$ are controlled by two factors:
\begin{enumerate}
\item [(F1)] the harmonic measure of $[v_-,v_+]\cup \eta_+[0,t]\cup \eta_-[0,t]$ in $H_t$ viewed from $\infty$ increases in $t$ exponentially with factor $2$, and
\item [(F2)] $[v_-,v_0]\cup \eta_-[0,t]$ and $[v_0,v_+]\cup \eta_+[0,t]$ have the same harmonic measure viewed from $\infty$.
\end{enumerate}
Suppose $g_t$ maps $H_t$ conformally onto $\mathbb{H}$ and satisfies $g_t(z)=z+o(1)$ as $z\to\infty$. Define
$V_+(t)=\lim_{x\downarrow \max(([v_0,v_+]\cup \eta_+[0,t])\cap \mathbb{R})} g_t(x)$ and $V_-(t)=\lim_{x\uparrow \min(([v_-,v_0]\cup \eta_-[0,t])\cap \mathbb{R})} g_t(x)$. Then (F1) is equivalent to {the condition} that
$V_+(t)-V_-(t)=e^{2t}(v_+-v_-)$. The inverse $g_t^{-1}$ extends continuously to $\overline\mathbb{H}$. We will see that there is a unique $V_0(t)\in (V_-(t),V_+(t))$ such that $g_t^{-1}$ maps $[ V_0(t),V_\sigma(t)]$ into $[v_0,v_\sigma]\cup \eta_\sigma[0,t]$ for $\sigma\in\{+,-\}$. Then (F2) is equivalent to {the condition} that $V_+(t)-V_0(t)=V_0(t)-V_-(t)$.
{If $\kappa\in(0,4]$, $\eta_+$ and $\eta_-$ are disjoint, and do not disconnect $v_+,v_-,v_0$ from $\infty$. In this case,} $V_\sigma(t)$ is simply $g_t(v_\sigma)$ for $\sigma\in\{+,-,0\}$. {If $\kappa\in(4,8)$, $\eta_+$ and $\eta_-$ may or may not intersect, and they together may disconnect some $v_\sigma$, $\sigma\in\{+,-,0\}$ from $\infty$. When the disconnection happens, the function $V_\sigma(t)$ is more complicated. The two curves do not cross each other even if they intersect. }
At the time $T^u$, one of the two curves, say $\eta_+$, separates $v_+$ or $v_-$ from $\infty$. If $\eta_+$ separates $v_+$, the rest of $\eta_+$ grows in a bounded connected component of $\mathbb{H}\setminus \eta_+[0,T^u)$; if $\eta_+$ separates $v_-$, the whole $\eta_-$ is disconnected from $\infty$ by $\eta_+[0,T^u)$. Thus, after $T^u$, at least one curve {cannot} get closer to $\infty$. So we may focus on the parts of $\eta_+$ and $\eta_-$ before $T^u$.
Using Koebe's $1/4$ theorem (applied to $g_t$ at $\infty$) and Beurling's estimate (applied to a planar Brownian motion started near $\infty$), we find that for $0\le t<T^u$, the diameters of both $\eta_+[0,t]$ and $\eta_-[0,t]$ are comparable to $e^{2t}$.
We define a two-dimensional diffusion process $\underline R(t)=(R_+(t),R_-(t))\in [0,1]^2$, $0\le t<T^u$, by $R_\sigma(t)=\frac{W_\sigma(t)-V_0(t)}{V_\sigma(t)-V_0(t)}$, $\sigma\in\{+,-\}$, where $W_\sigma(t)=g_t(\eta_\sigma(t))\in [V_0(t),V_\sigma(t)]$. Here $\eta_\sigma(t)$ is understood as a prime end of $H_t$.
We then use the knowledge of $2$-SLE$_\kappa$ partition function and a technique of orthogonal polynomials to derive the transition density of $(\underline R)$.
{From the transition density, we find that $(\underline R)$ has a quasi-invariant distribution, which means that if $(\underline R)$ starts with this distribution, then the lifetime $T^u$ follows an exponential distribution, and the distribution of $\underline R(t)$ conditionally on the event $\{T^u>t\}$ stays unchanged. Moreover, if we start $(\underline R)$ from any other distribution, then the distribution of $\underline R(t)$ conditionally on $\{T^u>t\}$ converges exponentially to the quasi-invariant distribution.}
{To prove the existence of the limit in (\ref{limit-L}), we first prove that the limit exists if the condition $\eta_j\cap \{|z|>L\}\ne\emptyset$ , $j=1,2$, is replaced by the condition that $e^{2 T^u}>L$. The value $e^{2 T^u}$ plays the role of the conformal radius: by Koebe's $1/4$ theorem, the supremum of the set of $L>0$ such that $\eta_j\cap \{|z|>L\}\ne\emptyset$ , $j=1,2$, is comparable to $e^{2T^u}$. Suppose $(\underline R)$ starts from its quasi-invariant distribution. Then $L^{\alpha} \mathbb{P}[e^{2 T^u}>L]$ stays constant for $L\in(0,\infty)$ by the property of the quasi-invariant distribution, and so its limit as $L\to\infty$ exists. If $(\underline R)$ starts from a deterministic point, then the existence of the limit follows from the convergence of the conditional distribution of $\underline R(t)$ to its quasi-invariant distribution. After this step, we remove the additional assumption that $\frac{v_++v_-}2\in [w_-,w_+]$ by growing a segment of one of the two curves. Finally, we use a technique in \cite{LR} (obtaining the Euclidian distance Green's function from the conformal radius Green's function) to prove the existence of the limit in (\ref{limit-L}).
}
For the link pattern $(w_+\leftrightarrow w_-;v_+\leftrightarrow v_-)$, we label the curves by $\eta_w$ and $\eta_v$. We observe that $\eta_v$ disconnects $\eta_w$ from $\infty$. Thus, for $L>\max\{|v_+|,|v_-|\}$, if $\eta_w$ intersects $\{|z|>L\}$, then $\eta_v$ intersects it as well. Then the two-curve Green's function reduces to a single-curve Green's function. But we will still use a two-curve approach. We assume that $v_\pm=\pm1$ and $0\in(w_-,w_+)$, and let $v_0=0$ as in the previous case. This time, we grow $\eta_+$ and $\eta_-$ simultaneously along the same curve $\eta_w$ such that $\eta_\sigma$ runs from $w_\sigma$ towards $w_{-\sigma}$, $\sigma\in\{+,-\}$. The growth is stopped if $\eta_+$ and $\eta_-$ together exhaust the range of $\eta_w$, or any of them disconnects its target from $\infty$. The speeds of the curves are also controlled by (F1) and (F2). {The relation between $\eta_1$ and $\eta_2$ is similar to that in Case (A1) depending on whether $\kappa\in(0,4]$ or $\kappa\in(4,8)$.} Then we define $V_0,V_\pm,W_\pm,R_\pm$ in the same way as before, and derive the transition density of $\underline R=(R_+,R_-)$, {which will be used to prove the existence of the limit in (\ref{limit-L}) following the same approach as in Case (A1)}.
In Case (B), we may assume that $v_+=1$ and $w_++w_-=0$. Now we introduce two new points: $v_0=0$ and $v_-=-1$. Unlike the previous cases, $v_-$ is not an end point of any curve. For this case, we grow $\eta_+$ and $\eta_-$ simultaneously from $w_+$ and $w_-$ along the same curve $\eta_w$ as in Case (A2). The rest of the proof almost follows the same approach as in Case ({A1}).
\subsection{Outline}
Below is the outline of the paper. In Section \ref{section-prel}, we recall definitions, notations, and some basic results that will be needed in this paper. In Section \ref{section-deterministic} we develop a framework on a commuting pair of deterministic chordal Loewner curves, which do not cross but may touch each other. The work extends the disjoint ensemble of Loewner curves that appeared in \cite{reversibility,duality}. At the end of the section, we describe the way to grow the two curves simultaneously with properties (F1) and (F2). In Section \ref{section-commuting-SLE-kappa-rho}, we use the results from the previous section to study a pair of multi-force-point SLE$_\kappa(\underline\rho)$ curves, which commute with each other in the sense of \cite{Julien}. We obtain a two-dimensional diffusion process $\underline R(t)=(R_+(t),R_-(t))$, $0\le t<\infty$, and derive its transition density using orthogonal two-variable polynomials. In Section \ref{section-other-commut}, we study three types of commuting pairs of hSLE$_\kappa$ curves, which correspond to the three cases in Theorem \ref{main-Thm1}. We prove that each of them is {\it locally} absolutely continuous w.r.t.\ a commuting pair of SLE$_\kappa(\underline\rho)$ curves for certain force values, and also find the Radon-Nikodym derivative at different times. For each commuting pair of hSLE$_\kappa$ curves, we obtain a two-dimensional diffusion process $\underline R(t)=(R_+(t),R_-(t))$ with random finite lifetime, and derive its transition density and quasi-invariant density. In the last section we finish the proof of Theorem \ref{main-Thm1}.
\section*{Acknowledgments}
The author thanks Xin Sun for suggesting the problem on the (interior and boundary) two-curve Green's function for $2$-SLE.
\section{Preliminar{ies}} \label{section-prel}
We first fix some notation. Let $\mathbb{H}=\{z\in\mathbb{C}:\mbox{Im}\, z>0\}$. For $z_0\in\mathbb{C}$ and $S\subset \mathbb{C}$, let $\rad_{z_0}(S)=\sup\{|z-z_0|:z\in S\cup\{z_0\}\}$. If a function $f$ is absolutely continuous on a real interval $I$, and $f'=g$ a.e.\ on $I$, then we write $f'\overset{\mathrm{ae}}{=} g$ on $I$. This means that $f(x_2)-f(x_1)=\int_{x_1}^{x_2} g(x)dx$ for any $x_1<x_2\in I$. Here $g$ may not be defined on a subset of $I$ with Lebesgue measure zero. We will also use ``$\overset{\mathrm{ae}}{=}$'' for PDE or SDE in some similar sense. {For example, if $B_t$ is a standard Brownian motion, the SDE $dX_t\overset{\mathrm{ae}}{=} dB_t+ g_t dt$ means that a.s.\ $f_t:=X_t-B_t$ is absolutely continuous, and $f'=g$ a.e.\ in the lifespan. }
\subsection{$\mathbb{H}$-hulls and chordal Loewner equation}
A relatively closed subset $K$ of $\mathbb{H}$ is called an $\mathbb{H}$-hull if $K$ is bounded and $\mathbb{H}\setminus K$ is a simply connected domain. For a set $S\subset\mathbb{C}$, if there is an $\mathbb{H}$-hull $K$ such that $\mathbb{H}\setminus K$ is
the unbounded connected component of $\mathbb{H}\setminus \overline S$, then we say that $K$ is the $\mathbb{H}$-hull generated by $S$, and write $K=\Hull(S)$.
For an $\mathbb{H}$-hull $K$, there is a unique conformal map $g_K$ from $\mathbb{H}\setminus K$ onto $\mathbb{H}$ such that $g_K(z)=z+\frac cz+O(1/z^2)$ as $z\to \infty$ for some $c\ge 0$. The constant $c$, denoted by $\hcap(K)$, is called the $\mathbb{H}$-capacity of $K$, which is zero iff $K=\emptyset$. We write $\hcap_2(K)$ for $\hcap(K)/2$. If $\partial(\mathbb{H}\setminus K)$ is locally connected, then $g_K^{-1} $ extends continuously from $\mathbb{H}$ to $\overline\mathbb{H}$, and we use $f_K$ to denote the continuation. If $K=\Hull(S)$, then we write $g_S,f_S, \hcap(S),\hcap_2(S)$ for $g_K,f_K,\hcap(K),\hcap_2(K)$, respectively.
If $K_1\subset K_2$ are two $\mathbb{H}$-hulls, then we define $K_2/K_1=g_{K_1}(K_2\setminus K_1)$, which is also an $\mathbb{H}$-hull. Note that $g_{K_2}=g_{K_2/K_1}\circ g_{K_1}$ and $\hcap(K_2)=\hcap(K_2/K_1)+\hcap(K_1)$, which impl{ies} that $\hcap(K_1),\hcap(K_2/K_1)\le \hcap (K_2)$. If $K_1\subset K_2\subset K_3$ are $\mathbb{H}$-hulls, then $K_2/K_1\subset K_3/K_1$ and
\begin{equation} (K_3/K_1)/(K_2/K_1)=K_3/K_2. \label{K123}\end{equation}
Let $K$ be a non-empty $\mathbb{H}$-hull. Let $K^{\doub}=\overline K\cup\{\overline z:z\in K\}$, where $\overline K$ is the closure of $K$, and $\overline z$ is the complex conjugate of $z$. By {the} Schwarz reflection principle, there is a compact set $S_K\subset\mathbb{R}$ such that $g_K$ extends to a conformal map from $\mathbb{C}\setminus K^{\doub}$ onto $\mathbb{C}\setminus S_K$. Let $a_K=\min(\overline{K}\cap \mathbb{R})$, $b_K=\max(\overline{K}\cap\mathbb{R})$, $c_K=\min S_K$, $d_K=\max S_K$. Then the extended $g_K$ maps $\mathbb{C}\setminus (K^{\doub}\cup [a_K,b_K])$ conformally onto $\mathbb{C}\setminus [c_K,d_K]$. Since $g_K(z)=z+o(1)$ as $z\to\infty$, by Koebe's $1/4$ theorem, $\diam(K)\asymp \diam(K^{\doub}\cup [a_K,b_K])\asymp d_K-c_K$.
\vskip 2mm
\noindent{\bf Example}.
Let $x_0\in\mathbb{R}$, $r>0$. Then $H:=\{z\in\mathbb{H}:|z-x_0|\le r\}$ is an $\mathbb{H}$-hull with $g_H(z)=z+\frac{r^2}{z-x_0}$, $\hcap(H)=r^2$, $a_H=x_0-r$, $b_H=x_0+r$, $H^{\doub}=\{z\in\mathbb{C}:|z-x_0|\le r\}$, $c_H=x_0-2r$, $d_H=x_0+2r$.
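For the reader's convenience, these constants can be checked directly from the formula for $g_H$: expanding at infinity and evaluating on the semicircle $\partial H\cap\mathbb{H}$ gives
$$g_H(z)=z+\frac{r^2}{z}+\frac{r^2x_0}{z^2}+O(z^{-3})\ \ (z\to\infty),\qquad g_H(x_0+re^{i\theta})=x_0+2r\cos\theta,\ \ \theta\in[0,\pi],$$
so the coefficient of $1/z$ yields $\hcap(H)=r^2$, and the semicircle is mapped onto $[c_H,d_H]=[x_0-2r,x_0+2r]$.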
\vskip 2mm
The next proposition combines \cite[Lemmas 5.2 and 5.3]{LERW}.
\begin{Proposition}
If $L\subset K$ are two non-empty $\mathbb{H}$-hulls, then $[a_K,b_K]\subset [c_K,d_K]$, $[c_L,d_L]\subset [c_K,d_K]$, and $[c_{K/L},d_{K/L}]\subset [c_K,d_K]$. \label{abcdK}
\end{Proposition}
\begin{Proposition}
For any $x\in\mathbb{R}\setminus K^{\doub}$, $0<g_K'(x)\le 1$. Moreover, $g_K'$ is decreasing on $(-\infty,a_K)$ and increasing on $(b_K,\infty)$.
\label{Prop-contraction}
\end{Proposition}
\begin{proof}
By \cite[Lemma C.1]{BSLE}, there is a measure $\mu_K$ supported on $S_K$ with $|\mu_K|=\hcap(K)$ such that $g_K^{-1}(z)-z=\int_{S_K} \frac{-1}{z-y}d\mu_K(y)$ for any $z\in\mathbb{C}\setminus S_K$. Differentiating this formula and letting $z=x\in\mathbb{R}\setminus S_K$, we get $(g_K^{-1})'(x)=1+\int_{S_K} \frac{1}{(x-y)^2}d\mu_K(y)\ge 1$. So $0<g_K'\le 1$ on $\mathbb{R}\setminus K^{\doub}$. Further differentiating the integral formula w.r.t.\ $x$, we find that $(g_K^{-1})''(x)=\int_{S_K} \frac{-2}{(x-y)^3}d\mu_K(y)$ is positive on $(-\infty,c_K)$ and negative on $(d_K,\infty)$, which means that $(g_K^{-1})'$ is increasing on $(-\infty,c_K)$ and decreasing on $(d_K,\infty)$. Since $g_K$ maps $(-\infty,a_K)$ and $(b_K,\infty)$ onto $(-\infty,c_K)$ and $(d_K,\infty)$, respectively, we get the monotonicity of $g_K'$.
\end{proof}
\begin{Proposition}
If $K$ is an $\mathbb{H}$-hull with $\rad_{x_0}(K)\le r$ for some $x_0\in\mathbb{R}$, then $\hcap(K)\le r^2$, $\rad_{x_0}(S_K)\le 2r$, and $ |g_K(z)-z|\le 3r$ for any $z\in\mathbb{C}\setminus K^{\doub}$. \label{g-z-sup}
\end{Proposition}
\begin{proof}
We have $K\subset H:=\{z\in\mathbb{H}:|z-x_0|\le r\}$. By Proposition \ref{abcdK}, $\hcap(K)\le \hcap(H)=r^2$, $S_K\subset [c_K,d_K]\subset[c_H,d_H]=[x_0-2r,x_0+2r]$. {If $x_0=0$, the inequality $\hcap(K)\le r^2$ is just \cite[Formula 3.9]{Law-SLE}; and the inequality $ |g_K(z)-z|\le 3r$ for any $z\in\mathbb{H}\setminus K$ is just \cite[Formula 3.12]{Law-SLE}. By translation, continuation, and reflection, we then extend these inequalities to general $x_0\in\mathbb{R}$ and all $z\in \mathbb{C}\setminus K^{\doub}$.}
\end{proof}
\begin{Proposition}
For two nonempty $\mathbb{H}$-hulls $ K_1\subset K_2$ such that $\overline{K_2/K_1}\cap [c_{K_1},d_{K_1}]\ne \emptyset$, we have $|c_{K_{1}}-c_{K_{2}}|,|d_{K_{1}}-d_{K_{2}}|\le 4 \diam(K_{2}/K_{1})$.
\label{Prop-cd-continuity}
\end{Proposition}
\begin{proof}
By symmetry it suffices to estimate $|c_{K_1}-c_{K_2}|$. Let $c_1'=\lim_{x\uparrow a_{K_2}} g_{K_1}(x)$ and $\Delta K=K_2/K_1$. Since $g_{K_1}$ maps $\mathbb{H}\setminus K_2$ onto $\mathbb{H}\setminus \Delta K$, we have $c_1'=\min\{c_{K_1},a_{\Delta K}\}$. Since $\overline{\Delta K}\cap [c_{K_1},d_{K_1}]\ne \emptyset$, $ c_1'\ge c_{K_1}-\diam(\Delta K)$. Thus, by Proposition \ref{g-z-sup},
$$c_{K_2}=\lim_{x\uparrow a_{K_2}} g_{\Delta K}\circ g_{K_1}(x)=\lim_{y\uparrow c_1'} g_{\Delta K}(y)\ge c_1'-3\diam(\Delta K)\ge c_{K_1}-4\diam(\Delta K).$$
By Proposition \ref{abcdK}, $c_{K_2}\le c_{K_1}$. So we get $|c_{K_{1}}-c_{K_{2}}| \le 4 \diam(\Delta K)$.
\end{proof}
The following proposition is \cite[Proposition 3.42]{Law-SLE}.
\begin{Proposition}
Suppose $K_0,K_1,K_2$ are $\mathbb{H}$-hulls such that $K_0\subset K_1\cap K_2$. Then
$$\hcap(K_1)+\hcap(K_2)\ge \hcap(\Hull(K_1\cup K_2))+\hcap(K_0).$$ \label{hcap-concave}
\end{Proposition}
Let $\widehat w \in C([0,T),\mathbb{R})$ for some $T\in(0,\infty]$. The chordal Loewner equation driven by $\widehat w$ is
$$\partial_t g_t(z)= \frac{2}{g_t(z)-\widehat w(t)},\quad 0\le t<T;\quad g_0(z)=z.$$
For every $z\in\mathbb{C}$, let $\tau_z$ be the first time that the solution $g_\cdot(z)$ blows up; if such {a} time does not exist, then set $\tau_z=\infty$.
For $t\in[0,T)$, let $K_t=\{z\in\mathbb{H}:\tau_z\le t\}$. It turns out that each $K_t$ is an $\mathbb{H}$-hull with $\hcap_2(K_t)=t$, $K_t^{\doub}=\{z\in\mathbb{C}:\tau_z\le t\}$, which is connected, and each $g_t$ agrees with $g_{K_t}$. We call $g_t$ and $K_t$ the chordal Loewner maps and hulls, respectively, driven by $\widehat w$ {(cf.\ \cite[Section 2.2]{Wer-SLE})}.
If for every $t\in[0,T)$, $f_{K_t}$ is well defined, and $\eta(t):=f_{K_t}({\widehat w(t)})$, $0\le t<T$, is continuous in $t$, then we say that $\eta$ is the chordal Loewner curve driven by $\widehat w$. Such $\eta$ may not exist in general. When it exists, we have $\eta(0)=\widehat w(0)\in\mathbb{R}$, and $K_t=\Hull(\eta[0,t])$ for all $t$, and we say that $K_t$, $0\le t<T$, are generated by $\eta$.
Let $u$ be a continuous and strictly increasing function on $[0,T)$. Let $v$ be the inverse of $u-u(0)$. Suppose that $g^u_t$ and $K^u_t$, $0\le t<T$, satisfy that $g^u_{v(t)}$ and $K^u_{v(t)}$, $0\le t<u(T)-u(0)$, are chordal Loewner maps and hulls, respectively, driven by $\widehat w\circ v$. Then we say that $g^u_t$ and $K^u_t$, $0\le t<T$, are chordal Loewner maps and hulls, respectively, driven by $\widehat w$ with speed $u$, and call $(K^u_{v(t)})$ the normalization of $(K^u_t)$. If $(K^u_t)$ {is} generated by a curve $\eta^u$, i.e., $K^u_t=\Hull(\eta^u[0,t])$ for all $t$, then $\eta^u$ is called a chordal Loewner curve driven by $\widehat w$ with speed $u$, and $\eta^u\circ v$ is called the normalization of $\eta^u$.
If $u$ is absolutely continuous with $u'\overset{\mathrm{ae}}{=} q$, then we also say that the speed is $q$. In this case, the chordal Loewner maps satisfy the differential equation $\partial_t g^u_t(z)\overset{\mathrm{ae}}{=} \frac{2q(t)}{g^u_t-\widehat w(t)}$. We omit the speed when it is constant {and equal to} $1$.
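As a purely illustrative aside (not used in any argument below), the chordal Loewner equation can be integrated numerically for a given driving function. The hypothetical helper \texttt{loewner\_map} below uses a crude Euler step and is checked against the explicit solution $g_t(z)=\sqrt{z^2+4t}$ for the constant driving function $\widehat w\equiv 0$, whose hull at time $t$ is the segment $[0,2i\sqrt t]$.
\begin{verbatim}
import numpy as np

def loewner_map(z, w, dt):
    # Euler integration of dg/dt = 2/(g - w(t)), g_0 = z, for a point z
    # in the upper half-plane; w is an array of driving values spaced by dt.
    g = complex(z)
    for wt in w[:-1]:
        g = g + dt * 2.0 / (g - wt)
    return g

# Sanity check against the explicit solution for the zero driving function.
T, n = 1.0, 100000
w = np.zeros(n + 1)
z0 = 1.0 + 1.0j
print(abs(loewner_map(z0, w, T / n) - np.sqrt(z0 ** 2 + 4 * T)))  # small error
\end{verbatim}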
The following proposition is straightforward.
\begin{Proposition}
Suppose $K_t$, $0\le t<T$, are chordal Loewner hulls driven by $\widehat w(t)$, $0\le t<T$, with speed $u$. Then for any $t_0\in[0,T)$, $K_{t_0+t}/K_{t_0}$, $0\le t<T-t_0$, are chordal Loewner hulls driven by $\widehat w(t_0+t)$, $0\le t<T-t_0$, with speed $u(t_0+\cdot)$. One immediate consequence is that, for any $t_1<t_2\in[0,T)$, $\overline{K_{t_2}/K_{t_1}}$ is connected. \label{prop-connected}
\end{Proposition}
\begin{comment}
The following proposition is a slight variation of \cite[Lemma 4.13]{Law-SLE}.
\begin{Proposition}
Suppose $K_t$, $0\le t<T$, are chordal Loewner hulls driven by $\widehat w(t)$, $0\le t<T$, with speed $u$. Then for any $0\le t<T$, $$\rad_{\widehat w(0)}(K_t)\le 4\max\{\sqrt{u(t)-u(0)},\rad_{\widehat w(0)}( \widehat w[0,t])\}.$$ \label{size-K}
\end{Proposition}
\end{comment}
The following proposition is a slight variation of \cite[Theorem 2.6]{LSW1}.
\begin{Proposition}
The $\mathbb{H}$-hulls $K_t$, $0\le t<T$, are chordal Loewner hulls with some speed if and only if for any fixed $a\in[0,T)$, $\lim_{\delta\downarrow 0} \sup_{0\le t\le a} \diam(K_{t+\delta}/K_t)=0$. Moreover, the driving function $\widehat w$ satisfies {the property} that $\{\widehat w(t)\}=\bigcap_{\delta>0} \overline{K_{t+\delta}/K_{t}}$, $0\le t< T$; and the speed $u$ could be chosen to be $u(t)=\hcap_2(K_t)$, $0\le t<T$.
\label{Loewner-chain}
\end{Proposition}
\begin{Proposition}
Suppose $K_t$, $0\le t<T$, are chordal Loewner hulls driven by $\widehat w$ with some speed. Then for any $t_0\in(0,T)$, $c_{K_{t_0}}\le \widehat w(t)\le d_{K_{t_0}}$ for all $ t\in[0, t_0]$. \label{winK}
\end{Proposition}
\begin{proof}
Let $t_0\in(0,T)$. If $0\le t<t_0$, by Propositions \ref{abcdK} and \ref{Loewner-chain}, $\widehat w(t)\in[a_{K_{t_0}/K_t},b_{K_{t_0}/K_t}]\subset [c_{K_{t_0}/K_t},d_{K_{t_0}/K_t}]\subset [c_{K_{t_0} },d_{K_{t_0} }]$.
By the continuity of $\widehat w$, we also have $\widehat w(t_0)\in [c_{K_{t_0} },d_{K_{t_0} }]$.
\end{proof}
The following proposition combines \cite[Lemma 2.5]{MS1} and \cite[Lemma 3.3]{MS2}.
\begin{Proposition}
Suppose $\widehat w\in C([0,T),\mathbb{R})$ generates a chordal Loewner curve $\eta$ and chordal Loewner hulls $K_t$, $0\le t<T$. Then the set $\{t\in [0,T):\eta(t)\in\mathbb{R}\}$ has Lebesgue measure zero. Moreover, if the Lebesgue measure of $\eta[0,T)\cap\mathbb{R}$ is zero, then the functions $c(t)$ and $d(t)$ defined by $c(0)=d(0):=\widehat w(0)$, and $c(t):=c_{K_t}$ and $d(t):=d_{K_t}$, $0< t<T$, {satisfy that (i) $c\le \widehat w\le d$ on $[0,T)$; (ii) the set of $t\in[0,T)$ such that $c(t)=\widehat w(t)$ or $\widehat w(t)=d(t)$ (which implies that $\eta(t)\in\mathbb{R}$) has Lebesgue measure zero; and (iii) $c'(t)\overset{\mathrm{ae}}{=} \frac{2}{c(t)-\widehat w(t)}$ and $d'(t)\overset{\mathrm{ae}}{=} \frac{2}{d(t)-\widehat w(t)}$ on $[0,T)$. Thus, $c$ and $d$ are respectively strictly decreasing and increasing on $[0,T)$}. Moreover, $c(t)$ and $d(t)$ are continuously differentiable at the set of times $t$ such that $\eta(t)\not\in\mathbb{R}$, and in {this} case ``$\overset{\mathrm{ae}}{=}$'' can be replaced by ``$=$''. \label{prop-Lebesgue}
\end{Proposition}
\begin{Definition} We define the following notation.
\begin{enumerate}
\item [(i)] Modified real line. For $w\in\mathbb{R}$, we define $\mathbb{R}_w^{}=(\mathbb{R}\setminus\{w\})\cup \{w^-,w^+\}$, which has a total order endowed from $\mathbb{R}$ and the relation $x<w^- <w^+<y$ for any $x,y\in\mathbb{R}$ such that $x<w$ and $y>w$. It is assigned the topology such that $(-\infty,w^-]:=(-\infty,w)\cup\{w^-\}$ and $[w^+,\infty):=\{w^+\}\cup(w,\infty)$ are two connected components, and are respectively homeomorphic to $(-\infty,w]$ and $[w,\infty)$ through the map $\pi_w:\mathbb{R}_w\to \mathbb{R}$ with $\pi_w(w^\pm)=w$ and $\pi_w(x)=x$ for $x\in\mathbb{R}\setminus \{w\}$.
\item [(ii)] Modified Loewner map. Let $K$ be an $\mathbb{H}$-hull and $w\in\mathbb{R}$. Let $a^w_K=\min\{w,a_K\}$, $b^w_K=\max\{w,b_K\}$, $c^w_K=\lim_{x\uparrow a^w_K} g_K(x)$, and $d^w_K=\lim_{x\downarrow b^w_K} g_K(x)$. They are all equal to $w$ if $K=\emptyset$.
Define $g_K^w$ on $\mathbb{R}_w\cup\{+\infty,-\infty\}$ such that $g_K^w(\pm\infty)=\pm\infty$, $g_K^w(x)=g_K(x)$ if $x\in\mathbb{R}\setminus [a_K^w,b_K^w]$; $g^w_K(x)=c^w_K$ if $x=w^-$ or $x\in[a_K^w,b_K^w]\cap (-\infty,w)$; and $g_K^w(x)=d^w_K$ if $x=w^+$ or $x\in [a_K^w,b_K^w]\cap(w,\infty)$.
Note that $g_K^w$ is continuous and increasing.
\end{enumerate} \label{Def-Rw}
\end{Definition}
\noindent{{\bf Example}.
Let $x_0\in\mathbb{R}$, $r>0$, and $H=\{z\in\mathbb{H}:|z-x_0|\le r\}$. Let $w\in[x_0-r,x_0+r]$. Then $a^w_H=x_0-r$, $b^w_H=x_0+r$, $c^w_H=x_0-2r$, $d^w_H=x_0+2r$, and
$$g^{w}_H(x)=\left\{\begin{array}{ll} x+\frac{r^2}{x-x_0}, &\mbox{if }x\in \mathbb{R}\setminus [x_0-r,x_0+r]\\
x_0+2r,&\mbox{if }x\in \{w^+\}\cup (w,x_0+r]\\
x_0-2r,&\mbox{if }x\in \{w^-\}\cup [x_0-r,w).
\end{array}
\right.$$
For the case $w\not\in [x_0-r,x_0+r]$, we assume $w\in (x_0+r,\infty)$ by symmetry. In this case, $a^w_H=x_0-r$, $b^w_H=w$, $c^w_H=x_0-2r$, $d^w_H=g_H(w)=w+\frac{r^2}{w-x_0}$, and
$$g^{w}_H(x)=\left\{\begin{array}{ll} x+\frac{r^2}{x-x_0}, &\mbox{if }x\in \mathbb{R}\setminus [x_0-r,w]\\
w+\frac{r^2}{w-x_0},&\mbox{if }x=w^+\\
x_0-2r,&\mbox{if }x\in \{w^-\}\cup [x_0-r,w).
\end{array}
\right.$$
}
\begin{Proposition}
Let $K_1\subset K_2$ be two $\mathbb{H}$-hulls. Let $w\in\mathbb{R}$ and $\widetilde w\in[c^w_{K_1},d^w_{K_1}]$. Revise $g^w_{K_1}$ such that when $g^w_{K_1}(x)=\widetilde w$, we define $g^w_{K_1}(x)=\widetilde w^{\sign(x-w)}$.
Then
\begin{equation} g^{\widetilde w}_{K_2/K_1}\circ g^w_{K_1} =g^w_{K_2},\quad \mbox{on } \mathbb{R}_w\cup\{+\infty,-\infty\}.\label{comp-g}\end{equation}
\label{prop-comp-g}
\end{Proposition}
\begin{proof}
By symmetry, it suffices to show that (\ref{comp-g}) holds on $[w^+,\infty]$. Since for $x\ge w^+$, $g^w_{K_1}(x)\ge d^w_{K_1}\ge \widetilde w$, the revised $g^w_{K_1}$ is a continuous map from $[w^+,\infty]$ into $[\widetilde w^+,\infty]$, and so both sides of (\ref{comp-g}) are continuous on $[w^+,\infty]$.
If $x>b^w_{K_2}$, then $x>\max\{b^w_{K_1},b_{K_2}\}$, which implies that $g_{K_1}^w(x)=g_{K_1}(x)>\max\{d^w_{K_1},b_{K_2/K_1}\}\ge b^{\widetilde w}_{K_2/K_1}$. Thus, $g^{\widetilde w}_{K_2/K_1}\circ g_{K_1}^w(x)=g _{K_2/K_1}\circ g_{K_1} (x)=g_{K_2}(x)=g^w_{K_2}(x)$ on $(b^w_{K_2},\infty]$.
We know that $g^w_{K_2}$ is constant on $[w^+,b^w_{K_2}]$. To prove that (\ref{comp-g}) holds on $[w^+,\infty]$, by continuity it suffices to show that the LHS of (\ref{comp-g}) is constant on $ [w^+,b^w_{K_2}]$. This is obvious if $b^w_{K_1}=b^w_{K_2}$ since $g^w_{K_1}$ is constant on $[w^+,b^w_{K_1}]$. Suppose $b^w_{K_1}<b^w_{K_2}$. Then we have $b_{K_1},w<b^w_{K_2}=b_{K_2}$. So $[w^+,b^w_{K_2}]$ is mapped by $g^w_{K_1}$ onto $[d^w_{K_1}, b_{K_2/K_1}]$ (or $[\widetilde w^+,b_{K_2/K_1}]$), which is in turn mapped by $g^{\widetilde w}_{K_2/K_1}$ to a constant.
\end{proof}
\begin{Proposition}
Let $K_t$ and $\eta(t)$, $0\le t<T$, be chordal Loewner hulls and curve driven by $\widehat w$ with speed $q$. Suppose the Lebesgue measure of $\eta[0,T)\cap\mathbb{R}$ is $0$. Let $w=\widehat w(0)$, and $x \in\mathbb{R}_w$. Define $X(t)=g_{K_t}^w(x)$, $0\le t<T$. Then $X$ satisfies {(i)} $X'(t)\overset{\mathrm{ae}}{=} \frac{2q(t)}{X(t)-\widehat w(t)}$ on $[0,T)$; {(ii) the set of $t$ such that $X(t)=\widehat w(t)$ has Lebesgue measure zero; and (iii)} if $x>w$ (resp.\ $x<w$), then $X(t)\ge \widehat w(t)$ (resp.\ $X(t)\le \widehat w(t)$) on $[0,T)$, and so $X$
is {strictly} increasing (resp.\ decreasing) on $[0,T)$. Moreover, for any $0\le t_1<t_2<T$, $|X(t_1)-X(t_2)|\le 4 \diam(K_{t_2}/K_{t_1})$.
\label{Prop-cd-continuity'}
\end{Proposition}
\begin{proof}
We may assume that the speed $q$ is constant {and equal to} $1$.
By symmetry, we may assume that $x\in(-\infty,w^-]$. If $x=w^-$, then $X(t)=c_{K_t}$ for $t>0$ and $X(0)=\widehat w(0)$. Then the conclusion follows from Propositions \ref{Prop-cd-continuity} and \ref{prop-Lebesgue}. Now suppose $x\in(-\infty,w)$.
Fix $0\le t_1<t_2<T$. We first prove the upper bound for $|X(t_1)-X(t_2)|$. There are three cases. Case 1. $x\not\in \overline{K_{t_j}}$, $j=1,2$. In this case,
$X (t_2)=g_{K_{t_2}/K_{t_1}}(X(t_1))$, and the upper bound for $|X(t_1)-X(t_2)|$ follows from Proposition \ref{g-z-sup}. Case 2. $x\in\overline{K_{t_1}}\subset \overline{K_{t_2}}$. In this case $X(t_j)=c_{K_{t_j}}$, $j=1,2$, and the conclusion follows from Proposition \ref{Prop-cd-continuity}. Case 3. $x\not\in\overline{K_{t_1}}$ and $x\in \overline{K_{t_2}}$. Then $X(t_1)=g_{K_{t_1}}(x)<c_{K_{t_1}}$ and $X(t_2)=c_{K_{t_2}}$. Moreover, we have $\tau_{x}\in(t_1,t_2]$, $\lim_{t\uparrow \tau_{x}} X(t)=\widehat w(\tau_{x})$, and $X(t)$ satisfies $X'(t)=\frac{2}{X(t)-\widehat w(t)}<0$ on $[t_1,\tau_{x})$. By Propositions \ref{winK} and \ref{abcdK}, $c_{K_{t_1}}>X(t_1)\ge \widehat w(\tau_{x})\ge c_{K_{\tau_{x}}}\ge c_{K_{t_2}}=X(t_2)$. So we have $|X(t_1)-X(t_2)|\le |c_{K_{t_1}}-c_{K_{t_2}}|\le 4 \diam(K_{t_2}/K_{t_1})$ by Proposition \ref{Prop-cd-continuity}. By Proposition \ref{Loewner-chain}, $X$ is continuous on $[0,T)$.
Since $X(t)=g_{K_t}(x)$ satisfies {$X(t)<\widehat w(t)$ and} the chordal Loewner equation driven by $\widehat w$ up to $\tau_{x}$, we know that $X'(t)=\frac{2}{X(t)-\widehat w(t)}{<0}$ on $[0,\tau_{x})$. {So $X$ is strictly decreasing on $[0,\tau_x)$.} From Proposition \ref{prop-Lebesgue} we know that {$X(t)=c_{K_t}$ is strictly decreasing, the set of $t\in[\tau_x,T)$ such that $X(t)=\widehat w(t)$ has Lebesgue measure zero, and} $X'(t)\overset{\mathrm{ae}}{=} \frac{2}{X(t)-\widehat w(t)}$ on ${[}\tau_{x},T)$. {By the continuity of $X$, we conclude that $X$ has these properties throughout $[0,T)$.} \end{proof}
\subsection{Chordal SLE$_\kappa$ and $2$-SLE$_\kappa$}
If $\widehat w(t)=\sqrt\kappa B(t)$, $0\le t<\infty$, where $\kappa>0$ and $B(t)$ is a standard Brownian motion, then the chordal Loewner curve $\eta$ driven by $\widehat w$ is known to exist (cf.\ \cite{RS}). We now call it a standard chordal SLE$_\kappa$ curve. It satisfies {the property} that $\eta(0)=0$ and $\lim_{t\to\infty} \eta(t)=\infty$. The behavior of $\eta$ depends on $\kappa$: if $\kappa\in(0,4]$, $\eta$ is simple and intersects $\mathbb{R}$ only at $0$; if $\kappa\ge 8$, $\eta$ is space-filling, i.e., $\overline\mathbb{H}=\eta(\mathbb{R}_+)$; if $\kappa\in(4,8)$, $\eta$ is neither simple nor space-filling. If $D$ is a simply connected domain with two distinct marked boundary points (or more precisely, prime ends) $a$ and $b$, the chordal SLE$_\kappa$ curve in $D$ from $a$ to $b$ is defined to be the conformal image of a standard chordal SLE$_\kappa$ curve under a conformal map from $(\mathbb{H};0,\infty)$ onto $(D;a,b)$.
Chordal SLE$_\kappa$ satisfies {the} Domain Markov Property (DMP): if $\eta$ is a chordal SLE$_\kappa$ curve in $D$ from $a$ to $b$, and $T$ is a stopping time, then conditionally on the part of $\eta$ before $T$ and the event that {$\eta$ has not reached $b$ by time $T$}, the part of $\eta$ after $T$ is a chordal SLE$_\kappa$ curve from $\eta(T)$ to $b$ in a connected component of $D\setminus \eta[0,T]$.
We will focus on the range $\kappa\in(0,8)$ so that SLE$_\kappa$ is non-space-filling. One remarkable property of these chordal SLE$_\kappa$ is reversibility: the time-reversal of a chordal SLE$_\kappa$ curve in $D$ from $a$ to $b$ is a chordal SLE$_\kappa$ curve in $D$ from $b$ to $a$, up to a time-change (\cite{reversibility,MS3}).
Another fact that is important to us is the existence of $2$-SLE$_\kappa$.
\begin{Definition}
Let $D$ be a simply connected domain with pairwise distinct boundary points $a_1,b_1,a_2,b_2$ such that $a_1$ and $b_1$ together do not separate $a_2$ from $b_2$ on $\partial D$ (and vice versa). A $2$-SLE$_\kappa$ in $D$ with link pattern $(a_1\leftrightarrow b_1;a_2\leftrightarrow b_2)$ is a pair of random curves $(\eta_1,\eta_2)$ in $\overline D$ such that $\eta_j$ connects $a_j$ with $b_j$ for $j=1,2$, and conditionally on {the whole image of} any one curve, the other {curve} is a chordal SLE$_\kappa$ curve in a complement{ary} domain of the given curve in $D$.
\end{Definition}
Because of reversibility, we do not need to specify the orientation of $\eta_1$ and $\eta_2$. If we want to emphasize the orientation, then we use an arrow $a_1\to b_1$ in the link pattern.
The existence of $2$-SLE$_\kappa$ was proved in \cite{multiple} for $\kappa\in(0,4]$ using {the} Brownian loop measure and in \cite{MS1,MS3} for $\kappa\in(4,8)$ using {the} imaginary geometry theory. The uniqueness of $2$-SLE$_\kappa$ (for a fixed domain and link pattern) was proved in \cite{MS2} (for $\kappa\in(0,4]$) and \cite{MSW} (for $\kappa\in(4,8)$).
\subsection{SLE$_\kappa({\protect\underline{\rho}})$ processes}
First introduced in \cite{LSW-8/3}, SLE$_\kappa(\underline\rho)$ processes are natural variations of SLE$_\kappa$, where one keeps track of additional marked points, often called force points, which may lie on the boundary or interior. For the generality needed here, all force points will lie on the boundary. In this subsection, we review the definition and properties of SLE$_\kappa(\underline \rho)$ developed in \cite{MS1}.
Let $n\in\mathbb{N}$, $\kappa>0$, $\underline \rho=(\rho_1,\dots,\rho_n)\in\mathbb{R}^n$. Let $w \in\mathbb{R}$ and $\underline v=(v_1,\dots,v_n)\in \mathbb{R}_w^n$. The chordal SLE$_\kappa(\underline\rho)$ process in $\mathbb{H}$ started from $w$ with force points $\underline v$ is the chordal Loewner process driven by the function $\widehat w(t)$, which drives chordal Loewner hulls $K_t$, and solves the SDE
$$d\widehat w(t)\overset{\mathrm{ae}}{=} \sqrt{\kappa }dB(t)+\sum_{j=1}^n \frac{\rho_j}{\widehat w(t)-g^w_{K_t}(v_j)}\,dt,\quad \widehat w(0)=w,$$
where $B(t)$ is a standard Brownian motion, and we used Definition \ref{Def-Rw}. We require that for $\sigma\in\{+,-\}$, $\sum_{j:v_j=w^\sigma}\rho_j>-2$. The solution exists uniquely up to the first time (called a continuation threshold) that
$ \sum_{j: \widehat v_j(t)=c_{K_t}} \rho_j\le -2$ or $\sum_{j: \widehat v_j(t)=d_{K_t}} \rho_j \le -2$, whichever comes first. If a continuation threshold does not exist, then the lifetime is $\infty$.
For each $j$, $\widehat v_j(t):=g^w_{K_t}(v_j)$ is called the force point function started from $v_j$; it satisfies $\widehat v_j'\overset{\mathrm{ae}}{=} \frac2{\widehat v_j-\widehat w}$, and is monotonically increasing or decreasing according to whether $v_j>w$ or $v_j<w$.
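The following purely illustrative Euler--Maruyama sketch (not used in any argument below) simulates the driving function and force point functions of an SLE$_\kappa(\underline\rho)$ process by discretizing the SDE above together with $\widehat v_j'=2/(\widehat v_j-\widehat w)$. For simplicity it assumes that all force points start strictly away from $w$, and it uses a crude numerical cutoff in place of the continuation threshold; all function names and parameter choices are our own.
\begin{verbatim}
# Illustrative Euler-Maruyama sketch (not part of the argument): evolve the
# driving function w(t) of an SLE_kappa(rho_1,...,rho_n) together with the
# force point functions v_j(t), using the SDE above and v_j' = 2/(v_j - w).
import numpy as np

def sle_kappa_rho_driver(kappa, rho, v0, w0=0.0, T=1.0, N=10000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / N
    w = np.empty(N + 1)
    v = np.empty((N + 1, len(rho)))
    w[0], v[0] = w0, v0
    for k in range(N):
        drift = sum(r / (w[k] - x) for r, x in zip(rho, v[k]))   # sum_j rho_j/(w - v_j)
        w[k + 1] = w[k] + np.sqrt(kappa * dt) * rng.standard_normal() + drift * dt
        v[k + 1] = v[k] + 2.0 * dt / (v[k] - w[k])               # v_j' = 2/(v_j - w)
        if np.min(np.abs(v[k + 1] - w[k + 1])) < 1e-6:           # crude swallowing cutoff
            return w[:k + 2], v[:k + 2]
    return w, v

# e.g. kappa = 4 with a single force point of weight rho_1 = 2 at v_1 = 1:
w, v = sle_kappa_rho_driver(4.0, rho=[2.0], v0=[1.0])
\end{verbatim}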
\begin{comment}
Using Proposition \ref{Prop-cd-continuity'} and Levy's characterization of Brownian motion, we get the following proposition.
\begin{Proposition}
A continuous function $\widehat w(t)$, $0\le t<T$, is the driving function of a chordal SLE$_\kappa(\rho_1,\dots,\rho_n)$ process with force points $(v_1,\dots,v_n)$ if and only if
$\widehat u(t):=\widehat w(t)+\sum_{j=1}^n \frac {\rho_j}2 g^{\widehat w(0)}_{K_t}(v_j)$ is a local martingale with $\langle \widehat u\rangle _t=\kappa t$ up to $T$, where $K_t$ are chordal Loewner hulls driven by $\widehat w$. \label{Prop-SLE-kappa-rho}
\end{Proposition}
\end{comment}
A chordal SLE$_\kappa(\underline\rho)$ process generates a chordal Loewner curve $\eta$ in $\overline\mathbb{H}$ started from $w$ up to the continuation threshold. If no force point is swallowed by the process at any time, this fact follows from the existence of {the} chordal SLE$_\kappa$ curve (\cite{RS}) and Girsanov{'s} Theorem. The existence of the curve in the general case was proved in \cite[{Theorem 1.3}]{MS1} {(for $\kappa\ne 4$) and \cite[Theorem 1.1.3]{WW-level} (for $\kappa=4$)}. By Proposition \ref{prop-comp-g} and the Markov property of Brownian motion, a chordal SLE$_\kappa(\underline \rho)$ curve $\eta$ satisfies the following DMP. If $\tau$ is a stopping time for $\eta$, then conditionally on the process before $\tau$ and the event that $\tau$ is less than the lifetime $T$, $\widehat w(\tau+t)$ and $\widehat v_j(\tau+t)$, $1\le j\le n$, $0\le t<T-\tau$, are the driving function and force point functions for a chordal SLE$_\kappa(\underline\rho)$ curve $\eta^\tau$ started from $\widehat w(\tau)$ with force points at $\widehat v_1(\tau),\dots,\widehat v_n(\tau)$, and $\eta(\tau+\cdot)=f_{K_\tau}(\eta^\tau)$, where $K_\tau:=\Hull(\eta[0,\tau])$. Here if $\widehat v_j(\tau)=\widehat w(\tau)$, then $\widehat v_j(\tau)$ as a force point is treated as $\widehat w(\tau)^{\sign(v_j-w)}$.
We now relabel the force points $v_1,\dots,v_n$ by $v^{{(-)}}_{n_-}\le\cdots\le v^{{(-)}}_{1}{\le w^-<} w{<w^+\le} v^{{(+)}}_1\le \cdots\le v^{{(+)}}_{n_+}$, where $n_-+n_+=n$ ($n_-$ or $n_+$ could be $0$). Then for any $t$ in the lifespan, $\widehat v^{{(-)}}_{n_-}(t)\le\cdots\le \widehat v^{{(-)}}_1(t)\le \widehat w(t)\le \widehat v^{{(+)}}_1(t)\le \cdots\le \widehat v^{{(+)}}_{n_+}(t)$. If for any $\sigma\in\{-,+\}$ and $1\le k\le n_\sigma$, $\sum_{j=1}^k \rho^{{(\sigma)}}_j>-2$, then the process will never reach a continuation threshold, and so its lifetime is $\infty$, in which case $\lim_{t\to\infty} \eta(t)=\infty$ {(cf.\ \cite[Theorem 1.3]{MS1},\cite[Theorem 1.1.3]{WW-level})}.
If for some $\sigma\in\{+,-\}$ and $1\le k\le n_\sigma$, $\sum_{j=1}^k \rho^{(\sigma)}_j\ge \frac{\kappa}{2}-2$, then {(cf.\ \cite[Remark 5.3]{MS1})} $\eta$ does not hit $v^{{(\sigma)}}_k$ or the open interval between $v^{{(\sigma)}}_k$ and $v^{{(\sigma)}}_{k+1}$ (where $v^{{(\sigma)}} _{n_\sigma+1}:=\sigma\cdot\infty$).
If $\kappa\in(0,8)$ and for any $\sigma\in\{+,-\}$ and $1\le k\le n_\sigma$, $\sum_{j=1}^k \rho^{{(\sigma)}}_j> \frac{\kappa}{2}-4$, then for every $x\in\mathbb{R}\setminus \{w\}$, a.s.\ $\eta$ does not visit $x$, which implies by Fubini's Theorem that a.s.\ $\eta\cap\mathbb{R}$ has Lebesgue measure zero.
{The last statement is similar to \cite[Lemma 7.16]{MS1} and \cite[Lemma 2.5.2]{WW-level}, and its proof resembles that of \cite[Lemma 5.2]{MS1}. The key idea is that, for a fixed $x\in(w,\infty)$ (the case $x<w$ is symmetric), on the event $E$ that $\eta$ visits $x$, we may compare $\eta$ with an SLE$_\kappa(\widehat\rho)$ curve $\eta'$ in $\mathbb{H}$ started from $w$ with the force point at $x$, where $\widehat\rho=\sum\{\rho^{(+)}_j: v^{(+)}_j\le x\}$. It can be shown that the law of $\eta$ is absolutely continuous w.r.t.\ that of $\eta'$ on $E$. We have $\widehat\rho>\frac\kappa 2-4$ by assumption. Let $f$ be a M\"obius automorphism of $\mathbb{H}$, which fixes $w$, maps $x$ to $\infty$, and maps $\infty$ to some $y\in(-\infty,x)$. By \cite{SW}, after a time-change, $f\circ \eta'$ up to the time that $\eta'$ separates $x$ from $\infty$ becomes a chordal SLE$_\kappa(\widetilde\rho)$ curve $\widetilde\eta$ in $\mathbb{H}$ started from $w$ with the force point at $y$ up to the time that it separates $y$ from $\infty$, where $\widetilde\rho:=\kappa-6-\widehat\rho<\frac \kappa 2-2$. Let $\widetilde w$ and $\widetilde v$ be respectively the driving function and force point function of $\widetilde\eta$. Then $\widetilde w-\widetilde v$ is a rescaled Bessel process of dimension $\frac{2\widetilde\rho+4}\kappa+1<2$. So $\widetilde \eta$ a.s.\ hits $(-\infty,y]$, which implies that $\eta'$ a.s.\ does not hit $x$. The same statement then holds for $\eta$ by the absolute continuity between the laws of $\eta$ and $\eta'$ on $E$.
}
\subsection{Hypergeometric SLE} \label{hSLE}
For $a,b,c\in\mathbb{C}$ such that $c\not\in\{0,-1,-2,\cdots\}$, the hypergeometric function $\,_2F_1(a,b;c;z)$ (cf.\ \cite[Chapter 15]{NIST:DLMF}) is defined by the Gauss series on the disc $\{|z|<1\}$:
\begin{equation} \,_2F_1(a,b;c;z)=\sum_{n=0}^\infty \frac{(a)_n(b)_n}{(c)_nn!}\,z^n,\label{series}\end{equation}
where $(x)_n$ is rising factorial: $(x)_0=1$ and $(x)_n=x(x+1)\cdots(x+n-1)$ if $n\ge 1$. It satisfies the ODE
\begin{equation} z(1-z)F''(z)-[(a+b+1)z-c]F'(z)-ab F(z)=0.\label{ODE-hyper}\end{equation}
For the purpose of this paper, we {set} $a,b,c$ by $a=\frac{4}\kappa$, $b=1-\frac{4}\kappa$, $c=\frac{8}\kappa$, and define $ F(x) =\,_2F_1(1-\frac{4}\kappa,\frac{4}\kappa; \frac{8}\kappa;x )$.
{Since $c-a-b=\frac 8\kappa-1>0$, by \cite[15.4.20]{NIST:DLMF}, $F$ extends continuously to $[0,1]$ with
$$F(1)=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}=\frac{\Gamma(\frac 8\kappa)\Gamma( \frac 8\kappa-1)}{\Gamma(\frac4\kappa)\Gamma(\frac{12}\kappa-1)}>0.$$}
{We claim that $F>0$ on $[0,1]$. If $b\ge 0$, then since $a,c>0$, every term of the series in (\ref{series}) is nonnegative for $z\in[0,1)$, and equals $1$ for $n=0$, which implies that $F>0$ on $[0,1)$. Suppose $b<0$. If $F>0$ on $[0,1]$ does not hold, then from $F(1)>0$ and $F(0)=1>0$, we see that there is $x_0\in(0,1)$ such that $F(x_0)<0$, $F'(x_0)=0$, and $F''(x_0)\ge 0$. Since $ab<0$, (\ref{ODE-hyper}) does not hold at $x_0$, which is a contradiction. So the claim is proved.} Let
$ \widetilde G(x)=\kappa x \frac { F'(x)}{F(x)}+2$.
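The function $F$ and the drift factor $\widetilde G$ can be evaluated numerically; the following illustrative sketch (not part of the argument) uses the standard derivative identity $\frac{d}{dx}\,_2F_1(a,b;c;x)=\frac{ab}{c}\,_2F_1(a+1,b+1;c+1;x)$ and checks the boundary value $F(1)$ and the positivity of $F$ on a grid. The choice $\kappa=6$ and the function names are our own.
\begin{verbatim}
# Illustrative numerical sketch (not part of the argument): evaluate
# F(x) = 2F1(4/kappa, 1-4/kappa; 8/kappa; x) and G(x) = kappa*x*F'(x)/F(x) + 2,
# and check F(1) and the positivity of F on [0,1].
import numpy as np
from scipy.special import hyp2f1, gamma

def F_and_G(kappa):
    a, b, c = 4 / kappa, 1 - 4 / kappa, 8 / kappa
    F  = lambda x: hyp2f1(a, b, c, x)
    # standard identity: d/dx 2F1(a,b;c;x) = (a*b/c) * 2F1(a+1,b+1;c+1;x)
    dF = lambda x: (a * b / c) * hyp2f1(a + 1, b + 1, c + 1, x)
    G  = lambda x: kappa * x * dF(x) / F(x) + 2
    return F, G

kappa = 6.0
F, G = F_and_G(kappa)
F1 = gamma(8 / kappa) * gamma(8 / kappa - 1) / (gamma(4 / kappa) * gamma(12 / kappa - 1))
print(abs(F(1.0) - F1))                       # should be ~ 0
print(np.min(F(np.linspace(0, 1, 1001))))     # should be > 0, since F > 0 on [0,1]
print(G(0.0))                                 # G(0) = 2
\end{verbatim}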
\begin{Definition}
Let $\kappa\in(0,8)$. Let $v_1\le v_2\in [0^+,+\infty]$ or $v_1\ge v_2\in [-\infty,0^-]$. Suppose $\widehat w(t)$, $0\le t<\infty$, solves the following SDE:
$$
d \widehat w(t)\overset{\mathrm{ae}}{=} \sqrt\kappa dB(t)+\Big(\frac{1}{\widehat w(t)-\widehat v_1(t)}-\frac 1{\widehat w(t)-\widehat v_2(t)}\Big)\widetilde G\Big(\frac{\widehat w(t)-\widehat v_1(t)}{\widehat w(t)-\widehat v_2(t)}\Big)dt,\quad \widehat w(0)=0,
$$
where $B(t)$ is a standard Brownian motion, $\widehat v_j(t)=g_{K_t}^0(v_j)$, $j=1,2$, and $K_t$ are chordal Loewner hulls driven by $\widehat w$.
The chordal Loewner curve driven by $\widehat w$ is called a hypergeometric SLE$_\kappa$, or simply hSLE$_\kappa$, curve in $\mathbb{H}$ from $0$ to $\infty$ with force points $v_1,v_2$. We call $v_j(t)$ the force point function started from $v_j$, $j=1,2$.
{If $f$ maps $\mathbb{H}$ conformally onto a simply connected domain $D$, then the $f$-image of an hSLE$_\kappa$ curve in $\mathbb{H}$ from $0$ to $\infty$ with force points $v_1,v_2$ is called an hSLE$_\kappa$ curve in $D$ from $f(0)$ to $f(\infty)$ with force points $f(v_1),f(v_2)$.}
\label{Def-hSLE}
\end{Definition}
{
\begin{Remark}
The existence of the solutions of the SDE follows from Girsanov's theorem. We start with an SLE$_\kappa(2,-2)$ curve $\eta$ in $\mathbb{H}$ started from $0$ with force points $v_1,v_2$. We assume $0^+\le v_1<v_2$ by symmetry. Let $\widehat w$ be the driving function, and $\widehat v_j$ be the force point function started from $v_j$, $j=1,2$. Let $g_t$ be the chordal Loewner maps. Define $R(t)=\frac{\widehat v_1(t)-\widehat w(t)}{\widehat v_2(t)-\widehat w(t)}\in [0,1)$ and $I(t)=\frac{|\widehat v_2(t)-\widehat v_1(t)|}{g_t'(v_2)}$ for $0\le t<\tau_{v_2}$. By Loewner's equation, $I$ is decreasing. Then we define
$$M(t)=\frac{F(R(t))}{F(R(0))}\Big(\frac{ I(t)}{I(0)}\Big)^{1-\frac 4\kappa}, \quad 0\le t<\tau_{v_2}.$$
Using It\^o's formula and (\ref{ODE-hyper}), we see that $M$ is a positive local martingale. If $\kappa\le 4$, then a.s.\ $\tau_{v_2}=\infty$, and $M$ is defined on $[0,\infty)$. If $\kappa>4$, then a.s.\ $\tau_{v_2}<\infty$, and as $t\uparrow \tau_{v_2}$, $M(t)$ converges to a positive number. We then extend $M$ to $[0,\infty)$ such that $M$ is constant and equals $\lim_{t\uparrow \tau_{v_2}} M(t)$ on $[\tau_{v_2},\infty)$. After the extension, $M$ is a positive continuous local martingale defined on $[0,\infty)$. Suppose $T$ is any stopping time such that $M_{T\wedge \cdot}$ is bounded. We may weight the underlying probability measure by $M(T)$ and get another probability measure. Under the new measure, the processes $\widehat w,\widehat v_1,\widehat v_2$ satisfy the SDE in Definition \ref{Def-hSLE} for some standard Brownian motion $B$ up to the time $T$.
\end{Remark}
}
{
\begin{Remark}
The definition of SLE using hypergeometric functions first appeared in \cite{kappa-rho}, in which the new SLE were called intermediate SLE$_\kappa(\rho)$. The notion of hypergeometric SLE and hSLE first appeared in W.\ Qian's \cite{QW}, which generalized the intermediate SLE$_\kappa(\rho)$. Here we follow the definition of hSLE used in the later paper \cite{Wu-hSLE}, in which hSLE$_\kappa$ agrees with intermediate SLE$_\kappa(2)$, and so is only a very special case of Qian's hSLE.
\end{Remark}
}
Hypergeometric SLE is important because
if $(\eta_1,\eta_2)$ is a $2$-SLE$_\kappa$ in $D$ with link pattern $(a_1\to b_1;a_2\to b_2)$, then for $j=1,2$, the marginal law of $\eta_j$ is that of an hSLE$_\kappa$ curve in $D$ from ${a_j}$ to ${b_j}$ with force points ${b_{3-j}}$ and ${a_{3-j}}$ (cf.\ \cite[Proposition 6.10]{Wu-hSLE}).
\begin{comment}
\begin{Proposition}
For all $\kappa\in(0,8)$, hSLE$_\kappa$ satisfies reversibility, i.e., the time-reversal of an hSLE$_\kappa$ curve in $D$ from $w_1$ to $w_2$ with force points $v_1$ and $v_2$ is an hSLE$_\kappa$ curve in $D$ from $w_2$ to $w_1$ with force points $v_2$ and $v_1$. \label{hSLE-reversibility}
\end{Proposition}
\begin{proof}
The reversibility in the case that $\kappa\in(0,4)$ is proved by \cite{kappa-rho}, where hSLE$_\kappa$ is called intermediate SLE$_\kappa(2)$. If $\kappa=4$, then $F\equiv 1$, and so hSLE$_4$ is just a chordal SLE$_4(2,-2)$, whose reversibility is proved in \cite{duality}. The reversibility of hSLE$_\kappa$ for $\kappa\in(4,8)$ is proved in \cite{Wu-hSLE}.
\end{proof}
\end{comment}
Using the standard argument in \cite{SW}, we obtain the following proposition describing an hSLE$_\kappa$ curve in $\mathbb{H}$ {``}in the chordal coordinate{''} in the case that the target is not $\infty$.
\begin{Proposition}
Let $w_0\ne w_\infty\in\mathbb{R}$. Let $v_1\in \mathbb{R}_{w_0}\cup\{\infty\}\setminus \{w_\infty\}$ and $v_2\in\mathbb{R}_{w_\infty}\cup \{\infty\}\setminus \{w_0\}$ be such that the cross ratio $R:=\frac{(w_0-v_1)(w_\infty-v_2)}{(w_0-v_2)(w_\infty-v_1)}\in [0^+,1)$. Let $\kappa\in(0,8)$. Let $\widehat\eta$ be an hSLE$_\kappa$ curve in $\mathbb{H}$ from $w_0$ to $w_\infty$ with force points at $v_1,v_2$. Stop $\widehat\eta$ at the first time that it separates $w_\infty$ from $\infty$, and parametrize the stopped curve by $\mathbb{H}$-capacity. Then the new curve, denoted by $\eta$, is the chordal Loewner curve driven by some function $\widehat w_0$, which satisfies the following SDE with initial value $\widehat w_0(0)=w_0$:
\begin{align*}
d\widehat w_0(t)\overset{\mathrm{ae}}{=} &\sqrt\kappa dB(t)+ \frac{\kappa-6}{\widehat w_0(t)-\widehat w_\infty(t)}\, dt+ \\
& + \Big( \frac 1{\widehat w_0(t)-\widehat v_1(t)} -\frac 1 {\widehat w_0(t)-\widehat v_2(t)}\Big )\cdot \widetilde G\Big( \frac{ (\widehat w_0(t) -\widehat v_1(t) ) (\widehat v_2(t)-\widehat w_\infty(t))}{ (\widehat w_0(t)-\widehat v_2(t)) (\widehat v_1(t)-\widehat w_\infty(t))}\Big)\,dt,
\end{align*}
where $B(t)$ is a standard Brownian motion, $\widehat w_\infty(t)=g_{K_t} (w_\infty)$ and $\widehat v_j(t)=g_{K_t}^{w_0}(v_j)$, $j=1,2$, and $K_t$ are the chordal Loewner hulls driven by $\widehat w_0$.
\label{Prop-iSLE-2}
\end{Proposition}
\begin{Definition}
We call the $\eta$ in Proposition \ref{Prop-iSLE-2} an hSLE$_\kappa$ curve in $\mathbb{H}$ from $w_0$ to $w_\infty$ with force points at $v_1,v_2$, {``}in the chordal coordinate{''}; call $\widehat w_0$ the driving function; and call $\widehat w_\infty$, $\widehat v_1$ and $\widehat v_2$ the force point functions respectively started from $w_\infty$, $v_1$ and $v_2$. \label{Def-iSLE-chordal}
\end{Definition}
\begin{Proposition}
We adopt the notation in the last proposition. Let $T$ be the first time that $w_\infty$ or $v_2$ is swallowed by the hulls. Note that $|\widehat w_0-\widehat w_\infty|$, $|\widehat v_1-\widehat v_2|$, $|\widehat w_0-\widehat v_2|$, and $|\widehat w_\infty-\widehat v_1|$ are all positive on $[0,T)$. We define $M$ on $[0,T)$ by $M=G_1(\widehat w_0,\widehat v_1;\widehat w_\infty,\widehat v_2)$, where $G_1$ is given by (\ref{G1(w,v)}).
Then $M$ is a positive local martingale, and if we tilt the law of $\eta$ by $M$, then we get the law of a chordal SLE$_\kappa(2,2,2)$ curve in $\mathbb{H}$ started from $w_0$ with force points $w_\infty$, $v_1$ and $v_2$. More precisely, if $\tau<T$ is a stopping time such that $M$ is uniformly bounded on $[0,\tau]$, and we weight the underlying probability measure by $M(\tau)/M(0)$, then we get a probability measure under which the law of $\eta$ stopped at the time $\tau$ is that of a chordal SLE$_\kappa(2,2,2)$ curve in $\mathbb{H}$ started from $w_0$ with force points $w_\infty$, $v_1$ and $v_2$ stopped at the time $\tau$. \label{Prop-iSLE-3}
\end{Proposition}
\begin{proof}
This follows from straightforward applications of It\^o's formula and Girsanov{'s} Theorem, where we use (\ref{ODE-hyper}), Propositions \ref{Prop-cd-continuity'} and \ref{Prop-iSLE-2}. Actually, the calculation could be simpler if we tilt the law of a chordal SLE$_\kappa(2,2,2)$ curve by $M^{-1}$ to get an hSLE$_\kappa $ curve.
\end{proof}
\subsection{Two-parameter stochastic processes}
In this subsection we briefly recall the framework in \cite[Section 2.3]{Two-Green-interior}. {The framework will help us to study stochastic processes defined on some random subset $\cal D$ of $\mathbb{R}_+^2$, where every element in $\cal D$ is understood as a two-dimensional random variable.}
We assign a partial order $\le$ to $\mathbb{R}_+^2=[0,\infty)^2$ such that $\underline t=(t_+,t_-)\le(s_+,s_-)= \underline s$ iff $t_+\le s_+$ and $t_-\le s_-$. It has a minimal element $\underline 0=(0,0)$. We write $\underline t<\underline s$ if $t_+<s_+$ and $t_-<s_-$. We define $\underline t\wedge \underline s=(t_+\wedge s_+,t_-\wedge s_-)$. Given $\underline t,\underline s\in\mathbb{R}_+^2$, we define $[\underline t,\underline s]=\{\underline r\in{\mathbb{R}_+^2}:\underline t\le \underline r\le \underline s\}$. Let $\underline e_+=(1,0)$ and $\underline e_-=(0,1)$. So $(t_+,t_-)=t_+\underline e_++ t_-\underline e_-$.
\begin{Definition}
An ${\mathbb{R}_+^2}$-indexed filtration ${\cal F}$ on a measurable space $\Omega$ is a family of $\sigma$-algebras ${\cal F}_{\underline t}$, $\underline t\in\mathbb{R}_+^2$, on $\Omega$ such that ${\cal F}_{\underline t}\subset {\cal F}_{\underline s}$ whenever $\underline t\le \underline s$. Define $\overline{\cal F}$ by $\overline{\cal F}_{\underline t}=\bigcap_{\underline s>\underline t}{\cal F}_{\underline s}$, $\underline t\in\mathbb{R}_+^2$. Then we call $\overline{\cal F}$ the right-continuous augmentation of ${\cal F}$. We say that ${\cal F}$ is right-continuous if $\overline{\cal F}={\cal F}$. A process $X=(X({\underline t}))_{\underline t\in\mathbb{R}_+^2}$ defined on $\Omega$ is called ${\cal F}$-adapted if for any $\underline t\in\mathbb{R}_+^2$, $X({\underline t})$ is ${\cal F}_{\underline t}$-measurable. It is called continuous if $\underline t\mapsto X({\underline t})$ is sample-wise continuous.
\end{Definition}
For the rest of this subsection, let ${\cal F}$ be an $\mathbb{R}_+^2$-indexed filtration with right-continuous augmentation $\overline{\cal F}$, and let ${\cal F}_{\underline\infty}=\bigvee_{\underline t\in\mathbb{R}_+^2} {\cal F}_{\underline t}$.
\begin{Definition}
A $[0,\infty]^2$-valued random element $\underline T$ is called an ${\cal F}$-stopping time if for any deterministic $\underline t\in{\mathbb{R}_+^2}$, $\{\underline T\le \underline t\}\in {\cal F}_{\underline t}$. It is called finite if $\underline T\in\mathbb{R}_+^2$, and is called bounded if there is a deterministic $\underline t\in \mathbb{R}_+^2$ such that $\underline T\le \underline t$.
For an ${\cal F}$-stopping time $\underline T$, we define a new $\sigma$-algebra ${\cal F}_{\underline T}$ by
${\cal F}_{\underline T}=\{A\in{\cal F}_{\underline\infty}:A\cap \{\underline T\leq \underline t\}\in {\cal F}_{\underline t}, \forall \underline t\in{\mathbb{R}_+^2}\}$.
\end{Definition}
The following proposition follows from a standard argument.
\begin{Proposition}
The right-continuous augmentation of $\overline{\cal F}$ is itself, and so $\overline{\cal F} $ is right-continuous. A $[0,\infty]^2$-valued random map $\underline T$ is an $\overline{\cal F}$-stopping time if and only if $\{\underline T<\underline t\}\in{\cal F}_{\underline t}$ for any $\underline t\in\mathbb{R}_+^2$. For an $\overline{\cal F}$-stopping time $\underline T$, $A\in\overline{\cal F}_{\underline T}$ if and only if $A\cap\{\underline T<\underline t\}\in{\cal F}_{\underline t}$ for any $\underline t\in\mathbb{R}_+^2$. If $(\underline T^n)_{n\in\mathbb{N}}$ is a decreasing sequence of $\overline{\cal F}$-stopping times, then $\underline T:=\inf_n \underline T^n$ is also an $\overline{\cal F}$-stopping time, and $\overline{\cal F}_{\underline T}=\bigcap_n \overline{\cal F}_{\underline T^n}$.
\label{right-continuous-0}
\end{Proposition}
\begin{Definition}
A relatively open subset $\cal R$ of $\mathbb{R}_+^2$ is called a history complete region, or simply an HC region, if for any $\underline t\in \cal R$, we have $[\underline 0, \underline t]\subset\cal R$. {We use the name because we view (the rectangle) $[\underline 0, \underline t]$ as the history of $\underline t$.} Given an HC region $\cal R$, for $\sigma\in\{+,-\}$, define $T^{\cal R}_\sigma:\mathbb{R}_+\to\mathbb{R}_+\cup\{\infty\}$ by $T^{\cal R}_\sigma(t)=\sup\{s\ge 0:s \underline e_\sigma+t\underline e_{-\sigma}\in{\cal R}\}$, where we set $\sup\emptyset =0$. {See Figure \ref{fig-HC} for an illustration.}
An HC region-valued random element $\cal D$ is called an ${\cal F}$-stopping region if for any $\underline t\in\mathbb{R}_+^2$, $\{\omega\in\Omega:\underline t\in {\cal D}(\omega)\}\in{\cal F}_{ \underline t}$. A random function $X({\underline t})$ with a random domain $\cal D$ is called an ${\cal F}$-adapted HC process if $\cal D$ is an ${\cal F}$-stopping region, and for every $\underline t\in\mathbb{R}_+^2$, $X_{\underline t}$ restricted to $\{\underline t\in\cal D\}$ is ${\cal F}_{ \underline t}$-measurable. \label{Def-HC}
\end{Definition}
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{HC2.png}
\end{center}
\caption{{ The figure above illustrates an HC region $\cal R$ (grey), the function $T^{\cal R}_+$ valued at $t_-^1,t_-^2$, and the function $T^{\cal R}_-$ valued at $t_+^1,t_+^2,t_+^3$.}} \label{fig-HC}
\end{figure}
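As a purely illustrative sketch of Definition \ref{Def-HC} (not used in any proof), one may represent an HC region as a finite union of relatively open rectangles of the form $[0,a_+)\times[0,a_-)$; such a union contains the history of each of its points, and the functions $T^{\cal R}_\pm$ can then be computed explicitly. The representation and the function names below are our own.
\begin{verbatim}
# Illustrative sketch (not part of the argument): an HC region represented as
# a finite union of "histories" [0,a_+) x [0,a_-).  We compute membership and
# the boundary functions T^R_+ and T^R_- of Definition (HC region), with the
# convention sup(empty set) = 0.
def hc_region(corners):
    # corners: list of pairs (a_plus, a_minus)
    def contains(t_plus, t_minus):
        return any(t_plus < a_p and t_minus < a_m for a_p, a_m in corners)
    def T_plus(t_minus):
        return max((a_p for a_p, a_m in corners if t_minus < a_m), default=0.0)
    def T_minus(t_plus):
        return max((a_m for a_p, a_m in corners if t_plus < a_p), default=0.0)
    return contains, T_plus, T_minus

contains, T_plus, T_minus = hc_region([(3.0, 1.0), (1.5, 2.5)])
print(contains(2.0, 0.5), contains(2.0, 2.0))   # True, False
print(T_plus(0.5), T_plus(2.0), T_plus(3.0))    # 3.0, 1.5, 0.0
\end{verbatim}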
The following propositions are
\cite[Lemmas 2.7 and 2.9]{Two-Green-interior}.
\begin{Proposition}
Let $\underline T$ and $\underline S$ be two ${\cal F}$-stopping times. Then (i) $\{\underline T\le \underline S\} \in{\cal F}_{\underline S}$; (ii) if $\underline S$ is a constant $\underline s\in\mathbb{R}_+^2$, then $\{\underline T\le \underline S\} \in{\cal F}_{ \underline T}$; and (iii) if $f$ is an ${\cal F}_{ \underline T}$-measurable function, then ${\bf 1}_{\{\underline T\le \underline S\}}f$ is ${\cal F}_{ \underline S}$-measurable. In particular, if $\underline T\le \underline S$, then ${\cal F}_{ \underline T}\subset {\cal F}_{ \underline S}$. \label{T<S}
\end{Proposition}
\begin{comment}
\begin{Proposition}
Let $(X_{\underline t})_{\underline t\in\mathbb{R}_+^2}$ be a continuous ${\cal F}$-adapted process. Let $\underline T$ be an ${\cal F}$-stopping time. Then $X_{\underline T}$ is ${\cal F}_{\underline T}$-measurable on $\{\underline T\in\mathbb{R}_+^2\}$. \label{measurable}
\end{Proposition}
\end{comment}
We will need the following proposition to do localization. The reader should note that for an ${\cal F}$-stopping time $\underline T$ and a deterministic time $\underline t\in\mathbb{R}_+^2$, $\underline T\wedge \underline t$ may not be an ${\cal F}$-stopping time. This is the reason why we introduce a more complicated stopping time.
\begin{Proposition}
Let $\underline T$ be an ${\cal F}$-stopping time. Fix a deterministic time $\underline t\in\mathbb{R}_+^2$. Define $\underline T^{\underline t}$ such that if $\underline T\le \underline t$, then $\underline T^{\underline t}=\underline T$; and if $\underline T\not\le \underline t$, then $\underline T^{\underline t}=\underline t$. Then $\underline T^{\underline t}$ is an ${\cal F}$-stopping time bounded above by $\underline t$, and ${\cal F}_{\underline T^{\underline t}}$ agrees with ${\cal F}_{\underline T}$ on $\{\underline T\le \underline t\}$, i.e., $\{\underline T\le \underline t\}\in {\cal F}_{\underline T^{\underline t}}\cap {\cal F}_{\underline T}$, and for any $A\subset \{\underline T\le \underline t\}$, $A\in {\cal F}_{\underline T^{\underline t}}$ if and only if $A\in {\cal F}_{\underline T}$. \label{prop-local}
\end{Proposition}
\begin{proof}
Clearly $\underline T^{\underline t}\le \underline t$. Let $\underline s\in\mathbb{R}_+^2$. If $\underline t\le \underline s$, then $\{\underline T^{\underline t}\le \underline s\}$ is the whole space. If $\underline t\not\le \underline s$, then $\{\underline T^{\underline t}\le \underline s\}=\{\underline T\le \underline t\}\cap \{\underline T\le \underline s\}=\{\underline T\le \underline t\wedge \underline s\}\in {\cal F}_{ \underline t\wedge \underline s}\subset {\cal F}_{ \underline s}$. So $\underline T^{\underline t}$ is an ${\cal F}$-stopping time.
By Proposition \ref{T<S}, $\{\underline T\le \underline t\}\in {\cal F}_{\underline T}$. Suppose $A\subset \{\underline T\le \underline t\}$ and $A\in {\cal F}_{\underline T}$. Let $\underline s\in\mathbb{R}_+^2$. If $\underline t\le \underline s$, then $A\cap \{\underline T^{\underline t}\le \underline s\}=A=A\cap \{\underline T\le \underline t\} \in{\cal F}_{\underline t}\subset {\cal F}_{\underline s}$. If $\underline t\not\le \underline s$, then $A\cap \{\underline T^{\underline t}\le \underline s\}= A\cap\{\underline T\le \underline t\wedge \underline s\}\in {\cal F}_{\underline t\wedge \underline s}\subset {\cal F}_{\underline s}$.
So $A\in{\cal F}_{\underline T^{\underline t}}$. In particular, $\{\underline T\le \underline t\}\in{\cal F}_{\underline T^{\underline t}}$. On the other hand, suppose $A\subset \{\underline T\le \underline t\}$ and $A\in {\cal F}_{\underline T^{\underline t}}$. Let $\underline s\in\mathbb{R}_+^2$. If $\underline t\le \underline s$, then $A\cap \{\underline T\le \underline s\}=A=A\cap \{\underline T^{\underline t}\le \underline t\}\in{\cal F}_{\underline t}\subset{\cal F}_{\underline s}$. If $\underline t\not\le \underline s$, then $A\cap \{\underline T\le \underline s\}=A\cap \{\underline T\le \underline t\}\cap \{\underline T\le \underline s\}=A\cap\{\underline T^{\underline t}\le \underline s\}\in{\cal F}_{\underline s}$. Thus, $A\in {\cal F}_{\underline T}$. So for $A\subset \{\underline T\le \underline t\}$, $A\in {\cal F}_{\underline T^{\underline t}}$ if and only if $A\in {\cal F}_{\underline T}$.
\end{proof}
Now we fix a probability measure $\mathbb{P}$, and let $\mathbb{ E}$ denote the corresponding expectation.
\begin{Definition}
An ${\cal F}$-adapted process $(X_{\underline t})_{\underline t\in\mathbb{R}_+^2}$ is called an ${\cal F}$-martingale (w.r.t.\ $ \mathbb{P}$) if for any $\underline s\le \underline t\in\mathbb{R}_+^2$, a.s.\ $\mathbb{ E}[X_{\underline t}|{\cal F}_{\underline s}]=X_{\underline s}$. If there is $\zeta\in L^1(\Omega,{\cal F},\mathbb{P})$ such that $X_{\underline t}=\mathbb{ E}[\zeta|{\cal F}_{ \underline t}]$ for every $\underline t\in\mathbb{R}_+^2$, then we say that $X$ is an ${\cal F}$-martingale closed by $\zeta$.
\end{Definition}
\begin{comment}
\begin{Proposition}
A continuous ${\cal F}$-martingale is also an $\overline{\cal F}$-martingale. \label{right-continuous}
\end{Proposition}
\begin{proof}
Let $X$ be a continuous ${\cal F} $-martingale.
Let $\underline s\le \underline t\in\mathbb{R}_+^2$, and $A\in \overline{\cal F}_{\underline s}$. Fix $\underline\varepsilon\in\mathbb{R}_+^2$ with $\underline\varepsilon>\underline 0$. Then $A\in{\cal F}_{\underline s+\underline\varepsilon}$. From $\mathbb{ E}[X(\underline t+\underline\varepsilon)|{\cal F}_{\underline s+\underline\varepsilon}]=X(\underline s+\underline\varepsilon)$ we get $\mathbb{ E}[{\bf 1}_A X(\underline t+\underline\varepsilon)]=\mathbb{ E}[{\bf 1}_A X(\underline s+\underline\varepsilon)]$. By letting $\underline\varepsilon\downarrow \underline 0$ and using sample-wise continuity of $X$ and uniform integrability of $X|_{[\underline 0,\underline t+(1,1)]}$, we get $\mathbb{ E}[{\bf 1}_A X(\underline t )]=\mathbb{ E}[{\bf 1}_A X(\underline s )]$. This implies that $\mathbb{ E}[X(\underline t)|\overline{\cal F}_{\underline s}]=X(\underline s)$, as desired.
\end{proof}
\end{comment}
The following proposition is \cite[Lemma 2.11]{Two-Green-interior}.
\begin{Proposition} [Optional Stopping Theorem]
Suppose $X$ is a continuous ${\cal F}$-martingale.
The following are true. (i) If $X$ is closed by $\zeta$, then for any finite ${\cal F}$-stopping time $\underline T$, $X_{\underline T}=\mathbb{ E}[\zeta|{\cal F}_{ \underline T}]$. (ii) If $\underline T\le \underline S$ are two bounded ${\cal F}$-stopping times, then $\mathbb{ E}[X_{\underline S}|{\cal F}_{\underline T}]=X_{\underline T}$.\label{OST}
\end{Proposition}
\subsection{Jacobi polynomials}
For $\alpha,\beta>-1$, Jacobi polynomials (\cite[Chapter 18]{NIST:DLMF}) $P^{(\alpha,\beta)}_n(x)$, $n=0,1,2,3,\dots$, are a class of classical orthogonal polynomials with respect to the weight $\Psi^{(\alpha,\beta)}(x):={\bf 1}_{(-1,1)}(1-x)^\alpha(1+x)^\beta$. This means that each $P^{(\alpha,\beta)}_n(x)$ is a polynomial of degree $n$, and for the inner product defined by $\langle f,g\rangle_{\Psi^{(\alpha,\beta)}}:=\int_{-1}^1 f(x)g(x)\Psi^{(\alpha,\beta)}(x)dx$, we have $\langle P^{(\alpha,\beta)}_n, P^{(\alpha,\beta)}_m\rangle_{\Psi^{(\alpha,\beta)}}=0$ when $n\ne m$. The normalization is that $P^{(\alpha,\beta)}_n(1)=\frac{\Gamma(\alpha+n+1)}{n!\Gamma(\alpha+1)}$, $P^{(\alpha,\beta)}_n(-1)=(-1)^n\frac{\Gamma(\beta+n+1)}{n!\Gamma(\beta+1)}$, and
\begin{equation}\| P^{(\alpha,\beta)}_n\|_{\Psi^{(\alpha,\beta)}}^2 =\frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\cdot \frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)} {n!\Gamma(n+\alpha+\beta+1)}.\label{norm}\end{equation}
For each $n\ge 0$, $P^{(\alpha,\beta)}_n(x)$ is a solution of the second order differential equation:
\begin{equation} (1-x^2)y''-[(\alpha+\beta+2)x+(\alpha-\beta)]y'+n(n+\alpha+\beta+1)y=0.\label{Jacobi-ODE}\end{equation}
When $\max\{\alpha,\beta\}>-\frac 12$, we have an exact value of the supremum norm of $P^{(\alpha,\beta)}_n$ over $[-1,1]$:
\begin{equation} \|P^{(\alpha,\beta)}_n\|_\infty= \max\{|P^{(\alpha,\beta)}_n(1)|,|P^{(\alpha,\beta)}_n(-1)|\} =\frac{\Gamma(\max\{\alpha,\beta\}+n+1)}{n!\Gamma(\max\{\alpha,\beta\}+1)}.\label{super-exact}\end{equation}
For general $\alpha,\beta>-1$, we get an upper bound {on} $\|P^{(\alpha,\beta)}_n\|_\infty$ using (\ref{super-exact}), the exact value of $P^{(\alpha,\beta)}_n(1)$, and the derivative formula $\frac d{dx} P^{(\alpha,\beta)}_n(x)=\frac{ \alpha+\beta+n+1}{2} P^{(\alpha+1,\beta+1)}_{n-1}(x)$ for $n\ge 1$:
\begin{equation} \|P^{(\alpha,\beta)}_n\|_\infty\le \frac{\Gamma(\alpha+n+1)}{n!\Gamma(\alpha+1)}+ ({ \alpha+\beta+n+1} )\cdot \frac{\Gamma(\max\{\alpha,\beta\}+n+1)}{\Gamma(n)\Gamma(\max\{\alpha,\beta\}+2)}.\label{super-upper}\end{equation}
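The following numerical sketch (purely illustrative, not used below) checks the orthogonality relation, the norm formula (\ref{norm}), and the supremum norm formula (\ref{super-exact}) for a sample choice of parameters, assuming that scipy's \texttt{eval\_jacobi} uses the normalization $P^{(\alpha,\beta)}_n(1)=\frac{\Gamma(\alpha+n+1)}{n!\Gamma(\alpha+1)}$ stated above; the parameter values are our own.
\begin{verbatim}
# Illustrative numerical check (not part of the argument) of the orthogonality,
# the norm formula, and the sup-norm formula for Jacobi polynomials.
import numpy as np
from math import factorial
from scipy.special import eval_jacobi, roots_jacobi, gamma

alpha, beta, n, m = 0.5, -0.3, 4, 6
x, w = roots_jacobi(20, alpha, beta)        # Gauss-Jacobi nodes/weights for (1-x)^a (1+x)^b
inner = lambda i, j: np.sum(w * eval_jacobi(i, alpha, beta, x) * eval_jacobi(j, alpha, beta, x))

print(inner(n, m))                          # ~ 0: orthogonality for n != m
norm_sq = (2 ** (alpha + beta + 1) / (2 * n + alpha + beta + 1)
           * gamma(n + alpha + 1) * gamma(n + beta + 1)
           / (factorial(n) * gamma(n + alpha + beta + 1)))
print(inner(n, n) - norm_sq)                # ~ 0: the norm formula
grid = np.linspace(-1, 1, 20001)
sup = np.max(np.abs(eval_jacobi(n, alpha, beta, grid)))
endpoint = gamma(max(alpha, beta) + n + 1) / (factorial(n) * gamma(max(alpha, beta) + 1))
print(sup - endpoint)                       # ~ 0: sup norm attained at an endpoint
\end{verbatim}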
\section{Deterministic Ensemble of Two Chordal Loewner Curves} \label{section-deterministic}
In this section, we develop a framework for commuting pairs of deterministic chordal Loewner curves, which will be needed to study the commuting pairs of random chordal Loewner curves in the next two sections. Much of the length of this section is due to the fact that we allow the two Loewner curves {to} have intersections. This is needed in order to handle the case $\kappa\in(4,8)$. The ensemble without intersections appeared earlier in \cite{reversibility,duality}.
\subsection{Ensemble with possible intersections} \label{section-deterministic1}
Let $w_-<w_+\in\mathbb{R}$. Suppose for $\sigma\in\{+,-\}$, $\eta_\sigma(t)$, $0\le t<T_\sigma$, is a chordal Loewner curve (with speed $1$) driven by $\widehat w_\sigma$ started from $w_\sigma$, such that $\eta_+$ does not hit $(-\infty, w_-]$, and $\eta_-$ does not hit $[w_+,\infty)$. Let $K_\sigma(t_\sigma)=\Hull(\eta_\sigma[0,t_\sigma])$, $0\le t_\sigma<T_\sigma$, $\sigma\in\{+,-\}$. Then $K_\sigma(\cdot)$ are chordal Loewner hulls driven by $\widehat w_\sigma$, $\hcap_2(K_\sigma(t_\sigma))=t_\sigma$, and by Proposition \ref{Loewner-chain},
\begin{equation} \{\widehat w_\sigma(t_\sigma)\}=\bigcap_{\delta>0} \overline{K_\sigma(t_\sigma+\delta)/K_\sigma(t_\sigma)},\quad 0\le t_\sigma<T_\sigma.\label{haw=}\end{equation}
The corresponding chordal Loewner maps are $g_{K_\sigma(t)}$, $0\le t<T_\sigma$, $\sigma\in\{+,-\}$. From the assumption on $\eta_+$ and $\eta_-$ we get
\begin{equation} a_{K_-(t_-)}\le w_-< a_{K_+(t_+)},\quad b_{K_-(t_-)}< w_+\le b_{K_+(t_+)},\quad\mbox{for } t_\sigma\in(0,T_\sigma),\quad \sigma\in\{+,-\}. \label{lem-aabb}\end{equation}
Since each $K_\sigma(t)$ is generated by a curve, $f_{K_\sigma(t)}$ is well defined.
Let ${\cal I}_\sigma =[0,T_\sigma)$, $\sigma\in\{+,-\}$, and for $\underline t=(t_+,t_-)\in{\cal I}_+\times{\cal I}_-$, define
\begin{equation} K(\underline t)=\Hull(\eta_+[0,t_+]\cup \eta_-[0,t_-]),\quad \mA(\underline t)=\hcap_2(K(\underline t)),\quad H(\underline t)=\mathbb{H}\setminus K(\underline t).\label{KmA}\end{equation} It is obvious that $K(\cdot,\cdot)$ and $\mA(\cdot,\cdot)$ are increasing ({although not necessarily strictly}) in both variables. Since $\partial K(t_+,t_-)$ is locally connected, $f_{K(t_+,t_-)}$ is well defined. For $\sigma\in\{+,-\}$, $t_{-\sigma}\in{\cal I}_{-\sigma}$ and $t_\sigma\in{\cal I}_\sigma$, define $K_{\sigma}^{t_{-\sigma}}(t_\sigma)=K(t_+,t_-)/K_{-\sigma}(t_{-\sigma})$. Then we have
\begin{equation} g_{K(t_+,t_-)}=g_{K_{+}^{t_-}(t_+)}\circ g_{K_-(t_-)}=g_{K_{-}^{t_+}(t_-)}\circ g_{K_+(t_+)}.\label{circ-g}\end{equation}
By (\ref{lem-aabb}) and the assumption on $\eta_+,\eta_-$, we have $a_{K(t_+,t_-)}=a_{K_-(t_-)}$ if $t_->0$, and $b_{K(t_+,t_-)}=b_{K_+(t_+)}$ if $t_+>0$.
\begin{Lemma}
For any $t_+\le t_+'\in{\cal I}_+$ and $t_-\le t_-'\in {\cal I}_-$, we have
\begin{equation} \mA(t_+',t_-')-\mA(t_+',t_-)-\mA(t_+,t_-')+\mA(t_+,t_-)\le 0.\label{mA-concave}\end{equation}
{In particular}, $\mA$ is Lipschitz continuous with constant $1$ in each variable, and so is continuous on ${\cal I}_+\times {\cal I}_-$.
\label{lem-lips}
\end{Lemma}
\begin{proof}
Let $t_+\le t_+'\in{\cal I}_+$ and $t_-\le t_-'\in {\cal I}_-$. Since $K(t_+',t_-)$ and $K(t_+,t_-')$ together generate the $\mathbb{H}$-hull $K(t_+',t_-')$, and they both contain $K(t_+,t_-)$, we obtain (\ref{mA-concave}) from Proposition \ref{hcap-concave}. The rest of the statements follow easily from (\ref{mA-concave}), the monotonicity of $\mA$, and the fact that $\mA(t_\sigma \underline e_\sigma)=t_\sigma$ for any $t_\sigma\in{\cal I}_\sigma$, $\sigma\in\{+,-\}$.
\end{proof}
\begin{Definition}
Let $\cal D\subset {\cal I}_+\times {\cal I}_-$ be an HC region as in Definition \ref{Def-HC}. Suppose that there are dense subsets ${\cal I}_+^*$ and ${\cal I}_-^*$ of ${\cal I}_+$ and ${\cal I}_-$, respectively, such that for any $\sigma\in\{+,-\}$ and $t_{-\sigma}\in {\cal I}^*_{-\sigma}$, the following two conditions hold:
\begin{enumerate}
\item[(I)] $K_\sigma^{t_{-\sigma}}(t_{{\sigma}})$, $0\le t_\sigma<T^{\cal D}_\sigma(t_{-\sigma})$, are chordal Loewner hulls generated by a chordal Loewner curve, denoted by $\eta_{\sigma}^{t_{-\sigma}}$, with some speed.
\item [(II)] $ \eta_{\sigma}^{t_{-\sigma}}[0,T^{\cal D}_\sigma(t_{-\sigma}))\cap \mathbb{R}$ has Lebesgue measure zero.
\end{enumerate}
Then we call $(\eta_+,\eta_-;{\cal D})$ a commuting pair of chordal Loewner curves, and call $K(\cdot,\cdot)$ and $\mA(\cdot,\cdot)$ the hull function and the capacity function, respectively, for this pair.
\label{commuting-Loewner}
\end{Definition}
\begin{Remark}
{The theory developed in this section will later be applied to the random setting. In Section \ref{section-commuting-SLE-kappa-rho}, we will study a commuting pair of SLE$_\kappa(2,\underline\rho)$ curves $(\eta_+,\eta_-)$. They have random lifespans ${\cal I}_+$ and ${\cal I}_-$, and satisfy the property that, for any $\sigma\in\{+,-\}$ and $t_{-\sigma}\in \mathbb{R}_+$, a.s.\ on the event $\{t_{-\sigma}\in {\cal I}_{-\sigma}\}$, Conditions (I) and (II) are satisfied. Letting ${\cal I}_\pm^*={\cal I}_\pm\cap\mathbb{Q}$, we see that $\eta_+$ and $\eta_-$ a.s.\ satisfy the condition of Definition \ref{commuting-Loewner}. So they are a.s.\ a (random) commuting pair of chordal Loewner curves.
}
Later in Lemma \ref{Lebesgue} we will show that for the commuting pair in Definition \ref{commuting-Loewner}, Conditions (I) and (II) actually hold for all $t_{-\sigma}\in {\cal I}_{-\sigma}$, $\sigma\in\{+,-\}$.
\end{Remark}
From now on, let $(\eta_+,\eta_-;{\cal D})$ be a commuting pair of chordal Loewner curves, and let ${\cal I}_+^*$ and ${\cal I}_-^*$ be as in Definition \ref{commuting-Loewner}.
\begin{Lemma}
$K(\cdot,\cdot)$ and $\mA(\cdot,\cdot)$ restricted to $\cal D$ are strictly increasing in both variables. \label{lem-strict}
\end{Lemma}
\begin{proof}
By Condition (I), for any $\sigma\in\{+,-\}$ and $t_{-\sigma}\in{\cal I}_{-\sigma}^*$, $t\mapsto K(t_{-\sigma}\underline e_{-\sigma}+t\underline e_\sigma)$ and $t\mapsto \mA(t_{-\sigma}\underline e_{-\sigma}+t\underline e_\sigma)$ are strictly increasing on $[0,T^{\cal D}_\sigma(t_{-\sigma}))$. By (\ref{mA-concave}) and the denseness of ${\cal I}_{-\sigma}^*$ in ${\cal I}_{-\sigma}$, this property extends to any $t_{-\sigma}\in{\cal I}_{-\sigma}$.
\end{proof}
In the rest of the section, when we talk about $K(t_+,t_-)$, $\mA(t_+,t_-)$, $K_{+}^{t_-}(t_+)$ and $K_{-}^{t_+}(t_-)$, it is always implicitly assumed that $(t_+,t_-)\in\cal D$. So we may now simply say that $K(\cdot,\cdot)$ and $\mA(\cdot,\cdot)$ are strictly increasing in both variables.
\begin{Lemma} We have the following facts.
\begin{enumerate}
\item [(i)] Let $\underline a=(a_+,a_-)\in\cal D$ and $L={\rad_0}(K(a_+,a_-))$. Let $\sigma\in\{+,-\}$. Suppose $ t_\sigma<t_\sigma'\in [0,a_\sigma]$ satisfy that $\diam(\eta_\sigma[t_\sigma,t_\sigma'])<r$ for some $r\in(0,L)$. Then for any $t_{-\sigma}\in[0,a_{-\sigma}]$,
$\diam(K_{\sigma}^{t_{-\sigma}}({t_\sigma'})/K_{\sigma}^{t_{-\sigma}}(t_\sigma))\le 10\pi L (\log(L/r))^{-1/2}$.
\item [(ii)] For any $ (a_+,a_-)\in \cal D$ and $\sigma\in\{+,-\}$,
$$\lim_{\delta\downarrow 0}\, \sup_{0\le t_\sigma\le a_\sigma}\,\sup_{t_\sigma'\in(t_\sigma,t_\sigma+\delta)}\, \sup_{0\le t_{-\sigma}\le a_{-\sigma}}\, \sup_{z\in\mathbb{C}\setminus K_{\sigma}^{t_{-\sigma}}(t_\sigma')^{\doub} }\,|g_{K_{\sigma}^{t_{-\sigma}}(t_\sigma')}(z)-g_{K_{\sigma}^{t_{-\sigma}}(t_\sigma)}(z)|=0.$$
\item [(iii)] The map $(\underline t,z)\mapsto g_{K(\underline t)}(z)$ is continuous on $\{(\underline t,z) :\underline t\in{\cal D}, z\in\mathbb{C}\setminus K(\underline t)^{\doub}\}$.
\end{enumerate} \label{lem-uniform}
\end{Lemma}
\begin{proof}
(i) Suppose $\sigma=+$ by symmetry. We {first} assume that $a_\pm \in{\cal I}_\pm^*$.
Let $\Delta \eta_+=\eta_+[t_+,t_+']$ and $S=\{|z-\eta_+(t_+)|=r\}$. By assumption, $\Delta\eta_+\subset \{|z-\eta_+(t_+)|<r\}$. By Lemma \ref{lem-strict}, there is $z_*\in \Delta \eta_+\cap H(t_+,a_-)\subset H(t_+,t_-)$. Since $z_*\in \{|z-\eta_+(t_+)|<r\}$, the set $ S\cap H(t_+,t_-)$ has a connected component, denoted by $J$, which separates $z_*$ from $\infty$ in $H(t_+,t_-)$. Such $J$ is a crosscut of $H(t_+,t_-)$\footnote{{A crosscut of a domain $D$ is the image of a simple curve $\gamma:(\alpha,\beta)\to D$ such that the limits $\lim_{t\to \alpha^+} \gamma(t)$ and $\lim_{t\to \beta^-}\gamma(t)$ both exist and lie on $\partial D$.}}, which divides $H(t_+,t_-)$ into two domains, where the bounded domain, denoted by $D_J$, contains $z_*$.
Now $\Delta \eta_+\cap H(t_+,a_-)\subset H(t_+,a_-)\setminus J$. We claim that $\Delta \eta_+\cap H(t_+,a_-)$ is contained in one connected component of $H(t_+,a_-)\setminus J$. Note that $J\cap H(t_+,a_-)$ is a disjoint union of crosscuts, each of which divides $H(t_+,a_-)$ into two domains. To prove the claim, it suffices to show that, for each connected component $J_0$ of $J\cap H(t_+,a_-)$, $\Delta \eta_+\cap H(t_+,a_-)$ is contained in one connected component of $H(t_+,a_-)\setminus J_0$. Suppose that this is not true for some $J_0$. Let $J_e=g_{K(t_+,a_-)}(J_0)$. Then $J_e$ is a crosscut of $\mathbb{H}$, which divides $\mathbb{H}$ into two domains, both of which intersect $\Delta\widehat\eta_+:=g_{K(t_+,a_-)}(\Delta \eta_+\cap H(t_+,a_-))$. Since $\Delta\eta_+$ has positive distance from $S\supset J$, and $g_{K(t_+,a_-)}^{-1}|_{\mathbb{H}}$ extends continuously to $\overline\mathbb{H}$, $\Delta\widehat\eta_+$ has positive distance from $J_e$. Thus, there is another crosscut $J_i$ of $\mathbb{H}$, which is disjoint from and surrounded by $J_e$, such that the subdomain of $\mathbb{H}$ bounded by $J_i$ and $J_e$ is disjoint from $\Delta\widehat\eta_+$. Label the three connected components of $\mathbb{H}\setminus (J_e\cup J_i)$ by $D_e,A,D_i$, respectively, from outside to inside. Then $\Delta\widehat\eta_+$ intersects both $D_e$ and $D_i$, but is disjoint from $\overline A$. Let $K_i=D_i\cup J_i$ and $K_e=K_i\cup A\cup J_e$ be two $\mathbb{H}$-hulls.
Let $\eta_+^* =\eta_+(t_++\cdot)$ and $\widehat\eta_+^* =g_{K(t_+,a_-)}\circ \eta_+^* $, whose domain is $S:=\{s\in [0,T_+-t_+): \eta_+^*(s)\in\mathbb{H}\setminus K(t_+,a_-)\}$. For each $s\in[0,\delta:=t_+'-t_+]$, $K(t_++s,a_-)=\Hull(K(t_+,a_-)\cup\Delta\eta_+^s)$, and so $K_+'(s):=K_{+}^{a_-}(t_++s)/K_{+}^{a_-}(t_+)= K(t_++s,a_-)/K(t_+,a_-)$ (by (\ref{K123})) is the $\mathbb{H}$-hull generated by $\widehat\eta_+^*([0,s]\cap S)$. For $0\le s\le \delta$, since $A$ is disjoint from $\widehat \eta_+^*([0,\delta]\cap S)\subset \Delta\widehat\eta_+$, it is either contained in or disjoint from $K_+'(s)$. If $K_+'(s)\supset A$, then $K_+'(s)\supset \Hull(A)=K_i$; if $K_+'(s)\cap A=\emptyset$, then by the connectedness of $\overline{K_+'(s)}$, $K_+'(s)$ is contained in either $K_i$ or $\mathbb{H}\setminus(K_i\cup A)$.
Since $K_+'(\delta)\supset \Delta\widehat \eta_+$ intersects both $D_e$ and $D_i$, we get $K_+'(\delta)\supset K_e$. Let $s_0=\inf \{s\in [0,\delta]: K_e\subset K_+'(s)\}$. By Proposition \ref{Loewner-chain}, we have $s_0\in(0,\delta]$ and $K_+'(s_0)\supset K_e$. By the increasingness of $K_+'(\cdot)$, $\bigcup_{0\le s<s_0} K_+'(s)$ is contained in either $K_i$ or $\mathbb{H}\setminus (K_i\cup A)$. Let $S_0=\{s\in(s_0,T_+-t_+):\eta_+^*(s)\in\mathbb{H}\setminus K(t_++s_0,a_-)\}$. Then $\widehat \eta_+^*(S_0)\subset g_{K(t_+,a_-)}(\mathbb{H}\setminus K(t_++s_0,a_-))= \mathbb{H}\setminus K_+'(s_0)\subset \mathbb{H}\setminus K_e=D_e$. By Lemma \ref{lem-strict}, $S$ is dense in $[s_0,T_+-t_+]$. Thus, $\widehat \eta_+^*([s_0,\delta]\cap S)\subset \overline{D_e}$. Since $\Delta\widehat\eta_+=\widehat \eta_+^*([0,\delta]\cap S)$ intersects both $D_e$ and $D_i$, we conclude that $\widehat \eta_+^*([0,s_0)\cap S)$ intersects $D_i$. So $K_+'(s)\subset K_i$ for $0\le s<s_0$, which implies that $K_+'(s_0)=\Hull(\bigcup_{0\le s<s_0} K_+'(s))\subset K_i$. This contradicts that $K_i\subsetneqq K_e\subset K_+'(s_0)$. So the claim is proved.
\begin{comment}
Let $\Delta\eta_+^s=\eta_+[t_+,t_++s]$ and $\Delta\widehat \eta_+^s= g_{K(t_+,a_-)}(\Delta \eta_+^s\cap H(t_+,a_-))$, $0\le s\le \delta:=t_+'-t_+$. For each $s\in[0,\delta]$, $K(t_++s,a_-)=\Hull(K(t_+,a_-)\cup\Delta\eta_+^s)$, and so $K_+'(s):=K_{+}^{a_-}(t_++s)/K_{+}^{a_-}(t_+)= K(t_++s,a_-)/K(t_+,a_-)$ (by (\ref{K123})) is the $\mathbb{H}$-hull generated by $\Delta\widehat \eta_+^s$. Since $A$ is disjoint from $\Delta\widehat \eta_+^s\subset \Delta\widehat\eta_+$, it is either contained in or is disjoint from $K_+'(s)$. If $K_+'(s)\supset A$, then $K_+'(s)\supset \Hull(A)=K_i$; if $K_+'(s)\cap A=\emptyset$, then by the connectedness of $\overline{K_+'(s)}$, $K_+'(s)$ is contained in either $D_i$ or $D_e$. Since $K_+'(\delta)\supset \Delta\widehat\eta_+^\delta=\Delta\widehat \eta_+$ intersects both $D_e$ and $D_i$, we get $K_+'(\delta)\supset K_e$. Let $s_0=\inf \{s\in [0,\delta]: K_e\subset K_+'(s)\}$. By Proposition \ref{Loewner-chain} $s_0\in(0,\delta]$; $ K_+'(s_0)\supset K_e$; and $\bigcup_{0\le s<s_0} K_+'(s)$ is contained in either $D_i$ or $D_e$. Let $S=\{s\in(s_0,T_+-t_+):\eta_+(t_++s)\in\mathbb{H}\setminus K(t_++s_0,a_-)\}$. Then $g_{K(t_+,a_-)}(\eta_+(t_++S))\subset \mathbb{H}\setminus K_+'(s_0)\subset \mathbb{H}\setminus K_e=D_e$.
By Lemma \ref{lem-strict}, $S$ is dense in $[s_0,T_+-t_+]$. Thus, $g_{K(t_+,a_-)}(\eta_+([t_++s_0,t_+'])\cap H(t_+,a_-))\subset \overline{D_e}$. Since $\Delta\widehat\eta_+^\delta$ intersects both $D_i$ and $D_e$, there is $s_*\in(0,s_0)$ such that $\Delta\widehat\eta_+^{s_*}$ intersects $D_i$. Thus, $\bigcup_{0\le s<s_0} K_+'(s)\subset D_i$, which implies that $K_+'(s_0)=\Hull(\bigcup_{0\le s<s_0} K_+'(s))\subset \Hull(D_i)=K_i$. Since $K_i\subsetneqq K_e$, we get a contradiction.
Let $\Delta\eta_+^s=\eta_+[t_+,t_++s]$ and $\Delta\widehat \eta_+^s= g_{K(t_+,a_-)}(\Delta \eta_+^s\cap H(t_+,a_-))$, $0\le s\le \delta:=t_+'-t_+$. For each $s\in[0,\delta]$, $K(t_++s,a_-)=\Hull(K(t_+,a_-)\cup\Delta\eta_+^s)$. So $K_+'(s):=K_{+}^{a_-}(t_++s)/K_{+}^{a_-}(t_+)= K(t_++s,a_-)/K(t_+,a_-)$ (by (\ref{K123})) is the $\mathbb{H}$-hull generated by $\Delta\widehat \eta_+^s$. Since $A$ is disjoint from $\Delta\widehat \eta_+^s$, it is either contained in or is disjoint from $K_+'(s)$.
Since $a_-\in {\cal I}_-^*$, by Condition (I) and Proposition \ref{prop-connected}, $K_+'(s)$, $0\le s\le \delta$, are chordal Loewner hulls with some speed, and so the closure of each $K_+'(s)$ is connected.
By choosing $s$ small enough, we can make the diameter of $K_+'(s)$ less than the diameter of $A$. Then $A$ is not contained in $K_+'(s)$, and so must be disjoint from $K_+'(s)$. By the connectedness of its closure, $K_+'(s)$ is then contained in either $D_e$ or $D_i$. On the other hand, since $\widehat\eta_+^\delta$ intersects both $D_e$ and $D_i$, $K_+'(\delta)$ does the same thing. Thus, there is $s_0\in (0,\delta)$ such that for all $s\in(s_0,\delta]$, $K_+'(s)$ intersects both $D_e$ and $D_i$, and $\bigcup_{0\le s<s_0}K_+'(s)$ is contained in either $D_e$ or $D_i$. For $s>s_0$, because $\overline{K_+'(s)}$ is connected, $K_+'(s)$ intersects $A$, and so must contain $A$. Since $\mathbb{H}\setminus K_+'(s)$ is connected and unbounded, we get $A\cup D_i\subset K_+'(s)$ for $s>s_0$. If $\bigcup_{0\le s<s_0}K_+'(s)\subset D_i$, then $\hcap(K_+'(s))\le \hcap ({D_i} )$ for $s<s_0$. However, $\hcap (K_+'(s))\ge \hcap ({D_i\cup A})>\hcap(D_i)$ for $s>s_0$, which contradicts the continuity of $s\mapsto \hcap(K_+'(s))$. Suppose $\bigcup_{0\le s<s_0}K_+'(s)\subset D_e$. Since $\Delta\widehat\eta_+^\delta$ intersects $D_i$, there is $s_*\in(s_0,\delta]$ such that $\eta_+(t_++s_*) \in H(t_+,a_-)$, and $g_{K(t_+,a_-)}(\eta_+(t_++s_*))\in D_i$. By Lemma \ref{lem-strict}, there is a sequence $s_n\downarrow s_*$ such that $\eta_+(t_++s_n)\in H(t_++s_*,a_-)$. Then $ \mathbb{H}\setminus K_+'(s_*)\ni g_{K(t_+,a_-)}(\eta_+(t_++s_n))\to g_{K(t_+,a_-)}(\eta_+(t_++s_*))\in D_i$. But this is impossible since $\mathbb{H}\setminus K_+'(s_*)\subset D_e$ and $\dist(D_e,D_i)>0$. The claim is now proved.
\end{comment}
Let $N$ denote the connected component of $H(t_+,a_-)\setminus J$ that contains $\Delta \eta_+\cap H(t_+,a_-)$.
Then $N$ is contained in one connected component of $H(t_+,t_-)\setminus J$. Since $N\supset \Delta \eta_+\cap H(t_+,a_-)\ni z_*$ and $z_*$ lies in the connected component $D_J$ of $H(t_+,t_-)\setminus J$, we get $\Delta \eta_+\cap H(t_+,a_-)\subset N\subset D_J$. Since $\Delta \eta_+\cap H(t_+,a_-)$ is dense in $\Delta \eta_+\cap H(t_+,t_-)$ (Lemma \ref{lem-strict}), and $\Delta\eta_+$ has positive distance from $J$, we get $\Delta \eta_+\cap H(t_+,t_-)\subset D_J$. Since $K(t_+',t_-)$ is the $\mathbb{H}$-hull generated by $K(t_+,t_-)$ and $\Delta\eta_+\cap H(t_+,t_-)$, we get $K(t_+',t_-)\setminus K(t_+,t_-)\subset D_J$, and so $K_+'(\delta)=K(t_+',t_-)/ K(t_+,t_-)$ is enclosed by the crosscut $g (J)$, where $g:= g_{K(t_+,t_-)}$. Thus, $\diam(K_+'(\delta))\le \diam(g ( J))$.
Let $R=2L$. From $\eta_+(t_+)\in \overline{K(\underline a)}$, we get $|\eta_+(t_+)|\le L$. Recall that $J\subset S=\{|z-\eta_+(t_+)|=r\}$. So the arc $J$ and the circle $\{|z-\eta_+(t_+)|=R\}$ are separated by the annulus centered at $\eta_+(t_+)$ with inner radius $r$ and outer radius $R-L=L$. Let $J'=\{|z-\eta_+(t_+)|=R\}\cap \mathbb{H}$ and $D_{J'}=(\mathbb{H}\cap \{|z-\eta_+(t_+)|<R\})\setminus K(t_+,t_-)$.
By the comparison principle (\cite{Ahl}), the extremal length of the curves in $D_{J'}$ that separate $J$ from $J'$ is bounded above by $2\pi/\log(L/r)$. By conformal invariance, the extremal length of the curves in the subdomain of $\mathbb{H}$ bounded by the crosscut $g ( J')$, denoted by $D_{g(J')}$, that separate $g ( J)$ from $g ( J')$ is also bounded above by $2\pi/\log(L/r)$. By Proposition \ref{g-z-sup}, $g ( J')\subset \{|z|\le R+3L=5L\}$. So the Euclidean area of $D_{g(J')}$ is bounded above by $25\pi L^2/2$. By the definition of extremal length, there exists a curve in $D_{g(J')}$ with Euclidean length less than $10\pi L (\log(L/r))^{-1/2}$,
which separates $g( J)$ from $g( J')$. This implies that $\diam(g( J))$ is bounded above by $10\pi L (\log(L/r))^{-1/2}$, and so is that of $K_+'(\delta)=K_{+}^{t_-}(t_+')/K_{+}^{t_-}(t_+)$. This finishes the proof of (i){ in the case $a_\pm\in {\cal I}_\pm^*$}.
{Now we do not assume $a_\pm\in {\cal I}_\pm^*$. Let $L'>L$. By the denseness of ${\cal I}_\pm^*$ in ${\cal I}_\pm$ and the continuity of $\eta_\pm$, we can find $a_\pm'>a_\pm$ such that $a_\pm'\in{\cal I}_\pm^*$, $(a_+',a_-')\in \cal D$, and $\rad_0(K(a_+',a_-'))<L'$. Since $ t_\sigma<t_\sigma'\in [0,a_\sigma']$, $\diam(\eta_\sigma[t_\sigma,t_\sigma'])<r<L$, and $t_{-\sigma}\in[0,a_{-\sigma}']$, by the above result, we get $\diam(K_{\sigma}^{t_{-\sigma}}({t_\sigma'})/K_{\sigma}^{t_{-\sigma}}(t_\sigma))\le 10\pi L' \log(L'/r)^{-1/2}$. Since the inequality holds for any $L'>L$, it also holds with $L$ in place of $L'$.
}
(ii) This follows from (i), Proposition \ref{g-z-sup}, the continuity of $\eta_\sigma$, the {fact that} $ L\log(L/r)^{-1/2}$ {tends to $0$ as $r\downarrow 0$}, and the equality $g_{K_{{\sigma}}^{t_{{-\sigma}}}(t_{{\sigma}}')} =g_{K_{{\sigma}}^{t_{{-\sigma}}}(t_{{\sigma}}')/K_{{\sigma}}^{t_{{-\sigma}}}(t_{{\sigma}})}\circ g_{K_{{\sigma}}^{t_{{-\sigma}}}(t_{{\sigma}})}$.
(iii) This follows from (ii), (\ref{circ-g}) and the fact that $g_{K(\underline t)}$ is analytic on $\mathbb{C}\setminus K(\underline t)^{\doub}$.
\end{proof}
For a function $X$ defined on $\cal D$, $\sigma \in\{+,-\}$ and $t_{-\sigma} \in\mathbb{R}_+$, we use $X|^{-\sigma}_{t_{-\sigma}} $ to denote the single-variable function $t_{\sigma}\mapsto X(t_\sigma \underline e_\sigma +t_{-\sigma}\underline e_{-\sigma})$, $0\le t_\sigma<T^{\cal D}_\sigma(t_{-\sigma})$, and use $\partial_{\sigma} X(t_+,t_-)$ to denote the derivative of this function at $t_{\sigma}$.
\begin{Lemma}
There are two functions $W_+,W_-\in C({\cal D},\mathbb{R})$ such that for any $\sigma\in\{+,-\}$ and $t_{-\sigma}\in{\cal I}_{-\sigma}$,
$K_{\sigma}^{t_{-\sigma}}(t_\sigma)$, $0\le t_\sigma<T^{\cal D}_\sigma(t_{-\sigma})$, are chordal Loewner hulls driven by $W_\sigma|^{-\sigma}_{t_{-\sigma}}$ with speed $\mA|^{-\sigma}_{t_{-\sigma}}$, and for any $\underline t=(t_+,t_-)\in\cal D$, $\eta_\sigma(t_\sigma)=f_{K(\underline t)}( W_\sigma(\underline t))$.
\label{Lem-W}
\end{Lemma}
\begin{proof}
By symmetry, we only need to prove the case that $\sigma=+$. Since
$$\hcap_2(K_{+}^{t_-}(t_++\delta))-\hcap_2(K_{+}^{t_-}(t_+))=\mA(t_++\delta,t_-)-\mA(t_+,t_-),$$
by Lemma \ref{lem-uniform} (i), the continuity of $\eta_\sigma$ and Proposition \ref{Loewner-chain}, for every $t_-\in{\cal I}_-$, $K_{+}^{t_-}(t_+)$, $0\le t_+<T^{\cal D}_+(t_-)$, are chordal Loewner hulls with speed $\mA|^-_{t_-}$, and the driving function, denoted by $W_+(\cdot,t_-)$, satisfies {the property} that
\begin{equation} \{W_+(t_+,t_-)\}= \bigcap_{\delta>0} \overline{K_{+}^{t_-}(t_++\delta)/K_{+}^{t_-}(t_+)}=\bigcap_{\delta>0} \overline{K(t_++\delta,t_-)/K(t_+,t_-)}.\label{W-def}\end{equation}
Fix $\underline t=(t_+,t_-)\in\cal D$. We now show that $f_{K(\underline t)}( W_+(\underline t)) =\eta_+(t_+)$. By Lemma \ref{lem-strict}, there exists a sequence $t_+^n\downarrow t_+$ such that $\eta_+(t_+^n)\in K(t_+^n,t_-)\setminus K(t_+,t_-)$ for all $n$. Then $g_{K(\underline t)}(\eta_+(t_+^n))\in K(t_+^n,t_-)/K(\underline t)=K_{+}^{t_-}(t_+^n)/K_{+}^{t_-}(t_+)$. So we have $g_{K(\underline t)}( \eta_+(t_+^n))\to W_+(\underline t)$ by (\ref{W-def}). From the continuity of $f_{K(\underline t)} $ and $\eta_+$, we then get
$$\eta_+(t_+)=\lim_{n\to \infty} \eta_+(t_+^n)=\lim_{n\to \infty} f_{K(\underline t)}( g_{K(\underline t)}( \eta_+(t_+^n)))=f_{K(\underline t)}( W_+(\underline t)).$$
It remains to show that $W_+$ is continuous on $\cal D$.
Let $t_+,t_-^1,t_-^2\in\mathbb{R}_+$ be such that $t_-^1<t_-^2$ and $(t_+,t_-^2)\in \cal D$.
By Lemma \ref{lem-strict}, there is a sequence $\delta_n\downarrow 0$ such that $z_n:=\eta_+(t_++\delta_n)\in H(t_+,t_-^2)$. Then $g_{K(t_+,t_-^j)}(z_n)\in K(t_++\delta_n,t_-^j)/ K(t_+,t_-^j)=K_{+}^{t_-^j}(t_++\delta_n)/K_{+}^{t_-^j}(t_+)$, $j=1,2$. From (\ref{W-def}) we get
$$|W_+(t_+,t_-^j)-g_{K(t_+,t_-^j)}(z_n)|\le \diam(K_{+}^{t_-^j}(t_++\delta_n)/K_{+}^{t_-^j}(t_+)),\quad j=1,2.$$
Since $g_{K(t_+,t_-^2)}(z_n)=g_{K(t_+,t_-^2)/K(t_+,t_-^1)}\circ g_{K(t_+,t_-^1)}(z_n)$, by Proposition \ref{g-z-sup} we get
$$|g_{K(t_+,t_-^2)}(z_n)-g_{K(t_+,t_-^1)}(z_n)|\le 3\diam(K(t_+,t_-^2)/K(t_+,t_-^1))= 3\diam(K_{-}^{t_+}(t_-^2)/K_{-}^{t_+}(t_-^1)) .$$
Combining the above displayed formulas and letting $n\to\infty$, we get
\begin{equation} |W_+(t_+,t_-^2)-W_+(t_+,t_-^1)|\le 3\diam(K_{-}^{t_+}(t_-^2)/K_{-}^{t_+}(t_-^1)) ,\label{W+continuity}\end{equation}
which together with Lemma \ref{lem-uniform} (i) implies that, for any $(a_+,a_-)\in\cal D$, the family of functions $[0,a_-]\ni t_-\mapsto W_+(t_+,t_-)$, $0\le t_+\le a_+$, are equicontinuous. Since $W_+$ is continuous in $t_+$ as a driving function, we conclude that $W_+$ is continuous on $\cal D$.
\end{proof}
\begin{Definition}
We call $W_+$ and $W_-$ the driving functions for the commuting pair $(\eta_+,\eta_-;{\cal D})$.
It is obvious that $W_\sigma|^{-\sigma}_0=\widehat w_\sigma$, $\sigma\in\{+,-\}$.
\end{Definition}
\begin{Remark} By (\ref{W-def}) and Propositions \ref{prop-connected} and \ref{winK}, for $t_+^1<t_+^2\in {\cal I}_+$ and $t_-\in {\cal I}_-$ such that $(t_+^2,t_-)\in \cal D$,
$|W_+(t_+^2,t_-)-W_+(t_+^1,t_-)|\le 4\diam(K_{+}^{t_-}(t_+^2)/K_{+}^{t_-}(t_+^1)) $.
This combined with (\ref{W+continuity}) and Lemma \ref{lem-uniform} (i) implies that, if $\eta_\sigma$ extends continuously to $[0,T_\sigma]$ for $\sigma\in\{+,-\}$, then $W_+$ and $W_-$ are uniformly continuous on $\cal D$, and so extend continuously to $\overline{\cal D}$.
\label{Remark-continuity-W}
\end{Remark}
\begin{Lemma}
For any $\sigma\in\{+,-\}$ and $t_{-\sigma}\in{\cal I}_{-\sigma}$, the chordal Loewner hulls $K_{\sigma}^{t_{-\sigma}}(t_\sigma)=K(t_+,t_-)/K_{-\sigma}(t_{-\sigma})$, $0\le t_\sigma<T^{\cal D}_\sigma(t_{-\sigma})$, are generated by a chordal Loewner curve, denoted by $\eta_{\sigma}^{t_{-\sigma}}$, which intersects $\mathbb{R}$ at a set with Lebesgue measure zero such that $\eta_\sigma|_{[0,T^{\cal D}_\sigma(t_{-\sigma}))}=f_{K_{-\sigma}(t_{-\sigma})}\circ \eta_{\sigma}^{t_{-\sigma}}$. Moreover, for $\sigma\in\{+,-\}$, $(t_+,t_-)\mapsto \eta_{\sigma}^{t_{-\sigma}}(t_\sigma)$ is continuous on $\cal D$.
\label{Lebesgue}
\end{Lemma}
\begin{proof}
It suffices to work {in} the case that $\sigma=+$. First we show that there exists a continuous function $(t_+,t_-)\mapsto \eta_{+}^{t_-}(t_+)$ from $\cal D$ into $\overline\mathbb{H}$ such that
\begin{equation} \eta_+(t_+)=f_{K_-(t_-)}(\eta_{+}^{t_-}(t_+)),\quad \forall (t_+,t_-)\in\cal D.\label{eta-ft}\end{equation}
Let $(t_+,t_-)\in\cal D$. By Lemma \ref{lem-strict}, there is a sequence $t_+^n\downarrow t_+$ such that for all $n$, $(t_+^n,t_-)\in\cal D$ and $\eta_+(t_+^n)\in\mathbb{H}\setminus K(t_+,t_-)$. Then we get $g_{K_-(t_-)}(\eta_+(t_+^n))\in g_{K_-(t_-)}(K(t_+^n,t_-)\setminus K(t_+,t_-))=K_{+}^{t_-}(t_+^n)/ K_{+}^{t_-}(t_+)$. If $t_-\in{\cal I}_-^*$, then
by Condition (I), $\bigcap_n \overline{K_{+}^{t_-}(t_+^n)/K_{+}^{t_-}(t_+)}=\{\eta_{+}^{t_-}(t_+)\}$, which implies that $g_{K_-(t_-)}(\eta_+(t_+^n))\to \eta_{+}^{t_-}(t_+)$. From the continuity of $f_{K_-(t_-)}$ and $\eta_+$, we find that (\ref{eta-ft}) holds if $t_-\in{\cal I}_-^*$. Thus,
\begin{equation} \eta_{+}^{t_-}(t_+)=g_{K_-(t_-)}(\eta_+(t_+)),\quad \mbox{if } (t_+,t_-)\in{\cal D}, \,t_-\in{\cal I}_-^*\mbox{ and }\eta_+(t_+)\in\mathbb{H}\setminus K_-(t_-).\label{eta-gt}\end{equation}
Fix $a_-\in{\cal I}_-^*$. Let ${\cal R}=\{t_+\in{\cal I}_+:(t_+,a_-)\in{\cal D},\eta_+(t_+)\in \mathbb{H}\setminus K_-(a_-)\}$, which by Lemma \ref{lem-strict} is dense in $[0,T^{\cal D}_+(a_-))$.
By Propositions \ref{g-z-sup} and \ref{Loewner-chain},
\begin{equation} \lim_{\delta\downarrow 0}\,\sup_{ t_-\in [0,a_-]}\,\,\sup_{t_-'\in[0,a_-]\cap (t_--\delta,t_-+\delta)} \,\, \sup_{t_+\in{\cal R}}\, |g_{K_-(t_-)}( \eta_+(t_+))-g_{K_-(t_-')}(\eta_+(t_+))|=0.\label{unif-g-g}\end{equation}
This combined with (\ref{eta-gt}) implies that
\begin{equation} \lim_{\delta\downarrow 0}\,\sup_{ {t_-\in [0,a_-]\cap {\cal I}_-^*}}\,\,\sup_{t_-'\in[0,a_-]\cap{\cal I}_-^*\cap (t_--\delta,t_-+\delta)} \,\, \sup_{t_+\in{\cal R}} \, |\eta_{+}^{t_-}(t_+)-\eta_{+}^{t_-'}(t_+)|=0.\label{unif-g-g'}\end{equation}
By the denseness of $\cal R$ in $[0,T^{\cal D}_+(a_-))$ and the continuity of each $\eta_{+}^{t_-}$, $t_-\in{\cal I}_-^*$, we know that (\ref{unif-g-g'}) still holds if $\sup_{t_+\in{\cal R}}$ is replaced by $\sup_{t_+\in[0,T^{\cal D}_+(a_-))}$. Since ${\cal I}_-^*$ is dense in ${\cal I}_-$, the continuity of each $\eta_{+}^{t_-}$, $t_-\in{\cal I}_-^*$, together with (\ref{unif-g-g'}) implies that there exists a continuous function $[0,T^{\cal D}_+(a_-))\times [0,a_-]\ni (t_+,t_-)\mapsto \eta_{+}^{t_-}(t_+)\in\overline\mathbb{H}$, which extends those $\eta_{+}^{t_-}(t_+)$ for $t_-\in{\cal I}_-^*\cap [0,a_-]$ and $t_+\in [0,T^{\cal D}_+(a_-))$. Running $a_-$ from $0$ to $T_-$, we get a continuous function ${\cal D}\ni (t_+,t_-)\mapsto \eta_{+}^{t_-}(t_+)\in\overline\mathbb{H}$, which extends those $\eta_{+}^{t_-}(t_+)$ for $(t_+,t_-)\in\cal D$ and $t_-\in{\cal I}_-^*$.
Since $\eta_{+}^{t_-}(t_+)=g_{K_-(t_-)}(\eta_+(t_+))$ for all $t_+\in{\cal R}$ and $t_-\in[0,a_-]\cap {\cal I}_-^*$, from (\ref{eta-gt},\ref{unif-g-g}) we know that it is also true for any $t_-\in[0,a_-]$. Thus, $\eta_+(t_+)=f_{K_-(t_-)}(\eta_{+}^{t_-}(t_+))$ for all $t_+\in{\cal R}$ and $t_-\in[0,a_-]$. By the denseness of $\cal R$ in $[0,T^{\cal D}_+(a_-))$ and the continuity of $\eta_+$, $f_{K_-(t_-)} $ and $\eta_{+}^{t_-}$, we get (\ref{eta-ft}) for all $t_-\in[0,a_-]$ and $t_+\in [0,T^{\cal D}_+(a_-))$. So (\ref{eta-ft}) holds for all $(t_+,t_-)\in\cal D$.
For $(t_+,t_-)\in\cal D$, since $K(t_+,t_-)=\Hull(K_-(t_-)\cup(\eta_+[0,t_+]\cap (\mathbb{H}\setminus K_-(t_-))))$, we see that $K_{+}^{t_-}(t_+)=g_{K_-(t_-)}(K(t_+,t_-)\setminus K_-(t_-))$ is the $\mathbb{H}$-hull generated by $g_{K_-(t_-)}(\eta_+[0,t_+]\cap (\mathbb{H}\setminus K_-(t_-)))=\eta_{+}^{t_-}[0,t_+]\cap \mathbb{H}$. So $K_{+}^{t_-}(t_+)=\Hull(\eta_{+}^{t_-}[0,t_+])$. By Lemma \ref{Lem-W}, for any $t_-\in [0,T_-)$, $\eta_{+}^{t_-}(t_+)$, $0\le t_+<T^{\cal D}_+(t_-)$, is the chordal Loewner curve driven by $W_+(\cdot,t_-)$ with speed $\mA(\cdot,t_-)$. So we have $\eta_{+}^{t_-}(t_+)=f_{K_{+}^{t_-}(t_+)}(W_+(t_+,t_-))$, which together with $\eta_+(t_+)=f_{K(t_+,t_-)}( W_+(t_+,t_-))$ implies that $\eta_+(t_+)=f_{K_-(t_-)}(\eta_{+}^{t_-}(t_+))$.
Finally, we show that $\eta_{+}^{t_-}\cap\mathbb{R}$ has Lebesgue measure zero for all $t_-\in{\cal I}_-$. Fix $t_-\in {\cal I}_-$ and $\widehat t_+\in{\cal I}_+$ such that $(\widehat t_+,t_-)\in\cal D$. It suffices to show that $\eta_{+}^{t_-}[0,\widehat t_+]\cap \mathbb{R}$ has Lebesgue measure zero. There exists a sequence ${\cal I}_-^*\ni t_-^n\downarrow t_-$ such that $(\widehat t_+,t_-^n)\in\cal D$ for all $n$. Let $K_n=K_-(t^n_-)/K_-(t_-)$, $g_n=g_{K_n}$, and $f_n=g_n^{-1}$.
Then $f_{K_-(t_-)}=f_{K_-(t_-^n )}\circ g_n$ on $\mathbb{H}\setminus K_n$. Let $t_+\in[0,\widehat t_+]$. From $f_{K_-(t_-)}( \eta_{+}^{t_-}(t_+))=\eta_+(t_+)=f_{K_-(t_-^n )}( \eta_{+}^{t_-^n}(t_+))$ we get
$ \eta_{+}^{t_-^n}(t_+)=g_n(\eta_{+}^{t_-}(t_+))$ if $\eta_{+}^{t_-}(t_+)\in \mathbb{H}\setminus K_n$. By continuity we get $ \eta_{+}^{t_-}(t_+)=f_n(\eta_{+}^{t_-^n}(t_+))$ if $\eta_{+}^{t_-^n}(t_+)\in \mathbb{R} \setminus [c_{K_n},d_{K_n}]$, $0\le t_+\le \widehat t_+$. Thus, $\eta_{+}^{t_-}[0,\widehat t_+]\cap (\mathbb{R}\setminus [a_{K_n},b_{K_n}])\subset f_n(\eta_{+}^{t_-^n}[0,\widehat t_+]\cap (\mathbb{R}\setminus [c_{K_n},d_{K_n}]))$. Since $t_-^n\in{\cal I}_-^*$, by Condition (II) and the analyticity of $f_n$ on $\mathbb{R}\setminus [c_{K_n},d_{K_n}]$ we know that $\eta_{+}^{t_-}[0,\widehat t_+]\cap (\mathbb{R}\setminus [a_{K_n},b_{K_n}])$ has Lebesgue measure zero for each $n$. Sending $n\to \infty$ and using the fact that $[a_{K_n},b_{K_n}]\downarrow \{\widehat w_-(t_-)\}$, we see that $\eta_{+}^{t_-}[0,\widehat t_+]\cap\mathbb{R}$ also has Lebesgue measure zero.
\end{proof}
\begin{Lemma}
For any $\sigma\in\{+,-\}$ and $(t_+,t_-)\in\cal D$, $\widehat w_\sigma(t_\sigma)= f_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}(W_\sigma(t_+,t_-))\in \partial (\mathbb{H}\setminus K_{-\sigma}^{t_\sigma}(t_{-\sigma}))$. \label{lem-w-W}
\end{Lemma}
\begin{proof}
By symmetry, it suffices to work on the case $\sigma=+$. For any $(t_+,t_-)\in\cal D$, by Lemma \ref{lem-strict} there is a sequence $t^n_+\downarrow t_+$ such that $\eta_+(t^n_+)\in K(t^n_+,t_-)\setminus K(t_+,t_-)$ for all $n$.
From (\ref{haw=}) and Lemma \ref{Lem-W} we get $g_{K_+(t_+)}(\eta_+(t^n_+))\to \widehat w_+(t_+)$ and $g_{K(t_+,t_-)}(\eta_+(t^n_+))\to W_+(t_+,t_-)$. From (\ref{circ-g}) we get $g_{K_+(t_+)} =f_{K_{-}^{t_+}(t_-)}\circ g_{K(t_+,t_-)}$. From the continuity of $f_{K_{-}^{t_+}(t_-)}$ on $\overline\mathbb{H}$, we then get $\widehat w_+(t_+)= f_{K_{-}^{t_+}(t_-)}( W_+(t_+,t_-))$. Finally, $\widehat w_+(t_+)\in \partial(\mathbb{H}\setminus K_{-}^{t_+}(t_-))$ because $W_+(t_+,t_-)\in\partial\mathbb{H}$ and $f_{K_{-}^{t_+}(t_-)}$ maps $\mathbb{H}$ conformally onto $\mathbb{H}\setminus K_-^{t_+}(t_-)$.
\end{proof}
\subsection{Force point functions}\label{section-V}
For $\sigma\in\{+,-\}$, define $C_\sigma$ and $D_\sigma$ on $\cal D$ such that if $t_\sigma>0$, $C_\sigma(t_+,t_-)=c_{K_{\sigma}^{t_{-\sigma}}(t_\sigma)}$ and $D_\sigma(t_+,t_-)=d_{K_{\sigma}^{t_{-\sigma}}(t_\sigma)}$; and if $t_\sigma=0$, then $C_\sigma =D_\sigma =W_\sigma$ at $t_{-\sigma}\underline e_{-\sigma}$. Since $K_{\sigma}^{t_{-\sigma}}(\cdot)$ are chordal Loewner hulls driven by $W_\sigma|^{-\sigma}_{t_{-\sigma}}$ with some speed, by Proposition \ref{winK} we get
\begin{equation} C_\sigma\le W_\sigma\le D_\sigma\quad \mbox{on }\cal D,\quad \sigma\in\{+,-\}.\label{CWD}\end{equation}
Since $K_{\sigma}^{t_{-\sigma}}(t_\sigma)$ is the $\mathbb{H}$-hull generated by $\eta_{\sigma}^{t_{-\sigma}}[0,t_\sigma]$, we get
\begin{equation} f_{K_{\sigma}^{t_{-\sigma}}(t_\sigma)}[C_\sigma(t_+,t_-),D_\sigma(t_+,t_-)]\subset \eta_{\sigma}^{t_{-\sigma}}[0,t_\sigma].\label{f[C,D]}\end{equation}
Recall that $w_-<w_+\in\mathbb{R}$. We write $\underline w$ for $(w_+,w_-)$. Define $\mathbb{R}_{\underline w}=(\mathbb{R}\setminus \{w_+,w_-\})\cup\{w_+^+,w_+^-,w_-^+,w_-^-\}$ with the obvious order endowed from $\mathbb{R}$. Assign the topology to $\mathbb{R}_{\underline w}$ such that $I_-:=(-\infty,w_-^-],I_0:=[w_-^+,w_+^-],I_+:=[w_+^+,\infty)$ are three connected components of $\mathbb{R}_{\underline w}$, which are respectively homeomorphic to $(-\infty,w_-],[w_-,w_+],[w_+,\infty)$. Recall that for $\sigma\in\{+,-\}$ and $t\in{\cal I}_\sigma$, $g_{K_\sigma(t)}^{w_\sigma}$ (Definition \ref{Def-Rw}) is defined on $\mathbb{R}_{w_\sigma}$, and agrees with $g_{K_\sigma(t)}$ on $\mathbb{R}\setminus ([a_{K_\sigma(t)},b_{K_\sigma(t)}]\cup\{w_\sigma\})$. By Lemma \ref{lem-w-W} and the fact that $w_{-\sigma}\not\in [a_{K_\sigma(t)},b_{K_\sigma(t)}]\cup\{w_\sigma\}$, we then know that $g_{K_\sigma(t)}^{w_\sigma}(w_{-\sigma})=W_{-\sigma}(t\underline e_\sigma)$. So we define $g_{K_\sigma(t)}^{w_\sigma}(w_{-\sigma}^\pm)=W_{-\sigma}(t\underline e_\sigma)^\pm$, and understand $g_{K_\sigma(t)}^{w_\sigma}$ as a continuous function from $\mathbb{R}_{\underline w}$ to $\mathbb{R}_{W_{-\sigma}(t\underline e_\sigma)}$.
\begin{Lemma}
For any $\underline t=(t_+,t_-)\in\cal D$, $g_{K_{+}^{t_-}(t_+)}^{W_+(0,t_-)}\circ g_{K_-(t_-)}^{w_-}$ and $g_{K_{-}^{t_+}(t_-)}^{W_-(t_+,0)}\circ g_{K_+(t_+)}^{w_+} $ agree on $\mathbb{R}_{\underline w}$, and the common function in the equality, denoted by $g_{K(\underline t)}^{\underline w}$, satisfies the following properties.
\begin{enumerate}
\item [(i)] $g_{K(\underline t)}^{\underline w}$ is increasing and continuous on $\mathbb{R}_{\underline w}$, and agrees with $g_{K(\underline t)}$ on $\mathbb{R}\setminus \overline{K(\underline t)}$.
\item [(ii)] $g_{K(\underline t)}^{\underline w}$ maps $I_+\cap (\overline{K(\underline t)}\cup\{w_+^+\})$ and $I_-\cap (\overline{K(\underline t)}\cup \{w_-^-\})$ to $ D_+(\underline t) $ and $ C_-(\underline t) $, respectively.
\item [(iii)] If $\overline{K_+(t_+)}\cap \overline{K_-(t_-)}=\emptyset$, $g_{K(\underline t)}^{\underline w}$ maps $I_0\cap (\overline{K_+(t_+)}\cup\{w_+^-\})$ and $I_0\cap (\overline{K_-(t_-)}\cup \{w_-^+\})$ to $ C_+(\underline t) $ and $ D_-(\underline t) $, respectively.
\item [(iv)] If $\overline{K_+(t_+)}\cap \overline{K_-(t_-)}\ne\emptyset$, $g_{K(\underline t)}^{\underline w}$ maps $I_0$ to $ C_+(\underline t) =D_-(\underline t) $.
\item [(v)] The map $(\underline t,v)\mapsto g_{K(\underline t)}^{\underline w}(v)$ from ${\cal D}\times \mathbb{R}_{\underline w}$ to $\mathbb{R}$ is jointly continuous.
\end{enumerate}
\label{common-function}
\end{Lemma}
\begin{proof}
Fix $\underline t=(t_+,t_-)\in\cal D$. For $\sigma\in\{+,-\}$, we write $K$ for $K(\underline t)$, $K_\sigma$ for $K_\sigma(t_\sigma)$, $\widetilde K_\sigma$ for $K_{\sigma}^{t_{-\sigma}}(t_\sigma)$, $\widetilde w_\sigma$ for $W_\sigma(t_{-\sigma}\underline e_{-\sigma})$,
$C_\sigma$ for $C_\sigma(\underline t)$, and $D_\sigma$ for $D_\sigma(\underline t)$.
The equality now reads $g_{\widetilde K_+}^{\widetilde w_+}\circ g_{K_-}^{w_-}=g_{\widetilde K_-}^{\widetilde w_-}\circ g_{K_+}^{w_+}$. Before proving the equality, we first show that both sides are well defined and satisfy (i-iii) and a weaker version of (iv) (see below). First consider $g_{\widetilde K_-}^{\widetilde w_-}\circ g_{K_+}^{w_+}$. Since $g_{K_+}^{w_+}:\mathbb{R}_{\underline w}\to \mathbb{R}_{\widetilde w_-}$, the composition is well defined on $\mathbb{R}_{\underline w}$. We denote it by $g_{K(\underline t)}^{\underline w,+}$.
(i)
The continuity and monotonicity of the composition follows from the continuity and monotonicity of both $g_{\widetilde K_-}^{\widetilde w_-}$ and $g_{K_+}^{w_+}$.
Let $v \in \mathbb{R} \setminus \overline{K}$. Then $v\not\in \overline{K_+}$, and $g_{K_+}^{w_+}(v)=g_{K_+}(v)$.
Since $\widetilde K_-=K/K_+$, $K\setminus K_+=f_{K_+}(\widetilde K_-)$. From $v=f_{K_+}(g_{K_+}(v))\not\in \overline{K\setminus K_+}$ and the continuity of $f_{K_+}$ on $\overline\mathbb{H}$, we know that $g_{K_+}(v)\not\in \overline{\widetilde K_-}$, which implies that $g_{K(\underline t)}^{\underline w,+}(v)=g_{\widetilde K_-} \circ g_{K_+} (v)=g_K(v)$.
In the proof of (ii,iii) below, we write $\eta_\sigma$ for $\eta_\sigma[0,t_\sigma]$ and $\widetilde\eta_\sigma$ for $\eta_{\sigma}^{t_{-\sigma}}[0,t_\sigma]$;
when $t_\sigma=0$, i.e., $K_\sigma=\widetilde K_\sigma=\emptyset$, we understand $a_{K_\sigma}=b_{K_\sigma}=c_{K_\sigma}=d_{K_\sigma}=w_\sigma$, and $a_{\widetilde K_\sigma}=b_{\widetilde K_\sigma}=c_{\widetilde K_\sigma}=d_{\widetilde K_\sigma}=\widetilde w_\sigma$. Then it is always true that $a_{K_\sigma}=\min\{\eta_\sigma \cap \mathbb{R}\}$, $b_{K_\sigma}=\max\{\eta_\sigma \cap \mathbb{R}\}$,
$a_{\widetilde K_\sigma}=\min\{\widetilde \eta_\sigma \cap \mathbb{R}\}$, $b_{\widetilde K_\sigma}=\max\{\widetilde \eta_\sigma \cap \mathbb{R}\}$, $c_{\widetilde K_\sigma}=C_\sigma$, and $d_{\widetilde K_\sigma}=D_\sigma$. Since $\eta_\pm=f_{K_{\mp}}(\widetilde \eta_\pm)$, we get $ b_{\widetilde K_+}=g_{K_-}(b_{K_+})$, $a_{\widetilde K_-}=g_{K_+}(a_{K_-})$. If $\overline{K_+}\cap \overline{K_-}=\emptyset$, then $a_{\widetilde K_+}=g_{K_-}(a_{K_+})$, $b_{\widetilde K_-}=g_{K_+}(b_{K_-})$.
(ii) Since $I_+\cap (\overline K\cup\{w_+^+\})=\{w_+^+\}\cup (w_+,b_K]=\{w_+^+\}\cup (w_+^+,b_{K_+}]$ is mapped by $g_{K_+}^{w_+}$ to a single point, it is also mapped by $g_{K(\underline t)}^{\underline w,+} $ to a single point, which by (i) is equal to
$$\lim_{x\downarrow b_{K}} g_K(x)=\lim_{x\downarrow b_{K_+}} g_{\widetilde K_+}\circ g_{K_-}(x)=\lim_{y\downarrow b_{\widetilde K_+}} g_{\widetilde K_+}(y)=d_{\widetilde K_+}=D_+.$$
To show that $I_-\cap( \overline K\cup\{w_-^-\})=[a_K,w_-^-)\cup\{w_-^-\}$ is mapped by $g_{K(\underline t)}^{\underline w,+} $ to $C_-$, by (i) it suffices to show that $\lim_{x\uparrow a_{K}}g_K(x)=g_{\widetilde K_-}^{\widetilde w_-}\circ g_{K_+}^{w_+}(w_-^-)=c_{\widetilde K_-}$. This holds because $$g_{\widetilde K_-}^{\widetilde w_-}\circ g_{K_+}^{w_+}(w_-^-)= g_{\widetilde K_-}^{\widetilde w_-}(\widetilde w_-^-)=c_{\widetilde K_-}=\lim_{x\uparrow a_{\widetilde K_-}} g_{\widetilde K_-}(x)=\lim_{x\uparrow a_{ K_-}} g_{\widetilde K_-}\circ g_{K_+}(x)=\lim_{x\uparrow a_{K}}g_K(x).$$
(iii) Suppose $\overline{K_+}\cap \overline{K_-}=\emptyset$. Then $I_0\cap (\overline{K_+}\cup\{w_+^-\})=[a_{K_+},w_+^-)\cup\{w_+^-\}$ is mapped by $g_{K_+}^{w_+}$ to a single point, so is also mapped by $g_{K(\underline t)}^{\underline w,+}$ to a single point. By (i) the latter point is
$$\lim_{x\uparrow a_{K_+}}g_K(x)=\lim_{x\uparrow a_{K_+}} g_{\widetilde K_+}\circ g_{K_-}(x)=\lim_{y\uparrow a_{\widetilde K_+}} g_{\widetilde K_+}(y)=c_{\widetilde K_+}=C_+.$$
Since $I_0\cap (\overline{K_-}\cup\{w_-^+\})=\{w_-^+\}\cup (w_-^+,b_{K_-}]$ is mapped by $g_{K_+}^{w_+}$ to $\{\widetilde w_-^+\}\cup (\widetilde w_-^+,b_{\widetilde K_-}]$, which is further mapped by $g_{\widetilde K_-}^{\widetilde w_-}$ to $d_{\widetilde K_-}= D_-$, we see that
$g_{K(\underline t)}^{\underline w,+}$ maps $I_0\cap (\overline{K_-}\cup\{w_-^+\})$ to $ D_-$.
(iv) Suppose $\overline{K_+}\cap \overline{K_-}\ne\emptyset$. For now, we only show that $I_0$ is mapped by $g_{K(\underline t)}^{\underline w,+}$ to $D_-$. By the assumption we have $t_+,t_->0$ and $[c_{K_+},d_{K_+}]\cap \overline{\widetilde K_-}\ne \emptyset$, which implies that $c_{K_+}\le b_{\widetilde K_-}$. Thus, $g_{K_+}^{w_+}(I_0)= [\widetilde w_-^+, c_{K_+}]\subset [\widetilde w_-^+, b_{\widetilde K_-}]$, from which it follows that $g_{K(\underline t)}^{\underline w,+}(I_0)=\{d_{\widetilde K_-}\}=\{D_-\}$.
Now $g_{\widetilde K_-}^{\widetilde w_-}\circ g_{K_+}^{w_+}$ satisfies (i-iii) and a weaker version of (iv). By symmetry, this is also true for $g_{\widetilde K_+}^{\widetilde w_+}\circ g_{K_-}^{w_-}$, where for (iv), $I_0$ is mapped to $\{C_+\}$. We now show that the two functions agree on $\mathbb{R}_{\underline w}$.
By (i), $g_{\widetilde K_+}^{\widetilde w_+}\circ g_{K_-}^{w_-}$ and $g_{\widetilde K_-}^{\widetilde w_-}\circ g_{K_+}^{w_+}$ agree on $\mathbb{R} \setminus \overline{K}$. By (ii), the two functions also agree on $I_+\cap (\overline{K(\underline t)}\cup\{w_+^+\})$ and $I_-\cap (\overline{K(\underline t)}\cup \{w_-^-\})$. Thus they agree on both $I_+$ and $I_-$. By (i,iii) they agree on $I_0$ when $\overline{K_+}\cap \overline{K_-}=\emptyset$. To prove that they agree on $I_0$ when $\overline{K_+}\cap \overline{K_-}\ne\emptyset$, by the weaker versions of (iv) we only need to show that $c_{\widetilde K_+}=d_{\widetilde K_-}$ in that case.
First, we show that $d_{\widetilde K_-}\le c_{\widetilde K_+} $. Suppose $d_{\widetilde K_-}> c_{\widetilde K_+}$. Then $J:=(c_{\widetilde K_+},d_{\widetilde K_-})\subset [c_{\widetilde K_-},d_{\widetilde K_-}]\cap [c_{\widetilde K_+},d_{\widetilde K_+}]$. So $ f_{\widetilde K_+}(J)\subset \partial (\mathbb{H}\setminus \widetilde K_+)$. If $f_{\widetilde K_+}(J)\subset \mathbb{R}$, then it is disjoint from $\overline{\widetilde K_+}$, and so is disjoint from $[a_{\widetilde K_+},b_{\widetilde K_+}]$ since $\widetilde K_+$ is generated by $\widetilde\eta_+ $, which does not spend any nonempty interval of time on $\mathbb{R}$. That $f_{\widetilde K_+}(J)\cap [a_{\widetilde K_+},b_{\widetilde K_+}]=\emptyset$ then implies that $J\cap [c_{\widetilde K_+},d_{\widetilde K_+}]=\emptyset$, a contradiction. So there is $x_0\in J$ such that $f_{\widetilde K_+}(x_0)\subset \mathbb{H}$, which implies that $f_K(x_0)=f_{K_-}\circ f_{\widetilde K_+}(x_0)\in \mathbb{H}\setminus K_-$. On the other hand, since $x_0\in [c_{\widetilde K_-},d_{\widetilde K_-}]$,
$f_K(x_0)=f_{K_+}\circ f_{\widetilde K_-}(x_0) \subset f_{K_+}(\widetilde\eta_- )=\eta_- $, which contradicts that $f_K(x_0)\in \mathbb{H}\setminus K_-$. So $d_{\widetilde K_-}\le c_{\widetilde K_+} $.
Second, we show that $d_{\widetilde K_-}\ge c_{\widetilde K_+} $. Suppose $d_{\widetilde K_-}< c_{\widetilde K_+}$. Let $J=(d_{\widetilde K_-}, c_{\widetilde K_+})$. Then $f_{\widetilde K_+}(J)=(f_{\widetilde K_+}(d_{\widetilde K_-}),a_{\widetilde K_+})$. From $\overline{K_+}\cap \overline{K_-}\ne\emptyset$ we know $a_{\widetilde K_+}\le d_{K_-}$. From $a_{\widetilde K_-}=g_{K_+}(a_{K_-})$
we get $d_{\widetilde K_-}\ge c_{\widetilde K_-}=\lim_{x\uparrow a_{\widetilde K_-}}g_{\widetilde K_-}(x)
=\lim_{y\uparrow a_{ K_-}}g_{\widetilde K_-}\circ g_{K_+}(y)=\lim_{y\uparrow a_{ K_-}} g_{\widetilde K_+}\circ g_{K_-}(y)$.
Thus, $f_{\widetilde K_+}(d_{\widetilde K_-})\ge \lim_{y\uparrow a_{ K_-}} g_{K_-}(y)=c_{K_-}$. So we get $f_{\widetilde K_+}(J)\subset [c_{K_-},d_{K_-}]$, which is mapped into $\eta_-$ by $f_{K_-}$. Thus, $f_K(J)\subset \eta_-$. Symmetrically, $f_K(J)\subset \eta_+$. Since $\eta_-=f_{K_+}(\widetilde \eta_-)$ and $f_K(J)\subset \partial(\mathbb{H}\setminus K)$, for every $x \in J$, there is $z_-\in\widetilde\eta_-\cap \partial(\mathbb{H}\setminus \widetilde K_-)$ such that $f_K(x )=f_{K_+}(z_-)$.
Then there is $y_-\in [c_{\widetilde K_-},d_{\widetilde K_-}]$ such that $z_-=f_{\widetilde K_-}(y_-)$. So $f_K(x )=f_K(y_-)$. Similarly, for every $x \in J$, there is $y_+\in [c_{\widetilde K_+},d_{\widetilde K_+}]$ such that $f_K(x )=f_K(y_+)$. Pick $x^1<x^2\in J$ such that $f_K(x^1)\ne f_K(x^2)$. This is possible because $f_K(J)$ has positive harmonic measure in $\mathbb{H}\setminus K$. Then there exist $y^1_+\in [c_{\widetilde K_+},d_{\widetilde K_+}]$ and $y^2_-\in [c_{\widetilde K_-},d_{\widetilde K_-}]$ such that $f_K(x^1)=f_K(y^1_+)$ and $f_K(x^2)=f_K(y^2_-)$. This contradicts that $y^1_+>x^2>x^1>y^2_-$.
So $d_{\widetilde K_-}\ge c_{\widetilde K_+} $.
Combining the last two paragraphs, we get $c_{\widetilde K_+}=d_{\widetilde K_-}$. So $g_{K_{+}^{t_-}(t_+)}^{W_+(0,t_-)}\circ g_{K_-(t_-)}^{w_-}$ and $g_{K_{-}^{t_+}(t_-)}^{W_-(t_+,0)}\circ g_{K_+(t_+)}^{w_+} $ agree on $I_+\cup I_-\cup I_0=\mathbb{R}_{\underline w}$, and the original (iv) holds for both functions.
(v) By (i), the composition $g_{K(\underline t)}^{\underline w}$ is continuous on $\mathbb{R}_{\underline w}$ for any $\underline t\in\cal D$. It suffices to show that, for any $(a_+,a_-)\in\cal D$ and $\sigma\in\{+,-\}$, the family of maps $[0,a_\sigma]\ni t_\sigma\mapsto g_{K(\underline t)}^{\underline w}(v)$, $(t_{-\sigma},v)\in [0,a_{-\sigma}]\times \mathbb{R}_{\underline w}$, is equicontinuous. This statement follows from the expression $g_{K(\underline t)}^{\underline w}=g_{K_{\sigma}^{t_{-\sigma}}(t_\sigma)}^{W_\sigma(t_{-\sigma}\underline e_{-\sigma})}\circ g_{K_{-\sigma}(t_{-\sigma})}^{w_{-\sigma}}$, Proposition \ref{Prop-cd-continuity'} and Lemma \ref{lem-uniform} (i).
\end{proof}
\begin{Lemma}
For any $(t_+,t_-)\in\cal D$ and $\sigma\in\{+,-\}$, $W_\sigma(t_+,t_-)=g_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}^{W_{-\sigma}(t_\sigma \underline e_\sigma)}(\widehat w_\sigma(t_\sigma))$. \label{W=gw}
\end{Lemma}
\begin{proof}
Fix $\underline t=(t_+,t_-)\in\cal D$. By symmetry, we may assume that $\sigma=+$.
If $t_-=0$, it is obvious since $W_+(\cdot,0)=\widehat w_+$ and $K_{-}^{t_+}(0)=\emptyset$. Suppose $t_->0$. From (\ref{CWD}) and Lemma \ref{common-function} (i,iii,iv) we know that $W_+(\underline t)\ge C_+(\underline t)\ge D_-(\underline t)=d_{K_{-}^{t_+}(t_-)}$. Since $\widehat w_+(t_+)= f_{K_{-}^{t_+}(t_-)}(W_+(\underline t))$ by Lemma \ref{lem-w-W}, we find that either $W_+(\underline t) =d_{K_{-}^{t_+}(t_-)}$ and $\widehat w_+(t_+)=b_{K_{-}^{t_+}(t_-)}$, or $W_+(\underline t)>d_{K_{-}^{t_+}(t_-)}$ and $W_+(\underline t)=g_{K_{-}^{t_+}(t_-)}(\widehat w_+(t_+))$. In either case, we get $W_+(\underline t)=g_{K_{-}^{t_+}(t_-)}^{W_-(t_+,0)}(\widehat w_+(t_+))$.
\end{proof}
\begin{Definition}
For $v\in\mathbb{R}_{\underline w}$, we call $V(\underline t):=g_{K(\underline t)}^{\underline w}(v)$, $\underline t\in \cal D$, the force point function {started from $v$} (for the commuting pair $(\eta_+,\eta_-;\cal D)$), which is continuous by Lemma \ref{common-function} (v). {The point $v$ is called the force point for this function $V(\underline t)$. } \label{def-force-function}
\end{Definition}
{
\begin{Remark}
The name in Definition \ref{def-force-function} comes from the following fact.
In Section \ref{section-commuting-SLE-kappa-rho}, we will study a commuting pair of SLE$_\kappa(2, \rho_0,\rho_+,\rho_-)$ curves $(\eta_+,\eta_-)$, which is a.s.\ a commuting pair of chordal Loewner curves. For $\sigma\in\{+,-\}$, $\eta_\sigma$ starts from $w_\sigma$ with force points $w_{-\sigma},v_0,v_+,v_-$. Let $V_\nu$ be the force point function started from $v_\nu$, $\nu\in\{0,+,-\}$, for the commuting pair $(\eta_+,\eta_-)$. The two curves commute in the sense that, for $\sigma\in\{+,-\}$, if $\tau$ is an ${\cal F}^{-\sigma}$-stopping time, then conditionally on ${\cal F}^{-\sigma}_\tau$ and the event that $\eta_{-\sigma}$ is not complete by time $\tau$, $\eta_\sigma^\tau$ is an SLE$_\kappa(2, \rho_0,\rho_+,\rho_-)$ curve with some speed, whose driving function is $W_\sigma|^{-\sigma}_\tau$, and whose force point functions for $2,\rho_0,\rho_+,\rho_-$ are respectively $W_{-\sigma}|^{-\sigma}_\tau$, $V_0|^{-\sigma}_\tau$, $V_+|^{-\sigma}_\tau$, and $V_-|^{-\sigma}_\tau$.
\end{Remark}
}
\begin{Definition} Let $(\eta_+,\eta_-;{\cal D})$ be a commuting pair of chordal Loewner curves started from $(w_+,w_-)$ with hull function $K(\cdot,\cdot)$.
\begin{enumerate}
\item [(i)] For $\sigma\in\{+,-\}$, let $\phi_\sigma$ be a continuous and strictly increasing function defined on the lifespan of $\eta_\sigma$ with $\phi_\sigma(0)=0$, and let $\phi_\oplus (t_+,t_-)=(\phi_+(t_+),\phi_-(t_-))$. Let $\widetilde\eta_\sigma=\eta_\sigma\circ \phi_\sigma^{-1}$, $\sigma\in\{+,-\}$, and $\widetilde{\cal D}=\phi_\oplus({\cal D})$.
Then we call $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ a commuting pair of chordal Loewner curves with speeds $(\phi_+,\phi_-)$, and call $(\eta_+,\eta_-;{\cal D})$ its normalization.
\item [(ii)] Let $\underline\tau\in\cal D$. Suppose there is a commuting pair of chordal Loewner curves $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ with some speeds such that $\widetilde{\cal D}=\{\underline t\in\mathbb{R}_+^2: \underline\tau+\underline t\in{\cal D}\}$, and $\eta_{\sigma}(\tau_\sigma+\cdot)=f_{K(\underline\tau)}\circ \eta_\sigma$, $\sigma\in\{+,-\}$. Then we call $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ the part of $(\eta_+,\eta_-;{\cal D})$ after $\underline\tau$ up to a conformal map.
\end{enumerate}
\label{Def-speeds}
\end{Definition}
For a commuting pair $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ with some speeds, we still define the hull function $\widetilde K(\cdot,\cdot)$ and the capacity function $\widetilde\mA(\cdot,\cdot)$ using (\ref{KmA}), define the driving functions $\widetilde W_+$ and $\widetilde W_-$ using Lemma \ref{Lem-W}, and define the force point functions by $\widetilde V (\underline t)=g_{\widetilde K(\underline t)}^{\underline w}(v )$ started from $v$ for any $v\in\mathbb{R}_{\underline w}$. {All} lemmas in this section still hold except that {in Lemma \ref{lem-lips},} $\mA$ may not be Lipschitz continuous.
\begin{Lemma}
\begin{enumerate}
\item [(i)] For the $(\eta_+,\eta_-;{\cal D})$, $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ and $\phi_\oplus$ in Definition \ref{Def-speeds} (i), we have $\widetilde X=X\circ \phi_\oplus^{-1}$ for $X\in\{K,\mA,W_\pm,V\}$, where $V$ and $\widetilde V$ are force point functions respectively for $(\eta_+,\eta_-;{\cal D})$ and $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ started from the same $v\in\mathbb{R}_{\underline w}$.
\item [(ii)] For the $(\eta_+,\eta_-;{\cal D})$, $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ and $\underline\tau$ in Definition \ref{Def-speeds} (ii), we have $\widetilde K =K(\underline\tau+\cdot)/K(\underline\tau)$, $\widetilde \mA =\mA(\underline\tau+\cdot)-\mA(\underline\tau)$, and $\widetilde W_\sigma =W_\sigma(\underline\tau+\cdot)$, $\sigma\in\{+,-\}$. Let $v\in\mathbb{R}_{(w_+,w_-)}$, and let $V$ be the force point function for $(\eta_+,\eta_-;{\cal D})$ started from $v$. Let $\widetilde w_\pm=\widetilde W_\pm(\underline 0)=W_\pm(\underline\tau)$. Define $\widetilde v\in\mathbb{R}_{(\widetilde w_+,\widetilde w_-)}$ such that if $V(\underline\tau)\not\in \{\widetilde w_+,\widetilde w_-\}$, then $\widetilde v=V(\underline\tau)$; and if $V(\underline\tau)=\widetilde w_\sigma$, then $\widetilde v=\widetilde w_\sigma^{\sign(v-w_\sigma)}$, $\sigma\in\{+,-\}$. Let $\widetilde V$ be the force point function for $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D})$ started from $\widetilde v$. Then $\widetilde V =V (\underline\tau+\cdot)$ on $\widetilde {\cal D}$.
\end{enumerate}
\label{DMP-determin-1}
\end{Lemma}
\begin{proof}
Part (i) is obvious. We now work on (ii). Let $\underline t=(t_+,t_-)\in\widetilde{\cal D}$. From $K(\underline\tau+\underline t)=\Hull(\bigcup_\sigma \eta_\sigma[0,\tau_\sigma+t_\sigma])$, we get
$$K(\underline\tau+\underline t)=\Hull(K(\underline\tau)\cup \bigcup_\sigma \eta_\sigma[\tau_\sigma,\tau_\sigma+t_\sigma])=\Hull(K(\underline\tau)\cup f_{K(\underline\tau)}( \bigcup_\sigma \widetilde \eta_\sigma[0,t_\sigma])).$$
This implies that $\widetilde K(\underline t)=\Hull( \bigcup_\sigma \widetilde \eta_\sigma[0,t_\sigma])=K(\underline\tau+\underline t)/K(\underline\tau)$, which then implies that $\widetilde \mA(\underline t)=\mA(\underline\tau+\underline t)-\mA(\underline\tau)$. It together with (\ref{K123},\ref{W-def}) implies that $\widetilde W_\sigma(\underline t)=W_\sigma(\underline\tau+\underline t)$.
By (i), Proposition \ref{prop-comp-g} and Lemma \ref{common-function}, if $V(\underline\tau)\not\in \{\widetilde w_+,\widetilde w_-\}$,
$$\widetilde V(\underline t)=g_{\widetilde K(\underline t)/\widetilde K_-(t_-)}^{\widetilde W_+(0,t_-)}\circ g_{\widetilde K_-(t_-)}^{\widetilde w_-}(\widetilde v)=g_{K(\underline\tau+\underline t)/ K(\tau_+,\tau_-+t_-)}^{W_+(\tau_+,\tau_-+t_-)}\circ g_{K(\tau_+,\tau_-+t_-)/K(\underline\tau)}^{W_-(\underline \tau)}(\widetilde v)$$
$$=g_{ K(\underline \tau+\underline t)/K(\tau_+,\tau_-+t_-)}^{W_+(\tau_+,\tau_-+t_-)}\circ g_{K(\tau_+,\tau_-+t_-)/K(\underline\tau)}^{W_-(\underline \tau)}\circ g^{W_-(\tau_+,0)}_{K(\underline\tau)/K(\tau_+,0)} \circ g^{w_+}_{K(\tau_+,0)}(v)$$
$$=g_{ K(\underline \tau+\underline t)/K(\tau_+,\tau_-+t_-)}^{W_+(\tau_+,\tau_-+t_-)}\circ g_{K(\tau_+,\tau_-+t_-)/K(\tau_+,0)}^{W_-( \tau_+,0)} \circ g^{w_+}_{K(\tau_+,0)}(v)$$
$$=g_{ K(\underline \tau+\underline t)/K(\tau_+,\tau_-+t_-)}^{W_+(\tau_+,\tau_-+t_-)}\circ g_{K(\tau_+,\tau_-+t_-)/K(0,\tau_-+t_-)}^{W_+(0,\tau_-+t_-)} \circ g^{w_-}_{K(0,\tau_-+t_-)}(v)$$
$$=g_{ K(\underline \tau+\underline t) }^{W_+(0,\tau_-+t_-)}\circ g^{w_-}_{K(0,\tau_-+t_-)}(v) =g_{K(\underline\tau+\underline t)}^{(w_+,w_-)}(v)=V(\underline\tau+\underline t).$$
Here the two ``$=$'' in the $1^\text{st}$ line follow from the definition of $\widetilde V(\underline t)$ and the fact that $\widetilde K =K(\underline\tau+\cdot)/K(\underline\tau)$, the ``$=$'' in the $2^\text{nd}$ line follows from the definition of $\widetilde v$,
the ``$=$'' in the $3^\text{rd}$ line and the first ``$=$'' in the $5^\text{th}$ line follow from Proposition \ref{prop-comp-g}, and the ``$=$'' in the $4^\text{th}$ line follows from Lemma \ref{common-function}.
Now suppose $V(\underline\tau)\in \{\widetilde w_+,\widetilde w_-\}$. By symmetry, assume that $V(\underline\tau)=\widetilde w_-=W_-(\underline\tau)$. If $v\ge w_-^+$, we understand $g^{W_-(\tau_+,0)}_{K(\underline\tau)/K(\tau_+,0)}$ as a map from $[W_-(\tau_+,0)^+,\infty)$ into $[\widetilde w_-^+,\infty)$, i.e., when $g^{W_-(\tau_+,0)}_{K(\underline\tau)/K(\tau_+,0)}$ takes value $\widetilde w_-$ at some point in $[W_-(\tau_+,0)^+,\infty)$, we redefine the value as $\widetilde w_-^+$. Then the above displayed formula still holds. The case that $v\le w_-^-$ is similar, in which we understand $g^{W_-(\tau_+,0)}_{K(\underline\tau)/K(\tau_+,0)}$ as a map from $(-\infty, W_-(\tau_+,0)^-]$ into $(-\infty,\widetilde w_-^-]$.
\end{proof}
From now on, we fix $v_0\in I_0= [w_-^+,w_+^-]$, $v_+\in I_+=[w_+^+,\infty)$, and $v_-\in I_-=(-\infty,w_-^-]$, and let $V_\nu(\underline t)$, $\underline t\in\cal D$, be the force point function started from $v_\nu$, $\nu\in\{0,+,-\}$. By Lemma \ref{common-function}, $V_-\le C_-\le D_-\le V_0\le C_+\le D_+\le V_+$, which combined with (\ref{CWD}) implies
\begin{equation} V_-\le C_-\le W_-\le D_-\le V_0\le C_+\le W_+\le D_+\le V_+. \label{VWVWV}\end{equation}
\begin{Lemma}
For any $\underline t=(t_+,t_-)\in\cal D$, we have
\begin{equation} |V_+(\underline t)-V_-(\underline t)|/4\le \diam(K(\underline t)\cup [v_-,v_+])\le |V_+(\underline t)-V_-(\underline t)|.\label{V-V}\end{equation}
\begin{equation} f_{K(\underline t)}[V_0(\underline t),V_\nu(\underline t)]\subset \eta_\nu[0,t_\nu]\cup [v_0,v_\nu],\quad \nu\in\{+,-\}\label{fV1}\end{equation}
Here for $x,y\in\mathbb{R}$, the $[x,y]$ in (\ref{fV1}) is the line segment connecting $x$ with $y$, which is the same as $[y,x]$; and if any $v_\nu$, $\nu\in\{0,+,-\}$, takes value $w_\sigma^\pm$ for some $\sigma\in\{+,-\}$, then its appearance in (\ref{V-V},\ref{fV1}) is understood as $w_\sigma$. {See Figure \ref{V-figure}.} \label{lem-V0}
\end{Lemma}
\begin{figure}
\begin{center}
\includegraphics[width=3in]{V20.png}\quad
\includegraphics[width=3in]{V30.png}
\end{center}
\caption{{The above figures illustrate two situations. In each situation, the curves $\eta_+$ and $\eta_-$ are respectively stopped at the time $t_+$ and $t_-$. Let $\underline t=(t_+,t_-)$. For $\nu\in\{0,+,-\}$, the point $f_{K(\underline t)}(V_\nu(\underline t))$ is labeled by $V_\nu^f(\underline t)$, which agrees with $v_\nu$ in the case $v_\nu\not\in \overline{K(\underline t)}$. The sets $f_{K(\underline t)}[V_0(\underline t),V_+(\underline t)]$ and $f_{K(\underline t)}[V_-(\underline t),V_0(\underline t)]$ are respectively colored green and red.}} \label{V-figure}
\end{figure}
\begin{proof}
Fix $\underline t=(t_+,t_-)\in\cal D$. We write $K$ for $K(\underline t)$, $K_\pm$ for $K_\pm(t_\pm)$, $\widetilde K_\pm$ for $K_{\pm}^{t_\mp}(t_\pm)$, $\eta_\pm$ for $\eta_\pm[0,t_\pm]$, $\widetilde\eta_\pm$ for $\eta_{\pm}^{t_\mp}[0,t_\pm]$, and $X$ for $X(\underline t)$, $X\in\{V_0,V_+,V_-,C_+,C_-,D_+,D_-\}$.
Since $g_{K }$ maps $\mathbb{C}\setminus (K^{\doub}\cup [v_-,v_+])$ conformally onto $\mathbb{C}\setminus [V_- ,V_+ ]$, fixes $\infty$, and has derivative $1$ at $\infty$, by Koebe's $1/4$ theorem, we get (\ref{V-V}).
For (\ref{fV1}) by symmetry we only need to prove the case $\nu=+$. By (\ref{VWVWV}), $V_0\le C_+\le D_+\le V_+$. By (\ref{f[C,D]}),
$f_K[C_+,D_+]\subset f_{K_-}(\widetilde\eta_+)=\eta_+$. It remains to show that $f_K(D_+,V_+]\subset [w_+,v_+]$ and $f_K[V_0,C_+)\subset [v_0,w_+]$. If $V_+=D_+$, then $(D_+,V_+]=\emptyset$, and $f_K(D_+,V_+]=\emptyset \subset [w_+,v_+]$. Suppose $V_+>D_+$. By Lemma \ref{common-function}, $D_+=\lim_{x\downarrow \max((\overline K\cap \mathbb{R})\cup\{w_+\})} g_K(x)$, and $V_+=g_K(v_+)$. So $f_K(D_+,V_+]=( \max((\overline K\cap \mathbb{R})\cup\{w_+\}),v_+]\subset [w_+,v_+]$.
If $V_0=C_+$, then $[V_0,C_+)=\emptyset$, and $f_K[V_0,C_+)=\emptyset \subset [v_0,w_+]$. If $V_0<C_+$, by Lemma \ref{common-function} (iii,iv), $\overline{K_+}\cap\overline{K_-}=\emptyset$, $v_0\not\in \overline{K_+}$, and $C_+=\lim_{x\uparrow \min((\overline{K_+}\cap\mathbb{R})\cup\{w_+\})} g_K(x)$. Now either $v_0\not\in \overline K\cup\{w_-^+\}$ and $V_0=g_K(v_0)$, or $v_0\in \overline {K_-}\cup\{w_-^+\}$ and $V_0=D_-$. In the former case, $f_K[V_0,C_+)\subset [v_0,\min((\overline{K_+}\cap\mathbb{R})\cup\{w_+\}))\subset [v_0,w_+]$. In the latter case, $f_K[V_0,C_+)= [\max((\overline{K_-}\cap\mathbb{R})\cup\{w_-\}),\min((\overline{K_+}\cap\mathbb{R})\cup\{w_+\}))\subset [v_0,w_+]$. \end{proof}
\begin{Lemma}
Suppose for some $\underline t=(t_+,t_-)\in\cal D$ and $\sigma\in\{+,-\}$, $\eta_\sigma(t_\sigma)\in \eta_{-\sigma}[0,t_{-\sigma}]\cup [v_{-\sigma},v_0]$. Then $W_\sigma(\underline t)=V_0(\underline t)$.
\label{W=V}
\end{Lemma}
\begin{proof}
We assume $\sigma=+$ by symmetry. By Lemma \ref{W=gw}, $W_+(\underline t)= g_{K_-^{t_+}(t_-)}^{W_-(t_+,0)}(\widehat w_+(t_+))$. By Lemma \ref{common-function}, $V_0(\underline t)=g_{K_-^{t_+}(t_-)}^{W_-(t_+,0)}\circ g_{K_+(t_+)}^{w_+}(v_0)$. If $\eta_+(t_+)\in[v_-,v_0]$, then $\widehat w_+(t_+)=c_{K_+(t_+)}=g_{K_+(t_+)}^{w_+}(v_0)$, and so we get $W_\sigma(\underline t)=V_0(\underline t)$. Now suppose $\eta_+(t_+)\in \eta_-[0,t_-]$. Then $\widehat w_+(t_+)\in \overline{K_-^{t_+}(t_-)}$, which together with $\widehat w_+(t_+)=W_+(t_+,0)\ge V_0(t_+,0)=g_{K_+(t_+)}^{w_+}(v_0)\ge W_-(t_+,0)$ implies that $g_{K_-^{t_+}(t_-)}^{W_-(t_+,0)}\circ g_{K_+(t_+)}^{w_+}(v_0)=g_{K_-^{t_+}(t_-)}^{W_-(t_+,0)}(\widehat w_+(t_+))$, as desired.
\end{proof}
\subsection{Ensemble without intersections} \label{section-deterministic-2}
We say that $(\eta_+,\eta_-;{\cal D})$ is disjoint if $\overline {K_+(t_+)}\cap \overline{K_-(t_-)}=\emptyset$ for any $(t_+,t_-)\in\cal D$. Given a commuting pair $(\eta_+,\eta_-;{\cal D})$, we get a disjoint commuting pair $(\eta_+,\eta_-;{\cal D}_{\disj})$ by defining
\begin{equation} {\cal D}_{\disj}=\{(t_+,t_-)\in{\cal D}:\overline {K_+(t_+)}\cap \overline{K_-(t_-)}=\emptyset\}.\label{D-disj}\end{equation}
In this subsection, we assume that $(\eta_+,\eta_-;{\cal D})$ is disjoint. {In addition to the $W_\pm,V_0,V_\pm$ defined in the previous subsection, we are going to define on $\cal D$ the functions $W_{\pm,j}$, $j=1,2,3$, $W_{\pm,S}$, $Q$, $W_{\pm,N}$, $V_{\nu,N}$, $\nu\in\{0,+,-\}$, and $E_{X,Y}$ for $X\ne Y\in\{W_+,W_-,V_0,V_+,V_-\}$. We will also derive differential equations for these functions, which will be applied to the random setting in Section \ref{section-indep} to construct some two-time-parameter local martingale.}
We now write $g_\sigma^{t_{-\sigma}}(t_\sigma,\cdot)$ for $g_{K_\sigma^{t_{-\sigma}}(t_\sigma)}$, $\sigma\in\{+,-\}$.
For $(t_+,t_-)\in\cal D$ and $\sigma\in\{+,-\}$, {since} $\overline {K_+(t_+)}\cap \overline{K_-(t_-)}=\emptyset$, $ [c_{K_\sigma(t_\sigma)},d_{K_\sigma(t_\sigma)}]$ has positive distance from $K_{-\sigma}^{t_\sigma}(t_{-\sigma})$. So $g_{-\sigma}^{t_\sigma}(t_{-\sigma},\cdot)$ is analytic at $\widehat w_\sigma(t_\sigma)\in [c_{K_\sigma(t_\sigma)},d_{K_\sigma(t_\sigma)}]$. By Lemma \ref{W=gw}, $W_{\sigma }(t_+,t_-)=g_{-\sigma}^{t_\sigma}(t_{-\sigma}, \widehat w_\sigma(t_\sigma))$.
We further define $W_{\sigma,j}$, $j=1,2,3$, and $W_{\sigma,S}$ on ${\cal D}$ by \begin{equation} W_{\sigma,j}(t_+,t_-)=(g_{-\sigma}^{t_\sigma})^{(j)}(t_{-\sigma},\widehat w_\sigma(t_\sigma)),\quad W_{\sigma,S}=\frac{W_{\sigma,3}}{W_{\sigma,1}}-\frac 32\Big(\frac{W_{\sigma,2}}{W_{\sigma,1}}\Big)^2,\quad \sigma\in\{+,-\}.\label{WS}\end{equation}
Here the superscript $(j)$ means the $j$-th complex derivative w.r.t.\ the space variable.
The functions are all continuous on $\cal D$ because $(t_+,t_-,z)\mapsto (g_{-\sigma}^{t_\sigma})^{(j)}(t_{-\sigma},z)$ is continuous by Lemma \ref{lem-uniform}.
Note that $W_{\sigma,S}(t_+,t_-)$ is the Schwarzian derivative of $g_{-\sigma}^{t_\sigma}(t_{-\sigma},\cdot)$ at $\widehat w_\sigma(t_\sigma)$.
\begin{Lemma}
$\mA $ is continuously differentiable with $\partial_\sigma \mA= W_{\sigma, 1} ^2$, $\sigma\in\{+,-\}$.
\label{lem-positive}
\end{Lemma}
\begin{proof}
This follows from a standard argument, which first appeared in \cite[Lemma 2.8]{LSW1}. The statement for an ensemble of chordal Loewner curves appeared in \cite[Formula (3.7)]{reversibility}.
\end{proof}
So for any $\sigma\in\{+,-\}$ and $t_{-\sigma}\in{\cal I}_{-\sigma}$, $K_{\sigma}^{t_{-\sigma}}(t_\sigma)$, $0\le t_\sigma<T_\sigma^{\cal D}(t_{-\sigma})$, are chordal Loewner hulls driven by $W_{\sigma}|^{-\sigma}_{t_{-\sigma}}$ with speed $(W_{\sigma,1}|^{-\sigma}_{t_{-\sigma}})^2$, and we get the differential equation:
\begin{equation} \partial_{t_\sigma} g_{\sigma}^{t_{-\sigma}}(t_\sigma,z)= \frac{2 W_{\sigma,1}(t_+,t_-)^2}{ g_{\sigma}^{t_{-\sigma}}(t_\sigma,z)-W_\sigma(t_+,t_-)},\label{pag}\end{equation}
which together with Lemmas \ref{W=gw} and \ref{common-function}
implies the differential equations for $V_0,V_+,V_-$:
\begin{equation} \partial_\sigma V_\nu \overset{\mathrm{ae}}{=} \frac{2W_{\sigma,1}^2}{V_\nu-W_\sigma},\quad \nu\in\{0,+,-\},\label{pa-X}\end{equation}
and the differential equations for $W_{-\sigma}$, $W_{-\sigma,1}$ and $W_{-\sigma,S}$:
\begin{equation} \partial_{\sigma} W_{-\sigma} = \frac{2W_{\sigma,1}^2}{W_{-\sigma}-W_{\sigma}},\quad \frac{\partial_{\sigma} W_{-\sigma,1}}{W_{-\sigma,1}}=\frac{-2W_{\sigma,1}^2}{(W_+-W_-)^2},\quad \partial_{\sigma} W_{-\sigma,S}=-\frac{12 W_{+,1}^2 W_{-,1}^2}{(W_+-W_-)^4}.\label{pajW}\end{equation}
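In more detail, fix $t_{-\sigma}$ and write $h_{t_\sigma}=g_{\sigma}^{t_{-\sigma}}(t_\sigma,\cdot)$ and $z_0=\widehat w_{-\sigma}(t_{-\sigma})$ for brevity, so that $h_{t_\sigma}(z_0)=W_{-\sigma}$ and $h_{t_\sigma}^{(j)}(z_0)=W_{-\sigma,j}$ (cf.\ Lemma \ref{W=gw} and (\ref{WS}) with $\sigma$ and $-\sigma$ interchanged). Differentiating (\ref{pag}) in the spatial variable and evaluating at $z_0$, which does not depend on $t_\sigma$, gives
$$\partial_{t_\sigma} h_{t_\sigma}(z_0)=\frac{2W_{\sigma,1}^2}{W_{-\sigma}-W_\sigma},\qquad \frac{\partial_{t_\sigma} h_{t_\sigma}'(z_0)}{h_{t_\sigma}'(z_0)}=\frac{-2W_{\sigma,1}^2}{(W_{-\sigma}-W_\sigma)^2},\qquad \partial_{t_\sigma} h_{t_\sigma}''(z_0)=\frac{-2W_{\sigma,1}^2 W_{-\sigma,2}}{(W_{-\sigma}-W_\sigma)^2}+\frac{4W_{\sigma,1}^2 W_{-\sigma,1}^2}{(W_{-\sigma}-W_\sigma)^3},$$
together with an analogous formula for $\partial_{t_\sigma} h_{t_\sigma}'''(z_0)$. The first two equalities are the first two equations in (\ref{pajW}), and combining all of them according to the definition of $W_{-\sigma,S}$ in (\ref{WS}) yields the third.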
Define $Q$ on ${\cal D}$ by
\begin{equation} Q(\underline t)=\exp\Big(\int_{[\underline 0,\underline t]} -\frac{12 W_{+,1}(\underline s) ^2W_{-,1}(\underline s) ^2}{(W_+(\underline s) -W_-(\underline s) )^4}d^2\underline s\Big).\label{F}\end{equation}
Then $Q$ is continuous and positive with $Q(t_+,t_-)=1$ when {$t_+=0$ or $t_-=0$}. From (\ref{pajW}) we get
\begin{equation} \frac{\partial_\sigma Q}{Q}=W_{\sigma,S},\quad \sigma\in\{+,-\}.\label{paF}\end{equation}
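For instance, for $\sigma=+$, differentiating (\ref{F}) under the integral sign and using the last equation in (\ref{pajW}) gives
$$\frac{\partial_+ Q(t_+,t_-)}{Q(t_+,t_-)}=\int_0^{t_-} -\frac{12 W_{+,1}^2 W_{-,1}^2}{(W_+-W_-)^4}(t_+,s_-)\,ds_-=\int_0^{t_-} \partial_- W_{+,S}(t_+,s_-)\,ds_-=W_{+,S}(t_+,t_-),$$
where the last equality uses that $W_{+,S}(t_+,0)=0$ because $g_{-}^{t_+}(0,\cdot)$ is the identity map.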
{
\begin{Remark} The function $Q$ implicitly appeared earlier in \cite{reversibility,duality}: $Q^{-\frac{\cc}6}$ agrees with the second factor on the RHS of \cite[Formula (4.3)]{reversibility}. It is related to the Brownian loop measure (\cite{loop}) by the fact that $-\frac 16 \log Q(\underline t)$ equals the Brownian loop measure of the set of loops in $\mathbb{H}$ that intersect both $K_+(t_+)$ and $K_-(t_-)$. The ODE (\ref{paF}) reflects the facts that the Brownian loop measure can be decomposed into Brownian bubble measures along a curve, and the Schwarzian derivatives are related to the Brownian bubble measures.
\end{Remark}
}
By (\ref{VWVWV}), $V_+\ge W_+\ge C_+\ge V_0\ge D_- \ge W_-\ge V_-$ on $\cal D$. Because of disjointness, we further have $C_+>D_-$ by Lemma \ref{common-function}.
By the same lemma,
\begin{equation} V_\sigma(t_+,t_-)=g_{-\sigma}^{t_\sigma}(t_{-\sigma},V_\sigma(t_\sigma \underline e_\sigma));\label{WV+g}\end{equation}
\begin{equation} V_0(t_+,t_-)=g_{-\sigma}^{t_\sigma}(t_{-\sigma},V_0(t_\sigma \underline e_\sigma)),\quad \mbox{if }v_0\not\in\overline{K_{-\sigma}(t_{-\sigma})}.\label{V0-g}\end{equation}
Let $\underline t=(t_+,t_-)\in {\cal D}$. For $\sigma\in\{+,-\}$, differentiating (\ref{circ-g}) w.r.t.\ $t_\sigma$, letting $\widehat z=g_{K_\sigma(t_\sigma)}(z)$, and using Lemma \ref{W=gw} and (\ref{pag},\ref{WS}) we get
\begin{equation} \partial_{t_\sigma} g_{-\sigma}^{t_\sigma}(t_{-\sigma},\widehat z)=\frac{2 (g_{-\sigma}^{t_\sigma})'(t_{-\sigma},\widehat w_\sigma(t_\sigma))^2}{g_{-\sigma}^{t_\sigma}(t_{-\sigma},\widehat z)-g_{-\sigma}^{t_\sigma}(t_{-\sigma},\widehat w_\sigma(t_\sigma))}-\frac{2(g_{-\sigma}^{t_\sigma})'(t_{-\sigma},\widehat z)}{\widehat z-\widehat w_\sigma(t_\sigma)}.\label{diff}\end{equation}
Letting $\mathbb{H}\setminus {K_{-\sigma}^{t_\sigma}(t_{-\sigma})}\ni \widehat z\to \widehat w_\sigma(t_\sigma)$ and using the power series expansion of $g_{-\sigma}^{t_\sigma}(t_{-\sigma},\cdot)$ at $\widehat w_\sigma(t_\sigma)$, we get
\begin{equation} \partial_{t_\sigma} g_{-\sigma}^{t_\sigma}(t_{-\sigma},\widehat z)|_{\widehat z=\widehat w_\sigma(t_\sigma)}=-3 W_{\sigma,2}(\underline t),\quad \sigma\in\{+,-\}.\label{-3}\end{equation}
Differentiating (\ref{diff}) w.r.t.\ $\widehat z$ and letting $ \widehat z\to \widehat w_\sigma(t_\sigma)$, we get
\begin{equation} \frac{ \partial_{t_\sigma} (g_{-\sigma}^{t_\sigma})'(t_{-\sigma},\widehat z)}{ (g_{-\sigma}^{t_\sigma})'(t_{-\sigma},\widehat z)}\bigg|_{\widehat z=\widehat w_\sigma(t_\sigma)}=\frac 12\Big(\frac{W_{\sigma,2}(\underline t)}{W_{\sigma,1}(\underline t)}\Big)^2-\frac 43\frac{W_{\sigma,3}(\underline t)}{W_{\sigma,1}(\underline t)},\quad \sigma\in\{+,-\}.\label{1/2-4/3}\end{equation}
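Concretely, writing $h=g_{-\sigma}^{t_\sigma}(t_{-\sigma},\cdot)$ and $w=\widehat w_\sigma(t_\sigma)$ for brevity, and inserting the expansion $h(\widehat z)=h(w)+W_{\sigma,1}(\widehat z-w)+\frac{W_{\sigma,2}}2(\widehat z-w)^2+\frac{W_{\sigma,3}}6(\widehat z-w)^3+O((\widehat z-w)^4)$ into the RHS of (\ref{diff}), one finds
$$\partial_{t_\sigma} g_{-\sigma}^{t_\sigma}(t_{-\sigma},\widehat z)=-3W_{\sigma,2}(\underline t)+\Big[\frac{W_{\sigma,2}(\underline t)^2}{2W_{\sigma,1}(\underline t)}-\frac43\, W_{\sigma,3}(\underline t)\Big](\widehat z-w)+O((\widehat z-w)^2),$$
so that (\ref{-3}) is the constant term, and (\ref{1/2-4/3}) is the coefficient of $(\widehat z-w)$ divided by $h'(w)=W_{\sigma,1}(\underline t)$.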
For $\sigma\in\{+,-\}$, define $ W_{\sigma,N}$ on ${\cal D}$ by $W_{\sigma,N} =\frac{ W_{\sigma,1} }{ W_{\sigma,1}|^{\sigma}_0 }$. Since $W_{\sigma,1}|^{-\sigma}_0\equiv 1$, we get $ W_{\sigma,N}(t_+,t_-)=1$ when $t_+t_-=0$. From (\ref{pajW}) we get
\begin{equation} \frac{\partial_{\sigma} W_{-\sigma,N}}{ W_{-\sigma,N}}=\frac{-2 W_{\sigma,1}^2}{(W_{-\sigma}-W_{\sigma})^2}\,\partial t_\sigma-\frac{-2 W_{\sigma,1}^2}{(W_{-\sigma}-W_{\sigma})^2}\bigg|^{-\sigma}_0 \,\partial t_\sigma,\quad \sigma\in\{+,-\}.
\label{pajhaW}\end{equation}
We now define $ V_{0,N},V_{+,N}, V_{-,N}$ on ${\cal D} $ by
$$ V_{\nu,N}(\underline t)=(g_{-\nu}^{t_\nu})'(t_{-\nu}, V_\nu(t_\nu \underline e_\nu))/(g_{-\nu}^{0})'(t_{-\nu}, v_\nu),\quad \nu\in\{+,-\};$$
\begin{equation} V_{0,N}(\underline t)=(g_{-\sigma}^{t_\sigma})'(t_{-\sigma},V_0(t_\sigma \underline e_\sigma))/(g_{-\sigma}^{t_\sigma})'(t_{-\sigma},v_0),\quad \mbox{if } v_0\not\in \overline{K_{-\sigma}(t_{-\sigma})},\quad \sigma\in\{+,-\}.\label{V1V2}\end{equation}
The functions are well defined because either $v_0\not\in \overline{K_+(t_+)}$ or $v_0\not\in \overline{K_-(t_-)}$, and when they both hold, the RHS of (\ref{V1V2}) equals $g_{K(t_+,t_-)}'(v_0)/(g_{K_+(t_+)}'(v_0)g_{K_-(t_-)}'(v_0))$ by (\ref{circ-g}).
Note that $ V_{\nu,N}(t_+,t_-)=1$ if $t_+t_-=0$ for $\nu\in\{0,+,-\}$. From (\ref{WV+g}-\ref{V0-g}) and (\ref{circ-g},\ref{pag}) we find that these functions satisfy the following differential equations on ${\cal D}$:
\begin{equation} \frac{\partial_{\sigma} V_{\nu,N}} { V_{\nu,N}}= \frac{-2W_{\sigma,1}^2}{(V_{\nu}-W_\sigma)^2}\,\partial t_\sigma-\frac{-2W_{\sigma,1}^2}{(V_{\nu}-W_\sigma)^2}\bigg|^{-\sigma}_0\,\partial t_\sigma,\quad \sigma\in\{+,-\},\quad \nu\in\{0,-\sigma\},\quad \mbox{if }v_\nu\not\in \overline{K_\sigma(t_\sigma)}. \label{paV1}\end{equation}
We now define $E_{X,Y}$ on ${\cal D}$ for $X\ne Y\in\{W_+,W_-,V_0,V_+,V_-\}$ as follows. First, let
\begin{equation} E_{X,Y}(t_+,t_-)=\frac{(X(t_+,t_-)-Y(t_+,t_-))(X(0,0)-Y(0,0))} {(X(t_+,0)-Y(t_+,0))(X(0,t_-)-Y(0,t_-))},\label{RXY}\end{equation}
if the denominator is not $0$. If the denominator is $0$, then since $V_+\ge W_+\ge V_0\ge W_-\ge V_-$ and $W_+>W_-$, there is $\sigma\in\{+,-\}$ such that $\{X,Y\}\subset\{W_\sigma,V_\sigma,V_0\}$. By symmetry, we will only describe the definition of $E_{X,Y}$ in the case that $\sigma=+$. If $X(t_+,0)=Y(t_+,0)$, by Lemmas \ref{common-function} and \ref{W=gw}, $X(t_+,\cdot)\equiv Y(t_+,\cdot)$. If $X(0,t_-)=Y(0,t_-)$, then we must have $X(\underline 0)=Y(\underline 0)$, and so $X(0,\cdot)\equiv Y(0,\cdot)$. For the definition of $E_{X,Y}$, we modify (\ref{RXY}) by writing the RHS as $\frac{X(t_+,t_-)-Y(t_+,t_-)}{X(t_+,0)-Y(t_+,0) }:\frac{ X(0,t_-)-Y(0,t_-)}{ X(0,0)-Y(0,0)}$, replacing the first factor (before ``$:$'') by $(g_{-}^{t_+})'(t_-,X(t_+,0))$ when $X(t_+,0)=Y(t_+,0)$, replacing the second factor (after ``$:$'') by $g_{K_-(t_-)}'(X(0,0))$ when $X(0,t_-)=Y(0,t_-)$; and do both replacements when two equalities both hold. Then in all cases, $E_{X,Y}$ is continuous and positive on ${\cal D}$, and $E_{X,Y}(t_+,t_-)=1$ if {$t_+=0$ or $t_-=0$}. By (\ref{pa-X},\ref{pajW}), for $\sigma\in\{+,-\}$, if $X,Y\ne W_\sigma$, then
\begin{equation} \frac{\partial_\sigma E_{X,Y}}{E_{X,Y}}\overset{\mathrm{ae}}{=} \frac{-2W_{\sigma,1}^2}{(X-W_\sigma)(Y-W_\sigma)}\,\partial t_\sigma- \frac{-2W_{\sigma,1}^2}{(X-W_\sigma)(Y-W_\sigma)}\Big|^{-\sigma}_0\,\partial t_\sigma.\label{paEXY}\end{equation}
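The computation behind (\ref{paEXY}) is elementary: if $X,Y\ne W_\sigma$, then both $X$ and $Y$ satisfy the same equation $\partial_\sigma X\overset{\mathrm{ae}}{=}\frac{2W_{\sigma,1}^2}{X-W_\sigma}$ by (\ref{pa-X},\ref{pajW}), so
$$\frac{\partial_\sigma (X-Y)}{X-Y}\overset{\mathrm{ae}}{=}\frac{2W_{\sigma,1}^2}{X-Y}\Big[\frac 1{X-W_\sigma}-\frac 1{Y-W_\sigma}\Big]=\frac{-2W_{\sigma,1}^2}{(X-W_\sigma)(Y-W_\sigma)};$$
the subtracted term in (\ref{paEXY}) is the same quantity evaluated at $t_{-\sigma}=0$, which accounts for the factor $X(t_\sigma\underline e_\sigma)-Y(t_\sigma\underline e_\sigma)$ in the denominator of (\ref{RXY}).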
\subsection{A time curve in the time region} \label{time curve}
{We call the set $\cal D$ a time region, which is composed of two-dimensional time variables, whose two components are one-dimensional time variables for $\eta_+$ and $\eta_-$ respectively. A time curve in $\cal D$ is a continuous and strictly increasing function $\underline u=(u_+,u_-):[0,T^u)\to \cal D$ with $\underline u(0)=\underline 0$. Such a time curve can be used to grow $\eta_+$ and $\eta_-$ simultaneously. This means that we construct two curves $\eta^u_\sigma:=\eta_\sigma\circ u_\sigma$, $\sigma\in\{+,-\}$, on the same interval $[0,T^u)$, which are time-changes of some initial segments of $\eta_+$ and $\eta_-$. In this subsection, we are going to construct a special time curve in $\cal D$ such that if we grow $\eta_+$ and $\eta_-$ simultaneously using $\underline u$, then the factors (F1) and (F2) in Section \ref{Strategy} are satisfied. Later in the next section, for a commuting pair of SLE$_\kappa(2,\rho_0,\rho_+,\rho_-)$ curves, we will use the driving functions $W_\sigma$, $\sigma\in\{+,-\}$, and the force point functions $V_\nu$, $\nu\in\{0,+,-\}$, and the time-curve $\underline u$ to construct a diffusion process $(\underline R(t))_{0\le t<T^u}$, whose transition density then leads to the proof of the main theorem of the paper.
}
We {use the settings and results in the previous subsections except that we }
do not assume that $(\eta_+,\eta_-;\cal D)$ is disjoint.
We {make the additional assumption} in this subsection that
\begin{equation} v_+-v_0=v_0-v_-.\label{add-assump}\end{equation}
Define $\Lambda$ and $\Upsilon$ on $\cal D$ by $\Lambda=\frac 12\log\frac{V_+-V_0}{V_0-V_-}$ and $\Upsilon=\frac 12\log\frac{V_+-V_-}{v_+-v_-}$. By {the additional} assumption (\ref{add-assump}), we have $\Lambda(\underline 0)=\Upsilon(\underline 0)=0$.
\begin{Lemma} There exists a unique continuous and strictly increasing function $\underline u:[0,T^u)\to {\cal D}$, for some $T^u\in(0,\infty]$, with $\underline u(0)=\underline 0$, such that for any $0\le t<T^u$ and $\sigma\in\{+,-\}$, $|V_\sigma(\underline u(t))-V_0(\underline u(t))|=e^{2t}|v_\sigma-v_0|$; and $\underline u$ {cannot} be extended beyond $T^u$ {while still satisfying this property}. \label{time curve-lem}
\end{Lemma}
\begin{proof}
The proof resembles the argument in \cite[Section 4]{Two-Green-interior}. It is clear that the defining property of $\underline u$ is equivalent to the statement that $\Lambda(\underline u(t))=0$ and $\Upsilon (\underline u(t))=t$ for $0\le t<T^u$.
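Indeed, since $V_+\ge V_0\ge V_-$ and, by (\ref{add-assump}), $v_+-v_0=v_0-v_-=\frac 12(v_+-v_-)>0$, the property $|V_\sigma(\underline u(t))-V_0(\underline u(t))|=e^{2t}|v_\sigma-v_0|$, $\sigma\in\{+,-\}$, holds if and only if $(V_+-V_0)(\underline u(t))=(V_0-V_-)(\underline u(t))=\frac 12 e^{2t}(v_+-v_-)$, which is exactly the statement that $\Lambda(\underline u(t))=0$ and $\Upsilon(\underline u(t))=t$.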
By (\ref{VWVWV}), the definition of $V_\nu$ ($V_\nu(\underline t)=g_{K(\underline t)}^{\underline w}(v_\nu)$), Lemma \ref{common-function} (the definition of $g_{K(\underline t)}^{\underline w}$) and Proposition \ref{Prop-cd-continuity'}, we see that, for $\sigma\in\{+,-\}$, $|V_\sigma-V_0|$ and $|V_\sigma-V_{-\sigma}|$ are strictly increasing in $t_\sigma$, and $|V_0-V_{-\sigma}|$ is strictly decreasing in $t_\sigma$. Thus, $\Lambda$ is strictly increasing in $t_+$ and strictly decreasing in $t_-$, and $\Upsilon$ is strictly increasing in both $t_+$ and $t_-$. Since $\Lambda(\underline 0)=0$, we see that $\Lambda>0$ on $(0,T_+)\times \{0\}$ and $\Lambda<0$ on $\{0\}\times (0,T_-)$.
For $\sigma\in \{+,-\}$, let
$$T^u_\sigma=\sup\{t_\sigma\in[0,T_\sigma): \exists t_{-\sigma}\in [0,T_{-\sigma}^{\cal D}(t_\sigma))\mbox{ such that } \sigma \Lambda(t_\sigma \underline e_\sigma+t_{-\sigma}\underline e_{-\sigma})\le 0\}.$$
By the definition of $T^u_+$, $\Lambda>0$ on ${\cal D}\cap (T^u_+,\infty)\times \mathbb{R}_+$. By the continuity of $\Lambda$, we have $\Lambda\ge 0$ on ${\cal D}\cap [T^u_+,\infty)\times \mathbb{R}_+$. If $\Lambda(T^u_+,t_-)=0$ for some $t_-\in\mathbb{R}_+$ such that $(T^u_+,t_-)\in\cal D$, then by the strict decreasingness of $\Lambda$ in $t_-$, there is $t_-'>t_-$ such that $(T^u_+,t_-')\in\cal D$ and $\Lambda(T^u_+,t_-')<0$, which is a contradiction. So $\Lambda>0$ on ${\cal D}\cap [T^u_+,\infty)\times \mathbb{R}_+$. For $t_+\in[0,T^u_+)$, by the intermediate value theorem and the strict decreasingness of $\Lambda$ in $t_-$, there exists a unique $t_-\in [0,T^{\cal D}_-(t_+))$ such that $\Lambda(t_+,t_-)=0$. We then define a function $u_{+\to -}: [0,T^u_+)\to [0,T_-)$ such that $\Lambda(t_+,u_{+\to -}(t_+))=0$, $0\le t_+<T^u_+$. Note that $u_{+\to -}(0)=0$. Since $\Lambda$ is strictly increasing (resp.\ decreasing) in $t_+$ (resp.\ $t_-$), $u_{+\to -}$ is strictly increasing.
Similarly, $\Lambda <0$ on ${\cal D}\cap \mathbb{R}_+\times [T^u_-,\infty)$, and there is a strictly increasing function $u_{-\to +}:[0,T^u_-)\to [0,T_+)$ with $u_{-\to +}(0)=0$ such that $\Lambda(u_{-\to +}(t_-),t_-)=0$, $0\le t_-<T^u_-$. Since $\Lambda>0$ on ${\cal D}\cap [T^u_+,\infty)\times \mathbb{R}_+$, we see that $u_{-\to +}$ takes values in $[0,T^u_+)$. Similarly, $u_{+\to -}$ takes values in $[0,T^u_-)$. From $\Lambda(t_+,u_{+\to -}(t_+))=\Lambda(u_{-\to +}(t_-),t_-)=0$ and the monotonicity of $\Lambda$, we see that $u_{+\to -}$ and $u_{-\to +}$ are inverses of each other, and are both continuous.
By the continuity and strictly increasingness of $u_{+\to -}$ and $\Upsilon$, the map $[0,T^u_+)\ni t\mapsto \Upsilon(t,u_{+\to -}(t))$ is continuous and strictly increasing, and so its range is $[0,T^u)$ for some $T^u\in (0,\infty]$. Let $u_+$ denote the inverse of this map, and let $u_-=u_{+\to -}\circ u_+$. Then for $\sigma\in\{+,-\}$, $u_\sigma$ is a continuous and strictly increasing function from $[0,T^u)$ onto $[0,T^u_\sigma)$. Let $\underline u=(u_+,u_-)$. Then for $0\le t<T^u$, $\Lambda(\underline u(t))=0$ and $\Upsilon(\underline u(t))=t$. So $\underline u$ satisfies the desired property on $[0,T^u)$. It cannot be extended beyond $T^u$ while keeping this property because $\sup u_\sigma[0,T^u)=T^u_\sigma$, and $\Lambda>0$ on ${\cal D}\cap [T^u_+,\infty)\times \mathbb{R}_+$, and $\Lambda<0$ on ${\cal D}\cap \mathbb{R}_+\times [T^u_-,\infty)$.
\end{proof}
\begin{Lemma}
For any $t\in [0,T^u)$ and $\sigma\in\{+,-\}$,
\begin{equation} e^{2t}|v_+-v_-|/128\le \rad_{v_0} (\eta_\sigma[0,u_\sigma(t)]\cup[v_0,v_\sigma])\le e^{2t} |v_+-v_-| .\label{V-V'}\end{equation}
If $T^u<\infty$, then $\lim_{t\uparrow T^u} \underline u(t)$ {is} a point in $\partial{\cal D}\cap (0,\infty)^2$. If ${\cal D}=\mathbb{R}_+^2$, then $T^u=\infty$. If $T^u=\infty$, then $\diam(\eta_+)=\diam(\eta_-)=\infty$.
\label{Beurling}
\end{Lemma}
\begin{proof}
Fix $t\in[0,T^u)$. For $\sigma\in\{+,-\}$, let $S_\sigma= [v_0,v_\sigma]\cup\eta_\sigma[0,u_\sigma(t)]\cup \overline{\eta_\sigma[0,u_\sigma(t)]}$, where the bar stands for complex conjugation, and $L_\sigma =\rad_{v_0} (S_\sigma)$. From (\ref{V-V}) and that $|V_+(\underline u(t))-V_-(\underline u(t))|=e^{2t}|v_+-v_-|$, we get $ e^{2t}|v_+-v_-|/8\le L_+\vee L_-\le e^{2t}|v_+-v_-|$. Since $V_+(\underline u(t))-V_0(\underline u(t))=V_0(\underline u(t))-V_-(\underline u(t))$, by Lemma \ref{lem-V0}, $S_+$ and $S_-$ have the same harmonic measure viewed from $\infty$. By Beurling's estimate, $ L_+\vee L_- \le 16 (L_+\wedge L_-)$. So we get (\ref{V-V'}).
For any $\sigma\in\{+,-\}$, $u_\sigma(t)=\hcap_2(\eta_\sigma[0,u_\sigma(t)])\le L_\sigma^2\le e^{4t} |v_+-v_-|^2$. Suppose $T^u<\infty$. Then $u_+$ and $u_-$ are bounded on $[0,T^u)$. Since $\underline u$ is increasing, $\lim_{t\uparrow T^u} \underline u(t)$ {is} a point in $(0,\infty)^2$, which must lie on $\partial \cal D$ because $\underline u$ cannot be extended beyond $T^u$. If ${\cal D}=\mathbb{R}_+^2$, then $\partial{\cal D}\cap(0,\infty)^2=\emptyset$, and so $T^u=\infty$. If $T^u=\infty$, then by (\ref{V-V'}), $\diam(\eta_\pm)=\infty$.
\end{proof}
{For any function $X$ on $\cal D$,} define
$X^u=X\circ \underline u$ {on $[0,T^u)$}. Let $I=|v_+-v_0|=|v_--v_0|$.
From the definition of $\underline u$, we have $|V^u_\pm(t)-V^u_0(t)|=e^{2t}I$ for any $t\in[0,T^u)$. Let
{ $$R_\sigma =\frac{W_\sigma^u -V_0^u }{V_\sigma^u -V_0^u }\in [0,1],\quad \sigma\in\{+,-\};\quad \underline R=(R_+,R_-).$$ } Let $e^{c\cdot}$ denote the function $t\mapsto e^{ct}$ for $c\in\mathbb{R}$.
\begin{Lemma}
Let ${\cal D}_{\disj}$ be defined by (\ref{D-disj}). Let $T^u_{\disj}\in(0,T^u]$ be such that $\underline u(t)\in{\cal D}_{\disj}$ for $0\le t<T^u_{\disj}$. Then $\underline u$ is continuously differentiable on $[0,T^u_{\disj})$, and
\begin{equation} (W^u_{\sigma,1})^2 u_\sigma'= \frac{R_\sigma(1-R_\sigma^2)}{R_++R_-}\,e^{4\cdot } I^2 \mbox{ on }[0,T^u_{\disj}),\quad \sigma\in\{+,-\}. \label{uj''}\end{equation}
\label{lem-uj}
\end{Lemma}
\begin{proof}
By (\ref{pa-X}), $\Lambda$ and $\Upsilon$ satisfy the following differential equations on ${\cal D}_{\disj}$:
$$ \partial_{\sigma} \Lambda \overset{\mathrm{ae}}{=}\frac{(V_+-V_-) W_{\sigma,1}^2}{\prod_{\nu\in\{0,+,-\}} (V_\nu -W_\sigma )}\,\mbox{ and }\,
\partial_{\sigma}\Upsilon\overset{\mathrm{ae}}{=} \frac{- W_{\sigma,1}^2}{\prod_{\nu\in\{+,-\}} (V_\nu -W_\sigma )},\quad \sigma\in\{+,-\}.$$
From $\Lambda^u(t)= 0$ and $\Upsilon^u(t)=t$, we get
$$\sum_{\sigma\in\{+,-\}}\frac{ (W_{\sigma,1}^u) ^2u_\sigma '}{\prod_{\nu\in\{0,+,-\}} (V_\nu^u -W_\sigma^u )}\overset{\mathrm{ae}}{=} 0\,\mbox{ and }\,
\sum_{\sigma\in\{+,-\}} \frac{- (W_{\sigma,1}^u)^2 u_\sigma '}{\prod_{\nu\in\{+,-\}} (V_\nu^u -W_\sigma^u )}\overset{\mathrm{ae}}{=} 1.$$
Solving the system of equations, we get $(W^u_{\sigma,1})^2 u_\sigma'\overset{\mathrm{ae}}{=} (\prod_{\nu\in\{0,+,-\}} (V_\nu^u -W_\sigma^u ))/(W_\sigma^u-W_{-\sigma}^u)$, $\sigma\in \{+,-\}$. Using $V_\sigma^u-V_0^u=\sigma e^{2\cdot} I$ and $W^u_\sigma-V^u_0=R_\sigma (V_\sigma^u-V_0^u)$, we find that (\ref{uj''}) holds with ``$\overset{\mathrm{ae}}{=}$'' in place of ``$=$''. Since $W_+>W_-$ on ${\cal D}_{\disj}$, we get $R_++R_->0$ on $[0,T^u_{\disj})$. So the original (\ref{uj''}) holds by the continuity of its RHS.
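For the reader's convenience, we record the algebra behind the identification with the RHS of (\ref{uj''}): since $V_0^u-W_\sigma^u=-\sigma R_\sigma e^{2\cdot}I$, $V_\sigma^u-W_\sigma^u=\sigma(1-R_\sigma)e^{2\cdot}I$, $V_{-\sigma}^u-W_\sigma^u=-\sigma(1+R_\sigma)e^{2\cdot}I$, and $W_\sigma^u-W_{-\sigma}^u=\sigma(R_++R_-)e^{2\cdot}I$, we have
$$\frac{\prod_{\nu\in\{0,+,-\}}(V_\nu^u-W_\sigma^u)}{W_\sigma^u-W_{-\sigma}^u}=\frac{\sigma R_\sigma(1-R_\sigma)(1+R_\sigma)\,e^{6\cdot}I^3}{\sigma(R_++R_-)\,e^{2\cdot}I}=\frac{R_\sigma(1-R_\sigma^2)}{R_++R_-}\,e^{4\cdot}I^2.$$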
\end{proof}
Now suppose that $\eta_+$ and $\eta_-$ are random curves, and $\cal D$ is a random region. Then $\underline u$ and $T^u$ are also random. Suppose that there is an $\mathbb{R}_+^2$-indexed filtration ${\cal F}$ such that $\cal D$ is an ${\cal F}$-stopping region, and $V_0,V_+,V_-$ are all ${\cal F}$-adapted. Now we extend $\underline u$ to $\mathbb{R}_+$ such that if $T^u<\infty$, then $\underline u(s)=\lim_{t\uparrow T^u} \underline u(t)$ for $s\in [T^u,\infty)$. The following proposition is similar to \cite[Lemma 4.1]{Two-Green-interior}.
\begin{Proposition}
For every $t\in\mathbb{R}_+$, $\underline u(t)$ is an ${\cal F}$-stopping time.\label{Prop-u(t)}
\end{Proposition}
Since $\underline u$ is non-decreasing, we get an $\mathbb{R}_+$-indexed filtration ${\cal F}^u$: ${\cal F}^u_t= {\cal F} _{\underline u(t)}$, ${t\ge 0}$, by Propositions \ref{T<S} and \ref{Prop-u(t)}.
\section{Commuting Pair{s} of SLE$_\kappa(2,{\protect\underline{\rho}})$ Curves}\label{section-commuting-SLE-kappa-rho}
In this section, we apply the results from the previous section to study a pair of commuting SLE$_\kappa(2, \underline\rho)$ curves, which arise as flow lines {($\kappa\ne 4$) or level lines (for $\kappa=4$)} of a GFF with piecewise constant boundary data (cf.\ {\cite{MS1,WW-level}}).
The results of this section will be used in the next section to study {the} commuting pair{s} of hSLE$_\kappa$ curves that we are mostly interested in.
\subsection{Martingale and domain Markov property} \label{subsection-commuting-SLE-kappa-rho}
Throughout this section, we fix $\kappa,\rho_0,\rho_+,\rho_-$ such that $\kappa\in(0,8)$, $\rho_+,\rho_->\max\{-2,\frac \kappa 2-4\}$, $\rho_0\ge \frac{\kappa}{4}-2$, and $\rho_0+\rho_\sigma\ge \frac{\kappa}{2}-4$, $\sigma\in\{+,-\}$. Let $w_-<w_+\in\mathbb{R}$. Let $v_+\in[w_+^+,\infty)$, $v_-\in(-\infty,w_-^-]$, and $v_0\in[w_-^+,w_+^-]$. Write $\underline\rho$ for $(\rho_0,\rho_+,\rho_-)$.
From \cite{MS1} {(for $\kappa\ne 4$) and \cite{WW-level} (for $\kappa=4$)}, we know that there is a coupling of two chordal Loewner curves $\eta_+(t_+)$, $0\le t_+<\infty$, and $\eta_-(t_-)$, $0\le t_-<\infty$, driven by $\widehat w_+$ and $\widehat w_-$ (with speed $1$), respectively, with the following properties.
\begin{enumerate}
\item [(A)] For $\sigma\in\{+,-\}$, $\eta_\sigma$ is a chordal SLE$_\kappa(2,\underline\rho)$ curve in $\mathbb{H}$ started from $w_\sigma$ with force points at $w_{-\sigma}$ and $v_\nu$, $\nu\in\{0,+,-\}$.
Let $\widehat w_{-\sigma}^\sigma,\widehat v_\nu^\sigma $ denote the force point functions for $\eta_\sigma$ started from $w_{-\sigma},v_\nu$, $\nu\in\{0,+,-\}$, respectively.
\item [(B)] Let $\sigma\in\{+,-\}$. If $\tau_{-\sigma}$ is a finite stopping time w.r.t.\ the
filtration ${\cal F}^{-\sigma}$ generated by $\eta_{-\sigma}$, then a.s.\ there is a chordal Loewner curve $\eta_{\sigma}^{\tau_{-\sigma}}(t)$, $0\le t<\infty$, with some speed such that $\eta_\sigma =f_{K_{-\sigma}(\tau_{-\sigma})}\circ \eta_{\sigma}^{\tau_{-\sigma}} $. Moreover, the conditional law of the normalization of $\eta_{\sigma}^{\tau_{-\sigma}}$ given ${\cal F}^{-\sigma}_{\tau_{-\sigma}}$ is that of a chordal SLE$_\kappa(2,\underline\rho)$ curve in $\mathbb{H}$ started from $\widehat w_\sigma^{-\sigma}(\tau_{-\sigma})$ with force points at $\widehat w_{-\sigma}(\tau_{-\sigma}),\widehat v_\nu^{-\sigma}(\tau_{-\sigma})$, $\nu\in\{0,+,-\}$.
\end{enumerate}
{There are some tiny flaws in the above two properties, which will be described and fixed as follows. First, since $\eta_\sigma$ starts from $w_\sigma$, its force points must take values in $\mathbb{R}_{w_\sigma}$. However, some $v_\nu$ may take value $w_{-\sigma}^+$ or $w_{-\sigma}^-$, which does not belong to $\mathbb{R}_{w_\sigma}$. When this happens, as a force point for $\eta_\sigma$, $v_\nu$ is treated as $w_{-\sigma}$. Second, it may happen that $\widehat v_\nu^{-\sigma}(\tau_{-\sigma})=\widehat w_\sigma^{-\sigma}(\tau_{-\sigma})$ for some $\nu$. When this happens, as a force point for the $\eta_{\sigma}^{\tau_{-\sigma}}$ (started from $\widehat w_\sigma^{-\sigma}(\tau_{-\sigma})$), $\widehat v_\nu^{-\sigma}(\tau_{-\sigma})$ is treated as $\widehat w_\sigma^{-\sigma}(\tau_{-\sigma})^\mu$ for some $\mu\in\{+,-\}$, which is chosen such that, if $\nu\in\{+,-\}$, then $\mu=\nu$, and if $\nu=0$, then $\mu=-\sigma$. We choose $\mu$ in this way because $\widehat v_-^{-\sigma}\le \widehat w_-^{-\sigma}\le \widehat v_0^{-\sigma}\le \widehat w_+^{-\sigma}\le \widehat v_+^{-\sigma}$.
}
One may construct $\eta_+$ and $\eta_-$ as two flow lines of the same GFF on $\mathbb{H}$ with some piecewise constant boundary data (cf.\ \cite{MS1}). The conditions that $\kappa\in(0,8)$, $\rho_0,\rho_+,\rho_->\max\{-2,\frac \kappa 2-4\}$ and $\rho_0+\rho_\sigma\ge \frac{\kappa}{2}-4$, $\sigma\in\{+,-\}$, ensure that (i) there is no continuation threshold for either $\eta_+$ or $\eta_-$, and so $\eta_+$ and $\eta_-$ both have lifetime $\infty$ and $\eta_\pm(t)\to\infty$ as $t\to\infty$; (ii) $\eta_+$ does not hit $(-\infty,w_-]$, and $\eta_-$ does not hit $[w_+,\infty)$; and (iii) $\eta_\pm\cap\mathbb{R}$ has Lebesgue measure zero. The stronger condition that $\rho_0\ge \frac{\kappa}{4}-2$ (which implies that $\rho_0>\max\{-2,\frac \kappa 2-4\}$) will be needed later (see Remark \ref{Remark-rho0}).
We call the above $(\eta_+,\eta_-)$ a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves in $\mathbb{H}$ started from $(w_+,w_-;v_0,v_+,v_-)$. If $\rho_0=0$, which satisfies $\rho_0>\frac \kappa 4-2$ since $\kappa<8$, then $v_0$ does not play a role, and we omit $\rho_0$ and $v_0$ in the name.
We may take $\tau_{-\sigma}$ in (B) to be a deterministic time. So for each $t_{-\sigma}\in\mathbb{Q}_+$, a.s.\ there is an SLE$_\kappa$-type curve $\eta_{\sigma}^{t_{-\sigma}}$ defined on $\mathbb{R}_+$ such that $\eta_\sigma=f_{K_{{-\sigma}}(t_{-\sigma})}\circ \eta_{\sigma}^{t_{-\sigma}}$. The conditions on $\kappa$ and $\underline\rho$ imply that a.s.\ the Lebesgue measure of $\eta_{\sigma}^{t_{-\sigma}}\cap \mathbb{R}$ is $0$. This implies that a.s.\ $\eta_+$ and $\eta_-$ satisfy the conditions in Definition \ref{commuting-Loewner} with ${\cal I}_+={\cal I}_-=\mathbb{R}_+$, ${\cal I}_+^*={\cal I}_-^*=\mathbb{Q}_+$, and ${\cal D}=\mathbb{R}_+^2$. So $(\eta_+,\eta_-)$ is a.s.\ a commuting pair of chordal Loewner curves. Here we omit $\cal D$ when it is $\mathbb{R}_+^2$. Let $K$ and $\mA$ be the hull function and the capacity function, $W_+,W_-$ be the driving functions, and $V_0,V_+,V_-$ be the force point functions started from $v_0,v_+,v_-$, respectively. Then $\widehat w_\sigma=W_\sigma|^{-\sigma}_0$, $\widehat w_{-\sigma}^\sigma=W_{-\sigma}|^{-\sigma}_0$, and $\widehat v_\nu^\sigma=V_\nu|^{-\sigma}_0$, $\nu\in\{0,+,-\}$. For each ${\cal F}^{-\sigma}$-stopping time $\tau_{-\sigma}$, $\eta_{\sigma}^{\tau_{-\sigma}}$ is the chordal Loewner curve driven by $W_\sigma|^{-\sigma}_{\tau_{-\sigma}}$ with speed $\mA|^{-\sigma}_{\tau_{-\sigma}}$, and the force point functions are $W_{-\sigma}|^{-\sigma}_{\tau_{-\sigma}}$ and $V_\nu|^{-\sigma}_{\tau_{-\sigma}}$, $\nu\in\{0,+,-\}$.
Let ${\cal F}^\pm$ be the $\mathbb{R}_+$-indexed filtration as in (B). Let ${\cal F}$ be the separable ${\mathbb{R}_+^2}$-indexed filtration generated by ${\cal F}^+$ and ${\cal F}^-$. From (A) we know that, for $\sigma\in\{+,-\}$, there exist standard ${\cal F}^\sigma$-Brownian motions $B_\sigma$ such that the driving functions $\widehat w_\sigma$ satisfy the SDE
\begin{equation} d\widehat w_\sigma\overset{\mathrm{ae}}{=} \sqrt{\kappa}dB_\sigma +\Big[\frac 2{\widehat w_\sigma-\widehat w_{-\sigma}^\sigma}+\sum_{\nu\in\{0,+,-\}} \frac{\rho_{\nu}}{\widehat w_\sigma-\widehat v_\nu^\sigma}\Big]dt_\sigma.\label{dhaw}\end{equation}
\begin{Lemma}[Two-curve DMP] Let $\cal G$ be a $\sigma$-algebra. Let ${\cal D}=\mathbb{R}_+^2$. Suppose that, conditionally on $\cal G$, $(\eta_+,\eta_-;{\cal D})$ is a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves started from $(w_+,w_-;v_0,v_+,v_-)$, which are ${\cal G}$-measurable random points. Let $K,W_\sigma,V_\nu$, $\sigma\in\{+,-\}$, $\nu\in\{0,+,-\}$, be respectively the hull function, driving functions, and force point functions. {For $\sigma\in\{+,-\}$, l}et ${\cal F}^\sigma{=({\cal F}^\sigma_t)_{t\ge 0}}$ be the $\mathbb{R}_+$-indexed filtration {such that for $t\ge 0$,} ${\cal F}^\sigma_t$ {is the $\sigma$-algebra generated by} ${\cal G}$ {and} $\eta_\sigma(s)$, $0\le s\le t$. Let $\overline{\cal F}$ be the right-continuous augmentation of the separable $\mathbb{R}_+^2$-indexed filtration generated by ${\cal F}^+$ and ${\cal F}^-$. Let $\underline\tau$ be an $\overline{\cal F}$-stopping time. Then on the event $E_{\underline\tau}:=\{\underline\tau\in\mathbb{R}_+^2,\eta_\sigma(\tau_\sigma)\not\in \eta_{-\sigma}[0,\tau_{-\sigma}],\sigma\in\{+,-\}\}$, there is a random commuting pair of chordal Loewner curves $(\widetilde\eta_1,\widetilde\eta_2;\widetilde{\cal D})$ with some speeds, which is the part of $(\eta_+,\eta_-;{\cal D})$ after $\underline\tau$ up to a conformal map (Definition \ref{Def-speeds}), and whose normalization conditionally on $\overline{\cal F}_{\underline\tau}\cap E_{\underline\tau}$ has the law of a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves started from $(W_+ ,W_- ;V_0 ,V_+ ,V_-)|_{\underline\tau}$. Here if $V_\nu(\underline\tau)=W_\sigma(\underline\tau)$ for some $\sigma\in \{+,-\}$ and $\nu\in\{0,+,-\}$, then as a force point $V_\nu(\underline\tau)$ is treated as $W_\sigma(\underline\tau)^{\sign(v_\nu-w_\sigma)}$.
\label{DMP}
\end{Lemma}
\begin{proof}
This lemma is similar to \cite[Lemma A.5]{Green-cut}, which is about the two-directional DMP of chordal SLE$_\kappa$ for $\kappa\le 8$. The argument also works here. See \cite[Remark A.4]{Green-cut}.
\end{proof}
\begin{Remark}
Here is an intuition for why Lemma \ref{DMP} is true. By Properties (A) and (B), it is easy to see that Lemma \ref{DMP} holds if the stopping time $\underline\tau$ has the form $(\tau_+,0)$ or $(0,\tau_-)$, which means that we only grow one curve up to a stopping time. Applying this argument for a second time, we see that the lemma holds if $\underline\tau=(\tau_+,\tau_-)$ is such that $\tau_+$ is a $({\cal G},{\cal F}^+)$-stopping time, and $\tau_-$ is a stopping time w.r.t.\ the filtration generated by $\cal G$, ${\cal F}^+_{\tau_+}$, and ${\cal F}^-$, which means that we first grow $\eta_+$ up to some stopping time, and then grow $\eta_-$ up to some stopping time depending on the part of $\eta_+$ that has grown. We may further alternately grow $\eta_+$ and $\eta_-$ up to stopping times depending on previously grown parts, and conclude that the lemma holds for any $\underline\tau$ constructed in this way. However, not every $\overline{\cal F}$-stopping time can be constructed in this way. To deal with the general case, an approximation argument was used in \cite{Green-cut}.
\end{Remark}
\subsection{Relation with the independent coupling}\label{section-indep}
Write $\underline w$ and $\underline v$ respectively for $(w_+,w_-)$ and $(v_0,v_+,v_-)$.
Let $\mathbb{P}^{\underline\rho}_{\underline w;\underline v}$ denote the joint law of the driving functions $(\widehat w_+,\widehat w_-)$ of a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves in $\mathbb{H}$ started from $(\underline w;\underline v)$. Now we fix $\underline w$ and $\underline v$, and omit the subscript in the joint law.
The $\mathbb{P}^{\underline\rho}_{{\underline w;\underline v}}$ is a probability measure on $\Sigma^2$, where $\Sigma:=\bigcup_{0<T\le\infty} C([0,T),\mathbb{R})$ was defined in \cite[Section 2]{decomposition}.
A random element in $\Sigma$ is a continuous stochastic process with random lifetime. The space $\Sigma^2$ is equipped with an ${\mathbb{R}_+^2}$-indexed filtration ${\cal F}$ defined by ${\cal F}_{(t_+,t_-)}={\cal F}^+_{t_+}\vee {\cal F}^-_{t_-}$, where ${\cal F}^+$ and ${\cal F}^-$ are $\mathbb{R}_+$-indexed filtrations generated by the first function and the second function, respectively.
Let $\mathbb{P}^{\underline\rho}_+$ and $\mathbb{P}^{\underline\rho}_-$ respectively denote the first and second marginal laws of $\mathbb{P}^{\underline\rho}$ on $\Sigma$. Then $\mathbb{P}^{\underline\rho}$ is different from the product measure $\mathbb{P}^{\underline\rho}_i:=\mathbb{P}^{\underline\rho}_+\times \mathbb{P}^{\underline\rho}_-$. We will derive some relation between $\mathbb{P}^{\underline\rho}$ and $\mathbb{P}^{\underline\rho}_i$. Suppose now that $(\widehat w_+,\widehat w_-)$ follows the law $\mathbb{P}^{\underline\rho}_i$ instead of $\mathbb{P}^{\underline\rho}$. Then (\ref{dhaw}) holds for two independent Brownian motions $B_+$ and $B_-$, and $\eta_+$ and $\eta_-$ are independent. Define ${\cal D}_{\disj}$ by (\ref{D-disj}). Then $(\eta_+,\eta_-;{\cal D}_{\disj})$ is a disjoint commuting pair of chordal Loewner curves. Since $B_+$ and $B_-$ are independent, for any $\sigma\in\{+,-\}$, $B_\sigma$ is a Brownian motion w.r.t.\ the filtration $({\cal F}^\sigma_{t}\vee {\cal F}^{-\sigma}_{\infty})_{t\ge 0}$, and we may view (\ref{dhaw}) as an $({\cal F}^\sigma_{t_\sigma}\vee {\cal F}^{-\sigma}_{\infty})_{t_\sigma\ge 0}$-adapted SDE. We will repeatedly apply It\^o's formula (cf.\ \cite{RY}) in this subsection, where $\sigma\in\{+,-\}$, the variable $t_{-\sigma}$ of every function is a fixed finite ${\cal F}^{-\sigma}$-stopping time $t_{-\sigma}$ unless it is set to be zero using $|^{-\sigma}_0$, and all SDE are $({\cal F}^\sigma_{t_\sigma}\vee {\cal F}^{-\sigma}_{\infty})_{t_\sigma\ge 0}$-adapted in $t_\sigma$.
By (\ref{-3}) we get the SDE for $W_\sigma$:
\begin{equation} \partial_\sigma W_\sigma=W_{\sigma,1} \partial \widehat w_\sigma+\Big(\frac{\kappa}{2}-3\Big) W_{\sigma,2}\partial t_\sigma .\label{paWsigma}\end{equation}
We will use the boundary scaling exponent $\bb$ and central charge $\cc$ in the literature defined by
$ \bb=\frac{6-\kappa}{2\kappa}$ and $\cc=\frac{(3\kappa-8)(6-\kappa)}{2\kappa}$.
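For orientation, these formulas give $\bb=1$ and $\cc=-2$ at $\kappa=2$, $\bb=\frac 14$ and $\cc=1$ at $\kappa=4$, and $\cc=0$ exactly at $\kappa=\frac 83$ and $\kappa=6$; these particular values are not needed below.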
By (\ref{1/2-4/3}) we get the SDE for $W_{\sigma,N}^{\bb}$:
\begin{equation} \frac{\partial_\sigma W_{\sigma,N}^{\bb}}{W_{\sigma,N}^{\bb}}=\bb \frac{W_{\sigma,2}}{W_{\sigma,1}} \partial \widehat w_\sigma+ \frac{\cc}6 W_{\sigma,S}\partial t_\sigma . \label{paWsigmaN}\end{equation}
Recall the $E_{X,Y}$ defined in (\ref{RXY}). For $Y\in \{W_{-\sigma},V_0,V_+,V_-\}$, $E_{W_\sigma,Y}(t_+,t_-)$ equals a function in $t_{-\sigma}$ times $f(\underline t,W_\sigma(t_\sigma \underline e_\sigma), Y(t_\sigma \underline e_\sigma))$, where
\begin{equation} f(\underline t,w,y):=\left\{
\begin{array}{ll}
(g_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}(w)-g_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}(y))/(w-y), &w\ne y;\\
g_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}'(w), & w=y.
\end{array}\right.
\label{f(t,w,y)}\end{equation}
Using (\ref{pa-X},\ref{paWsigma}) and (\ref{WV+g}-\ref{V0-g}) we see that $E_{W_\sigma,Y}$ satisfies the SDE
$$\frac{\partial_\sigma E_{W_\sigma,Y}}{E_{W_\sigma,Y}}\overset{\mathrm{ae}}{=} \Big[\frac{W_{\sigma,1}}{W_{\sigma}-Y}-\frac{W_{\sigma,1}}{W_{\sigma}-Y}\Big|^{-\sigma}_0 \Big]d\widehat w_\sigma +\Big[\frac{2W_{\sigma,1}^2}{(W_\sigma-Y)^2}-\frac{2W_{\sigma,1}^2}{(W_\sigma-Y)^2} \Big|^{-\sigma}_0\Big]\partial t_\sigma$$
\begin{equation} -\frac{\kappa }{W_\sigma-Y}\Big|^{-\sigma}_0 \cdot \Big[\frac{W_{\sigma,1}}{W_{\sigma }-Y}- \frac{W_{\sigma,1}}{W_{\sigma }-Y}\Big|^{-\sigma}_0 \Big]\partial t_\sigma+\Big(\frac\kappa 2-3\Big)\frac{W_{\sigma,2}}{W_\sigma-Y}\partial t_\sigma.\label{paEWY}\end{equation}
Recall the $Q$ defined in (\ref{F}).
Define a positive continuous function $M$ on ${\cal D}_{\disj}$ by
\begin{equation} M=Q^{-\frac{\cc}6} E_{W_+,W_-}^{\frac 2\kappa} \prod_{\nu_1<\nu_2\in\{0,+,-\}} E_{V_{\nu_1},V_{\nu_2}}^{\frac{\rho_{\nu_1}\rho_{\nu_2}}{2\kappa}} \prod_{\sigma\in\{+,-\}} \Big[ W_{\sigma,N}^{\bb} \prod_{\nu\in\{0,+,-\}} E_{W_\sigma,V_\nu}^{\frac{\rho_\nu}\kappa} V_{\nu,N}^{\frac{\rho_\nu(\rho_\nu+4-\kappa)}{4\kappa}}\Big]. \label{Mirho-crho}\end{equation}
Then $M(t_+,t_-)=1$ if {$t_+=0$ or $t_-=0$}. Combining (\ref{paF},\ref{pajhaW},\ref{paV1},\ref{paEXY},\ref{dhaw},\ref{paWsigmaN},\ref{paEWY}) and using the facts that $\widehat w_\sigma=W_\sigma|^{-\sigma}_0$, $\widehat w_{-\sigma}^\sigma=W_{-\sigma}|^{-\sigma}_0$ and $\widehat v_\nu^\sigma=V_\nu|^{-\sigma}_0$, we get the SDE for $M$ in $t_\sigma$:
$$\frac{\partial_\sigma M}{M}=\sqrt\kappa\,\bb \frac{W_{\sigma,2}}{W_{\sigma,1}}\partial B_\sigma-\Big[\frac 2{\widehat w_\sigma-\widehat w^\sigma_{-\sigma}}+\sum_{\nu\in\{0,+,-\}} \frac{\rho_{\nu}}{\widehat w_\sigma-\widehat v_\nu^\sigma}\Big] \frac{\partial B_\sigma}{\sqrt\kappa}+$$
\begin{equation}+ \Big[\frac {2W_{\sigma,1} }{W_\sigma-W_{-\sigma}}+\sum_{\nu\in\{0,+,-\}} \frac{\rho_{\nu}W_{\sigma,1} }{W_\sigma-V_\nu}\Big]\frac{\partial B_\sigma}{\sqrt\kappa} .\label{paM}\end{equation}
This means that $M|^{-\sigma}_{t_{-\sigma}}$ is a local martingale in $t_\sigma$.
For $\sigma\in\{+,-\}$, let $\Xi_\sigma$ denote the space of simple crosscuts of $\mathbb{H}$ that separate $w_\sigma$ from $w_{-\sigma} $ and $\infty$. Note that the crosscuts also separate $w_\sigma$ from $v_{-\sigma}$ since $v_{-\sigma}$ is further away from $w_\sigma$ than $w_{-\sigma}$.
But the crosscuts {may not} separate $w_\sigma$ from $v_\sigma$ or $v_0$. For $\sigma\in\{+,-\}$ and $\xi_\sigma\in\Xi_\sigma$, let $\tau^\sigma_{\xi_\sigma}$ be the first time that $\eta_\sigma$ hits the closure of $\xi_\sigma$; or $\infty$ if such a time does not exist. We see that $\tau^\sigma_{\xi_\sigma}\le \hcap_2(\xi_\sigma)<\infty$.
Let $\Xi=\{(\xi_+,\xi_-)\in\Xi_+\times\Xi_-,\dist(\xi_+,\xi_-)>0\}$. For $\underline\xi=(\xi_+,\xi_-)\in\Xi$, let $\tau_{\underline\xi}=(\tau^+_{\xi_+},\tau^-_{\xi_-})$. {Let $\Xi^*$ be the set of $(\xi_+,\xi_-)\in\Xi$ such that $\xi_+$ and $\xi_-$ are polygonal curves whose vertices have rational coordinates.}
{Then $\Xi^*$ is a countable subset of} $\Xi$ such that for every $\underline\xi=(\xi_+,\xi_-)\in\Xi$ there is $(\xi_+^*,\xi_-^*)\in\Xi^*$ such that $\xi_\sigma$ is enclosed by $\xi_\sigma^*$, $\sigma\in\{+,-\}$. See Figure \ref{fig-crosscut}.
\begin{figure}
\begin{center}
\includegraphics[width=5in]{crosscut.png}
\end{center}
\caption{{ The figure above illustrates an element $(\xi_+,\xi_-)\in \Xi^*$ and the corresponding stopping times $\tau^+_{\xi_+}$ and $\tau^-_{\xi_-}$ for the curves $\eta_+$ and $\eta_-$.}} \label{fig-crosscut}
\end{figure}
\begin{Lemma}
For any $\underline\xi\in\Xi$ and $R>0$, there is a constant $C>0$ depending only on $\kappa,\underline\rho, \underline\xi, R$, such that if $|v_+-v_-|\le R$, then $|\log M|\le C$ on $[\underline 0, \tau_{\underline\xi}]$. \label{uniform}
\end{Lemma}
\begin{proof}
Fix $\underline\xi=(\xi_+,\xi_-)\in\Xi$ and $R>0$. Suppose $|v_+-v_-|\le R$. Throughout the proof, a constant is a number that depends only on $\underline\xi, R$; and a function defined on $[\underline 0, \tau_{\underline\xi}]$ is said to be uniformly bounded if its absolute value on $[\underline 0, \tau_{\underline\xi}]$ is bounded above by a constant. By the definition of $M$, it suffices to prove that $|\log Q|$, $|\log E_{Y_1,Y_2}|$, $Y_1\ne Y_2\in\{W_+,W_-,V_0,V_+,V_-\}$, $|\log W_{\sigma,N}|$, $\sigma\in\{+,-\}$, and $|\log V_{\nu,N}|$, $\nu\in\{0,+,-\}$, are all uniformly bounded.
Let $K_{\xi_\sigma}=\Hull(\xi_\sigma)$, $\sigma\in\{+,-\}$, and $K_{\underline \xi}=K_{\xi_+}\cup K_{\xi_-}$. Let $I=(\max(\overline\xi_-\cap\mathbb{R}),\min(\overline\xi_+\cap\mathbb{R}))$. Then $|g_{K_{\underline\xi}}(I)|$ is a positive constant. By symmetry we assume that either $v_0\in\overline{K_{\xi_-}}$ or $v_0\in I$ and $g_{K_{\underline\xi}}(v_0)$ is no more than the midpoint of $g_{K_{\underline\xi}}(I)$. So we may pick $v_0^1<v_0^2\in I$ with $v_0\le v_0^1$ such that $|g_{K_{\underline\xi}}(v_0^2)-g_{K_{\underline\xi}}(v_0^1)|\ge |g_{K_{\underline\xi}}(I)|/3$. Let $V_0^j$ be the force point function started from $v_0^j$, $j=1,2$. By (\ref{VWVWV}), $V_+\ge W_+\ge V_0^2>V_0^1\ge V_0\ge W_-\ge V_-$ on $[\underline 0,\tau_{\underline\xi}]$.
By Proposition \ref{Prop-contraction}, $W_{+,1},W_{-,1}$ are uniformly bounded by $1$.
For $\sigma\in\{+,-\}$, the function $(t_+,t_-)\mapsto t_\sigma$ is bounded on $[\underline 0, \tau_{\underline\xi}]$ by $\hcap_2(K_{\underline\xi})$.
For any $\underline t\in [\underline 0, \tau_{\underline\xi}]$, since $g_{K_{\underline\xi}}=g_{K_{\underline\xi}/K(\underline t)}\circ g_{K(\underline t)}$, by Proposition \ref{Prop-contraction} we get $0<g_{K_{\underline\xi}}'\le g_{K(\underline t)}'\le 1$ on $[v_0^1,v_0^2]$.
Since $V_0^j(\underline t)=g_{K(\underline t)}(v_0^j)$, $j=1,2$, we have
$|V_0^2(\underline t)-V_0^1(\underline t)|\ge |g_{K_{\underline\xi}}(v_0^2)-g_{K_{\underline\xi}}(v_0^1)|\ge |g_{K_{\underline\xi}}(I)|/3$.
So $\frac 1{V_0^2-V_0^1}$ is uniformly bounded, which then implies that $\frac1{|W_\sigma-W_{-\sigma}|}$ and $\frac1{|W_\sigma-V_{-\sigma}|}$ are uniformly bounded, $\sigma\in\{+,-\}$. From (\ref{F}) we see that $|\log Q|$ is uniformly bounded. From (\ref{pajhaW},\ref{paV1}) and the fact that $W_{-\sigma,N}|^\sigma_0=V_{-\sigma,N}|^\sigma_0=1$, we see that $|\log W_{-\sigma,N}|$ and $|\log V_{-\sigma,N}|$, $\sigma\in\{+,-\}$, are uniformly bounded. We also know that $\frac1{|W_+-V_0|}\le \frac 1{|V_0^2-V_0^{1}|}$ is uniformly bounded. From (\ref{paV1}) with $\nu=0$ and $\sigma=+$ and the fact that $ V_{0,N}|^+_0\equiv 1$ we find that $|\log V_{0,N}|$ is uniformly bounded.
Now we estimate $|\log E_{Y_1,Y_2}|$. By (\ref{V-V}), $|V_+-V_-|$ is uniformly bounded. Thus, for any $Y_1\ne Y_2\in \{W_+,W_-,V_0,V_+,V_-\}$, $|Y_1-Y_2|\le |V_+-V_-|$ is uniformly bounded. If $Y_1\in\{W_+,V_+\}$ and $Y_2\in\{W_-,V_-\}$, then $\frac1{|Y_1-Y_2|}\le \frac 1{ |V_0^1-V_0^2|}$ is uniformly bounded. From (\ref{RXY}) we see that $|\log E_{Y_1,Y_2}|$ is uniformly bounded. If $Y_1,Y_2\in \{W_{-\sigma},V_{-\sigma}\}$ for some $\sigma\in\{+,-\}$, then $\frac 1{|Y_j-W_\sigma|}$, $j=1,2$, are uniformly bounded, and then the uniform boundedness of $|\log E_{Y_1,Y_2}|$ follows from (\ref{paEXY}) and the fact that $E_{Y_1,Y_2}|^\sigma_0\equiv 1$. Finally, we consider the case that $Y_1=V_0$. If $Y_2\in\{W_+,V_+\}$, then $\frac 1{|Y_2-V_0|}\le \frac 1{|V_0^2-V_0^1|}$, which is uniformly bounded. We can again use (\ref{RXY}) to get the uniform boundedness of $|\log E_{V_0,Y_2}|$. If $Y_2\in\{W_-,V_-\}$, then $\frac 1{|V_0-W_+|}$ and $\frac 1{|Y_2-W_+|}$ are uniformly bounded. The uniform boundedness of $|\log E_{V_0,Y_2}|$ then follows from (\ref{paEXY}) with $\sigma=+$, $X=V_0$, $Y=Y_2$, and the fact that $E_{V_0,Y_2}|^+_0\equiv 1$.
\end{proof}
\begin{Corollary}
For any $\underline\xi\in\Xi$, $M(\cdot \wedge \tau_{\underline\xi})$ is an ${\cal F}$-martingale closed by $M(\tau_{\underline\xi})$ w.r.t.\ $\mathbb{P}^{\underline\rho}_i$. \label{Doob}
\end{Corollary}
\begin{proof}
This follows from (\ref{paM}), Lemma \ref{uniform}, and the same argument {used to prove} \cite[Corollary 3.2]{Two-Green-interior}.
\end{proof}
\begin{Lemma}
For any $\underline\xi=(\xi_+,\xi_-)\in\Xi$, $\mathbb{P}^{\underline\rho}$ is absolutely continuous w.r.t.\ $\mathbb{P}^{\underline\rho}_i$ on ${\cal F}^+_{\tau^+_{\xi_+}}\vee{\cal F}^-_{\tau^-_{\xi_-}}$, and the RN derivative is $M(\tau_{\underline\xi})$. \label{RN-M-ic}
\end{Lemma}
\begin{proof}
Let $\underline\xi=(\xi_+,\xi_-)\in\Xi$. The above corollary implies that $\mathbb{ E}^{\underline\rho}_i[M(\tau_{\underline\xi})]=M(\underline 0)=1$. So we may define a new probability measure $\mathbb{P}^{\underline\rho}_{\underline \xi}$ by $ {d \mathbb{P}^{\underline\rho}_{\underline \xi}}= {M(\tau_{\underline\xi})} {d\mathbb{P}^{\underline\rho}_i}$.
Since $M(t_+,t_-)=1$ when $t_+t_-=0$, from the above corollary we know that the marginal laws of $\mathbb{P}^{\underline\rho}_{\underline \xi}$ agree with those of $\mathbb{P}^{\underline\rho}_i$, which are $\mathbb{P}^{\underline\rho}_+$ and $\mathbb{P}^{\underline\rho}_-$. Suppose $(\widehat w_+,\widehat w_-)$ follows the law $\mathbb{P}^{\underline\rho}_{\underline \xi}$. Then they satisfy Condition (A) in Section \ref{subsection-commuting-SLE-kappa-rho}. Now we write $\tau_\pm$ for $\tau^\pm_{\xi_\pm}$, and $\underline\tau$ for $\tau_{\underline\xi}$. Let $\sigma_-\le \tau_-$ be an ${\cal F}^-$-stopping time. From Lemma \ref{OST} and Corollary \ref{Doob},
$\frac{d \mathbb{P}^{\underline\rho}_{\underline \xi}|{\cal F} _{(t_+,\sigma_-)}}{d\mathbb{P}^{\underline\rho}_i|{\cal F} _{(t_+,\sigma_-)}}= M(t_+\wedge {\tau_+},{\sigma_-})$, $0\le t_+<\infty$.
From Girsanov{'s} Theorem and (\ref{dhaw},\ref{paM}), we see that, under $\mathbb{P}^{\underline\rho}_{\underline \xi}$, $\widehat w_+$ satisfies the following SDE up to $\tau_+$:
\begin{align*}
d\widehat w_+=&\sqrt\kappa d B^{\tau_-}_{+}+\kappa \bb \frac{W_{+,2} }{W_{+,1} }\Big|^-_{\tau_-} d t_++ \frac{2 W_{+,1} }{W_+ -W_{-} }\Big|^-_{\tau_-}\,d t_+ + \sum_{\nu\in\{0,+,-\}} \frac{\rho_\nu W_{+,1} }{W_+ -V_\nu }\Big|^-_{\tau_-}\,d t_+ ,
\end{align*}
where $B^{\tau_-}_{+}$ is a standard $({\cal F} _{(t_+,\sigma_-)})_{t_+\ge 0}$-Brownian motion under $\mathbb{P}^{\underline\rho}_{\underline \xi}$. Using Lemma \ref{W=gw} and (\ref{-3}) we find that $W_+(\cdot, \sigma_-)$ under $\mathbb{P}^{\underline\rho}_{\underline \xi}$ satisfies the following SDE up to $\tau_+$:
\begin{equation} d W_+|^{-}_{\sigma_{-}} \overset{\mathrm{ae}}{=} \sqrt\kappa W_{+,1} |^{-}_{\sigma_{-}}d B_{+}^{\sigma_{-}} + \frac {2W_{+,1}^2}{W_{+}-W_{-}}\bigg |^{-}_{\sigma_{-}} d t_+ +\sum_{\nu\in \{0,+,-\}} \frac {\rho_\nu W_{+,1} ^2}{W_{+}-V_{\nu} }\bigg|^{-}_{\sigma_{-}} d t_+ .\label{SDE-paW}\end{equation}
Note that the SDE (\ref{SDE-paW}) agrees with the SDE for $W_+(\cdot, \sigma_-)$ if $(\eta_+,\eta_-)$ is a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves started from $(\underline w;\underline v)$, where the speed is $W_{+,1}(\cdot,\sigma_-)^2$. There is a similar SDE for $W_-(\sigma_+,\cdot)$ if $\sigma_+\le \tau_+$ is an ${\cal F}^+$-stopping time. Thus, $ { \mathbb{P}^{\underline\rho}_{\underline \xi}}$ agrees with $\mathbb{P}^{\underline\rho}$ on ${\cal F}^+_{\tau^+_{\xi_+}}\vee{\cal F}^-_{\tau^-_{\xi_-}}$, which implies the conclusion of the lemma.
\end{proof}
\begin{Corollary}
If $\underline T$ is an ${\cal F}$-stopping time, then $\mathbb{P}^{\underline\rho} $ is absolutely continuous w.r.t.\ $\mathbb{P}^{\underline\rho}_i $ on ${\cal F}_{\underline T}\cap \{\underline T\in{\cal D}_{\disj}\}$, and the RN derivative is $M(\underline T)$. In other words, if $A\in {\cal F}_{\underline T}$ and $A\subset \{\underline T\in{\cal D}_{\disj}\}$, then $\mathbb{P}^{\underline\rho} [A]=\mathbb{ E}^{\underline\rho}_i [{\bf 1}_A M(\underline T)]$. \label{RN-M-ic-cor}
\end{Corollary}
\begin{proof}
Since $\{\underline T\in{\cal D}_{\disj}\}= \bigcup_{\underline\xi\in \Xi^*} \{\underline T <\tau_{\underline\xi}\}$ and $\Xi^*$ is countable, it suffices to prove the statement with $ \{\underline T <\tau_{\underline\xi}\}$ in place of $ \{\underline T\in{\cal D}_{\disj}\}$ for every $\underline\xi\in\Xi^*$. Fix $\underline\xi=(\xi_+,\xi_-)\in\Xi^*$. We write ${\cal F}_{\underline\xi}$ for ${\cal F}^+_{\tau^+_{\xi_+}}\vee {\cal F}^-_{\tau^-_{\xi_-}}$. Let $A\in{\cal F}_{\underline T}\cap \{\underline T <\tau_{\underline\xi}\}$. Fix $\underline t=(t_+,t_-)\in\mathbb{Q}_+^2$. Let $A_{\underline t}=A\cap \{\underline T\le \underline t <\tau_{\underline\xi}\}$. For every $B_+\in{\cal F}^+_{t_+}$ and $B_-\in{\cal F}^-_{t_-}$, $B_+\cap B_-\cap \{\underline t<\tau_{\underline\xi}\}\in {\cal F}^+_{\tau^+_{\xi_+}}\vee {\cal F}^-_{\tau^-_{\xi_-}}={\cal F}_{\underline\xi}$. Using a monotone class argument, we conclude that ${\cal F}_{\underline t}\cap \{\underline t<\tau_{\underline\xi}\}\subset {\cal F}_{\underline\xi}$. Thus, $A_{\underline t}\in{\cal F}_{\underline t}\cap \{\underline t<\tau_{\underline\xi}\}\subset {\cal F}_{\underline\xi}$. Since $A=\bigcup_{\underline t\in\mathbb{Q}_+^2} A_{\underline t}$, we get $A\in{\cal F}_{\underline\xi}$. By Lemma \ref{RN-M-ic}, Proposition \ref{OST}, and the martingale property of $M(\cdot\wedge \tau_{\underline\xi})$, we get
$\mathbb{P}^{\underline\rho}[A]=\mathbb{ E}^{\underline\rho}_i [{\bf 1}_A M(\tau_{\underline\xi})]=\mathbb{ E}^{\underline\rho}_i [{\bf 1}_A M(\underline T\wedge \tau_{\underline\xi})]=\mathbb{ E}^{\underline\rho}_i [{\bf 1}_A M(\underline T)]$.
\end{proof}
{
\begin{Remark}
We call the $M$ a two-time-parameter local martingale. It plays the same role as the $M$ defined in \cite[Formula (4.3)]{reversibility} and the $M$ defined in \cite[Formula (4.34)]{duality}.
\end{Remark}
}
\subsection{SDE along a time curve up to intersection} \label{section-diffusion}
{Recall that $(\eta_+,\eta_-)$ is a.s.\ a commuting pair of chordal Loewner curves with the time region $\mathbb{R}_+^2$.} Now assume that $v_+-v_0=v_0-v_-$.
Let $\underline u=(u_+,u_-):[0,T^u)\to \mathbb{R}_+^2$ be {the function $\underline u$ developed} in Section \ref{time curve} {for this random pair $(\eta_+,\eta_-)$}. By Lemma \ref{Beurling}, a.s.\ $T^u=\infty$.
By Proposition \ref{Prop-u(t)}, $\underline u(t)$ is an $({\cal F}_{ \underline t})$-stopping time for each $t\ge 0$. We then get an $\mathbb{R}_+$-indexed filtration ${\cal F}^u$ by ${\cal F}^u_t:={\cal F}_{ \underline u(t)}$, $t\ge 0$. For $\underline \xi=(\xi_+,\xi_-)\in\Xi$, let $\tau^u_{\underline\xi}$ denote the first $t\ge 0$ such that $u_+(t)=\tau^+_{\xi_+}$ or $u_-(t)=\tau^-_{\xi_-}$.
Note that such a time exists and is finite because $(\tau^+_{\xi_+},\tau^-_{\xi_-})\in\cal D$. The following proposition has the same form as \cite[Lemma 4.2]{Two-Green-interior}{, and its proof is also the same as the proof there}.
\begin{Proposition}
For every $\underline\xi\in\Xi$, $\tau^u_{\underline \xi}$ is an ${\cal F}^u$-stopping time, and $\underline u(\tau^u_{\underline \xi})$ and $\underline u(t\wedge\tau^u_{\underline \xi})$, $t\ge 0$, are all ${\cal F}$-stopping times. \label{u-st}
\end{Proposition}
Assume that $(\widehat w_+,\widehat w_-)$ follows the law $\mathbb{P}^{\underline\rho}_i$. {This assumption will be used up to Lemma \ref{Lem-uB}.} Let $\eta_\pm$ be the chordal Loewner curve driven by $\widehat w_\pm$. Let ${\cal D}_{\disj}$ be as before. Let $\widehat w_{-\sigma}^\sigma(t)$ and $\widehat v_\nu^\sigma(t)$ be the force point functions for $\eta_\sigma$ started from $w_{-\sigma}$ and $v_\nu$ respectively, $\nu\in\{0,+,-\}$, $\sigma\in\{+,-\}$. Define $\widehat B_\sigma$, $\sigma\in\{+,-\}$, by
\begin{equation} \sqrt{\kappa }\widehat B_\sigma(t)=\widehat w_\sigma(t)-w_\sigma-\int_0^t \frac{2ds}{\widehat w_\sigma(s)-\widehat w_{-\sigma}^\sigma(s)}-\sum_{\nu\in \{0,+,-\}} \int_0^t \frac{\rho_{\nu}ds}{\widehat w_\sigma(s)-\widehat v_\nu^\sigma(s)}.\label{hawhaB}\end{equation}
Then $\widehat B_+$ and $\widehat B_-$ are independent standard Brownian motions. So we get five ${\cal F}$-martingales on ${\cal D}_{\disj}$: $\widehat B_+(t_+)$, $\widehat B_-(t_-)$, $\widehat B_+(t_+)^2-t_+$, $\widehat B_-(t_-)^2-t_-$, and $\widehat B_+(t_+)\widehat B_-(t_-)$. Fix $\underline\xi\in\Xi$. Using Propositions \ref{OST} and \ref{Prop-u(t)} and the fact that $u_\pm$ is uniformly bounded above on $[0,\tau_{\underline\xi}]$, we conclude that $\widehat B^u_\sigma(t \wedge \tau^u_{\underline \xi})$, $\widehat B^u_\sigma(t \wedge \tau^u_{\underline \xi})^2- u_\sigma(t\wedge \tau^u_{\underline \xi})$, $\sigma\in\{+,-\}$, and $\widehat B^u_+(t \wedge \tau^u_{\underline \xi}) \widehat B^u_-(t\wedge \tau^u_{\underline \xi} )$ are all ${\cal F}^u$-martingales under $\mathbb{P}^{\underline\rho}_i$. Recall that for a function $X$ defined on $\cal D$, we use $X^u$ to denote the function $X\circ \underline u$ defined on $[0,T^u)$. This rule applies even if $X$ depends only on $t_+$ or $t_-$ (for example, $\widehat B^u_\sigma(t)=\widehat B_\sigma(u_\sigma(t))$); but does not apply to ${\cal F}^u,T^u,T^u_{\disj},\tau^u_{\underline\xi}$.
Thus, the quadratic variation and covariation of $\widehat B^u_+$ and $\widehat B^u_-$ satisfy
\begin{equation} \langle \widehat B^u_+\rangle_t\overset{\mathrm{ae}}{=} u_+(t),\quad \langle \widehat B^u_-\rangle_t\overset{\mathrm{ae}}{=} u_-(t),\quad \langle \widehat B^u_+,\widehat B^u_-\rangle_t=0,\label{quadratic}\end{equation}
up to $\tau^u_{\underline\xi}$. By Corollary \ref{Doob} and Proposition \ref{OST}, $M^u(\cdot\wedge \tau^u_{\underline\xi})$ is an ${\cal F}^u$-martingale. Let $T^u_{\disj}$ denote the first $t$ such that $\underline u(t)\not\in{\cal D}_{\disj}$. Since $T^u_{\disj}=\sup_{\underline \xi\in\Xi}\tau^u_{\underline\xi}=\sup_{\underline \xi\in\Xi^*}\tau^u_{\underline\xi}$, and $\Xi^*$ is countable, we see that, $T^u_{\disj}$ is an ${\cal F}^u$-stopping time.
We now compute the SDE for $M^u$ up to $T^u_{\disj}$ in terms of $\widehat B^u_+$ and $\widehat B^u_-$.
Using (\ref{Mirho-crho}) we may express $M^u$ as a product of several factors, among which $E_{W_+,W_-}^u$, $(W_{\sigma,N}^u)^{\bb}$, $(E_{W_\sigma,V_\nu}^u)^{\rho_\nu/\kappa}$, $\sigma\in\{+,-\}$, $\nu\in\{0,+,-\}$, contribute the local martingale part, and other factors are differentiable in $t$. For $\sigma\in\{+,-\}$, since $W_\sigma(t_+,t_-)=g_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}(\widehat w_\sigma(t_\sigma))$, using (\ref{-3})
we get the ${\cal F}^u$-adapted SDEs:
\begin{equation} dW_\sigma^u = W_{\sigma,1}^ud\widehat w^u_\sigma+\Big(\frac \kappa 2-3\Big)W_{\sigma,2}^u u_\sigma'dt+\frac{2(W_{-\sigma,1}^u)^2u_{-\sigma}'}{W_\sigma^u-W_{-\sigma}^u} \,dt.\label{dWju}\end{equation}
Since $W_{\sigma,N}=\frac{W_{\sigma,1}}{W_{\sigma,1}|^\sigma_0}$, $W_{\sigma,1}(t_+,t_-)=g_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}'(\widehat w_\sigma(t_\sigma))$, $W_{\sigma,1}|^\sigma_0$ is differentiable in $t_{-\sigma}$, and $g_{K_{-\sigma}^{t_\sigma}(t_{-\sigma})}'$ is differentiable in both $t_\sigma$ and $t_{-\sigma}$, we get
the SDE for $(W_{\sigma,N}^u)^{\bb}$:
$$\frac{d(W_{\sigma,N}^u)^{\bb}}{(W_{\sigma,N}^u)^{\bb}}\overset{\mathrm{ae}}{=}\bb\frac{W_{\sigma,2}^u}{W_{\sigma,1}^u}\,\sqrt\kappa d \widehat B^u_\sigma+\mbox{drift terms}.$$
For the SDE for $(E^u_{W_+,W_-})^{\frac 2\kappa}$, note that when $X=W_+$ and $Y=W_-$, the numerators and denominators in (\ref{RXY}) never vanish. So using (\ref{dWju}) we get
$$\frac{d(E_{W_+,W_-}^u)^{\frac2\kappa}}{ (E_{W_+,W_-}^u)^{\frac{2}\kappa}}=\frac 2\kappa \sum_{\sigma\in\{+,-\}}\Big[\frac{W_{\sigma,1}^u}{W_\sigma^u-W_{-\sigma}^u}-\frac 1{\widehat w_\sigma^u-(\widehat w_{-\sigma}^\sigma)^u}\Big]\sqrt\kappa d\widehat B^u_\sigma +\mbox{drift terms}.$$
Note that $E_{W_\sigma,V_\nu}^u(t)$ equals $f(\underline u(t), \widehat w^u_\sigma(t),(\widehat v_\nu^\sigma)^u(t))$ times a differentiable function in $u_{-\sigma}(t)$, where $f(\cdot,\cdot,\cdot)$ is given by (\ref{f(t,w,y)}). Using (\ref{dWju}) we get the SDE for $(E_{W_\sigma,V_\nu}^u)^{\frac{\rho_\nu}\kappa}$:
$$\frac{d(E_{W_\sigma,V_\nu}^u)^{\frac{\rho_\nu}\kappa}}{ (E_{W_\sigma,V_\nu}^u)^{\frac{\rho_\nu}\kappa}}\overset{\mathrm{ae}}{=} \frac{\rho_\nu}{\kappa}\Big[\frac{W_{\sigma,1}^u }{W_\sigma^u-V_\nu^u}-\frac 1{\widehat w_\sigma^u- (\widehat v_\nu^\sigma)^u}\Big] \,\sqrt\kappa d\widehat B^u_\sigma+\mbox{drift terms}.$$
Here if at any time $t$, $(\widehat v_\nu^\sigma)^u(t)=\widehat w_\sigma^u(t)$, then the function inside the square brackets is understood as $\frac 12\frac{W^u_{\sigma,2}(t)}{W^u_{\sigma,1}(t)}$, which is the limit of the function as $(\widehat v_\nu^\sigma)^u(t)\to \widehat w_\sigma^u(t)$.
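To see where this limit comes from, write (for this computation only) $g$ for $g_{K_{-\sigma}^{u_\sigma(t)}(u_{-\sigma}(t))}$, $w$ for $\widehat w_\sigma^u(t)$, and $v$ for $(\widehat v_\nu^\sigma)^u(t)$, so that the function inside the square brackets equals $\frac{g'(w)}{g(w)-g(v)}-\frac 1{w-v}$. A Taylor expansion of $g$ at $w$ gives
$$\frac{g'(w)}{g(w)-g(v)}-\frac 1{w-v}=\frac{g''(w)}{2g'(w)}+O(|w-v|)\quad \mbox{as } v\to w,$$
and $g'(w)=W_{\sigma,1}^u(t)$, $g''(w)=W_{\sigma,2}^u(t)$.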
Combining the above displayed formulas and using the fact that $M^u$ and $\widehat B^u_\pm$ are all ${\cal F}^u$-local martingales under $\mathbb{P}^{\underline\rho}_i$, we get
\begin{align}
\frac{d M^u}{M^u}\overset{\mathrm{ae}}{=} & \sum_{\sigma\in\{+,-\}}\bigg [ \kappa \bb\frac{W_{\sigma,2}^u}{W_{\sigma,1}^u}+2 \Big[\frac{ W_{\sigma,1}^u}{W_\sigma^u-W_{-\sigma}^u}-\frac 1{\widehat w_\sigma^u-(\widehat w_{-\sigma}^\sigma)^u}\Big]+ \nonumber\\
&+\sum_{\nu\in\{0,+,-\}} \rho_{\nu} \Big[\frac{ W_{\sigma,1}^u }{W_\sigma^u-V_\nu^u}-\frac {1}{\widehat w_\sigma^u- (\widehat v_\nu^\sigma)^u}\Big] \bigg ]\, \frac{d\widehat B^u_\sigma}{\sqrt\kappa}.\label{dMitocu}
\end{align}
From Corollary \ref{RN-M-ic-cor} and Proposition \ref{u-st} we know that, for any $\underline\xi\in\Xi$ and $t\ge 0$,
\begin{equation} \frac{d \mathbb{P}^{\underline\rho}|{\cal F}_{ \underline u(t\wedge \tau^u_{\underline\xi})}}{d \mathbb{P}^{\underline\rho}_i|{\cal F}_{ \underline u(t\wedge \tau^u_{\underline\xi})}}= {M^u(t\wedge \tau^u_{\underline\xi} )} .\label{RNito4xi}\end{equation}
We will use a Girsanov argument to derive the SDEs for $\widehat w_+^u$ and $\widehat w_-^u$ up to $T^u_{\disj}$ under $\mathbb{P}^{\underline\rho}$.
For $\sigma\in\{+,-\}$, define a process $\widetilde B^u_\sigma(t)$ such that $\widetilde B^u_\sigma(0)=0$ and
\begin{align}
d\widetilde B_\sigma^u=&d\widehat B_\sigma^u-\bigg[\kappa \bb \frac{W_{\sigma,2}^u}{W_{\sigma,1}^u}+ \Big[\frac{2W_{\sigma,1}^u}{W_\sigma^u-W_{-\sigma}^u}-\frac 2{\widehat w_\sigma^u-(\widehat w_{-\sigma}^\sigma)^u}\Big] \nonumber \\
& +\sum_{\nu\in\{0,+,-\}} \Big[\frac{\rho_\nu W_{\sigma,1}^u }{W_\sigma^u-V_\nu^u}-\frac {\rho_\nu}{\widehat w_\sigma^u- (\widehat v_\nu^\sigma)^u}\Big] \bigg] \, \frac{u_\sigma'(t)}{\sqrt\kappa}\, dt.\label{tilwju}
\end{align}
\begin{Lemma}
For any $\sigma\in\{+,-\}$ and $\underline \xi\in\Xi$, $|\widetilde B_\sigma^u|$ is bounded on $[0,\tau^u_{\underline \xi}]$ by a constant depending only on $\kappa,\underline\rho,\underline w,\underline v,\underline\xi$. \label{uniform2}
\end{Lemma}
\begin{proof}
Throughout the proof, a positive number that depends only on $\kappa,\underline\rho,\underline w,\underline v,\underline\xi$ is called a constant. By Proposition \ref{g-z-sup}, $ V_+$ and $V_-$ are bounded in absolute value by a constant on $[0,\tau _{\underline\xi}]$, and so are $W_+,V_0,W_-$ because $V_+\ge W_+\ge V_0\ge W_-\ge V_-$. It is clear that $\widehat B_\sigma^u(t)=U(u_\sigma(t) \underline e_\sigma)-U(\underline 0)$, $\sigma\in\{+,-\}$, where $U:=W_++W_-+\sum_{\nu\in\{0,+,-\}} \frac{\rho_\nu}2 V_\nu$. Thus, $\widehat B_\sigma^u$, $\sigma\in\{+,-\}$, are bounded in absolute value by a constant on $[0,\tau^u_{\underline\xi}]$. By (\ref{V-V}) and that $V_+^u(t)-V_-^u(t)=e^{2t}(v_+-v_-)$ for $0\le t<T^u$, we know that $e^{2 \tau^u_{\underline\xi}}\le 4\diam(\xi_+\cup\xi_-\cup[v_-,v_+])/|v_+-v_-|$. This means that $\tau^u_{\underline\xi}$ is bounded above by a constant. Since $\underline u[0,\tau^u_{\underline \xi}]\subset [\underline 0,\tau_{\underline\xi}]$, it remains to show that, for $\sigma\in\{+,-\}$, $$\frac{W_{\sigma,2} }{W_{\sigma,1} },\quad \frac{ W_{\sigma,1} }{W_\sigma -W_{-\sigma} }-\frac 1{\widehat w_\sigma - \widehat w_{-\sigma}^\sigma },\quad \frac{ W_{\sigma,1} }{W_\sigma -V_\nu }-\frac 1{\widehat w_\sigma - \widehat v_\nu^\sigma },\quad \nu\in\{0,+,-\},$$ are all bounded in absolute value on $[\underline 0,\tau_{\underline\xi}]$ by a constant.
Because $\frac 1{\widehat w_\sigma - \widehat w_{-\sigma}^\sigma }=\frac{ W_{\sigma,1} }{W_\sigma -W_{-\sigma} }\Big|^{-\sigma}_0$, the boundedness of $ \frac{ W_{\sigma,1} }{W_\sigma -W_{-\sigma} }-\frac 1{\widehat w_\sigma - \widehat w_{-\sigma}^\sigma } $ on $[\underline 0, \tau_{\underline\xi}]$ simply follows from the boundedness of $ \frac{ W_{\sigma,1} }{W_\sigma -W_{-\sigma} } $, which in turn follows from $0\le W_{\sigma,1}\le 1$ and that $|W_\sigma-W_{-\sigma}|$ is bounded from below on $[\underline 0, \tau_{\underline\xi}]$ by a positive constant, where the latter bound was given in the proof of Lemma \ref{uniform}.
For the boundedness of $ \frac{W_{\sigma,2} }{W_{\sigma,1} } $ on $[\underline 0, \tau_{\underline\xi}]$, we assume $\sigma=+$ by symmetry. Since $W_{+,j}(t_+,t_-)=g_{K_{-}^{t_+}(t_-)}^{(j)}(\widehat w_+(t_+))$, $j=1,2$, and $K_{-}^{t_+}(\cdot)$ are chordal Loewner hulls driven by $W_-(t_+,\cdot)$ with speed $W_{-,1}(t_+,\cdot)^2$, by differentiating $ {g''_{K_{-}^{t_+}(t_-)}}/{g'_{K_{-}^{t_+}(t_-)}}$ at $\widehat w_+(t_+)$ w.r.t.\ $t_-$, we get
$$\frac{W_{+,2}(t_+,t_-)}{W_{+,1}(t_+,t_-)} =\int_0^{t_-}\frac{4 W_{-,1}^2 W_{+,1}}{(W_+-W_-)^3}\bigg|_{(t_+,s_-)}\,ds.$$
From the facts that $0\le W_{+,1},W_{-,1}\le 1$ and that $|W_+-W_-|$ is bounded from below by a constant on $[\underline 0, \tau_{\underline\xi}]$, we get the boundedness of $\frac{W_{+,2} }{W_{+,1} }$.
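To make this explicit, if $\epsilon>0$ denotes such a constant lower bound of $|W_+-W_-|$ on $[\underline 0, \tau_{\underline\xi}]$ (the symbol $\epsilon$ is used only in this remark), then the displayed integral together with the bound $t_-\le \tau^-_{\xi_-}\le \hcap_2(\xi_-)$ gives
$$\Big|\frac{W_{+,2}}{W_{+,1}}\Big|\le \frac{4t_-}{\epsilon^3}\le \frac{4\hcap_2(\xi_-)}{\epsilon^3}\quad \mbox{on } [\underline 0, \tau_{\underline\xi}].$$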
For the boundedness of $\frac{ W_{\sigma,1} }{W_\sigma -V_\nu }-\frac 1{\widehat w_\sigma - \widehat v_\nu^\sigma }$, we assume by symmetry that $\sigma=+$.
By differentiating w.r.t.\ $t_-$ and using (\ref{pa-X},\ref{pajW}), we get
$$\frac{ W_{+,1}(t_+,t_-) }{W_+(t_+,t_-) -V_\nu(t_+,t_-) }-\frac 1{\widehat w_+(t_+) - \widehat v_\nu^+(t_+) }=\int_0^{t_-}\frac{2 W_{-,1}^2 W_{+,1}}{(W_+-W_-)^2(V_\nu-W_-)}\bigg|_{(t_+,s_-)}\, ds.$$
Since $0\le W_{+,1} \le 1$, $|W_+-W_-|$ is bounded from below by a constant on $[\underline 0, \tau_{\underline\xi}]$, and $V_\nu-W_-$ does not change sign (but could be $0$), it suffices to show that $\big|\int_0^{t_-} \frac{2W_{-,1}^2}{V_\nu-W_-}|_{(t_+,s_-)}\,ds\big|$ is bounded by a constant on $[\underline 0, \tau_{\underline\xi}]$. This holds because the integral equals $V_\nu(t_+,t_-)-V_\nu(t_+,0)$ by (\ref{pa-X}), and $|V_\nu|$ is bounded by a constant on $[\underline 0, \tau_{\underline\xi}]$.
\end{proof}
\begin{Lemma}
Under $\mathbb{P}^{\underline\rho}$, there is a stopped planar Brownian motion $\underline B(t)=(B_+(t),B_-(t))$, $0\le t<T^u_{\disj}$, such that,
for $\sigma\in\{+,-\}$, $\widehat w_\sigma^u$ satisfies the SDE
$$d\widehat w_\sigma^u\overset{\mathrm{ae}}{=} \sqrt{\kappa u_\sigma' }dB_\sigma +\Big[\kappa \bb\frac{W_{\sigma,2}^u}{W_{\sigma,1}^u} +\frac{2W^u_{\sigma,1}}{W^u_\sigma-W^u_{-\sigma}}+ \sum_{\nu\in\{0,+,-\}} \frac{\rho_\nu W^u_{\sigma,1}}{W^u_\sigma-V^u_\nu} \Big]u_\sigma'dt,\quad 0\le t<T^u_{\disj}.$$
Here by saying that $(B_+(t),B_-(t))$, $0\le t<T^u_{\disj}$, is a stopped planar Brownian motion, we mean that $B_+(t)$ and $B_-(t)$, $0\le t<T^u_{\disj}$, are local martingales with $d\langle B_\sigma\rangle_t=t$, $\sigma\in\{+,-\}$, $d\langle B_+,B_-\rangle_t=0$, $0\le t<T^u_{\disj}$.
\label{Lem-uB}
\end{Lemma}
\begin{proof} For $\sigma\in\{+,-\}$, define $\widetilde B_\sigma^u$ using (\ref{tilwju}). By (\ref{dMitocu}), $\widetilde B_\sigma^u(t)M^u(t)$, $0\le t<T^u_{\disj}$, is an ${\cal F}^u$-local martingale under $\mathbb{P}^{\underline\rho}_i$. By Lemmas \ref{uniform} and \ref{uniform2}, for any $\underline \xi\in\Xi$, $\widetilde B_\sigma^u(\cdot\wedge \tau^u_{\underline \xi})M^u(\cdot \wedge \tau^u_{\underline \xi})$ is an ${\cal F}^u$-martingale under $\mathbb{P}^{\underline\rho}_i$. Since this process is $({\cal F}_{ \underline u(\cdot\wedge \tau^u_{\underline \xi})})$-adapted, and ${\cal F}_{ \underline u(t\wedge \tau^u_{\underline \xi})}\subset {\cal F}_{ \underline u(t)}={\cal F}^u_t$, it is also an $({\cal F}_{ \underline u(\cdot\wedge \tau^u_{\underline \xi})})$-martingale. From (\ref{RNito4xi}) we see that $\widetilde B_\sigma^u(\cdot\wedge \tau^u_{\underline \xi})$, is an $({\cal F}_{ \underline u(\cdot\wedge \tau^u_{\underline \xi})})$-martingale under $\mathbb{P}^{\underline\rho}$.
Then we see that $\widetilde B_\sigma^u(\cdot\wedge \tau^u_{\underline \xi})$ is an ${\cal F}^u$-martingale under $\mathbb{P}^{\underline\rho}$ since for any $t_2\ge t_1\ge 0$ and $A\in{\cal F}^u_{t_1}={\cal F}_{\underline u(t_1)}$, $A\cap \{t_1\le \tau^u_{\underline \xi}\}\subset {\cal F}_{ \underline u(t_1\wedge \tau^u_{\underline \xi})}$, and on the event $\{t_1> \tau^u_{\underline \xi}\}$, $\widetilde B_\sigma^u(t_1\wedge \tau^u_{\underline \xi})=\widetilde B_\sigma^u(t_2\wedge \tau^u_{\underline \xi})$. Since $T^u_{\disj}= \sup_{\underline \xi\in\Xi^*}\tau^u_{\underline\xi}$, we see that, for $\sigma\in\{+,-\}$, $\widetilde B_\sigma^u(t)$, $0\le t<T^u_{\disj}$, is an ${\cal F}^u$-local martingale under $\mathbb{P}^{\underline\rho}$.
From (\ref{quadratic}) and that for any $\underline\xi\in\Xi^*$ and $t\ge 0$, $\mathbb{P}^{\underline\rho}\ll \mathbb{P}^{\underline\rho}_i$ on ${\cal F}_{ \underline u(t\wedge \tau^u_{\underline \xi})}$, we know that, under $\mathbb{P}^{\underline\rho}$, (\ref{quadratic}) holds up to $\tau^u_{\underline\xi}$. Since $T^u_{\disj}= \sup_{\underline \xi\in\Xi^*}\tau^u_{\underline\xi}$, and $\widetilde B_\pm^u-\widehat B_\pm^u$ are differentiable, (\ref{quadratic}) holds for $\widetilde B^u_+$ and $\widetilde B^u_-$ under $\mathbb{P}^{\underline\rho}$ up to $T^u_{\disj}$.
Since $\widetilde B_+^u$ and $\widetilde B_-^u$ up to $T^u_{\disj}$ are local martingales under $\mathbb{P}^{\underline\rho}$, the (\ref{quadratic}) for $\widetilde B_\pm^u$ implies that there exists a stopped planar Brownian motion $(B_+(t),B_-(t))$, $0\le t<T^u_{\disj}$, under $\mathbb{P}^{\underline\rho}$, such that $d\widetilde B_\sigma^u(t)=\sqrt{u_\sigma'(t)}dB_\sigma(t)$, $\sigma\in\{+,-\}$. Combining this fact with (\ref{hawhaB}) and (\ref{tilwju}), we then complete the proof.
\end{proof}
From now on, we work under the probability measure $\mathbb{P}^{\underline\rho}$.
Combining Lemma \ref{Lem-uB} with (\ref{dWju}) and (\ref{pa-X}), we get an SDE for $W_\sigma^u-V_0^u$ up to $T^u_{\disj}$:
\begin{align*}
d(W_\sigma^u-V_0^u)\overset{\mathrm{ae}}{=} & \,W_{\sigma,1}^u \sqrt{\kappa u_\sigma'} dB_\sigma +\sum_{\nu\in\{0,+,-\}} \frac{\rho_\nu (W_{\sigma,1}^u)^2 u_\sigma'}{W_\sigma^u-V_\nu^u}\,dt +\frac{2 (W_{\sigma,1}^u)^2 u_\sigma'}{W_\sigma^u-W_{-\sigma}^u}\,dt \\
&+ \frac{2 (W_{-\sigma,1}^u)^2 u_{-\sigma}'}{W_\sigma^u-W_{-\sigma}^u}\,dt+ \frac{2 (W_{\sigma,1}^u)^2 u_\sigma'}{W_\sigma^u-V_{0}^u}\,dt + \frac{2 (W_{-\sigma,1}^u)^2 u_{-\sigma}'}{W_{-\sigma}^u-V_{0}^u}\,dt.
\end{align*}
Recall that $R_\sigma =\frac{W_\sigma^u -V_0^u }{V_\sigma^u -V_0^u }\in [0,1]$, $\sigma\in\{+,-\}$, and $V_\sigma^u -V_0^u=\sigma e^{2\cdot} I${, where $I=|v_+-v_0|=|v_--v_0|$}.
Combining the above SDE with (\ref{uj''}) and using the continuity of $R_\sigma$ and the positiveness of $R_++R_-$ (because $W_+^u>W_-^u$), we find that $R_\sigma$, $\sigma\in\{+,-\}$, satisfy the following SDE up to $T^u_{\disj}$:
\begin{equation} dR_\sigma =\sigma \sqrt{\frac{\kappa R_\sigma(1-R_\sigma ^2)}{R_++R_-}}dB_\sigma +\frac{(2+\rho_0)-(\rho_\sigma-\rho_{-\sigma})R_\sigma-(\rho_++\rho_-+\rho_0+6)R_\sigma^2}{R_++R_-}\,dt. \label{SDE-R}\end{equation}
\subsection{SDE in the whole lifespan}
We are going to prove the following theorem in this subsection.
\begin{Theorem}
Under $\mathbb{P}^{\underline\rho}$, $R_+$ and $R_-$ satisfy (\ref{SDE-R}) throughout $\mathbb{R}_+$ for a pair of independent Brownian motions $B_+$ and $B_-$. \label{Thm-SDE-whole-R}
\end{Theorem}
\begin{Remark}
Theorem \ref{Thm-SDE-whole-R} gives the SDE for the two-dimensional diffusion process $(\underline R)$ in its whole lifespan. In the next subsection, we will use this SDE to derive the transition density of $(\underline R)$.
\end{Remark}
\begin{Lemma}
Suppose that $R_+$ and $R_-$ are $[0,1]$-valued semimartingales satisfying (\ref{SDE-R}) for a stopped planar Brownian motion $(B_+,B_-)$ up to some stopping time $\tau$. Then on the event $\{\tau<\infty\}$, a.s.\ $\lim_{t\uparrow \tau} (R_+(t),R_-(t))$ exists and does not equal $(0,0)$. \label{lemma-0-1}
\end{Lemma}
\begin{proof}
Let $X=R_+-R_-$ and $Y=1-R_+R_-$. Then $|X|\le Y\le 1$ because $Y\pm X=(1\pm R_+)(1\mp R_-)\ge 0$. By (\ref{SDE-R}), $X$ and $Y$ satisfy the following SDEs up to $\tau$:
\begin{equation} dX=
dM_X-[(\rho_++\rho_-+\rho_0+6)X+(\rho_+-\rho_-)]dt,\label{dX}\end{equation}
\begin{equation} dY=
dM_Y-[(\rho_++\rho_-+\rho_0+6)Y-(\rho_++\rho_-+4)]dt,\label{dY}\end{equation}
where $M_X$ and $M_Y$ are local martingales whose quadratic variation and covariation satisfy the following equations up to $\tau$:
\begin{equation} d\langle M_X\rangle =\kappa(Y-X^2)dt,\quad d\langle M_X,M_Y\rangle =\kappa (X-XY)dt,\quad d\langle M_Y\rangle =\kappa (Y-Y^2)dt.\label{d<X,Y>}\end{equation}
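Indeed, since $\langle B_+,B_-\rangle=0$, (\ref{SDE-R}) gives
$$d\langle M_X\rangle=\frac{\kappa[R_+(1-R_+^2)+R_-(1-R_-^2)]}{R_++R_-}\,dt=\kappa(1+R_+R_--R_+^2-R_-^2)\,dt=\kappa(Y-X^2)\,dt,$$
and the other two identities in (\ref{d<X,Y>}) follow from similar algebra.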
By (\ref{d<X,Y>}), $\langle M_X\rangle_\tau,\langle M_Y\rangle_\tau\le \kappa\tau$, which implies that $\lim_{t\uparrow \tau} M_X(t)$ and $\lim_{t\uparrow \tau} M_Y(t)$ a.s.\ converge on {the event} $\{\tau<\infty\}$. By (\ref{dX},\ref{dY}),
$\lim_{t\uparrow \tau} (X(t)-M_X(t))$ and $\lim_{t\uparrow \tau} (Y(t)-M_Y(t))$ a.s.\ converge on {the event} $\{\tau<\infty\}$. Combining these results, we see that, on the event $\{\tau<\infty\}$, $\lim_{t\uparrow \tau}X(t)$ and $\lim_{t\uparrow \tau} Y(t)$ a.s.\ converge, which implies the a.s.\ convergence of $\lim_{t\uparrow \tau} R_\pm(t)$ .
Since $(R_+(t),R_-(t))\to (0,0)$ iff $(X(t),Y(t))\to (0,1)$, it suffices to show that $(X(t),Y(t))$ does not converge to $(0,1)$ as $t\uparrow \tau$. Since $(X,Y)$ is Markov, it suffices to show that, if $Y(0)\ne 0$ and $T=\tau\wedge \inf\{t:Y(t)=0\}$, then $(X(t),Y(t))$ does not converge to $(0,1)$ as $t\uparrow T$. Since $Y\ne 0$ on $[0,T)$, we may define a process $Z=X/Y\in [-1,1]$ on $[0,T)$. Now it suffices to show that $(Z(t),Y(t))$ does not converge to $(0,1)$ as $t\uparrow T$.
From (\ref{dX}-\ref{d<X,Y>}) and It\^o's calculation, we see that there is a stopped planar Brownian motion $(B_{Z}(t),B_Y(t))$, $0\le t<T$, such that $Z$ and $Y$ satisfy the following SDEs on $[0,T)$:
$$ dZ=\sqrt{\frac{\kappa (1-Z^2)}Y}dB_{Z}-\frac{(\rho_++\rho_-+4)Z+(\rho_+-\rho_-)}{Y}\,dt;$$
$$ dY=\sqrt{\kappa Y(1-Y)}dB_Y -[(\rho_++\rho_-+\rho_0+6)Y-(\rho_++\rho_-+4)]dt.$$
Let $v(t)=\int_0^t \kappa/Y(s)ds$, $0\le t<T$, and $\widetilde T=\sup v[0,T)$. Let $\widetilde Z(t)=Z(v^{-1}(t))$ and $\widetilde Y(t)=Y(v^{-1}(t))$, $0\le t<\widetilde T$. Then there is a stopped planar Brownian motion $(\widetilde B_{Z}(t),\widetilde B_Y(t))$, $0\le t<\widetilde T$, such that $\widetilde Z$ and $\widetilde Y$ satisfy the following SDEs on $[0,\widetilde T)$:
$$d\widetilde Z=\sqrt{1-\widetilde Z^2}d\widetilde B_{Z}-(a_{Z}\widetilde Z+b_{Z}) dt, $$
$$ d\widetilde Y=\widetilde Y\sqrt{ 1-\widetilde Y}d\widetilde B_Y -\widetilde Y(a_Y(\widetilde Y-1)+b_Y)dt, $$
where $a_{Z}=(\rho_++\rho_-+4)/\kappa$, $b_{Z}=(\rho_+-\rho_-)/\kappa$, $b_Y=(\rho_0+2)/\kappa$, $a_{Y}=a_{Z}+b_Y$.
Let $\Theta=\arcsin (\widetilde Z)\in [-\pi/2,\pi/2]$ and $\Phi=\log(\frac{1+\sqrt{1-\widetilde Y}}{1-\sqrt{1-\widetilde Y}})\in\mathbb{R}_+$. Then $(\widetilde Z(t),\widetilde Y(t))\to (0,1)$ iff $\Theta(t)^2+\Phi(t)^2\to 0$, and $\Theta$ and $\Phi$ satisfy the following SDEs on $[0,\widetilde T)$:
$$ d\Theta=d\widetilde B_{Z}-(a_{Z}-\frac 12)\tan\Theta dt-b_{Z} \sec\Theta dt; $$
$$ d\Phi=-d\widetilde B_Y+ (b_Y -\frac 14 )\coth_2(\Phi) dt+(\frac 34-a_Y)\tanh_2(\Phi) dt.$$
Here $\tanh_2:=\tanh(\cdot /2)$ and $\coth_2:=\coth(\cdot/2)$.
So for some $1$-dimensional Brownian motion $B_{\Theta,\Phi}$, $\Theta^2+\Phi^2$ satisfies the SDE
$$d(\Theta^2+\Phi^2)=2\sqrt{\Theta^2+\Phi^2}dB_{\Theta,\Phi} +2dt+(2b_Y-\frac 12)\Phi \coth_2(\Phi)dt $$ $$+(\frac 32-2a_Y) \Phi\tanh_2(\Phi)dt-(2a_{Z}-1)\Theta \tan\Theta dt-2b_{Z}\Theta \sec\Theta dt.$$
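Note that $\Phi\coth_2(\Phi)\to 2$ and $\Phi\tanh_2(\Phi),\Theta\tan\Theta,\Theta\sec\Theta\to 0$ as $(\Theta,\Phi)\to (0,0)$, so the total drift above tends to $[2+2(2b_Y-\frac 12)]\,dt=(4b_Y+1)\,dt$ near the origin.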
From the power series expansions of $\coth_2,\tanh_2,\tan,\sec$ at $0$,
we see that when $\Theta^2+\Phi^2$ is close to $0$, it behaves like a squared Bessel process of dimension $4b_Y+1=\frac 4\kappa(\rho_0+2)+1\ge 2$ because $\rho_0\ge \frac\kappa 4-2$. Thus, a.s.\ $\lim_{t\uparrow \widetilde T} {\Theta(t)^2+\Phi(t)^2}\ne 0$, as desired.
\end{proof}
\begin{Remark}
The assumption $\rho_0\ge \frac\kappa4-2$ is used in the last line of the above proof.
\label{Remark-rho0}
\end{Remark}
\begin{Lemma}
For every $N>0$ and $L\ge 2$, there is $C>0$ depending only on $\kappa,\underline\rho,N,L$ such that for any $v_0\in [(-1)^+,1^-]$, $v_+\in [1^+,\infty)$ and $v_-\in (-\infty,(-1)^-]$ with $|v_+-v_-|\le L$, if $(\eta_+,\eta_-)$ is a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves started from $(1,-1;v_0,v_+,v_-)$, then for any $y\in(0,N]$, $\mathbb{P}[E_+(y)\cap E_-(y)]\ge C$, where for $\sigma\in\{+,-\}$, $E_\sigma(y)$ is the event that $\eta_\sigma$ reaches $\{\mbox{Im}\, z=y\}$ before $\{\mbox{Re}\, z=\sigma \frac 12\}\cup \{\mbox{Re}\, z=\sigma\frac 32\}$.
\label{lower-bound}
\end{Lemma}
\begin{proof}
In this proof, a constant depends only on $\kappa,\underline\rho,N,L$. Since $E_\pm(y)$ is decreasing in $y$, it suffices to prove that $\mathbb{P}[E_+(N)\cap E_-(N)]$ is bounded from below by a positive constant. By \cite[Lemma 2.4]{MW}, there is a constant $\widetilde C >0$ such that $\mathbb{P}[E_\sigma(N)]\ge \widetilde C$ for $\sigma\in \{+,-\}$. Thus, if $(\eta_+',\eta_-')$ is an independent coupling of $\eta_+$ and $\eta_-$, then the events $E_\pm'(N)$ for $(\eta_+',\eta_-')$ satisfy that $\mathbb{P}[E_+'(N)\cap E_-'(N)]\ge \widetilde C^2$. Let $\xi_\sigma=\mathbb{H}\cap \partial\{x+iy:|x-\sigma 1|\le \frac 12,0\le y\le N\}$, $\sigma\in\{+,-\}$. Since the law of $(\eta_+,\eta_-)$ restricted to ${\cal F}^+_{\tau^+_{\xi_+}}\vee {\cal F}^-_{\tau^-_{\xi_-}}$ is absolutely continuous w.r.t.\ that of $(\eta_+',\eta_-')$ (Lemma \ref{RN-M-ic}), and the logarithm of the Radon-Nikodym derivative is bounded in absolute value by a constant (Lemma \ref{uniform}), we get the desired lower bound for $\mathbb{P}[E_+(N)\cap E_-(N)]$.
\end{proof}
\begin{Corollary}
For any $r\in(0,1]$, there is $\delta>0$ depending only on $\kappa,\underline\rho,r$ such that the following holds. Suppose $w_+>w_-\in\mathbb{R}$, $v_0\in [w_-^+,w_+^-]$, $v_+\in [w_+^+,\infty)$, $v_-\in (-\infty,w_-^-]$ satisfy $|v_+-v_0|=|v_--v_0|$ and $|w_+-w_-|\ge r|v_+-v_-|$. Let $(\eta_+,\eta_-)$ be a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves started from $(w_+,w_-;v_0,v_+,v_-)$. Let $\underline\xi=(\xi_+,\xi_-)$, where $\xi_\sigma=\mathbb{H}\cap \partial\{x+iy: |x-w_\sigma|\le |w_+-w_-|/4,0\le y\le e^2|v_+-v_-|\}$, $\sigma\in\{+,-\}$. Let $\tau^u_{\underline\xi}$ be as defined in Section \ref{section-diffusion}. Then $\mathbb{P}[\tau^u_{\underline\xi}\ge 1]\ge \delta$. \label{lower-bound-Cor}
\end{Corollary}
\begin{proof}
Let $E$ denote the event that for both $\sigma\in\{+,-\}$, $\eta_\sigma$ hits $\xi_\sigma$ at its top for the first time. Suppose that $E$ happens. By the definition of $\tau^u_{\underline\xi}$, for one of $\sigma\in\{+,-\}$, the imaginary part of $\eta_\sigma(u_\sigma(\tau^u_{\underline\xi}))$ is $e^2|v_+-v_-|$. So $\rad_{v_0}(\eta_\sigma[0,u_\sigma(\tau^u_{\underline\xi})])\ge e^2 |v_+-v_-|$. By (\ref{V-V'}) we then get $\tau^u_{\underline\xi}\ge 1$. Thus, $\mathbb{P}[\tau^u_{\underline\xi}\ge 1]\ge \mathbb{P}[E]$, which by Lemma \ref{lower-bound} and scaling is bounded from below by a positive constant depending only on $\kappa,\underline\rho,r$ whenever $|v_+-v_-|\le |w_+-w_-|/r$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm-SDE-whole-R}]
We have known that (\ref{SDE-R}) holds up to $T^u_{\disj}$.
We will combine it with the DMP of commuting pairs of chordal SLE$_\kappa(2,\underline\rho)$ curves (Lemma \ref{DMP}).
Let $\eta^0_\pm=\eta_\pm$. Let ${\cal G}^0$ be the trivial $\sigma$-algebra. We will inductively define the following random objects. Let $n=1$. We have the $\sigma$-algebra ${\cal G}^{n-1}$ and the pair $(\eta^{n-1}_+,\eta^{n-1}_-)$, whose law conditionally on ${\cal G}^{n-1}$ is that of a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves. Let $K^{n-1},\mA^{n-1},W^{n-1}_\pm,V^{n-1}_0,V^{n-1}_\pm$ be respectively the hull function, capacity function, driving functions and force point functions. Let ${\cal D}^{n-1}_{\disj}$ and $\Xi^{n-1}$ be respectively the ${\cal D}_{\disj}$ and $\Xi$ defined for the pair. Let ${\cal F}^{n-1}$ be the $\mathbb{R}_+^2$-indexed filtration defined by $ {\cal F}^{n-1}_{(t_+,t_-)}=\sigma({\cal G}^{n-1},\eta_\sigma^{n-1}|_{[0,t_\sigma]},\sigma\in\{+,-\}) $. Let $\underline u^{n-1}$ be the time curve for $(\eta^{n-1}_+,\eta^{n-1}_-)$ as defined in Section \ref{time curve}, which exists for $n=1$ because we assume that $|v_+-v_0|=|v_0-v_-|$.
Let $\underline\xi^{n-1}$ be the $\underline\xi$ obtained from applying Corollary \ref{lower-bound-Cor} to $w_\pm=W^{n-1}_\pm(\underline 0)$ and $v_\pm=V^{n-1}_\pm(\underline 0)$. Then it is a ${\cal G}^{n-1}$-measurable random element in $\Xi^{n-1}$. Let $\tau^{n-1}_{\underline\xi^{n-1}}$ be the random time $\tau^u_{\underline\xi}$ introduced in Section \ref{section-diffusion} for the $(\eta^{n-1}_+,\eta^{n-1}_-)$ and $\underline\xi^{n-1}$ here. Let $\underline\tau^{n-1}=(\tau^{n-1}_+,\tau^{n-1}_-)=\underline u^{n-1}(\tau^{n-1}_{\underline\xi^{n-1}})$. Then $\underline\tau^{n-1}$ is a finite ${\cal F}^{n-1}$-stopping time that lies in ${\cal D}^{n-1}_{\disj}$. Let ${\cal G}^n={\cal F}^{n-1}_{\underline\tau^{n-1}}$. We then obtain by Lemma \ref{DMP} a random commuting pair of chordal Loewner curves $(\widetilde\eta^n_+,\widetilde\eta^n_-)$ with some speeds, which is the part of $(\eta^{n-1}_+,\eta^{n-1}_-)$ after ${\underline\tau^{n-1}}$ up to a conformal map, and
the normalization of $(\widetilde\eta^n_+,\widetilde\eta^n_-)$, denoted by $(\eta^n_+,\eta^n_-)$, conditionally on ${\cal G}^n$, is a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curve started from $(W^{n-1}_+,W^{n-1}_-;V^{n-1}_0,V^{n-1}_+,V^{n-1}_-)|_{\underline\tau^{n-1}}$.
If for some $\sigma\in\{+,-\}$ and $\nu\in\{0,+,-\}$, $V^{n-1}_\nu(\underline \tau^{n-1})=W^{n-1}_\sigma(\underline \tau^{n-1})$, then as a force point, $V^{n-1}_\nu(\underline \tau^{n-1})$ is treated as $(W^{n-1}_\sigma(\underline \tau^{n-1}))^{\sign(v_\nu-w_\sigma)}$. By the assumption of $\underline u^{n-1}$, we have $|V^{n-1}_+-V^{n-1}_0|=|V^{n-1}_--V^{n-1}_0|$ at $\underline\tau^{n-1}$. So we may increase $n$ by $1$ and repeat the above construction.
Iterating the above procedure, we obtain two sequences of pairs $(\eta^n_+,\eta^n_-)$, $n\ge 0$, and $(\widetilde\eta^n_+,\widetilde\eta^n_-)$, $n\ge 1$. They satisfy that for any $n\in\mathbb{N}$, $(\eta^n_+,\eta^n_-)$ is the normalization of $(\widetilde\eta^n_+,\widetilde\eta^n_-)$, and $(\widetilde\eta^n_+,\widetilde\eta^n_-)$ is the part of $(\eta^{n-1}_+,\eta^{n-1}_-)$ after ${\underline\tau^{n-1}}$ up to a conformal map. Let $\phi^n_\pm$ be the speed of $\widetilde\eta^n_\pm$, and $\phi^n_\oplus(t_+,t_-)=(\phi^n_+(t_+),\phi^n_-(t_-))$. By Lemma \ref{DMP-determin-1}, for any $n\in\mathbb{N}$ and $Z\in\{ W_+,W_-,V_0,V_+,V_-\}$, $\widetilde Z^n=Z^n\circ \phi^n_\oplus$ and $\widetilde Z^n= Z^{n-1}(\underline\tau^{n-1}+\cdot)$.
Recall that, for $n\ge 0$, $\underline u^{n}$ is characterized by the property that
$ |V^{n}_\pm(\underline u^{n}(t))-V^{n}_0(\underline u^{n}(t))|=e^{2t} |V^{n}_\pm(\underline 0)-V^{n}_0(\underline 0)|$, $t\ge 0$.
So we get $\underline u^n=\phi^n_\oplus(\underline u^{n-1}(\tau^{n-1}_{\underline\xi^{n-1}}+\cdot)-\underline u^{n-1}(\tau^{n-1}_{\underline\xi^{n-1}}))$,
which then implies that $Z^{n-1}\circ \underline u^{n-1}(\tau^{n-1}_{\underline\xi^{n-1}}+\cdot)=Z^{n}\circ \underline u^{n}$, $Z\in \{W_+,W_-,V_0,V_+,V_-\}$. Let $R^n_\pm$ be the $R_\pm$ defined in Section \ref{section-diffusion} for $(\eta^n_+,\eta^n_-)$. Then we have
$R^{n-1}_\pm(\tau^{n-1}_{\underline\xi^{n-1}}+\cdot)=R^n_\pm$. Let $T^n=\sum_{j=0}^{n-1} \tau^{j}_{\underline \xi^j}$, $n\ge 0$. Since $R_\pm= R_\pm^0$, we get $R_\pm(T^n+\cdot)=R^n_\pm$. For $n\ge 0$, since conditionally on ${\cal G}^{n}$, $(\eta^n_+,\eta^n_-)$ has the law of a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves started from $(W^n_+,W^n_-;V^n_0,V^n_+,V^n_-)|_{\underline 0}$, by the previous subsection, there is a stopped two-dimensional Brownian motion $(B^n_+,B^n_-)$ w.r.t.\ ${\cal F}^n_{\underline u^n(\cdot)}$ such that $R^n_+$ and $R^n_-$ satisfy the ${\cal F}^n_{\underline u^n(\cdot)}$-adapted SDE (\ref{SDE-R}) with $(B^n_+,B^n_-)$ in place of $(B_+,B_-)$ up to $\tau^n_{\underline\xi^n}$. Let $T^\infty=\lim_{n\to\infty}T^n= \sum_{j=0}^{\infty} \tau^{j}_{\underline \xi^j}$, and define continuous processes $B_\pm$ on $[0,T^\infty)$ such that $B_\pm(t)-B_\pm(T^n)=B^n_\pm(t -T^n)$ for each $t\in[T^n,T^{n+1}]$ and $n\ge 0$. Then $(B_+,B_-)$ is a stopped two-dimensional Brownian motion, and $R_+$ and $R_-$ satisfy (\ref{SDE-R}) up to $T^\infty$. It remains to show that a.s.\ $T^\infty=\infty$.
Suppose it does not hold that a.s.\ $T^\infty=\infty$. By Lemma \ref{lemma-0-1}, there is an event $E$ with positive probability and a number $r\in(0,1]$ such that on the event $E$, $R_++R_-\ge 2r$ on $[0,T^\infty)$.
For $n\ge 0$, let $E_n=\{|W^n_+(\underline 0)-W^n_-(\underline 0)|\ge r |V^n_+(\underline 0)-V^n_-(\underline 0)|\}=\{R^n_+(0)+R^n_-(0)\ge 2r\}$, which is ${\cal G}^n$-measurable. Since $R^n_\pm=R_\pm(T^n+\cdot)$, we get $E\subset \bigcap E_n$. Let $A_n=\{\tau^n_{\underline\xi^n}\ge 1\}$. By Corollary \ref{lower-bound-Cor}, there is $\delta>0$ depending only on $\kappa,\underline\rho,r$ such that for $n\ge 0$, $\mathbb{P}[A_n|{\cal G}^n, E_n]\ge \delta$. Since $E\subset \{\sum_n \tau^n_{\underline\xi^n}<\infty\}$, we get $E\subset \liminf (E_n\cap A_n^c)$. For any $m\ge n\in\mathbb{N}$,
$$\mathbb{P}[ \bigcap_{k=n}^m (E_{k}\cap A_k^c)]=\mathbb{ E}[\mathbb{P}[ \bigcap_{k=n}^m (E_{k}\cap A_k^c)|{\cal G}^m]]\le (1-\delta)\mathbb{P}[ \bigcap_{k=n}^{m-1} (E_{k}\cap A_k^c)]. $$
So we get $\mathbb{P}[ \bigcap_{k=n}^m (E_{k}\cap A_k^c)]\le (1-\delta)^{m-n+1}$, which implies that $\mathbb{P}[ \bigcap_{k=n}^\infty (E_k\cap A_k^c)]=0$ for every $n\in\mathbb{N}$, and so $\mathbb{P}[E]=0$. This contradiction completes the proof.
\end{proof}
\begin{Corollary}
Almost surely the path $(R_+(t),R_-(t))$, $t\in\mathbb{R}_+$, avoids $(0,0)$. \label{Cor-0}
\end{Corollary}\begin{proof}
Let $\tau$ be the first $t$ such that $(R_+(t),R_-(t))= (0,0)$, if such $t$ exists; and $\infty$ otherwise. Then $\tau$ is a stopping time, and on the event $\{\tau<\infty\}$, $\lim_{t\uparrow \tau} (R_+(t),R_-(t))=(0,0)$. By Lemma \ref{lemma-0-1}, such an event has probability zero.
\end{proof}
\subsection{Transition density}
In this subsection, we are going to use orthogonal polynomials to derive the transition density of $\underline R(t)=(R_+(t),R_-(t))$, $t\ge 0$, against the Lebesgue measure restricted to $[0,1]^2$. A similar approach was first used in \cite[Appendix B]{tip} to calculate the transition density of radial Bessel processes, where one-variable orthogonal polynomials were used. Two-variable orthogonal polynomials were used in \cite[Section 5]{Two-Green-interior} to calculate the transition density of a two-dimensional diffusion process. Here we will use another family of two-variable orthogonal polynomials to calculate the transition density of the $(\underline R)$ here. In addition, we are going to derive the invariant density of $(\underline R)$, and estimate the convergence of the transition density to the invariant density.
Let $X=R_+-R_-$ and $Y=1-R_+R_-$. Since $R_+$ and $R_-$ satisfy (\ref{SDE-R}) throughout $\mathbb{R}_+$, $X$ and $Y$ then satisfy (\ref{dX},\ref{dY},\ref{d<X,Y>}) throughout $\mathbb{R}_+$. Moreover, {by Corollary \ref{Cor-0}, a.s.\ } $(X,Y)\in \overline\Delta\setminus \{(0,1)\}$, where $\Delta=\{(x,y)\in\mathbb{R}^2:0<|x|<y<1\}$. We will first find the transition density of $(X(t),Y(t))$. Assuming that the transition density $p(t,(x,y),(x^*,y^*))$ exists and is smooth in $(x,y)$, it should satisfy Kolmogorov's backward equation:
\begin{equation} -\partial_t p+{\cal L} p=0,\label{PDE-boundary}\end{equation}
where ${\cal L}$ is the second order differential operator defined by
{
\begin{align*}
{\cal L}=&\frac{\kappa}{2} (y-x^2)\partial_x^2+\kappa x(1-y)\partial_x\partial_y+\frac \kappa 2 y(1-y)\partial_y^2 \\
&-[(\rho_++\rho_-+\rho_0+6)x+(\rho_+-\rho_-)]\partial_x-[(\rho_++\rho_-+\rho_0+6)y-(\rho_++\rho_-+4)]\partial_y.
\end{align*}}
We perform a change of coordinate $(x,y)\mapsto (r,h)$ by $x=rh$ and $y=h$ at $y\ne 0$. Direct calculation shows that
$$\partial_r=h\partial_x,\quad \partial_h=r\partial_x+\partial_y,\quad \partial_r^2=h^2\partial_x^2,\quad
\partial_h^2=r^2\partial_x^2 +2r\partial_x \partial_y+\partial_y^2,\quad \partial_r\partial_h=rh\partial_x^2+h\partial_x\partial_y.$$
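For the reader's convenience, we record the inverse relations, which follow from $r=x/y$ and $h=y$ by the chain rule (wherever $h\ne 0$) and are what one substitutes into ${\cal L}$:
$$\partial_x=\frac 1h\,\partial_r,\qquad \partial_y=\partial_h-\frac rh\,\partial_r.$$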
Define $$\alpha_0=\frac {2 }\kappa(\rho_0+2)-1,\quad\alpha_\pm=\frac 2\kappa(\rho_\pm+2)-1, \quad \beta=\alpha_++\alpha_-+1;$$
$${\cal L}^{(r)}=(1-r^2)\partial_r^2- [(\alpha_++\alpha_-+2 )r+(\alpha_+-\alpha_-)]\partial_r;$$
$${\cal L}^{(h)}=h(1-h)\partial_h^2- [(\alpha_0+\beta+2)h-(\beta+1)]\partial_h.$$
Then in the $(r,h)$-coordinate, ${\cal L}=\frac{\kappa}{2} [{\cal L}^{(h)}+\frac 1 h {\cal L}^{(r)}]$. Let
$$ \lambda _n=-n(n+\alpha_0+\beta+1),\quad\lambda^{(r)}_n= -n(n+\beta),\quad n\ge 0.$$
Direct calculation shows that
\begin{equation} [{\cal L}^{(h)}+\frac 1 h \lambda_n^{(r)}]h^n =h^n [{\cal L}^{(h)}-2n(h-1)\partial_h+\lambda_n ],\label{Lie}\end{equation}
where each $h^n$ in the formula is understood as a multiplication by the $n$-th power of $h$.
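To indicate the computation behind (\ref{Lie}), one may apply the left-hand side to $h^n f$ for a smooth function $f$ of $h$ and expand by the product rule:
$$[{\cal L}^{(h)}+\tfrac 1h \lambda^{(r)}_n](h^n f)=h^n\Big\{h(1-h)f''-[(\alpha_0+\beta+2)h-(\beta+1)]f'-2n(h-1)f'\Big\}+c_{n-1}h^{n-1}f+c_n h^n f,$$
where $c_{n-1}=n(n-1)+n(\beta+1)-n(n+\beta)=0$ and $c_n=-n(n-1)-n(\alpha_0+\beta+2)=\lambda_n$; this is exactly the right-hand side of (\ref{Lie}) applied to $f$.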
From (\ref{Jacobi-ODE}) we know that Jacobi polynomials $P_n^{(\alpha_+,\alpha_-)}(r)$, $n\ge0$, satisfy that
\begin{equation} {\cal L}^{(r)} P_n^{(\alpha_+,\alpha_-)}(r)=\lambda^{(r)}_n P_n^{(\alpha_+,\alpha_-)}(r),\quad n\ge 0;\label{r-eigen}\end{equation}
and the functions $P_m^{(\alpha_0,\beta+2n)}(2h-1)$, $m\ge 0$, satisfy that
\begin{equation} ({\cal L}^{(h)}-2n(h-1)\partial_h+\lambda_n )P_m^{(\alpha_0,\beta+2n)}(2h-1)=\lambda_{m+n} P_m^{(\alpha_0,\beta+2n)}(2h-1),\quad m\ge 0.\label{h-eigen}\end{equation}
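The identity (\ref{h-eigen}) can be seen from (\ref{Jacobi-ODE}), with parameters $(\alpha_0,\beta+2n)$, via the substitution $s=2h-1$: one has $h(1-h)\partial_h^2=(1-s^2)\partial_s^2$,
$$-\big[(\alpha_0+\beta+2n+2)h-(\beta+2n+1)\big]\partial_h=\big[(\beta+2n)-\alpha_0-(\alpha_0+\beta+2n+2)s\big]\partial_s,$$
and the eigenvalues match because $\lambda_{m+n}-\lambda_n=-m(m+\alpha_0+\beta+2n+1)$.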
For $n\ge 0$, define a homogeneous two-variable polynomial $Q_n^{(\alpha_+,\alpha_-)}(x,y)$ of degree $n$ such that
$Q_n^{(\alpha_+,\alpha_-)}(x,y)=y^n P_n^{(\alpha_+,\alpha_-)}(x/y)$ if $y\ne 0$.
It has nonzero coefficient for $x^n$. For every pair of integers $n,m\ge 0$, define a two-variable polynomial $ v_{n,m}(x,y)$ of degree $n+m$ by
$$ v_{n,m}(x,y)=P^{(\alpha_0,\beta+2n)}_m(2y-1) Q^{(\alpha_+,\alpha_-)}_n(x,y).$$
Then $ v_{n,m}$ is also a polynomial in $r,h$ with the expression:
\begin{equation} v_{n,m}(r,h)=h^n P^{(\alpha_0,\beta+2n)}_m(2h-1)P^{(\alpha_+,\alpha_-)}_n(r).\label{vhr}\end{equation}
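As a concrete illustration, with the usual normalization of Jacobi polynomials (so that $P_1^{(a,b)}(s)=\frac 12[(a+b+2)s+(a-b)]$), the members of degree at most one are
$$v_{0,0}=1,\quad v_{1,0}(x,y)=\tfrac 12\big[(\alpha_++\alpha_-+2)x+(\alpha_+-\alpha_-)y\big],\quad v_{0,1}(x,y)=\tfrac 12\big[(\alpha_0+\beta+2)(2y-1)+(\alpha_0-\beta)\big].$$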
By (\ref{Lie},\ref{r-eigen},\ref{h-eigen}), on $\mathbb{R}^2\setminus\{y= 0\}$,
$$\frac 2\kappa {\cal L} v_{n,m}=\frac 2\kappa[{\cal L}^{(h)}+\frac {1} h {\cal L}^{(r)}] v_{n,m}
= [{\cal L}^{(h)}+\frac {1} h \lambda_n^{(r)}] (h^n P^{(\alpha_0,\beta+2n)}_m(2h-1)P^{(\alpha_+,\alpha_-)}_n(r))$$
$$=h^n [{\cal L}^{(h)}-2n(h-1)\partial_h+\lambda_n ] (P^{(\alpha_0,\beta+2n)}_m(2h-1)P^{(\alpha_+,\alpha_-)}_n(r))
=\lambda_{n+m} v_{n,m}.$$
Since $v_{n,m}$ is a polynomial in $x,y$, by continuity the above equation holds throughout $\mathbb{R}^2$.
Thus, for every $n,m\ge 0$, $ v_{n,m}(x,y)e^{\frac \kappa 2\lambda_{n+m}t}$ solves (\ref{PDE-boundary}), and the same is true for any linear combination of such functions.
From (\ref{vhr}) we get an upper bound of $\|v_{n,m}\|_\infty:=\sup_{(x,y)\in\Delta} |v_{n,m}(x,y)|$:
\begin{equation} \| v_{n,m}\|_\infty\le \|P_m^{(\alpha_0,\beta+2n)}\|_\infty \| P_n^{(\alpha_+,\alpha_-)}\|_\infty,
\label{supernorm'}\end{equation}
where the supernorms of the Jacobi polynomials are taken on $[-1,1]$ as in (\ref{super-exact},\ref{super-upper}).
Since $P_n^{(\alpha_+,\alpha_-)}(r)$, $n\ge0$, are mutually orthogonal w.r.t.\ the weight function $\Psi^{(\alpha_+,\alpha_-)}(r)$, and for any fixed $n\ge 0$, $P_m^{(\alpha_0,\beta+2n)}(2h -1)$, $m\ge 0$, are mutually orthogonal w.r.t.\ the weight function $\Psi^{(\alpha_0,\beta+2n)}(2h-1)=2^{\alpha_0+\beta+2n}{\bf 1}_{(0,1)}(h)(1-h)^{\alpha_0}h^{\beta+2n}$, using a change of coordinates we conclude that $ v_{n,m}(x,y)$, $n,m\in\mathbb{N}\cup\{0\}$, are mutually orthogonal w.r.t.\ the weight function
$$\Psi(x,y):={\bf 1}_\Delta (x,y) (y-x)^{\alpha_+}(y+x)^{\alpha_-} (1-y)^{\alpha_0}.$$
Moreover, we have
\begin{equation} \| v_{n,m}\|^2_\Psi=2^{-(\alpha_0+\beta+2n+1)}\|P_m^{(\alpha_0,\beta+2n)} \|^2_{\Psi^{(\alpha_0,\beta+2n)} }\cdot \|P^{(\alpha_+,\alpha_-)}_n\|^2_{\Psi^{(\alpha_+,\alpha_-)}}.\label{L2-norm'}\end{equation}
Let $f(x,y)$ be a polynomial in two variables. Then $f$ can be expressed as a linear combination $f(x,y)=\sum_{n=0}^\infty\sum_{m=0}^\infty a_{n,m}v_{n,m}(x,y)$, where $a_{n,m}:=\langle f,v_{n,m}\rangle_\Psi/\|v_{n,m}\|_\Psi^2$ are zero for all but finitely many $(n,m)$. Indeed, every polynomial in $x,y$ of degree $\le k$ can be expressed as a linear combination of the $v_{n,m}$ with $n+m\le k$: these $\frac{(k+1)(k+2)}2$ polynomials are linearly independent, being nonzero and mutually orthogonal w.r.t.\ $\langle\cdot,\cdot\rangle_\Psi$, and their number equals the dimension of the space of polynomials in $x,y$ of degree $\le k$.
Define $$f(t,(x,y))=\sum_{n=0}^\infty\sum_{m=0}^\infty a_{n,m} v_{n,m}(x,y)e^{\frac\kappa 2\lambda_{n+m}t}=\sum_{n=0}^\infty\sum_{m=0}^\infty \frac{\langle f,v_{n,m}\rangle_\Psi} {\|v_{n,m}\|_\Psi^2} \cdot v_{n,m}(x,y) e^{\frac\kappa 2\lambda_{n+m}t}.$$
Then $f(t,(x,y))$ solves (\ref{PDE-boundary}). Let $(X(t),Y(t))$ be a diffusion process in $\Delta$, which solves (\ref{dX},\ref{dY},\ref{d<X,Y>}) with initial value $(x,y)$. Fix $t_0>0$ and define $M_t=f(t_0-t,(X(t),Y(t)))$, $0\le t\le t_0$. By It\^o's formula, $M$ is a bounded martingale, which implies that
$$\mathbb{ E}[f(X({t_0}),Y(t_0))]=\mathbb{ E}[M_{t_0}]=M_0=f(t_0,(x,y)) $$
\begin{equation} =\sum_{n=0}^\infty\sum_{m=0}^\infty \int\!\!\int_{\Delta} f(x^*,y^*) \Psi(x^*,y^*)\frac {v_{n,m}(x^*,y^*) v_{n,m}(x,y)}{\|v_{n,m}\|_\Psi^2}\cdot e^{\frac\kappa 2\lambda_{n+m}t_0} dx^*dy^*.\label{Ef-boundary}\end{equation}
For $t>0$, $(x,y)\in\overline\Delta $, and $(x^*,y^*)\in \Delta $, define
\begin{equation} p_t((x,y),(x^*,y^*))=\Psi(x^*,y^*)\sum_{n=0}^\infty\sum_{m=0}^\infty \frac {v_{n,m}(x,y) v_{n,m}(x^*,y^*)}{\|v_{n,m}\|_\Psi^2}\cdot e^{\frac\kappa 2\lambda_{n+m}t}. \label{prs}\end{equation}
Let $p_\infty(x^*,y^*)= {\bf 1}_{\Delta}(x^*,y^*)\Psi(x^*,y^*) /\|1\|_\Psi^2$. Note that $\lambda_0=0$ and $v_{0,0}\equiv 1$ since $P^{(\alpha_0,\beta)}_0=P^{(\alpha_+,\alpha_-)}_0\equiv 1$. So $p_\infty(x^*,y^*)$ corresponds to the first term in the series.
\begin{Lemma}
{(i)} For any $t_0>0$, the series in (\ref{prs}) (without the factor $\Psi(x^*,y^*)$) converges uniformly on $[t_0,\infty)\times \overline\Delta\times \overline\Delta$. {(ii)} There is $C_{t_0}\in(0,\infty)$ depending only on $\kappa, \underline\rho,t_0$ such that for any $(x,y)\in\overline\Delta$ and $(x^*,y^*)\in\Delta$,
\begin{equation} |p_t((x,y),(x^*,y^*))-p_\infty(x^*,y^*)|\le C_{t_0} e^{- (\rho_++\rho_-+\rho_0+6)t}\Psi(x^*,y^*),\quad t\ge t_0 .\label{asym}\end{equation}
{(iii)} For any $t>0$ and $(x^*,y^*)\in \overline \Delta$,
\begin{equation} p_\infty (x^*,y^*)=\int\!\!\int_{\Delta} p_\infty (x,y)p_t((x,y),(x^*,y^*))dxdy.\label{invar}\end{equation} \label{asym-lemma}
\end{Lemma}
\begin{proof}
{The statements (i) and (ii)} both follow from Stirling's formula, (\ref{supernorm'},\ref{L2-norm'},\ref{norm},\ref{super-upper}), and the facts that $0>\lambda_1= -\frac 2\kappa (\rho_++\rho_-+\rho_0+6)>\lambda_n$ for any $n>1$ and $\lambda_n\asymp - n^2$ for big $n$. {The statement (iii)} follows from {the statement (i)} and the orthogonality of $v_{n,m}$ w.r.t.\ $\langle\cdot,\cdot\rangle_\Psi$.
\end{proof}
\begin{Lemma}
The process $(X(t),Y(t))$ has a transition density, which is $p_t((x,y),(x^*,y^*))$, and an invariant density, which is $p_\infty(x^*,y^*)$.
\end{Lemma}
\begin{proof}
Fix $(x,y)\in \overline\Delta\setminus\{(0,1)\}$. Let $(X(t),Y(t))$ be the process that satisfies (\ref{dX},\ref{dY},\ref{d<X,Y>}) with initial value $(x,y)$. It suffices to show that, for any continuous function $f$ on $\overline\Delta$, we have
\begin{equation} \mathbb{ E}[f(X(t_0),Y(t_0))]=\int \!\!\int_\Delta p_{t_0}((x,y),(x^*,y^*))f(x^*,y^*)dx^*dy^*.\label{Efp'-boundary}\end{equation}
By the Stone-Weierstrass theorem, $f$ can be approximated by polynomials in two variables uniformly on $\overline\Delta$. Thus, it suffices to show that (\ref{Efp'-boundary}) holds whenever $f$ is a polynomial in $x,y$, which follows immediately from (\ref{Ef-boundary}). The statement on $p_\infty(x^*,y^*)$ follows from (\ref{invar}).
\end{proof}
Since $X=R_+-R_-$, $Y=1-R_+R_-$, and the Jacobian of the transformation is $-(R_++R_-)$, we arrive at the following statement.
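Explicitly,
$$\frac{\partial(X,Y)}{\partial(R_+,R_-)}=\det\begin{pmatrix}1&-1\\ -R_- & -R_+\end{pmatrix}=-(R_++R_-),$$
so $dx^*dy^*=(r_+^*+r_-^*)\,dr_+^*dr_-^*$, which accounts for the factor $r_+^*+r_-^*$ in the densities below.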
\begin{Corollary} The process $(\underline R(t))$ has a transition density:
$$p^R_t(\underline r,\underline r^*):= p_t ((r_+-r_-,1-r_+r_-),(r_+^*-r_-^*,1-r_+^*r_-^*))\cdot (r_+^*+r_-^*),$$ and an invariant density: $p^R_\infty(\underline r^*):= p_\infty( r_+^*-r_-^*,1-r_+^*r_-^*)\cdot (r_+^*+r_-^*)$; and for any $t_0>0$, there is $C_{t_0}\in(0,\infty)$ depending only on $\kappa$, $\underline\rho$, and $t_0$ such that for any $\underline r\in [0,1]^2$ and $\underline r^*\in(0,1)^2$,
$$ |p^R_t(\underline r,\underline r^*)-p^R_\infty(\underline r^*)|\le C_{t_0} e^{- (\rho_++\rho_-+\rho_0+6)t}p^R_\infty(\underline r^*),\quad t\ge t_0 .$$
\label{transition-R-infty}
\end{Corollary}
\section{Commuting Pair{s} of hSLE Curves}\label{section-other-commut}
In this section, we study three commuting pairs of hSLE$_\kappa$ curves. {Each commuting pair corresponds to one case in Theorem \ref{main-Thm1}, and the section will be split into a subsection for each.} It turns out that each {commuting pair} is ``locally'' absolutely continuous w.r.t.\ a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves for some suitable force values. So the results in the previous section can be applied here. Fix $\kappa\in(0,8)$ and $v_-<w_-<w_+<v_+\in\mathbb{R}$. We write $\underline w=(w_+,w_-)$ and $\underline v=(v_+,v_-)$. For $\underline\rho=(\rho_+,\rho_-)$ that satisfies the conditions in Section \ref{subsection-commuting-SLE-kappa-rho}, let $\mathbb{P}^{\underline\rho}_{\underline w;\underline v}$ denote the law of the driving functions of a commuting pair of chordal SLE$_\kappa(2,\underline\rho)$ curves started from $(\underline w;\underline v)$.
\subsection{Two curves in a $2$-SLE$_\kappa$} \label{section-two-curve}
Suppose that $(\widehat\eta_+,\widehat \eta_-)$ is a $2$-SLE$_\kappa$ in $\mathbb{H}$ with link pattern $(w_+\to v_+;w_-\to v_-)$. By \cite[Proposition 6.10]{Wu-hSLE}, for $\sigma \in\{+,-\}$, $\widehat \eta_\sigma$ is an hSLE$_\kappa$ curve in $\mathbb{H}$ from $w_\sigma$ to $v_\sigma$ with force points $w_{-\sigma}$ and $v_{-\sigma}$. Stopping $\widehat\eta_\sigma$ at the first time {$t$, denoted by $T_\sigma$, such} that {$\widehat\eta_\sigma[0,t]$} disconnects $\infty$ from any of its force points, and parametrizing the stopped curve by $\mathbb{H}$-capacity, we get a chordal Loewner curve $\eta_\sigma(t)$, $0\le t<T_\sigma$, which is an hSLE$_\kappa$ curve in the chordal coordinate. Let $\widehat w_\sigma$ and $K_\sigma(\cdot)$ be respectively the chordal Loewner driving function and hulls for $\eta_\sigma$; and let ${\cal F}^\sigma$ be the filtration generated by $\eta_{\sigma}$. Let ${\cal F}$ be the separable $\mathbb{R}_+^2$-indexed filtration generated by ${\cal F}^+$ and ${\cal F}^-$.
For $\sigma\in\{+,-\}$, if $\tau_{-\sigma} $ is an ${\cal F}^{-\sigma}$-stopping time, then conditionally on ${\cal F}^{-\sigma}_{\tau_{-\sigma}}$ and the event $\{\tau_{-\sigma}<T_{-\sigma}\}$, the whole $\eta_{ \sigma}$ and the part of $\widehat\eta_{-\sigma}$ after $\eta_{-\sigma}(\tau_{-\sigma})$ together
form a $2$-SLE$_\kappa$ in $\mathbb{H}\setminus K_{-\sigma}(\tau_{-\sigma})$ with link pattern $(w_\sigma\to v_\sigma;\eta_{ -\sigma}(\tau_{-\sigma})\to v_{-\sigma})$. This in particular implies that the conditional law of $\widehat\eta_\sigma$ is that of an hSLE$_\kappa$ curve from $w_\sigma$ to $v_\sigma$ in $\mathbb{H}\setminus K_{-\sigma}(\tau_{-\sigma})$ with force points $\eta_{-\sigma}(\tau_{-\sigma})$ and $v_{-\sigma}$. Since $ f_{K_{-\sigma}(\tau_{-\sigma})}$ maps $\mathbb{H}$ conformally onto $\mathbb{H}\setminus K_{-\sigma}(\tau_{-\sigma})$, and sends $\widehat w_{-\sigma}(\tau_{-\sigma})$, $g_{K_{-\sigma}(\tau_{-\sigma})}(w_\sigma)$ and $g_{K_{-\sigma}(\tau_{-\sigma})}(v_\nu)$, $\nu\in\{+,-\}$, respectively to $\eta_{-\sigma}(\tau_{-\sigma})$, $w_\sigma$ and $v_\nu$, $\nu\in\{+,-\}$, we see that there a.s.\ exists a chordal Loewner curve $\eta_{\sigma}^{\tau_{-\sigma}}$ with some speed such that $\eta_ \sigma= f_{K_{-\sigma}(\tau_{-\sigma})}\circ \eta_ {\sigma,\tau_{-\sigma}}$, and the conditional law of the normalization of $\eta_ {\sigma,\tau_{-\sigma}}$ given ${\cal F}^{-\sigma}_{\tau_{-\sigma}}$ is that of an hSLE$_\kappa$ curve in $\mathbb{H}$ from $g_{K_{-\sigma}(\tau_{-\sigma})}(w_\sigma)$ to $g_{K_{-\sigma}(\tau_{-\sigma})}(v_\sigma) $ with force points $\widehat w_{-\sigma}(\tau_{-\sigma})$ and $g_{K_{-\sigma}(\tau_{-\sigma})}(v_{-\sigma})$, in the chordal coordinate.
Thus, $(\eta_+,\eta_-)$ a.s.\ satisfies the conditions in Definition \ref{commuting-Loewner} with ${\cal I}_\sigma=[0,T_\sigma)$, ${\cal I}_\sigma^*={\cal I}_\sigma \cap\mathbb{Q}$, $\sigma\in\{+,-\}$, and ${{\cal D}_1}:={\cal I}_+\times {\cal I}_-$. By discarding a null event, we assume that $(\eta_+,\eta_-;{\cal D}_1)$ is always a commuting pair of chordal Loewner curves, and call $(\eta_+,\eta_-;{\cal D}_1)$ a commuting pair of hSLE$_\kappa$ curves in the chordal coordinate started from $(\underline w;\underline v)$. We adopt the functions from Section \ref{section-deterministic}.
Define a function $M_1$ on ${\cal D}_1$ by $M_1=G_1(W_+,W_-;V_+,V_-)$, where $G_1$ is given by (\ref{G1(w,v)}). Since $F$ is continuous and positive on $[0,1]$, $ |W_\sigma-V_\nu|\le |V_+-V_-|$ for $\sigma,\nu\in\{+,-\}$, and $\frac 8\kappa-1,\frac 4\kappa>0$, there is a constant $C>0$ depending only on $\kappa$ such that
\begin{equation} M_1\le C|V_+-V_-|^{\frac {16}\kappa-1} \min_{\sigma\in\{+,-\}} \{|W_\sigma-V_\sigma|\}^{\frac 8\kappa -1}\le C |V_+-V_-|^{2(\frac{12}\kappa -1)}.\label{M1-boundedness}\end{equation}
Note that $M_1>0$ on ${\cal D}_1$ because $|W_\sigma-V_\sigma|>0$, $\sigma\in\{+,-\}$, on ${\cal D}_1$.
We will prove that $M_1$ extends to an ${\cal F}$-martingale on $\mathbb{R}_+^2$, {and the extended martingale plays the role of a} Radon-Nikodym derivative between two measures. We first need some deterministic properties of $M_1$.
\begin{Lemma}
$M_1$ a.s.\ extends continuously to $\mathbb{R}_+^2$ with $M_1\equiv 0$ on $\mathbb{R}_+^2\setminus {\cal D}_1$. \label{M-cont}
\end{Lemma}
\begin{proof}
It suffices to show that for $\sigma\in\{+,-\}$, as $t_\sigma\uparrow T_\sigma$, $M_1\to 0$ uniformly in $t_{-\sigma}\in [0,T_{-\sigma})$. By symmetry, we may assume that $\sigma=+$.
Since the union of (the whole) $\eta_+$ and $\eta_-$ is bounded, by (\ref{V-V}) $|V_+-V_-|$ is bounded (by a random constant) on ${\cal D}_1$.
For a fixed $t_-\in[0,T_-)$, as $t_+\uparrow T_+$, $\eta_+(t_+)$ tends to either some point on $[v_+,\infty)$ or some point on $(-\infty, v_-)$.
By (\ref{M1-boundedness}), it suffices to show that when $\eta_+$ terminates at $[v_+,\infty)$ (resp.\ at $(-\infty,v_-)$), $W_+-V_+\to 0$ (resp.\ $W_--V_-\to 0$) as $t_+\uparrow T_+$, uniformly in $[0,T_-)$.
For any $\underline t=(t_+,t_-)\in{\cal D}_1$, neither $\eta_+[0,t_+]$ nor $\eta_-[0,t_-]$ hit $(-\infty, v_-]\cup [v_+,\infty)$, which implies that $v_+,v_-\not\in \overline{K(\underline t)}$ and $V_\pm(\underline t)= g_{K(\underline t)}(v_\pm)$. Suppose that $\eta_+$ terminates at $x_0\in [v_+,\infty)$. Since SLE$_\kappa$ is not boundary-filling for $\kappa\in(0,8)$, we know that $ \dist(x_0,\eta_-)>0$. Let $r=\min\{|w_+-v_+|,\dist(x_0,\eta_-)\}>0$.
Fix $\varepsilon\in(0,r)$. Since $x_0=\lim_ {t\uparrow T_+} \eta_+(t)$, there is $\delta>0$ such that $|\eta_+(t)-x_0|<\varepsilon$ for $t\in(T_+-\delta,T_+)$. Fix $t_+\in(T_+-\delta,T_+)$ and $t_-\in[0,T_-)$. Let $J$ be the connected component of $\{|z-x_0|=\varepsilon\}\cap (\mathbb{H}\setminus K(\underline t))$ whose closure contains $x_0+\varepsilon$. Then $J$ disconnects $v_+$ and $\eta_+(t_+,T_+)\cap(\mathbb{H}\setminus K(\underline t))$ from $\infty$ in $\mathbb{H}\setminus K(\underline t)$. Thus, $g_{K(\underline t)}(J)$ disconnects $V_+(\underline t)$ and $W_+(\underline t)$ from $\infty$. Since $\eta_+\cup\eta_-$ is bounded, there is a (random) $R\in(0,\infty)$ such that $\eta_+\cup\eta_-\subset \{|z-x_0|<R\}$. Let $\xi=\{|z-x_0|=2R\}\cap\mathbb{H}$. By comparison principle, the extremal length (\cite{Ahl}) of the family of curves in $\mathbb{H}\setminus K(\underline t)$ that separate $J$ from $\xi$ is $\le \frac{\pi}{\log(R/\varepsilon)}$. By conformal invariance, the extremal length of the family of curves in $\mathbb{H}$ that separate $g_{K(\underline t)}(J)$ from $g_{K(\underline t)}(\xi)$ is also $\le \frac{\pi}{\log(R/\varepsilon)}$. Now $g_{K(\underline t)}(\xi)$ and $g_{K(\underline t)}(J)$ are crosscuts of $\mathbb{H}$ such that the former encloses the latter. Let $D$ denote the subdomain of $\mathbb{H}$ bounded by $g_{K(\underline t)}(\xi)$. From Proposition \ref{g-z-sup} we know that $D\subset\{|z-x_0|\le 5R\}$. So the Euclidean area of $D$ is less than $13\pi R^2$. By the definition of extremal length, there is a curve $\gamma$ in $D$ that separates $g_{K(\underline t)}(J)$ from $g_{K(\underline t)}(\xi)$ with Euclidean distance less than $2\sqrt{13\pi R^2*\frac{\pi}{\log(R/\varepsilon)}}<8\pi R*\log(R/\varepsilon)^{-1/2}$. Since $g_{K(\underline t)}(J)$ disconnects $V_+(\underline t)$ and $W_+(\underline t)$ from $\infty$, $\gamma$ also separates $V_+(\underline t)$ and $W_+(\underline t)$ from $\infty$. Thus, $|W_+(\underline t)-V_+(\underline t)|<8\pi R*\log(R/\varepsilon)^{-1/2}$ if $t_+\in(T_+-\delta,T_+)$ and $t_-\in[0,T_-)$. This proves the uniform convergence of $\lim_{t_+\uparrow T_+} |W_+-V_+|=0$ in $t_-\in[0,T_-)$ in the case that $\lim_{t_+\uparrow T_+}\eta_+(t_+)\in[v_+,\infty)$. The proof of the uniform convergence of $\lim_{t_+\uparrow T_+} |W_--V_-|=0$ in $t_-\in[0,T_-)$ in the case that $\lim_{t_+\uparrow T_+}\eta_+(t_+)\in(-\infty,v_-)$ is similar.
\end{proof}
From now on, {we extend $M_1$ to $\mathbb{R}_+^2$ using Lemma \ref{M-cont}. It is then} a continuous stochastic process defined on $\mathbb{R}_+^2$ with constant {value} zero on ${\mathbb{R}_+^2}\setminus {\cal D}_1$.
For $\sigma\in\{+,-\}$ and $R>|v_+-v_-|/2$, let $\tau^\sigma_R$ be the first time that $|\eta_\sigma(t)-(v_++v_-)/2|=R$ if such time exists; otherwise $\tau^\sigma_R=T_\sigma$. Let $\underline \tau_R=(\tau^+_R,\tau^-_R)$. Note that $\tau^+_R,\tau^-_R\le \mA(\underline \tau_R)\le R^2/2$ because $K(\underline\tau_R)\subset \{z\in\mathbb{H}:|z-(v_++v_-)/2|\le R\}$.
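To justify these bounds, recall that the capacity parametrization here is normalized so that $\hcap(K_\sigma(t_\sigma))=2t_\sigma$ (equivalently, $\hcap_2(K_\sigma(t_\sigma))=t_\sigma$) and $\hcap(K(\underline t))=2\mA(\underline t)$. Then $\tau^\sigma_R\le \mA(\underline\tau_R)$ by the monotonicity of $\hcap$, and since the half-disk $\{z\in\mathbb{H}:|z-(v_++v_-)/2|\le R\}$ has half-plane capacity $R^2$ (it is removed by the hydrodynamically normalized map $z\mapsto z+\frac{R^2}{z-(v_++v_-)/2}$), we get $\mA(\underline\tau_R)\le R^2/2$.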
\begin{Lemma}
For every $R>0$, $M_1(\cdot\wedge \underline \tau_R)$ is an $\mathbb{R}_+^2$-indexed martingale w.r.t.\ the filtration $({\cal F}^+_{t_+\wedge \tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R})_{(t_+,t_-)\in\mathbb{R}_+^2}$ closed by $M_1(\underline \tau_R)$. Moreover, if the underlying probability measure {for the $(\eta_+,\eta_-)$ described at the beginning of this subsection} is weighted by $M_1(\underline\tau_R)/M_1(\underline 0)$, then the new law of the driving functions $(\widehat w_+,\widehat w_-)$ agrees with $\mathbb{P}^{(2,2)}_{\underline w;\underline v} $ on the $\sigma$-algebra ${\cal F}^+_{ \tau^+_R}\vee {\cal F}^-_{ \tau^-_R}$. \label{M-mart}
\end{Lemma}
\begin{proof} Let $R>0$, $\sigma\in\{+,-\}$, $t_{-\sigma}\ge 0$, and $\tau_{-\sigma}=t_{-\sigma}\wedge \tau^{-\sigma}_R$. Since $W_\sigma|^{-\sigma}_{\tau_{-\sigma}}$, $W_{-\sigma}|^{-\sigma}_{\tau_{-\sigma}}$ and $V_\nu|^{-\sigma}_{\tau_{-\sigma}}$, $\nu\in\{+,-\}$, are all $({\cal F}^\sigma_t\vee {\cal F}^{-\sigma}_{\tau_{-\sigma}})_{t\ge 0}$-adapted, and are driving function and force point functions for hSLE$_\kappa$ curves with some speeds in the chordal coordinate conditional on ${\cal F}^{-\sigma}_{\tau_{-\sigma}}$, by Proposition \ref{Prop-iSLE-3} (with a time-change), $M_1|^{-\sigma}_{\tau_{-\sigma}}(t)$, $0\le t<T_\sigma$, is an $({\cal F}^\sigma_t\vee {\cal F}^{-\sigma}_{\tau_{-\sigma}})_{t\ge 0}$-local martingale. Since $M_1$ is uniformly bounded on $[\underline 0,\underline\tau_R]$ and $\tau^\pm_R\le R^2/2$, $M_1|^{-\sigma}_{\tau_{-\sigma}}(\cdot \wedge \tau^\sigma_R)$ is an $({\cal F}^+_{t_+\wedge \tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R})_{t_\sigma\ge 0}$-martingale closed by $M_1|^{-\sigma}_{\tau_{-\sigma}}(\tau^\sigma_R)$. Applying this result twice respectively for $\sigma=+$ and $-$, we obtain the martingale property of $M_1(\cdot\wedge \underline \tau_R)$
\begin{comment}
Fix $t_-\ge 0$. Let $\tau_-=t_-\wedge \tau_R^-$, $u(t)=\mA(t,\tau_-)-\mA(0,\tau_-)$, $\widetilde T_+=\sup u[0,T_+)$, and $\widetilde \eta_{+}^{\tau_-}(t)=\eta_{+}^{\tau_-}\circ u^{-1}(t)$, $0\le t<\widetilde T_+$. Then $\widetilde \eta_{+}^{\tau_-}$ is the normalization of $\eta_{+}^{\tau-}$, and the conditional law of $\widetilde \eta_{+}^{\tau_-}$ given ${\cal F}^-_{\tau_-}$ is that of an hSLE$_\kappa$ curve in $\mathbb{H}$ from $W_+(0,\tau_-)$ to $V_+(0,\tau_-)$ with force points $W_-(0,\tau_-)$ and $V_-(0,\tau_-)$, in the chordal coordinate. Moreover, the driving function for $\widetilde \eta_{+}^{\tau_-}$ is $W_+(u^{-1}(t),\tau_-)$, and by Lemmas \ref{W=gw} and \ref{common-function}
the force point functions started from $V_+(0,\tau_-)$, $W_-(0,\tau_-)$ and $V_-(0,\tau_-)$ are $V_+(u^{-1}(t),\tau_-)$, $W_-(u^{-1}(t),\tau_-)$ and $V_-(u^{-1}(t),\tau_-)$, respectively. Thus, $M_1(u^{-1}(t),\tau_-)$ agrees with the $M$ given in Proposition \ref{Prop-iSLE-3} with $w_0=W_+(0,\tau_-)$, $w_\infty=V_+(0,\tau_-)$, $v_1=W_-(\cdot,\tau_-)$ and $v_2=V_-(\cdot,\tau_-)$.
Let $\widetilde{\cal F}$ denote the $\mathbb{R}_+$-indexed filtration defined by $\widetilde{\cal F}_t=\sigma({\cal F}^-_{\tau_-},\widetilde\eta_{+}^{\tau_-}(s),0\le s\le t)$, $t\ge 0$.
By Proposition \ref{Prop-iSLE-3}, $M_1(u^{-1}(t),\widehat\tau^-_R)$, $0\le t<\widetilde T_+$, is an $\widetilde{\cal F}$-local martingale. By the definition of $\widetilde\eta_{+}^{\tau_-}$, for any $0\le t<T_+$, $\eta_+(t)=f_{K_-(\tau_-)}\circ \widetilde\eta_{+}^{\tau_-}(u(t))$.
Extend $u$ to $\mathbb{R}_+$ such that if $t\ge T_+$ then $u(t)=\widetilde T_+$. Then for every $t\ge 0$, $u(t)$ is an $\widetilde{\cal F}$-stopping time because for any $a\ge 0$, $u(t)> a$ if and only if $a<\widetilde T_+$ and $\hcap_2(f_{K_-(\tau_-)}\circ \widetilde\eta_{+}^{\tau_-}[0,a])<t$. So we get a filtration $\widetilde {\cal F}^u$ defined by $\widetilde{\cal F}^u_t=\widetilde{\cal F}_{u(t)}$, $t\ge 0$, and $M_1(t,\tau_-)$, $0\le t<T_+$, is an $\widetilde{\cal F}^u$-local martingale.
From $\eta_+(t)=f_{K_-(\tau_-)}\circ \widetilde\eta_{+}^{\tau_-}(u(t))$, $0\le t<T_+$, we know that ${\cal F}^+_{t}\vee {\cal F}^-_{\tau_-}\subset \widetilde {\cal F}_{u(t)}$ for $t\ge 0$. Since $\tau^+_R$ is an ${\cal F}^+$-stopping time, it is also an $\widetilde{\cal F}^u$-stopping time. Since $\tau_-\le \tau^-_R$, by the boundedness of $M_1$ on $[\underline 0,\underline \tau_R]$, $M_1(t\wedge \tau^+_R,\tau_-)$, $t\ge 0$, is a bounded $\widetilde{\cal F}^u$-martingale. Since ${\cal F}^+_{t_+\wedge \tau^+_R }\vee {\cal F}^-_{\tau_-}\subset \widetilde {\cal F}^u_{t_+}$ and $\tau_-=t_-\wedge \tau^-_R$, we conclude that $M_1(t_+\wedge \tau^+_R,t_-\wedge \tau^-_R)$, $t_+\ge 0$, is a bounded $({\cal F}^+_{t_+\wedge \widehat\tau^+_R }\vee {\cal F}^-_{t_-\wedge \tau^-_R})_{t_+\ge 0}$-martingale. This holds for any $t_-\ge 0$. Symmetrically, for any $t_+\ge 0$, $M_1(t_+\wedge \tau^+_R,t_-\wedge \tau^-_R)$, $t_-\ge 0$, is a bounded $({\cal F}^+_{t_+ \wedge \tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R})_{t_-\ge 0}$-martingale. Thus, $M_1(\underline t\wedge \underline\tau_R)$, $\underline t\in\mathbb{R}_+^2$, is a bounded $({\cal F}^+_{t_+ \wedge \tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R})_{(t_+,t_-)\in\mathbb{R}_+^2}$-martingale. Since $\tau^\pm_R\le R^2/2$, we see that $M_1(\underline t\wedge \underline \tau_R)$ is closed by $M_1(\underline \tau_R)$.
\end{comment}
Let $\mathbb{P}_1$ denote the underlying probability measure.
By weighting $\mathbb{P}_1$ by $M_1(\underline\tau_R)/M_1(\underline 0)$, we get another probability measure, denoted by $\mathbb{P}_0$. To describe the restriction of $\mathbb{P}_0$ to ${\cal F} _{\underline\tau_R}$, we study the new marginal law of $\eta_-$ up to $\tau^-_R$ and the new conditional law of $\eta_+$ up to $\tau^+_R$ given that part of $\eta_-$. We may first weight $\mathbb{P}_1$ by $N_1:=M_1(0,\tau^-_R)/M_1(0, 0)$ to get a new probability measure $\mathbb{P}_{.5}$, and then weight $\mathbb{P}_{.5}$ by $N_2:=M_1(\tau^+_R,\tau^-_R)/M_1(0,\tau^-_R)$ to get $\mathbb{P}_0$.
By Proposition \ref{Prop-iSLE-3}, the $\eta_-$ up to $\tau^-_R$ under $\mathbb{P}_{.5}$ is a chordal SLE$_\kappa (2,2,2)$ curve in $\mathbb{H}$ started from $w_-$ with force points $v_-,w_+,v_+$, respectively, up to $\tau^-_R$. Since $N_1$ depends only on $\eta_-$, the conditional law of $\eta_+$ given any part of $\eta_-$ under $\mathbb{P}_{.5}$ agrees with that under $\mathbb{P}_1$ . Since $M_1(0,\tau^-_R)=0$ implies that $N_1=0$, and $\mathbb{P}_{.5}$-a.s.\ $N_1>0$, we see that $N_2$ is $\mathbb{P}_{.5}$-a.s.\ well defined. Since $\mathbb{ E}[N_2|{\cal F}^-_{\tau^-_R}]=1$, the law of $\eta_-$ up to $\tau^-_R$ under $\mathbb{P}_0$ agrees with that under $\mathbb{P}_{.5}$. To describe the conditional law of $\eta_+$ up to $\tau^+_R=\tau^+_R(\eta_+)$ given $\eta_-$ up to $\tau^-_R$, it suffices to consider the conditional law of $\eta_{+}^{\tau^-_R}$ up to $\tau^+_R(\eta_+)$ since we may recover $\eta_+$ from $\eta_+^{\tau^-_R}$ using $\eta_+=f_{K_-(\tau^-_R)}\circ \eta_{+}^{\tau^-_R}$. By Proposition \ref{Prop-iSLE-3} again, the conditional law of the normalization of $\eta_{+}^{\tau^-_R}$ up to $\tau^+_R(\eta_+)$ under $\mathbb{P}_0$ is that of a chordal SLE$_\kappa(2,2,2)$ curve in $\mathbb{H}$ started from $W_+(0,\tau^-_R)$ with force points at $V_+(0,\tau^-_R)$, $W_-(0,\tau^-_R)$ and $V_-(0,\tau^-_R)$, respectively. Thus, under $\mathbb{P}_0$ the joint law of $\eta_+$ up to $\tau^+_R$ and $\eta_-$ up to $\tau^-_R$ agrees with that of a commuting pair of SLE$_\kappa(2,2,2)$ curves started from $(\underline w;\underline v)$ respectively up to $\tau^+_R$ and $\tau^-_R$. So the proof is complete
\end{proof}
We let $\mathbb{P}_1$ denote the joint law of the driving functions $\widehat w_+$ and $\widehat w_-$ here, and let $\mathbb{P}^0_1=\mathbb{P}^{(2,2)}_{\underline w;\underline v}$. {We will later define $\mathbb{P}_j$ and $\mathbb{P}^0_j$, $j=2,3$, in Sections \ref{section-iSLE-1} and \ref{section-iSLE-2}, and the three pairs of measures $\mathbb{P}_j,\mathbb{P}^0_j$, $j=1,2,3$, will be referred to in Section \ref{summary}.}
From the lemma, we find that, for any $\underline t=(t_+,t_-)\in\mathbb{R}_+^2$ and $R>0$,
\begin{equation} \frac{d\mathbb{P}^0_1|({\cal F}^+_{t_+\wedge\tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R} )}{d\mathbb{P}_1|({\cal F}^+_{t_+\wedge\tau^+_R}\vee {\cal F}^-_{t_-\wedge\tau^-_R}) }=\frac {M_1(\underline t\wedge \underline \tau_R)} {M_1(\underline 0)}.\label{RN-formula0}\end{equation}
\begin{Lemma}
Under $\mathbb{P}_1$, $M_1 $ is an ${\cal F}$-martingale; and for any ${\cal F}$-stopping time $\underline \tau$,
\begin{equation} \frac{d\mathbb{P}^0_1| {\cal F} _{\underline \tau } \cap\{\underline \tau\in\mathbb{R}_+^2\} }{d\mathbb{P}_1| {\cal F} _{\underline \tau } \cap\{\underline \tau\in\mathbb{R}_+^2\} }=\frac {M_1(\underline \tau)} {M_1(\underline 0)}.\label{RN-formula}\end{equation}
\label{RN-Thm1}
\end{Lemma}
\begin{proof} For $\underline t\in\mathbb{R}_+^2$ and $R>0$,
since ${\cal F}^+_{t_+\wedge \tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R}$ agrees with ${\cal F}^+_{t_+}\vee {\cal F}^-_{t_-}={\cal F}_{\underline t}$ on $\{\underline t\le \underline\tau_R\}$, by (\ref{RN-formula0}),
$$\frac{d\mathbb{P}^0_1|({\cal F}_{\underline t}\cap \{\underline t\le \underline \tau_R\}) }{d\mathbb{P}_1|({\cal F}_{\underline t}\cap \{\underline t\le \underline \tau_R\}) }=\frac {M_1(\underline t)} {M_1(\underline 0)}.$$
Sending $R\to \infty$, we get ${d\mathbb{P}^0_1| {\cal F}_{\underline t} }/{d\mathbb{P}_1| {\cal F}_{\underline t} }={M_1(\underline t)}/ {M_1(\underline 0)}$ for all $\underline t\in\mathbb{R}_+^2$. So $M_1$ is an ${\cal F}$-martingale under $\mathbb{P}_1$.
Let $\underline \tau$ be an ${\cal F}$-stopping time. Fix $A\in{\cal F}_{ \underline \tau}\cap\{\underline \tau\in\mathbb{R}_+^2\}$. Let $\underline t\in\mathbb{R}_+^2$. Define the ${\cal F}$-stopping time $\underline \tau^{\underline t}$ as in Proposition \ref{prop-local}. Then $A\cap \{\underline \tau< \underline t\}=A\cap \{\underline\tau<\underline \tau^{\underline t}\}\in {\cal F}_{\underline \tau^{\underline t}}\subset{\cal F}_{\underline t}$. So we get $$\mathbb{P}^0_1[A\cap \{\underline \tau< \underline t\}]=\mathbb{ E}_1 \Big[{\bf 1}_{A\cap \{\underline \tau< \underline t\}}\frac{M_1(\underline t)}{M_1(\underline 0)}\Big ]=\mathbb{ E}_1 \Big[{\bf 1}_{A\cap \{\underline \tau< \underline t\}}\frac{M_1(\underline \tau^{\underline t})}{M_1(\underline 0)}\Big ]=\mathbb{ E}_1\Big [{\bf 1}_{A\cap \{\underline \tau< \underline t\}}\frac{M_1( \underline\tau)}{M_1(\underline 0)} \Big],$$
where the second ``$=$'' follows from Proposition \ref{OST}. Sending both coordinates of $\underline t$ to $\infty$, we get $\mathbb{P}^0_1[A ]=\mathbb{ E}_1 [{\bf 1}_{A }\frac{M_1(\underline \tau)}{M_1(\underline 0)} ]$. So we get the desired (\ref{RN-formula}).
\end{proof}
\begin{Lemma}
For any ${\cal F}$-stopping time $\underline \tau$,
$$ \frac{d\mathbb{P}_1| {\cal F}_{\underline \tau }\cap \{\underline \tau\in{\cal D}_1\} }{d\mathbb{P}^0_1| {\cal F}_{\underline \tau } \cap \{\underline \tau\in{\cal D}_1\} }=\frac {M_1(\underline 0) }{M_1(\underline \tau) } .$$
\label{RN-Thm1-inv}
\end{Lemma}
\begin{proof}
This follows from Lemma \ref{RN-Thm1} and the fact that $M_1>0$ on ${\cal D}_1$.
\end{proof}
Assume now that $v_0:=(v_++v_-)/2\in [w_-,w_+]$. We understand $v_0$ as $w_\sigma^{-\sigma}$ if $(v_++v_-)/2=w_\sigma$, $\sigma\in\{+,-\}$. Let $V_0$ be the force point function started from $v_0$. By Section \ref{time curve}, we may define the time curve $\underline u:[0,T^u)\to {\cal D}_1$ such that $V_\sigma(\underline u(t))-V_0(\underline u(t))=e^{2t}(v_\sigma-v_0)$, $0\le t<T^u$, $\sigma\in\{+,-\}$, and $\underline u$ {cannot} be extended beyond $T^u$ {while still satisfying this property}. We follow the notation there: for every $X$ defined on $\cal D$, we use $X^u$ to denote the function $X\circ \underline u$ defined on $[0,T^u)$. We also define the processes $R_\sigma =\frac{W_\sigma^u -V_0^u }{V_\sigma^u -V_0^u }\in [0,1]$, $\sigma\in\{+,-\}$, and $\underline R=(R_+,R_-)$. Since $T_\sigma$ is an ${\cal F}^\sigma$-stopping time for $\sigma\in\{+,-\}$, ${\cal D}_1=[0,T_+)\times [0,T_-)$ is an ${\cal F}$-stopping region. As before we extend $\underline u$ to $\mathbb{R}_+$ such that if $s\ge T^u$ then $\underline u(s)=\lim_{t\uparrow T^u}\underline u(t)$. By Proposition \ref{Prop-u(t)}, for any $t\ge 0$, $\underline u(t)$ is an ${\cal F}$-stopping time.
Let $I=v_+-v_0=v_0-v_-$ and define $G_1^*$ on $[0,1]^2$ by $G_1^*(r_+,r_-)=G_1(r_+,-r_-;1,-1)$.
Then $M_1^u(t)=(e^{2t}I)^{\alpha_1} G_1^*(\underline R(t) )$ for $t\in [0,T^u)$, where $\alpha_1=2(\frac{12}\kappa-1)$ is as in Theorem \ref{main-Thm1}.
We
now derive the transition density of the process $(\underline R(t) )_{0\le t<T^u}$ under $\mathbb{P}_1$. In fact, $T^u$ is $\mathbb{P}_1$-a.s.\ finite. By saying that $\widetilde p^{R}_1(t,\underline r,\underline r^*)$ is the transition density of $(\underline R)$ under $\mathbb{P}_1$, we mean that, if $(\underline R(t))$ starts from $\underline r$, then for any bounded measurable function $f$ on $(0,1)^2$,
$$\mathbb{ E}_1[{\bf 1}_{\{T^u>t\}}f(\underline R(t) )]=\int_{[0,1]^2} f(\underline r^*)\widetilde p^{R}_1(t,\underline r,\underline r^*) d\underline r^*,\quad t>0.$$
Applying Lemma \ref{RN-Thm1-inv} to the ${\cal F}$-stopping time $\underline u(t)$, and using that $\underline u(t)\in{\cal D}_1$ iff $t<T^u$, we get
$$\frac{d\mathbb{P}_{1}| {\cal F}^u_{t}\cap \{T^u>t\} }{d\mathbb{P}^0_1| {\cal F}^u_{t} \cap \{T^u>t\} }=\frac {M_1^u(0) }{M_1^u(t) } =e^{-2\alpha_1t} \frac{G_1^* (\underline R(0)) }{G_1^* (\underline R(t)) },\quad t\ge 0.$$
Combining it with Corollary \ref{transition-R-infty}, we get the following transition density.
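In more detail, recall that under $\mathbb{P}^0_1$ the time curve, and hence $(\underline R)$, is defined at all times, and $\underline R$ has the transition density given in Corollary \ref{transition-R-infty} with $\rho_0=0$ and $\rho_+=\rho_-=2$, denoted $p^1_t$ below. Writing $\mathbb{ E}^0_1$ for expectation under $\mathbb{P}^0_1$, the display above gives, for $\underline R(0)=\underline r$ and bounded measurable $f$ on $(0,1)^2$,
$$\mathbb{ E}_1[{\bf 1}_{\{T^u>t\}}f(\underline R(t) )]=e^{-2\alpha_1 t} G_1^*(\underline r)\,\mathbb{ E}^0_1\Big[\frac{f(\underline R(t))}{G_1^*(\underline R(t))}\Big]=\int_{(0,1)^2} f(\underline r^*)\, e^{-2\alpha_1 t}p^1_t(\underline r,\underline r^*)\,\frac{G_1^*(\underline r)}{G_1^*(\underline r^*)}\,d\underline r^*,$$
which is the statement of the following lemma.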
\begin{Lemma}
Let $p^1_t(\underline r,\underline r^*)$ be the transition density $p^R_t(\underline r,\underline r^*)$ given in Corollary \ref{transition-R-infty} with $\rho_0=0$ and $\rho_+=\rho_-=2$. Then under $\mathbb{P}_{1}$, the transition density of $(\underline R)$ is
$$\widetilde p^1_t(\underline r,\underline r^*):= e^{-2\alpha_1 t}p^1_t(\underline r,\underline r^*) {G_1^* (\underline r)}/{G_1^* (\underline r^*)}.$$ \label{transition-1}
\end{Lemma}
\subsection{Opposite pair{s} of hSLE$_\kappa$ curves, the generic case} \label{section-iSLE-1}
Second, we consider another pair of random curves. Let $\underline w=(w_+,w_-)$ and $\underline v=(v_+,v_-)$ be as before. Let $(\eta_w,\eta_v)$ be a $2$-SLE$_\kappa$ in $\mathbb{H}$ with link pattern $(w_+\leftrightarrow w_-;v_+\leftrightarrow v_-)$. For $\sigma\in\{+,-\}$, let $\widehat \eta_\sigma$ be the curve $\eta_w$ oriented from $w_\sigma$ to $w_{-\sigma}$ and parametrized by the capacity viewed from $w_{-\sigma}$, which is an hSLE$_\kappa$ curve in $\mathbb{H}$ from $w_\sigma$ to $w_{-\sigma}$ with force points $v_\sigma$ and $v_{-\sigma}$. Then $\widehat \eta_+$ and $\widehat \eta_-$ are time-reversals of each other.
For $\sigma\in\{+,-\}$, parametrizing the part of $\widehat \eta_\sigma$ up to the time that it disconnects $w_{-\sigma}$ from $\infty$ by $\mathbb{H}$-capacity, we get a chordal Loewner curve: $\eta_\sigma(t)$, $0\le t<T_\sigma$, which is an hSLE$_\kappa$ curve in the chordal coordinate. Let $\widehat w_\sigma$ and $K_\sigma(\cdot)$ denote the chordal Loewner driving function and hulls for $\eta_\sigma$. Let $K(t_+,t_-)=\Hull(K_+(t_+)\cup K_-(t_-))$, $(t_+,t_-)\in[0,T_+)\times [0,T_-)$, and define an HC region:
\begin{equation} {\cal D}_2=\{\underline t\in[0,T_+)\times [0,T_-):K(\underline t)\subsetneqq \Hull(\eta_w)\}.
\label{D-before-overlap}\end{equation}
For $\sigma\in \{+,-\}$, let ${\cal F}^\sigma$ be the filtration generated by $\eta_\sigma$. Let $\tau_-$ be an ${\cal F}^-$-stopping time. Conditionally on ${\cal F}^-_{\tau_-}$ and the event $\{\tau_-<T_-\}$, the part of $\widehat\eta_w$ between $\eta_-(\tau_-)$ and $w_+$ and the whole $\eta_v$ form a $2$-SLE$_\kappa$ in $\mathbb{H}\setminus K_-(\tau_-)$ with link pattern $(w_+\leftrightarrow \eta_-(\tau_-);v_+\leftrightarrow v_-)$. So the conditional law of the part of $\widehat\eta_+$ up to hitting $\eta_-(\tau_-)$ is that of an hSLE$_\kappa$ curve in $\mathbb{H}\setminus K_-(\tau_-)$ from $w_+$ to $\eta_-(\tau_-)$ with force points $v_+,v_-$, up to a time-change. This implies that there is a random curve $\widehat\eta_+^{\tau_-}$ such that the $f_{K_-(\tau_-)}$-image of $\widehat\eta_+^{\tau_-}$ is the above part of $\widehat\eta_+$, and the conditional law of a time-change of $\widehat\eta_+^{\tau_-}$ is that of an hSLE$_\kappa$ curve in $\mathbb{H}$ from $g_{K_-(\tau_-)}(w_+)$ to $\widehat w_-(\tau_-)$ with force points $g_{K_-(\tau_-)}(v_+),g_{K_-(\tau_-)}(v_-)$. By the definition of ${\cal D}_2$, the part of $\eta_+$ up to $T^{{\cal D}_2}_+(\tau_-)$ is a time-change of the part of $\widehat\eta_+$ up to the first time that it hits $\eta_-(\tau_-)$ or separates $\eta_-(\tau_-)$ from $\infty$, which is then the $f_{K_-(\tau_-)}$-image of the part of $\widehat\eta_+^{\tau_-}$ up to the first time that it hits $\widehat w_-(\tau_-)$ or separates $\widehat w_-(\tau_-)$ from $\infty$. So there is a random curve $\eta_+^{\tau_-}$ such that the $f_{K_-(\tau_-)}$-image of $\eta_+^{\tau_-}$ is the part of $\eta_+$ up to $T^{{\cal D}_2}_+(\tau_-)$, and the conditional law of a time-change of $\eta_+^{\tau_-}$ is that of an hSLE$_\kappa$ curve in $\mathbb{H}$ from $g_{K_-(\tau_-)}(w_+)$ to $\widehat w_-(\tau_-)$ with force points $g_{K_-(\tau_-)}(v_+),g_{K_-(\tau_-)}(v_-)$, in the chordal coordinate. A similar statement holds with ``$+$'' and ``$-$'' swapped.
Taking the stopping times in the previous paragraph to be deterministic numbers, we find that $(\eta_+,\eta_-;{\cal D}_2)$ a.s.\ satisfies the conditions in Definition \ref{commuting-Loewner} with ${\cal I}_\pm=[0,T_\pm)$ and ${\cal I}_{\pm}^*={\cal I}_{\pm}\cap\mathbb{Q}$. By removing a null event, we may assume that $(\eta_+,\eta_-;{\cal D}_2)$ is always a commuting pair of chordal Loewner curves. We call $(\eta_+,\eta_-;{\cal D}_2)$ a commuting pair of hSLE$_\kappa$ curves in the chordal coordinate started from $(w_+\leftrightarrow w_-;v_+,v_-)$.
Let ${\cal F}$ be the separable $\mathbb{R}_+^2$-indexed filtration generated by ${\cal F}^+$ and ${\cal F}^-$, and let $\overline{\cal F}$ be the right-continuous augmentation of ${\cal F}$. Then ${\cal D}_2$ is an $\overline{\cal F}$-stopping region because by Lemma \ref{lem-strict},
$$\{\underline t\in{\cal D}_2\}=\lim_{\mathbb{Q}^2\ni \underline s\downarrow\underline t}(\{\underline s<(T_+,T_-)\}\cap \{K(\underline t)\subsetneqq K(\underline s)\})\in\overline{\cal F}_{\underline t},\quad \forall \underline t\in\mathbb{R}_+^2.$$
Define $M_2:{\cal D}_2\to\mathbb{R}_+$ by $M_2=G_2(W_+,W_-;V_+,V_-)$, where $G_2$ is given by (\ref{G2(w,v)}).
Since $V_+\ge W_+\ge W_-\ge V_-$, and $F$ is uniformly positive on $[0,1]$, there is a constant $C>0$ depending only on $\kappa$ such that
\begin{equation} M_2\le C |W_+-W_-|^{\frac8\kappa-1} |V_+-V_-|^{\frac{16}\kappa -1}\le C |V_+-V_-|^{\frac{2}\kappa(12-\kappa)}.\label{est-M2}\end{equation}
\begin{Lemma}
$M_2$ a.s.\ extends continuously to $\mathbb{R}_+^2$ with $M_2\equiv 0$ on $\mathbb{R}_+^2\setminus {\cal D}_2$. \label{M-cont2}
\end{Lemma}
\begin{proof}
Since for $\sigma\in\{+,-\}$, $\eta_\sigma$ a.s.\ extends continuously to $[0,T_\sigma]$, by Remark \ref{Remark-continuity-W}, $W_+$ and $W_-$ a.s.\ extend continuously to $\overline{{\cal D}_2}$. From (\ref{V-V}) we know that a.s.\ $|V_+-V_-|$ is bounded on ${\cal D}_2$. Thus, by (\ref{est-M2}) it suffices to show that the continuations of $W_+$ and $W_-$ agree on $\partial{{\cal D}_2}\cap\mathbb{R}_+^2$. Define $A_\sigma=\{t_\sigma\underline e_\sigma+T^{{\cal D}_2}_{-\sigma}(t_\sigma)\underline e_{-\sigma}:t_\sigma\in (0,T_\sigma)\}$, $\sigma\in\{+,-\}$.
Then $A_+\cup A_-$ is dense in $\partial{{\cal D}_2}\cap(0,\infty)^2$. By symmetry, it suffices to show that $W_+$ and $W_-$ agree on $A_+$.
If this is not true, then there exists $(s_+,s_-)\in{\cal D}_2$ such that $W_+(s_+,\cdot)>W_-(s_+,\cdot)$ on $[s_-,T^{{\cal D}_2}_-(s_+)]$.
Let $K_-^{\underline s}(t)=K(s_+,s_-+t)/K(\underline s)=K_-^{s_+}(s_-+t)/K_-^{s_+}(s_-)$, $0\le t<T':=T^{{\cal D}_2}_-(s_+)-s_-$. Since $K_{-}^{s_+}(t_-)$, $0\le t_-<T^{{\cal D}_2}_-(s_+)$, are chordal Loewner hulls driven by $W_-(s_+,\cdot)$ with speed $\mA(s_+,\cdot)$, $K_{-}^{\underline s}(t)$, $0\le t<T'$, are chordal Loewner hulls driven by $W_-(s_+,s_-+\cdot)$ with speed $\mA(s_+,s_-+\cdot)$. By Lemma \ref{W=gw} and Proposition \ref{prop-comp-g}, $W_+(s_+,s_-+t)=g_{K_-^{\underline s}(t)}^{W_-(\underline s)}(W_+(\underline s))$, $0\le t< T'$. Since $W_+(s_+,s_-+\cdot)>W_-(s_+,s_-+\cdot)$ on $[0,T')$, we have $\dist(W_+(\underline s),K_-^{\underline s}(t))>0$ for $0\le t<T'$. Since $W_+(s_+,s_-+\cdot)$, $W_-(s_+,s_-+\cdot)$ and $\mA(s_+,s_-+\cdot)$ all extend continuously to $[0,T']$, and $W_+(s_+,s_-+T')>W_-(s_+,s_-+T')$, the chordal Loewner process driven by $W_-(s_+,s_-+t)$, $0\le t\le T'$, with speed $\mA(s_+,s_-+\cdot)$ does not swallow $W_+(\underline s)$ at the time $T'$, which implies that $\dist(W_+(\underline s),\Hull(\bigcup_{0\le t<T'} K_-^{\underline s}(t)))>0$.
Since $\underline s\in{\cal D}_2$, by Lemma \ref{lem-strict} we may choose a (random) sequence $\delta_n\downarrow 0$ such that $\eta_+(s_++\delta_n)\in\mathbb{H}\setminus K(s_+,s_-)$ for all $n$.
Let $z_n=g_{K(s_+,s_-)}(\eta_+(s_++\delta_n))\in K(s_++\delta_n,s_-)/K(s_+,s_-)$, $n\in\mathbb{N}$, then $z_n\to W_+(\underline s)$ by (\ref{W-def}). So $\dist(z_{n},\Hull(\bigcup_{0\le t<T'} K_-^{\underline s}(t)))>0$ for $n$ big enough. However, from
$$\eta_+(s_++\delta_{n})\in \Hull(\eta_w)\setminus K(\underline s)=K(s_+,s_-+T')\setminus K(\underline s)=\Hull(\bigcup_{0\le t<T'} K(s_+,s_-+t))\setminus K(\underline s)$$
we get $z_n\in \Hull(\bigcup_{0\le t<T'} K_-^{\underline s}(t))$ for all $n$, which is a contradiction.
\begin{comment}
Since $K_{-}^{s_+}(t_-)$, $0\le t_-<T^{{\cal D}_2}_-(s_+)$, are chordal Loewner hulls driven by $W_-(s_+,\cdot)$ with speed $\mA(s_+,\cdot)$, $K_{-}^{\underline s}(t)$, $0\le t<T':=T^{{\cal D}_2}_-(s_+)-s_-$, are chordal Loewner hulls driven by $W_-(s_+,s_-+\cdot)$ with speed $\mA(s_+,s_-+\cdot)$.
Since $W_-(s_+,s_-+\cdot)$ is the driving function for $K^{\underline s}(\cdot)$, and $\inf_{t\in [0,T')}( W_+(s_+,s_-+t)-W_-(s_+,s_-+t))>0$, we conclude that $W_+(\underline s)$ has positive distance from
$$\Hull\Big(\bigcup_{t\in [0,T')} K_-^{\underline s}(t) \Big)
=K(s_+,T^{{\cal D}_2}_-(s_+))/K(\underline s)=\Hull(\eta_w)/K(\underline s).$$
Since $z_n\to W_+(s_+,s_-)$, for $n$ big enough, $z_n\not\in \Hull(\eta_w)/K(\underline s)$, which implies that $\eta_+(s_++\delta_n)=f_{K(s_+,s_-)}(z_n)\not\in \Hull(\eta_w)\setminus K(\underline s)$, which in turn implies that $\eta_+(s_++\delta_n)\in \mathbb{H}\setminus K(s_+,T^{{\cal D}_2}_-(s_+))$ because $\eta_+(s_++\delta_n)\in \mathbb{H}\setminus K(s_+,s_-)$.
By the DMP and reversibility of hSLE$_\kappa$, conditionally on $\eta_+[0,s_+]$ and $\eta_-[0,T^{{\cal D}_2}_-(s_+)]$, the part of $\eta_+$ after $s_+$ and the part of $\eta_-$ after $T^{{\cal D}_2}_-(s_+)$ are two pieces of the same hSLE$_\kappa$ curve in the closure of one connected component of $\mathbb{H}\setminus (\eta_+[0,s_+]\cup \eta_-[0,T^{{\cal D}_2}_-(s_+)])$ (with opposite directions). Since $\eta_+(s_++\delta_n)\in \mathbb{H}\setminus K(s_+,T^{{\cal D}_2}_-(s_+))$ for $n$ big enough, this connected component has to be $\mathbb{H}\setminus K(s_+,T^{{\cal D}_2}_-(s_+))$. Since $\eta_+(s_++\delta_n)\in K(s_++\delta_n,T^{{\cal D}_2}_-(s_+))$, we see that $K(s_+ ,T^{{\cal D}_2}_-(s_+))\subsetneqq K(s_++\varepsilon,T^{{\cal D}_2}_-(s_+)) $ for any $\varepsilon>0$, which contradicts that $(s_+,T^{{\cal D}_2}_-(s_+))\in\partial{\cal D}_2$.
\end{comment}
\end{proof}
From now on, we understand $M_2$ as the continuous extension defined in Lemma \ref{M-cont2}. Let $\tau^\pm_R$ and $\underline\tau_R$, $R>0$, be as defined before Lemma \ref{M-mart}.
\begin{Lemma}
For any $R>0$, $M_2(\cdot\wedge \underline \tau_R)$ is an $({\cal F}^+_{t_+\wedge \tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R})_{(t_+,t_-)\in\mathbb{R}_+^2}$-martingale closed by $M_2(\underline \tau_R)$, and if the underlying probability measure is weighted by $M_2(\underline\tau_R)/M_2(\underline 0)$, then the new law of $(\widehat w_+,\widehat w_-)$ agrees with the probability measure $\mathbb{P}^{(2,2)}_{\underline w;\underline v} $ on ${\cal F}^+_{ \tau^+_R}\vee {\cal F}^-_{ \tau^-_R}$.
\label{M-mart2}
\end{Lemma}
\begin{proof}
We follow the argument in the proof of Lemma \ref{M-mart}, where Proposition \ref{Prop-iSLE-3} is the key ingredient, except that here we use (\ref{est-M2}) instead of (\ref{M1-boundedness}). \end{proof}
Let $\mathbb{P}_2$ denote the joint law of the driving functions $\widehat w_+$ and $\widehat w_-$ here, and let $\mathbb{P}^0_2=\mathbb{P}^{(2,2)}_{\underline w;\underline v}$. Following the proof of Lemma \ref{RN-Thm1} and using Lemma \ref{M-mart2}, we get the following lemma.
\begin{Lemma}
A revision of Lemma \ref{RN-Thm1} holds with all subscripts ``$1$'' replaced by ``$2$'' and the filtration ${\cal F}$ replaced by $\overline{\cal F}$.
\label{RN-Thm2-right}
\end{Lemma}
\begin{comment}
\begin{Lemma}
Lemma \ref{RN-Thm2} holds with the filtration ${\cal F}$ replaced by $\overline{\cal F}$.
\label{RN-Thm2-right}
\end{Lemma}
\begin{proof}
By Proposition \ref{right-continuous}, $M_2$ is an $\overline{\cal F}$-martingale under $\mathbb{P}_2$.
Using Lemma \ref{RN-Thm2} and Proposition \ref{OST}, we easily get (\ref{RN-formula2-right}) in the case that $\underline \tau$ is a bounded $\overline{\cal F}$-stopping time. The result extends to the general case by Proposition \ref{prop-local}.
\end{proof}
\end{comment}
\begin{Lemma}
For any $\overline{\cal F}$-stopping time $\underline \tau$, $M_2(\underline \tau)$ is $\mathbb{P}_2$-a.s.\ positive on $\{\underline \tau\in{\cal D}_2\}$. \label{positive-M-T}
\end{Lemma}
\begin{proof}
Let $\underline \tau$ be an $\overline{\cal F}$-stopping time. Let $A=\{\underline \tau\in{\cal D}_2\}\cap\{M_2(\underline \tau)=0\}$. We are going to show that $\mathbb{P}_2[A]=0$. Since ${\cal D}_2$ is an $\overline{\cal F}$-stopping region,
we have $\{\underline \tau\in{\cal D}_2\}\in \overline{\cal F}_{\underline \tau}$, and $A \in \overline{\cal F}_{\underline \tau}$. Since $M_2(\underline \tau)=0$ on $A$, by Lemma \ref{RN-Thm2-right}, $\mathbb{P}^0_2[A]=0$. For any $\underline t\in\mathbb{Q}_+^2$, since $A\in\overline{\cal F}_{\underline\tau+\underline t}$, by Lemma \ref{RN-Thm2-right}, $\mathbb{P}_2$-a.s\ $M_2(\underline \tau+\underline t)=0$ on $A$. Thus, on the event $A$, $\mathbb{P}_2$-a.s.\ $M_2(\underline \tau+\underline t)=0$ for any $\underline t\in\mathbb{Q}_+^2$, which implies by the continuity that $M_2\equiv 0$ on $\underline \tau+\mathbb{R}_+^2$, which further implies that $W_+\equiv W_-$ on $(\underline \tau+\mathbb{R}_+^2)\cap {\cal D}_2$, which in turn implies by Lemma \ref{Lem-W} that $\eta_+(\tau_++t_+)=\eta_-(\tau_-+t_-)$ for any $\underline t=(t_+,t_-)\in\mathbb{R}_+^2$ such that $\underline\tau+\underline t\in {\cal D}_2$. This is impossible since it implies (by setting $t_-=0$) that $\eta_+$ stays constant on $[\tau_+,T^{{\cal D}_2}_+(\tau_-))$. So we have $\mathbb{P}_2[A]=0$.
\end{proof}
\begin{Remark}
We do not have $M_2>0$ on ${\cal D}_2$ if there is $(t_+,t_-)\in {\cal D}_2$ such that $\eta_+(t_+)= \eta_-(t_-)$, which almost surely happens when $\kappa\in(4,8)$.
\end{Remark}
\begin{Lemma}
A revision of Lemma \ref{RN-Thm1-inv} holds with all subscripts ``$1$'' replaced by ``$2$'' and the filtration ${\cal F}$ replaced by $\overline{\cal F}$.
\label{RN-Thm2-inv}
\end{Lemma}
\begin{proof}
This follows from Lemmas \ref{RN-Thm2-right} and \ref{positive-M-T}.
\end{proof}
Assume that $v_0:=(v_++v_-)/2\in [w_-,w_+]$, and let $V_0$ be the force point function started from $v_0$. Here if $v_0=w_\sigma$ for some $\sigma\in\{+,-\}$, we treat it as $w_\sigma^{-\sigma}$. We may define the time curve $\underline u:[0,T^u)\to {\cal D}_2$ and the processes $R_\sigma(t)$, $\sigma\in\{+,-\}$, and $\underline R(t)$ as in Section \ref{time curve}, and extend $\underline u$ to $\mathbb{R}_+$ such that $\underline u(s)=\lim_{t\uparrow T^u} \underline u(t)$ for $s\ge T^u$. Since ${\cal D}_2$ is an $\overline{\cal F}$-stopping region, by Proposition \ref{Prop-u(t)}, for any $t\ge 0$, $\underline u(t)$ is an $\overline{\cal F}$-stopping time.
Define $G_2^*$ on $[0,1]^2$ by $G_2^*(r_+,r_-)=G_2(r_+,-r_-;1,-1)$.
Then $M_2^u(t)=(e^{2t}I)^{\alpha_2} G_2^*(\underline R(t) )$ for $t\in [0,T^u)$, where $\alpha_2=2(\frac{12}\kappa-1)$ is as in Theorem \ref{main-Thm1}.
Applying Lemma \ref{RN-Thm2-inv} to $\underline u(t)$, we get
the following lemma, which is similar to Lemma \ref{transition-1}.
\begin{Lemma}
Let $p^2_t(\underline r,\underline r^*)$ be the transition density $p^R_t(\underline r,\underline r^*)$ given in Corollary \ref{transition-R-infty} with $\rho_0=0$ and $\rho_+=\rho_-=2$. Then under $\mathbb{P} _{2}$, the transition density of $(\underline R)$ is
$$ \widetilde p^2_t(\underline r,\underline r^*):= e^{-2\alpha_2 t}p^2_t(\underline r,\underline r^*) {G_2^* (\underline r)}/{G_2^* (\underline r^*)}.$$
\label{transition-2}
\end{Lemma}
\subsection{Opposite pair{s} of hSLE$_\kappa$ curves, a limit case } \label{section-iSLE-2}
Let $w_-<w_+<v_+\in\mathbb{R}$. Let $(\eta_w,\eta_v)$ be a $2$-SLE$_\kappa$ in $\mathbb{H}$ with link pattern $(w_+\leftrightarrow w_-;v_+\leftrightarrow \infty)$. For $\sigma\in\{+,-\}$, let $\widehat \eta_\sigma$ be the curve $\eta_w$ oriented from $w_\sigma$ to $w_{-\sigma}$ and parametrized by the capacity viewed from $w_{-\sigma}$, which is an hSLE$_\kappa$ curve in $\mathbb{H}$ from $w_\sigma$ to $w_{-\sigma}$. Then $\widehat \eta_+$ and $\widehat \eta_-$ are time-reversals of each other.
For $\sigma\in\{+,-\}$, parametrizing the part of $\widehat \eta_\sigma$ up to the time that it disconnects $w_{-\sigma}$ from $\infty$ by $\mathbb{H}$-capacity, we get a chordal Loewner curve: $\eta_\sigma(t)$, $0\le t<T_\sigma$, which is an hSLE$_\kappa$ curve from $w_\sigma$ to $w_{-\sigma}$ in the chordal coordinate. Define ${\cal D}_3$ using (\ref{D-before-overlap}) for the $(\eta_+,\eta_-)$ here. Then $(\eta_+,\eta_-;{\cal D}_3)$ is a.s.\ a commuting pair of chordal Loewner curves. Define $W_\pm$, $V_+$ and $\overline{\cal F}$ for the $(\eta_+,\eta_-)$ here in the same way as in the previous subsection. Then ${\cal D}_3$ is an $\overline{\cal F}$-stopping region. We call $(\eta_+,\eta_-;{\cal D}_3)$ a commuting pair of hSLE$_\kappa$ curves in the chordal coordinate started from $(w_+\leftrightarrow w_-;v_+)$.
Define $M_3:{\cal D}_3\to\mathbb{R}_+$ by $M_3=G_3(W_+,W_-;V_+)$, where $G_3$ is given by (\ref{G3(w,v)}).
Since $V_+\ge W_+\ge W_-$, we have $M_3\le C |W_+-W_-|^{\frac 8\kappa -1}|V_{+}-V_-|^{\frac {4}\kappa}\le C |V_{+}-V_-|^{\frac {12}\kappa-1}$ for some constant $C>0$ depending on $\kappa$.
Then the exactly same proof of Lemma \ref{M-cont2} can be used here to prove the following lemma.
\begin{Lemma}
$M_3$ a.s.\ extends continuously to $\mathbb{R}_+^2$ with $M_3\equiv 0$ on $\mathbb{R}_+^2\setminus {\cal D}_3$. \label{M-cont3}
\end{Lemma}
\begin{comment}
We now understand $M_3$ as the continuous extension defined on $\mathbb{R}_+^2$. Let $\tau^\pm_R$ and $\underline\tau_R$, $R>0$, be as defined before Lemma \ref{M-mart}.
\begin{Lemma}
For any $R>0$, $M_3(\cdot\wedge \underline \tau_R)$ is an $({\cal F}^+_{t_+\wedge \tau^+_R}\vee {\cal F}^-_{t_-\wedge \tau^-_R})_{(t_+,t_-)\in\mathbb{R}_+^2}$-martingale closed by $M_3(\underline \tau_R)$, and if the underlying probability measure is weighted by $M_3(\underline\tau_R)/M_3(\underline 0)$, then the new law of $(\widehat w_+,\widehat w_-)$ agrees with the probability measure $\mathbb{P}^{(2)}_{w_+,w_-;v_+} $ on ${\cal F}^+_{ \tau^+_R}\vee {\cal F}^-_{ \tau^-_R}$.
\label{M-mart3}
\end{Lemma}
\end{comment}
Let $\mathbb{P}_3$ denote the joint law of the driving functions $\widehat w_+$ and $\widehat w_-$ here, and let $\mathbb{P}^0_3$ be the joint law of the driving functions for a commuting pair of chordal SLE$_\kappa(2,2)$ started from $(w_+,w_-;v_+)$. Then similar arguments as in the previous subsection give the following lemma.
\begin{comment}
\begin{Theorem}
Under $\mathbb{P}_3$, $M_3$ is an $\overline{\cal F}$-martingale; and for any $\overline{\cal F}$-stopping time $\underline T$,
$$ \frac{d\mathbb{P}_0^3| \overline{\cal F}_{\underline T } \cap\{\underline T\in\mathbb{R}_+^2\} }{d \mathbb{P}_3| \overline{\cal F}_{\underline T } \cap\{\underline T\in\mathbb{R}_+^2\} }=\frac {M_3(\underline T)} {M_3(\underline 0)}.$$ \label{RN-Thm3}
\end{Theorem}
\end{comment}
\begin{Lemma}
Revisions of Lemmas \ref{positive-M-T} and \ref{RN-Thm2-inv} hold with all subscripts ``$2$'' replaced by ``$3$''.
\label{RN-Thm3-inv}
\end{Lemma}
Introduce two new points: $v_0=(w_++w_-)/2$ and $v_-=2v_0-v_+$. Let $V_0$ and $V_-$ be respectively the force point functions started from $v_0$ and $v_-$.
Since $v_0=(v_++v_-)/2$, we may define the time curve $\underline u:[0,T^u)\to {\cal D}_3$ and the processes $R_\sigma (t)$, $\sigma\in\{+,-\}$, and $\underline R(t)$ as in Section \ref{time curve}.
Let $G_3^*(r_+,r_-)=G_3(r_+,-r_-;1)$. Then $M_3^u(t)=(e^{2t}I)^{\alpha_3} G_3^*(\underline R(t) )$ for $t\in [0,T^u)$, where $\alpha_3=\frac{12}\kappa-1$ is as in Theorem \ref{main-Thm1}.
Applying Lemma \ref{RN-Thm3-inv} to $\underline u(t)$,
we get the following lemma, which is similar to Lemma \ref{transition-1}.
\begin{Lemma}
Let $p^3_t(\underline r,\underline r^*)$ be the transition density $p^R_t(\underline r,\underline r^*)$ given in Corollary \ref{transition-R-infty} with $\rho_0=\rho_-=0$ and $\rho_+=2$. Then under $\mathbb{P}_3$, the transition density of $(\underline R)$ is
$$ \widetilde p^3_t(\underline r,\underline r^*):= e^{-2\alpha_3 t}p^3_t(\underline r,\underline r^*) {G_3^* (\underline r)}/{G_3^* (\underline r^*)}.$$ \label{transition-3}
\end{Lemma}
\subsection{A summary}\label{summary}
For $j=1,2,3$, using Lemmas \ref{transition-1}, \ref{transition-2}, and \ref{transition-3}, we can obtain a quasi-invariant density of $\underline R$ under $\mathbb{P}_j$ as follows. Let $G_j^*(r_+,r_-)=G_j(r_+,-r_-;1,-1)$, $j=1,2$, and $G_3^*(r_+,r_-)=G_3(r_+,-r_-;1)$.
Let $p^j_\infty$ be the invariant density $p^R_\infty$ of $\underline R$ under $\mathbb{P}^0_j$ given by Corollary \ref{transition-R-infty}, where $\mathbb{P}^0_1=\mathbb{P}^0_2=\mathbb{P}_{r_+,-r_-;1,-1}^{(2,2)}$ and $\mathbb{P}^0_3=\mathbb{P}_{r_+,-r_-;1}^{(2)}$. Define
\begin{equation} {\cal Z}_j=\int_{(0,1)^2} \frac{p^j_\infty(\underline r^*)}{G_j^*(\underline r^*)}d\underline r^*,\quad \widetilde p^j_\infty=\frac 1{{\cal Z}_j} \frac{p^j_\infty}{G_j^* },\quad j=1,2,3. \label{tilZ}\end{equation}
It is straightforward to check that ${\cal Z}_j\in(0,\infty)$, $j=1,2,3$. To see this, we compute $p^j_\infty(r_+,r_-)\asymp (1-r_+)^{\frac 8\kappa -1}(1-r_-)^{\frac 8\kappa -1} (r_+r_-)^{\frac 4\kappa -1}$ for $j=1,2$, and $\asymp (1-r_+)^{\frac 8\kappa -1}(1-r_-)^{\frac 4\kappa -1}(r_+r_-)^{\frac 4\kappa -1}$ for $j=3$; $G_j^*(r_+,r_-)\asymp (1-r_+)^{\frac 8\kappa -1}(1-r_-)^{\frac 8\kappa -1}$ for $j=1$, and $\asymp (r_++r_-)^{\frac 8\kappa -1}$ for $j=2,3$.
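For instance, for $j=1$ these asymptotics give $p^1_\infty/G_1^*\asymp (r_+r_-)^{\frac 4\kappa -1}$ on $(0,1)^2$, so
$${\cal Z}_1\asymp \int_0^1\!\!\int_0^1 (r_+ r_-)^{\frac 4\kappa -1}\,dr_+dr_-=\Big(\frac \kappa 4\Big)^2<\infty,$$
and ${\cal Z}_1>0$ since the integrand in (\ref{tilZ}) is positive; the cases $j=2,3$ are checked in the same way.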
\begin{Lemma} The following statements hold.
\begin{enumerate}
\item [(i)] For any $j\in\{1,2,3\}$, $t>0$ and $\underline r^*\in(0,1)^2$, $ \int_{[0,1]^2} \widetilde p^j_\infty(\underline r)\widetilde p^j_t(\underline r,\underline r^*)d\underline r=e^{-2\alpha_jt}\widetilde p^j_\infty(\underline r^*)$.
This means, under the law $\mathbb{P}_j$, if the process $(\underline R)$ starts from a random point in $ (0,1)^2$ with density $\widetilde p^j_\infty$, then for any deterministic $t\ge 0$, the density of $\underline R(t)$ restricted to the event $\{T^u>t\}$ is $e^{-2\alpha_j t} \widetilde p^j_\infty$. So we call $\widetilde p^j_\infty$ a quasi-invariant density for $(\underline R)$ under $\mathbb{P}_j$.
\item [(ii)] Let $\beta_1=\beta_2=10$ and $\beta_3=8$. For $j\in\{1,2,3\}$ and $\underline r\in(0,1)^2$, if $\underline R$ starts from $\underline r$, then
\begin{equation}\mathbb{P}_{j}[T^u>t]={{\cal Z}_j} G_j^*(\underline r) e^{-2\alpha_j t}(1+O(e^{- \beta_j t}));
\label{P[T>t]}\end{equation}
\begin{equation} \widetilde p^R_j(t,\underline r,\underline r^*)=\mathbb{P}_{j}[T^u>t]\widetilde p^j_\infty(\underline r^*)(1+O(e^{-\beta_j t})).\label{tilptT>t}\end{equation}
Here we emphasize that the implicit constants in the $O$ symbols do not depend on $\underline r$.
\end{enumerate}
\label{property-til-p}
\end{Lemma}
\begin{proof}
Part (i) follows easily from (\ref{invar}). For part (ii), suppose $\underline R$ starts from $\underline r$. Using Corollary \ref{transition-R-infty}, Lemmas \ref{transition-1}, \ref{transition-2}, and \ref{transition-3}, and formulas (\ref{tilZ}), we get
$$\mathbb{P}_{j}[T^u>t]=\int_{(0,1)^2} \widetilde p^j_t(\underline r,\underline r^*)d\underline r^*=\int_{(0,1)^2} e^{-2\alpha_j t} p^j_t(\underline r,\underline r^*) \frac{G_j^*(\underline r)}{G_j^*(\underline r^*)}d\underline r^*$$
$$=\int_{(0,1)^2} e^{-2\alpha_j t} p^j_\infty(\underline r^*)(1+O(e^{-\beta_j t})) \frac{G_j^*(\underline r)}{G_j^*(\underline r^*)}d\underline r^*={\cal Z}_j G_j^*(\underline r) e^{-2\alpha_jt}(1+O(e^{-\beta_j t})),$$
which is (\ref{P[T>t]}); and
$$ \widetilde p^j_t(\underline r,\underline r^*)=e^{-2\alpha_j t} p^j_\infty(\underline r^*)(1+O(e^{-\beta_j t})) \frac{G_j^*(\underline r)}{G_j^*(\underline r^*)}
=e^{-2\alpha_j t} {\cal Z}_j \widetilde p^j_\infty(\underline r^*)(1+O(e^{-\beta_j t})) G_j^*(\underline r),$$
which together with (\ref{P[T>t]}) implies (\ref{tilptT>t}).
\end{proof}
We will need the following lemma, which follows from the argument in \cite[Appendix A]{Green-cut}.
\begin{Lemma}
For $j=1,2,3$, the $(\eta_+,\eta_-;{\cal D}_j)$ in the three subsections satisfies the two-curve DMP as described in Lemma \ref{DMP}, except that the conditional law of the normalization of $(\widetilde\eta_+,\widetilde\eta_-;\widetilde{\cal D}_j)$ is that of a commuting pair of hSLE$_\kappa$ curves in the chordal coordinate started respectively from $(W_+,W_-;V_+,V_-)|_{\underline\tau}$, $(W_+\leftrightarrow W_-;V_+,V_-)|_{\underline\tau}$, and $(W_+\leftrightarrow W_-;V_+)|_{\underline\tau}$.
\label{DMP-123}
\end{Lemma}
\section{Boundary Green's Functions}
We are going to prove the main theorem in this section.
\begin{Lemma}
For $j=1,2$, let $U_j$ be a simply connected subdomain of the Riemann sphere $\widehat\mathbb{C}$, which contains $\infty$ but not $0$, and let $f_j$ be a conformal map from $\mathbb{D}^*:=\widehat\mathbb{C}\setminus \{|z|\le 1\}$ onto $U_j$, which fixes $\infty$. Let $a_j=\lim_{z\to \infty} |f_j(z)|/|z|>0$, $j=1,2$, and $a=a_2/a_1$. If $R>4a_1$, then $\{|z|> R\}\subset U_1$, and $\{|z|> aR+4a_2\}\subset f_2\circ f_1^{-1}(\{|z|> R\}) \subset \{|z|\ge aR-4a_2\}$. \label{distortion}
\end{Lemma}
\begin{proof}
By scaling we may assume that $a_1=a_2=1$. Let $f=f_2\circ f_1^{-1}$.
That $\{|z|>4\}\subset U_1$ follows from Koebe's $1/4$ theorem applied to $J\circ f_1\circ J$, where $J(z):=1/z$. Fix $z_1\in U_1$. Let $z_0=f_1^{-1}(z_1)\in\mathbb{D}^*$ and $z_2=f_2(z_0)\in U_2$. Let $r_j=|z_j|$, $j=0,1,2$. Applying Koebe's distortion theorem to $J\circ f_j\circ J$, we find that $r_0+\frac 1{r_0}-2\le r_j\le r_0+\frac 1{r_0}+2$, $j=1,2$, which implies that $ |r_1-r_2|\le 4$. Thus, for $R>4$, $f(\{|z|> R\})\subset\{|z|> R-4 \}$, and $f(\{|z|=R\})\subset \{|z|\le R+4\}$. The latter inclusion implies that $f(\{|z|> R\})\supset \{|z|> R+4\}$.
\end{proof}
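For example (and this is the instance used in the proof of Theorem \ref{Thm1} below), one may take $f_2$ to be the Joukowski map:
$$f_2(z)=\frac 12\Big(z+\frac 1z\Big):\mathbb{D}^*\to \widehat\mathbb{C}\setminus[-1,1],\qquad a_2=\lim_{z\to\infty}\frac{|f_2(z)|}{|z|}=\frac 12.$$
Here $U_2=\widehat\mathbb{C}\setminus[-1,1]$ contains $\infty$ but not $0$, as required.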
\begin{Theorem}
Let $v_-<w_-<w_+<v_+\in\mathbb{R}$ be such that $0\in [v_-,v_+]$. Let $(\widehat\eta_+,\widehat\eta_-)$ be a $2$-SLE$_\kappa$ in $\mathbb{H}$ with link pattern $(w_+\leftrightarrow v_+;w_-\leftrightarrow v_-)$. Let $\alpha_1=2(\frac{12}\kappa -1)$, $\beta_1'=\frac{5}{6}$, and
$G_1(\underline w;\underline v)$ be as in (\ref{G1(w,v)}).
Then there is a constant $C>0$ depending only on $\kappa$ such that,
\begin{equation} \mathbb{P}[\widehat\eta_\sigma\cap\{|z|>L\}\ne \emptyset,\sigma\in\{+,-\}]= CL^{-\alpha_1} G_1(\underline w;\underline v)(1+O( {|v_+-v_-|}/L)^{\beta_1'}),\label{Thm1-est}\end{equation}
as $L\to \infty$,
where the implicit constants in the $O(\cdot)$ symbol depend only on $\kappa$. \label{Thm1}
\end{Theorem}
\begin{proof}
Let $p(\underline w;\underline v;L)$ denote the LHS of (\ref{Thm1-est}).
Construct the random commuting pair of chordal Loewner curves $(\eta_+,\eta_-;{\cal D}_1)$ from $\widehat\eta_+$ and $\widehat \eta_-$ as in Section \ref{section-two-curve}, where ${\cal D}_1=[0,T_+)\times [0,T_-)$, and $T_\sigma$ is the lifetime of $\eta_\sigma$, $\sigma\in\{+,-\}$. We adopt the symbols from Section \ref{section-deterministic1}. Note that, when $L>|v_+|\vee |v_-|$, $\widehat\eta_+$ and $\widehat\eta_-$ both intersect $\{|z|>L\}$ if and only if $\eta_+$ and $\eta_-$ both intersect $\{|z|>L\}$. Indeed, for any $\sigma\in\{+,-\}$, $\eta_\sigma$ either disconnects $v_{\sigma}$ from $\infty$, or disconnects $v_{-\sigma}$ from $\infty$. If $\eta_\sigma$ does not intersect $\{|z|>L\}$, then in the former case $\widehat\eta_\sigma$ grows in a bounded connected component of $\mathbb{H}\setminus \eta_\sigma$ after the end of $\eta_\sigma$, and so cannot hit $\{|z|>L\}$; and in the latter case $\eta_{-\sigma}$ grows in a bounded connected component of $\mathbb{H}\setminus \eta_\sigma$, and cannot hit $\{|z|>L\}$.

We first consider a special case: $v_\pm=\pm 1$ and $w_\pm=\pm r_\pm$, where $r_\pm \in[0,1)$. Let $v_0=0$. This case corresponds to the additional assumption (\ref{add-assump}) up to translation and dilation. Let $V_\nu$ be the force point function started from $v_\nu$, $\nu\in\{0,+,-\}$, as before. Since $|v_+-v_0|=|v_0-v_-|$, we may define a time curve $\underline u:[0,T^u)\to {\cal D}$ as in Section \ref{time curve} and adopt the symbols from there.
Define $p(\underline r;L)=p(r_+,-r_-;1,-1;L)$.
Suppose $L>2e^6$, and so $\frac 12 \log(L/2)>3$. Let $t_0\in [3,\frac 12\log(L/2))$. If both $\eta_+$ and $\eta_-$ intersect $\{|z|>L\}$, then there is some $t'\in[0,T^u)$ such that either $\eta_+\circ u_+[0,t']$ or $\eta_-\circ u_-[0,t']$ intersects $\{|z|>L\}$, which by (\ref{V-V'}) implies that $L\le 2e^{2t'}$, and so $T^u>t'\ge \log(L/2)/2>t_0$.
Thus, $\{\eta_\sigma\cap\{|z|>L\}\ne\emptyset,\sigma\in\{+,-\}\}\subset \{T^u>t_0\}$. By (\ref{V-V'}) again, $\rad_{0}(\eta_\sigma[0,u_\sigma(t_0)])\le 2e^{2t_0}<L$. So $\eta_\sigma\circ u_\sigma[0,t_0]$, $\sigma\in\{+,-\}$, do not intersect $\{|z|>L\}$.
Let $\widehat g^u_{t_0}(z)= (g_{K(\underline u(t_0))}(z)-V^u_0(t_0))/e^{2t_0}$. Then $\widehat g^u_{t_0}$ maps $\mathbb{C}\setminus (K(\underline u(t_0))^{\doub}\cup [v_-,v_+])$ conformally onto $\mathbb{C}\setminus [-1,1]$, and fixes $\infty$ with $\widehat g^u_{t_0}(z)/z\to e^{-2t_0}$ as $z\to\infty$.
From $V^u_- \le v_-<0$, $V^u_+ \ge v_+>0$, and $V^u_0=(V^u_++V^u_-)/2$, we get $|V^u_0(t_0)|\le |V^u_+(t_0)-V^u_-(t_0)|/2 =e^{2t_0}$. Applying Lemma \ref{distortion} to $f_2(z)=(z+1/z)/2$, $a_2=1/2$, $f_1=(\widehat g^u_{t_0})^{-1}\circ f_2$ and $a_1=e^{2t_0}/2$, and using that $L>2 e^{2t_0}$, we get $\{|z|> L\}\subset \mathbb{C}\setminus (K(\underline u(t_0))^{\doub}\cup [v_-,v_+])$ and
\begin{equation} \{|z|> L/e^{2t_0}-2\}\supset \widehat g^u_{t_0}(\{|z|> L\})\supset \{|z|> L/e^{2t_0}+2\}.\label{R1R2}\end{equation}
Note that both $\eta_+$ and $\eta_-$ intersect $\{|z|>L\}$ if and only if $T^u>t_0$ and the $\widehat g^u_{t_0}$-image of the parts of $\eta_\sigma$ after $u_\sigma(t_0)$, $\sigma\in\{+,-\}$, both intersect the $\widehat g^u_{t_0}$-image of $\{|z|>L\}$.
By Lemma \ref{DMP-123} for $j=1$, conditionally on ${\cal F}_{\underline u(t_0)}$ and the event $\{T^u>t_0\}$, the $\widehat g^u_{t_0}$-images of the parts of $\eta_\sigma$ after $u_\sigma(t_0)$, $\sigma\in\{+,-\}$, after normalization, form a commuting pair of hSLE$_\kappa$ curves in the chordal coordinate started from $(R_+(t_0),-R_-(t_0);1,-1)$. The condition that $\eta_\sigma(u_\sigma(t_0))\not\in \eta_{-\sigma}[0,u_{-\sigma}(t_0)]$, $\sigma\in\{+,-\}$, is a.s.\ satisfied on $\{T^u>t_0\}$, which follows from Lemma \ref{W=V} and the fact that a.s.\ $R_\sigma(t_0)=(W^u_\sigma(t_0)-V_0^u(t_0))/(V_\sigma^u(t_0)-V_0^u(t_0))>0$, $\sigma\in\{+,-\}$, on $\{T^u>t_0\}$ (because the transition density of $(\underline R)$ vanishes outside $(0,1)^2$).
From (\ref{R1R2}) we get
\begin{equation} \mathbb{P}[\eta_\sigma\cap \{|z|>L\}\ne\emptyset, \sigma\in\{+,-\}|\overline{\cal F}_{\underline u(t_0)},T^u>t_0] \gtreqless p(\underline R(t_0); L/{e^{2t_0}}\pm 2).\label{inclusion}\end{equation}
Here when we choose $+$ (resp.\ $-$) in $\pm$, the inequality holds with $\ge$ (resp.\ $\le$).
We use the approach of \cite{LR} to prove that $L^{\alpha_1} p(\underline r;L)$ converges as $L\to\infty$.
Note that the underlying probability measure for $(\eta_+,\eta_-)$ here is the $\mathbb{P}_1$ introduced in Section \ref{section-two-curve}.
We first estimate $p(L):=\int_{(0,1)^2} p(\underline r;L) \widetilde p^1_\infty(\underline r)d\underline r$, where $\widetilde p^1_\infty$ is the quasi-invariant density for the process $(\underline R)$ under $\mathbb{P}_1$ given in Lemma \ref{property-til-p}.
This is the probability that the two curves in a $2$-SLE$_\kappa$ in $\mathbb{H}$ with link pattern $(r_+\leftrightarrow 1;-r_-\leftrightarrow -1)$ both hit $\{|z|>L\}$, where $(r_+,r_-)$ is a random point in $(0,1)^2$ that follows the density $\widetilde p^1_\infty$. From Lemma \ref{property-til-p} we know that, for a deterministic time $t$, $\mathbb{P}[T^u>t]=e^{-2\alpha_1 t}$, and the law of $\underline R(t)$ conditionally on $\{T^u>t\}$ still has density $\widetilde p^1_\infty$. Thus, the conditional joint law of the $\widehat g^u_{t}$-images of the parts of $\widehat\eta_\sigma$ after $\eta_\sigma(u_\sigma(t))$, $\sigma\in\{+,-\}$, given ${\cal F}^u_{t}$ and $\{T^u>t\}$ agrees with that of $(\widehat\eta_+,\widehat\eta_-)$. From (\ref{inclusion})
we get $p(L)\gtreqless e^{-2\alpha_1 {t}} p( L/e^{2{t}}\pm 2)$.
Let $q(L)=L^{\alpha_1} p(L)$. Then
\begin{equation} q(L)\gtreqless (1\pm 2e^{2{t}}/L)^{-\alpha_1} q( L/e^{2{t}}\pm 2),\quad {\mbox{if }t\ge 3\mbox{ and }L>2e^{2t}}.\label{L-L}\end{equation}
Suppose $L_0> 4$ and $L\ge e^6(L_0+2)$. Let $t_\pm=\log(L/(L_0\mp 2))/2$. Then $L/e^{2t_\pm}\pm 2=L_0$, $t_+> t_-\ge 3$ and $L=(L_0-2)e^{2t_+}>2e^{2t_+}> 2e^{2t_-}$. From (\ref{L-L}) (applied here with $t_\pm$ in place of ${t}$) we get
\begin{equation} q(L)\gtreqless (1\mp 2/L_0)^{\alpha_1} q(L_0), \quad \mbox{if }L\ge e^6(L_0+2)\mbox{ and }L_0>4.\label{q-lim}\end{equation}
From (\ref{V-V'}) we know that $T^u>t$ implies that both $\eta_+$ and $\eta_-$ intersect $\{|z|>e^{2t}/64\}$. Since $\mathbb{P}[T^u>t]=e^{-2\alpha_1 t}>0$ for all $t\ge 0$, we see that $p$ is positive on $[0,\infty)$, and so is $q$. From (\ref{q-lim}) we see that $\lim_{L\to \infty} q(L)$ exists and lies in $(0,\infty)$. Denote it by $q(\infty)$. By fixing $L_0\ge 4$ and sending $L\to \infty$ in (\ref{q-lim}), we get
\begin{equation} p(L_0)\gtreqless q(\infty) L_0^{-\alpha_1} (1\mp 2/L_0)^{-\alpha_1},\quad \mbox{if }L_0\ge 4. \label{p-asymp}\end{equation}
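The existence of the limit $q(\infty)$ asserted above can be seen, for instance, by squeezing: for every fixed $L_0>4$, (\ref{q-lim}) gives
$$(1-2/L_0)^{\alpha_1} q(L_0)\le \liminf_{L\to\infty} q(L)\le \limsup_{L\to\infty} q(L)\le (1+2/L_0)^{\alpha_1} q(L_0),$$
and letting $L_0\to\infty$ makes the ratio of the two outer bounds tend to $1$, so the limit exists and lies in $(0,\infty)$.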
Now we estimate $p(\underline r;L)$ for a fixed deterministic $\underline r\in [0,1)^2\setminus \{(0,0)\}$.
The process $(\underline R)$ starts from $\underline r$ and has transition density $\widetilde p^1_t$ given by Lemma \ref{transition-1}. Fix $L>2e^6$ and choose $t_0\in[3, \log(L/2)/2)$. The event that both $\eta_+$ and $\eta_-$ intersect $\{|z|>L\}$ is then contained in the event $\{T^u>t_0\}$. Let $\beta_1=10$. From Lemma \ref{property-til-p} we know that $\mathbb{P}_{1}[T^u>t_0]={\cal Z}_1 G_1^*(\underline r) e^{-2\alpha_1 t_0}(1+O(e^{- \beta_1 t_0}))$, and that the law of $\underline R(t_0)$ conditionally on $\{T^u>t_0\}$ has a density on $(0,1)^2$, which equals $\widetilde p^1_\infty\cdot (1+O(e^{-\beta_1 t_0}))$. Using Lemma \ref{DMP-123} and (\ref{inclusion},\ref{p-asymp}) we get
$$p(\underline r;L)={\cal Z}_1 q(\infty) G_1^*(\underline r) e^{-2\alpha_1 t_0}(L/e^{2t_0})^{-\alpha_1}(1+O(e^{-\beta_1 t_0}))(1+O(e^{2t_0}/L)).$$
For $L>e^{36}$, by choosing $t_0>3$ such that $e^{2t_0}=L^{2/(2+\beta_1)}$ and letting $C_0={\cal Z}_1 q(\infty)$, we get $p(\underline r;L)=C_0 G_1^*(\underline r) L^{-\alpha_1} (1+O(L^{-\beta_1'}))$. Here we note that $\beta_1'=\beta_1/(\beta_1+2)$.
Since $G_1^*(r_+,r_-)=G_1(r_+,-r_-;1,-1)$, we proved (\ref{Thm1-est}) for $v_\pm =\pm 1$, $w_+\in[0,1)$, and $w_-\in(-1,0]$.
Since $G_1(aw_++b,aw_-+b;av_++b ,av_-+b)=a^{-\alpha_1} G_1(w_+,w_-;v_+,v_-)$ for any $a>0$ and $b\in\mathbb{R}$, by a translation and a dilation, we get (\ref{Thm1-est}) in the case that $(v_++v_-)/2\in[w_-,w_+]$. Here we use the assumption that $0\in[v_-,v_+]$ to control the amount of translation.
Finally, we consider all other cases, i.e., $(v_++v_-)/2\not\in [w_-,w_+]$. By symmetry, we may assume that $(v_++v_-)/2<w_-$. Let $v_0=(w_++w_-)/2$. Then $v_+>w_+>v_0>w_->v_-$, but $v_+-v_0<v_0-v_-$. We still let $V_\nu$ be the force point functions started from $v_\nu$, $\nu\in\{0,+,-\}$. By (\ref{pa-X}), $V_\nu$ satisfies the PDE $\partial_+ V_\nu\overset{\mathrm{ae}}{=} \frac{2W_{+,1}^2}{V_\nu -W_+ }$ on ${\cal D}_1^{\disj}$ as defined in Section \ref{section-deterministic-2}. Thus, on ${\cal D}_1^{\disj}$, for any $\nu_1\ne \nu_2\in\{+,-,0\}$,
$\partial_+\log |V_{\nu_1} -V_{\nu_2} |\overset{\mathrm{ae}}{=} \frac{-2W_{+,1}^2} {(V_{\nu_2} -W_+ )(V_{\nu_1} -W_+ )}$,
which implies that
\begin{equation} \frac{\partial_+(\frac{V_+-V_0}{V_0-V_-})}{\partial_+\log(V_+-V_-)}
=\frac{V_+-V_0}{W_+-V_0}\cdot \frac{V_+-V_-}{V_0-V_-} >1.\label{>1}\end{equation}
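One way to obtain (\ref{>1}) is to expand the numerator using the above expression for $\partial_+\log |V_{\nu_1} -V_{\nu_2} |$:
$$\partial_+\Big(\frac{V_+-V_0}{V_0-V_-}\Big)=\frac{V_+-V_0}{V_0-V_-}\,\big(\partial_+\log(V_+-V_0)-\partial_+\log(V_0-V_-)\big)
=\frac{V_+-V_0}{V_0-V_-}\cdot \frac{2W_{+,1}^2\,(V_+-V_-)}{(V_0-W_+)(V_+-W_+)(V_--W_+)},$$
and then divide by $\partial_+\log(V_+-V_-)=\frac{-2W_{+,1}^2}{(V_+-W_+)(V_--W_+)}$.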
Formula (\ref{>1}) means that $\frac{V_+ -V_0 }{V_0 -V_- }|^-_0$ is increasing faster than $\log(V_+ -V_-)|^-_0$. From the assumption, $\frac{V_+(\underline 0)-V_0(\underline 0)}{V_0(\underline 0)-V_-(\underline 0)}=\frac{v_+-v_0}{v_0-v_-}\in(0,1)$. Let $\tau_+$ be the first $t$ such that $\frac{V_+(t,0)-V_0(t,0)}{V_0(t,0)-V_-(t,0)}=1$; if no such time exists, then set $\tau_+=T_+$. Then $\tau_+$ is an ${\cal F}^+$-stopping time, and from (\ref{>1}) we know that, for any $0\le t<\tau_+$, $|V_+(t,0)-V_-(t,0)|<e|v_+-v_-|$, which implies by (\ref{V-V}) that $\diam([v_-,v_+]\cup\eta_+[0,t])< e|v_+-v_-|$. Let $L_1=e|v_+-v_-|$. From $0\in [v_-,v_+]$ we get $\tau_+\le \tau^+_{L_1}$.
Here and below, we write $\underline W$ and $\underline V$ for $(W_+,W_-)$ and $(V_+,V_-)$, respectively. From Lemma \ref{M-mart} we know that $M_1(\cdot\wedge \tau_{L_1}^+,0)$ is a martingale closed by $M_1(\tau_{L_1}^+,0)$. By Proposition \ref{OST} and the facts that $M_1=G_1(\underline W;\underline V)$ and $M_1(t,0)=0$ for $t\ge T_+$, we get
\begin{equation} \mathbb{ E}[{\bf 1}_{\{\tau_+<T_+\}} G_1(\underline W ;\underline V )|_{(\tau_+,0)}]=\mathbb{ E}[M_1(\tau_+,0)]=M_1(0,0)=G_1(\underline w;\underline v).\label{EM1}\end{equation}
Using the same argument as in the proof of (\ref{inclusion}) with $(\tau_+,0)$ in place of $\underline u(t_0)$ and $g_{K(\tau_+,0)}$ in place of $\widehat g^u_{t_0}$, we get
\begin{equation} \mathbb{P}[\eta_\sigma\cap\{|z|=L\}\ne\emptyset,\sigma\in\{+,-\}|{\cal F}^+_{\tau_+},\tau_+<T_+]\gtreqless p((\underline W;\underline V)|_{(\tau_+,0)} ;L\pm(V_+ -V_- )|_{(\tau_+,0)}).\label{p=exp}\end{equation}
Suppose $\tau_+<T_+$. Then the middle point of $[V_-(\tau_+,0),V_+(\tau_+,0)]$ is $V_0(\tau_+,0)$, which lies in $[W_-(\tau_+,0),W_+(\tau_+,0)]$. Also note that $0\in [V_-(\tau_+,0),V_+(\tau_+,0)]$ since $V_\pm (\tau_+,0)\gtreqless v_\pm \gtreqless 0$. Let $L_\pm=L\pm(V_+ -V_- )|_{(\tau_+,0)}$.
We may apply the result in the particular case to get
\begin{align}
p((\underline W;\underline V)|_{(\tau_+,0)} ;L_\pm) \nonumber =& C_0 G_1(\underline W;\underline V)|_{(\tau_+,0)}\cdot L_\pm^{-\alpha_1} (1+O( {(V_+ -V_- )|_{(\tau_+,0)}}/{L_\pm} )^{\beta_1'} ) \nonumber \\
=& C_0 G_1(\underline W;\underline V)|_{(\tau_+,0)}\cdot L ^{-\alpha_1} (1+O({|v_+-v_-|}/{L})^{\beta_1'}). \label{p=est}
\end{align}
Here in the last step we used $(V_+ -V_- )|_{(\tau_+,0)}\le e|v_+-v_-|$ and $L_\pm /L=1+O(|v_+-v_-|/L)$. Plugging (\ref{p=est}) into (\ref{p=exp}), taking expectation on both sides of (\ref{p=exp}), and using the fact that $\{\eta_+\cap \{|z|=L\}\ne\emptyset\}\subset\{\tau_+<T_+\}$, we get
\begin{align*}
p(\underline w;\underline v;L) =&C_0 \mathbb{ E}[{\bf 1}_{\{\tau_+<T_+\}} G_1(\underline W;\underline V)|_{(\tau_+,0)}]\cdot L^{-\alpha_1} (1+O(|v_+-v_-|/L)^{\beta_1'})\\
=&C_0G_1(\underline w;\underline v) \cdot L^{-\alpha_1} (1+O(|v_+-v_-|/L)^{\beta_1'}),
\end{align*}
where in the last step we used (\ref{EM1}). The proof is now complete.
\end{proof}
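We remark on how the error exponents fit together. In the last step of the proof above, the choice $e^{2t_0}=L^{2/(2+\beta_1)}$ balances the two error terms $e^{-\beta_1 t_0}$ and $e^{2t_0}/L$; the same balancing appears in the proofs of Theorems \ref{Thm2} and \ref{Thm3} below with $\beta_2$ and $\beta_3$ in place of $\beta_1$. With $\beta_1=\beta_2=10$ and $\beta_3=8$ from Lemma \ref{property-til-p}, this gives
$$\beta_j'=\frac{\beta_j}{\beta_j+2},\qquad \beta_1'=\beta_2'=\frac{10}{12}=\frac 56,\qquad \beta_3'=\frac{8}{10}=\frac 45.$$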
\begin{Theorem}
Let $\kappa\in(4,8)$. Then Theorem \ref{Thm1} holds with the same $\alpha_1$, $\beta_1'$, $G_1$ but a different positive constant $C$ under either of the following two modifications:
\begin{enumerate}
\item [(i)] the set $\{|z|>L\}$ is replaced by $(L,\infty)$, $(-\infty,-L)$, or $(L,\infty)\cup (-\infty,-L)$;
\item [(ii)] the event that $\eta_\sigma\cap \{|z|>L\}\ne \emptyset$, $\sigma\in \{+,-\}$, is replaced by $\eta_+\cap \eta_-\cap \{|z|>L\}\ne\emptyset$.
\end{enumerate}
\label{Thm1'}
\end{Theorem}
\begin{proof}
The same argument as in the proof of Theorem \ref{Thm1} works here, where the assumption that $\kappa\in(4,8)$ is used to guarantee that all of the relevant events have positive probability for every $L>0$.
\end{proof}
\begin{Theorem}
Let $v_-<w_-<w_+<v_+\in\mathbb{R}$ be such that $0\in [v_-,v_+]$. Let $\eta_w$ be an hSLE$_\kappa$ curve in $\mathbb{H}$ connecting $w_+$ and $w_-$ with force points $v_+$ and $v_-$.
Let $\alpha_2=\frac{2}\kappa(12-\kappa)$, $\beta_2'=\frac 56$, and $G_2$ be as in (\ref{G2(w,v)}).
Then there is a constant $C>0$ depending only on $\kappa$ such that, as $L\to \infty$,
$$ \mathbb{P}[\widehat\eta_w\cap\{|z|>L\}\ne \emptyset ]= CL^{-\alpha_2} G_2(\underline w;\underline v) (1+O ( {|v_+-v_-|}/L )^{\beta_2'}),$$
where the implicit constants in the $O(\cdot)$ symbol depend only on $\kappa$.
\label{Thm2}
\end{Theorem}
\begin{proof}
Let $(\eta_+,\eta_-;{\cal D}_2)$ be the random commuting pair of chordal Loewner curves as defined in Section \ref{section-iSLE-1}. Then for $L>\max\{|v_+|,|v_-|\}$, $\widehat \eta_w\cap\{|z|>L\}\ne \emptyset$ if and only if $\eta_{\sigma}\cap \{|z|>L\}\ne\emptyset$ for $\sigma\in \{+,-\}$.
The rest of the proof follows that of Theorem \ref{Thm1} except that we now apply Lemmas \ref{property-til-p} and \ref{DMP-123} with $j=2$ and use Lemma \ref{M-mart2}
in place of Lemma \ref{M-mart}.
\end{proof}
\begin{Theorem}
Let $ w_-<w_+<v_+ \in\mathbb{R}$ be such that $0\in [w_-,v_+]$. Let $\eta_w$ be an hSLE$_\kappa$ in $\mathbb{H}$ connecting $w_+$ and $w_-$ with force points $v_+$ and $\infty$. Let $\alpha_3=\frac {12}\kappa-1$, $\beta_3'=\frac 45$, and
$G_3(\underline w;v_+)$ be as in (\ref{G3(w,v)}).
Then there is a constant $C>0$ depending only on $\kappa$ such that, as $L\to \infty$,
$$ \mathbb{P}[\widehat\eta_w \cap\{|z|>L\}\ne \emptyset ]= CL^{-\alpha_3} G_3(\underline w;v_+) (1+O ( {|v_+-w_-|}/L )^{\beta_3'} ),$$
where the implicit constants in the $O(\cdot)$ symbol depend only on $\kappa$.
\label{Thm3}
\end{Theorem}
\begin{proof}
The proof follows those of Theorems \ref{Thm2} and \ref{Thm1} except that we now introduce $v_0:=(w_++w_-)/2$ and $v_-:=2v_0-v_+$ as in Section \ref{section-iSLE-2}. Then we can define the time curve $\underline u$ as in Section \ref{time curve} and apply Lemmas \ref{property-til-p} and \ref{DMP-123} with $j=3$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main-Thm1}]
By conformal invariance of $2$-SLE$_\kappa$, we may assume that $D=\mathbb{H}$ and $z_0=\infty$. Case (A1) follows immediately from Theorem \ref{Thm1}. Cases (A2) and (B) respectively follow from Theorems \ref{Thm2} and \ref{Thm3} since we only need to consider the Green's function for the curve connecting $w_+$ and $w_-$, which is an hSLE$_\kappa$ curve.
\end{proof}
\begin{Remark}
The hSLE$_\kappa$ curve is a special case of the intermediate SLE$_\kappa(\rho)$ (iSLE$_\kappa(\rho)$ for short) curves in \cite{kappa-rho} with $\rho=2$. An iSLE$_\kappa(\rho)$ curve is defined using Definition \ref{Def-hSLE} with $F:= \,_2F_1(1-\frac{4}\kappa,\frac{2\rho}\kappa; \frac{2\rho+4}\kappa;\cdot )$ and $\widetilde G:=\kappa \frac{F'}F+\rho$. The curve is well defined for $\kappa\in(0,8)$ and $\rho>\min\{-2,\frac \kappa 2-4\}$, and satisfies reversibility when $\kappa\in(0,4]$ and $\rho>-2$ or $\kappa\in (4,8)$ and $\rho\ge \frac \kappa 2-2$ (cf.\ \cite{multipl-kappa-rho}). When an iSLE$_\kappa(\rho)$ satisfies reversibility, we can obtain a commuting pair of iSLE$_\kappa(\rho)$ curves in the chordal coordinate started from $(w_+\leftrightarrow w_-;v_+,v_-)$ or $(w_+\leftrightarrow w_-;v_+)$ for given points $v_-<w_-<w_+<v_+$, which satisfy two-curve DMP. Following similar arguments, we find that Theorems \ref{Thm2} and \ref{Thm3} respectively hold for iSLE$_\kappa(\rho)$ curves with $\alpha_2=\frac{\rho+2}\kappa(\rho+4-\frac \kappa 2)$, $\alpha_3=\frac 2\kappa (\rho+4-\frac \kappa 2)$, $\beta_2'=\frac{2\rho+6}{2\rho+8}$, $\beta_3'=\frac{\rho+6}{\rho+8}$, and (with $F= \,_2F_1(1-\frac{4}\kappa,\frac{2\rho}\kappa; \frac{2\rho+4}\kappa;\cdot )$)
$$G_2(\underline w;\underline v)=|w_+-w_-|^{\frac 8\kappa -1}|v_+-v_-|^{\frac{\rho(2\rho+4-\kappa)}{2\kappa}}\prod_{\sigma\in\{+,-\}} |w_\sigma-v_{-\sigma}|^{\frac{2\rho}\kappa} F\Big(\frac{(v_+-w_+)(w_--v_-)}{(w_+-v_-)(v_+-w_-)}\Big)^{-1},$$
$$G_3(\underline w;v_+)=|w_+-w_-|^{\frac 8\kappa -1}|v_+-w_-|^{\frac{2\rho}\kappa} F\Big(\frac{v_+-w_+ }{ w_+-w_-}\Big)^{-1}.$$
The proofs use the estimate on the transition density of $\underline R$ under $\mathbb{P}_{\underline w;\underline v}^{(\rho,\rho)}$ and $\mathbb{P}_{\underline w;\underline v}^{(\rho)}$ (Corollary \ref{transition-R-infty}) and revisions of Lemmas \ref{RN-Thm2-inv} and \ref{RN-Thm3-inv}, with $\mathbb{P}_2^0$ and $\mathbb{P}_3^0$ now respectively representing $\mathbb{P}_{\underline w;\underline v}^{(\rho,\rho)}$ and $\mathbb{P}_{\underline w;\underline v}^{(\rho)}$, $\mathbb{P}_2$ and $\mathbb{P}_3$ now respectively representing the joint law of the driving functions for a commuting pair of iSLE$_\kappa(\rho)$ curves in the chordal coordinate started from $(w_+\leftrightarrow w_-;v_+,v_-)$ and from $(w_+\leftrightarrow w_-;v_+)$, and $M_2$ and $M_3$ replaced by $G_2(W_+,W_-;V_+,V_-)$ and $G_3(W_+,W_-;V_+)$ for the current $G_2$ and $G_3$.
The revision of Theorem \ref{Thm2} (resp.\ \ref{Thm3}) also holds in the degenerate case: $v_+=w_+^+$, in which the $\eta_w$ oriented from $w_-$ to $w_+$ is a chordal SLE$_\kappa(\rho)$ curve in $\mathbb{H}$ from $w_-$ to $w_+$ with the force point at $v_-$ (resp.\ $\infty$). After a conformal map, we then obtain the boundary Green's function for a chordal SLE$_\kappa(\rho)$ curve in $\mathbb{H}$ from $0$ to $\infty$ with the force point $v>0$ at a point $z_0\in (v,\infty)$ or at $z_0=v$. Such Green's functions may also be obtained from the traditional one-curve approach in \cite{Mink-real}.
The exponents $\alpha_2$ and $\alpha_3$ have appeared in \cite[Theorem 3.1]{MW} with a rougher estimate on the intersection probability.
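As a consistency check, specializing to $\rho=2$ recovers the hSLE$_\kappa$ exponents of Theorems \ref{Thm2} and \ref{Thm3}:
$$\alpha_2\big|_{\rho=2}=\frac 4\kappa\Big(6-\frac\kappa 2\Big)=\frac 2\kappa(12-\kappa),\qquad \alpha_3\big|_{\rho=2}=\frac 2\kappa\Big(6-\frac\kappa 2\Big)=\frac{12}\kappa-1,\qquad \beta_2'\big|_{\rho=2}=\frac 56,\qquad \beta_3'\big|_{\rho=2}=\frac 45.$$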
\end{Remark}
\section{Introduction} \label{sec:intro}
A {\em polygon} on a connected compact oriented surface $S$ with boundary is an embedded (closed) disc bounded by a sequence of properly embedded arcs $P_1 P_2, P_2 P_3, \ldots, P_{m-1} P_m, P_m P_1$, where $P_1, P_2, \ldots, P_m \in \partial S$. The points $P_1, \ldots, P_m$ are called the \emph{vertices} of the polygon and the arcs $P_i P_{i+1}$ (with $i$ taken mod $m$) are its \emph{edges}. Given a finite set of \emph{marked points} $M \subset \partial S$, a {\em polygon diagram} on $(S,M)$ is a disjoint union of polygons on $S$ whose vertices are precisely the marked points $M$. See figure \ref{polygon-diagram} for an example. Two polygon diagrams $D_1, D_2$ on $(S,M)$ are {\em equivalent} if there is an orientation preserving homeomorphism $\phi: S\to S$ such that $\phi|_{\partial S}$ is the identity and $\phi(D_1) = D_2$.
Polygon diagrams are closely related to non-crossing permutations. In this paper we count them.
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{polygon-diagram.pdf}
\caption{A polygon diagram on $S_{1,2}$.}
\label{polygon-diagram}
\end{center}
\end{figure}
Denote by $S_{g,n}$ a connected compact oriented surface of genus $g$ with $n \geq 1$ boundary components, or just $S$ when $g$ and $n$ are understood.
Label the boundary components of $S$ as $F_1, \ldots, F_n$. Since we will be performing cutting and pasting operations on polygon diagrams, it is often helpful to choose a single vertex $\mathbf{m}_i \in M \cap F_i$ to be a decorated marked point on each boundary component $F_i$ containing at least one vertex (i.e. such that $M \cap F_i \neq \emptyset$). Two polygon diagrams $D_1, D_2$ on $S$ can then be regarded as equivalent
if there is an orientation preserving diffeomorphism of $S$ taking $D_1$ to $D_2$, such that each decorated marked point on $D_1$ is mapped to the decorated marked point of $D_2$ on the same boundary component.
Fixing the total number of vertices on each boundary component $F_i$ to be $\mu_i$ (i.e. $|M \cap F_i| = \mu_i$), let $P_{g,n}(\mu_1, \ldots, \mu_n)$ be the number of equivalence classes of polygon diagrams on $(S,M)$. Clearly
$P_{g,n}$ only depends on $g,n,\mu_1, \ldots, \mu_n$ (not on the choice of particular $S$ or $M$) and is a symmetric function of the variables $\mu_1, \ldots, \mu_n$.
\begin{proposition}\label{Pexamples}
\begin{align}
\label{eqn:P01_formula}
P_{0,1}(\mu_1)&=
\begin{cases}
\binom{2\mu_1-1}{\mu_1}\frac{2}{\mu_1+1}, & \mu_1 > 0 \\
1, & \mu_1 = 0
\end{cases}\\
\label{eqn:P02_formula}
P_{0,2}(\mu_1,\mu_2)&=
\begin{cases} \binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\left(\frac{2\mu_1\mu_2}{\mu_1+\mu_2}+1\right), & \mu_1,\mu_2 > 0 \\
\binom{2\mu_1-1}{\mu_1}, & \mu_2=0
\end{cases} \\
P_{0,3}(\mu_1,\mu_2,\mu_3)&= \binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\binom{2\mu_3-1}{\mu_3}\left(2\mu_1\mu_2\mu_3+\sum_{i<j}\mu_i\mu_j+\sum^3_{i=1} \frac{\mu_i^2-\mu_i}{2\mu_i-1}+1\right) \\
P_{1,1}(\mu_1)&=\binom{2\mu_1-1}{\mu_1} \frac{1}{2\mu_1-1} \frac{\mu_1^3 + 3\mu_1^2 + 20\mu_1 - 12}{12} \label{P11}
\end{align}
\end{proposition}
Here we take the convention $\binom{-1}{0}=1$ when $\mu_i$ is $0$.
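For example, since $\binom{2\mu_1-1}{\mu_1}=\frac 12\binom{2\mu_1}{\mu_1}$ for $\mu_1>0$, formula \eqref{eqn:P01_formula} can be rewritten as
\[
P_{0,1}(\mu_1)=\frac{1}{\mu_1+1}\binom{2\mu_1}{\mu_1},
\]
the $\mu_1$-th Catalan number; for instance $P_{0,1}(3)=5$. Likewise, with the convention above, formula \eqref{P11} gives $P_{1,1}(0)=1$, counting the empty diagram, and $P_{1,1}(1)=1$.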
Suppose $D$ is a polygon diagram on $(S,M)$ where $S$ is a disc or an annulus, i.e. $(g,n) = (0,1)$ or $(0,2)$. Each boundary component $F_i$ inherits an orientation from $S$. Label the marked points of $M$ by the numbers $1,2,\ldots, |M| = \sum_{i=1}^n \mu_i$, in order around $F_1$ in the disc case, and in order around $F_1$ then $F_2$ in the annulus case.
Orienting each polygon in agreement with $S$ induces a cyclic order on the vertices (and vertex labels) of each polygon, giving the cycles of a permutation $\pi$ of $\{1,\ldots \sum \mu_i\}$. Such a permutation is known as a
{\em non-crossing permutation} if $S$ is a disc, or {\em annular non-crossing permutation} if $S$ is an annulus. We say the diagram $D$ \emph{induces} or \emph{represents} the permutation $\pi$.
Non-crossing permutations are well known combinatorial objects. It is a classical result
that the number of non-crossing permutations on the disc is a Catalan number. Annular non-crossing permutations were (so far as we know) first
introduced by King \cite{King1999}. They were studied in detail by Mingo--Nica \cite{MN2004}, Nica--Oancea \cite{NO2009}, Goulden--Nica--Oancea \cite{GNO2011},
Kim \cite{Kim2012} and Kim--Seo--Shin \cite{KSS2014}.
In general, if we number the marked points $M$ from $1$ to $|M| = \sum_{i=1}^n \mu_i$ in order around the oriented boundaries $F_1$, then $F_2$, up to $F_n$, then in a similar way, a polygon diagram represents a non-crossing permutation on a surface with arbitrary genus and an arbitrary number of boundary components. This paper studies such non-crossing permutations via polygon diagrams.
The relation between permutation and genus here differs slightly from others in the literature. The notion of genus of a permutation $\pi$ in \cite{Jacques1968} and subsequent papers such as \cite{CH2013, CH2014, CH2018}, in our language, is the \emph{smallest} genus $g$ of a surface $S$ with \emph{one boundary component} on which a polygon diagram exists representing the permutation $\pi$; equivalently, it is the genus of a surface $S$ with one boundary component on which a polygon diagram exists representing $\pi$, such that all the components of $S \backslash D$ are discs. This differs again from the notion of genus of a permutation in \cite{CFHMMPS2002}.
Given a non-crossing permutation $\pi$ on the disc, it's clear that there is a unique polygon diagram $D$ (up to equivalence) representing $\pi$. Therefore $P_{0,1}(\mu)$ is also the $\mu$-th Catalan number. Uniqueness of representation is also true for \emph{connected} annular non-crossing permutations. An annular non-crossing permutation is \emph{connected} if there is at least one edge between the two boundary components, i.e. from $F_1$ to $F_2$. Uniqueness of representation follows since an edge from $F_1$ to $F_2$ cuts the annulus into a disc.
The number of connected annular non-crossing permutations counted in $P_{0,2}(\mu_1, \mu_2)$ is known to be \cite[cor. 6.8]{MN2004}
\begin{align*}
\binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\left(\frac{2\mu_1\mu_2}{\mu_1+\mu_2}\right),
\end{align*}
which appears as a term in the formula (\ref{eqn:P02_formula}) for $P_{0,2}(\mu_1,\mu_2)$. A disconnected annular non-crossing permutation however
can be represented by several distinct polygon diagrams, and $P_{0,2}$ can be viewed as the total count of annular non-crossing permutations with
multiplicities. Similarly, in general the $P_{g,n}(\mu_1, \ldots, \mu_n)$ can be regarded as counts with multiplicity of non-crossing permutations on arbitrary connected compact oriented surfaces with boundary.
If all polygons in $D$ are bigons, then collapsing them into arcs turns $D$ into an {\em arc diagram} previously studied by the first and third authors with Koyama
\cite{DKM2017}. The count of arc diagrams exhibits quasi-polynomial behaviour, and the asymptotic behaviour is governed by intersection
numbers on the moduli space of curves. In this paper we show that the count of polygon diagrams has the same structure.
The arguments mirror those in \cite{DKM2017}.
The formulae for $P_{g,n}$ in Proposition \ref{Pexamples} suggest that $P_{g,n}(\mu_1, \ldots, \mu_n)$ is a product of the $\binom{2\mu_i-1}{\mu_i}$, together with a rational function of the $\mu_i$'s. In fact we also know the form of the denominator. Moreover, the behaviour is \emph{better} than for arc diagrams in the sense that we obtain \emph{polynomials} rather than quasi-polynomials.
\begin{theorem}\label{Pcount}
For $(g,n)\neq (0,1),(0,2)$, let $a=3g-3+n \geq 0$, and
\[
C_{g,n}(\mu)=
\frac{1}{(2\mu-1)(2\mu-3)\dots(2\mu -2a -1)}\binom{2\mu-1}{\mu}
\]
Then
\[
P_{g,n} (\mu_1, \ldots, \mu_n) = \left(\prod_{i=1}^{n}{C_{g,n}(\mu_i)}\right)F_{g,n}(\mu_1, \ldots, \mu_n)
\]
where $F_{g,n}$ is a polynomial with rational coefficients.
\end{theorem}
Note that $F_{g,n}$ might have some common factors with $(2\mu_i-1)(2\mu_i-3)\dots(2\mu_i -2a -1)$, which would
simplify the formula for $P_{g,n}$. For example, $F_{1,1}$ has a factor
$(2\mu_1-3)$, hence only $(2\mu_1-1)$ appears on the denominator in \eqref{P11}.
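Explicitly, for $(g,n)=(1,1)$ we have $a=1$, and comparing with \eqref{P11} gives
\[
C_{1,1}(\mu_1)=\frac{1}{(2\mu_1-1)(2\mu_1-3)}\binom{2\mu_1-1}{\mu_1},
\qquad
F_{1,1}(\mu_1)=(2\mu_1-3)\,\frac{\mu_1^3+3\mu_1^2+20\mu_1-12}{12}.
\]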
The $P_{g,n}$ satisfy a recursion which allows the count on a surface to be computed from the counts on surfaces with simpler topology, i.e, either smaller genus $g$, or fewer boundary components $n$, or fewer vertices $\mu_i$.
Let $X= \{1, 2, 3, \ldots, n\}$. For each $I\subseteq X$, let $\boldsymbol{\mu}_I = \{\mu_i \; \mid \; i\in I\}$.
\begin{theorem}
\label{thm:P_recursion}
For non-negative integers $g$ and $\mu_1, \ldots, \mu_n$ such that $\mu_1 > 0$, we have
\begin{align}
P_{g,n} (\mu_1, \ldots, \mu_n)
&= P_{g,n} (\mu_1 - 1, \boldsymbol{\mu}_{X\setminus \{1\}}) + \sum_{k=2}^n \mu_k P_{g,n-1} ( \mu_1 + \mu_k - 1, \boldsymbol{\mu}_{X\setminus \{1,k\}} ) \nonumber \\
& \quad + \mathop{\sum_{i+j=\mu_1 - 1}}_{j>0} \bigg[ P_{g-1,n+1} (i,j, \boldsymbol{\mu}_{X\setminus \{1\}}) + \mathop{\sum_{g_1 + g_2 = g}}_{I \sqcup J = X\setminus \{1\}} P_{g_1, |I|+1} (i, \boldsymbol{\mu}_I) \, P_{g_2, |J|+1} (j, \boldsymbol{\mu}_J) \bigg]. \label{precur-eq}
\end{align}
\end{theorem}
An edge $P_1P_2$ is {\em boundary parallel} if it cuts off a disc from the surface $S$. It is easy to create polygons using edges that are parallel to the same boundary component. The counts of these polygons are clearly combinatorial in nature instead of reflecting the underlying topology of $S$. Therefore from a topological point of view, it is natural to count polygon diagrams where none of the edges are boundary parallel. We call such a diagram a {\em pruned polygon diagram}. Let the count of pruned polygon diagrams be $Q_{g,n}(\mu_1, \ldots, \mu_n)$, i.e. the number of equivalence classes of pruned polygon diagrams on a surface of genus $g$, with $n$ boundary components, containing $\mu_1, \ldots, \mu_n$ marked points respectively. Clearly $Q_{g,n}(\mu_1, \ldots, \mu_n)$ is also a symmetric function of $\mu_1, \ldots, \mu_n$. As the name suggests, the relationship between $P_{g,n}$ and $Q_{g,n}$ mirrors that of Hurwitz numbers and pruned Hurwitz numbers \cite{DN2013}. It also mirrors the relationship between the counts of arc diagrams $G_{g,n}$ and non boundary-parallel arc diagrams $N_{g,n}$ in \cite{DKM2017}; we call the latter \emph{pruned arc diagrams}.
We call a function $f(\mu_1, \ldots, \mu_n)$ a \emph{quasi-polynomial} if it is given by a family of polynomial functions, depending on whether each of the integers $\mu_1, \ldots, \mu_n$ is zero, odd, or even (and nonzero).
In other words, a quasi-polynomial can be viewed as a collection of $3^n$ polynomials,
depending on whether each $\mu_i$ is zero, odd, or nonzero even.
Our definition of a quasi-polynomial differs slightly from the standard definition, in that $0$ is treated as a separate case rather than an even number.
More precisely, for each partition $X = X_e \sqcup X_o \sqcup X_\emptyset$, there is
a single polynomial $f^{(X_e,X_o,X_\emptyset)}( \boldsymbol{\mu}_{X_e}, \boldsymbol{\mu}_{X_o})$ such that $f(\mu_1, \ldots, \mu_n)=f^{(X_e,X_o,X_\emptyset)}(\boldsymbol{\mu}_{X_e}, \boldsymbol{\mu}_{X_o} )$ whenever $\mu_i = 0$ for $i\in X_\emptyset$, $\mu_i$ is nonzero and even for $i \in X_e$, and $\mu_i$ is odd for $i \in X_o$. (Here as above, for a set $I \subseteq X$, $\boldsymbol{\mu}_I = \{ \mu_i \; \mid \; i \in I \}$.) A quasi-polynomial is \emph{odd} if
each $f^{(X_e,X_o,X_\emptyset)}(\boldsymbol{\mu}_{X_e}, \boldsymbol{\mu}_{X_o})$ is an odd polynomial with respect to each variable $\mu_i$ with $i\in X_e\sqcup X_o$.
\begin{theorem}
\label{thm:quasipolynomiality}
For $(g,n) \neq (0,1)$ or $(0,2)$, $Q_{g,n}(\mu_1, \ldots, \mu_n)$ is an odd quasi-polynomial.
\end{theorem}
The pruned diagram count captures topological information of $S_{g,n}$. The highest
degree coefficients of the quasi-polynomial $Q_{g,n}$ are determined by intersection numbers in the compactified moduli
space $\overline{\mathcal{M}}_{g,n}$.
\begin{theorem}\label{intersection}
For $(g,n) \neq (0,1)$ or $(0,2)$, $Q_{g,n}^{(X_e,X_o,X_\emptyset)}(\mu_1, \ldots, \mu_n)$ has degree $6g-6+3n$.
The coefficient $c_{d_1,\ldots,d_n}$ of a highest degree monomial $\mu_1^{2d_1+1}\cdots\mu_n^{2d_n+1}$, where $d_1+\cdots+d_n=3g-3+n$,
is independent of the partition $(X_e,X_o)$, and
$$c_{d_1,\ldots,d_n} = \frac{1}{2^{g-1}d_1!\cdots d_n!}\int_{\overline{\mathcal{M}}_{g,n}}\psi_1^{d_1}\cdots\psi_n^{d_n}.$$
\end{theorem}
Here $\psi_i$ is the Chern class of the $i$-th tautological line bundle over the compactified moduli space $\overline{\mathcal{M}}_{g,n}$ of genus $g$ curves with $n$ marked points.
\section{Preliminaries}
\label{sec:preliminaries}
In this section we state some identities required in the sequel.
\subsection{Combinatorial identities}
\label{sec:comb_id}
The combinatorial identities required involve sums of binomial coefficients, multiplied by polynomials. The sums have a polynomial structure, analogous to the sums in \cite[defn. 5.5]{DKM2017} and \cite{NS2014}.
\begin{proposition}
\label{almostpoly}
For any integer $\alpha \geq 0$ there are polynomials $P_{\alpha}$ and $Q_{\alpha}$ such that
\begin{align*}
\sum_{0\leq i\leq n \text{ even}}{i^{2\alpha+1}\binom{2n}{n-i}} &= \frac{\binom{2n}{n}}{(2n-1)(2n-3)\dots(2n-2\alpha-1)}P_{\alpha}(n) \\
\sum_{0\leq i\leq n \text{ odd}}{i^{2\alpha+1}\binom{2n}{n-i}} &= \frac{\binom{2n}{n}}{(2n-1)(2n-3)\dots(2n-2\alpha-1)}Q_{\alpha}(n).
\end{align*}
\end{proposition}
In particular, when $\alpha = 0, 1$ we have
\begin{equation}
\label{PQ01}
P_0(n) = \frac{1}{2}(n^2 - n), \quad
Q_0 (n) = \frac{1}{2} n^2, \quad
P_1 (n) = \left( n^2 - n \right)^2
\quad \text{and} \quad
Q_1 (n) = \frac{1}{2} n^2 \left( 2n^2 - 4n + 1 \right).
\end{equation}
In other words, we have identities
\begin{gather}
\label{comb_id_oe1}
\sum_{0\leq \nu\leq \mu \text{ even}}{\nu\binom{2\mu}{\mu-\nu}} = \frac{\binom{2\mu}{\mu}}{2\mu-1}\frac{\mu^2-\mu}{2}, \quad \quad \quad
\sum_{0\leq \nu\leq \mu \text{ odd}}{\nu\binom{2\mu}{\mu-\nu}} = \frac{\binom{2\mu}{\mu}}{2\mu-1}\frac{\mu^2}{2} \\
\label{comb_id_e3}
\sum_{0\leq \nu\leq \mu \text{ even}}{\nu^3\binom{2\mu}{\mu-\nu}} = \frac{\binom{2\mu}{\mu}}{(2\mu-1)(2\mu-3)}(\mu^2-\mu)^2 \\
\label{comb_id_3o}
\sum_{0\leq \nu\leq \mu \text{ odd}}{\nu^3\binom{2\mu}{\mu-\nu}} = \frac{\binom{2\mu}{\mu}}{(2\mu-1)(2\mu-3)}\frac{\mu^2(2\mu^2-4\mu+1)}{2}
\end{gather}
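For instance, the two cubic identities \eqref{comb_id_e3} and \eqref{comb_id_3o} can be checked directly at $\mu=3$:
\[
\sum_{\substack{0\le \nu\le 3 \\ \nu \text{ even}}}\nu^3\binom{6}{3-\nu}=2^3\binom{6}{1}=48=\frac{\binom{6}{3}}{5\cdot 3}\,(3^2-3)^2,
\qquad
\sum_{\substack{0\le \nu\le 3 \\ \nu \text{ odd}}}\nu^3\binom{6}{3-\nu}=\binom{6}{2}+27=42=\frac{\binom{6}{3}}{5\cdot 3}\cdot\frac{3^2(2\cdot 3^2-4\cdot 3+1)}{2}.
\]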
\subsection{Algebraic results and identities}
We also need some results for summing polynomials over integers satisfying constraints on their sum and parities.
They can be proved as in \cite{DKM2017} using generalisations of Ehrhart's theorem as in \cite{BV1997}, but we give more elementary proofs in the appendix.
\begin{proposition}\label{lemma-odd}
For positive odd integers $k_1$, $k_2$
$$\sum_{\substack{i_1,i_2\geq 1, \ \ i_1+i_2=n \\ \{i_1,i_2\} \text{ {have fixed parities}}}}i_1^{k_1}i_2^{k_2}$$
is an odd polynomial of degree $(k_1+k_2+1)$ in $n$. Furthermore the leading coefficient is independent of the choice of parities.
\end{proposition}
In other words, in the sum above, we fix elements $\varepsilon_1, \varepsilon_2 \in \mathbb{Z}/2\mathbb{Z}$ and the sum is over integers $i_1, i_2$ such that $i_1, i_2 \geq 1$, $i_1 + i_2 = n$ and $i_1 \equiv \varepsilon_1$ mod $2$, $i_2 \equiv \varepsilon_2$ mod $2$.
Proposition \ref{lemma-odd} can be directly generalized by induction to the following.
\begin{proposition}\label{lemma-odd-induction}
For positive odd integers $k_1, k_2,\ldots ,k_m$
$$\sum_{\substack{i_1,i_2,\ldots,i_m\geq 1, \ \ i_1+i_2+\ldots +i_m=n \\ \{i_1,i_2,\ldots,i_m\} \text{ have fixed parities}}}i_1^{k_1}i_2^{k_2}\cdots i_m^{k_m}$$
is an odd polynomial of degree $(\sum_{i=1}^m k_i + m -1)$ in $n$. Furthermore the leading coefficient is independent of the choice of parities. \qed
\end{proposition}
We will need the following particular cases, which can be proved by a straightforward induction, and follow immediately from the discussion in the appendix.
\begin{lemma}\label{lem-odd-even-power-sums}
Let $n \geq 0$ be an integer.
\begin{enumerate}
\item
When $n$ is odd,
$\displaystyle
\sum_{\substack{0 \leq i \leq n \\ i \text{ odd}}} i
= \frac{ (n+1)^2 }{4}
\quad \text{and} \quad
\sum_{\substack{0 \leq i \leq n \\ i \text{ odd}}} i^2
= \frac{n(n+1)(n+2)}{6}
$.
\item
When $n$ is even,
$\displaystyle
\sum_{\substack{0 \leq i \leq n \\ i \text{ even}}} i
= \frac{ n(n+2) }{4}
\quad \text{and} \quad
\sum_{\substack{0 \leq i \leq n \\ i \text{ even}}} i^2
= \frac{n(n+1)(n+2)}{6}
$.
\end{enumerate}
\end{lemma}
\section{Basic results on polygon diagrams}
\subsection{Base case pruned enumerations}
We start by working out $Q_{g,n}$ for some small values of $(g,n)$.
\begin{proposition}\label{basecases}
\begin{align*}
Q_{0,1}(\mu_1) &= \delta_{\mu_1,0} \\
Q_{0,2}(\mu_1,\mu_2) &= \overline{\mu}_1 \delta_{\mu_1,\mu_2} \\
Q_{0,3}(\mu_1,\mu_2,\mu_3)& =
\begin{cases}
2\mu_1\mu_2\mu_3, & \mu_1,\mu_2,\mu_3 > 0 \\
\mu_1\mu_2, & \mu_1,\mu_2 > 0, \mu_3 = 0 \\
\overline{\mu}_1, & \mu_1 \text{ even}, \mu_2=\mu_3=0 \\
0, & \mu_1 \text{ odd}, \mu_2 = \mu_3=0
\end{cases} \\
\end{align*}
\end{proposition}
Here $\delta$ is the Kronecker delta and $\overline{n} = n + \delta_{n,0}$ is as in \cite{DKM2017}: for a positive integer $n$ we have $\overline{n} = n$, while $\overline{0} = 1$.
\begin{proof}
On the disc, every edge is boundary parallel. Therefore $Q_{0,1}(\mu_1) = 0$ for all positive $\mu_1$, while $Q_{0,1}(0) = 1$, counting the empty diagram.
For $(g,n)=(0,2)$, all non-boundary parallel edges must run between the two boundary components $F_1$ and $F_2$, and are all
parallel to each other. A pruned polygon diagram must consist of a number of pairwise parallel bigons running between $F_1$ and $F_2$. Therefore $Q_{0,2}(\mu_1,\mu_2) = 0$ if $\mu_1\neq \mu_2$. If $\mu_1=\mu_2 > 0$, consider
the bigon containing the decorated marked point on $F_1$. The location of its other vertex on $F_2$ uniquely determines the pruned polygon diagram. Therefore $Q_{0,2}(\mu_1,\mu_1) = \mu_1$, or
$Q_{0,2}(\mu_1,\mu_1) = \overline{\mu}_1$ if we include the trivial case $Q_{0,2}(0,0)=1$.
For $(g,n)=(0,3)$, we can embed the pair of pants in the plane, with its usual orientation, and denote the three boundary components by $F_1 = F_{\text{outer}}$, $F_2 = F_{\text{left}}$ and $F_3 = F_{\text{right}}$, with $\mu_1$, $\mu_2$ and $\mu_3$ marked points respectively. Without loss of generality assume $\mu_1\geq \mu_2, \mu_3$.
A non-boundary parallel edge can be separating, with endpoints on the same boundary component and cutting the surface into two annuli, or non-separating, with endpoints on different boundary components. See figure \ref{P03-1}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{P03-1.pdf}
\caption{Boundary labels and possible non-boundary parallel edges on a pair of pants.}
\label{P03-1}
\end{center}
\end{figure}
In any single polygon diagram on a pair of pants there can be only one type of separating edge: all separating edges must have their endpoints on the same boundary component, and must then be parallel to each other.
Consider a polygon $P$ in a pruned diagram. All its diagonals are also non-boundary parallel, for a boundary-parallel diagonal implies boundary-parallel edges. Further, $P$ cannot have more than one vertex on more than one boundary component; if there were two boundary components $F_i, F_j$ each with at least two vertices then there would be separating diagonals from each of $F_i, F_j$ to itself, impossible since there can be only one type of separating edge.
Moreover, $P$ cannot have three vertices on a single boundary component, since the three diagonals connecting them would have to be non-boundary parallel, hence separating, hence pairwise parallel; but three pairwise parallel arcs can bound at most a bigon, not a triangle.
Therefore a polygon in a pruned diagram on a pair of pants is of one of the following types:
\begin{itemize}
\item a non-separating bigon from one boundary component to another,
\item a separating bigon from one boundary component to itself,
\item a triangle with a vertex on each boundary component,
\item a triangle with two vertices on a single boundary component, and the third vertex on a different boundary component,
\item a quadrilateral with two opposite vertices on a single boundary component, and one vertex on each of the other two boundary components.
\end{itemize}
See figure \ref{pants-polygons}.
It's easy to see that there can be at most one quadrilateral or two triangles in any pruned diagram.
\begin{figure}
\begin{center}
\[
\begin{array}{cc}
\includegraphics[scale=0.65]{P03-bigons.pdf} &
\includegraphics[scale=0.65]{P03--quad.pdf} \\
\includegraphics[scale=0.65]{P03-tri1.pdf} &
\includegraphics[scale=0.65]{P03-tri2.pdf}
\end{array}
\]
\caption{The possible polygons in a pruned diagram on a pair of pants.}
\label{pants-polygons}
\end{center}
\end{figure}
If $\mu_2=\mu_3=0$, then all edges must run from $F_{\text{outer}}$ to itself and be separating. A pruned polygon diagram must consist of a number of pairwise
parallel bigons. Hence $Q_{0,3}(\mu_1,0,0)=0$ if $\mu_1$ is odd. If $\mu_1>0$ is even, then the configuration of $\frac{\mu_1}{2}$ separating bigons gives rise to
$\mu_1$ pruned polygon diagrams, as the decorated marked point can be located at any one of the $\mu_1$ positions. If $\mu_1 = 0$ then there is only the empty diagram, so in general there are $\overline{\mu}_1$ diagrams.
If $\mu_2 > 0$ and $\mu_3 = 0$, then since $\mu_1\geq \mu_2$, the possible polygons are
\begin{itemize}
\item a separating bigon between $F_{\text{outer}}$ and itself,
\item a non-separating bigon between $F_{\text{outer}}$ and $F_{\text{left}}$,
\item a triangle with two vertices on $F_{\text{outer}}$ and a vertex on $F_{\text{left}}$.
\end{itemize}
Furthermore there can be at most one triangle. If $\mu_1-\mu_2$ is even, then a pruned polygon diagram must consist of $\mu_2$ bigons
from $F_{\text{outer}}$ to $F_{\text{left}}$ and $\frac{\mu_1-\mu_2}{2}$ bigons from $F_{\text{outer}}$ to itself. If $\mu_1-\mu_2$ is odd,
then a pruned polygon diagram must consist of a single triangle, $\mu_2-1$ bigons
from $F_{\text{outer}}$ to $F_{\text{left}}$ and $\frac{\mu_1-\mu_2-1}{2}$ bigons from $F_{\text{outer}}$ to itself. Again each such
configuration determines $\mu_1\mu_2$ pruned diagrams accounting for the locations of the two decorated marked points on $F_{\text{outer}}$
and $F_{\text{left}}$.
If $\mu_1, \mu_2,\mu_3 > 0$, then because $\mu_1$ is maximal, any separating edge or separating diagonal in a quadrilateral
must be from $F_{\text{outer}}$ to itself. Therefore the single quadrilateral (if it exists) must have a pair of opposite vertices on $F_{\text{outer}}$ and one vertex each on $F_{\text{left}}$ and $F_{\text{right}}$. There are two types of triangles with a separating edge from $F_{\text{outer}}$ to itself, depending
on whether the last vertex is on $F_{\text{left}}$ or $F_{\text{right}}$. Call these \emph{left} or \emph{right} triangles respectively.
There are also two types of triangles with a vertex on each boundary component, depending on whether the triangle's boundary, inheriting an orientation from the surface, goes from $F_{\text{outer}}$ to $F_{\text{left}}$ or $F_{\text{right}}$. Call these \emph{up} or \emph{down} triangles respectively. We then have the following cases.
\begin{enumerate}[label=(\roman*)]
\item
There is one quadrilateral. Then the pruned diagram must consist of this single quadrilateral,
$\mu_2-1$ bigons between $F_{\text{outer}}$ and $F_{\text{left}}$, and
$\mu_3-1$ bigons between $F_{\text{outer}}$ and $F_{\text{right}}$. In this case we have $\mu_1 - \mu_2 - \mu_3 = 0$.
\item
There is a left and a right triangle. Then the pruned diagram must consist of these two triangles, $\mu_2-1$ bigons between $F_{\text{outer}}$ and $F_{\text{left}}$, $\mu_3-1$ bigons between $F_{\text{outer}}$ and $F_{\text{right}}$, and $\frac{\mu_1-\mu_2-\mu_3-2}{2}$ separating bigons between $F_{\text{outer}}$ and itself. In this case we have $\mu_1 - \mu_2 - \mu_3$ is positive and even.
\item
There is an up and a down triangle. Then the pruned diagram must consist of these two triangles, $\frac{\mu_1+\mu_2-\mu_3-2}{2}$
bigons between $F_{\text{outer}}$ and $F_{\text{left}}$, $\frac{\mu_1+\mu_3-\mu_2-2}{2}$ bigons between $F_{\text{outer}}$ and $F_{\text{right}}$, and $\frac{\mu_2+\mu_3-\mu_1-2}{2}$ bigons between $F_{\text{left}}$ and $F_{\text{right}}$. In this case we have $\mu_1 - \mu_2 -\mu_3$ is negative and even.
(Note that $\mu_1+\mu_2 - \mu_3$ and $\mu_1+\mu_3 - \mu_2$ are both positive and even in this case.)
\item
There is a single left (resp. right) triangle. Then the pruned diagram must consist of this triangle, $\mu_2-1$ (resp. $\mu_3-1$)
bigons between $F_{\text{outer}}$ and $F_{\text{left}}$ (resp. $F_{\text{right}}$), $\mu_3$ (resp. $\mu_2$) bigons between $F_{\text{outer}}$ and $F_{\text{right}}$ (resp. $F_{\text{left}}$), and $\frac{\mu_1-\mu_2-\mu_3-1}{2}$ separating bigons between $F_{\text{outer}}$ and itself. In this case $\mu_1 - \mu_2-\mu_3$ is positive and odd.
\item
There is a single up (resp. down) triangle. Then the pruned diagram must consist of this triangle, $\frac{\mu_1+\mu_2-\mu_3-1}{2}$
bigons between $F_{\text{outer}}$ and $F_{\text{left}}$, $\frac{\mu_1+\mu_3-\mu_2-1}{2}$ bigons between $F_{\text{outer}}$ and $F_{\text{right}}$, and $\frac{\mu_2+\mu_3-\mu_1-1}{2}$ bigons between $F_{\text{left}}$ and $F_{\text{right}}$. In this case $\mu_1 - \mu_2-\mu_3$ is negative and odd.
(Note that $\mu_1+\mu_2 - \mu_3$ and $\mu_1+\mu_3 - \mu_2$ are both positive and odd in this case.)
\item
There are only non-separating bigons. Then the pruned diagram must consist of $\frac{\mu_1+\mu_2-\mu_3}{2}$
bigons between $F_{\text{outer}}$ and $F_{\text{left}}$, $\frac{\mu_1+\mu_3-\mu_2}{2}$ bigons between $F_{\text{outer}}$ and $F_{\text{right}}$, and $\frac{\mu_2+\mu_3-\mu_1}{2}$ bigons between $F_{\text{left}}$ and $F_{\text{right}}$. In this case $\mu_1 - \mu_2-\mu_3$ is negative or zero, and even.
(Note that $\mu_1+\mu_2 - \mu_3$ and $\mu_1+\mu_3 - \mu_2$ are both positive and even in this case.)
\item
There are only bigons, some of which are separating. Then the pruned diagram must consist of $\mu_2$ bigons between $F_{\text{outer}}$ and $F_{\text{left}}$, $\mu_3$ bigons between $F_{\text{outer}}$ and $F_{\text{right}}$, and $\frac{\mu_1-\mu_2-\mu_3}{2}$ separating bigons between $F_{\text{outer}}$ and itself. In this case we have $\mu_1 - \mu_2 -\mu_3$ is positive and even.
\end{enumerate}
Observe that for each triple $(\mu_1,\mu_2,\mu_3)$, precisely two of these cases apply, depending on $\mu_1-\mu_2-\mu_3$. (Here we count the left and right versions of (iv) separately, and the up and down versions of (v) separately.) We thus have two possible configurations of polygons, and each configuration corresponds to $\mu_1\mu_2\mu_3$ pruned diagrams, accounting for the locations of the decorated marked points on the three boundary components. Thus $Q_{0,3}$ is as claimed.
\end{proof}
\subsection{Cuff diagrams}
Consider the annulus embedded in the plane with $F_1$ being the outer and $F_2$ the inner boundary. A {\em cuff diagram} is a polygon diagram on an annulus with no edges between vertices on
the inner boundary $F_2$. (These correspond to the local arc diagrams of \cite{DKM2017}.) Let $L(b,a)$ be the number, up to equivalence, of cuff diagrams with $b$ vertices on the
outer boundary $F_1$ and $a$ vertices on the inner boundary $F_2$.
\begin{proposition}\label{cuffcount}
\[
L(b,a) =
\begin{cases}
a\binom{2b}{b-a}, & a,b > 0\\
\frac{1}{2}\binom{2b}{b}, &a = 0, b >0\\
1, & a=b=0 \\
\end{cases}
\]
\end{proposition}
\begin{proof}
This argument follows \cite{DKM2017}, using ideas of Przytycki \cite{Przytycki1999}. A {\em partial arrow diagram} on a circle is a labeling of a subset of
vertices on the boundary of the circle with the label ``out".
Assume $a > 0$. We claim there is a bijection between the set of equivalence classes of
cuff diagrams counted by $L(b,a)$, on the one hand, and on the other, the set of partial arrow diagrams on a circle with $2b$ vertices and $b-a$ ``out" labels, together with a choice of
decorated marked point on the inner circle. Clearly the latter set has cardinality $a\binom{2b}{b-a}$.
This bijection is constructed as follows.
Starting from a cuff diagram $D$, observe that there are $b-a$ edges of $D$ with both endpoints on the outer boundary $F_1$. Orient these edges in an anticlockwise direction. (Note this orientation may disagree with the orientation induced from polygon boundaries.) Label the $b$ vertices on $F_1$ from $1$ to $b$ starting from the decorated marked point. Taking a slightly smaller outer circle $F'_1$ close to $F_1$, the edges of $D$ intersect $F'_1$ in $2b$ vertices,
say $1,1',2,2',\ldots, b,b'$. Label each of these $2b$ vertices ``out" if it is a starting point of one of the oriented edges.
We then have $b-a$ ``out" labels, and hence a partial arrow diagram of the required type.
The decorated marked point on the inner circle is given by the cuff diagram.
Conversely, starting from a partial arrow diagram, there is a unique way to reconstruct the edges of the cuff diagram $D$ so that
they do not intersect. Regard the circle with $2b$ vertices of the partial arrow diagram as the outer boundary $F_1$, with the $2b$ vertices lying in pairs close to each marked point of the original annulus, and with the pair close to marked point $i$ labelled $i,i'$.
Since there are both labelled and unlabelled vertices among the $2b$ vertices, there is an ``out" vertex on $F_1$ followed by an unlabelled vertex in a anticlockwise direction. The edge starting from this ``out" vertex must end at that neighbouring unlabelled vertex (otherwise
edges ending at those two vertices would intersect). Next we remove those two matched vertices and repeat the argument. Eventually
all $b-a$ ``out" vertices are matched with unlabelled vertices by $b-a$ oriented edges. The remaining $2a$ unlabelled vertices
are joined to $2a$ vertices on the inner circle $F_2$. These $2a$ edges divide the annulus into $2a$ sectors, which are further subdivided
into a number of disc regions by the oriented edges. Since $2a$ is even, the disc regions can be alternately coloured black and white. Each pair of vertices on $F_1$ is then pinched into the original marked point; the colouring can be chosen so that the pinched vertices are corners of black polygons near $F_1$. The vertices of $F_2$ can then be pinched in pairs in a unique way to produce a polygon diagram $D$, where the polygons are the black regions. This $D$ has $b$ vertices on $F_1$ and $a$ vertices on $F_2$. Finally, each vertex on $F_2$ belongs to a separate polygon with all other vertices on the outer circle. Placing the decorated marked point on $F_2$ at each vertex gives a distinct cuff diagram of the required type. See figure \ref{arrowdiagram2}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{arrowdiagram2.pdf}
\caption{Reconstructing a cuff diagram from a partial arrow diagram.}
\label{arrowdiagram2}
\end{center}
\end{figure}
If $a=0$ then the bijection fails. From the cuff diagram we can still construct a partial arrow diagram. But when the cuff diagram is being reconstructed from a partial arrow diagram, there is a single
non-disc region, so not every partial arrow diagram gives rise to a cuff diagram.
Call a partial arrow diagram {\em compatible} if it yields a cuff diagram. Since each edge is now
separating, the regions divided by the edges can still be alternately coloured black and white.
All regions are discs except one which is an annulus. Again choose the colouring so that the pairs of vertices labelled $i,i'$ on $F_1$ are pinched into corners of
black regions. The partial arrow diagram is then compatible if and only if the annulus region is white.
However, when the partial arrow diagram is not compatible, pinching instead the corners
of white regions will then result in a cuff diagram. In other words, if we rotate all the ``out" labels by one spot
counterclockwise, the new partial arrow diagram will be compatible.
Conversely, if a partial arrow diagram is compatible, then rotating its labels one spot clockwise will result in an incompatible
partial arrow diagram. Hence there is a bijection between compatible and incompatible partial arrow diagrams, and the number of
cuff diagram is exactly half of the number of partial arrow diagrams, or $\frac{1}{2}\binom{2b}{b}$.
When $a=b=0$, there is the unique empty cuff diagram.
\end{proof}
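For example, $L(1,0)=\frac 12\binom{2}{1}=1$: the unique cuff diagram consists of a single monogon cutting off a disc at the marked point (an arc encircling the inner boundary bounds no disc by itself). Similarly $L(1,1)=\binom{2}{0}=1$, realised by the single bigon joining the two marked points, while $L(2,0)=\frac 12\binom{4}{2}=3$ and $L(2,1)=\binom{4}{1}=4$.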
\subsection{Annulus enumeration}
\begin{proposition}
\label{P02prop}
\begin{align*}
P_{0,2}(\mu_1,\mu_2)&=
\begin{cases} \binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\left(\frac{2\mu_1\mu_2}{\mu_1+\mu_2}+1\right), & \mu_1,\mu_2 > 0 \\
\binom{2\mu_1-1}{\mu_1}, & \mu_2=0
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
If $\mu_2 = 0$ then a polygon diagram is just a cuff diagram, hence by proposition \ref{cuffcount}
\[
P_{0,2}(\mu_1,0)=L(\mu_1,0)=\frac{1}{2}\binom{2\mu_1}{\mu_1}=\binom{2\mu_1-1}{\mu_1}.
\]
Note that taking $\binom{-1}{0}=1$, this works even when $\mu_1=0$.
If $\mu_1, \mu_2 > 0$, then as we saw in the introduction, from \cite{MN2004} the number of connected polygon diagrams (i.e. with at least one edge from $F_1$ to $F_2$) is
\[
\binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\frac{2\mu_1\mu_2}{\mu_1+\mu_2}.
\]
If there are no edges between
the two boundaries, then the polygon diagram is a union of two cuff diagrams, hence
\begin{align*}
P_{0,2}(\mu_1,\mu_2) &=\binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\frac{2\mu_1\mu_2}{\mu_1+\mu_2} + \frac{1}{2}\binom{2\mu_1}{\mu_1}\cdot\frac{1}{2}\binom{2\mu_2}{\mu_2} \\
&=\binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\left(\frac{2\mu_1\mu_2}{\mu_1+\mu_2}+1\right)
\end{align*}
as required.
\end{proof}
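For example, $P_{0,2}(1,1)=\binom{1}{1}\binom{1}{1}\left(\frac{2}{2}+1\right)=2$: the connected diagram is the single bigon joining the two marked points, while the disconnected diagram consists of one monogon on each boundary component.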
\subsection{Decomposition of polygon diagrams}
Suppose $S$ is not a disc or an annulus. Then any polygon diagram on $S$ can be decomposed into a pruned polygon
diagram on $S$ together with $n$ cuff diagrams, one for each boundary component of $S$. Take an annular collar
of each boundary component of $S$, and isotope all boundary parallel edges to be inside the union of these annuli.
The inner circle of each annulus intersects the polygons in $\nu_i\geq 0$ arcs.
Pinch each arc into a vertex, choose one vertex on each inner circle with $\nu_i > 0$ as a
decorated marked point, and cut along each inner circle. This produces a cuff
diagram on each annular collar and a pruned polygon diagram on the shrunken surface. This decomposition
is essentially unique except for the choice of decorated marked points on the inner circles, i.e.,
a single polygon diagram will give rise to $\prod_{i=1}^n \overline{\nu}_i$ distinct decompositions. See figure \ref{pruned}.
Conversely, starting from such a decomposition, we can reconstruct the unique polygon diagram by
attaching the cuff diagrams to the pruned polygon diagram by identifying the corresponding decorated
marked points along the gluing circles, and unpinching all the vertices on the gluing circles into arcs. Therefore we have
the relationship between $P_{g,n}$ and $Q_{g,n}$, corresponding to the ``local decomposition" of arc diagrams in \cite{DKM2017}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{prune2.pdf}
\caption{The decomposition of a polygon diagram.}
\label{pruned}
\end{center}
\end{figure}
\begin{proposition}\label{PQ}
For $(g,n) \neq (0,1)$ or $(0,2)$,
\begin{equation}
\label{PQeq}
P_{g,n}(\mu_1, \ldots, \mu_n) = \sum_{0 \leq \nu_i \leq \mu_i} \left(Q_{g,n} (\nu_1, \ldots, \nu_n)\prod_{i=1}^n \frac{1}{\overline{\nu}_i}L(\mu_i, \nu_i)\right)
\end{equation}
\qed
\end{proposition}
It turns out that, dividing by a factor of 2 for each of the $\mu_i$ that is zero, we obtain a nicer form of this result, eliminating the piecewise nature of $L(\mu_i, \nu_i)$. The number of $\mu_i$ that are zero is $\sum_{i=1}^n \delta_{\mu_i,0}$. Defining
\[
{P}'_{g,n}(\mu_1, \ldots, \mu_n)=\frac{1}{2^{\sum^n_1 \delta_{\mu_i,0}}}P_{g,n}(\mu_1, \ldots, \mu_n)
\quad \text{and} \quad
{Q}'_{g,n}(\nu_1, \ldots, \nu_n)=\frac{1}{2^{\sum^n_1 \delta_{\nu_i,0}}}Q_{g,n}(\nu_1, \ldots, \nu_n),
\]
and applying proposition \ref{cuffcount}, equation \eqref{PQeq} becomes
\begin{align}\label{PQ'}
P'_{g,n}(\mu_1, \ldots, \mu_n) = \sum_{0 \leq \nu_i \leq \mu_i} \left(Q_{g,n}'(\nu_1, \ldots, \nu_n)\prod_{i=1}^n \binom{2\mu_i}{\mu_i-\nu_i}\right).
\end{align}
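For instance, when $(g,n)=(1,1)$ and $\mu_1 > 0$, so that $P'_{1,1}(\mu_1) = P_{1,1}(\mu_1)$, equation \eqref{PQ'} reads
\[
P_{1,1}(\mu_1) = \sum_{\nu_1 = 0}^{\mu_1} Q'_{1,1}(\nu_1)\binom{2\mu_1}{\mu_1-\nu_1}
= \frac{1}{2}Q_{1,1}(0)\binom{2\mu_1}{\mu_1} + \sum_{\nu_1 = 1}^{\mu_1} Q_{1,1}(\nu_1)\binom{2\mu_1}{\mu_1-\nu_1},
\]
which is the form used in the proof of proposition \ref{P11count} below.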
\subsection{Pants enumeration}
\begin{proposition}
\label{P03count}
\[P_{0,3}(\mu_1,\mu_2,\mu_3)= \binom{2\mu_1-1}{\mu_1}\binom{2\mu_2-1}{\mu_2}\binom{2\mu_3-1}{\mu_3}\left(2\mu_1\mu_2\mu_3+\sum_{i<j}\mu_i\mu_j+\sum^3_{i=1} \frac{\mu_i^2-\mu_i}{2\mu_i-1}+1\right) \]
\end{proposition}
\begin{proof}
It is easier to work with $P'$ and $Q'$. We split the sum from \eqref{PQ'}
\[
P'_{0,3}(\mu_1, \mu_2, \mu_3) = \sum_{0\leq \nu_i \leq \mu_i} \left(Q_{0,3}'(\nu_1,\nu_2,\nu_3)\prod_{i=1}^3\binom{2\mu_i}{\mu_i-\nu_i}\right)
\]
into separate sums depending on how many of the $\nu_i$ are positive.
Using proposition \ref{basecases}, the sum over $\nu_i$ all being positive is given by
\begin{align*}
\sum_{\substack{0 \leq \nu_i \leq \mu_i \\ \text{all } \nu_i \text{ positive}}} Q_{0,3}'(\nu_1,\nu_2,\nu_3)\prod_{i=1}^3\binom{2\mu_i}{\mu_i-\nu_i}
=& \sum_{\substack{0 \leq \nu_i \leq \mu_i \\ \text{all } \nu_i \text{ positive}}} 2\nu_1\nu_2\nu_3\prod_{i=1}^3\binom{2\mu_i}{\mu_i-\nu_i}
= 2\prod_{i=1}^3 \sum_{\nu_i=1}^{\mu_i}\nu_i\binom{2\mu_i}{\mu_i-\nu_i}.
\end{align*}
Proposition \ref{almostpoly} then gives this expression as
\[
2 \prod_{i=1}^3 \frac{\binom{2\mu_i}{\mu_i}}{2\mu_i-1} \left( P_0 (\mu_i) + Q_0 (\mu_i) \right)
= 2\prod_{i=1}^3\frac{\binom{2\mu_i}{\mu_i}}{2\mu_i-1}\frac{2\mu_i^2-\mu_i}{2}
= \frac{\binom{2\mu_1}{\mu_1}}{2}\cdot\frac{\binom{2\mu_2}{\mu_2}}{2}\cdot\frac{\binom{2\mu_3}{\mu_3}}{2}\cdot(2\mu_1\mu_2\mu_3).
\]
Similarly, when $\nu_1 = 0$ and $\nu_2, \nu_3$ are positive we obtain
\begin{align*}
\sum_{\substack{0 \leq \nu_i \leq \mu_i \\ \nu_1 = 0, \nu_2,\nu_3>0 }} \left(Q'_{0,3} (\nu_1,\nu_2,\nu_3)\prod_{i=1}^3\binom{2\mu_i}{\mu_i-\nu_i}\right) =&
\binom{2\mu_1}{\mu_1}\cdot \left(\sum_{\substack{0 \leq \nu_i \leq \mu_i \\ \nu_2,\nu_3 > 0 }} \left(\frac{1}{2}\nu_2\nu_3\prod_{i=2}^3\binom{2\mu_i}{\mu_i-\nu_i}\right)\right) \\
=&\frac{\binom{2\mu_1}{\mu_1}}{2}\cdot\frac{\binom{2\mu_2}{\mu_2}}{2}\cdot\frac{\binom{2\mu_3}{\mu_3}}{2}\cdot\left(\mu_2\mu_3\right).
\end{align*}
The sums over exactly two of the $\nu_i$ being positive are given by repeating the above calculation with $\nu_2 = 0$ and with $\nu_3 = 0$ in place of $\nu_1 = 0$.
Continuing, when $\nu_1 = \nu_2 = 0$ and $\nu_3 > 0$ we obtain
\begin{align*}
\sum_{\substack{0 \leq \nu_i \leq \mu_i \\ \nu_1 = \nu_2= 0, \nu_3>0 }} \left(Q_{0,3}'(\nu_1,\nu_2,\nu_3)\prod_{i=1}^3\binom{2\mu_i}{\mu_i-\nu_i}\right) =&
\binom{2\mu_1}{\mu_1}\cdot \binom{2\mu_2}{\mu_2}\cdot\left(\sum_{0<\nu_3\leq \mu_3, \nu_3 \text{ even} } \frac{1}{4}\nu_3\binom{2\mu_3}{\mu_3-\nu_3}\right) \\
=&\frac{\binom{2\mu_1}{\mu_1}}{2}\cdot\frac{\binom{2\mu_2}{\mu_2}}{2}\cdot\frac{\binom{2\mu_3}{\mu_3}}{2}\cdot\left(\frac{\mu_3^2-\mu_3}{2\mu_3-1}\right).
\end{align*}
The sum over one $\nu_i$ being positive is given by repeating the above calculation interchanging the roles of $\nu_1, \nu_2, \nu_3$. Finally when all $\nu_i$ are zero we have
\begin{align*}
\sum_{\substack{0 \leq \nu_i \leq \mu_i \\ \nu_1 = \nu_2 = \nu_3 = 0 }}
\left(Q_{0,3}'(\nu_1,\nu_2,\nu_3)\prod_{i=1}^3\binom{2\mu_i}{\mu_i-\nu_i}\right) =&
\frac{\binom{2\mu_1}{\mu_1}}{2}\cdot\frac{\binom{2\mu_2}{\mu_2}}{2}\cdot\frac{\binom{2\mu_3}{\mu_3}}{2}
\end{align*}
Note that with our convention of $\binom{-1}{0}=1$, $\binom{2\mu_i}{\mu_i} = 2^{1-\delta_{\mu_i,0}}\binom{2\mu_i-1}{\mu_i}$, that is, $\frac{1}{2}\binom{2\mu_i}{\mu_i} = 2^{-\delta_{\mu_i,0}}\binom{2\mu_i-1}{\mu_i}$.
Summing the above terms, $P_{0,3} = 2^{\sum_{i=1}^n \delta_{\mu_i,0}} P'_{0,3}$ is given as claimed.
\end{proof}
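For example, the formula gives $P_{0,3}(1,1,1) = 6$, the bracket contributing $2 + 3 + 0 + 1$, and $P_{0,3}(1,1,0) = 2$.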
\section{Recursions}
In this section we will prove recursion relations for both the polygon diagram counts $P_{g,n}$ and the
pruned polygon diagrams counts $Q_{g,n}$.
The recursion for $P_{g,n}$ is similar to that obeyed by the arc diagram counts $G_{g,n}$ in \cite{DKM2017}. The recursion for $Q_{g,n}$ appears messy at first sight, but if we only consider the dominant part, it
actually differs very little from the recursion for the non-boundary-parallel arc diagram counts $N_{g,n}$ in \cite{DKM2017}.
The top degree component of $N_{g,n}$ in turn agrees with the lattice count polynomials of Norbury, the volume polynomial of
Kontsevich, and the Weil-Petersson volume polynomials of Mirzakhani.
We orient each boundary component $F_i$ as the boundary of $S$. This induces a cyclic order on the $\mu_i$ vertices on
$F_i$, and we denote by $\sigma(v)$ the next vertex to $v$ along $F_i$. If $\mu_i \geq 2$ then $\sigma(v)\neq v$. For any polygon diagram $D$, orient the edges of $D$ by choosing the orientation on each polygon to agree with the orientation on $S$.
\subsection{Polygon counts}
We now prove theorem \ref{thm:P_recursion}, the recursion on $P_{g,n}$, which states that for $g \geq 0$ and $\mu_1 > 0$, equation \eqref{precur-eq} holds:
\begin{align*}
P_{g,n} (\mu_1, \ldots, \mu_n)
&= P_{g,n} (\mu_1 - 1, \boldsymbol{\mu}_{X\setminus \{1\}}) + \sum_{k=2}^n \mu_k P_{g,n-1} ( \mu_1 + \mu_k - 1, \boldsymbol{\mu}_{X\setminus \{1,k\}} ) \\
& \quad + \mathop{\sum_{i+j=\mu_1 - 1}}_{j>0} \bigg[ P_{g-1,n+1} (i,j, \boldsymbol{\mu}_{X\setminus \{1\}}) + \mathop{\sum_{g_1 + g_2 = g}}_{I \sqcup J = X\setminus \{1\}} P_{g_1, |I|+1} (i, \boldsymbol{\mu}_I) \, P_{g_2, |J|+1} (j, \boldsymbol{\mu}_J) \bigg].
\end{align*}
\begin{proof}[Proof of theorem \ref{thm:P_recursion}]
Consider the decorated marked point $\mathbf{m}_1$ on the boundary component $F_1$. Suppose it is a vertex of the polygon $K$ of the diagram $D$. Let $\gamma$ be the outgoing edge from $\mathbf{m}_1$. If the other endpoint of $\gamma$ is also $\mathbf{m}_1$, then $K$ is a $1$-gon, and we obtain a new polygon diagram $D'$ by removing $K$ entirely (including $\mathbf{m}_1$), and then if $\mu_1\geq 2$, selecting the new decorated marked point on $F_1$ to be $\sigma(\mathbf{m}_1)$ (if $\mu_1=1$ then there will be no vertices on $F_1$ in $D'$, so we do not need a decorated marked point).
Conversely, starting with a polygon diagram $D'$ on $S_{g,n}$ with $(\mu_1-1,\mu_2,\ldots,\mu_n)$ boundary vertices, we can insert a $1$-gon on $F_1$ just before the decorated marked point $\mathbf{m}_1'$ (if there are no vertices on $F_1$, simply insert a $1$-gon), and then move the decorated marked point to the vertex of the new $1$-gon. These two operations are inverses of each other. This bijection gives the term $P_{g,n} (\mu_1 - 1, \boldsymbol{\mu}_{X\setminus \{1\}})$ in \eqref{precur-eq}.
If the other endpoint $v$ of $\gamma$ is different from $\mathbf{m}_1$, there are several cases.
\begin{description}
\item[(A) $\gamma$ has both endpoints on $F_1$ and is non-separating.]~
We cut $S = S_{g,n}$ along $\gamma$ into $S' = S'_{g-1,n+1}$, by removing a regular strip $\gamma \times (0,\epsilon)$ from $S$, where $\gamma = \gamma \times \{0\}$ and $\{\mathbf{m}_1\}\times [0,\epsilon]\subset F_1$ is a small sub-interval of $[\mathbf{m}_1,\sigma(\mathbf{m}_1))$. Then $F_1$ splits into two arcs, which together with $\gamma$ and a parallel copy $\gamma \times \{\epsilon\}$, form two boundary components $F'_0$ and
$F'_1$ on $S'$, with $\gamma$ part of $F'_1$. If $\sigma(\mathbf{m}_1) = v$ on $F_1$, then $F'_0$ contains no vertices. We obtain a polygon diagram $D'$ on $S'$ by collapsing $\gamma$ into a single vertex $\mathbf{m}'_1$ which is the decorated marked point on $F'_1$, and setting $\sigma(\mathbf{m}_1)$ as the decorated marked point
on $F'_0$ (if there is at least one vertex on $F'_0$). The new diagram $D'$ has $i\geq 0$ vertices on $F'_0$ and $j\geq 1$ vertices on $F'_1$ with $i+j=\mu_1-1$.
Conversely starting with such a polygon diagram $D'$ on $S_{g-1,n+1}$ with $(i,j,\mu_2,\ldots,\mu_n)$ boundary vertices, we can reconstruct $D$. First expand the
decorated marked point $\mathbf{m}'_1$ on $F'_1$ into an interval. Then glue a strip joining this interval on $F'_1$ to an interval just before the decorated marked point on $F'_0$. (If $i=0$, we can glue to any interval on $F'_0$.) This bijection gives the term
$\sum_{\substack{i+j=\mu_1 - 1, \ j>0}} P_{g-1,n+1} (i,j,\boldsymbol{\mu}_{X\setminus \{1\}})$ in \eqref{precur-eq}.
\item[(B) $\gamma$ has both endpoints on $F_1$ and is separating.]~
This is almost the same as the previous case. As before, we cut $S_{g,n}$ along $\gamma$ into two surfaces $S'_1$ and $S'_2$
with polygon diagrams $D'_1$ and $D'_2$, such that the
new vertex $\mathbf{m}'_1$ obtained from collapsing $\gamma$ is on $S'_2$. The polygon diagram $D$ can be uniquely reconstructed from such a pair
$(D'_1,D'_2)$. This bijection gives the term
$\sum_{\substack{i+j=\mu_1 - 1, \ j>0}} \sum_{g_1 + g_2 = g, \ I \sqcup J = X\setminus \{1\}} P_{g_1, |I|+1} (i, \boldsymbol{\mu}_I) \, P_{g_2, |J|+1} (j, \boldsymbol{\mu}_J)$
of \eqref{precur-eq}.
\item[(C) $\gamma$ has endpoints $\mathbf{m}_1$ on $F_1$ and $v$ on $F_k$, $k > 1$.]~
In this case $\gamma$ is necessarily non-separating. Cutting $S_{g,n}$ along $\gamma$ and collapsing $\gamma$ following a similar procedure results in a polygon diagram
$D'$ on a surface $S'_{g,n-1}$ with $\mu_1+\mu_k-1$ vertices on its new boundary component $F'_1$, and the collapsed vertex $\mathbf{m}'_1$ as the
decorated marked point on $F'_1$. However this is not a bijection since the information about the original location of the decorated
marked point on $F_k$ (relative to $v$) is forgotten in $D'$. In fact the map $D \to D'$ is $\mu_k$-to-$1$.
The decorated marked point $\mathbf{m}_k$ can be placed in any of the $\mu_k$ locations (relative to $v$). All $\mu_k$ such polygon diagrams will give rise to
the same $D'$ after cutting along $\gamma$. Taking the multiplicity $\mu_k$ into account gives the term
$\sum_{k=2}^n \mu_k P_{g,n-1} ( \mu_1 + \mu_k - 1, \boldsymbol{\mu}_{X\setminus \{1,k\}} )$ of \eqref{precur-eq}.
\end{description}
\end{proof}
\subsection{Pruned polygon counts}
The recursion for pruned polygon diagrams follows from a similar analysis. It is more tedious due to the
fact that after cutting along an edge $\gamma$, some other edges may become boundary parallel, so more care is required.
Following \cite{DKM2017}, we previously defined $\overline{n} = n$ for a positive integer $n$, and $\overline{0} = 1$. We now introduce another notation of a similar nature.
\begin{definition}
\label{tilde_notation}
For an integer $\mu$, let $\widetilde{\mu} = \mu$ if $\mu$ is a positive
even integer, and $0$ otherwise.
\end{definition}
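For example, $\widetilde{4} = 4$, while $\widetilde{3} = \widetilde{0} = \widetilde{-2} = 0$.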
\begin{theorem}\label{qrecursion}
For $(g,n)\neq (0,1),(0,2),(0,3)$, the number of pruned polygon diagrams satisfies the following recursion:
\begin{align}
&Q_{g,n}(\mu_1, \ldots, \mu_n) \nonumber =
\sum_{\substack{i+j+m = \mu_1 \\ i\geq 1, j,m\geq 0}}m Q_{g-1,n+1}(i,j,\boldsymbol{\mu}_{X\setminus \{1\}}) + \frac{\widetilde{\mu}_1}{2}Q_{g-1,n+1}(0,0,\boldsymbol{\mu}_{X\setminus \{1\}}) \nonumber\\
+&\sum_{\substack{\mu_k>0 \\ 2\leq k \leq n}}\left( \sum_{\substack{i+m = \mu_1+\mu_k \\ i\geq 1, m\geq 0}}m \mu_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}) +
\widetilde{\sum_{\substack{i+x = \mu_1-\mu_k \\ i\geq 1, x\geq 0}}}x \mu_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})
+ \mu_1 \mu_kQ_{g,n-1}(0,\boldsymbol{\mu}_{X\setminus \{1,k\}}) \right) \nonumber\\
+& \sum_{\substack{\mu_k=0 \\ 2\leq k \leq n}}\left(\sum_{\substack{i+m = \mu_1 \nonumber \\ i\geq 1, m\geq 0}}m Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})+ \widetilde{\mu}_1Q_{g,n-1}(0,\boldsymbol{\mu}_{X\setminus \{1,k\}})\right)\nonumber \\
+& \sum_{\substack{g_1+g_2=g \\ I\sqcup J = X\setminus \{1\} \\ \text{No discs or annuli}}}\left(\sum_{\substack{i+j+m=\mu_1 \\ i\geq 1, j,m \geq0}}mQ_{g_1,|I|+1}(i,\boldsymbol{\mu}_{I})Q_{g_2,|J|+1}(j,\boldsymbol{\mu}_{J}) + \frac{\widetilde{\mu}_1}{2}Q_{g_1,|I|+1}(0,\boldsymbol{\mu}_{I})Q_{g_2,|J|+1}(0,\boldsymbol{\mu}_{J})\right) \label{qrecursion-eq}
\end{align}
\end{theorem}
Here ``no discs or annuli" means $(g_1,|I|+1)$ and $(g_2,|J|+1)$ cannot be $(0,1)$ or $(0,2)$. The tilde summation
$\widetilde{\sum}$ is defined to be
\begin{align*}
\widetilde{\sum_{\substack{i+x = \mu_1-\mu_k \\ i\geq 1, x\geq 0}}}x \mu_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})=
\sum_{\substack{i+x = \mu_1-\mu_k \\ i\geq 1, x\geq 0}}x \mu_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})
- \sum_{\substack{i+x = \mu_k-\mu_1 \\ i\geq 1, x\geq 0}}x \mu_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})
\end{align*}
Note that when $\mu_1\geq \mu_k$ the second sum vanishes, otherwise the first sum vanishes.
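For instance, if $\mu_1 = 4$ and $\mu_k = 1$ the tilde sum equals $Q_{g,n-1}(2,\boldsymbol{\mu}_{X\setminus \{1,k\}}) + 2\,Q_{g,n-1}(1,\boldsymbol{\mu}_{X\setminus \{1,k\}})$, while if $\mu_1 = 1$ and $\mu_k = 4$ it equals $-4\,Q_{g,n-1}(2,\boldsymbol{\mu}_{X\setminus \{1,k\}}) - 8\,Q_{g,n-1}(1,\boldsymbol{\mu}_{X\setminus \{1,k\}})$.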
\begin{proof}
Suppose $D$ is a pruned polygon diagram on $S$. Let $\gamma$ be the outgoing edge at the decorated marked point $\mathbf{m}_1$ on $F_1$. Since there is
no $1$-gon in $D$ (they are boundary parallel), the other endpoint $v$ of $\gamma$ is distinct from $\mathbf{m}_1$.
As in \cite{DKM2017}, there are three cases for $\gamma$: (A) it has both endpoints on $F_1$ and is non-separating; (B) it has endpoints on $F_1$ and some other $F_k$, or has both endpoints on $F_1$ and cuts off an annulus parallel to $F_k$; or (C) it has both ends on $F_1$, is separating, and does not cut off an annulus. Each of these cases, especially case (B), has numerous sub-cases, which we now consider in detail.
\begin{description}
\item[(A) $\gamma$ has both endpoints on $F_1$ and is non-separating.]~
If an edge becomes boundary parallel after cutting $S$ along $\gamma$, then it must be parallel to $\gamma$ on $S$ (relative to endpoints) to begin with.
Given two edges $\beta_1$ and $\beta_2$, both parallel to $\gamma$, let $I$ be a strip bounded by $\beta_1$, $\beta_2$ and portions of $F_1$.
This strip $I$ is unique, because after we cut open along $I$,
$\beta_1$ and $\beta_2$ belong to different boundary components, so they cannot bound any other strips.
There is a unique minimal strip $A:[0,1]^2\to S$ containing all edges
parallel to $\gamma$, given by the union of connecting strips between all pairs of edges parallel to $\gamma$.
The left (resp. right) boundary of $A$ is an edge
$\gamma_L$ (resp. $\gamma_R$) joining two vertices $p_L$ and $q_L$ (resp. $p_R$ and $q_R$), and the bottom (resp. top) boundary of $A$ is an interval
on $F_1$ from $p_L$ to $p_R$ (resp. $q_R$ to $q_L$).
Note that $A$ may be degenerate, i.e. $\gamma_L$ and $\gamma_R$ may have one or both of their endpoints in common, or
they may both be the same edge $\gamma$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.75]{caseA.pdf}
\caption{Possible configurations of polygons in case (A).}
\label{caseA}
\end{center}
\end{figure}
Observe that all the edges in $A$, with the possible exception of $\gamma_L$ and $\gamma_R$, form a block of consecutive
parallel bigons inside $A$.
Let there be $m\geq 1$ polygons with at least one edge parallel to $\gamma$.
See figure \ref{caseA}.
There are four cases.
\begin{enumerate}[label=(\arabic*)]
\item All $m$ such polygons are bigons. In this case the $\mu_1$ vertices along $F_1$ are divided into
$4$ cyclic blocks of consecutive vertices: there is a block of $m$ vertices $(p_1,\ldots, p_m)$
followed by $j\geq 0$ vertices, followed by another block of $m$ consecutive vertices $(q_m,\ldots, q_1)$, followed
by $i\geq 0$ vertices, such that there is a bigon between each pair of vertices $\{p_i,q_i\}$, and $\mathbf{m}_1 \in \{p_1,\ldots, p_m\}$.
Remove all $m$ bigons from the pruned polygon diagram $D$ and cut $S$ along $\gamma$. If $j>0$ then let $\sigma(p_m)$ be
the decorated marked point on the new boundary component $F'_1$. If $i>0$ then let $\sigma(q_1)$ be
the decorated marked point on that new boundary component $F'_0$. This produces a pruned polygon diagram $D'$ on
$S'_{g-1,n+1}$ with $(i,j,\mu_2,\ldots,\mu_n)$ boundary vertices. The map $D\to D'$ is $m$-to-$1$, since $\mathbf{m}_1$ can
be any one of $\{p_1,\ldots, p_m\}$ and still produce the same pruned polygon diagram $D'$. Conversely $D$ can be
reconstructed from $D'$ up to the possible location of $\mathbf{m}_1$ as one of $\{p_1,\ldots, p_m\}$.
Therefore we have the following contribution to \eqref{qrecursion-eq}:
\begin{equation}\label{eq1}
\sum_{\substack{i+j+2m = \mu_1 \\ m\geq 1, i,j\geq 0}}m Q_{g-1,n+1}(i,j,\boldsymbol{\mu}_{X\setminus \{1\}}).
\end{equation}
\item $\gamma_L$ is part of a polygon $K$ which is not a bigon, all other polygons are bigons. If $\gamma_L \neq \gamma_R$ then
$K$ and $A$ lie on the opposite sides of $\gamma_L$ (otherwise $K\subseteq A$, so must be a bigon), and there are $m-1$ bigons in $A$.
Remove all bigons, cut $S$ along $\gamma_L$,
collapse $\gamma_L$ to a single vertex $\mathbf{m}'_0$ which we take to be the decorated marked point on the new boundary component $F'_0$,
and let $\sigma(p_R)$ be the decorated marked point on $F'_1$. This produces a pruned polygon diagram $D'$. Similar to the previous case,
the map $D\to D'$ is $m$-to-$1$, as $\mathbf{m}_1$ can be any one of the $m$ vertices between $p_L$ and $p_R$. Therefore we have the following
contribution to \eqref{qrecursion-eq}:
\begin{equation}\label{eq2}
\sum_{\substack{i+j+2m = \mu_1 \\ m \geq 1, i,j\geq 0}}m Q_{g-1,n+1}(i+1,j,\boldsymbol{\mu}_{X\setminus \{1\}}) =
\sum_{\substack{i+j+2m-1 = \mu_1 \\ i, m\geq 1,j\geq 0}}m Q_{g-1,n+1}(i,j,\boldsymbol{\mu}_{X\setminus \{1\}}).
\end{equation}
Note that this formula includes the contribution from the special case $\gamma_L = \gamma_R = \gamma$, where $m=1$.
\item $\gamma_R$ is part of a polygon $K$ which is not a bigon, and all other polygons are bigons. This is almost identical to the previous case,
except now $\gamma$ cannot be the edge $\gamma_R$. (If we had $\gamma = \gamma_R$ then, since $\gamma$ is the outgoing edge from $\mathbf{m}_1$, the polygon containing $\gamma$
would have to be on the same side of $\gamma$ as $A$.) The map $D \mapsto D'$ is now $(m-1)$-to-$1$, as $\mathbf{m}_1$ cannot be $p_R$.
Therefore we have the following contribution to \eqref{qrecursion-eq}:
\begin{equation}\label{eq3}
\sum_{\substack{i+j+2m = \mu_1 \\ m \geq 1, i,j\geq 0}}(m-1) Q_{g-1,n+1}(i,j+1,\boldsymbol{\mu}_{X\setminus \{1\}}) =
\sum_{\substack{i+j+2m-1 = \mu_1 \\ j, m\geq 1,i\geq 0}}(m-1) Q_{g-1,n+1}(i,j,\boldsymbol{\mu}_{X\setminus \{1\}}).
\end{equation}
Note that this formula correctly excludes the special case $\gamma_L = \gamma_R = \gamma$, where $(m-1)=0$ and the formula vanishes.
\item $\gamma_L$ and $\gamma_R$ are each part of some polygon which is not a bigon, all other polygons are bigons.
We allow $\gamma_L$ and $\gamma_R$ to be different edges of the same polygon. We obtain a pruned polygon diagram $D'$
by removing the $(m-2)$ bigons and collapsing $\gamma_L$ and $\gamma_R$ to
decorated marked points $\mathbf{m}'_0$ and $\mathbf{m}'_1$. For the same reason as the
previous case, $\gamma$ cannot be the edge $\gamma_R$, so the map $D\to D'$ is only
$(m-1)$-to-$1$. Therefore the contribution to \eqref{qrecursion-eq} is
\begin{equation}\label{eq4}
\sum_{\substack{i+j+2m = \mu_1 \\ m\geq 1, i,j\geq 0}}(m-1) Q_{g-1,n+1}(i+1,j+1,\boldsymbol{\mu}_{X\setminus \{1\}}) =
\sum_{\substack{i+j+2m = \mu_1 \\ i,j\geq 1, m\geq 0}}m Q_{g-1,n+1}(i,j,\boldsymbol{\mu}_{X\setminus \{1\}}).
\end{equation}
\end{enumerate}
Now we compute the total contribution from cases (A)(1)--(4). We drop the subscripts $g-1,n+1$ from $Q_{g-1,n+1}$ and $X \backslash \{1\}$ from $\boldsymbol{\mu}_{X \setminus \{1\}}$ for convenience. Summing expressions \eqref{eq1} and \eqref{eq4}, separating the terms according to whether $i,j$ are zero or nonzero, and using the symmetry of $Q$ in its first two arguments, we obtain
\begin{align}\label{eq100}
&\left( \sum_{\substack{i+j+2m = \mu_1 \\ m\geq 1, i,j\geq 0}} + \sum_{\substack{i+j+2m = \mu_1 \\ i,j\geq 1, m\geq 0}} \right) m Q (i,j,\boldsymbol{\mu}) \nonumber \\
&\quad =\sum_{\substack{i+j+2m = \mu_1 \\ i,j,m\geq 1}}2m Q(i,j,\boldsymbol{\mu}) + \sum_{\substack{j+2m = \mu_1 \\ j,m\geq 1}}m Q (0,j,\boldsymbol{\mu}) \nonumber
+\sum_{\substack{i+2m = \mu_1 \\ i,m\geq 1}}m Q(i,0,\boldsymbol{\mu}) + \frac{\widetilde{\mu}_1}{2}Q (0,0,\boldsymbol{\mu})\nonumber \\
&\quad = \sum_{\substack{i+j+2m = \mu_1 \\ i,m\geq 1, j\geq 0}}2m Q (i,j,\boldsymbol{\mu}) + \frac{\widetilde{\mu}_1}{2}Q (0,0,\boldsymbol{\mu})
\end{align}
Similarly for expressions \eqref{eq2} and \eqref{eq3},
\begin{align}\label{eq101}
\sum_{\substack{i+j+2m-1 = \mu_1 \\ i, m\geq 1,j\geq 0}} & m Q(i,j,\boldsymbol{\mu})+\sum_{\substack{i+j+2m-1 = \mu_1 \\ j, m\geq 1,i\geq 0}}(m-1) Q(i,j,\boldsymbol{\mu})\nonumber \\
&=\sum_{\substack{i+j+2m-1 = \mu_1 \\ i,j,m\geq 1}}(2m-1) Q(i,j,\boldsymbol{\mu})+
\sum_{\substack{i+2m-1 = \mu_1 \\ i,m\geq 1}}m Q(i,0,\boldsymbol{\mu}) \nonumber
+\sum_{\substack{j+2m-1 = \mu_1 \\ j,m\geq 1}}(m-1) Q(0,j,\boldsymbol{\mu})\nonumber \\
&=\sum_{\substack{i+j+2m-1 = \mu_1 \\ i,m\geq 1, j\geq 0}}(2m-1) Q(i,j,\boldsymbol{\mu})
\end{align}
Adding \eqref{eq100} and \eqref{eq101} we have the first line of \eqref{qrecursion-eq}.
\item[(B) $\gamma$ has endpoints on $F_1$ and $F_k$, or has both endpoints on $F_1$ and cuts off an annulus parallel to $F_k$.] ~
Here $k \neq 1$. Note that since $(g,n)\neq (0,3)$, if $\gamma$ cuts off an annulus parallel to $F_k$, the remaining surface is not an annulus. Hence
different values of $k$ give different pruned polygon diagrams. There is no double counting when we sum over $k$.
To standardise the possibilities for $\gamma$, we define a path $\alpha$ from $F_1$ to $F_k$ as follows; $\bar{\alpha}$ denotes $\alpha$ with reversed orientation. If $\gamma$ has endpoints on $F_1$ and $F_k$, then let $\alpha = \gamma$.
In this case, the edges that become boundary parallel after $S$ is cut along $\gamma$ are precisely of three types: those parallel to the concatenated paths $\alpha$, $\alpha F_k \bar{\alpha}$, and $\bar{\alpha} F_1 \alpha$. On the other hand, if $\gamma$ has both endpoints on $F_1$ and cuts off an annulus parallel to $F_k$, then let $\alpha$ be a curve inside that annulus, connecting $F_1$ to $F_k$. In this case, the curves that become boundary parallel after $S$ is cut along $\gamma$ must be parallel to $\gamma$. See figure \ref{paths}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{paths.pdf}
\caption{The paths $\alpha$ and related paths in case (B).}
\label{paths}
\end{center}
\end{figure}
Since $S$ is not an annulus, there is a unique minimal strip $A^1$ containing all edges parallel to $\alpha$,
bounded by edges $\gamma^1_L$ (resp. $\gamma^1_R$) joining two vertices $p^1_L\in F_1$ and $q^1_L\in F_k$
(resp. $p^1_R$ and $q^1_R$). The top (resp. bottom) boundary of $A^1$ is an interval
on $F_1$ (resp. $F_k$) from $p^1_L$ to $p^1_R$ (resp. $q^1_R$ to $q^1_L$). Similarly there are unique minimal strips $A^2$
and $A^3$ containing all edges of the second and third type respectively, with analogous notations.
Note that edges of the second and third types cannot appear simultaneously, so $A^2$ and $A^3$ cannot both be non-empty.
All three strips $A^i$ may be degenerate. See figure \ref{A_i_def}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{A_i_def.pdf}
\caption{The configurations of the strips $A^i$. In this figure $A^1, A^2$ are nonempty.}
\label{A_i_def}
\end{center}
\end{figure}
Call a polygon \emph{partially boundary parallel} if at least one of its edges is of the three types $\alpha, \alpha F_k \bar{\alpha}, \bar{\alpha} F_1 \alpha$. Call a polygon \emph{totally boundary parallel} if all of its edges are of these three types, and \emph{mixed} if
it is partially boundary parallel but not totally boundary parallel.
A totally boundary parallel polygon is either a bigon, or a triangle with
two edges parallel to $\alpha$ and the third edge parallel to $\alpha F_k \bar{\alpha}$ or $\bar{\alpha} F_1 \alpha$.
Furthermore there can be at most one totally boundary parallel triangle.
Let there be $m$ partially boundary parallel polygons. Note $m \geq 1$, since $\gamma$ lies in a partially boundary parallel polygon.
Assume $\mu_k>0$. We split into the following sub-cases: all $m$ partially boundary parallel polygons are bigons; $m-1$ bigons and one totally boundary parallel triangle; there is a totally boundary parallel triangle and a mixed polygon; there is a mixed polygon but no totally boundary parallel triangle.
\begin{enumerate}[label=(\arabic*)]
\item All $m$ partially boundary parallel polygons are bigons. We then split further into sub-cases according to whether there are bigons parallel to $\alpha F_k \bar{\alpha}$ or $\bar{\alpha} F_1 \alpha$, or not.
\begin{enumerate}
\item There are no bigons parallel to $\alpha F_k \bar{\alpha}$ or $\bar{\alpha} F_1 \alpha$. Then there are $m$ consecutive
bigons between $F_1$ and $F_k$. Removing all $m$ bigons and cutting $S$ along $\gamma$ gives a pruned polygon diagram $D'$ with
$i=\mu_1+\mu_k - 2m$ vertices on the new boundary component $F'_1$. When $i>0$, the decorated marked point on $F'_1$ is set
to be $\sigma(p^1_R)$ if $\mu_1 > m$, and $\sigma(q^1_L)$ if $\mu_1 = m$. The map $D \mapsto D'$ is $m\mu_k$-to-$1$, since
$\mathbf{m}_1$ can be any of $m$ vertices of the bigons on $F_1$, and $\mathbf{m}_k$ can be any of the $\mu_k$ vertices on $F_k$.
Therefore we have the contribution
\begin{align}\label{eq111}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ 1\leq m \leq \min(\mu_1,\mu_k), i\geq 0}}m {\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\item \label{case1b} There are $x\geq 1$ bigons parallel to $\alpha F_k \bar{\alpha}$. See figure \ref{fig:case1b}. Since $\alpha F_k \bar{\alpha}$ cuts off
an annulus parallel to $F_k$, the $\mu_k$ vertices on $F_k$ belong to $\mu_k$ bigons between $F_1$ and $F_k$.
Removing all $m = x + \mu_k$ bigons and cutting
along $\gamma$ gives a pruned polygon diagram $D'$ with $i=\mu_1 - m - x$ vertices on the new boundary component $F'_1$.
The decorated marked point on $F'_1$ is set to be $\sigma(q^1_L)$ if $i>0$. The map $D \mapsto D'$ is $(2x+\mu_k)\mu_k$-to-$1$, since
$\mathbf{m}_1$ can be any of the $(2x+\mu_k)$ vertices of the bigons on $F_1$.
Therefore we have the contribution
\[
\sum_{\substack{i+2x = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}(2x+\mu_k) {\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\]
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{case1b.pdf}
\caption{Configuration of polygons in case (B)(1)(b).}
\label{fig:case1b}
\end{center}
\end{figure}
Splitting the sum by writing $2x+\mu_k$ as $(x+\mu_k) + x$ and setting $m = x+\mu_k$, we note that $i+2x = \mu_1 - \mu_k$ becomes $i + 2m = \mu_1 + \mu_k$ and obtain
\begin{equation}
\label{eq112}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_k+1, i\geq 0}}m{\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})+\sum_{\substack{i+2x = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}x{\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{equation}
\item \label{case1c} There are $x\geq 1$ bigons parallel to $\bar{\alpha} F_1 \alpha$. This is same as the previous case with $F_1$ and $F_k$ interchanged.
The map $D \mapsto D'$ is $\mu_1\mu_k$-to-$1$, since the bigons now have $\mu_1$ vertices on $F_1$.
Therefore we have the contribution:
\[
\sum_{\substack{i+2x = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}\mu_1 {\mu}_k Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\]
Writing $\mu_1$ as $(x + \mu_1) - x$ and setting $m = x + \mu_1$, we note that $i+2x = \mu_k - \mu_1$ becomes $i+2m = \mu_1 + \mu_k$, and obtain
\begin{equation}
\label{eq113}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_1+1, i\geq 0}}m{\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})-\sum_{\substack{i+2x = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}x{\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{equation}
\end{enumerate}
Observe that the index set $\{i+2m = \mu_1+\mu_k, m\geq 1, i\geq 0\}$ is the disjoint union of index sets
$\{i+2m = \mu_1+\mu_k, 1\leq m \leq \min(\mu_1,\mu_k), i\geq 0\}$, $\{i+2m = \mu_1+\mu_k, m \geq \mu_k+1, i\geq 0\}$,
and $\{i+2m = \mu_1+\mu_k, m \geq \mu_1+1, i\geq 0\}$.
(If $m \geq \mu_k + 1$ then $\mu_1 + \mu_k = i + 2m \geq 2\mu_k + 2$, hence $\mu_1 \geq \mu_k + 2$; similarly if $m \geq \mu_1 + 1$ then $\mu_k \geq \mu_1 + 2$. So the second and third sets are disjoint.)
Dropping the subscript $g,n-1$ from $Q$ and $X \setminus \{1,k\}$ from $\boldsymbol{\mu}$ for convenience, we find the sum of \eqref{eq111}, \eqref{eq112}, \eqref{eq113} is
\begin{equation}
\label{eq21}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 1, i\geq 0}}m {\mu}_k Q (i,\boldsymbol{\mu})+
\sum_{\substack{i+2x = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}x {\mu}_k Q (i,\boldsymbol{\mu})
-\sum_{\substack{i+2x = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}x {\mu}_k Q (i,\boldsymbol{\mu}).
\end{equation}
\item There is one totally boundary parallel triangle and $m-1$ bigons.
\begin{enumerate}
\item \label{case2a} The triangle has two edges parallel to $\alpha$ and the third edge parallel to $\alpha F_k \bar{\alpha}$.
See figure \ref{fig:case2a}.
The configuration of bigons and triangle is very similar to that of case (B)(1)(b), the only difference is
the innermost bigon parallel to $\alpha F_k \bar{\alpha}$ now becomes the totally boundary parallel triangle.
There are $x-1$ bigons parallel to $\alpha F_k \bar{\alpha}$, $1$ totally boundary parallel triangle, and
$\mu_k-1$ bigons parallel to $\alpha$. An analogous calculation shows we have the contribution
\begin{align}\label{eq22}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_k, i\geq 0}}m {\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})+
\sum_{\substack{i+2x-1 = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}x {\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{case2a.pdf}
\caption{Configuration of polygons in case (B)(2)(a).}
\label{fig:case2a}
\end{center}
\end{figure}
\item The triangle has two edges parallel to $\alpha$ and the third edge parallel to $\bar{\alpha} F_1 \alpha$.
This is very similar to case (B)(1)(c). An analogous calculation shows we have the contribution
\begin{align}\label{eq23}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_1, i\geq 0}}m {\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})-
\sum_{\substack{i+2x-1 = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}(x-1) {\mu}_kQ_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\end{enumerate}
\item There are some mixed polygons and a totally boundary parallel triangle. The edge of the triangle not parallel to $\alpha$ is then parallel to either $\alpha F_k \bar{\alpha}$ or $\bar{\alpha} F_1 \alpha$; we consider the two possibilities separately.
\begin{enumerate}
\item \label{case3a} The third edge of the triangle is parallel to $\alpha F_k \bar{\alpha}$. If we view $F_k$ as on the ``inside" of
an edge parallel to $\alpha F_k \bar{\alpha}$, it is easy to see that only the ``outermost" edge
$\gamma^2_L$ of the minimal strip $A^2$ can be an edge of a mixed polygon. Hence there
is only one mixed polygon, and it is on the outside of $\gamma^2_L$. On the inside of $\gamma^2_L$ we have exactly the
same configuration of totally boundary parallel polygons as Case (B)(2)(a) and figure \ref{fig:case2a}. There are $\mu_k-1$ bigons parallel to $\alpha$.
Let there be $x-1$ bigons parallel to $\alpha F_k \bar{\alpha}$, and $i$ vertices on $F_1$ outside $\gamma^2_L$.
Then $\mu_1=i+2x+\mu_k+1$ and $m=x+\mu_k$. We obtain a pruned polygon diagram $D'$ by removing all totally
boundary parallel bigons and triangle, cutting $S$ along $\gamma^2_L$ and collapsing $\gamma^2_L$ into a new vertex on the
new boundary component $F'_1$ of $S'$, which we set to be the decorated marked point $\mathbf{m}'_1$. Consider the
possible locations of $\mathbf{m}_1$. It can be a vertex on $F_1$ of any of the $[(x-1)+(\mu_k-1)]$ bigons, of which there are $2(x-1) + (\mu_k - 1)$. It can be either of the two vertices of
the triangle on $F_1$. Or it could be the vertex $p^2_L$, but not $q^2_L$, once again due to $\gamma$ being an outgoing edge from
$\mathbf{m}_1$. (If $q^2_L$ is $\mathbf{m}_1$, then $\gamma$ is $\gamma^2_L$. If $\gamma^2_L$ is outgoing, then the polygon
containing $\gamma^2_L$ is on the inside of $\gamma^2_L$, making it totally boundary parallel, a contradiction.) Hence the
multiplicity of the map $D \mapsto D'$ is $(2(x-1)+(\mu_k-1)+2+1)\mu_k=(2x+\mu_k)\mu_k$. An analogous calculation shows we
have the contribution
\begin{align}\label{eq24}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_k+1, i\geq 0}}m {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}})+
\sum_{\substack{i+2x+1 = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}x {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\item \label{case3b} The third edge of the triangle is parallel to $\bar{\alpha} F_1 \alpha$. This is the same as the previous
case with $F_1$ and $F_k$ interchanged. The map $D \mapsto D'$ is $\mu_1\mu_k$-to-1. An analogous calculation shows we
have the contribution.
\begin{align}\label{eq25}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_1+1, i\geq 0}}m {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}})-
\sum_{\substack{i+2x+1 = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}x {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\end{enumerate}
\item There are some mixed polygons but no totally boundary parallel triangle. We now split into cases according to whether there are edges parallel to $\alpha F_k \bar{\alpha}$ or $\bar{\alpha} F_1 \alpha$ or neither. There cannot be edges parallel to both, so we have 3 sub-cases.
\begin{enumerate}
\item There are no edges parallel to $\alpha F_k \bar{\alpha}$ or $\bar{\alpha} F_1 \alpha$.
Consider the minimal strip $A^1$ containing all edges parallel to $\alpha$. We now consider the leftmost and rightmost edges of this strip $\gamma^1_L$ and $\gamma^1_R$, and to what extent they coincide. They may (i) be the same edge; or (ii) they may share both endpoints but be distinct edges; or they may share a vertex on (iii) $F_k$ or (iv) $F_1$ only; or they may be disjoint. When they are disjoint, (v) $\gamma_L^1$ or (vi) $\gamma_R^1$ or (vii) both may belong to mixed polygons. This leads to the 7 sub-cases below.
\begin{enumerate}[label=(\roman*)]
\item \label{casei} $\gamma^1_L = \gamma^1_R = \gamma$. Then there are no other edges parallel to $\gamma$ and thus no
bigons. Since $\gamma$ is
an outgoing edge by assumption, it bounds a mixed polygon to the left. This configuration will be covered in
Case (B)(4)(a)(v) and we do not include the contribution here.
\item $\gamma^1_L$ and $\gamma^1_R$ are distinct edges with the same endpoints. Then $\gamma^1_L$ and $\gamma^1_R$
bound the bigon $A^1$ and there are no other edges parallel to $\gamma$. This means there are no
mixed polygons, contrary to assumption. Therefore the contribution vanishes in this case.
\item $\gamma^1_L$ and $\gamma^1_R$ share a common vertex $q^1$ on $F_k$ but not on $F_1$.
See figure \ref{case4a3}.
Consider the boundary of $A^1$
on $F_k$, $[q^1_R,q^1_L]$. This interval could either be a single point $q^1$, or the entire boundary $F_k$. If it is
a single point, then the polygon containing $\gamma^1_L$ and $\gamma^1_R$ has to be inside $A^1$, so the diagonal joining
$p^1_L$ and $p^1_R$ is boundary parallel, contradicting the assumption of a pruned diagram.
In the case $[q^1_R,q^1_L]$ is all of $F_k$, $\gamma^1_L$ and $\gamma^1_R$
belong to a single ``outermost" mixed polygon, and there are $m-1$ bigons
between $F_1$ and $F_k$. Let $i\geq 0$ be the number of remaining vertices on $F_1$ outside $A^1$. Then
$i+\mu_k+1=\mu_1$ and we also have $m=\mu_k$. We obtain a pruned polygon diagram by removing all $m-1$
bigons, cutting along the concatenated edge $\gamma^1_L\bar{\gamma}^1_R$ and collapsing $\gamma^1_L\bar{\gamma}^1_R$
into a new vertex. The multiplicity of the map $D \mapsto D'$ is $m\mu_k$, as $\mathbf{m}_1$ can be a vertex of the
$m-1$ bigons or $p^1_L$. Therefore we have the contribution
\begin{align}\label{eq26}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m= \mu_k, i\geq 0}}m {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{case4a3.pdf}
\caption{Configuration of polygons in case (B)(4)(a)(iii).}
\label{case4a3}
\end{center}
\end{figure}
\item $\gamma^1_L$ and $\gamma^1_R$ share a common vertex $p^1$ on $F_1$ but not on $F_k$. This is the same as the previous
case with $F_1$ and $F_k$ interchanged. The map $D \mapsto D'$ is $\mu_1\mu_k$-to-1. An analogous calculation shows we
have the contribution
\begin{align}\label{eq27}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m= \mu_1, i\geq 0}}m {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\item \label{caseiii} $\gamma^1_L$ and $\gamma^1_R$ do not share any vertex, and $\gamma^1_L$ belongs to a mixed polygon but
$\gamma^1_R$ does not. There are $m-1\geq 1$ bigons parallel to $\alpha$. Let $i=\mu_1+\mu_k-2m$ be the total number
of remaining vertices on $F_1$ and $F_k$ outside $A^1$. We obtain a pruned polygon diagram $D'$ by removing all $m-1$ bigons,
cutting along $\gamma^1_L$ and collapsing $\gamma^1_L$ into a new vertex. The map $D \mapsto D'$ is $m\mu_k$-to-1. Note that if
we allow $m=1$, this exactly covers the configuration in case (B)(4)(a)(i).
Therefore we have the contribution
\begin{align}\label{eq28}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ 1\leq m\leq \min(\mu_1,\mu_k), i\geq 0}}m {\mu}_k Q_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\item $\gamma^1_L$ and $\gamma^1_R$ do not share any vertex, and $\gamma^1_R$ belongs to a mixed polygon but
$\gamma^1_L$ does not. This is almost exactly the same as the previous case, except $\gamma^1_R$ bounds a mixed polygon
to the right, so it cannot be $\gamma$. It follows that $\mathbf{m}_1$ cannot be $p^1_R$ and the map $D \mapsto D'$ is $(m-1)\mu_k$-to-1.
Therefore we have the contribution:
\begin{align}\label{eq29}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ 1\leq m\leq \min(\mu_1,\mu_k), i\geq 0}}(m-1) {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}})
\end{align}
Note that we allow $m=1$ in the summation index because the summand vanishes for $m=1$ anyway.
\item $\gamma^1_L$ and $\gamma^1_R$ do not share any vertex, and both belong to mixed polygons (possibly the same one).
Since there could be $1$ or $2$ mixed polygons, we instead define $m\geq 2$ to be $2$ plus the number of bigons in $A^1$.
We obtain a pruned polygon diagram $D'$ by removing all $m-2$ bigons, cutting the strip $A^1$ from $S$ along $\gamma^1_L$
and $\gamma^1_R$, and collapsing $\gamma^1_L$ and $\gamma^1_R$ into two new vertices. Set the decorated marked point
to be the new vertex from collapsing $\gamma^1_L$. Again since $\gamma$ cannot be $\gamma^1_R$, the map $D \mapsto D'$ is
$(m-1)\mu_k$-to-1. Therefore we have the contribution (again we trivially include $m=1$ in the summation index)
\begin{align}\label{eq210}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ 1\leq m\leq \min(\mu_1,\mu_k), i\geq 0}}(m-1) {\mu}_kQ_{g,n-1}(i+2,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\end{enumerate}
\item There are some edges parallel to $\alpha F_k \bar{\alpha}$. This is the same configuration as case (B)(3)(a),
just without the
single totally boundary parallel triangle. An analogous calculation shows we have the contribution
\begin{align}\label{eq211}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_k+1, i\geq 0}}m {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}})+
\sum_{\substack{i+2x+2 = \mu_1-\mu_k \\ x\geq 0, i\geq 0}}x {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\item There are some edges parallel to $\bar{\alpha} F_1 \alpha$. This is the same configuration as case (B)(3)(b),
just without the
single totally boundary parallel triangle. An analogous calculation shows we have the contribution
\begin{align}\label{eq212}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_1+1, i\geq 0}}m {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}})-
\sum_{\substack{i+2x+2 = \mu_k-\mu_1 \\ x\geq 0, i\geq 0}}(x+1) {\mu}_kQ_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\end{align}
\end{enumerate}
\end{enumerate}
We have exhausted all possibilities in case (B). The total contribution is the sum of all the expressions \eqref{eq21}--\eqref{eq212}, which we now sum. We drop subscripts $g,n-1$ from $Q$ and $X \setminus \{1,k\}$ from $\boldsymbol{\mu}$ for convenience.
We first calculate the sum of terms with summation over $m$.
The $m$-summation terms in \eqref{eq24} and \eqref{eq26}, \eqref{eq25} and \eqref{eq27} combine to give
\begin{align}\label{eq31}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_k, i\geq 0}}m & {\mu}_k Q (i+1,\boldsymbol{\mu})+
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_1, i\geq 0}}m {\mu}_k Q(i+1,\boldsymbol{\mu})\nonumber \\
=&\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_k, i\geq 1}}m {\mu}_k Q(i,\boldsymbol{\mu})+
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_1, i\geq 1}}m {\mu}_k Q(i,\boldsymbol{\mu}).
\end{align}
We rewrite the $m$-summation term in \eqref{eq210}, using the substitution $(m',i')=(m-1,i+2)$, and then adding a vacuous summation index $i=1$,
since $1+2m = \mu_1+\mu_k$ and $m \leq \min(\mu_1,\mu_k)-1$ cannot hold simultaneously. We obtain
\begin{equation}\label{eq32}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ 0\leq m \leq \min(\mu_1,\mu_k)-1, i\geq 1}}m {\mu}_k Q (i,\boldsymbol{\mu} ).
\end{equation}
Since the index set $\{i+2m = \mu_1+\mu_k, m\geq 0, i\geq 1\}$ is the disjoint union of index sets
$\{i+2m = \mu_1+\mu_k, 0\leq m \leq \min(\mu_1,\mu_k)-1, i\geq 1\}$, $\{i+2m = \mu_1+\mu_k, m \geq \mu_k, i\geq 1\}$,
and $\{i+2m = \mu_1+\mu_k, m \geq \mu_1, i\geq 1\}$,
\eqref{eq31}
and \eqref{eq32}
sum to
\begin{align}\label{eq40}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 0, i\geq 1}}m {\mu}_k Q (i,\boldsymbol{\mu}) = \sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 1, i\geq 1}}m {\mu}_k Q (i,\boldsymbol{\mu}),
\end{align}
which is the sum of all $m$-summation terms in \eqref{eq24}, \eqref{eq25}, \eqref{eq26}, \eqref{eq27} and \eqref{eq210}.
The $m$-summation terms in \eqref{eq21} and \eqref{eq40}
combine to give
\begin{align}\label{eq401}
\left( \sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 1, i\geq 0}} +
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 1, i\geq 1}} \right) m{\mu}_k Q (i,\boldsymbol{\mu})
&= \sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 1, i\geq 1}}2m {\mu}_k Q(i,\boldsymbol{\mu}) +
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 1, i=0}}m {\mu}_k Q(i,\boldsymbol{\mu}) \nonumber \\
=&\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq 0, i\geq 1}}2m {\mu}_k Q(i,\boldsymbol{\mu}) +
\frac{\widetilde{(\mu_1+\mu_k)}}{2}{\mu}_k Q(0,\boldsymbol{\mu}),
\end{align}
where we use the $\widetilde{\mu}$ notation of definition \ref{tilde_notation} in the final term.
This is the sum of all $m$-summation terms in \eqref{eq21}, \eqref{eq24}, \eqref{eq25}, \eqref{eq26}, \eqref{eq27}, \eqref{eq210}.
We next rewrite the $m$-summation terms from \eqref{eq28} and \eqref{eq29} with the substitution $(m',i') = (m-1,i+1)$ to obtain
\begin{align}\label{int1}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ 1\leq m\leq \min(\mu_1,\mu_k), i\geq 0}} & m {\mu}_kQ(i+1,\boldsymbol{\mu})
+\sum_{\substack{i+2m = \mu_1+\mu_k \\ 1\leq m\leq \min(\mu_1,\mu_k), i\geq 0}}(m-1) {\mu}_kQ(i+1,\boldsymbol{\mu})\nonumber \\
&= \sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ 0\leq m\leq \min(\mu_1,\mu_k)-1, i\geq 1}} (2m+1) {\mu}_kQ(i,\boldsymbol{\mu}),
\end{align}
and similarly with \eqref{eq211}, and \eqref{eq212} to obtain
\begin{align}\label{int2}
\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_k+1, i\geq 0}} &m {\mu}_kQ(i+1,\boldsymbol{\mu})
+\sum_{\substack{i+2m = \mu_1+\mu_k \\ m\geq \mu_1+1, i\geq 0}}m {\mu}_kQ(i+1,\boldsymbol{\mu}) \nonumber \\
&=
\left( \sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_k, i\geq 1}} + \sum_{\substack{i+2m+1= \mu_1+\mu_k \\ m\geq \mu_1, i\geq 1}} \right) (m+1) {\mu}_kQ(i,\boldsymbol{\mu}).
\end{align}
Now combining the $m$-summation terms in \eqref{eq22}, \eqref{eq23}, \eqref{int1}, \eqref{int2}
we obtain
\begin{align}\label{eq41}
\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_k, i\geq 0}} &m {\mu}_kQ(i,\boldsymbol{\mu})
+\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_1, i\geq 0}}m {\mu}_kQ(i,\boldsymbol{\mu})
+ \sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ 0\leq m\leq \min(\mu_1,\mu_k)-1, i\geq 1}} (2m+1) {\mu}_kQ(i,\boldsymbol{\mu}) \nonumber \\
&+ \left( \sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq \mu_k, i\geq 1}} + \sum_{\substack{i+2m+1= \mu_1+\mu_k \\ m\geq \mu_1, i\geq 1}} \right) (m+1) {\mu}_kQ(i,\boldsymbol{\mu}) \nonumber \\
=&\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq 0, i\geq 1}}(2m+1) {\mu}_kQ(i,\boldsymbol{\mu}) +
\left[\sum_{\substack{2m+1 = \mu_1+\mu_k \\ m\geq \mu_k}}
+\sum_{\substack{2m+1 = \mu_1+\mu_k \\ m\geq \mu_1}}\right] m {\mu}_kQ(0,\boldsymbol{\mu}) \nonumber \\
=&\sum_{\substack{i+2m+1 = \mu_1+\mu_k \\ m\geq 0, i\geq 1}}(2m+1) {\mu}_kQ(i,\boldsymbol{\mu})+
\frac{\widetilde{(\mu_1+\mu_k-1)}}{2}{\mu}_kQ(0,\boldsymbol{\mu}).
\end{align}
This is the sum of all $m$-summation terms in \eqref{eq22}, \eqref{eq23}, \eqref{eq28}, \eqref{eq29}, \eqref{eq211}, and \eqref{eq212}.
Adding \eqref{eq401} and \eqref{eq41}, we have the total of all $m$-summation terms:
\begin{align}\label{eq50}
\sum_{\substack{i+m = \mu_1+\mu_k \\ i\geq 1, m\geq 0}}m {\mu}_kQ (i,\boldsymbol{\mu}) + \frac{\widetilde{(\mu_1+\mu_k)}}{2}{\mu}_k Q(0,\boldsymbol{\mu}) +
\frac{\widetilde{(\mu_1+\mu_k-1)}}{2}{\mu}_k Q(0,\boldsymbol{\mu})
\end{align}
Now we sum the terms with summation over $x$. These arise in expressions \eqref{eq21}, \eqref{eq22}, \eqref{eq23}, \eqref{eq24}, \eqref{eq25}, \eqref{eq211} and \eqref{eq212}.
The total is
\begin{align}\label{eq51}
&\sum_{\substack{i+2x = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}x {\mu}_kQ(i,\boldsymbol{\mu})
-\sum_{\substack{i+2x = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}x {\mu}_kQ(i,\boldsymbol{\mu})
+\sum_{\substack{i+2x-1 = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}x {\mu}_kQ(i,\boldsymbol{\mu}) \nonumber \\
&-\sum_{\substack{i+2x-1 = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}(x-1) {\mu}_kQ(i,\boldsymbol{\mu})
+\sum_{\substack{i+2x+1 = \mu_1-\mu_k \\ x\geq 1, i\geq 0}}x {\mu}_kQ(i+1,\boldsymbol{\mu}) -
\sum_{\substack{i+2x+1 = \mu_k-\mu_1 \\ x\geq 1, i\geq 0}}x {\mu}_kQ(i+1,\boldsymbol{\mu}) \nonumber \\
&+\sum_{\substack{i+2x+2 = \mu_1-\mu_k \\ x\geq 0, i\geq 0}}x {\mu}_kQ(i+1,\boldsymbol{\mu}) -
\sum_{\substack{i+2x+2 = \mu_k-\mu_1 \\ x\geq 0, i\geq 0}}(x+1) {\mu}_kQ(i+1,\boldsymbol{\mu}) \nonumber \\
=&\sum_{\substack{i+2x = \mu_1-\mu_k \\ x\geq 0, i\geq 0}}x {\mu}_kQ(i,\boldsymbol{\mu})
-\sum_{\substack{i+2x = \mu_k-\mu_1 \\ x\geq 0, i\geq 0}}x {\mu}_kQ(i,\boldsymbol{\mu})
+\sum_{\substack{i+2x+1 = \mu_1-\mu_k \\ x\geq 0, i\geq 0}}(x+1) {\mu}_kQ(i,\boldsymbol{\mu}) \nonumber \\
&-\sum_{\substack{i+2x+1 = \mu_k-\mu_1 \\ x\geq 0, i\geq 0}}x {\mu}_kQ(i,\boldsymbol{\mu})
+\sum_{\substack{i+2x = \mu_1-\mu_k \\ x\geq 0, i\geq 1}}x {\mu}_kQ(i,\boldsymbol{\mu}) -
\sum_{\substack{i+2x = \mu_k-\mu_1 \\ x\geq 0, i\geq 1}}x {\mu}_kQ(i,\boldsymbol{\mu}) \nonumber \\
&+\sum_{\substack{i+2x+1 = \mu_1-\mu_k \\ x\geq 0, i\geq 1}}x {\mu}_kQ(i,\boldsymbol{\mu}) -
\sum_{\substack{i+2x+1 = \mu_k-\mu_1 \\ x\geq 0, i\geq 1}}(x+1) {\mu}_kQ(i,\boldsymbol{\mu}) \nonumber \\
=& \sum_{\substack{i+2x = \mu_1-\mu_k \\ x\geq 0, i\geq 1}}2x {\mu}_kQ(i,\boldsymbol{\mu}) + \frac{\widetilde{(\mu_1-\mu_k)}}{2} {\mu}_kQ(0,\boldsymbol{\mu}) \nonumber \\
&+\sum_{\substack{i+2x+1 = \mu_1-\mu_k \\ x\geq 0, i\geq 1}}(2x+1) {\mu}_kQ(i,\boldsymbol{\mu}) + \frac{\widetilde{(\mu_1-\mu_k+1)}}{2} {\mu}_kQ(0,\boldsymbol{\mu}) \nonumber \nonumber \\
&- \sum_{\substack{i+2x = \mu_k-\mu_1 \\ x\geq 0, i\geq 1}}2x {\mu}_kQ(i,\boldsymbol{\mu}) - \frac{\widetilde{(\mu_k-\mu_1)}}{2} {\mu}_kQ(0,\boldsymbol{\mu}) \nonumber \\
&- \sum_{\substack{i+2x+1 = \mu_k-\mu_1 \\ x\geq 0, i\geq 1}}(2x+1) {\mu}_kQ(i,\boldsymbol{\mu}) - \frac{\widetilde{(\mu_k-\mu_1-1)}}{2} {\mu}_kQ(0,\boldsymbol{\mu}) \nonumber \\
=&\sum_{\substack{i+x = \mu_1-\mu_k \\ x\geq 0, i\geq 1}}x {\mu}_kQ(i,\boldsymbol{\mu}) - \sum_{\substack{i+x = \mu_k-\mu_1 \\ x\geq 0, i\geq 1}}x {\mu}_kQ(i,\boldsymbol{\mu}) \nonumber \\
&+\left(\frac{\widetilde{(\mu_1-\mu_k)}}{2}+\frac{\widetilde{(\mu_1-\mu_k+1)}}{2}- \frac{\widetilde{(\mu_k-\mu_1)}}{2} -\frac{\widetilde{(\mu_k-\mu_1-1)}}{2}\right) {\mu}_kQ(0,\boldsymbol{\mu})
\end{align}
It is not hard to verify that for $\mu_1,\mu_k\geq 1$,
\begin{align*}
\mu_1 =
\frac{\widetilde{(\mu_1+\mu_k)}}{2}+\frac{\widetilde{(\mu_1+\mu_k-1)}}{2} +
\frac{\widetilde{(\mu_1-\mu_k)}}{2}+\frac{\widetilde{(\mu_1-\mu_k+1)}}{2}- \frac{\widetilde{(\mu_k-\mu_1)}}{2} -\frac{\widetilde{(\mu_k-\mu_1-1)}}{2}
\end{align*}
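Indeed, since $\mu_1,\mu_k \geq 1$, exactly one of $\mu_1+\mu_k$ and $\mu_1+\mu_k-1$ is a positive even integer, so the first two terms on the right sum to $\lfloor (\mu_1+\mu_k)/2 \rfloor$. The third and fourth terms sum to $\lceil (\mu_1-\mu_k)/2 \rceil$ when $\mu_1 \geq \mu_k$ and vanish otherwise, while the last two sum to $-\lfloor (\mu_k-\mu_1)/2 \rfloor$ when $\mu_1 < \mu_k$ and vanish otherwise; in every parity case the total is $\mu_1$.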
Hence combining \eqref{eq50} and \eqref{eq51} we have the second line of \eqref{qrecursion-eq}.
If $\mu_k = 0$, then there are only two possible configurations of partially boundary parallel polygons.
Either they form $m$ bigons parallel to $\alpha F_k \bar{\alpha}$, or they form $m-1$ such bigons and the outermost edge parallel to $\alpha F_k \bar{\alpha}$
belongs to a mixed polygon. These two configurations respectively contribute the two terms of
\[
\sum_{\substack{i+2m=\mu_1 \\ i\geq 0, m\geq 1}}2m Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})
+
\sum_{\substack{i+2m=\mu_1 \\ i\geq 0, m\geq 1}}(2m-1) Q_{g,n-1}(i+1,\boldsymbol{\mu}_{X\setminus \{1,k\}}).
\]
Adding a zero term to the first sum and reparametrising the second, this expression becomes
\begin{align*}
\sum_{\substack{i+2m=\mu_1 \\ i\geq 0, m\geq 0}}& 2m Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})+\sum_{\substack{i+2m+1=\mu_1 \\ i\geq 1, m\geq 0}}(2m+1) Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}}) \nonumber \\
=&\sum_{\substack{i+m = \mu_1 \\ i\geq 1, m\geq 0}}m Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})+ \widetilde{\mu}_1Q_{g,n-1}(0,\boldsymbol{\mu}_{X\setminus \{1,k\}})
\end{align*}
This gives the third line of \eqref{qrecursion-eq}.
\item[(C) $\gamma$ has both ends on $F_1$, is separating, and does not cut off an annulus.]~
The configurations in this case are almost identical to those in case $\mathbf{(A)}$, where $\gamma$ is non-separating. The calculation is
formally identical, we simply substitute $Q_{g_1,|I|+1}(\triangle,\boldsymbol{\mu}_{I})Q_{g_2,|J|+1}(\square,\boldsymbol{\mu}_{J})$
in place of $Q_{g-1,n+1}(\triangle,\square,\boldsymbol{\mu}_{X\setminus \{1\}})$ everywhere. We obtain the last line of \eqref{qrecursion-eq}.
\end{description}
\end{proof}
\subsection{Counts for punctured tori}
With the recursion \eqref{qrecursion-eq} of theorem \ref{qrecursion} in hand, we now obtain the count of pruned polygon diagrams on punctured tori, using the established count for annuli in proposition \ref{basecases}. Then, using proposition \ref{PQ}, we obtain the count of general polygon diagrams.
\begin{proposition}\label{basecase2}
\begin{align*}
Q_{1,1}(\mu_1) &=
\begin{cases}
\frac{\mu_1^3-\mu_1}{24}, & \mu_1 > 0 \text{ odd} \\
\frac{\mu_1^3+8\mu_1}{24}, & \mu_1 > 0 \text{ even} \\
1, & \mu_1=0
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
For $(g,n)=(1,1)$ the recursion \eqref{qrecursion-eq} reduces to
\begin{align*}
Q_{1,1} (\mu_1) = \sum_{\substack{i+j+m = \mu_1 \\ i\geq 1, j,m\geq 0}}m Q_{0,2}(i,j) + \frac{\widetilde{\mu}_1}{2}Q_{0,2}(0,0)
\end{align*}
By Proposition \ref{basecases}, $Q_{0,2}(i,j) = \overline{i} \delta_{i,j}$. If $\mu_1 > 0$ is odd, then we have
\[
Q_{1,1} (\mu_1) = \sum_{\substack{2i+m = \mu_1 \\ i,m\geq 1}}m i = \frac{1}{2}\sum_{\substack{0\leq m \leq\mu_1-2 \\ m \text{ odd}}}m (\mu_1-m)
= \frac{\mu_1}{2} \sum_{\substack{0 \leq m \leq \mu_1 - 2 \\ m \text{ odd}}} m - \frac{1}{2} \sum_{\substack{0 \leq m \leq \mu_1 - 2 \\ m \text{ odd}}} m^2.
\]
Lemma \ref{lem-odd-even-power-sums} gives the two sums immediately, and we obtain
\[
Q_{1,1}(\mu_1) = \frac{\mu_1}{2} \frac{(\mu_1 - 1)^2}{4} - \frac{1}{2} \frac{(\mu_1 - 2)(\mu_1 - 1)\mu_1}{6}
= \frac{\mu_1^3 - \mu_1}{24}.
\]
If $\mu_1 > 0$ is even, then similarly we have
\[
Q_{1,1} (\mu_1)
= \sum_{\substack{2i+m = \mu_1 \\ i,m\geq 1}}m i + \frac{\mu_1}{2}
= \frac{1}{2}\sum_{\substack{0\leq m \leq\mu_1-2 \\ m \text{ even}}}m (\mu_1-m) + \frac{\mu_1}{2}
= \frac{\mu_1}{2} \sum_{\substack{0 \leq m \leq \mu_1 - 2 \\ m \text{ even}}} m - \frac{1}{2} \sum_{\substack{0 \leq m \leq \mu_1 - 2 \\ m \text{ even}}} m^2 + \frac{\mu_1}{2},
\]
and lemma \ref{lem-odd-even-power-sums} then yields
\[
Q_{1,1} (\mu_1) = \frac{\mu_1}{2} \frac{(\mu_1 - 2)\mu_1}{4}
- \frac{1}{2} \frac{(\mu_1 - 2)(\mu_1 - 1)\mu_1}{6}
+ \frac{\mu_1}{2}
= \frac{\mu_1^3 + 8 \mu_1}{24}.
\]
\end{proof}
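The first few values are $Q_{1,1}(1)=0$, $Q_{1,1}(2)=1$, $Q_{1,1}(3)=1$, $Q_{1,1}(4)=4$ and $Q_{1,1}(5)=5$.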
\begin{proposition}\label{P11count}
\begin{align*}
P_{1,1}(\mu_1)&=\binom{2\mu_1-1}{\mu_1} \frac{1}{2\mu_1-1} \frac{\mu_1^3 + 3\mu_1^2 + 20\mu_1 - 12}{12}
\end{align*}
\end{proposition}
\begin{proof} By equation \eqref{PQ'}, for $\mu_1 > 0$ we have $P_{1,1}(\mu_1) = P'_{1,1}(\mu_1)$, and the $\nu_1 = 0$ term contributes $Q'_{1,1}(0)\binom{2\mu_1}{\mu_1} = \frac{1}{2}\binom{2\mu_1}{\mu_1}$. By proposition \ref{basecase2},
\begin{align*}
P_{1,1}(\mu_1)&=\frac{1}{2}\binom{2\mu_1}{\mu_1} + \sum_{\substack{1 \leq \nu_1\leq \mu_1 \\ \nu_1 \text{ odd}}}Q_{1,1}(\nu_1)\binom{2\mu_1}{\mu_1-\nu_1} + \sum_{\substack{1 \leq \nu_1\leq \mu_1 \\ \nu_1 \text{ even}}}Q_{1,1}(\nu_1)\binom{2\mu_1}{\mu_1-\nu_1}\\
&=\frac{1}{2}\binom{2\mu_1}{\mu_1} + \sum_{\substack{1 \leq \nu_1\leq \mu_1 \\ \nu_1 \text{ odd}}}\frac{\nu_1^3-\nu_1}{24}\binom{2\mu_1}{\mu_1-\nu_1} + \sum_{\substack{1 \leq \nu_1\leq \mu_1 \\ \nu_1 \text{ even}}}\frac{\nu_1^3+8\nu_1}{24}\binom{2\mu_1}{\mu_1-\nu_1}.
\end{align*}
Using the combinatorial identities \eqref{comb_id_oe1}--\eqref{comb_id_3o}, this simplifies to $\binom{2\mu_1-1}{\mu_1} \frac{1}{2\mu_1-1} \frac{\mu_1^3 + 3\mu_1^2 + 20\mu_1 - 12}{12}$.
\end{proof}
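For example, this formula gives $P_{1,1}(1) = 1$, $P_{1,1}(2) = 4$ and $P_{1,1}(3) = 17$.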
We have now proved proposition \ref{Pexamples}, with equations \eqref{eqn:P01_formula}--\eqref{P11} proved in the introduction and propositions \ref{P02prop}, \ref{P03count}, and \ref{P11count} respectively.
\section{Polynomiality}
We now prove theorem \ref{thm:quasipolynomiality}, that $Q_{g,n}(\mu_1, \ldots, \mu_n)$ is an odd quasi-polynomial for $(g,n) \neq (0,1),(0,2)$.
The proof follows in the same fashion as proposition \ref{basecase2}.
\begin{proof}[Proof of theorem \ref{thm:quasipolynomiality}]
We use induction on the negative Euler characteristic $-\chi=2g-2+n$. When $2g-2+n=1$, so that $(g,n) = (0,3)$ or $(1,1)$, the theorem holds by propositions \ref{basecases} and \ref{basecase2}.
Fix the parities/vanishings of $(\mu_1,\ldots,\mu_n)$. We split the right hand side of the recursion equation \eqref{qrecursion-eq} for $Q_{g,n}$ into $9$ partial sums
depending on the parities/vanishings of $(i,j)$. We will show that each partial sum is a polynomial. Within each partial sum, since
the parities/vanishings of $(i,j,\mu_1,\ldots,\mu_n)$ are fixed, $Q_{g-1,n+1}$, $Q_{g,n-1}$, $Q_{g_1,|I|+1}$ and $Q_{g_2,|J|+1}$ are polynomials
by the induction assumption. Split each polynomial into monomials in $(i,j,\mu_1,\ldots,\mu_n)$.
To show odd quasi-polynomiality it is
sufficient to show that for $(i,j)$ with fixed parities/vanishings, and for odd positive integers $K$ and $L$, the following statements hold. (The degrees $K$ and $L$ remain odd by assumption.)
\begin{enumerate}
\item $A(\mu_1)=\sum_{\substack{i+j+m = \mu_1 \\ i\geq 1, j,m\geq 0}}mi^{K}j^{L}$ is an odd polynomial in $\mu_1$,
\item $B(\mu_1,\mu_k)=\left( \sum_{\substack{i+m = \mu_1+\mu_k \\ i\geq 1, m\geq 0}}m \mu_k i^{K} +
\widetilde{\sum_{\substack{i+x = \mu_1-\mu_k \\ i\geq 1, x\geq 0}}} x \mu_k i^{K} \right)$ is an odd polynomial in $\mu_1$ and $\mu_k$,
\item $C(\mu_1)=\sum_{\substack{i+m = \mu_1 \\ i\geq 1, m\geq 0}}mi^{K}$ is an odd polynomial in $\mu_1$.
\end{enumerate}
For the first statement, we have
\[
A(\mu_1) =\sum_{\substack{i+j+m = \mu_1 \\ i\geq 1, j,m\geq 0}}mi^{K}j^{L} = \sum_{\substack{i+j+m = \mu_1 \\ i,j,m\geq 1}}m i^{K}j^{L}
= \sum_{\substack{i+j+m = \mu_1 \\ i,j,m\geq 1, m \text{ even}}}m i^{K}j^{L} + \sum_{\substack{i+j+m = \mu_1 \\ i,j,m\geq 1, m \text{ odd}}}m i^{K}j^{L}
\]
Since $(i,j)$ have fixed parities and $K,L$ are odd, it follows from proposition \ref{lemma-odd-induction} that $A(\mu_1)$ is an odd polynomial in $\mu_1$.
A similar argument shows $C(\mu_1)$ is an odd polynomial in $\mu_1$. As for $B(\mu_1, \mu_k)$, another application of proposition \ref{lemma-odd-induction} shows that for some odd polynomial $P(x)$,
\begin{align*}
B(\mu_1,\mu_k)=
\sum_{\substack{i+m = \mu_1+\mu_k \\ i\geq 1, m\geq 0}}m \mu_k i^{K} +
\widetilde{\sum_{\substack{i+x = \mu_1-\mu_k \\ i\geq 1, x\geq 0}}}x \mu_k i^{K}
=&
\begin{cases}
\mu_kP(\mu_1+\mu_k)+\mu_kP(\mu_1-\mu_k),\ \mu_1\geq \mu_k \\
\mu_kP(\mu_1+\mu_k)-\mu_kP(\mu_k-\mu_1),\ \mu_1< \mu_k
\end{cases} \\
=&\ \ \mu_k[P(\mu_1+\mu_k)+P(\mu_1-\mu_k)]
\end{align*}
That $P$ is odd then implies that $B(\mu_1,\mu_k)$ is odd
with respect to both $\mu_1$ and $\mu_k$.
\end{proof}
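The three statements above are also easy to probe by brute force. As an illustration (a Python sketch, not part of the proof), take statement 1 with $i,j$ both odd and $K=L=1$: an odd quintic fitted through three values of $A$ at even $\mu_1$ reproduces the brute-force values at all larger even $\mu_1$, as predicted.
\begin{verbatim}
import numpy as np

def A(mu, K=1, L=1):
    # sum over i + j + m = mu, i, j, m >= 1, with i and j odd, of m * i**K * j**L
    total = 0
    for i in range(1, mu, 2):
        for j in range(1, mu - i, 2):
            total += (mu - i - j) * i**K * j**L
    return total

# fit a*mu^5 + b*mu^3 + c*mu through mu = 4, 6, 8 ...
M = np.array([[mu**5, mu**3, mu] for mu in (4, 6, 8)], dtype=float)
coef = np.linalg.solve(M, np.array([A(mu) for mu in (4, 6, 8)], dtype=float))
# ... and check that it reproduces A at larger even mu
for mu in range(10, 40, 2):
    pred = coef @ np.array([mu**5, mu**3, mu], dtype=float)
    assert abs(pred - A(mu)) < 1e-6 * A(mu)
\end{verbatim}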
If we keep track of the degrees of the polynomials in Proposition \ref{lemma-odd-induction}, we see from the recursion \eqref{qrecursion-eq} that
only the top degree terms in $Q_{g,n-1}$, $Q_{g_1,|I|+1}$ and $Q_{g_2,|J|+1}$ can contribute to the top degree component of $Q_{g,n}^{(X_e,X_o,X_\emptyset)}$.
Going through each term on the right hand side of \eqref{qrecursion-eq}, it is easy to verify by induction that
\begin{itemize}
\item the degree of $Q_{g,n}^{(X_e,X_o,\emptyset)}$ is $6g-6+3n$ (i.e. when $X_\emptyset = \emptyset$ and all variables $\mu_1, \ldots, \mu_n$ are nonzero),
\item the degree of $Q_{g,n}^{(X_e,X_o,X_0)}$ is at most $6g-6+3n-|X_0|$ if $X_0$ is non-empty.
\end{itemize}
Furthermore, since the leading coefficient of the resultant odd polynomial in Proposition \ref{lemma-odd-induction} is independent of parities, it
again follows by induction that for $\mu_1,\ldots,\mu_n \geq 1$, the top degree component of $Q_{g,n}(\mu_1,\ldots,\mu_n)$ is independent of
the choice of parities of the $\mu_i$'s.
Let $[Q_{g,n}(\mu_1,\ldots,\mu_n)]^{\mathrm{top}}$ denote this common top degree component of the quasi-polynomial $Q_{g,n}$.
Then for positive $\mu_i$'s the recursion \eqref{qrecursion-eq} truncates to
\begin{align}\label{toprecur}
[Q_{g,n}&(\mu_1, \ldots, \mu_n)]^{\mathrm{top}}
= \left[\sum_{\substack{i+j+m = \mu_1 \\ i,j,m\geq 1}}m [Q_{g-1,n+1}(i,j,\boldsymbol{\mu}_{X\setminus \{1\}})]^{\mathrm{top}}\right]^{\mathrm{top}} \nonumber\\
&+ \left[\sum_{2\leq k\leq n}\left( \sum_{\substack{i+m = \mu_1+\mu_k \\ i,m\geq 1}}m \mu_k [Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})]^{\mathrm{top}} +
\widetilde{\sum_{\substack{i+x = \mu_1-\mu_k \\ i,x\geq 1}}}x \mu_k [Q_{g,n-1}(i,\boldsymbol{\mu}_{X\setminus \{1,k\}})]^{\mathrm{top}}
\right)\right]^{\mathrm{top}} \nonumber\\
&+ \left[\sum_{\substack{g_1+g_2=g \\ I\sqcup J = \{2,\ldots, n\} \\ \text{No discs or annuli}}}
\left(\sum_{\substack{i+j+m=\mu_1 \\ i,j,m\geq 1}}m [Q_{g_1,|I|+1}(i,\boldsymbol{\mu}_{I})]^{\mathrm{top}}[Q_{g_2,|J|+1}(j,\boldsymbol{\mu}_{J})]^{\mathrm{top}}\right) \right]^{\mathrm{top}}
\end{align}
We now compare the pruned polygon diagram counts $Q_{g,n}$ to the non-boundary-parallel (i.e. pruned) arc diagram counts $N_{g,n}$ of \cite{DKM2017}.
We observe from the following two propositions that $N_{g,n}$ satisfies initial conditions and a recursion similar to those of $Q_{g,n}$.
\begin{proposition}[\cite{DKM2017} prop. 1.5]
\label{prop:Ngn_basecase}
\begin{align*}
N_{0,3}(\mu_1,\mu_2,\mu_3) &=
\begin{cases}
\bar{\mu_1}\bar{\mu_2}\bar{\mu_3}, & \mu_1+\mu_2+\mu_3 \text{ even}\\
0, & \mu_1+\mu_2+\mu_3 \text{ odd}
\end{cases}
\quad \quad \text{and} \quad \quad
N_{1,1}(\mu_1) &=
\begin{cases}
\frac{\mu_1^3+20\mu_1}{48}, & \mu_1 > 0 \text{ even}\\
0, & \mu_1 > 0 \text{ odd}\\
1, & \mu_1 = 0.
\end{cases}
\end{align*}
\end{proposition}
\begin{proposition}[\cite{DKM2017} prop. 6.1]\label{initialN}
\label{prop:Ngn_recursion}
For $(g,n) \neq (0,1), (0,2), (0,3)$ and integers $\mu_1 >0$, $\mu_2, \ldots, \mu_n \geq 0$,
\begin{align*}
N_{g,n}(\mu_1, \ldots, \mu_n) &= \sum_{\substack{ i,j,m \geq 0 \\ i+j+m = \mu_1 \\ m \text{ even}}} \frac{m}{2} \; N_{g-1,n+1} (i,j,\boldsymbol{\mu}_{X\setminus \{1\}}) \\
& + \sum_{\substack{\mu_k>0 \\ 2\leq k\leq n}} \left( \sum_{\substack{i,m \geq 0 \\ i+m = \mu_1 + \mu_k \\ m \text{ even}}} \frac{m}{2} \; \mu_k \; N_{g,n-1} (i,\boldsymbol{\mu}_{X\setminus \{1,k\}}) + \widetilde{\sum_{\substack{i,m \geq 0 \\ i+m = \mu_1 - \mu_k \\ m \text{ even}}}} \frac{m}{2} \; \mu_k \; N_{g,n-1} (i, \boldsymbol{\mu}_{X\setminus \{1,k\}}) \right) \\
& + \sum_{\substack{\mu_k=0 \\ 2\leq k\leq n}} \left( \sum_{\substack{i,m \geq 0 \\ i+m = \mu_1 \\ m \text{ even}}} \frac{m}{2} \; N_{g,n-1} (i,\boldsymbol{\mu}_{X\setminus \{1,k\}})\right) \\
& + \sum_{\substack{g_1 + g_2 = g \\ I \sqcup J = \{2, \ldots, n\} \\ \text{No discs or annuli}}} \sum_{\substack{i,j,m \geq 0 \\ i+j+m = \mu_1 \\ m \text{ even}}} \frac{m}{2} \; N_{g_1, |I|+1} (i, \boldsymbol{\mu}_I) \; N_{g_2, |J|+1} (j, \boldsymbol{\mu}_J)
\end{align*}
\end{proposition}
Using the same argument as for $Q_{g,n}$, the first and third authors with Koyama showed that $N_{g,n}$ is an odd quasi-polynomial such that
\begin{itemize}
\item if $\sum_{i=1}^n \mu_i$ is odd, then $N_{g,n}(\mu_1, \ldots, \mu_n) = 0$,
\item if $\sum_{i=1}^n \mu_i$ is even, then the degree of $N_{g,n}^{(X_e,X_o,\emptyset)}(\mu_1, \ldots, \mu_n)$ is $6g-6+3n$ (i.e. when all $\mu_i$ are nonzero),
\item the degree of $N_{g,n}^{(X_e,X_o,X_0)}$ is at most $6g-6+3n-|X_0|$ if $X_0$ is non-empty.
\end{itemize}
Furthermore the leading coefficients of $N_{g,n}$ encode the intersection numbers on the compactified moduli
space $\overline{\mathcal{M}}_{g,n}$.
\begin{theorem}[\cite{DKM2017} thm. 1.9]\label{Nintersection}
For $(g,n) \neq (0,1)$ or $(0,2)$, and $\mu_1,\ldots,\mu_n\geq 1$ such that $\sum \mu_i$ is even,
the polynomial $N_{g,n}^{(X_e,X_o,\emptyset)}(\mu_1, \ldots, \mu_n)$ has degree $6g-6+3n$.
The coefficient $c_{d_1,\ldots,d_n}$ of the highest degree monomial $\mu_1^{2d_1+1} \; \cdots \; \mu_n^{2d_n+1}$
is independent of the partition $(X_e,X_o)$, and
$$c_{d_1,\ldots,d_n} = \frac{1}{2^{5g-6+2n}d_1!\cdots d_n!}\int_{\overline{\mathcal{M}}_{g,n}}\psi_1^{d_1}\cdots\psi_n^{d_n}.$$
\end{theorem}
By comparing the recursions on top-degree terms, we show they are equal up to a constant factor.
\begin{proposition}\label{QN}
For $(g,n) \neq (0,1)$ or $(0,2)$, and $\mu_1,\ldots,\mu_n\geq 1$ such that $\sum \mu_i$ is even,
$$[Q_{g,n}(\mu_1,\ldots,\mu_n)]^{\mathrm{top}} = 2^{4g+2n-5}[N_{g,n}(\mu_1,\ldots,\mu_n)]^{\mathrm{top}}.$$
\end{proposition}
\begin{proof}
The top degree component of $N_{g,n}$ satisfies the recursion
\begin{align}
[N_{g,n}&(\mu_1, \ldots, \mu_n)]^{\mathrm{top}} = \left[\sum_{\substack{ i,j,m \geq 1 \\ i+j+m = \mu_1 \\ m \text{ even}}} \frac{m}{2} \; [N_{g-1,n+1} (i,j,\boldsymbol{\mu}_{X\setminus \{1\}})]^{\mathrm{top}}\right]^{\mathrm{top}} \nonumber \\
& + \left[\sum_{\substack{\mu_k>0 \\ 2\leq k\leq n}} \left( \sum_{\substack{i,m \geq 1 \\ i+m = \mu_1 + \mu_k \\ m \text{ even}}} \frac{m}{2} \; \mu_k \; [N_{g,n-1} (i,\boldsymbol{\mu}_{X\setminus \{1,k\}})]^{\mathrm{top}} + \widetilde{\sum_{\substack{i,m \geq 1 \\ i+m = \mu_1 - \mu_k \\ m \text{ even}}}} \frac{m}{2} \; \mu_k \; [N_{g,n-1} (i, \boldsymbol{\mu}_{X\setminus \{1,k\}})]^{\mathrm{top}} \right)\right]^{\mathrm{top}} \nonumber \\
& + \left[\sum_{\substack{g_1 + g_2 = g \\ I \sqcup J = \{2, \ldots, n\} \\ \text{No discs or annuli}}} \sum_{\substack{i,j,m \geq 1 \\ i+j+m = \mu_1 \\ m \text{ even}}} \frac{m}{2} \; [N_{g_1, |I|+1} (i, \boldsymbol{\mu}_I)]^{\mathrm{top}} \; [N_{g_2, |J|+1} (j, \boldsymbol{\mu}_J)]^{\mathrm{top}}\right]^{\mathrm{top}} \label{ntoprecur}
\end{align}
Since both $[N_{g,n}(\mu_1, \ldots, \mu_n)]^{\mathrm{top}}$ and $[Q_{g,n}(\mu_1, \ldots, \mu_n)]^{\mathrm{top}}$ are independent of parities,
we may assume all $\mu_i$ to be even, so that none of $N_{g-1,n+1} (i,j,\boldsymbol{\mu}_{X\setminus \{1\}})$, $N_{g,n-1} (i,\boldsymbol{\mu}_{X\setminus \{1,k\}})$, $N_{g_1, |I|+1} (i, \boldsymbol{\mu}_I)$, $N_{g_2, |J|+1} (j, \boldsymbol{\mu}_J)$ vanish due to parity issues.
Compare the right-hand sides of equations \eqref{toprecur} and \eqref{ntoprecur}. They are identical except for factors of $2$, and except that $N_{g,n}$ sums over
even $m$ only, while $Q_{g,n}$ sums over both even and odd $m$. Proposition \ref{lemma-odd-induction} implies that, for $Q_{g,n}$, the top degree component of the
sum over even $m$ in \eqref{toprecur} is the same as that over odd $m$. This introduces another factor of $2$. Comparing the base cases (proposition \ref{prop:Ngn_basecase} for $N_{g,n}$, propositions \ref{basecases} and \ref{basecase2} for $Q_{g,n}$) and the recursions on top degree terms (\eqref{ntoprecur} for $N_{g,n}$ and \eqref{toprecur} for $Q_{g,n}$), we obtain the desired result by induction.
\end{proof}
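As a quick sanity check (not needed for the proof): for $(g,n)=(1,1)$, the formulas of proposition \ref{basecase2} quoted earlier give $[Q_{1,1}(\mu_1)]^{\mathrm{top}}=\mu_1^3/24$, while proposition \ref{prop:Ngn_basecase} gives $[N_{1,1}(\mu_1)]^{\mathrm{top}}=\mu_1^3/48$; indeed $\mu_1^3/24=2\cdot\mu_1^3/48$ with $2=2^{4g+2n-5}$.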
We now prove the remaining theorems from the introduction.
\begin{proof}[Proof of theorem \ref{intersection}]
This follows immediately from theorem \ref{Nintersection} and proposition \ref{QN}.
\end{proof}
\begin{proof}[Proof of theorem \ref{Pcount}]
This follows the same argument as proposition \ref{P03count}.
Recall
$$Q'_{g,n}(\mu_1,\ldots,\mu_n):=\frac{1}{2^{\sum_{i=1}^n \delta_{\mu_i,0}}}Q_{g,n}(\mu_1,\ldots,\mu_n).$$
Since $Q_{g,n}$ is a quasi-polynomial, so is $Q'_{g,n}$. Separating $Q'_{g,n}$ into monomials we see that the right hand side of equation
\eqref{PQ'}
$$P'_{g,n}(\mu_1, \ldots, \mu_n) = \sum_{0 \leq \nu_i \leq \mu_i} \left(Q'_{g,n}(\nu_1, \ldots, \nu_n)\prod_{i=1}^n \binom{2\mu_i}{\mu_i - \nu_i}\right)$$
is a sum of terms of the form
\begin{align*}
\prod_{i\in X_e}\left(\sum_{\substack{0 \leq \nu_i \leq \mu_i\\ \nu_i \text{ even}}}\nu_i^{2n_i+1}\binom{2\mu_i}{\mu_i - \nu_i}\right)\cdot
\prod_{i\in X_o}\left(\sum_{\substack{0 \leq \nu_i \leq \mu_i\\ \nu_i \text{ odd}}}\nu_i^{2n_i+1}\binom{2\mu_i}{\mu_i - \nu_i}\right)\cdot
\prod_{i\in X_\emptyset}\binom{2\mu_i}{\mu_i}
\end{align*}
where $n_i\leq 3g-3+n$, as the degree of $Q_{g,n}^{(X_e,X_o,X_0)}$ is at most $6g-6+3n-|X_0|$.
By Proposition \ref{almostpoly}, each
$$\sum_{\substack{1 \leq \nu_i \leq \mu_i\\ \nu_i \text{ fixed parity}}}\nu_i^{2n_i+1}\binom{2\mu_i}{\mu_i - \nu_i}$$
is of the form
$$\frac{\binom{2\mu_i}{\mu_i}}{(2\mu_i-1)(2\mu_i-3)\dots(2\mu_i-2n_i-1)}P_{n_i}(\mu_i)$$
for polynomials $P_{n_i}$. Hence taking a common denominator,
$$P'_{g,n}(\mu_1, \ldots, \mu_n)=\left(\prod_{i=1}^n\frac{\binom{2\mu_i}{\mu_i}}{(2\mu_i-1)(2\mu_i-3)\dots(2\mu_i-2(3g-3+n)-1)}\right)F_{g,n}(\mu_1,\ldots,\mu_n)$$
for some polynomial $F_{g,n}$. Since $\binom{2\mu_i}{\mu_i} = 2^{\delta_{\mu_i,0}}\binom{2\mu_i-1}{\mu_i}$, $P_{g,n}$ has the
required form.
\end{proof}
A nice way to express the relationship \eqref{PQ'} is
to package $P_{g,n}$ and $Q_{g,n}$ into generating differentials.
For $g \geq 0$ and $n \geq 1$ let
\begin{align*}
\omega^P_{g,n}(x_1, \ldots, x_n) &=
\sum_{\mu_1, \ldots, \mu_n \geq 0} P'_{g,n}(\mu_1, \ldots, \mu_n) x_1^{-\mu_1 - 1} \cdots x_n^{-\mu_n - 1} \; dx_1\cdots dx_n \\
\omega^Q_{g,n}(z_1, \ldots, z_n) &=
\sum_{\nu_1, \ldots, \nu_n \geq 0} Q'_{g,n}(\nu_1, \ldots, \nu_n) z_1^{\nu_1 - 1} \cdots z_n^{\nu_n - 1}\; dz_1 \cdots dz_n.
\end{align*}
Following
\cite{DKM2017} and
\cite{DN2013},
for any quasi-polynomial $f$,
\[
\omega^f(z_1, \ldots, z_n) =
\sum_{\nu_1, \ldots, \nu_n \geq 0} f(\nu_1, \ldots, \nu_n) z_1^{\nu_1 - 1} \cdots z_n^{\nu_n - 1}\; dz_1 \cdots dz_n
\]
is a meromorphic differential, hence
$\omega^Q_{g,n}$ is a meromorphic differential.
Using techniques from that previous work, one can show the following.
\begin{proposition}
$\omega^Q_{g,n}$ is the pullback of $\omega^P_{g,n}$ under the map $x_i = \frac{(1+z_i)^2}{z_i}$.
\qed
\end{proposition}
\section{Introduction}
The problem of testing for independence between two random variables
with unspecified densities has been among the very first
applications of rank-based methods in statistical inference.
Spearman's correlation coefficient was proposed in the early 1900s \citep{Spearman1904}, and Kendall's rank correlation goes back to \citet{Kendall1938}, long before \citet{10.2307/3001968} gave his rank sum and signed rank tests for location.
The multivariate version of the same problem---testing independence
between two random vectors with unspecified densities---is
significantly harder, crucially due to the difficulty of defining a
multivariate counterpart to univariate ranks. Indeed, for $d>1$ the real space $\mathbbm{R}^d$ lacks a canonical ordering.
{ As a result, the problem of defining, in dimension
$d>1$, concepts of signs and ranks enjoying the properties that make
the traditional ranks so successful in univariate statistical inference has been an open problem for more than half a century. {One of the most important properties is
the exact distribution-freeness} (for i.i.d.~samples from absolutely
continuous distributions).
In an important new development involving optimal transport, the
concept of center-outward ranks and signs was proposed recently by
\citet{MR3611491}, \citet{hallin2017distribution}, and
\citet{MR4255122} and enjoys a property of ``maximal
distribution-freeness", contrary to earlier concepts put forth in work such as
\citet{MR0298844,MR2598854,MR1212489,MR2329471,MR1926170,MR1963662}.}
For testing independence between two random vectors,
the first attempt to provide a rank-based alternative to the Gaussian likelihood ratio
method of \citet{Wilks1935} was developed in Chapter~8 of \citet{MR0298844} and, for almost thirty years, has remained
the only rank-based approach to the problem. The proposed tests,
however, are based on componentwise rankings and are not
distribution-free---unless, of course, both vectors have
dimension one, in which case we are back to the traditional context
of bivariate independence (see, e.g., Chapter~III.6 of \citet{MR0229351}).
This issue persists in more recent work, e.g., that of
\citet{MR0298844}, \citet{MR1134492}, \citet{MR2691505},
\citet{MR1467849}, \citet{MR1965367,MR2088309}, and \citet{MR2201019}.
We note here that the above work does provide test statistics that are
asymptotically distribution-free in subclasses such as elliptical
distributions. From the perspective we take here, such subclasses are
too restrictive. Moreover, there is a crucial difference between
finite-sample and asymptotic distribution-freeness.
Indeed,
one should be wary that a sequence of tests $\psi^{(n)}$ with asymptotic
size~$\lim_{n\to\infty}{\rm E}_{{\rm P}}[\psi^{(n)}]=\alpha$ under any
element~${\rm P}$ in a class $\mathcal P$ of distributions does not
necessarily have asymptotic size~$\alpha$ under
unspecified~${\rm P}\!\in\!\mathcal P$: the convergence of ${\rm
E}_{{\rm P}}[\psi^{(n)}]$ to $\alpha$, indeed, typically is not uniform
over~$\mathcal P$, so that, in general, $\lim_{n\to\infty}\sup_{{\rm P}\in\mathcal
P}{\rm E}_{{\rm P}}[\psi^{(n)}]\neq \alpha$. Genuinely distribution-free
tests $\phi^{(n)}$, where ${\rm E}_{{\rm P}}[\psi^{(n)}]$ does not depend on
${\rm P}$, do not suffer that problem, and this is why finite-sample
distribution-freeness is a fundamental property.
Palliating these limitations of the existing procedures by defining genuinely distribution-free---now over the class of all absolutely continuous distributions---multivariate extensions of the quadrant, Spearman, and Ken\-dall tests, based
on the concept of center-outward ranks and signs,
is thus highly desirable. It is the objective of this paper.
While this paper focuses on quadrant, Spearman, and Kendall tests of independence, other tests have been considered in the literature. Center-outward ranks and signs have been used recently by \citet{shi2019distribution} in the construction of distribution-free versions of distance covariance tests for multivariate independence, and a general framework for designing distribution-free tests of multivariate independence
that are consistent and statistically efficient
based on center-outward ranks and signs
has been developed in \citet{shi2020rate}. Multivariate ranks (based on measure transportation to the unit cube rather than the unit ball) have been used similarly in \citet{ghosal2019multivariate} and \citet{deb2019multivariate}.
Center-outward ranks and signs also have been used successfully in
other statistical problems: construction of R-estimators \citep{hallin2019center,hallin2020rankbased} in VARMA models, rank tests for multiple-output regression and MANOVA \citep{hallin2020efficient}, and two-sample goodness-of-fit tests \citep{deb2019multivariate,deb2021efficiency,hallin2021finitesample}. We show here how center-outward ranks and signs naturally allow us to define distribution-free multivariate versions of the popular quadrant, Spearman, and Kendall tests.
{The paper is organized as follows. Section~2 briefly reviews the notion of center-outward ranks and signs, and Section~3 introduces our tests of multivariate independence based on center-outward ranks and signs.
In Section~4, we establish an elliptical Chernoff--Savage property for our center-outward test based on van der Waerden scores, which uniformly dominates, against Konijn alternatives, Wilks' test for multivariate independence,
and we also derive an analog of \citet{MR79383}'s result for the problem under study.
This paper ends with a short conclusion in Section~5.
All the proofs are relegated to the appendix.
}
\section{Center-outward distribution
functions, ranks,\\ and signs}
\subsection{Definitions}\label{FQsec}
Denoting by~${\mathbb{S}_d}$
and ${\mathcal{S}_{d-1}}$, respectively, the open
unit ball and the
unit hypersphere in ${\mathbb R}^d$, let ${\rm U}_d$ stand for the spherical\footnote{Namely, the spherical
distribution with uniform (over $[0,1]$) radial density---equivalently,
the product of a uniform over the distances to the origin and a
uniform over the unit sphere ${\cal S}_{d-1}$. For~$d=1$, ${\rm U}_1$ coincides with the Lebesgue uniform over~$(-1,1)$.} uniform distribution over~${\mathbb{S}_d}$. Let ${\rm P}$ belong to the class ${\cal P}_d$ of Lebesgue-absolutely continuous distributions over $\mathbbm{R}^d$. The main result in \citet{MR1369395} then implies the existence of an a.e.\ unique convex (and lower semi-continuous) function $\phi:\mathbbm{R}^d\to\mathbbm{R}$ with gradient $\nabla\phi$ such that\footnote{We borrow from measure transportation the convenient notation $T\#\mathrm{P}$ ($T:\mathbbm{R}^d\to\mathbbm{R}^d$ {\it pushes $\mathrm{P}$ forward to~$T\#\mathrm{P}$}) for the distribution of $T({\bf Z})$ under ${\bf Z}\sim\mathrm{P}$.}~$\nabla\phi\#{\rm P}={\rm U}_d$. Call {\it center-outward distribution function} of $\rm P$ any version ${\bf F}_{\scriptscriptstyle \pm}$
of this a.e.~unique gradient.
Further properties of ${\bf F}_{\scriptscriptstyle \pm}$ require further regularity assumptions. Assume that ${\rm P}$ is in the so-called class ${\cal P}^+_d\subset{\cal P}_d$ of distributions {\it with nonvanishing densities}---namely, the class of distributions with density $f:={\rm d P}/{\rm d}\mu_d$ ($\mu_d$ the $d$-dimensional Lebesgue measure) such that, for all~$D\in\mathbbm{R}^+$, there exist constants $\lambda^-_{D;\mathrm{P}}$ and $\lambda^+_{D;\mathrm{P}}$ satisfying
\begin{equation}\label{nonvanprop}
0<\lambda^-_{D;\mathrm{P}}\leq f({\bf z})
\leq \lambda^+_{D;\mathrm{P}}<\infty
\end{equation}
for all $\bf z$ with $\Vert{\bf z}\Vert \leq D$.
Then, it follows from \citet{MR3886582}
that there exists
a version of ${\bf F}_{\scriptscriptstyle \pm}$ defining a homeo\-morphism between the punctured unit ball~${\mathbb S}_d\!\setminus~\!\{{\bf 0}\}$ and $\mathbbm{R}^d\setminus {\bf F}_{\scriptscriptstyle \pm}^{-1}(\{{\bf 0}\})$; that version has a continuous inverse ${\bf Q}_{\scriptscriptstyle \pm}$ (with domain ${\mathbb S}_d\!\setminus\!\{{\bf 0}\}$), which naturally qualifies as~${\rm P}$'s {\it center-outward quantile function}. Figalli's result is extended, in \citet{MR4147635}, to a more general\footnote{Namely, ${\cal P}_d^+\subsetneq{\cal P}_d^{\scriptscriptstyle \pm}\subsetneq{\cal P}_d$} class ${\cal P}_d^{\scriptscriptstyle \pm}$ of absolutely continuous distributions, while the definition of ${\bf F}_{\scriptscriptstyle \pm}$ given in \citet{MR4255122} aims at selecting, for each ${\rm P}\in{\cal P}_d$, a version of $\nabla\phi$ which, whenever ${\rm P}\in{\cal P}_d^{\scriptscriptstyle \pm}$, is yielding that homeomorphism. For the sake of simplicity, since we are not interested in quantiles, we stick here to the a.e.\ unique definition given above for ${\rm P}\in{\cal P}_d$, and, whenever asymptotic statements are made, to~${\rm P}\in{\cal P}_d^+$.
Turning to sample quantities, denote by~$\mathbf{Z}^{(n)}\!:=\big(\mathbf{Z}_1^{(n)},\dots, \mathbf{Z}_n^{(n)}\big)$, $n\in~\!\mathbb{N}$ a triangular array of i.i.d.\ $d$-dimensional random vectors with distribution~$\mathrm{P}$. Associated with~$\mathbf{Z}^{(n)}$ is the {\it
empirical center-outward distribution function}~${\bf F}_{\scriptscriptstyle \pm}^{(n)}$ mapping the $n$-tuple
$\mathbf{Z}_1^{(n)},\dots, \mathbf{Z}_n^{(n)}$ to a ``regular'' grid $\mathfrak{G}_n$ of the unit
ball~${\mathbb S}_d$. That regular grid $\mathfrak{G}_n$ is obtained as follows
\begin{compactenum}
\item[{\it (a)}] first factorize $n$ into $n=n_Rn_S + n_0$, with
$0\leq n_0<\min(n_R, n_S)$;\footnote{Note that this implies that $n_0/n = o(1)$ as $n\to\infty$. See \citet[Chapter~7.4]{mordant2021transporting} for a suggestion of selecting $n_R$ and~$n_S$. }
\item[{\it (b)}] next consider a ``regular array"
$\mathfrak{S}_{n_S}:=\{{\bf s}^{n_S}_1,\ldots,{\bf s}^{n_S}_{n_S}\}$ of $n_S$ points on the sphere
${\cal S}_{d-1}$ (see the comment below);
\item[{\it (c)}] construct the grid consisting in the collection $\mathfrak{G}_n$ of the
$n_Rn_S $ points $\mathfrak{g}$ of the form
$$\big(r/\big(n_R +1\big)\big){\bf s}^{n_S}_s, \quad
r=1,\ldots,n_R,~s=1,\ldots,n_S,$$ along with ($n_0$ copies of) the
origin in case $n_0\neq 0$: in total $n-(n_0 -1)$ or $n$ distinct points, thus, according as~$n_0>0$ or $n_0=0$.
\end{compactenum}
By ``regular'' we mean ``as regular as
possible'', in the sense, for example of the {\it
low-discrepancy sequences} of the type considered in numerical
integration, Monte-Carlo methods, and experimental design.\footnote{See also \citet{hallin2021finitesample} for a spherical version of the so-called Halton sequences.}
The only mathematical requirement needed for the asymptotic results below is the weak convergence, as~$n_S\to\infty$, of the uniform discrete distribution over~$\mathfrak{S}_{n_S}$ to the uniform distribution over
${\cal S}_{d-1}$. A uniform i.i.d.~sample of points over~${\cal S}_{d-1}$ (almost surely) satisfies such a requirement. However, one easily can construct arrays that are ``more regular'' than an i.i.d.~one. For instance, one could require that $n_S$ or $n_S-1$
of the points in~$\mathfrak{S}_{n_S}$ are such that $-\, {\bf s}^{n_S}_s$ also belongs to~$\mathfrak{S}_{n_S}$, so that~$\Vert \sum_{s=1}^{n_S}{\bf s}^{n_S}_s\Vert =0$ or~1 according as $n_S$ is even or odd. One also could consider factorizations of the form $n=n_Rn_S + n_0$ with $n_S$ even,
then require~$\mathfrak{S}_{n_S}$ to be symmetric with respect to the origin, yielding~$\sum_{s=1}^{n_S}{\bf s}^{n_S}_s={\bf 0}$.
The empirical counterpart ${\bf F}_{\scriptscriptstyle \pm}^{(n)}$ of ${\bf F}_{\scriptscriptstyle \pm}$
is defined as the (bijective, once the origin is given multiplicity $n_0$)
mapping from~$\mathbf{Z}_1^{(n)},\dots, \mathbf{Z}_n^{(n)}$ to the grid~$\mathfrak{G}_n$ that minimizes
$\sum_{i=1}^n\big\Vert {\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)}) - \mathbf{Z}_i^{(n)} \big\Vert ^2$. That
mapping is unique with probability one; in practice, it is obtained
via a simple optimal assignment (pairing) algorithm (a linear program; see \citet{MR4255122}
for details).
Call {\it center-outward rank} of~$\mathbf{Z}_i^{(n)}$ the integer (in~$\{1,\ldots , n_R\}$ or~$\{0, \ldots , n_R\}$ according as $n_0=0$ or not)
$$R^{(n)}_{i;{{{\scriptscriptstyle \pm}}}}:=(n_R +1)\big\Vert {\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})\big\Vert \quad i=1,\ldots,n$$
and {\it center-outward sign} of~$\mathbf{Z}_i^{(n)}$ the unit vector
$${\bf S}^{(n)}_{i;{{\scriptscriptstyle \pm}}}:={\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})/\big\Vert {\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})\big\Vert\quad \text{for ${\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})\neq{\bf 0}$;}$$
put ${\bf S}^{(n)}_{i;{{{\scriptscriptstyle \pm}}}}={\bf 0}$ for ${\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)}) ={\bf 0}$.
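The whole construction is straightforward to implement. The following sketch (illustrative Python, not the authors' code, assuming the \texttt{numpy} and \texttt{scipy} libraries; all function names are ours) builds the grid $\mathfrak{G}_n$ of steps {\it (a)}--{\it (c)} with i.i.d.\ uniform directions on the sphere, solves the optimal pairing as a linear sum assignment problem, and returns the empirical center-outward distribution function, ranks, and signs; the tie-breaking device discussed in the next paragraph is omitted.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def make_grid(d, n_R, n_S, n0, rng):
    # steps (a)-(c): n_R * n_S points r/(n_R + 1) * s, plus n0 copies of the origin
    dirs = rng.standard_normal((n_S, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = np.arange(1, n_R + 1) / (n_R + 1)
    return np.vstack([np.kron(radii[:, None], dirs), np.zeros((n0, d))])

def centre_outward(Z, grid, n_R):
    # empirical F_pm: pairing of the sample with the grid minimising the sum
    # of squared Euclidean distances (a linear assignment problem)
    cost = ((Z[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    F = np.empty_like(Z)
    F[rows] = grid[cols]
    norms = np.linalg.norm(F, axis=1)
    signs = np.divide(F, norms[:, None], out=np.zeros_like(F),
                      where=norms[:, None] > 0)
    return F, (n_R + 1) * norms, signs

rng = np.random.default_rng(0)
n_R, n_S, d = 10, 20, 3
n = n_R * n_S                                   # so that n0 = 0 here
grid = make_grid(d, n_R, n_S, 0, rng)
Z = rng.standard_normal((n, d))
F, ranks, signs = centre_outward(Z, grid, n_R)
\end{verbatim}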
Some desirable finite-sample properties, such as strict independence between the ranks and the signs, only hold for~$n_0=0$ or 1, due to the fact that the mapping from the sample to the grid is no longer injective for $n_0\geq 2$. This, which has no asymptotic consequences (since the number~$n_0$ of tied values involved is $o(n)$ as $n\to\infty$), is easily taken care of by the following tie-breaking device:
\begin{compactenum}
\item[{\it (i)}] randomly select $n_0$ directions ${\bf s}^0_1,\ldots,{\bf s}^0_{n_0}$
in~$\mathfrak{S}_{n_S}$, then
\item[{\it (ii)}] replace the $n_0$ copies of the origin with the new gridpoints
\begin{equation}\label{tiebreak}[1/2(n_R+1)]{\bf s}^0_1,\ldots,[1/2(n_R+1)]{\bf s}^0_{n_0}.
\end{equation}
\end{compactenum}
The resulting grid (for simplicity, the same notation ${\mathfrak{G}}_n$ is used) no longer has multiple points, and the optimal pairing between the sample and this grid is bijective; the $n_0$ smallest ranks, however, take the non-integer value~$1/2$.
\subsection{Main properties}\label{Propsec} This section summarizes some of the main properties of the concepts defined in Sections~\ref{FQsec}; further properties and the proofs can be found in \citet{MR4255122}, \citet{hallin2020efficient} and \citet{hallin2021measure}.
\begin{proposition}\label{H2018} Let ${\bf F} _{{\scriptscriptstyle \pm}}$ denote the center-outward distribution function of~${\rm P}\in{\cal P}_d$. Then,
\begin{compactenum}\vspace{.5mm}
\item[(i)]${\bf F} _{{\scriptscriptstyle \pm}}$ is a probability integral transformation of $\mathbbm{R}^d$: namely, ${\bf Z}\sim {\rm P}$ iff~${\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})\sim {\rm U}_d$; by construction, $\Vert{\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})\Vert$ is uniform over $[0, 1)$, ${\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})/\Vert{\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})\Vert$ is uniform over the sphere ${\cal S}_{d-1}$, and they are mutually independent.
\end{compactenum}\vspace{.5mm}
Let ${\bf Z}^{(n)}_1,\ldots ,{\bf Z}^{(n)}_n$ be i.i.d.\ with distribution ${\rm P}\in{\mathcal P}_d$ and center-outward distribution function~${\bf F} _{{\scriptscriptstyle \pm}}$. Then,
\begin{compactenum}\vspace{.5mm}
\item[(ii)] $\big({\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{1}),\ldots , {\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{n}) \big)$ is uniformly distributed over the $n!/n_0!$ permutations with repetitions of the gridpoints in $\mathfrak{G}_n$ with the origin counted as $n_0$ indistinguishable points (resp. the $n!$ permutations of~$\mathfrak{G}_n$ if either $n_0\leq 1$ or the tie-breaking device described in Section~\ref{FQsec} is adopted);
\item[(iii)] if either $n_0=0$ or the tie-breaking device described in Section~\ref{FQsec} is adopted, the $n$-tuple of center-outward ranks $\big(R^{(n)}_{1;{\scriptscriptstyle \pm} }, \ldots , R^{(n)}_{n;{\scriptscriptstyle \pm} }\big)$ and the $n$-tuple of~center-out\-ward signs $\big({\bf S}^{(n)}_{1;{\scriptscriptstyle \pm} }, \ldots , {\bf S}^{(n)}_{n;{\scriptscriptstyle \pm} }\big)$ are mutually independent;
\item[(iv)] if either $n_0\leq 1$ or the tie-breaking device described in Section~\ref{FQsec} is adopted, $\big({\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{1}),\ldots , {\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{n}) \big)$ is {\em strongly essentially maximal ancillary}.\footnote{See Section~2.4 and Appendices D.1 and D.2 of \citet{MR4255122} for a precise definition and a proof of this essential property.}
\end{compactenum}\vspace{.5mm}
Assuming, moreover,
that ${\rm P}\in{\mathcal P}_d^+$,
\begin{compactenum}\vspace{.5mm}
\item[(v)] (Glivenko--Cantelli)
\begin{equation*}
\displaystyle{\max_{1\leq i\leq n}}\Big\Vert {\bf F}^{(n)} _{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_i) - {\bf F} _{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_i) \Big\Vert \rightarrow 0 ~\textrm{\em a.s.} \quad \text{as}~n\to\infty.
\end{equation*}
\end{compactenum}
\end{proposition}
Center-outward distribution functions, ranks, and signs also inherit, from the invariance of squared Euclidean distances, elementary but quite remarkable invariance and equivariance properties under orthogonal transformations and global rescaling. Denote by ${\bf F}^{{\bf Z}}_{{\scriptscriptstyle \pm}}$ the center-outward distribution function of $\bf Z$ and by~${\bf F}^{{\bf Z};(n)}_{{\scriptscriptstyle \pm}}$ the empirical distribution function of an i.i.d. sample~${\bf Z}_1,\ldots,{\bf Z}_n$ associated with a grid $\mathfrak{G}_n$.
\begin{proposition}\label{invF} Let $\boldsymbol{\mu}\in\mathbbm{R}^d$, $k\in\mathbbm{R}^+$, and denote by ${\bf O}$ a~$d\times d$ orthogonal matrix. Then,
\begin{compactenum}
\item[(i)] ${\bf F}^{\boldsymbol{\mu}+ k{\bf O}{\bf Z}}_{{\scriptscriptstyle \pm}} (\boldsymbol{\mu} + k{\bf O}{\bf z})= {\bf O}{\bf F}^{\bf Z}_{{\scriptscriptstyle \pm}}({\bf z})$, ${\bf z}\in\mathbbm{R}^d$;
\item[(ii)] denoting by ${\bf F}^{\boldsymbol{\mu}+ k{\bf O}{\bf Z};(n)}_{{\scriptscriptstyle \pm}}$ the empirical distribution function of the sample~$\boldsymbol{\mu}+ k{\bf O}{\bf Z}_1,\ldots, \boldsymbol{\mu}+ k{\bf O}{\bf Z}_n$ associated with the grid ${\bf O}\mathfrak{G}_n$ (hence by~${\bf F}^{{\bf Z};(n)}_{{\scriptscriptstyle \pm}}$ the empirical distribution function of the sample~${\bf Z}_1,\ldots, {\bf Z}_n$ associated with the grid $\mathfrak{G}_n$),
\begin{equation}\label{equiv}
{\bf F}^{\boldsymbol{\mu}+ k{\bf O}{\bf Z};(n)}_{{\scriptscriptstyle \pm}} (\boldsymbol{\mu} + k{\bf O}{\bf Z}_i)= {\bf O}{\bf F}^{{\bf Z};(n)}_{{\scriptscriptstyle \pm}}({\bf Z}_i), \quad i=1,\ldots,n.
\end{equation}
\end{compactenum}
\end{proposition}
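Property {\it (ii)} can be illustrated numerically with the sketch of Section~\ref{FQsec}: the squared distances between the transformed sample and the transformed grid differ from the original ones only by a positive factor and by terms whose sum does not depend on the pairing, so the optimal assignment, hence the ranks, is unchanged, while the empirical distribution function rotates with $\bf O$. Continuing that purely illustrative Python sketch:
\begin{verbatim}
from scipy.stats import ortho_group

O = ortho_group.rvs(d, random_state=1)          # a random orthogonal matrix
shift, k = rng.standard_normal(d), 3.0

F1, r1, s1 = centre_outward(Z, grid, n_R)
F2, r2, s2 = centre_outward(shift + k * Z @ O.T, grid @ O.T, n_R)

assert np.allclose(F2, F1 @ O.T)   # equivariance (ii): F(mu + k O Z_i) = O F(Z_i)
assert np.allclose(r2, r1)         # the center-outward ranks are invariant
\end{verbatim}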
\section{Rank-based tests for multivariate independence}
\subsection{Center-outward test statistics for multivariate independence}
In this section, we describe the test statistics we are proposing for testing independence between two random vectors. Consider a sample
$$({\mathbf{X}} ^{\prime}_{11},{\mathbf{X}} ^{\prime}_{21})^{\prime},
({\mathbf{X}} ^{\prime}_{12},{\mathbf{X}} ^{\prime}_{22})^{\prime}, \ldots, ({\mathbf{X}} ^{\prime}_{1n},{\mathbf{X}}
^{\prime}_{2n})^{\prime}$$
of $n$ \mbox{i.i.d.} copies of some $d$-dimensional (with $d:=d_1+d_2$) random vector
$({\mathbf{X}} ^{\prime}_1,{\mathbf{X}} ^{\prime}_2)^{\prime}$ with Lebesgue-absolutely continuous distribution~${\rm P}\in{\cal P}_d$ and density $f$. We are
interested in the null hypothesis under which ${\mathbf{X}} _1$ and ${\mathbf{X}} _2$,
with unspecified marginal distributions~${\rm
P}_1$ (density~$f_1$) and ${\rm
P}_2$ (density~$f_2$), respectively, are mutually
independent: $f$ then factorizes into $f=f_1f_2$.
Denote by $R^{(n)}_{ki;{\scriptscriptstyle \pm}}$ and ${\bf S}^{(n)}_{ki;{\scriptscriptstyle \pm}}$, $i=1,2,\ldots,n$ the center-outward rank and the sign of ${\mathbf{X}} _{ki}$ computed from~${\mathbf{X}} _{k1},
{\mathbf{X}} _{k2}, \ldots, {\mathbf{X}} _{kn}$, $k=1,2$, respectively. For the simplicity of notation, assume, without loss of generality as~$n\to\infty$, that the grid used for computing those ranks and signs is such that~$\sum_{s=1}^{n_S}{\bf s}^{n_S}_s={\bf 0}$, for~$d=d_1,d_2$. Also assume that $n_0 =0$ or~1 (if necessary, after implementing the tie-breaking device described in Section~\ref{FQsec}). This implies that $\sum_{i=1}^n{\bf S}^{(n)}_{ki;{\scriptscriptstyle \pm}} =~\!{\bf 0}$ for~$k=1,2$, and moreover, that $$\sum_{i=1}^n J_k\big(R^{(n)}_{ki;{\scriptscriptstyle \pm}}/\big(n_R+1\big)\big){\bf S}^{(n)}_{ki;{\scriptscriptstyle \pm}} = {\bf 0}$$ for any {\it score functions} $J_k: [0, 1) \to \mathbbm{R}$, $k=1,2$.
Consider the $d_1\times d_2$ matrices
\begin{align}\label{tildeW}
{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}&:=\frac{1}{n} \sum_{i=1}^n{\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)\prime}_{2i;{\scriptscriptstyle \pm}},\\
{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}&:= \frac{1}{n(n_R+1)^2} \sum_{i=1}^nR^{(n)}_{1i;{\scriptscriptstyle \pm}}R^{(n)}_{2i;{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)\prime}_{2i;{\scriptscriptstyle \pm}},\\
{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}&:= {n \choose 2}^{-1}
\sum_{i<i^{\prime}}
\text{sign}\Big[
\Big(R^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}- R^{(n)}_{1i^{\prime};{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{1i^{\prime};{\scriptscriptstyle \pm}}\Big)\nonumber\\
&\hspace{16mm}\times\Big(R^{(n)}_{2i;{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{2i;{\scriptscriptstyle \pm}}- R^{(n)}_{2i^{\prime};{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{2i^{\prime};{\scriptscriptstyle \pm}}\Big)^{\prime} \Big],
\end{align}
where sign$\big[{\bf M}\big]$ stands for the matrix collecting the signs of the entries of a real matrix $\bf M$, and
\begin{align}\label{scoreW}
{\tenq{\mathbf W}}\,\!_{J}^{(n)}&:=
\frac{1}{n} \sum_{i=1}^n
J_1\Big(\frac{R^{(n)}_{1i;{\scriptscriptstyle \pm}}}{n_R+1}\Big)
J_2\Big(\frac{R^{(n)}_{2i;{\scriptscriptstyle \pm}}}{n_R+1}\Big){\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)\prime}_{2i;{\scriptscriptstyle \pm}},
\end{align}
where the {\it score functions} $J_k:[0,1)\to \mathbbm{R}$, $k=1,2$ are the square-integrable differences of two monotone increasing functions, with
\begin{equation}\label{scorevariance}
0<\sigma_{J_k}^2:=\int_0^1J_k^2(u){\rm d} u<\infty.
\end{equation}
The matrices defined in \eqref{tildeW}--\eqref{scoreW} are cross-covariance measures based on center-outward ranks and signs (signs only, in the case of ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$).
For~$d_1=1=d_2$, it is easily seen that ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, and~${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$,
up to scaling constants, reduce to the quadrant, Spearman, and Kendall test statistics, while ${\tenq{\mathbf W}}\,\!_{J}^{(n)}$ yields a score-based extension of Spearman's correlation coefficient.
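In terms of the center-outward ranks and signs of Section~\ref{FQsec}, these matrices are direct to evaluate. The sketch below (illustrative Python, continuing the sketch of Section~\ref{FQsec}; the function name and dictionary keys are ours) computes \eqref{tildeW}--\eqref{scoreW} from two rank/sign samples. Note that the Kendall matrix involves only signs of entrywise differences, so dividing the ranks by $n_R+1$ does not affect it.
\begin{verbatim}
def W_matrices(r1, s1, r2, s2, n_R, J1=None, J2=None):
    # sign, Spearman, Kendall and (optionally) score-based cross-covariance matrices
    n = len(r1)
    u1, u2 = r1 / (n_R + 1), r2 / (n_R + 1)        # normalised ranks in [0, 1)
    outer = s1[:, :, None] * s2[:, None, :]        # (n, d1, d2) sign outer products
    W = {"sign": outer.sum(axis=0) / n,
         "Spearman": ((u1 * u2)[:, None, None] * outer).sum(axis=0) / n}
    F1, F2 = u1[:, None] * s1, u2[:, None] * s2    # rescaled empirical F_pm
    i, j = np.triu_indices(n, k=1)
    D = np.sign((F1[i] - F1[j])[:, :, None] * (F2[i] - F2[j])[:, None, :])
    W["Kendall"] = D.sum(axis=0) / (n * (n - 1) / 2)
    if J1 is not None and J2 is not None:          # score version, e.g. J(u) = u
        W["score"] = ((J1(u1) * J2(u2))[:, None, None] * outer).sum(axis=0) / n
    return W
\end{verbatim}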
\subsection{Asymptotic representation and asymptotic normality}\label{asreprsec}
Each of the rank-based matrices defined in \eqref{tildeW}--\eqref{scoreW} has an asymptotic representation in terms of i.i.d.~variables. More precisely, defining ${\bf S}_{ki;{\scriptscriptstyle \pm}}$ as~${\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{ki})/\big\Vert {\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{ki})\big\Vert$ if ${\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{ki})\neq{\bf 0}$ and ${\bf 0}$ otherwise for $k=1,2$, let
\begin{align}\label{tildeWas}
{{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}&:=\frac{1}{n} \sum_{i=1}^n{\bf S}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{\prime}_{2i;{\scriptscriptstyle \pm}},\\
{{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}&:=
\frac{1}{n} \sum_{i=1}^n {\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i})
{\bf F}^{\prime}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i}),
\\
{{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}&:= {n \choose 2}^{-1} \sum_{i<i^{\prime}}
\text{sign}\Big[
\Big({\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i}) - {\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i^{\prime}}) \Big)\nonumber\\
&\hspace{16mm}\times\Big({\bf F}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i}) -{\bf F}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i^{\prime}}) \Big)^{\prime}\, \Big],
\end{align}
and
\begin{align}\label{scoreWas}
\qquad{{\mathbf W}}\,\!_{J}^{(n)}:=\frac{1}{n} \sum_{i=1}^n
J_1\Big(\big\Vert{\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i}) \big\Vert\Big)
J_2\Big(\big\Vert {\bf F}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i}) \big\Vert\Big)
{\bf S}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{\prime}_{2i;{\scriptscriptstyle \pm}} .
\end{align}
The following asymptotic representation results then hold under the
null hypothesis of independence (hence, also under contiguous
alternatives).
\begin{proposition}\label{prop:hajek}
Under the null hypothesis of independence, as $n_R$ and $n_S$ tend to infinity,
$\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)} \! - {{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}\big),$
$\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}\! - {{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}\big),$
$\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}\! - {{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}\big),$
and, provided that $J_1$ and $J_2$ are the square-integrable differences of two monotone increasing functions, $\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{J}^{(n)} - {{\mathbf W}}\,\!_{J}^{(n)}\big)$ are all~$o_{\text{\rm q.m.}}(n^{-1/2})$.
\end{proposition}
\medskip
The asymptotic normality for {\rm vec}${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, {\rm vec}${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, {\rm vec}${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$, and {\rm vec}${\tenq{\mathbf W}}\,\!_{J}^{(n)}$ follows immediately from the asymptotic representation results and the standard central-limit behavior of {\rm vec}${{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, {\rm vec}${{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, {\rm vec}${{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$, and {\rm vec}${{\mathbf W}}\,\!_{J}^{(n)}$.
\begin{proposition}\label{prop:asym} Under the null (independence) hypothesis, as $n_R$ and $n_S$ tend to infinity,
$n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)},$
$n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)},$
$n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)},$
and $n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{J}^{(n)}$
are asymptotically normal with mean vectors ${\bf 0}_{d_1d_2}$
and covariance matrices
$$\frac{1}{d_1d_2}{\bf I}_{d_1d_2}, \quad
\frac{1}{9d_1d_2}{\bf I}_{d_1d_2}, \quad
\frac{4}{9}{\bf I}_{d_1d_2}, \quad
\text{and} \quad \frac{\sigma^{2}_{J_1}\sigma^{2}_{J_2}}{d_1d_2}{\bf I}_{d_1d_2},
$$
respectively.
\end{proposition}
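These limiting laws are easily probed by simulation. The quick check below (a sketch only, reusing the functions of the previous sketches; with an i.i.d.\ rather than sign-symmetric array of directions the agreement is only approximate at moderate $n$) draws independent Gaussian subvectors, so that the null hypothesis holds, and compares $nd_1d_2\big\Vert {\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}\big\Vert^2_{\mathrm F}$ with its $\chi^2_{d_1d_2}$ limit.
\begin{verbatim}
from scipy import stats

d1, d2, n_R, n_S = 2, 3, 12, 16
n = n_R * n_S
rng = np.random.default_rng(2)
stat = []
for _ in range(200):
    X1, X2 = rng.standard_normal((n, d1)), rng.standard_normal((n, d2))
    _, r1, s1 = centre_outward(X1, make_grid(d1, n_R, n_S, 0, rng), n_R)
    _, r2, s2 = centre_outward(X2, make_grid(d2, n_R, n_S, 0, rng), n_R)
    W = W_matrices(r1, s1, r2, s2, n_R)
    stat.append(n * d1 * d2 * np.sum(W["sign"] ** 2))

print(np.mean(stat))                           # should be roughly d1 * d2 = 6
print(np.quantile(stat, 0.95),                 # compare with the chi-square quantile
      stats.chi2.ppf(0.95, d1 * d2))
\end{verbatim}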
\subsection{Center-outward sign, Spearman, Kendall, and score tests}\label{testprocsec}
Associated with ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$, and ${\tenq{\mathbf W}}\,\!_{J}^{(n)}$ are the sign, Spearman, Kendall, and score test statistics
\begin{align*
&{\tenq T}\,\!_{\text{\tiny\rm sign}}^{(n)}:= nd_1d_2\big\Vert {\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}\big\Vert^2_{\mathrm F},\quad
{\tenq T}\,\!_{\text{\tiny\rm S}}^{(n)}:= 9nd_1d_2\big\Vert {\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}\big\Vert^2_{\mathrm F}, \quad \\
&{\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}:= \frac{9n}{4}\big\Vert {\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}\big\Vert^2_{\mathrm F}, \quad \text{and} \quad
{\tenq T}\,\!_{J}^{(n)}:=
\frac{nd_1d_2}{\sigma^{2}_{J_1}\sigma^{2}_{J_2}}
\big\Vert {\tenq{\mathbf W}}\,\!_J^{(n)}\big\Vert^2_{\mathrm F},
\end{align*}
respectively, where $\Vert {\bf M}\Vert _{\mathrm F}$ stands for the Frobenius norm of a matrix $\bf M$, and $\sigma^{2}_{J_k}$, $k=1,2$ are defined as in \eqref{scorevariance}.
In view of the asymptotic normality results in Proposition~\ref{prop:asym}, the tests (denoted respectively by ${\psi}\,\!_{\text{\tiny\rm sign}}^{(n)}$, ${\psi}\,\!_{\text{\tiny\rm S}}^{(n)}$, ${\psi}\,\!_{\text{\tiny\rm K}}^{(n)}$, and ${\psi}\,\!_{J}^{(n)}$) rejecting the null hypothesis of independence whenever ${\tenq T}\,\!_{\text{\tiny\rm sign}}^{(n)}$, $ {\tenq T}\,\!_{\text{\tiny\rm S}}^{(n)}$, ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$, or ${\tenq T}\,\!_{J}^{(n)}$ exceeds the~$(1-~\!\alpha)$-quan\-tile~$\chi^2_{d_1d_2;1-\alpha}$ of a chi-square distribution with $d_1d_2$ degrees of freedom have asymptotic level $\alpha$. These tests are strictly distribution-free, however, and exact critical values can be computed or simulated as well. The tests based on~${\tenq T}\,\!_{\text{\tiny\rm sign}}^{(n)}$, $ {\tenq T}\,\!_{\text{\tiny\rm S}}^{(n)}$, and ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$ are multivariate extensions of the traditional quadrant, Spearman, and Kendall tests, respectively,
to which they reduce for~$d_1=1=d_2$.
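For completeness, here is the corresponding decision rule in the same illustrative Python (a sketch, not the authors' implementation), using the asymptotic chi-square critical values; the exact distribution-free critical values could be simulated instead, as noted above. With the identity scores $J_k(u)=u$ one has $\sigma^2_{J_k}=1/3$, and the score statistic reduces to ${\tenq T}\,\!_{\text{\tiny\rm S}}^{(n)}$.
\begin{verbatim}
from scipy import stats

def independence_tests(W, n, d1, d2, sig2_J1=None, sig2_J2=None, alpha=0.05):
    fro2 = lambda M: float(np.sum(M ** 2))     # squared Frobenius norm
    T = {"sign": n * d1 * d2 * fro2(W["sign"]),
         "Spearman": 9 * n * d1 * d2 * fro2(W["Spearman"]),
         "Kendall": 9 * n / 4 * fro2(W["Kendall"])}
    if "score" in W:
        T["score"] = n * d1 * d2 / (sig2_J1 * sig2_J2) * fro2(W["score"])
    crit = stats.chi2.ppf(1 - alpha, d1 * d2)
    return {name: {"stat": t, "reject": t > crit,
                   "p-value": stats.chi2.sf(t, d1 * d2)} for name, t in T.items()}
\end{verbatim}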
\section{Local asymptotic power}\label{powersec}
While there is only one way for two random vectors ${\bf X}_1$
and~${\bf X}_2$ to be independent, their mutual dependence can take
many forms. The classical benchmark, in testing for bivariate
independence, is a ``local'' form of an independent component analysis model that goes back to \citet{MR79384}. A multivariate extension of such alternatives has been considered also by \citet{MR1467849} and \citet{MR1965367} in the elliptical context. We extend it further here to more general, non-elliptical situations.
\subsection{Generalized Konijn alternatives}\label{Konijnsec}
Let ${\bf X} ^*=({\bf X} ^{*\prime}_1, {\bf X} ^{*\prime}_2)^{\prime}$, where ${\bf X} ^*_1$ and ${\bf X} ^*_2$ are mutually independent random vectors, with absolutely continuous distributions ${\rm P}_{1}$ over $\mathbbm{R}^{d_1}$ and~${\rm P}_{2}$ over~$\mathbbm{R}^{d_2}$ and densities $f_1$ and $f_2$, respectively; then ${\bf X} ^*$ has density $f=f_1f_2$ over $\mathbbm{R}^{d}$. Consider
\begin{align}\label{Konijn}
{\bf X}=\left(\!
\begin{array}{c}
{\bf X}_1\\ {\bf X}_2
\end{array}
\!\right)
:={\bf M}_\delta\left(\!
\begin{array}{c}
{\bf X} ^*_1\\ {\bf X} ^*_2
\end{array}
\!\right)
:=
\left(\!
\begin{array}{cc}
(1-{\delta}
){\bf I}_{d_1}&{\delta}{\bf M_1}\\
{\delta}{\bf M}_2&(1-{\delta}){\bf I}_{d_2}
\end{array}
\!\right)
\!\left(\!
\begin{array}{c}
{\bf X} ^*_1\\ {\bf X} ^*_2
\end{array}
\!\right)
\end{align}
where ${\delta}\!\in\!\mathbbm{R}$ and ${\bf M}_1\!\in\!\mathbbm{R}^{d_1\times d_2}$, ${\bf M}_2\!\in\!\mathbbm{R}^{d_2\times d_1}$
are nonzero. For given~${\rm P}_{1}$, ${\rm P}_{2}$, ${\bf M}_1$, and ${\bf M}_2$, the distribution ${\rm P}^{\bf X}$ of ${\bf X}$ belongs to a one-parameter family ${\cal P}^{\bf X}:=\{{\rm P}^{\bf X}_{\delta} \vert\, {\delta}\in\mathbbm{R}\}.$
On $f_1$ and $f_2$, we make the following assumption. \smallskip
\begin{assumption}\label{asp:K}
\mbox{}\vspace{1mm}
\begin{compactenum}
\item[(K1)] The densities $f_1$ and $f_2$ are such that
\begin{align*}
\int_{\mathbbm{R}^{d_k}}\!{\bf x}f_k({\bf x}){\rm d} {\bf x}
={\bf 0}\quad\text{and}\quad
0<\int_{\mathbbm{R}^{d_k}}\!{\bf x}{\bf x}^{\prime} f_k({\bf x}){\rm d} {\bf x}=:{\boldsymbol{\Sigma}}_{k}<\infty,\quad k=1,2.
\end{align*}
\item[(K2)] The functions ${\bf x}_k\mapsto (f_k({\bf x}_k))^{1/2}$, $k=1,2$
admit quadratic mean partial derivatives\footnote{Existence of quadratic mean partial derivatives is equivalent to quadratic mean differentiability; this was shown in \citet{MR307329} and independently rediscovered by \citet[Lemma~2.1]{MR1364260}.}
$$D_{\ell}[(f_k)^{1/2}], \quad \ell =1,\ldots,d_k, \
k=1,2.$$
\item[(K3)] Letting
$${\boldsymbol\varphi}:= \left({\boldsymbol\varphi}_1^{\prime} ,{\boldsymbol\varphi}_2^{\prime}\right)^{\prime}:= \left(\varphi_{1;1},\ldots,\varphi_{1;d_1},\varphi_{2;1},\ldots,\varphi_{2;d_2}
\right)^{\prime}$$
with
\begin{align*}\varphi_{k;\ell}:=-2D_\ell[(f_k)^{1/2}]/(f_k)^{1/2}\stackrel{\text{a.e.}}{=} -\partial_\ell f_k/f_k, \quad \ell =1,\ldots,d_k, \
k=1,2,\end{align*}
it holds that, for $k=1,2$ and $ \ell =1,\ldots,d_k$, $0<~\int_{{\mathbbm R}^{d_k} }\big( \varphi_{k;\ell}({\bf x})\big)^{2}f_k({\bf x})\,{\rm d} {\bf x}<\infty$,
and\footnote{Integration by parts yields
$\int_{{\mathbbm R}^{d_k} } {\boldsymbol\varphi}_k({\bf x}) f_k({\bf x}){\rm d} {\bf x} ={\bf 0}$,
$\int_{{\mathbbm R}^{d_k} } {\bf x}^{\prime} {\boldsymbol\varphi}_k({\bf x}) f_k({\bf x}){\rm d} {\bf x} =d_k$, and~
$\int_{{\mathbbm R}^{d_k} } {\bf x} {\boldsymbol\varphi}_k({\bf x})^{\prime} f_k({\bf x}){\rm d} {\bf x} =~{\bf I}_{d_k}$,
$k=1,2$; see also \citet[page~555]{MR1364260}.}
\begin{align*}
{\cal J}\!_k:={\rm Var}\left({\bf X} ^{*\prime}_k {\boldsymbol\varphi}_k({\bf X} ^*_k)\right)=
\int_{{\mathbbm R}^{d_k} } \left({\bf x}^{\prime} {\boldsymbol\varphi}_k({\bf x}) -d_k\right)^2
f_k({\bf x}){\rm d} {\bf x}<\infty .
\end{align*}
\end{compactenum}
\end{assumption}
\smallskip
It should be stressed, however, that these assumptions are not to be imposed on the observations in order for our tests to be valid but only intend to provide an analytically convenient benchmark for the comparison of local power. Let
$${\boldsymbol{\cal I}}_k:= \int_{{\mathbbm R}^{d_k} }
{\boldsymbol\varphi}_k({\bf x}){\boldsymbol\varphi}_k^{\prime}({\bf x})
f_k({\bf x}){\rm d} {\bf x}<\infty.
$$
Under ${\rm P}^{\bf X}_0$, ${\bf X}_1={\bf X} ^*_1$ and ${\bf X}_2=~\!{\bf X} ^*_2$ are mutually independent; for~${\delta}\neq~\!0$, call ${\rm P}^{\bf X}_{\delta}$ a (generalized) {\it Konijn alternative} to ${\rm P}^{\bf X}_0$. Sequences of the form~${\rm P}^{\bf X}_{n^{-1/2}\tau}$ with $\tau\neq 0$, as we shall see, constitute local alternatives to the null hypothesis of independence in a sample of size $n$. More precisely, the following LAN property holds in the vicinity of ${\delta}=0$.
\begin{proposition}\label{prop:lan} Let ${\rm P}_{1}$ and ${\rm P}_{2}$ satisfy Assumption~\ref{asp:K}. Then, denoting by~${\bf X}^{(n)}:=({\bf X}_1,\ldots,{\bf X}_n)$, $n\in\mathbbm{N}$ a triangular array of $n$ independent copies of ${\bf X}\sim{\rm P}^{\bf X}_0$, for given { nonzero}~${\bf M}_1$ and ${\bf M}_2$,
the family ${\cal P}^{\bf X}$ of Konijn alternatives is LAN at~${\delta}=~\!0$ with root-$n$ contiguity rate, central sequence
\begin{multline}\label{KonDelta}
\Delta^{(n)}({\bf X}^{(n)})
:=
n^{-1/2}\sum_{i=1}^n
\Big[{\bf X}^{\prime}_{1i}{\bf M}_2^{\prime}{\boldsymbol\varphi}_2({\bf X}_{2i}) +
{\bf X}^{\prime}_{2i}{\bf M}_1^{\prime}{\boldsymbol\varphi}_1({\bf X}_{1i}) \\
- \Big({\bf X}^{\prime}_{1i}{\boldsymbol\varphi}_1({\bf X}_{1i}) -d_1
\Big)
-\Big({\bf X}^{\prime}_{2i}{\boldsymbol\varphi}_2({\bf X}_{2i}) -d_2
\Big)\Big]
\end{multline}
and Fisher information
\begin{multline}\label{Kongamma}
\gamma^2:={\cal J}_1 + {\cal J}_2
+ \text{\rm vec} ^{\prime}\! \left({\boldsymbol{\Sigma}}_1\right) \text{\rm vec}\! \left({\bf M}_2^{\prime}{\boldsymbol{\cal I}}_2{\bf M}_2\right) \\
+ \text{\rm vec} ^{\prime}\! \left({\boldsymbol{\Sigma}}_2\right) \text{\rm vec}\! \left({\bf M}_1^{\prime}{\boldsymbol{\cal I}}_1{\bf M}_1\right)
+ \text{\rm tr}({\bf M}_1{\bf M}_2)
+ \text{\rm tr}({\bf M}_2{\bf M}_1).
\end{multline}
Namely, under ${\rm P}^{\bf X}_0$,
\begin{align}
\Lambda^{(n)}({\bf X}^{(n)}):=\log\frac{{\rm d}{\rm P}^{\bf X}_{n^{-1/2}\tau}}{{\rm d}{\rm P}^{\bf X}_0}({\bf X}^{(n)})
=\tau\Delta^{(n)}({\bf X}^{(n)}) -\frac{1}{2}\tau^2\gamma^2 + o_{{\rm P}}(1)
\label{LANKon}
\end{align}
and $\Delta^{(n)}({\bf X}^{(n)})$ is asymptotically normal, with mean zero and variance $\gamma^2$ as $n\to\infty$.
\end{proposition}
\subsection{Limiting distributions and Pitman efficiencies}
In this section, we establish elliptical Chernoff--Savage and Hodges--Lehmann results for our center-outward tests based on van der Waerden and Wilcoxon scores,
respectively, taking Wilks' test as the benchmark; compare \citet{MR100322} and \citet{MR79383}.
To this end, we first derive the limiting distributions of ${\tenq T}\,\!_{J}^{(n)}$ and ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$ under the sequence of alternatives~${\rm P}^{\bf X}_{n^{-1/2}\tau}$.
\begin{proposition}\label{prop:J} Let ${\rm P}_{1}$ and ${\rm P}_{2}$ satisfy Assumption~\ref{asp:K}. Then, if observations are $n$ independent copies with distribution~${\rm P}^{\bf X}_{n^{-1/2}\tau}$, for given { nonzero}~${\bf M}_1$ and ${\bf M}_2$,
\begin{compactenum}
\item[(i)] the limiting distribution of the test statistic ${\tenq T}\,\!_{J}^{(n)}$ is noncentral chi-square with $d_1d_2$ degrees of freedom and noncentrality parameter
\begin{equation*}
\frac{\tau^2 d_1d_2}{\sigma^{2}_{J_1}\sigma^{2}_{J_2}}\Big\Vert {\rm E}_{H_0}\Big[{\bf J}_1({\bf F}_{1;{\scriptscriptstyle \pm}} ({\bf X}_1))
{\bf R} {\bf J}_2({\bf F}_{2;{\scriptscriptstyle \pm}} ({\bf X}_2))^{\prime}\Big]\Big\Vert^2_{\mathrm F},
\end{equation*}
where ${\bf R}:={\bf X}_{1}^{\prime}{\bf M}_2^{\prime}{\boldsymbol\varphi}_2({\bf X}_{2}) +
{\bf X}_{2}^{\prime}{\bf M}_1^{\prime}{\boldsymbol\varphi}_1({\bf X}_{1})$ and
$${\bf J}_k({\bf u}):= J_k(\Vert{\bf u}\Vert)\frac{{\bf u}}{\Vert{\bf u}\Vert}{\bf 1}_{[\Vert{\bf u}\Vert\neq 0]},\quad {\bf u}\in{\mathbb{S}_d};$$
\item[(ii)] the limiting distribution of the test statistic ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$ is noncentral chi-square with $d_1d_2$ degrees of freedom and noncentrality parameter
\begin{equation*}
{9\tau^2}\Big\Vert {\rm E}_{H_0}\Big[{\bf F}^{\square}_{1;{\scriptscriptstyle \pm}} ({\bf X}_1)
{\bf R} {\bf F}^{\square}_{2;{\scriptscriptstyle \pm}} ({\bf X}_2)^{\prime}\Big]\Big\Vert^2_{\mathrm F},
\end{equation*}
where
$$\big({\bf F}^{\square}_{k;{\scriptscriptstyle \pm}} ({\bf X}_k)\big)_j:=2F_{kj}\Big(\big({\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{k})\big)_{j}\Big)-1$$
(here $F_{kj}$ denotes the cumulative distribution function of~$\big({\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{k}) \big)_{j}$).
\end{compactenum}
\end{proposition}
Suppose that all the conditions in Proposition~\ref{prop:J} hold. Then the limiting alternative distribution of Wilks' (log) likelihood ratio test statistic
is also noncentral chi-square, with~$d_1d_2$ degrees of freedom and noncentrality parameter
\begin{equation*}
\tau^2 \Big\Vert
{\boldsymbol{\Sigma}}_{1}^{ 1/2}{\bf M}_2^{\prime}{\boldsymbol{\Sigma}}_2^{-1/2} +
{\boldsymbol{\Sigma}}_{1}^{-1/2}{\bf M}_1 {\boldsymbol{\Sigma}}_2^{ 1/2} \Big\Vert^2_{\mathrm F};
\end{equation*}
see, e.g., page 919 of \citet{MR2201019}.
Now we are ready to compute the asymptotic relative efficiencies of our center-outward rank tests with respect to Wilks' likelihood ratio test.
\begin{proposition}\label{prop:Pitman}
Let ${\rm P}_{1}$ and ${\rm P}_{2}$ be elliptically symmetric distributions, namely, admit densities of the form
$$f_k({\bf x}_k)\propto ({\rm det}({\boldsymbol{\Sigma}}_k))^{-1/2}
\phi_k\Big(\sqrt{{\bf x}_k^{\prime}{\boldsymbol{\Sigma}}_k^{-1}{\bf x}_k}\,\Big),\quad k=1,2,$$
satisfying Assumption~\ref{asp:K}. Then,
the Pitman asymptotic relative efficiency (ARE)
of the center-outward test based on score functions $J_k$, $k=1,2$
with respect to Wilks' test (denoted by ${\psi}\,\!_{\mathcal N}^{(n)}$) is
\begin{align*}
{\rm ARE}({\psi}\,\!_{J}^{(n)}, {\psi}\,\!_{\mathcal N}^{(n)})
=\frac{\Big\Vert
D_1C_2{\boldsymbol{\Sigma}}_{1}^{ 1/2}{\bf M}_2^{\prime}{\boldsymbol{\Sigma}}_2^{-1/2} +
D_2C_1{\boldsymbol{\Sigma}}_{1}^{-1/2}{\bf M}_1 {\boldsymbol{\Sigma}}_2^{ 1/2} \Big\Vert^2_{\mathrm F}}{d_1d_2\sigma^{2}_{J_1}\sigma^{2}_{J_2}\Big\Vert
{\boldsymbol{\Sigma}}_{1}^{ 1/2}{\bf M}_2^{\prime}{\boldsymbol{\Sigma}}_2^{-1/2} +
{\boldsymbol{\Sigma}}_{1}^{-1/2}{\bf M}_1 {\boldsymbol{\Sigma}}_2^{ 1/2} \Big\Vert^2_{\mathrm F}},
\end{align*}
where
\begin{align*}
&C_k\equiv C_k(J_k,\phi_k):={\rm E}[J_k^{-1}(U)\rho_k(\tilde F_k^{-1}(U))],\\
&D_k\equiv D_k(J_k,\phi_k):={\rm E}[J_k^{-1}(U)\tilde F_k^{-1}(U)],
\end{align*}
$\rho_k:=-\,\phi_k^{\prime}/\phi_k$,
$\tilde F_k$ denotes the cumulative distribution function of $\Vert {\bf Y}_k\Vert$ with ${\bf Y}_k:={\boldsymbol{\Sigma}}_{k}^{-1/2}{\bf X}_k$,
and $U$ stands for a random variable uniformly distributed over $(0,1)$.
In particular, if
${\boldsymbol{\Sigma}}_{1} {\bf M}_2^{\prime}
={\bf M}_1 {\boldsymbol{\Sigma}}_{2}$, we have
\begin{compactenum}
\item[(i)]${\rm ARE}({\psi}\,\!_{J^{\text{\tiny{\rm vdW}}}}^{(n)}, {\psi}\,\!_{\mathcal N}^{(n)})\ge 1,$
where $J^{\text{\tiny{\rm vdW}}}_k,~k=1,2$~are~the
van der Waerden score functions~$J^{\text{\tiny{\rm vdW}}}_k(u)\!:=\!\big(F_{\chi^2_{d_k}}^{-1}\!(u)\big)^{\!1/2}\!$ with $F_{\chi^2_d}$ the $\chi^2_d$ cumulative distribution function;
\item[(ii)]
$
{\rm ARE}({\psi}\,\!_{J^{\text{\tiny{\rm W}}}}^{(n)}, {\psi}\,\!_{\mathcal N}^{(n)})
\ge \Omega(d_1,d_2)
\ge {9}/{16},
$
where the
Wilcoxon score functions are defined as $J^{\text{\tiny{\rm W}}}_k(u):=u$ for~$k=1,2$, and
\begin{align*}
&\Omega(d_1,d_2):=\frac{9(2c_{d_1}^2+d_1-1)^2(2c_{d_2}^2+d_2-1)^2}{1024 d_1d_2c_{d_1}^2c_{d_2}^2}, \\
&c_d:=\inf\Big\{ x>0 \ \Big\vert\ \Big(\sqrt{x} B_{\sqrt{2d-1} / 2}(x)\Big)^{\prime} = 0\Big\}, \\
&B_{a}(x):=\sum_{m=0}^{\infty}{\frac {(-1)^{m}}{m!\Gamma (m+a+1)}}{\left({\frac {x}{2}}\right)}^{2m+a}.
\end{align*}
\end{compactenum}
\end{proposition}
\citet{MR2691505} notes that
the Pitman ARE depends on the underlying covariance structure (${\boldsymbol{\Sigma}}_{1}$ and ${\boldsymbol{\Sigma}}_{2}$)
for ${\bf X}_1$ and ${\bf X}_2$ with elliptically symmetric distributions,
while most of the existing literature (e.g. \citet{MR2691505}, \citet{MR1467849},
\citet{MR1965367,MR2088309}, \citet{MR2201019}, \citet{MR2462206} and \citet{deb2021efficiency}) focuses on the spherically symmetric case.
The proposition above fills this gap
by providing an explicit formula for the ARE with general ${\boldsymbol{\Sigma}}_{k}$'s.
Claim (i) establishes the Pitman non-admissibility, under ellipticity, of Wilks' test:
for elliptically symmetric distributions, the latter is uniformly dominated by our
center-outward test with van der Waerden scores.
This is comparable with Theorem~4.1 in \citet{deb2021efficiency}.
Claim (ii) is a multivariate extension of \citet{MR79383}'s result; the infimum $9/16$ of $\Omega(d_1,d_2)$ is attained in the limit $d_1,d_2\to\infty$. One can find more numerical values of~$\Omega(d_1,d_2)$ for fixed $d_1,d_2$ in \citet[Table~3]{MR2462206}.
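The constant $\Omega(d_1,d_2)$ is easy to evaluate numerically: the series $B_a$ is the Bessel function of the first kind, so that $c_d$ is the first stationary point of $x\mapsto \sqrt{x}\,B_{\sqrt{2d-1}/2}(x)$, which a standard root-finder can locate. A sketch (assuming \texttt{scipy}; for $d=1$, $\sqrt{x}\,B_{1/2}(x)$ is proportional to $\sin x$, so $c_1=\pi/2$ and $\Omega(1,1)=9\pi^4/1024\approx 0.856$):
\begin{verbatim}
import numpy as np
from scipy.special import jv                # Bessel function of the first kind
from scipy.optimize import brentq

def c_d(d):
    a = np.sqrt(2 * d - 1) / 2
    # derivative of sqrt(x) * J_a(x), using J_a'(x) = (J_{a-1}(x) - J_{a+1}(x)) / 2
    deriv = lambda x: jv(a, x) / (2 * np.sqrt(x)) \
        + np.sqrt(x) * (jv(a - 1, x) - jv(a + 1, x)) / 2
    xs = np.arange(1e-3, 50.0, 1e-3)
    v = deriv(xs)
    first = np.argmax(v[:-1] * v[1:] < 0)   # first sign change of the derivative
    return brentq(deriv, xs[first], xs[first + 1])

def Omega(d1, d2):
    c1, c2 = c_d(d1), c_d(d2)
    return 9 * (2 * c1**2 + d1 - 1)**2 * (2 * c2**2 + d2 - 1)**2 \
        / (1024 * d1 * d2 * c1**2 * c2**2)

print(Omega(1, 1))               # ~0.856, i.e. 9*pi^4/1024
print(Omega(2, 2), Omega(5, 5))  # approaches the bound 9/16 as d1, d2 grow
\end{verbatim}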
\section{Conclusion}
Optimal transport provides an entirely new approach to rank-based statistical inference in dimension $d\geq 2$. The new multivariate ranks retain many of the favorable properties one is used to with the classical univariate ranks. Here, we demonstrate how the new multivariate ranks can be used to define multivariate versions of popular rank correlations such as Kendall's tau or Spearman's rho. We show how the new multivariate rank correlations yield
fully distribution-free, yet powerful and computationally efficient tests of independence. A highlight of our results is the fact that the use of van der Waerden scores allows one to design a nonparametric test whose asymptotic efficiency under arbitrary elliptical densities never drops below that of Wilks' test---not even under a Gaussian model.
\setcitestyle{numbers}
\bibliographystyle{apalike}
\section{Introduction}
Hickson compact groups \citep[HCGs,][]{Hickson1982} of galaxies are systems characterised by a high local density while being located in low density environments when viewed at larger scales. This high density combined with low velocity dispersions \citep{Hickson+1992} in many cases leads them to exhibit multiple physical processes associated with galaxy--galaxy interactions: tidal tails and bridges visible optically, in atomic gas (H\,{\sc i}), or both \citep[e.g.][]{Verdes-Montenegro+1997,Sulentic+2001,Verdes-Montenegro+2005,Serra+2013,Konstantopoulos+2013}; intragroup diffuse X-ray emission \citep{Belsole+2003,Desjardins+2013,OSullivan+2014b}; shock excitation from starburst winds or galaxy--tidal debris collisions \citep{Rich+2010,Vogt+2013,Cluver+2013}; anomalous star formation (SF) activity, molecular gas content and morphological transformations \citep{Tzanavaris+2010,Plauchu-Frayn+2012,Alatalo+2015,Eigenthaler+2015,Zucker+2016,Lisenfeld+2017}, among others.
Single dish H\,{\sc i} \ studies of HCGs \citep{Williams+Rood1987,Huchtmeier1997} revealed that most are deficient in H\,{\sc i}. \citet{Verdes-Montenegro+2001} expanded on this discovery by performing a comprehensive study of the total H\,{\sc i} \ contents of 72 HCGs observed with single dish telescopes, together with an analysis of the spatial distribution and kinematics of the H\,{\sc i} \ gas within a subset of 16 HCGs observed with the VLA (Very Large Array). As a result of the analysis the authors proposed an evolutionary sequence in which compact group galaxies become increasingly H\,{\sc i} \ deficient as the group evolves. In phase 1 of the sequence the H\,{\sc i} \ is relatively unperturbed and found mostly in the discs of the galaxies, with the remaining gas found in incipient tidal tails. In phase 2 30--60\% of the total H\,{\sc i} \ mass forms tidal features. In Phase 3a most, if not all, of the H\,{\sc i} \ has been stripped from the discs of the galaxies and is either found in tails, or is not detected. The least common phase, 3b, involves groups where the H\,{\sc i} \ gas seems to form a large cloud with a single velocity gradient that contains all the galaxies. However, of the four groups proposed to be in this phase, HCGs 18 and 54 are now thought to be false groups \citep{Verdes-Montenegro+2001,Verdes-Montenegro+2002} and the H\,{\sc i} \ distribution in HCG 26 probably does not fulfil the necessary criteria (Damas-Segovia et al. in prep), leaving only HCG 49 and raising the question whether phase 3b is a genuine phase of CG evolution. A slightly different evolutionary sequence was proposed by \citet{Konstantopoulos+2010}, where the evolution follows a similar sequence, but all groups are split into two categories: a) those where the gas is mostly consumed by SF in the galactic discs before major interactions can strip it, leading to late-time dry mergers, and b) those where the gas is removed from the galaxies through tidal stripping early on in the group's evolution, leading to a hot, diffuse intra-group medium (IGrM) at late times.
\citet{Borthakur+2010,Borthakur+2015} compared single dish H\,{\sc i} \ spectra of HCGs obtained with the GBT (Green Bank Telescope) and VLA H\,{\sc i} \ maps to demonstrate that some HCGs have a diffuse H\,{\sc i} \ component that was not detected by the VLA and can extend up to 1000 \kms \ in velocity width. The fraction of H\,{\sc i} \ in this component appears to be greater for groups with larger H\,{\sc i} \ deficiencies, and thus makes up some, but not all, of the ``missing'' H\,{\sc i} \ reported by \citet{Verdes-Montenegro+2001}. The connection between the H\,{\sc i} \ content and distribution, SF activity and X-ray emission has been the subject of numerous studies \citep[e.g.][]{Ponman+1996,Rasmussen+2008,Bitsakis+2011,Martinez-Badenes+2012,Desjardins+2013,OSullivan+2014a,OSullivan+2014b}; however, how the observed H\,{\sc i} \ depletion occurs and, more generally, how the groups might evolve from phase 2 to 3, remains far from understood.
Assuming the proposed evolutionary scenario is correct, detailed studies of phase 2 groups are of special relevance for addressing the unknowns above, because in these groups the processes that drive the transformation to phase 3 HCGs should be at work. HCG 16 is a prototypical example of this intermediate phase of evolution. Its H\,{\sc i} \ gas is in the process of leaving the discs of the galaxies and filling the intragroup medium with significant amounts of H\,{\sc i} \ in tidal tails, but the group has not yet become globally H\,{\sc i} \ deficient. HCG 16 also hosts an array of other ongoing processes that will likely shape its future evolution: active galactic nuclei (AGN), a new member, starburst events and the accompanying winds and shocks. Many of these have been studied in detail in an extensive set of papers focusing on the group or a small sample of groups including HCG 16 \citep{Ribeiro+1996,Mendes+1998,Rich+2010,Vogt+2013,Konstantopoulos+2013,OSullivan+2014a,OSullivan+2014b}, however, to date there has been no study specifically of the group's extraordinary H\,{\sc i} \ component, which will be the focus of this work.
The aim of this paper is then to shed light on how the final stages of evolution in HCGs are reached by performing a census of the on-going physical processes in HCG 16, identifying those that could be influencing the fate of the H\,{\sc i} \ in the group and its evolution towards a phase 3 morphology.
In the following section we give a brief overview of HCG 16; in \S\ref{sec:data} we describe the observations and standard data reduction; and \S\ref{sec:HIsep} covers the separation of H\,{\sc i} \ into galaxies and tidal features. In \S\ref{sec:results} we present our results for the group as a whole and the individual galaxies, and in \S\ref{sec:discuss} we discuss their interpretation and attempt to construct a coherent picture of the evolution of the group. Throughout this paper we assume a distance of $55.2 \pm 3.3$ Mpc for HCG 16 and all its constituent galaxies.
\section{Overview of HCG 16}
HCG 16 was first identified in the Atlas of Peculiar Galaxies \citep{Arp1966}, Arp 318, and later classified as a compact group by \citet{Hickson1982}. Since then it has been referenced in approximately 100 journal articles and is thus an extremely well-studied group with a large amount of multi-wavelength data. However, this work represents the first focused investigation of its H\,{\sc i} \ component.
The core group contains 4 disc galaxies, each with stellar mass of the order $10^{10}$--$10^{11}$ \Msol, that all fall within a projected separation of just 7\arcmin \ (120 kpc). There is a fifth, similar sized, member of the group (NGC 848) to the South-East that was identified as being associated with the core group through optical spectroscopy \citep{deCarvalho+1997}, and was later shown to be connected in H\,{\sc i} \ also \citep{Verdes-Montenegro+2001}. \citet{deCarvalho+1997} also identified two other dwarf galaxy members of HCG 16, PGC 8210 to the South-West and 2MASS J02083670-0956140 to the North-West. The latter is not considered in this work as it falls outside of the primary beam of the H\,{\sc i} \ observations. The basic optical properties of the others are summarised in Table \ref{tab:optprops}.
In \S\ref{sec:results} we discuss the details of each galaxy individually, here we provide a brief overview of their properties for readers unfamiliar with this group. Figure \ref{fig:optim} shows an optical $grz$ image (from the Dark Energy Camera Legacy Survey) of the group. From North-West to South-East the galaxies are HCG 16b, a, c, d, and NGC 848. PGC 8210 is to the South-West of the core group. The 4 galaxies in the core group form 2 interacting pairs, HCG 16a and b, and HCG 16c and d. In the first pair both galaxies host an AGN, but have limited SF activity, while the second pair does not host AGN and both galaxies are currently undergoing nuclear starburst events. NGC 848 is physically connected to the core group by an enormous H\,{\sc i} \ tail, while PGC 8210 appears quite separate and shows no evidence in H\,{\sc i} \ for a past interaction with the core group.
\begin{table*}
\centering
\caption{HCG 16 galaxies}
\label{tab:optprops}
\begin{tabular}{cccccccc}
\hline \hline
HCG ID & Other name & RA & Dec & Type & $v_\mathrm{opt}/\mathrm{km\,s^{-1}}$ & $D_{25}$/\arcsec & $L_\mathrm{B}/\mathrm{L_\odot}$ \\ \hline
HCG 16a & NGC 835 & 2h 09m 24.6s & -10$^{\circ}$ 08\arcmin \ 09\arcsec & Sab & $4073$\textsuperscript{a} & 76 & $10.27 \pm 0.05$\\
HCG 16b & NGC 833 & 2h 09m 20.8s & -10$^{\circ}$ 07\arcmin \ 59\arcsec & SABa & $3864$\textsuperscript{a} & 89 & $10.14 \pm 0.05$ \\
HCG 16c & NGC 838 & 2h 09m 38.5s & -10$^{\circ}$ 08\arcmin \ 48\arcsec & S0a & $3849$\textsuperscript{b} & 69 & $10.11 \pm 0.02$ \\
HCG 16d & NGC 839 & 2h 09m 42.9s & -10$^{\circ}$ 11\arcmin \ 03\arcsec & S0a & $3874$\textsuperscript{b} & 87 & $9.97 \pm 0.02$ \\
& NGC 848 & 2h 10m 17.6s & -10$^{\circ}$ 19\arcmin \ 17\arcsec & SBab & $4045$\textsuperscript{b} & 89 & $10.09 \pm 0.04$ \\
& PGC 8210 & 2h 09m 06.0s & -10$^{\circ}$ 19\arcmin \ 13\arcsec & Sc & $3972$\textsuperscript{a} & 72\textsuperscript{c} & $9.37 \pm 0.18$ \\ \hline
\end{tabular}
\tablefoot{Columns: (1) name in HCG catalogue, (2) other name, (3) right ascension (J2000), (4) declination (J2000), (5) morphological type, from HyperLeda (\url{http://leda.univ-lyon1.fr/}), (6) heliocentric velocity from optical spectra (references below), (7) optical isophotal diameter at 25 mag arcsec$^{-2}$ in B-band \citep[from][unless indicated otherwise]{RC3}, (8) logarithm of B-band luminosity calculated following \citet{Fernandez-Lorenzo+2012} with values from HyperLeda and the morphologies and velocities given in this table; the quoted errors ignore distance uncertainty ($\pm 0.05$ dex).\\
\textsuperscript{a} \citet{Ribeiro+1996},
\textsuperscript{b} \citet{Diaz-Gimenez+2012},
\textsuperscript{c} \citet{Paturel+2000}.
}
\end{table*}
\section{Observations and data reduction}
\label{sec:data}
\subsection{H\,{\sc i} \ data}
\label{sec:HIdata}
HCG 16 was mapped with the VLA in C and D configurations in 1999 and 1989 respectively. These data were reduced using AIPS (Astronomical Image Processing System) and presented in \citet{Verdes-Montenegro+2001} and \citet{Borthakur+2010}. In this work we have re-reduced the raw data using \texttt{CASA} \citep[Common Astronomy Software Applications,][]{CASA}\footnote{\url{https://casa.nrao.edu/}} and re-imaged them using multi-scale CLEAN in the \texttt{CASA} task \texttt{tclean}. The H\,{\sc i} \ line emission was imaged over the velocity range 3246 \kms \ to 4557 \kms \ with a resolution of 21 \kms. The dataset was imaged twice to generate two cubes using Briggs robust weighting parameters of 2 and 0. While almost all of the following analysis relies on the robust=2 cube, the robust=0 cube is useful for seeing some parts of the highest column density gas with finer angular resolution. The multi-scale CLEAN angular scales used in these two cubes are 0, 8, 16, 24, and 40 pixels in the robust=2 cube, and 0, 4, 8, 16, 24, and 40 pixels in the robust=0 cube, where each pixel was 4\arcsec \ across in both cases. The resulting beam sizes of these two cubes were 37.2\arcsec \ $\times$ 30.3\arcsec \ and 19.4\arcsec \ $\times$ 14.8\arcsec \ respectively. At the assumed distance of 55.2 Mpc, 20\arcsec \ corresponds to a projected distance of 5.4 kpc. The robust=2 and robust=0 cubes have rms noises of 0.36 mJy/beam and 0.40 mJy/beam respectively, which correspond to 3$\sigma$ H\,{\sc i} \ column density sensitivities of 2.2 $\times$ 10$^{19}$ cm$^{-2}$ and 9.6 $\times$ 10$^{19}$ cm$^{-2}$ at a velocity resolution of 21 \kms. An interactive, 3-dimensional figure displaying the robust=2 cube, created using the X3D pathway introduced in \citet{Vogt+2016}, is available at \url{http://amiga.iaa.es/FCKeditor/UserFiles/X3D/HCG16/HCG16.html}.
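These sensitivities follow from the standard optically thin brightness-temperature conversion. The short sketch below is an illustration only (the adopted frequency and the Gaussian-beam solid angle conversion are assumptions of the sketch, not a description of the imaging pipeline), but it reproduces the quoted values to within a few per cent.
\begin{verbatim}
# Illustrative conversion from per-channel rms to a 3-sigma N(HI) limit,
# assuming the optically thin, Rayleigh-Jeans conversion and a Gaussian beam.
def nhi_3sigma(rms_mjy, bmaj_arcsec, bmin_arcsec, dv_kms, freq_ghz=1.42):
    """3-sigma HI column density [cm^-2] in one channel of width dv_kms."""
    tb_per_mjy = 1222.0 / (freq_ghz**2 * bmaj_arcsec * bmin_arcsec)  # K per mJy/beam
    tb = 3.0 * rms_mjy * tb_per_mjy                                  # K
    return 1.823e18 * tb * dv_kms                                    # cm^-2

print(nhi_3sigma(0.36, 37.2, 30.3, 21.0))  # robust=2 cube -> ~2.2e19
print(nhi_3sigma(0.40, 19.4, 14.8, 21.0))  # robust=0 cube -> ~9.7e19 (cf. 9.6e19 above)
\end{verbatim}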
To create a source mask within which the 0th moment and total integrated flux could be calculated we made use of the \texttt{SoFiA} package \citep[Source Finding in Astronomy,][]{SoFiA,Serra+2015}, using smoothing kernels over spatial scales approximately equal to one and two times the (robust=2) beam size, over one and three channels, and clipping at 3.5$\sigma_{\mathrm{rms}}$ (shown with contours in Figure \ref{fig:mom0}, right panel). A reliability threshold of 100\% was set to remove spurious noise spikes, the sources were merged and the final mask dilated using \texttt{SoFiA}'s mask optimisation tools in order to include all flux associated with the group. An equivalent procedure with a threshold of 5$\sigma_{\mathrm{rms}}$ and without dilating the mask was used to produce the 1st moment map of the group H\,{\sc i} \ emission (Figure \ref{fig:mom1}). In addition we made a more traditional source mask based on a 3$\sigma_{\mathrm{rms}}$ clipping in each channel (of the robust=2 cube) using \texttt{CASA}. A comparison of the moments generated by these two masks can be seen in Figure \ref{fig:mom0}.
The spatial and spectral smoothing performed by \texttt{SoFiA} results in a more extended mask (even though the threshold is 3.5$\sigma_{\mathrm{rms}}$ rather than 3$\sigma_{\mathrm{rms}}$), which should include more of the low column density emission. The standard 3$\sigma_{\mathrm{rms}}$ mask is only used for visual representation as it more clearly separates the higher column density features (precisely because it excludes the fainter emission). In all the following sections and analysis we use the robust=2 cube and the \texttt{SoFiA} source mask, unless explicitly stated otherwise.
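To make the masking step concrete, the following snippet sketches a smooth-and-clip mask in the spirit of the \texttt{SoFiA} source finder. It is a simplified illustration only: the function name and kernel values are placeholders, and the reliability filtering and mask dilation steps applied by \texttt{SoFiA} are not reproduced.
\begin{verbatim}
# Simplified smooth-and-clip masking (illustration, not the SoFiA implementation).
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_clip_mask(cube, kernels, threshold=3.5):
    """cube: (nchan, ny, nx) array; kernels: list of (sigma_v, sigma_xy) in pixels."""
    mask = np.zeros(cube.shape, dtype=bool)
    for sigma_v, sigma_xy in kernels:
        smoothed = gaussian_filter(cube, sigma=(sigma_v, sigma_xy, sigma_xy))
        rms = 1.4826 * np.nanmedian(np.abs(smoothed))  # robust noise estimate
        mask |= np.abs(smoothed) > threshold * rms
    return mask

# e.g. spatial kernels of roughly one and two beam widths, spectral kernels of
# roughly one and three channels:
# mask = smooth_clip_mask(cube, [(0, 0), (0, 4), (0, 8), (1.3, 4), (1.3, 8)])
\end{verbatim}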
\subsection{Optical images}
\label{sec:opt_data}
Throughout this paper we compare H\,{\sc i} \ features with optical images from the Dark Energy Camera Legacy Survey (DECaLS, \url{http://www.legacysurvey.org/decamls}). The three DECaLS bands ($g$,$r$, and $z$) have surface brightness limits in the field of interest of 28.5, 28.7 and 28.0 mag arcsec$^{-2}$ (3$\sigma$ in 10\arcsec$\times$10\arcsec \ boxes), respectively.
This is the deepest image covering the entire field of which we are aware; thus we focus on it when searching for faint optical features that may be associated with extended H\,{\sc i} \ features. However, Hubble Space Telescope and Spitzer images of the group have been published by \citet{Konstantopoulos+2013}.
We used the DECaLS images as published, which were processed with the automated Dark Energy Camera Community Pipeline. This processing slightly over-subtracts the sky in the vicinity of large galaxies, negatively impacting the sensitivity for large scale faint features near large galaxies (like those in HCG 16). As this work is focused on the H\,{\sc i} \ component of the group, we note this issue, but do not reprocess the images.
\section{Separation of H\,{\sc i} \ features}
\label{sec:HIsep}
The H\,{\sc i} \ content of HCG 16 is enormously complicated, with multiple blended galaxies and tidal features. Therefore, in order to study the properties of each galaxy and tidal feature, the H\,{\sc i} \ emission must be separated into these individual objects wherever possible. By doing this we can assess whether each galaxy has a high, normal, or low H\,{\sc i} \ content (irrespective of the global group H\,{\sc i} \ content), if they have well ordered rotation and morphology, or have been disrupted by interactions, and where the gas in tidal features likely originated. The answers to these questions will in turn help to build a consistent picture of how the group has evolved to date and how it might proceed in the future.
To perform this separation we used the \texttt{SlicerAstro} package \citep{SlicerAstro,Punzo+2017}. The \texttt{SoFiA} mask was imported into \texttt{SlicerAstro} and divided into sub-regions corresponding to galaxies or specific tidal features. By importing the same mask we ensure that all of the flux included in the integrated measurement is assigned to a particular galaxy or feature. This is especially important for obtaining the H\,{\sc i} \ masses of individual galaxies. Although the higher column density regions are more straightforward to separate into distinct sources, this would lead to the H\,{\sc i} \ flux being underestimated as low column density emission would be systematically missing, while the scaling relations we will use to define the deficiency were calculated from single dish observations that include all the H\,{\sc i} \ flux of target (isolated) galaxies. The final separation was made through an iterative comparison of the channel maps, optical images, 3-dimensional visualisations, and moment maps of the individual galaxies. This process was unavoidably subjective to some degree, particularly in the region of HCG 16c and HCG 16d where emission smoothly changes from gas associated with the galaxies to multiple high column density tidal features. However, despite the resulting large uncertainties in the galaxy parameters this is still an instructive tool for assessing the likely history of the group, as discussed in the following sections.
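Once each voxel of the \texttt{SoFiA} mask has been assigned a label in this way, the per-object fluxes and masses reported below follow directly. The snippet below is a schematic of that final step only (the variable names and the beam-area normalisation are assumptions of the sketch, not the exact \texttt{SlicerAstro} workflow).
\begin{verbatim}
# Schematic: per-object integrated fluxes from a labelled assignment cube
# (0 = no source, 1..N = galaxy/feature labels). Illustration only.
import numpy as np

def object_fluxes(cube_jy_per_beam, labels, dv_kms, beam_area_pix):
    """Return {label: integrated flux [Jy km/s]} for each labelled region."""
    fluxes = {}
    for lab in np.unique(labels[labels > 0]):
        total = np.nansum(cube_jy_per_beam[labels == lab])  # sum of Jy/beam voxels
        fluxes[lab] = total * dv_kms / beam_area_pix        # Jy km/s
    return fluxes
\end{verbatim}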
More specifics of this process are described in \S\ref{sec:indv_gals}, along with the results for individual galaxies once they have been separated from surrounding features.
\section{Results}
\label{sec:results}
In this section we present the results of our H\,{\sc i} \ analysis, first for the group as a whole and then for each galaxy.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figures/Fig1-HCG16_DECaLS_image.pdf}
\caption{DECaLS $grz$ colour image of HCG 16 with the member galaxies labelled.}
\label{fig:optim}
\end{figure*}
\begin{table*}
\centering
\caption{H\,{\sc i} \ properties of galaxies and tidal features in HCG 16}
\label{tab:HIprops}
\begin{tabular}{lcccccc}
\hline\hline
Object & $S_{\mathrm{HI}}/\mathrm{Jy\,km\,s^{-1}}$ & $\bar{v}/\mathrm{km\,s^{-1}}$ & $\sigma_{v}/\mathrm{km\,s^{-1}}$ & $\log{M_{\mathrm{HI}}/\mathrm{M}_{\odot}}$ & H\,{\sc i} \ deficiency \\ \hline
HCG 16a & 1.97 & 4026 & 120 & 9.15 & $0.69 \pm 0.21$ \\
HCG 16b & 0.69 & 3882 & 61 & 8.70 & $1.01 \pm 0.21$ \\
HCG 16c & 5.14 & 3819 & 89 & 9.57 & $0.12 \pm 0.21$ \\
HCG 16d & 7.48 & 3901 & 56 & 9.73 & $-0.18 \pm 0.21$ \\
NGC 848 & 8.53 & 3987 & 69 & 9.79 & $-0.12 \pm 0.21$ \\
PGC 8210 & 1.14 & 3978 & 48 & 8.91 & $0.07 \pm 0.26$ \\
NW tail & 5.50 & 3788 & 106 & 9.60 & \\
NE tail & 2.98 & 3888 & 51 & 9.33 & \\
E clump & 0.72 & 3910 & 23 & 8.72 & \\
S clump & 1.25 & 4071 & 20 & 8.95 & \\
SE tail & 4.74 & 3997 & 45 & 9.53 & \\
cd bridge & 5.97 & 3944 & 148 & 9.63 & \\
NGC 848S tail & 1.08 & 4030 & 43 & 8.89 & \\ \hdashline
NGC 848S loop & 1.75 & 4034 & 24 & 9.10 & \\ \hline
\end{tabular}
\tablefoot{Columns: (1) object name, (2) H\,{\sc i} \ integrated flux, (3) flux-weighted mean velocity, (4) flux-weighted velocity dispersion, (5) logarithm of H\,{\sc i} \ mass, (6) H\,{\sc i} \ deficiency (calculated in \S\ref{sec:indv_gals}). The final component is listed separately because it is of low significance and was not included in the \texttt{SoFiA} mask; it therefore does not contribute to the global flux measurement.}
\end{table*}
\subsection{Global H\,{\sc i} \ morphology}
Figure \ref{fig:mom0} (left) shows the 0th moment map of the H\,{\sc i} \ emission created using a standard 3$\sigma_{\mathrm{rms}}$ clipping in each channel. This map excludes much of the lowest column density emission, making many features easier to distinguish by eye. The galaxies HCG 16a, b, c, d, NGC 848, and PGC 8210 are all detected along with tidal features across the whole group that appear to connect HCG 16a and b to HCG 16c, HCG 16c to d, and HCG 16d to NGC 848. The most striking tidal feature is the South-East tail, which stretches over a projected distance of $\sim$160 kpc, connecting NGC 848 to the core group. There are no indications of an interaction between PGC 8210 and the rest of the group.
Figure \ref{fig:mom0} (right) shows the 0th moment again, but this time made using the \texttt{SoFiA}-generated mask (\S\ref{sec:HIdata}) and displayed as contours overlaid on the DECaLS $grz$ image.
This mask includes more of the low column density emission than the previous mask and thus provides a more complete measurement of the H\,{\sc i} \ content, but makes most individual features more difficult to identify by eye. This masking is used throughout the following analysis as we want to include low column density emission because a large fraction of the group's H\,{\sc i} \ is in tidal features.
In Figure \ref{fig:mom1} we show the 1st moment of the entire group (using a \texttt{SoFiA} mask with a 5$\sigma_{\mathrm{rms}}$ threshold).
Due to the high spatial density of the group, the galaxies and tidal features are superposed and confused in this image, complicating its interpretation. Without separating out individual objects and features, clear signs of ordered rotation are only visible in NGC 848 and PGC 8210, although in the latter case the emission is barely larger than the beam. The velocity field of each galaxy is presented in \S\ref{sec:indv_gals} after separating them from the rest of the emission in the group.
\subsection{H\,{\sc i} \ flux and mass}
Integrating the entire H\,{\sc i} \ emission shown in Figure \ref{fig:mom0} (right) gives the total H\,{\sc i} \ flux of HCG 16 as 47.2 Jy \kms \ and its mass as $\log{M_\mathrm{HI}/M_\odot} = 10.53 \pm 0.05$ (using the standard formula, $M_\mathrm{HI}/\mathrm{M_{\odot}} = 235600 \times [D/\mathrm{Mpc}]^2 \times [S_\mathrm{HI}/\mathrm{Jy \, km \, s^{-1}}]$). This is considerably higher than the values of \citet{Borthakur+2010}, 21.5 Jy \kms \ and $\log{M_\mathrm{HI}/M_\odot} = 10.19 \pm 0.05$, based on the single dish spectrum taken with the GBT. However, as pointed out in that work, this difference arises because the GBT HPBW is 9.1\arcmin \ and the pointing was centred on the group core (red x in the left panel of Figure \ref{fig:mom0}). This means that the majority of the flux from the tail extending to the SE, NGC 848 and PGC 8210 is missing from the GBT spectrum. The moment 0 map was weighted with a 2D Gaussian window (FWHM of 9.1\arcmin) centred on the GBT pointing centre to estimate the fraction of the total emission that the GBT observations would have been able to detect. This gives an H\,{\sc i} \ mass of $\log{M_\mathrm{HI}/M_\odot} = 10.24 \pm 0.05$ and a flux of 24.2 Jy \kms, which is about 10\% higher than the GBT measurement (Figure \ref{fig:group_spec}). This slight difference could arise from calibration or baseline uncertainties, or simply because a Gaussian is not a completely accurate representation of the beam response.
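For reference, the numbers above follow from a few lines of arithmetic. The sketch below is illustrative only: \texttt{mom0} is assumed to be a map of integrated flux per pixel in Jy \kms, and a circular Gaussian is only an approximation to the true GBT beam response.
\begin{verbatim}
# Worked HI mass, plus a sketch of the Gaussian weighting used to estimate the
# flux recoverable within the GBT beam. Illustration only.
import numpy as np

D_mpc, S_tot = 55.2, 47.2                 # distance [Mpc], integrated flux [Jy km/s]
M_HI = 2.356e5 * D_mpc**2 * S_tot         # Msun (optically thin formula above)
print(round(np.log10(M_HI), 2))           # -> 10.53

def gbt_weighted_flux(mom0, x, y, x0, y0, fwhm_arcmin=9.1):
    """Sum mom0 weighted by a Gaussian of the given FWHM centred on (x0, y0)."""
    sigma = fwhm_arcmin / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    weight = np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))
    return np.nansum(mom0 * weight)       # -> ~24 Jy km/s for our map
\end{verbatim}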
We compare our total flux measurement (47.2 Jy \kms) to that in HIPASS. As HCG 16 is extended over many arcmin it is a marginally resolved source even for the Parkes telescope. Therefore, we cannot use the HIPASS catalogue values, which assume it is a point source. Using the HIPASS cube in this region of the sky we perform a source extraction with \texttt{SoFiA}, applying a threshold limit of 4.5$\sigma$ over smoothing kernels of 3 and 6 spatial pixels and 3 and 7 velocity pixels (each pixel is 4\arcmin \ $\times$ 4\arcmin \ $\times$ 13 \kms). The resulting spectrum is shown in Figure \ref{fig:group_spec}. The integrated flux in HIPASS is 43.8 Jy \kms, which is within 10\% of our measurement with the VLA. Given the difficulty in absolute calibration \citep[e.g.][]{vanZee+1997}, this is about the level of agreement that is to be expected. However, there are some additional discrepancies with this comparison which we discuss further in Appendix \ref{sec:flux_disc}.
\subsection{Individual galaxies}
\label{sec:indv_gals}
Table \ref{tab:HIprops} shows the basic H\,{\sc i} \ properties of the individual galaxies and tidal features based on the separation performed in \S\ref{sec:HIsep}. We have chosen to use the flux-weighted mean velocity and flux-weighted velocity dispersion to quantify the centre and width of each emission profile, rather than more common measures, e.g. $W_{50}$, because some of the profiles, particularly those of the strongly disturbed galaxies or tidal features, do not follow typical profile shapes of H\,{\sc i} \ sources (Figures \ref{fig:spectra} \& \ref{fig:tidal_spectra}). The measurements of the B-band luminosity, $L_{\mathrm{B}}$, of each galaxy (Table \ref{tab:optprops}) were used as inputs to the H\,{\sc i} \ scaling relation of \citet{Jones+2018} in order to estimate their expected H\,{\sc i} \ masses if they were in isolation, and thus their H\,{\sc i} \ deficiencies (the definitions of these quantities are summarised after the list below). We note that although it has been fairly common in past works to consider the morphology of a galaxy when calculating H\,{\sc i} \ deficiency, here we choose to ignore it. There are two main reasons for this:
\begin{itemize}
\item \citet{Jones+2018} suggest that morphology should be ignored unless the sample has a large fraction of ellipticals, because their piece-wise scaling relations (split by morphology) are quite uncertain due to the small number of galaxies that are not Sb-Sc in the AMIGA \citep[Analysis of the interstellar Medium of Isolated GAlaxies,][]{Verdes-Montenegro+2005b} reference sample. Hence, using these piece-wise relations, in the case of HCG 16, would trade a small bias for a large uncertainty that is difficult to accurately quantify.
\item Galaxies are expected to undergo morphological transformation in compact groups as they are stripped of their gas and perturbed by interactions. Therefore, it is perhaps misguided to use their present day morphologies in these scaling relations at all.
\end{itemize}
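For completeness, the quantities listed in Table \ref{tab:HIprops} follow the definitions below (a minimal statement of what we assume; the coefficients $A$ and $B$ of the \citet{Jones+2018} $M_{\mathrm{HI}}$--$L_{\mathrm{B}}$ relation are not reproduced here and are left symbolic):
\begin{equation}
\bar{v} = \frac{\sum_i S_i v_i}{\sum_i S_i}, \qquad
\sigma_{v} = \left[ \frac{\sum_i S_i \left( v_i - \bar{v} \right)^2}{\sum_i S_i} \right]^{1/2},
\end{equation}
\begin{equation}
\mathrm{DEF_{HI}} = \log{M_{\mathrm{HI}}^{\mathrm{pred}}} - \log{M_{\mathrm{HI}}^{\mathrm{obs}}}, \qquad
\log{M_{\mathrm{HI}}^{\mathrm{pred}}} = A + B \log{L_{\mathrm{B}}},
\end{equation}
where the sums run over the channels $i$ of a profile with flux densities $S_i$ at velocities $v_i$, and the intrinsic scatter of the scaling relation ($\sim$0.2 dex) dominates the quoted deficiency uncertainties.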
Our analysis indicates that only galaxies HCG 16a and b are missing neutral gas compared to their expected H\,{\sc i} \ content if they were in isolation; all the remaining galaxies are consistent with having a normal quantity of H\,{\sc i}. The group as a whole (galaxies
\begin{landscape}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Fig2-HCG16_mom0+DECaLS_overlay.pdf}
\caption{\textit{Left}: Moment 0 map (primary beam corrected) of the H\,{\sc i} \ emission of HCG 16 calculated using a 3$\sigma_{\mathrm{rms}}$ mask in each channel (this mask is intended only for visual purposes, all the analysis uses the \texttt{SoFiA}-generated mask, shown in the right panel).
The black ellipse in the lower right corner indicates the beam size (37.2\arcsec \ $\times$ 30.3\arcsec \ or 10.0 kpc $\times$ 8.1 kpc) and the small red x shows the centre of the GBT pointing from \cite{Borthakur+2010}. The extended features that we separated from the galaxies are indicated by arrows, with the exception of the emission joining galaxies HCG 16c and d. The filled black circle and the white cross indicate the pointing centres of the VLA D and C array data respectively. \textit{Right}: Moment 0 contours (uncorrected for primary beam) overlaid on a DECaLS $r$-band image. In this case the map was generated using the \texttt{SoFiA}-generated mask described in \S\ref{sec:HIdata}, which includes more extended emission. The galaxies in the main band of H\,{\sc i} \ emission going from top-right (NW) to bottom-left (SE) are: HCG 16b, a, c, d, NGC 848, and the single galaxy to the lower-right of the group is PGC 8210. Contour levels: -2.45, 2.45, 9.80, 24.4, 49.0, 73.5, and 98.0 $\times 10^{19} \; \mathrm{cm}^{-2}$, where $2.2 \times 10^{19} \; \mathrm{cm}^{-2}$ corresponds to the 3$\sigma$ sensitivity in one channel. In order of increasing flux the contours are coloured: black (dashed), black, blue, purple, red, orange, and yellow.}
\label{fig:mom0}
\end{figure}
\end{landscape}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figures/Fig3-HCG16_mom1.pdf}
\caption{Moment 1 map of the H\,{\sc i} \ emission calculated using the \texttt{SoFiA}-generated mask with a threshold of 5$\sigma_{\mathrm{rms}}$ and without dilation. The rainbow colour scale indicates the recessional (optical) velocity in \kms. The black ellipse in the lower right indicates the beam size, the grey lines are isovelocity contours separated by 40 \kms, and the small black star symbols indicate the locations of the optical centres of the galaxies in the group.}
\label{fig:mom1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Fig4-HCG16_spectrum.pdf}
\caption{The integrated VLA spectrum of the entire group (thick black line) compared to the GBT spectrum of the core group \citep{Borthakur+2010} (high spectral resolution green line), the VLA spectrum within the GBT beam area (thick black dashed line), and the spectrum extracted from the HIPASS cube (orange line). Vertical dashed lines indicate the optical redshifts of the member galaxies.}
\label{fig:group_spec}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Fig5-HCG16_galaxy_spectra_indv.pdf}
\caption{Spectral profiles of each of the 6 galaxies detected in H\,{\sc i} \ near the core group, calculated from the separation of tidal features and galaxies performed in \S\ref{sec:HIdata}. The black vertical dashed lines show the optical redshifts of each galaxy as given in Table \ref{tab:optprops}. The dot-dashed and dotted vertical lines show the central and extreme velocities from the rotation curve measurements of \citet{Rubin+1991} in red and \citet{Mendes+1998} in blue. These measurements are only available for the four core galaxies, also \citet{Rubin+1991} does not specify $V_\mathrm{max}$ values for HCG 16b or c due to the peculiar shape of their rotation curves. The left panels have vertical scales that go to 11 mJy, whereas the scales on the right are a factor of approximately 6 higher (60 mJy).}
\label{fig:spectra}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Fig6-HCG16_tidal_spectra_indv.pdf}
\caption{H\,{\sc i} \ spectral profiles of each of the 8 separate tidal features in the group, calculated from the separation of tidal features and galaxies performed in \S\ref{sec:HIdata}. Each row has a different vertical scale.}
\label{fig:tidal_spectra}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Fig7-HCG16_pv_plot.pdf}
\includegraphics[width=\textwidth]{figures/Fig7-HCG16_pv_trace.pdf}
\caption{\textit{Top}: A segmented position--velocity diagram showing the spatially and kinematically continuous H\,{\sc i} \ emission spanning HCG 16. The vertical dashed lines show the locations of the nodes making up the segmented slice through the data cube. Galaxies are labelled in red and tidal features in blue. Note that the noise in this plot increases significantly near the left edge as this region is approaching the primary beam edge. \textit{Bottom}: The left panel shows the position--velocity slice as a segmented red line on top of the moment 0 map (using the standard 3$\sigma$ threshold mask as in the left panel of Figure \ref{fig:mom0}) and the right panel shows the same line but overlaid on the DECaLS $r$-band image.}
\label{fig:pv_plot}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[height=75mm]{figures/Fig8-HCG16a_mom0.pdf}
\includegraphics[height=75mm]{figures/Fig8-HCG16a_mom1_cont.pdf}
\caption{\textit{Left}: The moment 0 map of HCG 16a. The white and black stars indicate the optical centres of HCG 16a and b respectively, and the black ellipse shows the beam. \textit{Right:} Moment 1 contours for HCG 16a overlaid on the DECaLS $grz$ image of HCG 16a and b. The green lines are separated by 20 \kms.}
\label{fig:HCG16a_moms}
\end{figure*}
\noindent
and extended features) has an H\,{\sc i} \ mass of $(3.39 \pm 0.29) \times 10^{10}$ \Msol \ (accounting only for distance uncertainty) and a global H\,{\sc i} \ deficiency of $-0.12 \pm 0.09$.\footnote{If we were to assume that the HIPASS flux is correct and that ours is an overestimate then the H\,{\sc i} \ deficiency would be $-0.08 \pm 0.09$.} Thus the group as a whole does not appear to be missing H\,{\sc i} \ gas (it is marginally H\,{\sc i}-rich), but the galaxy pair HCG 16a and b has likely lost the vast majority of its original H\,{\sc i} \ content. The probable fate of this lost gas is discussed in the following sections.
Here we note that had we used the \citet{Haynes+Giovanelli1984} scaling relation instead of the updated \citet{Jones+2018} relation, then we would have found the global H\,{\sc i} \ deficiency to be 0.05, which is approximately 2$\sigma$ higher (more deficient). The differences between these relations are discussed in detail in \citet{Jones+2018}, but in this case the most important point is that the uncertainties in the measurements of $L_{\mathrm{B}}$ for the reference sample used to calibrate the relation result in the \citet{Haynes+Giovanelli1984} $M_{\mathrm{HI}}$--$L_{\mathrm{B}}$ scaling relation overestimating the H\,{\sc i} \ deficiency of galaxies.
The uncertainty estimates for the H\,{\sc i} \ deficiencies in Table \ref{tab:HIprops} are dominated by the scatter in the $M_{\mathrm{HI}}$--$L_{\mathrm{B}}$ scaling relation, with small additional contributions from the uncertainty in $L_{\mathrm{B}}$ and in the group distance, $55.2 \pm 3.3$ Mpc, which was estimated using the \citet{Mould+2000} local flow model (with corrections for flow towards the Virgo cluster, the Great Attractor, and the Shapley supercluster) as described in \citet{Jones+2018}. The uncertainties in the masses of each component are challenging to estimate because these values strongly depend on where subjective boundaries are drawn between tidal gas and gas in a galactic disc. As H\,{\sc i} \ deficiency is the quantity of interest for the galaxies, and the scatter in the scaling relation (0.2 dex) results in an uncertainty of 40--60\% that is expected to dominate the error budget, we do not attempt to estimate the H\,{\sc i} \ mass uncertainty due to the subjective measurements. The same issue applies to the tidal features, but is more severe as they are mostly made up of lower column density gas. Thus their individual mass measurements should be treated with caution.
\subsubsection{HCG 16a}
\label{sec:HCG16a}
HCG 16a is an Sab galaxy that has been classified as an active star-forming galaxy in the IR \citep{Zucker+2016} and hosts an AGN (Seyfert 2) that has been confirmed with both optical line ratios \citep{Veron-Cetty+Veron2010,Martinez+2010} and X-rays \citep{Turner+2001,Oda+2018}. The galaxy also hosts a ring of SF that is clearly visible in the IR and UV \citep{Tzanavaris+2010,Bitsakis+2014}. In terms of its H\,{\sc i}, HCG 16a is the second most deficient galaxy in the group, apparently missing approximately $5.5 \times 10^{9}$ \Msol \ of H\,{\sc i}.
The discs of HCG 16a and b overlap in the plane of the sky which complicates the separation of their H\,{\sc i} \ emission. However, in velocity the gas that is co-spatial with HCG 16a forms a clear, continuous, and steep gradient, as would be expected from an inclined disc galaxy. This is most readily seen in the \href{http://amiga.iaa.es/FCKeditor/UserFiles/X3D/HCG16/HCG16.html}{X3D interactive figure} which accompanies this paper. The emission associated with HCG 16b forms a small, offset clump at approximately the same velocity as the lower velocity "horn" of the HCG 16a profile (Figure \ref{fig:spectra}). The two galaxies are connected in H\,{\sc i} \ by a faint bridge of emission, visible in the channel maps (available in the electronic version) between velocities 3858 and 4006 \kms. Given the size of the beam and the fact that the optical discs of HCG 16a and b overlap, it is not possible to reliably assign the emission to either galaxy. Therefore we simply split it approximately half-way between the two sources. The uncertainty in where this bridge should be split introduces a negligible error to the H\,{\sc i} \ properties of HCG 16a as the entirety of the emission that we attribute to HCG 16b would only increase the H\,{\sc i} \ mass of HCG 16a by 0.13 dex.
When viewed projected in the plane of the sky there is an H\,{\sc i} \ tail which connects HCG 16a to HCG 16c (the NW tail, see Figure \ref{fig:mom0}, left panel). There is also an accompanying stellar tail \citep[identified by][]{Rubin+1991} that extends in the same direction away from HCG 16a (clearly visible in Figures \ref{fig:optim} \& \ref{fig:pv_plot}). However, when the H\,{\sc i} \ data cube is studied in 3D it is clear that the H\,{\sc i} \ tail does not form a kinematic connection with HCG 16a and the apparent connection is the result of a projection effect. In Figure \ref{fig:pv_plot} it can be seen that as the NW tail is traced away from HCG 16c its velocity decreases from $\sim$3800 \kms \ to $\sim$3500 \kms, whereas HCG 16a's H\,{\sc i} \ profile covers the approximate range 3850--4300 \kms \ (Figures \ref{fig:spectra} \& \ref{fig:HCG16a_moms}, right panel). This tail then ends in a dense clump, which may have formed a tidal dwarf galaxy (TDG), discussed further in \S\ref{sec:TDGs}.
Figure \ref{fig:HCG16a_moms} shows the 0th and 1st moments of the H\,{\sc i} \ emission. From these two maps it can be seen that the high column density gas in the immediate vicinity of the optical disc of HCG 16a is relatively undisturbed, despite the large amount of missing H\,{\sc i}. The moment 0 map has a regular oval shape and is centrally peaked (almost coincident with the optical centre). While the velocity field appears quite regular in the centre, the line of nodes forms an `S' shape in the outer regions, likely indicating the presence of a warp in the disc \citep[e.g.][]{Bosma1978}. Given the stellar tails in the optical and the clear on-going interaction with HCG 16b such an asymmetry is not unexpected.
\citet{Mendes+1998} also found that the rotation in the inner region of HCG 16a is very regular, rising quickly and becoming flat well within one of our beam widths. The velocity extent of our H\,{\sc i} \ spectrum (Figure \ref{fig:spectra}) approximately agrees with that implied by their H$\alpha$ rotation curve. However, \citet{Rubin+1991} found the conflicting result that the H$\alpha$ rotation has an anomalous structure, declining on one side of the galaxy at large radii. As our H\,{\sc i} \ map extends to much larger radii we would expect to see a continuation of this in our data, but we do not. Therefore, the H\,{\sc i} \ velocity field appears more consistent with the rotation curve of \citet{Mendes+1998}. Having said this, the major axis position angle given in that work is almost exactly aligned N-S, whereas in H\,{\sc i} \ it is clearly rotated (counter-clockwise). We expect this is due to a combination of beam smearing and disturbances in the outer regions of the disc.
\subsubsection{HCG 16b}
\label{sec:HCG16b}
\begin{figure*}
\centering
\includegraphics[height=75mm]{figures/Fig9-HCG16b_mom0.pdf}
\includegraphics[height=75mm]{figures/Fig9-HCG16b_mom1_cont.pdf}
\caption{\textit{Left}: The moment 0 map of HCG 16b. The black stars indicate the optical centres of HCG 16a and b, and the black ellipse shows the beam. \textit{Right:} Moment 1 contours for HCG 16b overlaid on the DECaLS $grz$ image of HCG 16a and b. The green lines are separated by 20 \kms. The signature of a small, consistent velocity gradient across the optical disc is visible, but the H\,{\sc i} \ distribution is strongly off centre and disturbed.}
\label{fig:HCG16b_moms}
\end{figure*}
HCG 16b is an Sa galaxy and the most H\,{\sc i} \ deficient galaxy in the group, probably having lost about $4.6 \times 10^{9}$ \Msol \ of H\,{\sc i}.\footnote{It is correct that this galaxy has lost less H\,{\sc i} \ than HCG 16a, but is more H\,{\sc i} \ deficient. This is because H\,{\sc i} \ deficiency is defined as the logarithmic decrement, not the additive decrement.} The galaxy has a central source identified as an AGN with X-rays and line ratios \citep{Turner+2001,Martinez+2010,Oda+2018}, it has also been classified as a LINER \citep{Veron-Cetty+Veron2010}. This is the only galaxy in the group that was classified as quiescent in terms of its WISE IR colours \citep{Zucker+2016}.
As described above, HCG 16b has an H\,{\sc i} \ bridge connecting to the much more H\,{\sc i} \ massive HCG 16a. In the absence of more information the faint bridge is simply split approximately half-way between the centres of emission of each object in each channel. One may argue that this bridge should be classified as a separate tidal feature, however, given the fact that the outer regions of the optical discs of the two galaxies blend together and the resolution of the H\,{\sc i} \ cube, this is not a practical suggestion. While it is difficult to quantify the resulting uncertainty in the H\,{\sc i} \ mass of HCG 16b due to the simplistic separation, it is likely that the procedure assigns too much emission to HCG 16b rather than HCG 16a, because the latter is both more optically luminous and has a higher H\,{\sc i} \ mass, thus more of the bridge is probably gravitationally bound to it. Therefore, the key result that HCG 16b has lost around 90\% or more of its expected H\,{\sc i} \ content would be unchanged.
Figure \ref{fig:HCG16b_moms} shows that the little remaining H\,{\sc i} \ in HCG 16b is very off centre compared to its optical disc. The velocity field shows a small, but clear velocity gradient aligned with the major axis of the optical disc, suggesting that at least some of the remaining H\,{\sc i} \ is likely still rotating with the optical disc. However, this gradient is disrupted on the eastern side of the galaxy where there is a gas extension to the North-East that connects to emission around the NW tail and that around HCG 16a.
For HCG 16b there is dramatic disagreement between the rotation curves of \citet{Rubin+1991} and \citet{Mendes+1998}, with them even disagreeing on the direction of rotation. The direction that we see in H\,{\sc i} \ agrees with that of \citet{Rubin+1991}, with redshifted emission occurring to the NE of the galaxy centre and blueshifted to the SW. If we take the \citet{Rubin+1991} systemic velocity then it appears that we are mostly detecting gas on the redshifted side of the disc, with little contribution from the blueshifted side. This would also imply that the entire velocity gradient seen in \citet{Mendes+1998} is on the redshifted side of the galaxy.
\subsubsection{HCG 16c}
\label{sec:HCG16c}
\begin{figure*}
\centering
\includegraphics[height=75mm]{figures/Fig10-HCG16c_mom0.pdf}
\includegraphics[height=75mm]{figures/Fig10-HCG16c_mom1_cont.pdf}
\caption{\textit{Left}: The moment 0 map of HCG 16c. The white and black stars indicate the optical centres of HCG 16c and d respectively, and the black ellipse shows the beam. \textit{Right:} Moment 1 contours for HCG 16c overlaid on the DECaLS $grz$ image of HCG 16c and d. The green lines are separated by 20 \kms.}
\label{fig:HCG16c_moms}
\end{figure*}
HCG 16c is classified as an S0-a galaxy and a LIRG (Luminous Infrared Galaxy). It is also the most actively star-forming galaxy in the group. It has been classified as both a pure starburst based on X-ray observations \citep{Turner+2001} and a composite object, with both AGN and SF activity, using optical line ratios \citep{Martinez+2010}. The overall morphology of the galaxy is very reminiscent of M82, with the nuclear starburst driving a bipolar galactic wind that was studied in detail by \citet{Vogt+2013}.
HCG 16c is apparently involved in a number of ongoing interactions: the NW tail extends from HCG 16c towards HCG 16a, the NE tail extends from HCG 16c away from the rest of the group, and there is an H\,{\sc i} \ bridge connecting HCG 16c and d, all of which can be seen in Figure \ref{fig:mom0}, the \href{http://amiga.iaa.es/FCKeditor/UserFiles/X3D/HCG16/HCG16.html}{X3D plot}, and in the channel maps (available in the electronic version). All three of these features contain more than $10^{9}$ \Msol \ of H\,{\sc i}, that is, they are comparable in H\,{\sc i} \ mass to the galaxy itself, yet, considering only the H\,{\sc i} \ emission in the region we identified as the H\,{\sc i} \ disc of HCG 16c, the galaxy is not found to be H\,{\sc i} \ deficient. This strongly suggests that the majority of the neutral gas surrounding HCG 16c likely originated elsewhere (discussed further in \S\ref{sec:discuss}).
We trace the NW tail from the central velocity and position of HCG 16c extending towards HCG 16a in the plane of the sky, but away from it in velocity space, ending when separated from HCG 16a by over 300 \kms. The NE tail appears to originate from the receding edge of HCG 16c and extends out to the NE, but then curves back towards HCG 16a, overlapping both spatially and in velocity with the NW tail. In this region there is a great deal of extended emission of uncertain origin and it is impossible to reliably separate the two features anywhere except at their bases where the column density is high. Therefore, we assign emission at different velocities to the feature with whose bulk it is mostly co-spatial (in the plane of the sky). This means the fluxes of these two tails should be treated with great caution, however the measurement of the overall flux of the extended emission is not impacted by this issue. Finally, there is the cd bridge. This is predominantly made up of the high column density H\,{\sc i} \ emission that forms a bridge between galaxies HCG 16c and d; however, all the remaining emission in the vicinity of galaxies HCG 16c and d that had not been assigned to any of the galaxies or other features listed above was also assigned to the cd bridge, and thus it is more poorly defined than the other extended features we describe.
Despite the evidence for interactions listed above, the centre of the H\,{\sc i} \ disc of HCG 16c still has a consistent velocity gradient across it, indicating that it is rotating and has not been completely disrupted (Figure \ref{fig:HCG16c_moms}). However, the H\,{\sc i} \ distribution is clearly extended (asymmetrically) in the direction of HCG 16d and the outer regions of the velocity field also trail off in that direction, indicating that there are major disturbances in the outer parts of the disc.
The \citet{Rubin+1991} and \citet{Mendes+1998} rotation curves are again quite different for this source, but this is because the spectral slit that \citet{Rubin+1991} used was aligned with the R-band photometric major axis, which is rotated about 40$^\circ$ with respect to the kinematic axis identified by \citet{Mendes+1998}. The kinematic major axis of the H\,{\sc i} \ data (Figure \ref{fig:HCG16c_moms}, right) appears approximately consistent with this position angle (120$^\circ$) obtained from the H$\alpha$ line. The \citet{Mendes+1998} velocity field is more or less regular, but the range of velocities seen is considerably larger than we see in H\,{\sc i}. This may indicate that the rotation curve declines beyond the central region that they measure, or that their rotation curve was contaminated by H$\alpha$ associated with the central outflow, as suggested by \citet{Vogt+2013}.
Figure \ref{fig:pv_plot} shows the peculiar kinematics of the NW tail. Rather than connecting to the outer edge of the disc of HCG 16c, as is typical for tidal tails (in fact this is visible on the opposite side of the same plot, around NGC 848), the NW tail appears to intersect HCG 16c at its central velocity. This behaviour leads us to consider 3 competing hypotheses which we will discuss in the following section: 1) the gas is being accreted directly to the central regions of the galaxy and providing fuel for the nuclear starburst, 2) the gas is being ejected from the central regions of the galaxy by the nuclear starburst, or 3) the entire feature is superposed with HCG 16c and that the coinciding velocities do not correspond to a spatial connection.
\subsubsection{HCG 16d}
\label{sec:HCG16d}
\begin{figure*}
\centering
\includegraphics[height=75mm]{figures/Fig11-HCG16d_mom0.pdf}
\includegraphics[height=75mm]{figures/Fig11-HCG16d_noabs_mom1_cont.pdf}
\caption{\textit{Left}: The moment 0 map of HCG 16d. The white and black stars indicate the optical centres of HCG 16d and c respectively, and the black ellipse shows the beam. A central depression is clearly visible around the white star marking the centre of HCG 16d. \textit{Right:} Moment 1 contours for HCG 16d overlaid on the DECaLS $grz$ image of HCG 16d and c. The central region of the galaxy is removed as the H\,{\sc i} \ absorption feature would contaminate the moment 1 map there. The green lines are separated by 20 \kms. While the velocity field is highly irregular there is a slight gradient roughly aligned with the \textit{minor} axis of the optical disc.}
\label{fig:HCG16d_moms}
\end{figure*}
\begin{figure}
\includegraphics[width=\columnwidth]{figures/Fig12-HCG16d_absorption_spec.pdf}
\caption{The H\,{\sc i} \ spectral profile of the centre of HCG 16d. The dashed vertical line shows the systemic velocity as determined from optical lines. The profile was extracted from the robust=0 cube (higher resolution) by placing a 2D Gaussian window, with the shape and orientation of the synthesized beam, at the position of the optical centre of the galaxy.}
\label{fig:absorp_spec}
\end{figure}
Similarly to HCG 16c, HCG 16d is a LIRG, classified as S0-a, and is morphologically similar to M82. It contains a central source classified as both a LINER and as a LINER/Seyfert 2 double nucleus \citep{Ribeiro+1996,deCarvalho+1999} with optical line ratios, or as an AGN in X-ray observations with XMM \citep{Turner+2001}. However, HST observations show that the suggested double nucleus is instead a group of star clusters \citep{Konstantopoulos+2013} and Chandra X-ray observations indicate that the hard X-rays might originate solely from X-ray binaries, not an AGN \citep{OSullivan+2014a,Oda+2018}, while IFU observations suggest that the LINER line ratios in the optical may arise from shock excitation driven by the on-going starburst \citep{Rich+2010}.
Upon first inspection the integrated H\,{\sc i} \ emission (Figure \ref{fig:HCG16d_moms}) appears almost like a face-on galaxy with a central H\,{\sc i} \ hole; however, this is quite misleading and does not agree with the optical image, which shows a highly inclined disc. The apparent central hole is instead H\,{\sc i} \ absorption in front of a central continuum source. The H\,{\sc i} \ depression is the same shape as the beam in both the robust=2 and robust=0 cubes (i.e. at two different resolutions) and the spectral profile at that position switches from a flux density of approximately 1.5 mJy to -1.5 mJy at the central velocity of the galaxy (Figure \ref{fig:absorp_spec}).
This absorption feature is redshifted relative to the central velocity (from optical spectra) by about 100 \kms. A regular rotating disc would form a symmetric absorption feature about the central velocity \citep{Morganti+2018}, so the fact this feature is redshifted suggests it could be gas falling towards the centre and fuelling the starburst event. However, a 100 \kms \ velocity shift is comfortably within what might be expected from orbiting gas in a galaxy of this size. Given the resolution of the data it is not possible to distinguish between these two possibilities, or indeed that the absorption may be due to an intervening clump of stripped gas in the group.
The absorption also means that the integrated flux for HCG 16d will be underestimated. To estimate how much the integrated flux is reduced we made a crude linear interpolation across the absorption feature between 3879 and 4006 \kms. This gives the missing flux as 0.1 Jy \kms, which is barely more than 1\% of the total H\,{\sc i} \ emission flux of HCG 16d. Indeed, the absorption feature is not apparent in its integrated spectrum (Figure \ref{fig:spectra}). Therefore, we ignored this feature for our analysis of the H\,{\sc i} \ deficiency.
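The correction itself is straightforward; the sketch below illustrates the kind of interpolation used (a hypothetical illustration with placeholder names, assuming \texttt{vel} and \texttt{flux} hold the integrated spectrum of HCG 16d sorted in velocity).
\begin{verbatim}
# Hypothetical sketch: linearly interpolate across the absorbed channels and
# integrate the difference to estimate the missing emission.
import numpy as np

def absorbed_flux(vel, flux, v_lo=3879.0, v_hi=4006.0):
    """Flux [Jy km/s] filled in by interpolating across the absorption feature."""
    inside = (vel > v_lo) & (vel < v_hi)
    interp = np.interp(vel[inside], vel[~inside], flux[~inside])
    return np.trapz(interp - flux[inside], vel[inside])  # ~0.1 Jy km/s here
\end{verbatim}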
When considering the kinematics of the gas in HCG 16d there is an apparent contradiction between H\,{\sc i} \ and H$\alpha$. In H\,{\sc i} \ there is only a weak velocity gradient in approximately the North-South direction, almost perpendicular to the optical major axis (Figure \ref{fig:HCG16d_moms}). In contrast, the existing H$\alpha$ rotation curve of the inner regions of the galaxy \citep{Rubin+1991,Mendes+1998,Rich+2010} shows significant rotation ($V_{\mathrm{rot}} \approx 100$ \kms) and has its major axis almost aligned with the optical disc, that is, approximately perpendicular to the slight gradient in the H\,{\sc i} \ emission. The full extent of the H$\alpha$ velocity field in \citet{Mendes+1998} is about 15\arcsec, less than the beam size of the H\,{\sc i} \ data, thus it is possible that the gas could undergo a major kinematic warp (or other distortion) beyond this region, meaning the H\,{\sc i} \ and H$\alpha$ results are not necessarily in conflict. However, as the H$\alpha$ kinematic major axis is aligned with the optical disc of the galaxy (which extends to much larger radii) this seems unlikely.
The H\,{\sc i} \ data thus demonstrates two key points: a) that there is a continuum source at the centre of HCG 16d, although the H\,{\sc i} \ data offers no information on its nature, and b) the H\,{\sc i} \ gas appears to be completely kinematically disconnected from the optical disc of the galaxy. These points raise the question "is the H\,{\sc i} \ truly associated with the galaxy or just superposed on it?", which we will discuss in the following section.
There are also a number of tidal features in the vicinity of HCG 16d, the most notable being the SE tail that extends for about 10\arcmin \ (160 kpc) towards NGC 848. Although this feature connects HCG 16d and NGC 848 in both velocity and the plane of the sky (Figure \ref{fig:pv_plot}) there is no evidence for an accompanying optical tail emanating from HCG 16d. Most of the emission from the tail is contained within only 4 channels (a range of 84 \kms), although there are some clumps of emission which we have assigned to this feature near HCG 16d that extend further in velocity space such that the entire profile of the SE tail covers 10 channels (Figure \ref{fig:tidal_spectra}). The small velocity range along the length of the tail likely indicates that it is almost aligned with the plane of the sky. HCG 16d is also accompanied by two small, dense clumps to the East and South, at approximately the same velocity as the galaxy. Each is several times $10^8$ \Msol \ in H\,{\sc i} \ mass. These clumps are discussed further in \S\ref{sec:TDGs}. There is also a high H\,{\sc i} \ column density bridge connecting HCG 16d and c. The complexity of this region of the group makes separation of these features highly subjective and thus their H\,{\sc i} \ properties should be treated with caution.
\subsubsection{NGC 848}
\label{sec:NGC848}
\begin{figure*}
\centering
\includegraphics[height=75mm]{figures/Fig13-NGC848_mom0.pdf}
\includegraphics[height=75mm]{figures/Fig13-NGC848_mom1_cont.pdf}
\caption{\textit{Left}: The moment 0 map of NGC 848. The white star indicates the optical centre, and the black ellipse shows the beam. \textit{Right:} Moment 1 contours for NGC 848 overlaid on the DECaLS $grz$ image. The green lines are separated by 20 \kms.}
\label{fig:NGC848_moms}
\end{figure*}
NGC 848 is a barred spiral galaxy (SBab) that is separated from the rest of the group by about 10\arcmin, or around 160 kpc, and is another galaxy in the group undergoing a starburst \citep{Ribeiro+1996,deCarvalho+1999}. Although this galaxy was not included in the original Hickson catalogue, once spectra were obtained of the galaxies it was immediately noticed that it was at the same redshift as the core group (Table \ref{tab:optprops}) and therefore likely associated \citep{Ribeiro+1996}. However, it was not until there were H\,{\sc i} \ interferometric observations that the physical connection, in the form of a 160 kpc tidal tail, was discovered \citep{Verdes-Montenegro+2001}.
Despite its connection to this enormous tidal feature (visible in Figure \ref{fig:mom0}) the galaxy itself has an entirely normal global H\,{\sc i} \ content (Table \ref{tab:HIprops}). This alone indicates that the gas in the tail probably did not (for the most part) originate from NGC 848, but from somewhere else in the group, especially as the total H\,{\sc i} \ mass in the tail is approximately the same as the H\,{\sc i} \ mass of NGC 848. There is another tail connected to NGC 848, which appears to originate from its receding side and extend to the South-East; we refer to this as the NGC 848S tail (Figure \ref{fig:mom0}). At least part of this feature is included in the \texttt{SoFiA} mask; however, it may extend much further, looping back around towards the core group. This possible extension is at low signal-to-noise and (if real) the emission in adjacent channels also shifts in position, such that smoothing spatially and in velocity does little to improve its significance. Some parts of this faint feature are visible in the left panel of Figure \ref{fig:mom0} (as well as the \href{http://amiga.iaa.es/FCKeditor/UserFiles/X3D/HCG16/HCG16.html}{X3D plot}) slightly to the North of NGC 848 and the SE tail. We include this extended loop in Table \ref{tab:HIprops} and Figure \ref{fig:tidal_spectra}, but it is not included in the total integrated emission of the group as it does not have high enough significance to be included in the \texttt{SoFiA} mask.
The H\,{\sc i} \ distribution in the galaxy itself (Figure \ref{fig:NGC848_moms}) appears mostly regular, with a centroid coincident with the optical centre and a mostly uniform velocity field. However, the kinematic major and minor axes are not quite perpendicular, which is probably due to the presence of the bar \citep[e.g.][]{Bosma1981}. The H\,{\sc i} \ gas is also slightly extended towards the NW and SE, i.e. in the direction of the tidal tail, while in the optical image the spiral arms appear very loosely wrapped (Figure \ref{fig:NGC848_moms}), probably indicating that the outer disc is beginning to be unbound.
\subsubsection{PGC 8210}
\label{sec:PGC8210}
\begin{figure*}
\centering
\includegraphics[height=75mm]{figures/Fig14-PGC8210_mom0.pdf}
\includegraphics[height=75mm]{figures/Fig14-PGC8210_mom1_cont.pdf}
\caption{\textit{Left}: The moment 0 map of PGC 8210. The white star indicates the optical centre, and the black ellipse shows the beam. \textit{Right:} Moment 1 contours for PGC 8210 overlaid on the DECaLS $grz$ image. The green lines are separated by 20 \kms.}
\label{fig:PGC8210_moms}
\end{figure*}
The final galaxy detected within the primary beam of the VLA observations is PGC 8210. This is another spiral galaxy, but it is considerably smaller and less massive than the core members of the group. Its B-band luminosity is almost 1 dex lower than those of the galaxies in the core group and its H\,{\sc i} \ mass is considerably less than that of all but HCG 16b, which is extremely H\,{\sc i} \ deficient. Because the velocity of PGC 8210 is coincident with that of the group it is assumed to be part of the same structure, or at least about to join it. However, as shown in Figure \ref{fig:PGC8210_moms} the H\,{\sc i} \ distribution appears undisturbed (with the caveat that it is hardly larger than the beam), as does its optical disc, and its total H\,{\sc i} \ content is entirely normal. Therefore, it seems very unlikely that this galaxy has had any meaningful interaction with the group to date.
\subsection{Tidal dwarf galaxy candidates}
\label{sec:TDGs}
\begin{figure*}
\centering
\includegraphics[width=0.66\columnwidth]{figures/Fig15-NW_clump_mom0_cont.pdf}
\includegraphics[width=0.66\columnwidth]{figures/Fig15-E_clump_mom0_cont.pdf}
\includegraphics[width=0.66\columnwidth]{figures/Fig15-S_clump_mom0_cont.pdf}
\includegraphics[width=0.66\columnwidth]{figures/Fig15-NW_clump_mom1.pdf}
\includegraphics[width=0.66\columnwidth]{figures/Fig15-E_clump_mom1.pdf}
\includegraphics[width=0.66\columnwidth]{figures/Fig15-S_clump_mom1.pdf}
\caption{\textit{Top}: The moment 0 maps of the NW, E and S clumps (left to right) overlaid on the DECaLS $grz$ image. Contour levels: 0.98, 1.47, 1.96, 2.45 $\times 10^{20} \; \mathrm{cm}^{-2}$. \textit{Bottom:} Moment 1 maps of the same features with the beam shown as a black ellipse in the corner.}
\label{fig:clump_moms}
\end{figure*}
We have identified three dense clumps of H\,{\sc i} \ emission in the group that are not associated with any of the main galaxies. The first two are listed in Table \ref{tab:HIprops}, the E and S clumps in the vicinity of HCG 16d, and the third is a dense clump at the north-western end of the NW tail, which is clearly seen as an excess at low velocities in the spectrum of the NW tail (Figure \ref{fig:tidal_spectra}). These could be candidate LSB (low surface brightness) dwarf galaxies or TDGs, both of which can occur in galaxy groups and can be rich in H\,{\sc i} \ \citep{Lee-Waddell+2012,Lee-Waddell+2016,Roman+2017,Spekkens+2018}, or transient clumps of stripped gas. Their moment 0 and moment 1 maps are shown in Figure \ref{fig:clump_moms} (NW, E, and S clumps, left to right).
All three H\,{\sc i} \ clumps have masses well above $10^{8}$ \Msol, the minimum mass thought to be needed to form a long-lived TDG \citep{Bournaud+2006}. Their masses are 3.9, 5.2, and 9.0 $\times 10^{8}$ \Msol, respectively (going left to right in Figure \ref{fig:clump_moms}). While the S clump is the most massive (in H\,{\sc i}) it is also the most spatially extended, has the lowest peak column density, shows little evidence for rotation in its moment 1 map, and has no apparent optical counterpart. Therefore, despite its large mass it seems unlikely that the S clump is a long-lived TDG or a LSB galaxy, and instead is probably a transient feature associated with the SE tail.
The E clump is in between the other two clumps in terms of H\,{\sc i} \ mass and column density, however, its moment 1 map shows a clear velocity gradient across it indicating that it may be rotating. Taking very approximately the H\,{\sc i} \ diameter as 1.5\arcmin \ and the rotation velocity as 25 \kms, the dynamical mass (assuming equilibrium) would be $1.7 \times 10^{9}$ \Msol, which is about 3 times its measured H\,{\sc i} \ mass. This simple estimate implies that there may need to be some dark matter component to explain the velocity gradient, if it is due to rotation. However, there are also a number of reasons why this estimate might be incorrect. For example, the assumption of dynamical equilibrium is unlikely to be correct and even if it were, the source is only 3 beams across, which means both its velocity field and spatial extent are likely to be heavily affected by beam smearing.
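For transparency, this estimate can be reproduced with a few lines of Python. The snippet below is purely illustrative: it assumes the adopted distance of 55.2 Mpc, a system in dynamical equilibrium, and takes the radius as half the quoted H\,{\sc i} \ diameter.
\begin{verbatim}
import numpy as np

G = 4.301e-6                           # gravitational constant [kpc (km/s)^2 / Msun]
D = 55.2e3                             # adopted distance to HCG 16 [kpc]

theta = (1.5 / 60.0) * np.pi / 180.0   # 1.5 arcmin H I diameter in radians
r = 0.5 * theta * D                    # radius [kpc], ~12 kpc
v_rot = 25.0                           # assumed rotation velocity [km/s]

M_dyn = v_rot**2 * r / G               # ~1.7e9 Msun
print(M_dyn, M_dyn / 5.2e8)            # dynamical mass and ratio to the E clump H I mass
\end{verbatim}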
The NW clump has the smallest H\,{\sc i} \ mass of the three clumps, but the highest column density, peaking at over $2.45 \times 10^{20} \; \mathrm{cm}^{-2}$. The moment 1 map does not show clear evidence for rotation, however, it is barely 2 beams across and any potential signature of rotation may be confused with gas in the NW tail. TDGs are expected to be found at the tip of tidal tails as this clump is \citep[e.g.][]{Bournaud+2004} and unlike the other two clumps there is a clear, but very LSB optical counterpart visible in the DECaLS image at $\mathrm{RA = 2h\;9m\;26s}$, $\mathrm{Dec = -10^\circ}$ 7\arcmin \ 9\arcsec \ (top left panel of Figure \ref{fig:clump_moms} and right panel of Figure \ref{fig:HCG16a_moms}) and it is also just visible in GALEX \citep[Galaxy Evolution Explorer,][]{Martin+2005}. The blue colour of this optical counterpart suggests that it is the result of in situ SF, not old stars which might have been stripped along with the gas. This strengthens the case for this candidate TDG, but there remains the possibility that this is a gas-rich LSB dwarf which is having its H\,{\sc i} \ stripped, rather than a new galaxy that has formed as a result of the tidal interactions in the group. While it is not possible, with the current information, to rule out either of these hypotheses, we favour the TDG interpretation because this dwarf appears at the end of an enormous tidal feature and it seems implausible that either such a small galaxy stripped this gas from the other (much more massive) group members, or that it could have originally been sufficiently gas-rich to be the origin of the observed tidal gas.
\section{Discussion}
\label{sec:discuss}
\subsection{Gas consumption timescales}
\begin{table*}
\centering
\caption{Gas consumption times of HCG 16 core galaxies}
\label{tab:gas_time}
\begin{tabular}{lccccc}
\hline\hline
Object & $\log{M_{\ast}/\mathrm{M_\odot}}$ & $\log{M_{\mathrm{HI}}/\mathrm{M_\odot}}$ & $\log{M_{\mathrm{H_2}}/\mathrm{M_\odot}}$ & SFR/$\mathrm{M_\odot}\,\mathrm{yr}^{-1}$ & $T_\mathrm{{con}}$/Gyr \\ \hline
HCG 16a & 11.05 & 9.15 & 10.05 & 4.6 & 3.5 \\
HCG 16b & 10.84 & 8.70 & 9.17 & 0.46 & 5.6 \\
HCG 16c & 10.86 & 9.57 & 9.78 & 14.0 & 0.9 \\
HCG 16d & 10.61 & 9.73 & 10.01 & 16.7 & 1.2 \\ \hline
\end{tabular}
\tablefoot{Columns: (1) object name, (2) logarithm of stellar mass estimated from IR photometry \citep{Lenkic+2016}, (3) logarithm of H\,{\sc i} \ mass, (4) logarithm of the molecular hydrogen mass estimated from the CO observations compiled in \citet{Martinez-Badenes+2012}, (5) star formation rate estimated from the combination of UV and IR fluxes \citep{Lenkic+2016}, and (6) the gas consumption timescale taken to be $1.3(M_{\mathrm{HI}} + M_{\mathrm{H_2}})/\mathrm{SFR}$.}
\end{table*}
One way that a galaxy can become gas deficient is by converting its gas into stars without replenishment. Ideally the detailed SFH (star formation history) of a galaxy would be compared to its present day gas content, but in the absence of a SFH the gas consumption time given by the current SFR (star formation rate) can be used instead. If the galaxy is in the middle of converting its gas reservoir into stars then this should be apparent.
To estimate the gas consumption timescales of the cold gas in the galaxies we have taken stellar masses and SFRs from \citet{Lenkic+2016}, and molecular gas mass measurements from \citet{Martinez-Badenes+2012}. \citet{Lenkic+2016} use Spitzer IRAC (Infrared Array Camera) 3.6 and 4.5 $\mu$m photometry to estimate stellar masses based on the prescription of \citet{Eskew+2012}. To estimate SFRs these authors use the UV (2030 \AA, from Swift) and IR (24 $\mu$m) luminosities as proxies for the unobscured and obscured (re-radiated) emission due to SF, and combine the two to estimate the total SFR, which in theory allows them to avoid correcting for internal extinction. \citet{Martinez-Badenes+2012} use CO data from \citet{Boselli+1996,Leon+1998,Verdes-Montenegro+1998} to estimate molecular gas masses. They use a standard constant value of the CO-to-$\mathrm{H_{2}}$ conversion factor, $X = N_\mathrm{H_{2}}/I_\mathrm{CO} = 2 \times 10^{20} \; \mathrm{cm^{-2} \, K^{-1}\,km^{-1}\,s}$. As all the CO observations are single pointings they also extrapolate for emission beyond the primary beam by assuming a pure exponential disc with a scale length of $0.2 r_{25}$, where $r_{25}$ is the major optical 25 mag arcsec$^{-2}$ isophotal radius.
Table \ref{tab:gas_time} shows the gas consumption timescales of the four core galaxies of HCG 16, estimated by combining our measurements of the H\,{\sc i} \ masses with measurements of the H$_2$ masses \citep{Martinez-Badenes+2012} and the SFRs \citep{Lenkic+2016}. The total hydrogen mass is multiplied by 1.3 to account for all other elements, giving $T_{\mathrm{con}} = 1.3(M_{\mathrm{HI}} + M_{\mathrm{H_2}})/\mathrm{SFR}$. The galaxies are roughly split into two categories in terms of their gas consumption time, those that at their current SFR will exhaust their existing gas reservoirs in about a Gyr (HCG 16c and d) and those which will take several Gyr to do the same (HCG 16a and b).
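As a simple cross-check, the timescales in Table \ref{tab:gas_time} follow directly from the tabulated quantities; the illustrative snippet below reproduces them to within the rounding of the inputs.
\begin{verbatim}
# (log M_HI, log M_H2 [Msun], SFR [Msun/yr]) copied from the table above
galaxies = {
    'HCG 16a': (9.15, 10.05, 4.6),
    'HCG 16b': (8.70, 9.17, 0.46),
    'HCG 16c': (9.57, 9.78, 14.0),
    'HCG 16d': (9.73, 10.01, 16.7),
}

for name, (logMHI, logMH2, sfr) in galaxies.items():
    T_con = 1.3 * (10**logMHI + 10**logMH2) / sfr / 1e9   # consumption time [Gyr]
    print(name, round(T_con, 1))
\end{verbatim}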
HCG 16c and d are both starbursting LIRGs, so it is unsurprising that their gas consumption timescales are short. It is tempting to think that HCG 16a and b probably looked much the same approximately a Gyr in the past and now have slowed SFRs and have become gas deficient. However, as the group is not globally H\,{\sc i} \ deficient it seems unlikely that their atomic gas was converted to H$_2$ and consumed in SF, unless the group was gas-rich to begin with. But even if we allow for that possibility this scenario does not seem to agree with their gas and stellar content. To become depleted in H\,{\sc i}, but not in H$_2$, via SF would have required them to have undergone a SF episode, which would consume much of the molecular gas, and then for the remaining H\,{\sc i} \ to condense into H$_2$. The stellar populations (see \S\ref{sec:stellar_pop}) do not support this scenario and tidal stripping is a more natural explanation as it preferentially removes the most loosely bound gas, which is typically H\,{\sc i}, not H$_2$. However, this would imply that the encounter(s) responsible for stripping the H\,{\sc i} \ gas did not trigger a major SF event in these galaxies, despite the presence of a considerable amount of molecular gas.
\subsection{Hot gas in the IGrM}
\citet{Belsole+2003} and \citet{OSullivan+2014b} measured the hot diffuse gas component of the IGrM of HCG 16 with the XMM Newton and Chandra satellites respectively. The fact that this hot diffuse medium is detected at all is already unusual for a spiral-rich HCG, but in addition it is also quite massive. \citet{Belsole+2003} estimate the total hot gas component of the IGrM as $4.5 \times 10^{10}$ \Msol \ and \citet{OSullivan+2014b} estimate it as 5.0-9.0 $\times 10^{10}$ \Msol \ (after adjusting to a distance of 55.2 Mpc). As thoroughly discussed in \citet{OSullivan+2014b} the origin of such a large amount of hot gas is difficult to fully explain. The group is not virialised, so it is unlikely that the gas has a cosmic origin and has simply fallen into the group halo and been heated. The group is also not globally deficient in H\,{\sc i}, meaning that stripped gas cannot be the main source either. \citet{OSullivan+2014b} conclude that the most probable origin of the majority of the hot gas is the galactic winds of HCG 16c and d.
The hot gas in the vicinity of the group core is of course co-spatial with a large amount of H\,{\sc i} \ in tidal structures, demonstrating that the IGrM is multi-phase. This hot gas is unlikely to act as a source of additional cool gas due to its long cooling timescale \citep[7-10 Gyr,][]{OSullivan+2014b}, but it could negatively impact the lifetime of the H\,{\sc i}, which we discuss in \S\ref{sec:HI_lifetime}.
\subsection{Stellar populations, star formation rates, and outflows}
\label{sec:stellar_pop}
\citet{OSullivan+2014a} used the STARLIGHT code and spectra from SDSS to model stellar populations in HCG 16b and c, the only two of the galaxies with spectra in SDSS. They concluded that HCG 16b is entirely dominated by an old stellar population with a characteristic age of $\sim$10 Gyr. They also find some evidence of a very minor SF event occurring at some point in the past few hundred Myr. This event was likely triggered by the on-going interactions with HCG 16a, but it represents a negligible fraction of the total stellar population. For HCG 16c the results were heavily dependent on the choice of stellar population models, but the general finding was that a significant minority of the stellar mass of HCG 16c was formed in a starburst event during the past few hundred Myrs, with the rest of the population being made up of old stars (5-10 Gyr).
As mentioned in the previous section \citet{Lenkic+2016} estimated the SFRs of all the core galaxies. Although there is no stellar population estimate for HCG 16a its central region appears similar in colour to HCG 16b, suggesting it is made up of an evolved stellar population. However, it is surrounded by a ring of SF that is bright in GALEX UV bands and greatly elevates the estimated SFR. In the case of HCG 16d the estimated SFR is very high ($\sim17 \; \mathrm{M_{\odot}\,yr^{-1}}$) as is expected for a LIRG.
Using WiFeS (Wide Field Spectrograph on the ANU 2.3 m telescope) \citet{Rich+2010} studied the biconical outflow from HCG 16d. This outflow, driven by the ongoing nuclear starburst, contains ionised and neutral gas, traced by H$\alpha$ and Na D lines. They also find A-type stellar absorption features throughout the stellar disc, and suggest that starbursting galaxies like HCG 16d might be progenitors for E+A galaxies. This A-type stellar population in the stellar disc indicates that the galaxy has undergone a global star formation event less than a Gyr ago, in addition to the current nuclear starburst (although they may represent different phases of the same sustained event). Despite this evidence of significant recent star formation, HCG 16d still has sizeable reservoirs of both molecular and atomic gas (Table \ref{tab:gas_time}), however, it is not clear whether the H\,{\sc i} \ component of this gas is truly associated with the galaxy, or is a chance superposition.
One reason to believe that the H\,{\sc i} \ gas might not be associated with the galaxy is because of its peculiar velocity structure, which is very disorderly and any potential gradient appears to be almost perpendicular to the stellar disc (Figure \ref{fig:HCG16d_moms}). Here we draw a comparison with the H\,{\sc i} \ distribution around M82, one of the best studied starbursting galaxies. The H\,{\sc i} \ distribution around M82 looks somewhat similar to HCG 16d, with the velocity gradient in H\,{\sc i} \ aligned with the outflow rather than with the stellar disc \citep[e.g.][]{Martini+2018}. In the case of M82 the H\,{\sc i} \ distribution is interpreted as gas that is entrained in the hot wind, although the details of the exact mechanism are uncertain. This lends support to the interpretation that this H\,{\sc i} \ gas observed in the vicinity of HCG 16d really is associated with it and that its anomalous velocity structure is a result of the current wind. However, it is also possible, and even likely, that both proposed scenarios are somewhat true. There is a great deal of extended H\,{\sc i} \ in the IGrM so it is very plausible that we have mistakenly attributed some of this to HCG 16d.
HCG 16c is another LIRG in the group, and like HCG 16d has a high SFR. \citet{Vogt+2013} studied the M82-like wind that also exists in this galaxy, also with the WiFeS instrument. In striking similarity with HCG 16d the wind also appears to be a nuclear starburst driven phenomenon and the galactic disc shows signs of an A-type stellar population throughout. This indicates that the recent past has been very similar for these two galaxies, with each experiencing a global SF event within the last Gyr and both currently undergoing a nuclear starburst that is powering a galactic wind. The simplest explanation for this synchronised evolution is that it is driven by their tidal interaction with each other. However, there is another plausible explanation, that the passage of NGC 848 through the group triggered these events in both galaxies at approximately the same time (we consider this time scale in the following subsection). \citet{Vogt+2013} argue that although NGC 848 could have triggered the event responsible for the A-type population, the timescales are not compatible for the ongoing starbursts, however, it is possible that the events were initially global and have since been funnelled to the nuclear regions.
Despite the apparent similarity in their recent SFHs and the presence of winds, the H\,{\sc i} \ properties of HCG 16c and d are quite disparate. While HCG 16d shows no signs of rotation and a possible velocity gradient along the minor axis, HCG 16c has a mostly regular H\,{\sc i} \ velocity field (Figure \ref{fig:HCG16c_moms}) except in its outskirts. \citet{Vogt+2013} discussed the different nature of the galactic winds in the two galaxies. The wind emanating from HCG 16c is still (mostly) confined to two bubbles above and below the disc within the surrounding H\,{\sc i} \ envelope, indicating that it is young (only a few Myr old). On the other hand the HCG 16d wind is biconical and apparently free streaming. These authors also argue that the primary driving mechanisms of the winds differ, with one being shock-excited (HCG 16d) and the other photoionised (HCG 16c). Given that these two galaxies are in the same environment and appear to have had similar recent interactions, the differences in these winds are probably due to pre-existing differences in the host galaxies or simply the different phases we are currently observing them in. We refer the reader to \citet{Rich+2010}, \citet{Vogt+2013}, and references therein, for further discussion on this topic.
\subsection{Tail age estimates}
The SE tail, which links the core group to NGC 848, was most likely formed by NGC 848 passing very close to the core group, unbinding (or attracting already loosely bound) H\,{\sc i} \ gas and stretching it out to form the $\sim$160 kpc long tail. Across most of its length the SE tail is visible in just 4 velocity channels (3942--4006 \kms), which suggests the feature is approximately aligned with the plane of the sky and that we can consider its projected length as almost equal to its physical length. Given the large separation between NGC 848 and the core group it is also reasonable to assume that when NGC 848 passed through the group it was travelling at approximately the escape velocity. Summing the stellar masses of the four core galaxies gives the total stellar mass of the core group as $4.42 \times 10^{10}$ \Msol \ (Table \ref{tab:gas_time}). Combining this with the stellar mass--halo mass relation from \citet[][their equation 3 and Table 2]{Matthee+2017} calculated from the EAGLE (Evolution and Assembly of GaLaxies and their Environments) simulations, we estimate the dark matter halo mass of the core group as $2.8 \times 10^{12}$ \Msol. This corresponds to an escape velocity of $\sim$400 \kms \ at the present separation. Assuming this velocity NGC 848 would have passed by the group approximately 400 Myr ago. It should be noted that this value is quite uncertain as the simplistic argument above hides many complexities, however, it is still useful as an order of magnitude guide.
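The last two numbers in this argument follow from elementary kinematics. The snippet below is purely illustrative: it takes the halo mass inferred from the \citet{Matthee+2017} relation as given, treats the core group as a point mass, and uses the projected 160 kpc separation.
\begin{verbatim}
import numpy as np

G = 4.301e-6            # gravitational constant [kpc (km/s)^2 / Msun]
M_halo = 2.8e12         # halo mass of the core group [Msun]
d = 160.0               # separation of NGC 848 from the core group [kpc]

v_esc = np.sqrt(2.0 * G * M_halo / d)        # ~390 km/s
t_cross = d * 3.086e16 / v_esc / 3.156e13    # kpc -> km, s -> Myr; ~400 Myr
print(v_esc, t_cross)
\end{verbatim}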
The only disturbance to NGC 848 that is visible in the optical image is that its spiral arms appear quite loose and are extended to the North-West and the South-East. However, over the majority of the extent of the H\,{\sc i} \ tail there is no detectable optical counterpart. \citet{Konstantopoulos+2010} estimate that optical tidal features in groups will be dispersed within about 500 Myr. Given our age estimate of the H\,{\sc i} \ feature, this may explain why no optical counterpart is seen, however, it is also possible that the tail is formed of H\,{\sc i} \ gas that was already loosely bound and did not host any significant stellar population. In this case there may never have been an optical counterpart. We discard the possibility of in situ SF within the SE tail because at no point along its length does the column density rise to $10^{21} \; \mathrm{cm^{-2}}$ (the peak value is $1.3 \times 10^{20} \; \mathrm{cm^{-2}}$).
Following the arguments in \S3.5 of \citet{Borthakur+2015} we can also make an estimate of how long the H\,{\sc i} \ content of the SE tail can persist into the future. The peak column density along the spine of the SE tail is about $10^{20} \; \mathrm{cm^{-2}}$ and it is about 1\arcmin \ (16 kpc) in width. If we assume that the tail is a cylinder of gas then the average density is about 0.016 $\mathrm{cm^{-3}}$. \citet{Borthakur+2015} find that H\,{\sc i} \ becomes susceptible to ionisation from background radiation below densities of about $10^{-3} \; \mathrm{cm^{-3}}$ at column densities of about $10^{19} \; \mathrm{cm^{-2}}$ (their Figure 6). If we assume that the H\,{\sc i} \ clouds making up the SE tail are expanding with a fiducial velocity of 20 \kms \ then it will reach this threshold column density after about 1 Gyr and the threshold density after about 1.7 Gyr. Thus we expect this feature to survive for at least another Gyr.
\citet{Konstantopoulos+2013} use the colours (from SDSS images) of the optical tail extending eastward of HCG 16a to estimate an age of between 100 Myr and 1 Gyr. While in projection this tail looks like it is associated with the NW tail (Figure \ref{fig:mom0}), as discussed previously this H\,{\sc i} \ feature does not connect kinematically to HCG 16a, instead if this optical tail has an H\,{\sc i} \ counterpart it is probably the much smaller H\,{\sc i} \ feature visible on the eastern side of HCG 16a at 4027 \kms \ (see the channel maps in the electronic version). Given the loose constraint on the age of this tail and the density of the core group it is difficult to say what interaction is responsible for this tail. It could simply be the on going interaction between HCG 16a and b, or an interaction with HCG 16c and d, or perhaps even due to the recent passage of NGC 848.
\subsection{The NW tail: accreting gas, tidal tail, or outflow?}
\label{sec:nw_tail}
The NW tail intersects HCG 16c at its centre on the plane of the sky and at its central velocity in the spectral direction (Figure \ref{fig:pv_plot}). As HCG 16c is currently undergoing a starburst event it is worthwhile considering if this could be a sign of low angular momentum, cool gas accreting onto the centre of HCG 16c and fuelling its starburst. However, before asserting such an exceptional hypothesis we should first examine and attempt to eliminate other more mundane scenarios. As mentioned previously, two other competing hypotheses are that this feature could be the result of an outflow or the chance superposition of a tidal tail.
First, consider the outflow hypothesis. We have already argued that H\,{\sc i} \ gas in HCG 16d is being disrupted by its galactic wind and that there is also a wind emanating from HCG 16c. However, the NW tail is a well-collimated feature, albeit with a pronounced curve, whereas the gas in the vicinity of HCG 16d is disordered and even the H\,{\sc i} \ velocity field of M82 \citep{Martini+2018} does not show collimated features like this. Without invoking a mechanism to collimate the outflowing neutral gas over many tens of kpc it would not be possible to form such a feature with a galactic wind; therefore, we discard the possibility of the NW tail being an outflow.
Next, we discuss the possibility of a chance superposition. Given that the feature intersects the centre of HCG 16c in both velocity and position a chance superposition seems, at first, unlikely. However, we know already that NGC 848 must have passed extremely close to HCG 16d (and therefore HCG 16c) in order to form the SE tail. Also the SE tail and the NW tail appear as though they may be part of one continuous feature which passes through the core group (Figure \ref{fig:pv_plot}, top panel). This feature may trace the path of NGC 848 when it passed through the core group, with the NW tail being the leading tail formed from the gas surrounding HCG 16c and d as NGC 848 approached the group, and the SE tail being the trailing tail which became very extended as NGC 848 exited the opposite side of the group. In this scenario a chance superposition of the centre of HCG 16c and the NW tail is not nearly as contrived as it might otherwise be.
In summary, while it remains a possibility that the NW tail might be accreting onto HCG 16c's centre, given the other information about the likely recent past of the group it seems that a chance superposition is the most likely explanation, that is, that the NW tail is a tidal tail with the same redshift as the centre of HCG 16c, but not the same line-of-sight distance (owing to peculiar motions).
\subsection{Survival of H\,{\sc i} \ in the IGrM}
\label{sec:HI_lifetime}
We observe a considerable amount of H\,{\sc i} \ gas in HCG 16 that is not associated with any individual galaxy. Here we discuss the stability of this gas considering the on-going processes within the group. As noted by \citet{Verdes-Montenegro+2001}, HCGs seem to have two final stage morphologies\footnote{Here we ignore the common envelope phase as at least some of the few known examples were misidentified \citep[][and Damas-Segovia et al. in prep]{Verdes-Montenegro+2002} and we are no longer convinced this is a genuine phase.}; those with almost no remaining H\,{\sc i}, and those with H\,{\sc i} \ found only in extended features, not associated with individual galaxies. Which of these will be the fate of HCG 16?
\citet{Borthakur+2015} discussed the distribution and fate of diffuse H\,{\sc i} \ gas in 4 compact groups. These authors formulated an analytic approximation for the minimum distance an H\,{\sc i} \ cloud of a given column density can be from a starburst event forming stars at a given rate (their equation 3). The lowest column density contours in Figure \ref{fig:mom0} are 2.45 and $9.80 \times 10^{19} \; \mathrm{cm}^{-2}$, and these enclose much of the area around the galaxies in the core group. Using the fiducial column density of $5 \times 10^{19} \; \mathrm{cm}^{-2}$ and the expressions from \citet{Borthakur+2015} we estimate that such H\,{\sc i} \ clouds should not be stable within $\sim$100 kpc of either HCG 16c or d, yet this would rule out most of the core region of the group, where we already know there is H\,{\sc i} \ gas.
\citet{Borthakur+2015} found a similar apparent contradiction in HCG 31, but reasoned that the diffuse H\,{\sc i} \ probably followed a similar distribution to the higher column density H\,{\sc i} \ and was thus likely shielded from ionising photons.
In HCG 16 it may be that the apparently low column density features are really made up of dense clumps, smaller than the resolution of our images ($\sim$30 \arcsec or about 8 kpc), which have their emission smeared out by the beam. Also the energy output from the starbursts in HCG 16c and d will be highly non-isotropic, which may mean that a small fraction of the core group is very hostile to H\,{\sc i} \ clouds, but that the majority is not.
Using deep Chandra observations \citet{OSullivan+2014b} estimated the temperature of the hot, diffuse IGrM in HCG 16 as 0.3 keV ($3.5 \times 10^{6}$ K) and its number density as around $1 \times 10^{-3} \; \mathrm{cm}^{-3}$. Following \citet{Borthakur+2010} we estimate that the critical radius of H\,{\sc i} \ clouds to prevent evaporation due to conductive heating is about 2 kpc. Given the spatial resolution of our VLA data we cannot verify this directly, other than to say that the persistence of H\,{\sc i} \ in the IGrM of HCG 16 implies that the H\,{\sc i} \ clouds are larger than this limit. We also note that the lack of correlation between the H\,{\sc i} \ properties and hot IGrM properties found in other HCGs \citep{Rasmussen+2008} disfavours conductive heating as being a key H\,{\sc i} \ removal mechanism in HCGs.
The final source of energy which could have the potential to remove the H\,{\sc i} \ gas from the IGrM is the starburst-driven galactic winds emanating from HCG 16c and d. \citet{Rich+2010} and \citet{Vogt+2013} describe these winds that are driving out neutral gas from the discs of these two galaxies. Despite the dramatic nature of these winds, \citet{Borthakur+2013} argue that the energy release rate of an M82-like wind is similar to the cooling rate of the surrounding medium, so while a starburst can maintain a galactic scale wind while it is active, it is unlikely to have a major long-term effect on the temperature of the IGrM.
To summarise, we are not aware of any source that is likely to destroy a considerable fraction of the H\,{\sc i} \ currently observed in the IGrM on short timescales ($<$1 Gyr). Therefore, we expect the dominant effect to be gradual evaporation due to background UV radiation.
\subsection{Correspondence of H\,{\sc i} \ and optical extended features}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Fig16-HCG16_core_deep.pdf}
\includegraphics[width=\columnwidth]{figures/Fig16-NGC848_deep.pdf}
\caption{\textit{Top}: Enhanced DECaLS $gr$ image of the core group with LSB features highlighted. The black regions around the galaxies are due to over-subtraction of the sky that occurs in the DECaLS pipeline. The grey regions are the masked high surface brightness emission. \textit{Bottom}: As above but for the area surrounding NGC 848. This image also includes H\,{\sc i} \ contours from the combination of the 3 channels where the NGC 848S tail is most pronounced. Levels: -1, 1, 2, 3, 4, 5 $\times 0.05$ Jy \kms \ per beam, $\times 4.9 \times 10^{19} \; \mathrm{cm^{-2}}$, or $\times 0.39 \; \mathrm{M_{\odot}\,pc^{-2}}$}
\label{fig:deep_opt}
\end{figure}
The DECaLS images were masked using the \texttt{Noisechisel} software \citep{Akhlaghi+2015}. Then we rebinned the images to a pixel size of 1\arcsec \ followed by smoothing with a kernel of 2 pixels. This enhances the diffuse emission in the data, allowing the lowest surface brightness features, not detectable at the original pixel scale of $\approx0.27$\arcsec, to be identified. By comparing images in the $g$, $r$ and $z$ bands we identified 4 new LSB features (Figure \ref{fig:deep_opt}) which appear to be associated with HCG 16 (there are other LSB features which were attributed to background galaxies or clusters). The over-subtraction and fluctuation in the reference background level, especially close to extended galaxies, make the surface brightness of the features unreliable, but we estimate the features to be between 27 and 28 mag arcsec$^{-2}$ in $r$-band.
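For reference, the rebin-and-smooth step can be sketched as follows. This is only a schematic illustration, assuming an integer rebinning factor, a Gaussian smoothing kernel, and that the \texttt{Noisechisel} mask has been applied as NaNs; it is not the exact procedure used to produce Figure \ref{fig:deep_opt}.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(masked_img, block=4, sigma=2.0):
    """Block-average rebin (~0.27" -> ~1" pixels) then smooth with a
    Gaussian of width `sigma` (rebinned) pixels.  Masked pixels are NaN."""
    ny = (masked_img.shape[0] // block) * block
    nx = (masked_img.shape[1] // block) * block
    binned = np.nanmean(
        masked_img[:ny, :nx].reshape(ny // block, block, nx // block, block),
        axis=(1, 3))
    return gaussian_filter(np.nan_to_num(binned), sigma)
\end{verbatim}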
In the core group we were unable to confidently identify any H\,{\sc i} \ structures which correspond to these LSB features. As stellar tails are generally thought to be shorter lived than H\,{\sc i} \ tails this likely indicates that these LSB features have a gas-poor origin. For example, it is possible that the most western feature originates from either HCG 16a or b, which are both H\,{\sc i} \ deficient. An alternative source for these features could also be disrupted dwarf members of the group that have been accreted by the larger members, leaving only faint stellar tails.
Near NGC 848 there is a very linear LSB feature, which was almost obscured by a star (Figure \ref{fig:deep_opt}, bottom). This feature is co-spatial with the H\,{\sc i} \ emission of the NGC 848S tail and is most likely its optical counterpart. It has a $g-r$ colour of $0.4 \pm 0.1$. The contours in the lower panel of Figure \ref{fig:deep_opt} are from the combination of 3 channels (4027-4069 \kms) where the NGC 848S tail is most prominent. The peak column density contour within the tail is $1.5 \times 10^{20} \; \mathrm{cm^{-2}}$, so in situ star formation is unlikely and we are probably detecting the old stellar population in this tail. As optical tidal tails are expected to be observed for only $\sim$500 Myr after formation, it is consistent with our age estimate of 400 Myr for the SE tail and for the interaction of NGC 848 with the rest of the group. If this tidal feature formed at approximately the same time then today it would be expected to be barely visible in the optical.
Even in these enhanced images there is still no indication of a stellar counterpart for the SE tail. The deepest of the three bands is $r$ with a 3$\sigma$ surface brightness sensitivity of 28.7 mag arcsec$^{-2}$ in a 10\arcsec$\times$10\arcsec \ box. This null detection supports the hypothesis that the SE tail was formed from loosely bound H\,{\sc i} \ gas that had no associated stellar counterpart.
\subsection{Morphological transformation}
\label{sec:morph_trans}
HCGs are known to host an excess of early-type galaxies, particularly S0s, and have a corresponding shortage of late-type galaxies relative to the field \citep{Hickson+1988}. This shortage of spirals and excess of S0s has led various authors to suggest that as spirals in CGs are stripped of gas they may evolve into S0s \citep[e.g.][]{Verdes-Montenegro+2001,Sulentic+2001}. In addition, CGs are found to have a pronounced gap in IR colours \citep{Johnson+2007}, referred to as the ``canyon'' or the IRTZ (Infra-Red Transition Zone), where galaxies appear to be in the late stages of shifting their morphology after passing the optical green valley \citep{Alatalo+2014} and beginning to lose their molecular gas \citep{Lisenfeld+2017}. Furthermore, in HCGs the IR canyon is dominated by lenticulars and early-types, again suggesting that spirals may be transitioning to earlier types in CGs.
HCG 16 contains two S0a galaxies, HCG 16c and d, which are both undergoing starbursts, as would be expected in the model proposed by \citet{Bekki+2011}, in which spirals transform into S0s through repeated close interactions and the associated SF events. This raises the question: are HCG 16c and d currently undergoing morphological transformation from spirals to lenticulars?
In the DECaLS image (Figure \ref{fig:optim}) faint stellar shells are visible around both HCG 16c and d. Shells in massive galaxies have been widely regarded as the result of interactions with low mass companions on approximately radial orbits \citep[e.g.][]{Quinn1984,Dupraz+1986,Hernquist+1988,Hernquist+1989} and generally have been found around early-type, rather than late-type, galaxies \citep[e.g.][]{Malin+1983,Atkinson+2013}. However, more recently \citet{Pop+2018} performed a study of the occurrence of shells based on the Illustris cosmological simulations, finding that they can be caused by more equal mass interactions as well. In addition, they find that although for 1:10 mass ratio interactions (the minimum found to produce shells) the orbit must be almost purely radial, the more equal the mass ratio of the progenitors the wider the range of impact parameters that can produce shells. Thus, the simplest and most likely explanation for both HCG16c and d displaying shells is that they are the respective causes of each other's shells. However, this does not rule out the possibility that they both independently formed shells in mergers with now unseen companions, prior to their present interaction with each other.
The presence of shells in HCG 16c and d also hints that they are likely to have earlier morphologies in the near future, as most shells are found around early-type galaxies. However, the formation mechanism of S0s in groups and the field is still a topic of debate, with evidence that supports either a secular \citep{Guerou+2016,Rizzo+2018} or violent \citep{Laurikainen+2010,Querejeta+2015,Tapia+2017,Eliche-Moral+2018} evolutionary pathway from spirals to lenticulars, or perhaps both \citep{Fraser-McKelvie+2018}. It has even been suggested that spirals are not the progenitors at all and that lenticulars represent an old population distinct from both elliptical and spiral galaxies \citep{Gao+2018}.
It should also be noted that HCG 16c and d are not in the IRTZ/canyon because they are actively forming stars, which, almost by definition, IRTZ/canyon galaxies are not. However, they are already classified as S0a, indicating the lenticular-like appearance arose before any potential future transition. In fact the only galaxy in the group that has apparently traversed the canyon already is HCG 16b \citep{Zucker+2016}, which has clear spiral features. Of course, just because crossing the canyon was not accompanied by a morphological change for this galaxy does not mean it cannot be for others, but it does indicate that caution is needed when trying to interpret small samples.
Given this lack of consensus regarding the origin of S0s and what relation they may have to the IRTZ, it is not possible at present to say whether HCG 16c and d are likely undergoing a transformation from spiral to lenticular morphology, or whether they were lenticular already. However, this is a possibility that deserves further consideration both in HCG 16 and other CGs.
\section{Summary and outlook}
\label{sec:summary}
In this final section we first review our findings of the present state of the group as evidenced by the H\,{\sc i} \ data and the rich multi-wavelength data set described in the literature, then to conclude we propose a scenario which explains these findings and discuss the probable end state of the group.
\subsection{Summary}
Overall HCG 16 is not deficient in H\,{\sc i}, it has the expected quantity of H\,{\sc i} \ gas given the B-band luminosity of its members. However, while the total amount may be equivalent to that found in isolated galaxies, the gas is unevenly distributed throughout the group and its members. The north-western pair, HCG 16a and b, have both lost the vast majority of their expected H\,{\sc i} \ content, while the other members all have normal H\,{\sc i} \ masses. The remaining gas is spread out through the IGrM in a tangle of tidal tails, bridges and dense clumps, some of which may be TDGs.
Despite being spread out across the entire group, the most plausible origin for the majority of the extended H\,{\sc i} \ gas is the HCG 16a and b pair. That pair is undergoing a strong interaction resulting in multiple optical and H\,{\sc i} \ tidal features between and around them. However, despite this interaction (and the presence of significant molecular gas reservoirs) they do not appear to have highly elevated SFRs (although HCG 16a does have a ring of SF in its outer disc) nor do their optical colours or stellar population models point to bursts of SF in the recent past. Given that these galaxies are both H\,{\sc i} \ deficient and have not recently converted large amounts of gas into stars, we conclude that the H\,{\sc i} \ gas must have been tidally stripped by interactions without triggering starburst events.
The pair at the centre of the group, HCG 16c and d, are also undergoing a strong interaction, but with quite the opposite outcome. Both galaxies are currently starbursting and have galactic winds powered by these events. They are also embedded in a tangled web of extended H\,{\sc i} \ features which form a high column density bridge between the two galaxies, tidal tails to the NE and NW, dense knots, and an enormous tail to the SE. The H\,{\sc i} \ kinematics of HCG 16c are mostly regular in its inner region, although the outer disc is somewhat disturbed. The NW tail connects in position and velocity to the centre of HCG 16c, presenting the possibility of cool gas with low angular momentum accreting directly onto its centre and fuelling the starburst. However, given the interactions we traced through the group we favour an interpretation of this as a chance superposition in velocity space that does not correspond to the two objects being truly co-spatial in 3 dimensions. In HCG 16d the H\,{\sc i} \ kinematics appear to be completely disrupted. There is only a faint indication of a gradient in its velocity field and this is approximately perpendicular to the major axis of the disc. We interpret this as the starburst event disrupting the H\,{\sc i} \ and it becoming entrained in the galactic wind. The rapid SFRs of these two galaxies mean that they will exhaust their gas reservoirs within about a Gyr, leaving them gas deficient without external replenishment.
The final large galaxy in the group, NGC 848, is separated from the other galaxies in optical emission by approximately 160 kpc, but this distance is traversed by the enormous SE tail, which forms an H\,{\sc i} \ connection between NGC 848 and the core group. Other than the unwinding edges of NGC 848's disc, this tidal tail has no apparent optical counterpart along its entire length, indicating that it is either too old for any accompanying stellar component to visibly persist to the present day, or that it was formed from loosely bound gas that did not have an associated stellar component to begin with. A simplistic estimate of the age of the tail, based on the assumption that NGC 848 is travelling at approximately the group's escape velocity, gives 400 Myr, on the upper end of how long a counterpart is expected to survive. Given the approximate nature of this estimate it is hardly conclusive either way, however, due to the abundance of neutral gas in the core group that is not associated with any galaxy we favour the latter interpretation.
Therefore, the dominant processes modifying the H\,{\sc i} \ content of HCG 16 are tidal stripping and star formation. Tidal interactions removed the majority of the H\,{\sc i} \ content of HCG 16a and b, and spread it out across the entire group, while SF in HCG 16c and d is rapidly consuming molecular gas (which will presumably be replenished from the available H\,{\sc i} \ reservoirs) and disrupting the H\,{\sc i} \ disc of HCG 16d. If this interpretation is correct then it would contradict the \citet{Konstantopoulos+2010} modified evolutionary sequence for HCGs because the HCG 16a and b pair would have been a gas-rich, strong interaction which did not result in gas being consumed by SF, whereas HCG 16c and d are another gas-rich pair which clearly has resulted in elevated SF, demonstrating not only that these two different results are possible, but that they are even possible in the same group. Thus the group cannot be classified as either a case where gas is mostly consumed by SF before major interactions occur, or as a case where tidal interactions remove the gas before it can be consumed by SF (the two distinct pathways in that scheme).
After reviewing the potential mechanisms for ionising the H\,{\sc i} \ currently in the IGrM we find that the hot component of the IGrM is not energetic enough to evaporate large ($>$2 kpc) H\,{\sc i} \ structures, that the on-going SF in the group does not appear to be strongly affecting the existing H\,{\sc i} \ in the IGrM, and that while the galactic wind of HCG 16d may currently be ejecting gas and energy into the IGrM, the effects of this are unlikely to persist once the starburst event has ended. Thus, evaporation by the UV background will likely be the principal mechanism for removing H\,{\sc i} \ from the IGrM on a long timescale ($>$1 Gyr).
\subsection{Global picture and future outlook}
With all of the above results in mind we have attempted to construct a coherent picture of the past evolution of the group that fits with all the available evidence.
Strong tidal interactions involving HCG 16a and b likely unbound much of their H\,{\sc i} \ gas without triggering a major SF event. This unbound gas was then dragged through the centre of the group by the passage of NGC 848 about 0.5 Gyr ago. This close passage also started SF episodes in HCG 16c and d, generating the E+A-like spectra their discs have today. This passage of NGC 848 is traced by the SE tail and the NW tail, which together form a continuous structure spanning the entire group, and the latter of which may have formed a TDG at its tip. At present the SFRs of HCG 16c and d are highly elevated, now driven by their interaction with each other, while the SFRs in HCG 16a and b remain more modest, aside from a ring of SF activity in HCG 16a.
Over the next Gyr HCG 16c and d will likely convert, consume or expel much of their gas supply through SF leaving themselves H\,{\sc i} \ deficient like HCG 16a and b, though by different means. Meanwhile, the H\,{\sc i} \ in all the galaxies will continue to be stripped by tidal interactions. It is unclear whether NGC 848 is travelling fast enough to escape the group, or whether it will fall back to be the sole H\,{\sc i}-normal large galaxy in the group. The extended H\,{\sc i} \ features in the group are expected to persist for several Gyr as they are gradually evaporated by the UV background. This will result in HCG 16 resembling a phase 3a group where there is little or no H\,{\sc i} \ remaining in the galaxies (with the possible exception of NGC 848), but extended H\,{\sc i} \ features are still visible in the IGrM.
\begin{acknowledgements}
MGJ is supported by a Juan de la Cierva formaci\'{o}n fellowship. We also acknowledge support from the grants AYA2015-65973-C3-1-R and RTI2018-096228-B-C31 (MINECO/FEDER, UE). This work has been supported by the Spanish Science Ministry ``Centro de Excelencia Severo Ochoa'' program under grant SEV-2017-0709.
MGJ wishes to thank B. Koribalski, K. Lee-Waddell, and S. Cazzoli for helpful discussions. We also thank the referee for his thorough comments which helped to improve this paper.
This project used archival data from the VLA. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration (full acknowledgement at \url{legacysurvey.org/acknowledgment}).
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We also acknowledge the use of the HyperLeda database \citep{HyperLeda}. This research made use of \texttt{APLpy}, an open-source plotting package for \texttt{Python} \citep{aplpy2012,aplpy2019}, \texttt{astropy} \citep{astropy1,astropy2}, \texttt{Aladin} \citep{aladin}, \texttt{mayavi} \citep{mayavi}, and \texttt{SAOImageDS9} \citep{ds9}.
\end{acknowledgements}
\bibliographystyle{aa}
We address the challenge of designing a realistic model of complex networks while preserving its analytic tractability. The model should include the essential structural properties of real networks, and the theoretical framework should guarantee easy access to quantitative calculations. For the second aspect of this endeavour, we cast our analysis in terms of a percolation problem. This has been a topic of choice for some years since it can just as well represent the dynamics \textit{of} a network as the dynamics \textit{on} the network \cite{Dorogovtsev03_Evolution,Meyers07_BullAmerMathSoc,Arenas08_PhysRep,Dorogovtsev08_RevModPhys,Cohen10_ComplexNetworks,Newman10_Networks,Hebert-Dufresne13_PhysRevLett,Hebert-Dufresne13_SciRep}. One might think of its growth, its robustness (to attacks or failures) and the propagation of emerging infectious agents (e.g. disease or information).
While the study of percolation models on idealized networks has led to a better understanding of both the processes they model and the networks that support them, the study of percolation on real networks has somewhat stagnated. Unfortunately, purely numerical approaches are time-consuming, require a complete description of the networks under scrutiny and lack the insights of an analytical description. Conversely, although analytical modeling provides a better understanding of the organization of real networks, they are limited at present to simplified random models \cite[see][{and references therein}]{Newman03_SIAMRev,Newman10_Networks}.
In this paper, we demonstrate how the k-core structure of networks (hereafter simply core structure) plays a central role in the outcome of bond percolation, and how it acts as a proxy that captures the essential structural properties of real networks. The ensuing model, that we call the Hard-core Random Network (HRN) model, creates maximally random networks with an arbitrary degree distribution \textit{and} an arbitrary core structure. We also propose a Metropolis-Hastings algorithm to generate such random networks. The HRN model serves our purpose well since it is shown to be amenable to an exact solution for the size of the extensive ``giant'' component (in the limit of large network size). With less input information, it outperforms the current standard model \cite{Melnik11_PhysRevE} for precise prediction of percolation results on real networks.
The organization of this paper goes as follows. In Sec.~\ref{sec:hrn_perco}, we introduce the bond percolation problem and briefly present the two models used for comparison. In Sec.~\ref{sec:hrn_hrn}, we present the HRN model, the equations used to solve the bond percolation problem and the Metropolis-Hastings algorithm generating the corresponding random networks. We also compare the predictions of the HRN model and the ones of the two aforementioned models with the results obtained numerically using real network databases. Final remarks are collected in the last section.
\section{Bond percolation on networks} \label{sec:hrn_perco}
The bond percolation problem concerns the connectivity of a network after the removal of a fraction $1-T$ of its edges. More precisely, for a synthetic or empirical network, we are interested in the fraction $S$ of nodes contained in the largest connected component---the giant component---after each edge has been removed independently with a probability $1-T$. In the limit of large networks, this component undergoes a \textit{phase transition} at a critical point $T_\mathrm{c}$ during which its size (the number of nodes it contains) becomes an extensive quantity that scales linearly with the number of nodes ($N$) of the whole network \cite{Christensen05_ComplexityAndCriticality}.
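As a concrete illustration (not used in any calculation below), $S$ can be estimated for any given network by direct simulation; a minimal sketch using the \texttt{networkx} Python library could read as follows.
\begin{verbatim}
import random
import networkx as nx

def giant_component_fraction(G, T, trials=100):
    """Mean fraction of nodes in the largest component after keeping
    each edge independently with probability T."""
    N = G.number_of_nodes()
    total = 0.0
    for _ in range(trials):
        H = nx.Graph()
        H.add_nodes_from(G)
        H.add_edges_from(e for e in G.edges() if random.random() < T)
        total += max(len(c) for c in nx.connected_components(H)) / N
    return total / trials

# example: an Erdos-Renyi graph with mean degree 4
G = nx.gnp_random_graph(10_000, 4.0 / 9_999, seed=1)
print([round(giant_component_fraction(G, T, 20), 2) for T in (0.2, 0.3, 0.5)])
\end{verbatim}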
To compare and assert the precision of the predictions of our model, we use the \textit{Configuration Model} (CM) and \textit{Correlated Configuration Model} (CCM) \cite{Newman01_PhysRevE,Newman02_PhysRevE,Newman02_PhysRevLett,Vazquez03_PhysRevE} (see Fig.~\ref{fig:hrn_example_CM}--\subref{fig:hrn_example_CCM}) as benchmarks. These models define maximally random network ensembles that are random in all respects other than the degree distribution (CM,CCM) and the degree-degree correlations (CCM). The degree distribution, $\{P(k)\}_{k\in\mathbb{N}}$, is the distribution of the number of connections (the degree $k$) that nodes have. The degree-degree correlations are defined through the \textit{joint degree distribution}, $\{P(k,k^\prime)\}_{k,k^\prime\in\mathbb{N}}$, giving the probability that a randomly chosen edge has nodes of degree $k$ and $k^\prime$ at its ends.
For both models, the size of the giant component $S$ and the percolation threshold $T_\mathrm{c}$ can be calculated in the limit $N\rightarrow\infty$ using probability generating functions (pgf) \cite{Newman01_PhysRevE,Newman02_PhysRevE,Newman02_PhysRevLett,Vazquez03_PhysRevE,Newman03a_PhysRevE,Vazquez06_PhysRevE,Allard09_PhysRevE,Allard12_JPhysA}. To model bond percolation on a given network with these models, we simply extract the degree distribution and the joint degree distribution; the required information therefore scales as $k_{\mathrm{max}}$ and $k_{\mathrm{max}}^2$. The original network is then found within the random ensembles containing all possible networks that can be designed with the same degree distribution and/or degree-degree correlations. The readers unfamiliar with these models and/or the mathematics involved can get a brief overview of these subjects in Appendices \ref{app:CM} and \ref{app:CCM}.
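For readers unfamiliar with the formalism, the CM calculation reduces to a one-dimensional fixed point: writing $G_0$ and $G_1$ for the pgf of the degree and excess degree distributions, the probability $u$ that an edge does not lead to the giant component obeys $u = 1 - T + TG_1(u)$, and $S = 1 - G_0(u)$ (see Appendix \ref{app:CM}). The illustrative snippet below iterates these equations for a Poisson degree distribution, for which $G_0(x)=G_1(x)=e^{\langle k \rangle (x-1)}$; its output can be compared with the simulation sketch above.
\begin{verbatim}
import numpy as np

def S_poisson_CM(T, kmean, n_iter=10_000, tol=1e-12):
    """Giant component size for bond percolation on a CM network with a
    Poisson degree distribution (G0 = G1 = exp(kmean*(x-1)))."""
    u = 0.0    # prob. that an edge does not lead to the giant component
    for _ in range(n_iter):
        u_new = 1.0 - T + T * np.exp(kmean * (u - 1.0))
        if abs(u_new - u) < tol:
            break
        u = u_new
    return 1.0 - np.exp(kmean * (u - 1.0))

# compare with the simulation sketch above
print([round(S_poisson_CM(T, 4.0), 2) for T in (0.2, 0.3, 0.5)])
\end{verbatim}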
The degree distribution and the joint degree distribution can be seen as the one-point and two-point correlation functions of a network. The next logical step would therefore be to consider three-point correlations (i.e., clustering), and eventually to incorporate mesoscopic features such as motifs, cliques, and communities. Although many theoretical models have been proposed \cite{Newman03b_PhysRevE,Serrano06_PhysRevLett,Serrano06b_PhysRevE,Shi07_PhysicaA,Berchenko09_PhysRevLett,Miller09_PhysRevE,Newman09_PhysRevLett,Gleeson09_PhysRevE,Karrer10_PhysRevE,Zlatic12_EPL,Allard12_JPhysA}, a general, objective, and systematic method to tune these models in order to reproduce the features found in real networks as well as to predict the outcome of bond percolation is yet to be found \footnote{Recent advances in understanding the global organization of clustering in real networks \cite{Colomer-de-Simon13} offers further ideas to incorporate clustering in our model and will be the subject of a subsequent study.}.
\begin{figure}[t]
\subfigure[\ CM]{\label{fig:hrn_example_CM} \includegraphics[width=0.25\linewidth]{Hebert-Dufresne_Fig_1a_example_CM}}
\hspace{0.05\linewidth}
\subfigure[\ CCM]{\label{fig:hrn_example_CCM} \includegraphics[width=0.25\linewidth]{Hebert-Dufresne_Fig_1b_example_CCM}}
\hspace{0.05\linewidth}
\subfigure[\ HRN]{\label{fig:hrn_example_HRN} \includegraphics[width=0.25\linewidth]{Hebert-Dufresne_Fig_1c_example_HRN}}
\caption{\label{fig:hrn_example}(color online). Comparison of the three random network models considered. (a) The CM randomly connects stubs drawn from a given degree distribution $\{P(k)\}_{k\in\mathbb{N}}$. (b) The CCM distinguishes nodes according to their degree (colors) and randomly match stubs according to the joint degree distribution $\{P(k,k^\prime)\}_{k,k^\prime\in\mathbb{N}}$. (c) The HRN model distinguishes nodes by their coreness (colors) and stubs by their contribution to a node's coreness (red or blue). Stubs are then randomly matched according to the matrices $\mathbf{K}$ and $\mathbf{C}$.}
\end{figure}
\section{Hard-core Random Networks (HRN)} \label{sec:hrn_hrn}
We propose an alternative approach by considering a macroscopic measure of centrality: the \textit{coreness} of nodes. This choice is motivated by the recent observation that a node's coreness is a better indicator of the likeliness for that node to be part of the giant component than its degree \cite{Kitsak10_NaturePhys}. This measure also has the advantage of being general, objective, systematic, and easily calculated \cite{Batagelj03_arXiv}.
\subsection{Network coreness}
The coreness $c$ of a node is specified through its position in the core decomposition of a network \cite{Seidman83_SocialNetworks,Dorogovtsev06_PhysRevLett}. This decomposition assigns nodes to nested cores where nodes belonging to the $n$-th core all share at least $n$ edges with one another. A node has a coreness equal to $c$ if it is found in the $c$-th core, but not in the $(c+1)$-th core. The set of nodes with a coreness equal to $c$ forms the $c$-shell.
The coreness may appear complicated to compute, but a simple algorithm allows us to perform the decomposition very efficiently \cite{Batagelj03_arXiv}.
\algblock{If}{EndIf}
\algcblock[If]{If}{ElsIf}{EndIf}
\algcblock{If}{Else}{EndIf}
\algcblockdefx[Strange]{}{Eeee}{Oooo}
[1]{\textbf{Input} #1}
[1]{\textbf{Output} #1}
\begin{algorithmic}[1]
\Eeee{graph as lists of nodes $\mathcal{V}$ and neighbors $\mathcal{N}$}
\Oooo{list $\mathcal{C}$ with coreness for each node}
\STATE{compute and list the degrees $\mathcal{D}$ of nodes;}
\STATE{sort $\mathcal{V}$ with increasing degree of nodes;}
\FORALL{$v \in \mathcal{V}$ in the order of $\mathcal{V}$}
\STATE{$\mathcal{C}(v)$ := $\mathcal{D}(v)$;}
\FORALL{$u \in \mathcal{N}(v)$}
\IF{$\mathcal{D}(u) > \mathcal{D}(v)$}
\STATE{$\mathcal{D}(u)$ := $\mathcal{D}(u) -1$;}
\ENDIF
\ENDFOR
\STATE{re-sort $\mathcal{V}$ accordingly}
\ENDFOR
\end{algorithmic}
In short, this algorithm is similar to a \textit{pruning} process which removes nodes in order of their effective degree, i.e., their number of links shared with nodes currently ranked higher in the process. In the end, the coreness of a node is simply given by its degree once the peeling process reaches this particular node. Hence, we know that a node of degree $k$ and coreness $c$ has $c$ \textit{contributing} edges and $k-c$ \textit{non-contributing} edges. Based on this key observation, we develop a coreness-based random network model that defines a maximally random network ensemble with an arbitrary degree distribution \textit{and} an arbitrary core structure.
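For concreteness, a direct (though not optimally efficient) Python transcription of this pruning process could read as follows; it is meant only to illustrate the algorithm above, not to reproduce the bin-sort implementation of \cite{Batagelj03_arXiv}.
\begin{verbatim}
def coreness(neighbors):
    """k-core decomposition by iterative pruning.

    `neighbors` maps every node to the set of its neighbours;
    returns a dictionary {node: coreness}."""
    deg = {v: len(nb) for v, nb in neighbors.items()}
    core, remaining = {}, set(neighbors)
    while remaining:
        v = min(remaining, key=deg.get)   # node with the smallest effective degree
        core[v] = deg[v]                  # its coreness is its degree at removal
        remaining.remove(v)
        for u in neighbors[v]:
            if u in remaining and deg[u] > deg[v]:
                deg[u] -= 1               # prune the edge (u, v)
    return core

# toy example: a triangle (2-shell) with one pendant node (1-shell)
print(coreness({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}))
\end{verbatim}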
\subsection{The HRN model}
The only two inputs of the HRN model are a $\mathbf{K}$ matrix whose elements $K_{ck}$ correspond to the fraction of the nodes that have a coreness $c$ and a degree $k$, and a matrix $\mathbf{C}$ whose elements $C_{cc^\prime}$ give the fraction of edges that leave nodes of coreness $c$ to nodes of coreness $c^\prime$. As this model considers undirected networks, the matrix $\mathbf{C}$ is symmetric and each edge is counted twice to account for both directions.
The HRN model is a multitype version of the CM \cite{Allard09_PhysRevE,Allard12_JPhysA,Allard13b} in which each node is assigned to a type, its coreness, and in which edges are formed by randomly pairing stubs that either contribute to the node's coreness (say, \textit{red} stubs) or do not contribute to it (say, \textit{blue} stubs). Red stubs from nodes of coreness $c$ may be paired with blue stubs from nodes of coreness $c^\prime \geq c$, or with red stubs attached to nodes of coreness $c^\prime=c$ (intra-shell). Blue stubs stemming from nodes of coreness $c$ may only be matched with red stubs stemming from nodes with a coreness $c^\prime \leq c$. Blue stubs may never be paired together.
These rules enforce a minimal core structure, although random variations can bring nodes to a higher coreness than originally intended. For example, 3 nodes of original state $(k=2,c=1)$ could end up in the 2-shell in the unlikely event that they form a triangle. However, such random variations may never pull nodes to a lower coreness than intended, in addition to being extremely unlikely in the limit of large networks ($N \rightarrow \infty$). The matrices $\mathbf{K}$ and $\mathbf{C}$ (see Appendix \ref{app:cons} for consistency conditions) combined with the aforementioned stub pairing rules define a maximally random network ensemble with an arbitrary degree distribution and core structure (see Fig.~\ref{fig:hrn_example_HRN}).
The $\mathbf{K}$ matrix encodes several useful quantities. For instance, the fraction of nodes of coreness $c$
\begin{align} \label{eq:hrn_wc}
w_c & = \sum_{k} K_{ck} \ ,
\end{align}
and the associated joint degree distribution, i.e. the probability that a randomly chosen node of coreness $c$ has $k_r$ red stubs and $k_b$ blue stubs
\begin{align}
P_c(\bm{k}) \equiv P_c(k_r,k_b) = \frac{\delta_{c,k_r}}{w_c}K_{k_r,k_r+k_b} \ ,
\end{align}
where $\delta_{c,k_r}$ is the Kronecker delta. Furthermore, we can extract the average degree of nodes of coreness $c$
\begin{align}
\langle k \rangle_c = \frac{1}{w_c}\sum_{k}k K_{c,k}
\end{align}
\begin{figure}[t]
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=4.95cm]{Hebert-Dufresne_Fig_2_validation_HRN}
\caption{\label{fig:hrn_validation}(color online). Validation of the HRN model. The predictions of Eqs.\eqref{eq:hrn_pgf_order_param}--\eqref{eq:hrn_pgf_fixed_point} (lines) are compared with the results obtained on networks generated with the Metropolis-Hastings algorithm described in Sec.~\ref{sec:hrn_hrn_algo} (symbols). The matrices $\mathbf{K}$ and $\mathbf{C}$ were extracted from an email network, the MathSciNet co-authorship network and a power grid chosen for their different behaviors (see Table~\ref{tab:hrn_networks} for dataset details). Numerical results (symbols) represent the average value over $5\cdot 10^5$ simulations performed on networks with more than $3\cdot 10^5$ nodes.}
\end{figure}
and the average degree of the whole network
\begin{align}
\langle k \rangle = \sum_{c,k}k K_{ck} \ .
\end{align}
It follows from the above definition that a fraction $w_c\langle k \rangle_c/\langle k \rangle$ of stubs stems from nodes of coreness $c$, of which a fraction $w_cc/\langle k \rangle$ is red and a fraction $w_c(\langle k \rangle_c-c)/\langle k \rangle$ is blue.
The $\mathbf{C}$ matrix encodes the transition probability $R(c^\prime,j|c,i)$ that a node of coreness $c$ through a stub of color $i$ [red ($r$) or blue ($b$)] leads to a node of coreness $c^\prime$ through one of its stubs of color $j$. Since inter-shell edges can only be formed by matching a red with a blue stub, we readily obtain
\begin{subequations} \label{eq:hrn_R}
\begin{align}
R(c^\prime,b|c,r) & = \frac{C_{cc^\prime}}{w_cc/\langle k \rangle} \\
R(c,r|c^{\prime},b) & = \frac{C_{cc^\prime}}{w_{c^\prime}(\langle k \rangle_{c^\prime} - c^\prime )/\langle k \rangle} \\
R(c^\prime,r|c,b) & = R(c,b|c^{\prime},r) = 0
\end{align}
for $c < c^\prime$. Similarly, as the pairing of blue stubs is forbidden [$R(c^\prime,b|c,b)=0$ for any $c$ and $c^\prime$], a blue stub stemming from a node of coreness $c$ leads to a node belonging to the same shell (through its red stub) with probability
\begin{align} \label{eq:hrn_Rcbrc}
R(c,r|c,b) & = \frac{w_c(\langle k \rangle_c - c )/\langle k \rangle - \sum_{c^{\prime\prime}<c} C_{cc^{\prime\prime}}}{w_c(\langle k \rangle_c - c )/\langle k \rangle} \ .
\end{align}
\begin{figure*}[t!h]
\includegraphics[trim = 0mm 4mm 0mm 0mm, clip, height=3.95cm]{Hebert-Dufresne_Fig_3a_GnutellaLin}
\includegraphics[trim = 4mm 4mm 0mm 0mm, clip, height=3.95cm]{Hebert-Dufresne_Fig_3b_GowallaLin} \\
\includegraphics[trim = 0mm 4mm 0mm 0mm, clip, height=3.95cm]{Hebert-Dufresne_Fig_3c_PGPLin}
\includegraphics[trim = 4mm 4mm 0mm 0mm, clip, height=3.95cm]{Hebert-Dufresne_Fig_3d_WWWLin}
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=4.345cm]{Hebert-Dufresne_Fig_3e_MathSciLin}
\includegraphics[trim = 4mm 0mm 0mm 0mm, clip, height=4.345cm]{Hebert-Dufresne_Fig_3f_FacebookLin} \\
\caption{\label{fig:hrn_results_1}(color online). Results of bond percolation on real networks (black) compared with analytical predictions obtained with the CM (blue), CCM (green) and HRN (red). The networks are: (a) a snapshot of the Gnutella peer-to-peer network, (b) a snapshot of the Gowalla location-based social network, (c) the Pretty-Good-Privacy trust network, (d) a subset of the World-Wide Web, (e) the co-authorship network of MathSciNet before 2008, and (f) a large subset of the Facebook social network. See Table~\ref{tab:hrn_networks} for further details.}
\end{figure*}
This last result is computed by subtracting the number of blue stubs leading to outer shells (i.e., lower coreness) from the total number of blue stubs stemming from nodes of coreness $c$, and then by normalizing [$\sum_{c^\prime,j}R(c^{\prime},j|c,i)=1$ for $c\in\mathbb{N}$ and $i\in\{r,b\}$]. Finally, symmetry with Eq.~\eqref{eq:hrn_Rcbrc} implies that
\begin{align}
R(c,b|c,r) & = \frac{w_c(\langle k \rangle_c - c )/\langle k \rangle - \sum_{c^{\prime\prime}<c} C_{cc^{\prime\prime}}}{w_cc/\langle k \rangle} \ ,
\end{align}
and normalization leads to
\begin{align}
R(c,r|c,r) & = \frac{2w_cc/\langle k \rangle - C_{cc} - 2\sum_{c^{\prime\prime}>c}C_{cc^{\prime\prime}}}{w_cc/\langle k \rangle} \ ,
\end{align}
\end{subequations}
where we have used the fact that $\sum_{c^{\prime\prime}} C_{cc^{\prime\prime}} = w_c\langle k \rangle_c/\langle k \rangle$.
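In practice, the transition probabilities of Eqs.~\eqref{eq:hrn_R} can be assembled directly from the input matrices. A minimal NumPy sketch (our own, with illustrative array conventions; empty shells are simply skipped) is:
\begin{verbatim}
import numpy as np

def transition_probabilities(K, C):
    """R[c, i, c', j] = R(c', j | c, i); color index 0 = red, 1 = blue."""
    K = np.asarray(K, dtype=float)
    C = np.asarray(C, dtype=float)
    cmax = K.shape[0] - 1
    k = np.arange(K.shape[1])
    cs = np.arange(cmax + 1)
    w = K.sum(axis=1)                                   # w_c
    mean_k = (K * k).sum()                              # <k>
    kc = np.divide((K * k).sum(axis=1), w,
                   out=np.zeros_like(w), where=w > 0)   # <k>_c
    s_r = w * cs / mean_k                               # red-stub fractions
    s_b = w * (kc - cs) / mean_k                        # blue-stub fractions
    R = np.zeros((cmax + 1, 2, cmax + 1, 2))
    for c in range(cmax + 1):
        if w[c] == 0:
            continue
        for cp in range(c + 1, cmax + 1):               # inter-shell, c < c'
            if s_r[c] > 0:
                R[c, 0, cp, 1] = C[c, cp] / s_r[c]      # red(c)  -> blue(c')
            if s_b[cp] > 0:
                R[cp, 1, c, 0] = C[c, cp] / s_b[cp]     # blue(c') -> red(c)
        intra_b = s_b[c] - C[c, :c].sum()               # same-shell blue stubs
        if s_b[c] > 0:
            R[c, 1, c, 0] = intra_b / s_b[c]            # blue(c) -> red(c)
        if s_r[c] > 0:
            R[c, 0, c, 1] = intra_b / s_r[c]            # red(c)  -> blue(c)
            R[c, 0, c, 0] = (2 * s_r[c] - C[c, c]
                             - 2 * C[c, c + 1:].sum()) / s_r[c]
    return R, w, kc, mean_k
\end{verbatim}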
To compute the size of the giant component in the limit of large networks ($N \rightarrow \infty$), we define a probability generating function (pgf)
\begin{align}
g_c(\bm{x}) & = \sum_{\bm{k}} P_c(\bm{k}) \prod_{i} \Big[ (1-T) + T \sum_{c^\prime,j}R(c^{\prime},j|c,i) x_{c^{\prime}j}\Big]^{k_i}
\end{align}
that generates the distribution of the number of nodes of each type (i.e., coreness $c^{\prime}$) that can be reached from a node of coreness $c$ (the subscript $j$ of the variable $x_{c^\prime j}$ indicates the color of the stubs from which the node has been reached). Similarly, let us consider a node of coreness $c$ that has been reached from one of its stubs, the distribution of the number and type of its other neighbors (its \textit{excess} degree distribution) is generated by one of the two following pgfs
\begin{align}
f_{cr}(\bm{x}) & = \sum_{\bm{k}} P_c(\bm{k}) \prod_{i} \Big[ 1-T + T \sum_{c^\prime,j}R(c^{\prime},j|c,i) x_{c^{\prime}j}\Big]^{k_i-\delta_{ir}} \\
f_{cb}(\bm{x}) & = \sum_{\bm{k}} \frac{k_b P_c(\bm{k})}{\langle k \rangle_c - c} \prod_{i} \Big[ 1-T + T \sum_{c^\prime,j}R(c^{\prime},j|c,i) x_{c^{\prime}j}\Big]^{k_i-\delta_{ib}}
\end{align}
depending on the color of the stubs from which the node has been reached. The size of the giant component is then given by (see Ref.~\cite{Allard13b} for a complete and more general theoretical framework)
\begin{align} \label{eq:hrn_pgf_order_param}
S = 1 - \sum_{c} w_c g_c(\bm{a})
\end{align}
where $\bm{a}\equiv\{a_{ci}\}_{c\in\mathbb{N},i\in\{r,b\}}$ is the probability that a node of coreness $c$ reached by one of its stubs of color $i$ does not belong to the giant component. These probabilities correspond to the stable fixed point of the system of equations
\begin{align} \label{eq:hrn_pgf_fixed_point}
a_{ci} & = f_{ci}(\bm{a})
\end{align}
with $c\in\mathbb{N}$ and $i\in\{r,b\}$. As the distributions generated by $f_{ci}(\bm{x})$ are normalized, $\bm{a}=\bm{1}$ is always a solution of Eqs.~\eqref{eq:hrn_pgf_fixed_point} and corresponds to the subcritical regime $S=0$. At $T=T_\mathrm{c}$, this fixed point undergoes a transcritical bifurcation and loses its stability to another solution in $[0,1)^{c_\mathrm{max}}$. This supercritical regime corresponds to the existence of a giant component ($S>0$); the critical point $T_\mathrm{c}$ is obtained from a stability analysis of Eqs.~\eqref{eq:hrn_pgf_fixed_point} around $\bm{a}=\bm{1}$.
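Numerically, Eqs.~\eqref{eq:hrn_pgf_fixed_point} can be solved by direct iteration starting from $\bm{a}=\bm{0}$, which in practice converges to the physical stable fixed point, after which Eq.~\eqref{eq:hrn_pgf_order_param} yields $S$. A schematic implementation (a sketch only; the pgfs $g_c$ and $f_{ci}$ are assumed to be supplied as callables built from $\mathbf{K}$ and $\mathbf{C}$) is:
\begin{verbatim}
def giant_component_size(g, f, w, types, tol=1e-12, max_iter=100000):
    """Iterate a_{ci} = f_{ci}(a) and return S = 1 - sum_c w_c g_c(a).

    g : callable g(c, a)      -- pgf g_c evaluated at the dict a
    f : callable f(c, i, a)   -- excess pgf f_{ci} evaluated at a
    w : dict w[c]             -- fraction of nodes of coreness c
    types : iterable of (c, i) pairs present, with i in {'r', 'b'}"""
    a = {ci: 0.0 for ci in types}
    for _ in range(max_iter):
        new = {(c, i): f(c, i, a) for (c, i) in a}
        if max(abs(new[ci] - a[ci]) for ci in a) < tol:
            a = new
            break
        a = new
    S = 1.0 - sum(w[c] * g(c, a) for c in w)
    return S, a
\end{verbatim}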
\begin{figure}[t]
\includegraphics[trim = 0mm 4mm 0mm 0mm, clip, height=3.95cm]{Hebert-Dufresne_Fig_4a_PolishGridLin} \\
\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=4.345cm]{Hebert-Dufresne_Fig_4b_PowerGridLin} \\
\caption{\label{fig:hrn_results_2}Results of bond percolation on real networks (black) compared with analytical predictions obtained with the CM (blue), CCM (green) and HRN (red). The networks are: (a) a subset of the power grid of Poland, and (b) the Western States Power Grid of the United States. See Table~\ref{tab:hrn_networks} for further details.}
\end{figure}
\subsection{Numerical HRN networks} \label{sec:hrn_hrn_algo}
To generate networks with a given core structure, we start with $N \gg 1$ nodes whose number of stubs is drawn from the degree distribution $\{P(k)\}_{k\in\mathbb{N}} = \{\sum_{c} K_{ck}\}_{k\in\mathbb{N}}$, and randomly match stubs to create edges (as done for the CM \cite{Newman02_PhysRevE}). Next, for each node, we assign a coreness $c$ with probability $Q_k(c) = K_{ck}/P(k)$; $c$ of its stubs are then randomly selected as red and the $k-c$ others are identified as blue. Finally, we apply the following Metropolis-Hastings rewiring algorithm (similar to the one proposed in Ref.~\cite{Newman02_PhysRevLett}). At each step, two edges are randomly selected: edge 1 joins nodes of coreness $c_1$ and $c_1^\prime$ via their respective stubs of color $i_1$ and $j_1$ ($c_2$, $i_2$, $c_2^\prime$ and $j_2$ for edge 2). We replace these two edges by edge 3 ($c_1$, $i_1$, $c_2$ and $i_2$) and edge 4 ($c_1^\prime$, $j_1$, $c_2^\prime$ and $j_2$) with probability
\begin{align*}
\min\left\{1,\frac{\Gamma(c_1,i_1;c_2,i_2)\Gamma(c_1^\prime,j_1;c_2^\prime,j_2)}{\Gamma(c_1,i_1;c_1^\prime,j_1)\Gamma(c_2,i_2;c_2^\prime,j_2)}\right\} \ ,
\end{align*}
where $\Gamma(c,i;c^\prime,j)$ is the wanted fraction of edges that join nodes of coreness $c$ and $c^\prime$ via their respective stubs of color $i$ and $j$. These fractions are readily obtained from the matrix $\mathbf{C}$ [joint probabilities of Eqs.~\eqref{eq:hrn_R}]
\begin{equation}
\begin{aligned}
\Gamma(c,r;c^\prime,b) = \Gamma(c^\prime,b;c,r) & = C_{cc^\prime} \\
\Gamma(c,r;c,b) = \Gamma(c,b;c,r) & = w_c(\langle k \rangle_c - c )/\langle k \rangle - \sum_{c^{\prime\prime}<c} C_{cc^{\prime\prime}} \\
\Gamma(c,r;c,r) & = 2w_cc/\langle k \rangle - C_{cc} - 2\sum_{c^{\prime\prime}>c} C_{cc^{\prime\prime}}
\end{aligned}
\end{equation}
where $c<c^\prime$, and $\Gamma(c,i;c^\prime,j)$ is zero for all other combinations. This procedure preserves the degree distribution and, up to finite-size constraints, has the desired core structure as its fixed point and is ergodic over the ensemble of networks defined by the HRN model. Figure~\ref{fig:hrn_validation} compares the predictions of Eqs.~\eqref{eq:hrn_pgf_order_param}--\eqref{eq:hrn_pgf_fixed_point} with the size of the giant component found in networks generated through this algorithm and shows perfect agreement.
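A sketch of a single step of this rewiring procedure (tracking only the stub types entering the acceptance ratio; in the actual simulation the stubs also carry the identity of the nodes they belong to) is:
\begin{verbatim}
import random

def stub_key(x, y):
    """Unordered key: Gamma is symmetric in its two stub arguments."""
    return tuple(sorted((x, y)))

def rewiring_step(edges, Gamma):
    """edges : list of stub-type pairs ((c, i), (c', j)), one per edge
    Gamma : dict mapping stub_key(...) to the target edge fraction"""
    e1, e2 = random.sample(range(len(edges)), 2)
    (a, b), (c, d) = edges[e1], edges[e2]
    num = Gamma.get(stub_key(a, c), 0.0) * Gamma.get(stub_key(b, d), 0.0)
    den = Gamma.get(stub_key(a, b), 0.0) * Gamma.get(stub_key(c, d), 0.0)
    # accept with probability min(1, num/den); forbidden pairings give num = 0
    if random.random() * den < num:
        edges[e1], edges[e2] = (a, c), (b, d)
\end{verbatim}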
\begin{figure}[t]
\includegraphics[width=\linewidth]{Hebert-Dufresne_Fig_5_cbyk}
\caption{\label{fig:hrn_cbyk}Relation between the highest coreness, $c_\mathrm{max}$, and the highest degree, $k_\mathrm{max}$, for different real networks. The dashed line corresponds to $c_\mathrm{max} \propto \sqrt{k_\mathrm{max}}$.}
\end{figure}
\begin{table*}
\centering
\caption{\label{tab:hrn_networks}Description and properties of the real networks used in Figs.~\ref{fig:hrn_validation}--\ref{fig:hrn_cbyk}.}
\begin{tabular}{l | c | c | c | c | c | c}
\hline
\hline
\multicolumn{1}{c|}{Description} & \multicolumn{1}{c|}{$N$} & \multicolumn{1}{c|}{$\langle k \rangle$} & \multicolumn{1}{c|}{$k_\mathrm{max}$} & \multicolumn{1}{c|}{$c_\mathrm{max}$} & Fig. & Ref. \\
\hline
Web of trust of the Pretty Good Privacy (PGP) encryption algorithm & 10 680 & 4.55 & 205 & 31 & \ref{fig:hrn_results_1}(c), \ref{fig:hrn_cbyk} & \cite{Boguna04_PhysRevE} \\
Snapshot of the Gnutella peer-to-peer network & 36 682 & 4.82 & 55 & 7 & \ref{fig:hrn_results_1}(a), \ref{fig:hrn_cbyk} & \cite{Ripeanu02_PeerToPeerSystems} \\
Large subset of the Facebook social network & 63 891 & 5.74 & 223 & 16 & \ref{fig:hrn_results_1}(f), \ref{fig:hrn_cbyk} & \cite{Viswanath09_WOSN} \\
Snapshot of the Gowalla location-based social network & 196 591 & 9.67 & 14 730 & 51 & \ref{fig:hrn_results_1}(b), \ref{fig:hrn_cbyk} & \cite{Cho11_KDD} \\
Email exchange network from an undisclosed European institution & 300 069 & 2.80 & 7 631 & 31 & \ref{fig:hrn_validation}, \ref{fig:hrn_cbyk} & \cite{Leskovec07_TKDD} \\
Subset of the World Wide Web & 325 729 & 6.69 & 10 721 & 155 & \ref{fig:hrn_results_1}(d), \ref{fig:hrn_cbyk} & \cite{Barabasi99_Science} \\
Co-authorship network of MathSciNet before 2008 & 391 529 & 4.46 & 496 & 24 & \ref{fig:hrn_validation}, \ref{fig:hrn_results_1}(e), \ref{fig:hrn_cbyk} & \cite{Palla08_NewJPhys} \\
&&&&&&\\
Subset of the power grid of Poland & 3 374 & 2.41 & 11 & 5 & \ref{fig:hrn_validation}, \ref{fig:hrn_results_2}(a), \ref{fig:hrn_cbyk} & \cite{Zimmerman2011} \\
Western States Power Grid of the United States & 4 941 & 2.67 & 19 & 5 & \ref{fig:hrn_results_2}(b), \ref{fig:hrn_cbyk} & \cite{Watts98_Nature} \\
&&&&&&\\
Email communication within the University Rovira i Virgili & 1 134 & 9.07 & 1 080 & 8 & \ref{fig:hrn_cbyk} & \cite{Palla05_Nature} \\
Protein-protein interactions in \textit{S. cerevisiae} & 2 640 & 5.00 & 111 & 8 & \ref{fig:hrn_cbyk} & \cite{Palla05_Nature} \\
Word association graph from the South Florida Free Association norms & 7 207 & 8.82 & 218 & 7 & \ref{fig:hrn_cbyk} & \cite{Palla05_Nature} \\
Network of hyperlinks between Google's webpages & 15 763 & 18.96 & 11 401 & 102 & \ref{fig:hrn_cbyk} & \cite{Farkas_directednetwork} \\
Structure of the Internet at the level of autonomous systems & 22 963 & 4.22 & 2 390 & 25 & \ref{fig:hrn_cbyk} & \cite{Hebert-Dufresne11_PhysRevLett} \\
Reply network of the social news website Digg & 30 398 & 5.60 & 283 & 9 & \ref{fig:hrn_cbyk} & \cite{b565} \\
The cond-mat arXiv co-authorship network circa 2005 & 30 561 & 8.24 & 191 & 15 & \ref{fig:hrn_cbyk} & \cite{Palla05_Nature} \\
Email interchanges between different Enron email addresses & 36 692 & 10.02 & 1 383 & 43 & \ref{fig:hrn_cbyk} & \cite{Klimt2004} \\
Brightkite location-based online social network & 58 228 & 7.35 & 1 134 & 52 & \ref{fig:hrn_cbyk} & \cite{Cho11_KDD} \\
Network of tagged relationships on the Slashdot news website & 77 360 & 12.13 & 2 539 & 54 & \ref{fig:hrn_cbyk} & \cite{Leskovec_08} \\
Friendships between 100 000 Myspace accounts & 100 000 & 16.82 & 59 108 & 78 & \ref{fig:hrn_cbyk} & \cite{Ahn:2007:ATC:1242572.1242685} \\
Network of interactions between the users of the English Wikipedia & 138 592 & 10.33 & 10 715 & 55 & \ref{fig:hrn_cbyk} & \cite{konect:maniu2011}\\
Co-acting network in movies released after December 31st 1999 & 716 463 & 21.40 & 4 625 & 192 & \ref{fig:hrn_cbyk} & \cite{Hebert-Dufresne11_PhysRevLett} \\
\hline
\hline
\end{tabular}
\end{table*}
\subsection{Results}
Figures~\ref{fig:hrn_results_1}--\ref{fig:hrn_results_2} compare the predictions of Eqs.~\eqref{eq:hrn_pgf_order_param}--\eqref{eq:hrn_pgf_fixed_point} with the size of the giant component found in real networks (see caption and Table~\ref{tab:hrn_networks} for a complete description), and with the predictions of the CM and the CCM. These particular networks were chosen to highlight some important results.
First, we find that the HRN model performs \textit{at least as well} as the CCM in all investigated cases. This observation is interesting as the HRN model requires less input information than the CCM. Indeed the required information scales roughly as $k_\mathrm{max} c_\mathrm{max} + c_\mathrm{max}^2$. As shown in Fig.~\ref{fig:hrn_cbyk}, $c_\mathrm{max}$ scales approximately as $k_\mathrm{max}^{1/2}$ in many real networks, hence the input information in the HRN model scales roughly as $k_\mathrm{max}^{3/2}$. Considering the fact that $k_\mathrm{max}$ in real networks is often well above 10\textsuperscript{2} (see Table~\ref{tab:hrn_networks}), this difference results in a much faster computation and a major memory gain. Moreover, this implies that, although the HRN model does not account explicitly for the degree-degree correlations, they are effectively captured by the matrices $\mathbf{K}$ (degree-coreness correlations) and $\mathbf{C}$ (coreness-coreness correlations). As shown on Figs.~\ref{fig:hrn_results_1}--\ref{fig:hrn_results_2}, this effect was observed on all available real-world networks.
Second, and perhaps surprisingly, we see in Fig.~\ref{fig:hrn_results_2}(a) that the ``S'' shape obtained from the Polish power grid, typically due to finite size, is well reproduced by the HRN model, which is formally infinite in size. More precisely, this shape is usually attributed to the finite size of the network ($N=3374$ for the Polish power grid) as the small components---whose average size formally diverges at $T=T_c$---are misinterpreted as an emerging giant component. Interestingly, the results from the HRN model suggest that this shape is not a numerical artifact of the percolation algorithm, but that it is rather a signature of its geographically-embedded nature due to strong \textit{coreness-related} correlations. This unexpected property of the HRN model is confirmed on another, more clustered, power grid on Fig.~\ref{fig:hrn_results_2}(b). In this case, adding clustering to the HRN is expected to shift its prediction towards higher values of $T$, i.e., closer to the results from the real network. In fact, the HRN model is more accurate in predicting percolation on the Polish power grid (clustering coefficient $C=0.02$) than for this grid ($C=0.08$). A clustered version of the HRN model seems to offer a promising avenue for the modeling of geographically-embedded networks such as power grids.
In this regard, the results of Figs.~\ref{fig:hrn_results_1}(e)--(f) further emphasize the importance of including the effect of clustering in a subsequent version of the HRN model. Indeed, co-authorship networks [Fig.~\ref{fig:hrn_results_1}(e)] are notoriously clustered networks as authors of the same paper are all connected via a fully-connected clique. Similarly, in Facebook [Fig.~\ref{fig:hrn_results_1}(f)], people belonging to the same social group (e.g., classmates, colleagues, teammates) all tend to be connected to one another, yielding almost fully-connected cliques. Again, we expect in this situation that clustering would reduce the size of the giant component (due to redundant connections in cliques), hence bringing the predictions of a clustered HRN model closer to the behaviors observed with the real networks.
\section{Conclusion}
We have shown that the core structure can be useful beyond the characterization and visualization of networks. It serves modeling efforts well and is efficient in reproducing the structural properties of real networks. Moreover, a few simple connection rules can enforce a core structure in random networks for which the outcome of bond percolation can be predicted with the well-established pgf approach\footnote{Codes solving the theoretical model and generating the networks are available at \texttt{http://dynamica.phy.ulaval.ca}.}. We feel that this work sets the stage for further improvements (specifically the inclusion of clustering) and paves the way towards a more complete analytical description of percolation on real networks.
\begin{acknowledgments}
The authors would like to acknowledge the financial support of the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and the Fonds de recherche du Qu\'ebec--Nature et technologies.
\end{acknowledgments}
\section{Introduction}
Heat conduction in molecular junctions\cite{Segal2016arpc}
has become a subject of increasing interests as a sub-field
of nanoscale energy dissipation and transport over the last decade\cite{Pop2010},
driven by technological considerations of both stability and functionality
of envisioned molecular electronic devices
as well as the need to understand the fundamentals of heat transport in nanosize systems\cite{Li2012rmp,Dubi2011}.
The most common approach to such calculations is based on classical molecular dynamics (MD) simulations
with substrate temperature controlled by generalized Langevin baths,
with the obvious deficiency of misrepresenting the dynamics of high frequency modes,
relying on the assumption that molecular heat conduction is dominated by modes in the lower frequency regime.
Alternatively, quantum calculations have been performed,
mostly based on the non-equilibrium Green's Function (NEGF) methodology\cite{Yamamoto2006,Wang2006prb}
and usually restricted to the harmonic part of the molecular force field.
These lead to Landauer-type expressions\cite{landauer1957} for the molecular heat conduction
in the harmonic approximation, analogous to their counterpart
in the problem of molecular electronic transport based on free-electron models.
Given the different ranges of applicability of classical dynamics on the one hand
and harmonic quantum dynamics on the other,
comparing their performance in evaluating and predicting heat molecular conduction is obviously of interest.
Another powerful tool to investigate many-body interactions, without requiring the harmonic approximation, is atomistic MD simulation.
MD simulation of an ergodic system allows calculation of Statistical Mechanical properties of a system (e.g. thermal conductivity) by analysis of the atomic trajectories.
MD simulations from previous reports, however, are mostly focused on specific systems such as liquids\cite{Zhang2005jpcb}, thin-films\cite{Lukes2000jht}, Graphene\cite{Berber2000prl}, or one-dimensional metal/semi-metal chains or wires\cite{Ness2017jcp}.
It is unclear how applicable these system-tailored simulations are to the thermal conduction in Single Molecule Junctions (SMJ).
A fully-functional MD simulation tool to study the structural dependence of molecular heat conduction, in which a full force-field is applied without particular system restrictions, is still lacking.
Focusing on classical simulations,
equilibrium MD (EMD) is one of the easiest approaches to implement.
One essentially applies the Green-Kubo formula
to the time-autocorrelation of the current
to get the thermal conductivity in the linear response regime
\cite{Ladd1986prb,Volz2000prb,Che2000jcp,Schelling2002prb,
McGaughey2004ijhmt1,McGaughey2004ijhmt2,Chen2012jap,Sellan2010prb}.
Besides being limited to the linear-response approximation,
the method also suffers from slow convergence and limited applicability to heterogeneous systems\cite{Schelling2002prb}.
Alternatively, under the nonequilibrium MD (NEMD) approach
\cite{Baranyai1996, Poetzsch1994, Oligschleger1999,Berber2000prl,Schelling2002prb, Jiang2010jap}
one creates a temperature gradient by separating the simulated system into ``slabs'',
and rescaling the atomic velocities at the ``heat source'' and ``heat sink'' slabs
to set the temperature boundary conditions.
In implementing this methodology care has to be taken
for the finite-size effects associated with the so-imposed boundary conditions\cite{Schelling2002prb,Sellan2010prb}.
Moreover, the role of the thermal baths is relatively obscure in this method.
Another popular tool is the so-called Reversed Nonequilibrium MD (RNEMD)
\cite{Jund1999,Muller-Plathe1997,Muller-Plathe1999,Zhang2005jpcb,Bagri2011nl,Dong2014scirep,Tang2013apl,Si2017ijhmt}
in which the effect (fluxes) and the cause (temperatures) are reversed:
one creates temperature differences by separating the simulated system into ``slabs'',
and enforces a given heat flux on the system
by taking a certain amount of kinetic energy from the ``heat source'' slab
and putting it into the ``heat sink'' slab,
until the system reaches steady state at which the temperature at the source
and sink sides is determined.
Since in this approach non-equilibrium is imposed by a constant heat flux,
it is limited to steady-state calculations.
The stochastic nonequilibrium MD (SNEMD) methodology
used in the present work is a variant of the NEMD outlined above,
in which velocity rescaling reflects the interaction with a generic (white) thermal bath
in a way consistent with the fluctuation-dissipation theorem.
To account for the actual spectral properties of the substrate, a section adjacent to the molecule is modeled explicitly
and filters the effects of the generic stochastic dynamics\cite{Goga2012jctc}
applied to bulk layers further from the molecular bridge.
We show the stability and applicability of our method
in calculating the temperature distribution, heat current, and thermal conductance
in various molecular junction settings. Under our approach,
the concept of temperature and build-up of thermal bias come in naturally,
without manually perturbing the system at each simulated time-step or reversing causality.
This paper is the first in a series in which
we plan to study the interplay between molecular composition and structure
and its heat transport properties.
For this purpose we have developed a numerical tool (described below)
that can be readily adapted to different molecules and structures.
Here we apply our tool to the study of heat transport in single alkane chains,
a system that has been studied numerically\cite{Segal2003,Kloeckner2016}
and experimentally (mostly for alkane layers\cite{Wang2006apl,Meier2014prl,Majumdar2015nl},
but very recently, for the first time, also for single alkane chains\cite{Cui2019nature}).
Our results serve to test our calculations against previous calculations
and, most importantly, against recent experimental results
as well as Landauer-based harmonic quantum calculations,
and demonstrate the applicability and robustness of these calculations.
Furthermore, we present heat conduction results also for a series of conjugated carbon chains,
which are better candidates for molecular electronic transport applications\cite{Crljen2007prl,Garner2018jpcc}
but, as we find, similar to their saturated counterparts in their heat transport behavior.
Section 2 provides details on our simulation technique and our code.
Section 3 discusses the results for the heat conduction properties
of different types of hydrocarbon chains within molecular junctions obtained with this approach,
and compares them to existing theoretical and experimental data.
In Section 4 we conclude and give future directions of research in this series.
\section{Model and Calculations}
While the energy spectrum of molecular vibrations encompasses a relatively large ($\sim$0--0.5~eV) frequency range,
high-frequency vibrations tend to be spatially localized
and energetically above the cutoff frequency of many solid-state substrates.
For these reasons, and also because such modes are not populated at room temperature,
they contribute little to molecular heat transport at that temperature\cite{Segal2003}.
Molecular heat transport is therefore dominated by lower frequency vibrations,
for which classical dynamics provide a reasonable approximation.
Molecular Force-Fields (FF) allow efficient representation of a classical,
anharmonic molecular Potential Energy Surface (PES) which is the input to the SNEMD studies described below.
In the present work we chose to represent the attributes of the thermal environment
using an explicit thermal bath.
As shown in Figure \ref{fig:schematic},
we extend our molecular system with one or more atomic layers of the substrate,
while the bulk atoms furthest from the molecular system are subjected to Markovian white noise
which is thus filtered by the explicit substrate layers.
Specifically, the interfaces between the molecule (Region I) and the baths (Region III)
on either side of it are denoted as Region II in the diagram (Figure \ref{fig:schematic}).
They are modeled using the same Molecular Mechanical Force Fields as the molecular system,
which are optimized for small organic and organometallic molecules (more details later in this section).
The interfaces consist of an explicit part of the bulk,
which can be seen as the tips of the measuring apparatus,
and are usually composed of layers of metallic materials (e.g., gold, platinum).
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{img/schematic.png}
\caption{A schematic diagram of the explicit bath model. Region I is the molecular system (including thiol groups); Region II represents the interface and is comprised of explicit layers of metallic materials;
Region III are implicit baths representing the infinitely large thermal reservoirs, exerting white noise.}
\label{fig:schematic}
\end{figure}
The calculation of the heat current starts
by representing the potential energy of the whole system
as a sum of individual interaction terms $V_\tau$,
where $\tau$ refers to different interaction types,
for example, the two-body interaction between atom 1 and 2
or the three-body interaction between atoms 1, 2, and 3.
We assume an interaction term $V_\tau$ can be further separated
as a weighted sum over the atoms connected by it,
weighted according to some partition scheme.
A generalized analytic formalism has been explored by Torii \textit{et al.}\cite{Torii2008jcp}, explicitly
\begin{equation} \label{eqn:potential_partition}
\begin{split}
E_{tot}&=\sum_i^{N}\frac{1}{2}m_i\mathbf{v}^2_i+\sum_\tau V_\tau \\V_\tau&=\sum_j^{n(\tau)}U_{\tau,j}, \\
U_{\tau,j}(\{\mathbf{r}_1\ldots\mathbf{r}_{n(\tau)}\})&=C_{\tau,j}V_\tau(\{\mathbf{r}_1\ldots\mathbf{r}_{n(\tau)}\}),\\
\sum_j^{n(\tau)}C_{\tau,j}&=1,
\end{split}
\end{equation}
where $n(\tau)$ is the number of atoms connected by the interaction $V_\tau$.
The portion of the potential energy $V_\tau$ assigned to atom $j$ is denoted $U_{\tau,j}$, with $C_{\tau,j}$ the corresponding weight.
Assigning specific energies to individual atoms is necessary in order to define atomic energies and energy flows between atoms,
but is obviously somewhat arbitrary.
In our modeling we chose an equal partitioning of each potential energy term among the participating atoms, i.e., $C_{\tau,j}=1/n(\tau)$.
With this partitioning defined, the heat flux associated with a given atom $i$ in the molecular system is given by
\begin{equation}
\label{eqn:heat_flux}
J_i\equiv\frac{dE_i}{dt}=\frac{d}{dt}\left( \frac{1}{2}m_i\mathbf{v}^2_i+\sum_\tau U_{\tau,i}\right) \\
=\sum_\tau\sum_{j=1}^{n(\tau)} J_{\tau,i j},
\end{equation}
where the heat flux going from atom $j$ to atom $i$
which are connected by $V_\tau$ is defined as,
\begin{equation}
\label{eqn:heat_flux_tau}
J_{\tau,i j} = C_{\tau,j}\mathbf{f}_{\tau,i}\cdot \mathbf{v}_i-C_{\tau,i}\mathbf{f}_{\tau,j}\cdot \mathbf{v}_j.
\end{equation}
We have defined $\mathbf{f}_{\tau,j}$ as the force derived from interaction $U_{\tau,j}$.
This is the core expression we use to calculate the inter-atomic heat currents.
In addition to the inter-atomic force fields we add the effect of the thermal baths through damping forces and random fluctuations from Langevin Dynamics.
We expect heat current values to plateau as the system tends towards steady-state.
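As an illustration, Eq.~\eqref{eqn:heat_flux_tau} with the equal-partitioning choice $C_{\tau,j}=1/n(\tau)$ reduces, for a single sampled time step, to the following sketch (variable names are ours):
\begin{verbatim}
import numpy as np

def pair_heat_current(f_tau_i, v_i, f_tau_j, v_j, n_tau):
    """J_{tau,ij}: heat current from atom j to atom i through one
    interaction term tau, assuming equal partitioning C = 1/n(tau).
    f_tau_i, f_tau_j : forces on atoms i and j from this interaction
    v_i, v_j         : velocities of atoms i and j (length-3 arrays)"""
    C = 1.0 / n_tau
    return C * np.dot(f_tau_i, v_i) - C * np.dot(f_tau_j, v_j)
\end{verbatim}
The total current between two atoms is obtained by summing this quantity over all interaction terms connecting them, followed by time and ensemble averaging.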
A customized Molecular Dynamics (MD) package built around
GROningen MAchine for Chemical Simulations
(GROMACS) 4.5\cite{Gromacs4.5} is developed and utilized to conduct the simulations.
The leap-frog algorithm (provided by GROMACS) is used for propagation of the deterministic parts of the system, while Langevin dynamics are used to propagate the stochastic parts of the simulation\cite{Goga2012jctc}.
Unless otherwise stated, the time step is always 1 fs for all runs and the coupling strength between the Markovian bath and the outermost layer of the explicit bulk is 1~ps$^{-1}$.
First, different utilities are used to prepare the initial conditions for the simulation.
These include open source software \textit{Open Babel}\cite{OpenBabel}, \textit{Avogadro}\cite{Avogadro} and other homemade programs and scripts for creating input topologies and indices.
The Universal Force Field (UFF) \cite{UFF} parameters are chosen throughout the simulations.
UFF is one of a few force fields that includes most of the atomic types and bonds across the periodic table, and thus is suitable for organometallic junctions.
As the high frequency carbon-hydrogen bonds often contribute little to the overall vibrational heat conduction,
it is reasonable to compare side by side the results obtained with and without hydrogen atoms appearing explicitly in the force field.
This will in principle determine whether it is a good approximation
to use the unified-atom (ua) version of UFF instead of the all-atom (aa) UFF.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{img/Tprofile_aa_vs_ua.pdf}
\caption{Temperature profile for 1,6-hexanedithiol, comparing UFF all-atom and UFF unified-atom force fields. The horizontal axis is labeled according to the index of the backbone atoms (Sulfur atoms included), and also including the layer of Gold atoms nearest the molecule (the leads), to the left most and right most. The temperatures of the left and right leads are set at 350K and 300K respectively.
The error bars represent the standard error.(see also Caption in Figure \ref{fig:conductance_layers})}
\label{fig:T_UFF}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{||c c c c c||}
\hline
& Conductance & SD & SE &\\
\hline
All-Atom & 20.73 & 7.44 & 0.74 &\\
Unified-Atom & 21.77 & 6.40 & 0.64 &\\
\hline
\end{tabular}
\caption{Heat conductance difference between UFF all-atom and UFF unified-atom force fields, for 1,6-hexanedithiol with three layers of Gold electrodes on each side.
SD stands for standard deviation, and SE for standard error. All values are given in units of pico-Watts per Kelvin.}
\label{table:UFF}
\end{table}
The local temperatures of the backbone carbons (together with sulfurs and first layer of gold)
remain essentially unchanged when switching from UFF-aa to UFF-ua (Figure \ref{fig:T_UFF}).
Table \ref{table:UFF} compares the effect of force-field choice between all-atom and a unified-atom approximation of UFF, on the heat conduction of the hexanedithiol molecule. The relative symmetric difference\footnote{The relative symmetric difference is $\delta(x,y) = \frac{|x-y|}{(x+y)/2}$} between the calculation results for the two force-fields is 4.89\%, and Welch's t-test\footnote{Welch's t-test is $\eta(x,y)=\frac{|\mathbf{E}[x]-\mathbf{E}[y]|}{\sqrt{\sigma_x^2+\sigma_y^2}}$, where $\sigma_x$ is the Standard Deviation in random variable $x$} is 13\%. Therefore, we conclude that the unified-atom approximation is acceptable for our purposes.
The simulation begins by preparing the desired molecular state
through building of the junction structure,
and optimizing its geometry to be at the configuration of minimal energy.
The structure is equilibrated to the average temperature of the baths,
and then propagated under the boundary conditions of the required temperature bias (e.g. 300K and 350K) until it reaches steady state (typically a few nanoseconds).
The steady state trajectories are sampled under this specific temperature bias. Pairwise forces between atoms, and between each bath and the atoms coupled to it, are also sampled.
Finally, the heat currents are calculated from the trajectories and forces. The heat currents are then time-averaged.
Ensemble-averages are performed to obtain statistically sound final currents and conductance.
As for Landauer-type calculations, a detailed description of the formalism is provided in the Supporting Information (SI), together with other relevant data and figures.
The heat current equations and computational apparatus described above
were used to calculate the individual heat currents between any atomic pair within the molecular system.
This is not limited to nearest-neighbour bonded atoms (bond stretching interactions),
but also applies to atoms that are three or four sites apart
but still interconnected by other interactions parameterized in the force fields (e.g., angle bending, torsion, etc.).
The heat current flowing from one heat bath to the other in the molecular junction
can be measured by setting up an imaginary plane
which is perpendicular to the longitudinal axis of the molecule
and sum over all the inter-atomic heat currents
going from one side of the plane to the other side of the plane.
In steady state, the measured heat current through the molecule is the same
regardless of where we choose to draw this imaginary plane.
For computational simplicity, we chose to draw it between Region I in Figure~\ref{fig:schematic} (the molecule) and Region II (the substrate interface).
The average thermal conductance is defined as the ratio between this quantity and the temperature bias between the hot and cold baths,
\begin{equation}
\kappa=\frac{J_{tot}}{T_\text{hot}-T_\text{cold}}.
\end{equation}
In addition to heat fluxes,
the local temperature of each atom in the conducting molecule
is calculated from the statistically averaged kinetic energy of the atoms.
More details regarding our methodology are given at the end of this article and in the Supporting Information (SI).
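Schematically, these two steady-state observables (the plane-crossing current with its conductance, and the local kinetic temperatures) can be assembled from the sampled data as in the following sketch (illustrative variable names; SI units assumed):
\begin{verbatim}
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J/K

def junction_conductance(J_pairs, left_atoms, right_atoms, T_hot, T_cold):
    """J_pairs: dict {(i, j): J_ij} of time-averaged inter-atomic heat
    currents flowing from atom j to atom i; the dividing plane separates
    the sets left_atoms and right_atoms."""
    J_tot = sum(J for (i, j), J in J_pairs.items()
                if j in left_atoms and i in right_atoms)
    return J_tot, J_tot / (T_hot - T_cold)

def local_temperatures(masses, velocities):
    """Equipartition estimate of per-atom temperatures from the averaged
    kinetic energy (3 degrees of freedom per atom).
    masses: shape (n_atoms,); velocities: shape (n_frames, n_atoms, 3)."""
    mean_v2 = np.mean(np.sum(velocities**2, axis=-1), axis=0)
    return masses * mean_v2 / (3.0 * KB)
\end{verbatim}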
\section{Results and discussion}
The system under investigation is a Single Molecule Junction
comprising an alkanedithiol \ch{HS(CH_2)_nSH}
as a molecular bridge connecting several layers of explicit bulk atoms.
We compared alkanedithiols of various lengths
(measured by the number of Carbon atoms in the alkane backbone),
and explicit bulk comprised of one to four layers of gold atoms
which are further connected to bulk substrates.
Some examples are illustrated in Figure \ref{fig:alkanesB}.
For each molecular species,
we performed MD simulations of the non-equilibrium molecular junction,
evaluating its steady-state heat transport behavior following the procedure described in Section 2.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{img/alkanes_drawings_on_Black_background.pdf}
\caption{Illustration of some of the alkane molecules studied in the simulation, with three layers of explicit gold bulk. The white noise thermostats are only attached to the layer of gold atoms furthest from the alkane bridge.
(a) 1,4-butanedithiol(\ch{HS(CH_2)_4SH}); (b) 1,8-octanedithiol(\ch{HS(CH_2)_8SH}).}
\label{fig:alkanesB}
\end{figure}
More computational details on the simulations at nonequilibrium steady state (NESS) are given in the SI.
In order to ascertain the relevance of the explicit baths modelling,
we compared results for molecular heat conductance
using different numbers of explicit gold layers to represent the bulk.
Specific simulations are performed on hexanedithiol (Figure \ref{fig:conductance_layers}).
The calculated conductance appears to converge
when three layers of explicit gold are used in the substrate representation.
A similar saturation of layer effect was found also in the other hydrocarbon molecules
(See SI for results of more alkanedithiols).
In agreement with this observation, a study by Zhang \textit{et al.}
on self-assembled monolayers showed
that the effect of the baths on the molecular system
is mainly due to the first few layers of gold substrate\cite{Zhang2010pccp}.
As seen in Fig. \ref{fig:auto},
the observed convergence reflects the convergence in spectral density properties
of the gold clusters used to represent the thermal baths.
It should be noted that to enforce the junction geometry,
the explicit bulk is position-restrained
by a harmonic force acting on the layer of atoms furthest from the molecule.
The spectral densities in Fig. \ref{fig:auto} are calculated for the position-restrained clusters
since the position restraint force is part of the spectral density affecting the molecule.
The signature of the harmonic position-restraining force is evident at its frequency of about 40 wavenumbers.
The rest of the spectrum shows frequencies in the range of tens to a few hundred wavenumbers,
which agrees with experimental measurements of vibrational DOS for gold nanoparticles\cite{Carles2016scirep}.
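The spectral densities shown in Fig.~\ref{fig:auto} follow from the velocity trajectory of the tip atom through a standard autocorrelation analysis; a minimal sketch of such an analysis (frequencies come out in 1/ps, i.e. THz, and multiplying by roughly 33.4 converts them to wavenumbers) is:
\begin{verbatim}
import numpy as np

def vacf_spectrum(v, dt):
    """Normalized velocity autocorrelation C_vv(t) of one atom and its
    Fourier transform.  v: array (n_steps, 3); dt: sampling interval in ps."""
    n = v.shape[0]
    # linear autocorrelation via zero-padded FFT (Wiener-Khinchin theorem)
    vp = np.concatenate([v, np.zeros_like(v)], axis=0)
    acf = np.fft.ifft(np.abs(np.fft.fft(vp, axis=0))**2, axis=0).real[:n]
    acf = acf.sum(axis=1) / np.arange(n, 0, -1)   # dot product, unbiased
    acf /= acf[0]                                 # C_vv(0) = 1
    freq = np.fft.rfftfreq(n, d=dt)               # in 1/ps (= THz)
    spec = np.abs(np.fft.rfft(acf))
    return acf, freq, spec / spec.sum()           # spectrum normalized
\end{verbatim}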
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{img/conductance_vs_num_gold_layers.pdf}
\caption{Heat conductance for the molecule \ch{HS(CH_2)_6SH} given different numbers of gold layers as the explicit bulk.
The temperature bias is set at 300K to 350K. The bars shown in the figure are the standard errors (SE)
of the conductance measurements.
(SE=Standard Deviation (SD) / square root of the sample size,
is a statistical uncertainty indicator of the estimated mean value of the conducted measurements.\cite{SE})
}
\label{fig:conductance_layers}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{img/auto_fourier_molecule_ps.pdf}
\caption{Velocity-velocity autocorrelation functions of the only atom in the first layer of each of four different gold clusters.
The outer-most layers are attached to Markovian thermal reservoirs at temperature of 300K and the clusters are allowed sufficient time (e.g. a few nanoseconds) to relax to the temperature of the bath.
Column (a): Velocity time-autocorrelations, $C_{vv}(t)$, which are normalized to the value at $t=t_1-t_2=0$;
Column (b): The Fourier transforms of the corresponding correlations, normalized across the whole spectra;
Column (c): Artistic representation of the corresponding gold clusters}
\label{fig:auto}
\end{figure}
To study the effect of the molecular structure on the junction conductance, we compared molecules of various lengths and of different degrees of saturation.
Namely, we compared molecules with an alkane backbone (saturated) against molecules with a conjugated polyyne backbone (unsaturated)\footnote{A polyyne is an organic compound with alternating single and triple bonds;
that is, a series of consecutive alkynes, $\left(- C \equiv C - \right)_n$ with $n$ greater than 1.}.
Unless otherwise specified,
the simulation results displayed below were obtained
using three explicit atomic layers for the gold substrates.
Figure~\ref{fig:conductance_conjugated} shows
that heat conductance of shorter molecules ($n < 6$)
is not strongly affected by carbon bond saturation,
however longer unsaturated chains exhibit lower conductance than their saturated counterparts.
This observation stands in contrast to the higher electronic transport properties of conjugated chain molecules\cite{Crljen2007prl},
and may be explained by the difference in current carriers of thermal and electronic transport in these systems:
While the delocalized electrons in conjugated molecules may contribute much to the overall electronic conduction,
heat conduction, which is dominated by phonon transport,
is mostly determined by bond structure and vibrational modes in the molecular system.
The steady state temperature profiles associated with
the results of Figure \ref{fig:conductance_conjugated} are displayed in Figure \ref{fig:T_alkane}.
The bias (300K - 350K) clearly exceeds the regime of validity of linear response,
yet is more realistic with respect to existing experimental setups
\cite{Meier2014prl,Wang2006apl,Majumdar2015nl,Cui2019nature}.
The temperature profiles show that most of the thermal resistance is interfacial.
Even the longer molecules are homogeneous enough
that the temperature profile does not slope significantly,
which could point to a ballistic regime of heat conduction in the molecular bridge.
This can be ascribed to the explicit modelling of the molecule-bulk interface at the atomic level.
The features of the interfacial temperature already show some filtering effects from the white baths:
the first layer of gold bulk on the left is about 10 degrees colder than the external hot reservoir,
and the first layer on the right is about 10 degrees warmer than the external cold reservoir.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{img/conductance_alkanes_saturation_upto12.pdf}
\caption{Length-dependent heat conductance of alkane chains (saturated) and triple-bond conjugated hydrocarbon chain (unsaturated) molecules for a temperature bias of 300K and 350K.
The error bars represent the SE (see Caption in figure \ref{fig:conductance_layers}) of the conductance measurements.}
\label{fig:conductance_conjugated}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{img/Tprofile_alkane_polyyne248.pdf}
\caption{Temperature profile for non-branching alkanedithiols of various lengths. In the legend, the molecules with and without hydrogen atoms are alkanes and conjugated polyynes, respectively. The horizontal axis is labeled according to the index of the backbone atoms (Sulfur atoms included), and also including the layer of gold atoms nearest the molecule (the leads), to the left most and right most. The temperatures of the left and right leads are set at 350K and 300K respectively. That is, atom zero is always the gold atom to the left of the alkanedithiol, atom one is the left Sulfur atom, and the same on the right.
The error bars represent the SE (see Caption in figure \ref{fig:conductance_layers}) the temperature measurements.}
\label{fig:T_alkane}
\end{figure}
Next, consider the results obtained from the quantum-mechanical harmonic model calculation.
The theoretical and numerical details of the Landauer approach for calculating heat conductance
have been specified in the Supporting Information.
Figure \ref{fig:conductance_MD_vs_Landauer} compares the results of this model
to those obtained from the classical MD simulation.
While the two methods diverge to some extent
in the results for the alkanedithiols
(which we explain later in this section),
the remarkable agreement for the polyyne molecules
indicates that heat conduction in these systems
is determined by the harmonic part of the force field
and dominated by modes from the lower frequency range of the molecular spectrum.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{img/conductance_MD_vs_Landauer_2panel.pdf}
\caption{Length-dependent heat conductance of alkane chains (saturated, in the upper panel) and triple-bond conjugated hydrocarbon chain (unsaturated, in the lower panel) molecules, obtained with GROMACS MD simulations and Landauer-type quantum calculations, respectively. In all cases, thiol head-groups are attached, and the currents are calculated across a temperature bias from 350K to 300K, with conductance defined as the ratio between heat current and temperature bias (50K).}
\label{fig:conductance_MD_vs_Landauer}
\end{figure}
Finally, consider the absolute values of the calculated heat conductances.
In general, our results are in agreement with the most recent experimental measurements
for heat conduction in SMJs \cite{Cui2019nature}.
For the saturated alkanes, the Landauer conductance starts to fluctuate and then goes down as the chain grows even longer (See SI for more data),
while the MD results decrease slowly and stay relatively stable around the value 20 pW/K.
Though the non-monotonic behavior of the Landauer calculations aligns with an independent ab initio Landauer-type calculation\cite{Kloeckner2016},
the MD simulations align more closely with the experimental data and trend\cite{Cui2019nature},
indicating that anharmonicity plays a tuning role in molecular thermal transport.
It is interesting and intuitive to view heat conduction from a harmonic normal-mode perspective
(i.e., the normal-mode density, localization, and transmission probability; see SI for a detailed analysis),
but at room temperature classical MD seems to do better in taking into account both harmonic and anharmonic effects
by utilizing the full force-field interactions of the molecular potential energy.
\section{Conclusion}
In summary, we have presented results of classical MD simulations using stochastic Langevin thermal baths,
as well as results of a quantum calculation based on the harmonic part of the molecular force field,
for the steady-state heat conduction of molecular junctions comprising saturated and conjugated hydrocarbon chains connecting gold leads.
The multiple layers of explicit gold substrate act as filters of the white noise exerted by the larger environment
and bring characteristic bath effects to the heat-conducting molecular systems under investigation.
The high degree of agreement between our simulations and the most recent experimental measurements\cite{Cui2019nature}
also validates the methods and numerical tools we use.
For the alkanedithiols in particular,
MD simulation agrees better with the experiment than Landauer-type quantum calculation in conditions of room temperature and with large bias.
This may hint that at ambient conditions,
the explicit treatment of quantum effects is less relevant than explicit treatment of anharmonicity,
and linear response less relevant than the behavior of systems far-from-equilibrium.
For conjugated hydrocarbons,
while previous studies have shown unusually high electronic conduction of polyynes\cite{Crljen2007prl},
our study shows that they have a lower thermal conductance than their saturated counterparts,
which might indicate that they are potentially good candidates for thermoelectric nanomaterials.
Still, the effects of complex molecular structures and topologies on nanoscale heat conduction are not yet clear,
and will be future directions of research in our group, making full use of the atomistic resolution afforded by our approach.
\begin{acknowledgement}
The research of AN is supported by the Israel-U. S. Binational Science Foundation,
the German Research Foundation (DFG TH 820/11-1), the U.S. National Science Foundation (Grant No. CHE1665291), and the University of Pennsylvania.
\end{acknowledgement}
\section{S1. Details on molecular configuration preparation and MD simulation procedures}
There are mainly three phases in using our customized GROMACS\cite{Gromacs4.5} package
to conduct simulations on heat conduction in molecular junctions,
including both classical MD and quantum Landauer transport
(as illustrated in figure \ref{fig:procedure}):
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/procedure_diagram.png}
\caption{Molecular heat conduction MD simulation procedure and implementation diagram,
with Landauer-type quantum calculations included.}
\label{fig:procedure}
\end{figure}
(I) Configuration preparation.
The open-source graphic cross-platform molecule editor,
\textit{Avogadro}\cite{Avogadro},
is used to design the initial molecular topologies as one wishes
(e.g. alkanedithiols with three layers of gold substrates).
Then these configurations are energy minimized
with \textit{Avogadro}'s native utility,
and saved as \textit{pdb} formats,
which are eventually transformed into GROMACS's recognizable input files
(\textit{gro} and \textit{top})
using \textit{Open Babel}\cite{OpenBabel}
with the Universal Force Field (UFF)\cite{UFF}
implemented as molecular force field parameters.
(II) Production runs.
Within GROMACS the molecular configurations are further energy minimized
to ensure stable structure for subsequent dynamical runs and normal mode analysis.
For MD production runs, the systems are first equilibrated to the average temperatures
of the baths (usually taking a few hundred picoseconds),
and then relaxed with a single long trajectory (a few nanoseconds)
to the nonequilibrium steady state.
From the tail of this trajectory, thousands of parallel
at-steady-state trajectories are launched
to obtain large enough statistical ensembles.
At the end of this phase, as outputs, normal modes, steady-state trajectories,
and inter-atomic forces are collected.
(III) Post-data processing.
The heat currents are calculated according to the equation
in the Main text using velocities and inter-atomic
forces sampled at the end of the last step.
The Landauer-type currents are calculated
through a different channel,
by using the formalism described in the last section of this SI.
The resultant quantities (e.g. currents, local temperatures, conductance, \textit{etc.})
are averaged (with error estimation) and plotted for illustration purposes.
\section{S2. Supplemental MD simulation results}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/conductance_alkanes_1vs3_gold_layers_upto16.pdf}
\caption{Heat conductance simulated for alkanes of various lengths, with different numbers of explicit gold layers representing the baths.
The bath temperatures are set at 300K to 350K. The bars shown in the figure are the standard errors.}
\label{fig:AuLayer}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/conductance_alkanes_lowTvsRoomT_upto16}
\caption{Comparison of low temperature conductance with room temperature conductance for different length of alkanedithiols with 3-layers of explicit gold bulks, from GROMACS MD simulations.}
\label{fig:conductance_alkanes_lowTvsRoomT_upto16}
\end{figure}
\section{S3. Landauer-type calculations for hydrocarbon heat conduction}
\subsection{Landauer formalism}
When only the harmonic part of the system-bath interactions are taken into account,
the (phononic) heat current can be expressed by the Landauer formula,
\begin{align}
\label{eqn:Landauer_heat}
J = \frac { \hbar } { 2\pi } \int _ {0} ^ { \infty } \mathcal { T } ( \omega ) \left[ f \left( \omega , T _ { L } \right) - f \left( \omega , T _ { R } \right) \right] \omega d \omega
\end{align}
which has been derived in various theoretical approaches and employed in different systems\cite{Segal2016arpc,Dhar2006,Dhar2008,Segal2003,Wang2006prb,Yamamoto2006,Kloeckner2016}. Here $f$ is the Bose-Einstein distribution function, which depends on the temperature of the bath, $f(\omega, T)=(e^{\hbar\omega/k_BT}-1)^{-1}$,
and $\mathcal { T }$ is the transmission probability.
In the Linear Response regime,
the thermal conductance is taken as the derivative of the heat current with respect to the temperature,
\begin{align}
\kappa=\frac { \hbar } { 2\pi } \int _ {0} ^ { \infty } \mathcal { T } ( \omega ) \frac{\partial f \left( \omega , T \right)}{\partial T} \omega d \omega
\end{align}
The transmission function is often expressed using the Meir-Wingreen formula\cite{Meir1992prl}
\begin{align}
\label{eqn:transmission}
\mathcal { T } ( \omega ) = \operatorname { Tr } \left[ \boldsymbol { G } _ { S } ^ { r } ( \omega ) \mathbf { \Gamma } _ { L } ( \omega ) \boldsymbol { G } _ { S } ^ { a } ( \omega ) \mathbf { \Gamma } _ { R } ( \omega ) \right],
\end{align}
in which $\bm{G}_S^{r}$ and $\bm{G}_S^a=[\bm{G}_S^r]^\dagger$ are the retarded and advanced Green's functions of the system, which can be written as\cite{Dhar2006}
\begin{align}
\label{eqn:Green_harmonic}
\bm{G}_S^{r/a}(\omega)=[\omega^2\bm{M}-\bm{D}-(\bm{\Sigma}_L^{r/a}+\bm{\Sigma}_R^{r/a})]^{-1},
\end{align}
where, in the basis of the atomic coordinates, $\bm{D}$ is the dynamical matrix (or Hessian) whose elements are the second derivatives of the total energy with respect to the atomic coordinates,
$\bm{M}$ is the (diagonal) matrix of atomic masses, and $\mathbf{\Sigma}^{r/a}_K$ is the retarded (or advanced, respectively) self-energy of the $K$ bath ($K \in {L, R}$).
$\mathbf{\Gamma}_L$ and $\mathbf{\Gamma}_R$ are the spectral function matrices of the baths, reflecting the coupling strengths of the systems to the leads.
\begin{align}
\label{eqn:coupling}
\bm{\Gamma}_{L/R}(\omega)=i[\bm{\Sigma}^r_{L/R}(\omega)-\bm{\Sigma}^a_{L/R}(\omega)].
\end{align}
When spectral density is taken as Ohmic\cite{Dhar2008,Segal2016arpc}, the self-energies are defined as $[\bm{\Sigma}^r_L]_{ij}\equiv -i\omega\gamma_L(\omega)\delta_{ij}$ for all atomic indexes $i$ (and $j$) which correspond to coordinates of atoms at the left-most layer of bulk, and directly coupled to the left heat reservoir, which is characterised by a dissipation coefficient $\gamma_L(\omega)$. Correspondingly for the right bath.
Also, taking both baths to be white, $\gamma_L(\omega)$ and $\gamma_R(\omega)$ are constant.
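A direct numerical transcription of Eqs.~\eqref{eqn:transmission}--\eqref{eqn:coupling} and of the Landauer integral, Eq.~\eqref{eqn:Landauer_heat}, for such white baths could look as follows (a sketch only; $\bm{M}$ and $\bm{D}$ are the mass and dynamical matrices in the atomic-coordinate basis, and SI units are assumed):
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34   # J s
KB   = 1.380649e-23      # J/K

def transmission(omega, M, D, left_idx, right_idx, gamma):
    """Phonon transmission T(omega) for white baths attached to the
    coordinates listed in left_idx / right_idx."""
    n = D.shape[0]
    SigL = np.zeros((n, n), dtype=complex)
    SigR = np.zeros((n, n), dtype=complex)
    SigL[left_idx, left_idx] = -1j * omega * gamma    # retarded self-energies
    SigR[right_idx, right_idx] = -1j * omega * gamma
    Gr = np.linalg.inv(omega**2 * M - D - SigL - SigR)  # retarded Green's fn
    Ga = Gr.conj().T                                    # advanced Green's fn
    GamL = 1j * (SigL - SigL.conj().T)                  # bath spectral fns
    GamR = 1j * (SigR - SigR.conj().T)
    return np.trace(Gr @ GamL @ Ga @ GamR).real

def landauer_current(omegas, trans, T_L, T_R):
    """Heat current by quadrature; omegas (rad/s) should start just above
    zero to avoid the removable singularity of the Bose function."""
    f = lambda w, T: 1.0 / np.expm1(HBAR * w / (KB * T))
    integrand = trans * (f(omegas, T_L) - f(omegas, T_R)) * omegas
    return HBAR / (2.0 * np.pi) * np.trapz(integrand, omegas)
\end{verbatim}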
\subsection{Methodology}
The main ingredient needed for the use of the Landauer formula in molecular systems
is to reliably obtain the dynamical force matrix $\bm{D}$.
The GROMACS~\cite{Gromacs4.5} software package provides utilities which construct the Hessian matrix of the system under study. Furthermore, it has the ability to diagonalize the obtained Hessian, providing the eigenfrequencies and eigenvectors (normal modes) of the system.
\subsection{Results}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/conductance_lnd_layer.pdf}
\caption{Length-dependent thermal conductance, calculated through the Landauer formula. The exterior-most layers of explicit bulk are coupled to white noise baths. The conductance denotes the ratio between heat current and temperature bias, with left and right baths at 300K and 350K respectively. There are three Gold layers,
in accordance with the MD simulations.}
\label{fig:conductance_Landauer_20}
\end{figure}
By adding gold layers to different alkane molecules (Figure \ref{fig:conductance_Landauer_20}), we find
\begin{enumerate}
\item The presence of thiol groups enhances alkane conductance.
\item The trend of a conductance increase and eventual decrease (as a function of chain length), as reported in \textit{ab-initio} calculations of similar systems\cite{Kloeckner2016}, appears only when the layers of the gold leads are included explicitly.
\end{enumerate}
To see how different molecular vibrational modes line up with the bath spectral density, the Normal Mode Analysis (NMA) of various molecules is performed (results shown in Figure \ref{fig:modes_histogram_triple}).
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/modes_histogram_Wtriple_goldbulk_withmolecule.pdf}
\caption{
Histograms of the Normal Mode Analysis of some representative molecules.
The x-axes are the histogram bins (frequencies in wavenumbers), and the y-axes are the count of modes with frequency in each bin.
The corresponding molecular topologies are drawn within each panel.
Note that the first and last atom in each molecule are Sulfur.
The first row shows the normal mode histograms for disulfur and a three-layer gold cluster (left and right columns, respectively).
The following rows depict alkanes (left) and polyynes (right) with 4, 8, and 16 Carbon units, respectively.}
\label{fig:modes_histogram_triple}
\end{figure}
While the molecules have some modes up to a thousand wavenumbers, the significant band of bulk phonons lies between a few tens and a few hundred wavenumbers,
as seen in the gold bulk spectral density figure in the main text.
The high-frequency molecular vibrational modes, therefore, do not overlap with the bulk modes.
The prominent peaks of molecular vibrations at lower frequencies are due to rigid-body center-of-mass motion, or to such degrees of freedom hybridized with the internal vibrations.
Other factors which could affect the overall heat conduction include normal mode localization\cite{Segal2003}, the system-bath coupling, and the transmission coefficients in the picture of coherent transport\cite{Kloeckner2016}.
The relevant investigations will require analysis of energy flows within the molecules and between the molecules and the baths, not only in real space but also in frequency space, as well as detailed comparisons to Landauer-type conductivity calculations.
We compare the localization properties of the normal modes
for the molecules of different lengths and saturations.
Denoting by $C_{n_i,\alpha}$ the transformation coefficients from
the local atomic basis to the normal mode basis,
a localization factor (also called the participation index) for each mode can be defined
as \cite{Segal2003}:
\begin{align}
P_\alpha(n) = \sum_{n_i} |C_{n_i,\alpha}|^2,
\end{align}
where $n$ is the index of the atoms in the molecule, $\alpha$ denotes the mode,
and $n_i$ labels the degrees of freedom of atom $n$, running from 1 to 3.
To see the localization on the carbon-based backbone,
this factor can further be summed over the nearby hydrogens.
\begin{align}
P_\alpha(n_B) = \sum_{n=1}^{n_H}P_\alpha(n),
\end{align}
where $n_H$ denotes the number of hydrogens bonded to the backbone atom $n_B$.
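The sketch below (Python/NumPy) shows how these localization factors can be accumulated from the eigenvector matrix; the bookkeeping inputs \texttt{atom\_of\_dof} (mapping each Cartesian degree of freedom to its atom) and \texttt{hydrogens\_of} (mapping each backbone atom to its bonded hydrogens) are assumed to be taken from the molecular topology.
\begin{verbatim}
import numpy as np

def participation(modes, atom_of_dof, backbone, hydrogens_of):
    """P_alpha(n) = sum_{n_i} |C_{n_i,alpha}|^2 per atom, plus the version
    collapsed onto backbone atoms by adding their bonded hydrogens."""
    n_dof, n_modes = modes.shape
    n_atoms = int(np.max(atom_of_dof)) + 1
    P = np.zeros((n_atoms, n_modes))
    for d in range(n_dof):
        P[atom_of_dof[d]] += np.abs(modes[d]) ** 2
    P_backbone = {nB: P[nB] + sum((P[h] for h in hydrogens_of.get(nB, [])),
                                  np.zeros(n_modes))
                  for nB in backbone}
    return P, P_backbone
\end{verbatim}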
\begin{figure}\ContinuedFloat
\includegraphics[width=1\textwidth]{supplemental/img/Localization_N4N12_a.pdf}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/Localization_N4N12_b.pdf}
\caption{Localization coefficients
for different hydrocarbon molecules.
$N$ is the number of backbone carbon atoms, and all molecules are capped with thiol groups.
"Saturated" refers to alkanedithiols(upper six panels labeled as (a)),
while "Unsaturated" refers to conjugated carbon chains with alternating single and triple bond (Polyynes)(lower six pannels labeled as (b)).
The numbers in the legends are the corresponding frequencies (unit in $cm^{-1}$, ranging from low, medium to high)
of the selected normal modes of the hydrocarbon chain molecules.
}
\label{fig:localization}
\end{figure}
From Figure \ref{fig:localization}, the high frequency modes
($\sim 3000 cm^{-1}$ for alkanes and $\sim 2000 cm^{-1}$ for polyynes) seem
more spatially localized than low (less than 100 $cm^{-1}$) and medium
($\sim 1200 cm^{-1}$ for alkanes and $\sim 600 cm^{-1}$ for polyynes)
frequency modes, which might indicate that those high frequency modes
are less conductive even when the baths contain high frequency phonons.
Nonetheless, one has to be careful not to ignore
the couplings among molecular modes and between system and baths as well.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/transmissionN4N12.pdf}
\caption{The transmission probabilities
$ \mathcal{T}(\omega) $
for different hydrocarbon molecules.
$N$ is the number of backbone carbon atoms, and all molecules are capped with thiol groups.
"Saturated" refers to alkanedithiols, while "Unsaturated" refers to conjugated carbon chains with alternating single and triple bond (Polyynes).
Each molecule is adsorbed on a gold surface modeled using 3 explicit bulk layers.}
\label{fig:transmission}
\end{figure}
As an important component of Landauer's description,
the transmission coefficients may be a more direct indicator
of heat conduction than the normal mode density and localization.
The comparison of transmission between short and long chains, of both saturated and unsaturated hydrocarbons (Figure \ref{fig:transmission}), provides insights into the contribution of different vibrational modes
to the overall heat conduction. Based on these results, one may make a qualitative observation that the short saturated hydrocarbons are the most conducting, while long unsaturated hydrocarbons are less conductive.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/derivative_average_tempeture.pdf}
\caption{Comparison between derivative conductance and finite-bias conductance. For the finite-bias case, bath temperatures of 300K and 350K are taken (i.e. a bias of 50K). This is compared with two derivative conductance calculations, around temperatures of 300K and 350K.}
\label{fig:derivative_average_tempeture}
\end{figure}
We would now like to see how the choices of different conductance expressions (using the derivative form or the finite-bias form) influence the results.
It turns out that the influence is negligible across a range of alkane chain lengths (Figure \ref{fig:derivative_average_tempeture}). The conductance at 350K is slightly higher and 300K slightly lower, but the difference between either of them and the finite-bias result is significantly smaller than the values of the relevant conductances themselves.
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{supplemental/img/temperature_Landauer_C10.pdf}
\caption{Temperature dependence of the derivative Landauer thermal conductance, for decanedithiol (\ch{HS(CH_2)_10SH}) adsorbed on a substrate modelled by 3 layers of explicit gold atoms.}
\label{fig:temperature_Landauer_C10}
\end{figure}
Figure \ref{fig:temperature_Landauer_C10} shows the temperature dependence of the heat conductance of decanedithiol.
The conductance increases rapidly up to about 100K, and then starts to plateau as the temperature increases further. This trend agrees with results reported for an identical alkane \cite{Kloeckner2016}.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{supplemental/img/conductance_lnd_low_vs_room_temperature.pdf}
\caption{Comparison of Landauer conductance at low vs. room temperatures (temperature bias the same in both cases) for alkanedithiols of different lengths, with 3 layers of explicit gold bulk.}
\label{fig:conductance_lnd_low_vs_room_temperature}
\end{figure}
The comparison of conductance between high (300K to 350K) and low (25K to 75K) temperatures (Figure \ref{fig:conductance_lnd_low_vs_room_temperature})
from the Landauer calculations
gives us a sense of the importance that the temperature factor
plays in molecular heat conduction simulations.
From this perspective, the classical MD simulations (Figure \ref{fig:conductance_alkanes_lowTvsRoomT_upto16}) are not a
reliable reference in the low temperature limit:
the characteristic energy of the phonons will be much lower, and thus so will be the conductance at low temperature, whereas the classical simulation does not show any major difference between the low and high temperature profiles (Figure \ref{fig:conductance_alkanes_lowTvsRoomT_upto16}).
Therefore, we may propose a combination of probing heat conduction with classical MD at high temperature (where scattering might be more important), and with the quantum Landauer approach in the low temperature limit (where the harmonic interaction dominates).
These benefits are all included in our newly developed simulation package presented here.
To calculate the rate of energy dissipation in the
two-level system with a given momentum value ${\bf p}$,
\begin{eqnarray}
\dot {\cal E}_{\bf p}&=&-vp \frac{\partial}{\partial t} \left(|b_{\bf p}(t)|^2-|a_{\bf p}(t)|^2\right)\nonumber\\
&&= -2vp ~ \text{Re} \left(b^*_{\bf p} \frac{\partial b_{\bf
p}}{\partial t} -a^*_{\bf p} \frac{\partial a_{\bf p}}{\partial
t}\right),
\end{eqnarray}
we must keep only the relaxation terms in the expressions for the
rate of change of the amplitudes, (\ref{amplitudes_eqs1}), since
only these terms correspond to the net loss of energy.
\begin{eqnarray}
\dot {\cal E}_{\bf p}=2vp \left(|b_{\bf p}|^2+\text{Re} ~a^*_{\bf
p}(1- a_{\bf p})\right ).
\end{eqnarray}
Substituting here the expressions for the amplitudes (at $t\to
\infty$) from Eqs.~(\ref{amplitudes_eqs1}) we find
\begin{eqnarray}
\dot {\cal E}_{\bf p}= \frac{4vp \gamma_p^3 \Omega^2_{\bf p}}{
(\Omega^2_{\bf p}+\gamma_p^2)^2+\gamma_p^2 (\omega-2vp)^2}.
\end{eqnarray}
The total dissipation is obtained upon summation over all momenta, the two
spin directions and both Dirac points, $\dot {\cal E} =4\sum_{\bf
p}\dot {\cal E}_{\bf p}$. We thus obtain
\begin{equation}
\dot {\cal E} =\frac{e^2 v E_0^2}{4\pi^2\omega^2}
\int\limits_0^{2\pi} \int\limits_0^\infty \frac{\gamma_p p^2 dp
~\sin^2{\chi} d\chi}{(2p-\frac{\omega}{v})^2+\beta^2(p,\chi)},
\end{equation}
where
$$
\beta(p,\chi)=\frac{\gamma_p}{2v}+\frac{e^2 vE_0^2\sin^2{\chi}}{8
\gamma_p \omega^2}.
$$
As long as $\gamma_p \ll \omega/v$ the integrand is a sharply peaked
function of $p-\omega/2v$ and the integration over $dp$ can be
carried out easily. The subsequent integration over the angle is
straightforward and yields,
\begin{equation}
\dot {\cal E} =\frac{\gamma^2 \omega^2}{v^2} \left( 1-\frac{2\gamma
\omega }{\sqrt{4\gamma^2 \omega^2 +e^2v^2E_0^2}} \right),
\end{equation}
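For reference, the angular integration in the last step can be reduced to the elementary identity (valid for $a,b>0$)
$$
\int_0^{2\pi}\frac{\sin^2{\chi}\, d\chi}{a+b\sin^2{\chi}}=\frac{2\pi}{b}\left(1-\sqrt{\frac{a}{a+b}}\right),
$$
applied with $a=\gamma/2v$ and $b=e^2vE_0^2/(8\gamma\omega^2)$ (with $\gamma$ evaluated at $p=\omega/2v$), which produces the square-root factor in the expression above.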
Crossed product $C^*$-algebras are among the most important and most studied classes of $C^*$-algebras. They provide deep connections between the theories of $C^*$-algebras and dynamical systems. The problem of simplicity of reduced crossed product $C^*$-algebras, and more generally of understanding the ideal structure of the full crossed products,
has received a great deal of attention in the past few decades (see e.g. \cites{Tak64, Elliott80, DS86, KT90, Bedos91, JangLee93, AS94, Sier10}).
In \cite{DS86}, de la Harpe and Skandalis proved
that for any action $\Gamma\curvearrowright \mathcal{A}$ of a Powers' group $\Gamma$ on a unital $C^*$-algebra $\mathcal{A}$, the reduced crossed product $\Gamma\ltimes_r \mathcal{A}$ is simple if $\mathcal{A}$ is $\Gamma$-simple (i.e. has no non-zero proper two sided closed $\Gamma$-invariant ideals).
They left it as a question whether the same holds in the more general case of $C^*$-simple groups. In \cite{BKKO}, the authors answered this question by proving the result for all $C^*$-simple groups, by using the dynamics of the Furstenberg boundary action.
Intermediate $C^*$-algebras, i.e. $C^*$-algebras $\mathcal{B}$ of the form $C_{r}^*(\Gamma) \subseteq \mathcal{B} \subseteq \Gamma\ltimes_r \mathcal{A}$, have recently gained some particular attention, for instance in the work of Suzuki (\cites{Suz17, Suz18}) in connection to problems of minimal ambient nuclear $C^*$-algebras as well as maximal injective von Neumann subalgebras.
In this paper we consider the simplicity problem for intermediate $C^*$-subalgebras of crossed products of $C^*$-simple group actions, and more generally, the $\Gamma$-simplicity of their unital $\Gamma$-invariant $C^*$-subalgebras.
\begin{theorem}\label{main-general}
Let $\Gamma$ be a countable discrete $C^*$-simple group, and let $\mathcal{A}$ be a $\Gamma$-$C^*$-algebra. Suppose for some $C^*$-simple measure $\mu\in\operatorname{Prob}(\Gamma)$, all $\mu$-stationary states on $\mathcal{A}$ are faithful. Then any unital $\Gamma$-invariant $C^*$-subalgebra of the reduced crossed product $\Gamma\ltimes_r \mathcal{A}$ is $\Gamma$-simple.
\end{theorem}
Since, with respect to the inner action of $\Gamma$, any $C^*$-subalgebra $\mathcal{B}$ of $\Gamma\ltimes_r \mathcal{A}$ that contains $C^*_{r}(\Gamma)$, as well as any ideal in $\mathcal{A}$, are invariant, the following is immediate.
\begin{cor}
Under the assumptions of Theorem \ref{main-general}, any intermediate $C^*$-subalgebra $C_{r}^*(\Gamma) \subseteq \mathcal{B} \subseteq \Gamma\ltimes_r \mathcal{A}$ is simple.
\end{cor}
In the case of commutative $\Gamma$-$C^*$-algebras $\mathcal{A}= C(X)$, we obtain the following.
\begin{thm}\label{main-commutative}
Let $\Gamma$ be a countable discrete $C^*$-simple group, and let $\Gamma \curvearrowright X$ be a minimal action of $\Gamma$ on a compact space $X$. Then any unital $\Gamma$-invariant $C^*$-subalgebra of $\Gamma\ltimes_r \mathcal{A}$ is $\Gamma$-simple. In particular, any intermediate $C^*$-subalgebra $C_{r}^*(\Gamma) \subseteq \mathcal{B} \subseteq \Gamma\ltimes_r C(X)$ is simple.
\end{thm}
Examples of actions $\Gamma\curvearrowright \mathcal{A}$, where $\mathcal{A}$ is noncommutative and assumptions of Theorem \ref{main-general} hold, include $\Gamma\curvearrowright C^*_{r}(\Gamma)$ by inner automorphisms, for any $C^*$-simple group $\Gamma$ (\cite[Theorem 5.1]{HartKal}), as well as ${\mathbb{F}}_n\curvearrowright{\mathbb{F}}_n\ltimes_r C(\partial{\mathbb{F}}_n)$, also by inner automorphisms, for any $n\geq 2$ (\cite[Example 4.13]{HartKal}).
Neither of the proofs of simplicity of the reduced crossed products in \cite{DS86} and \cite{BKKO} admits an obvious modification to cover the case of invariant subalgebras. In fact, one can observe that such a result is very far from being true in general. For example, let $\mathcal{A}$ be a non-trivial simple $C^*$-algebra and let $\Gamma\curvearrowright \mathcal{A}$ be the trivial action of a Powers' group $\Gamma$. Then $\Gamma\ltimes_r\mathcal{A} = C^*_{r}(\Gamma)\otimes \mathcal{A}$ is simple. However, if $\mathcal{B}$ is a non-simple unital $C^*$-subalgebra of $\mathcal{A}$, then $C^*_{r}(\Gamma)\subset\Gamma\ltimes_r\mathcal{B} \subset \Gamma\ltimes_r\mathcal{A}$, and $\Gamma\ltimes_r\mathcal{B} = C^*_{r}(\Gamma)\otimes \mathcal{B}$ is not simple.
It is not hard to construct even a faithful such action. But one should notice that the main reason that simplicity for invariant subalgebras could fail is that, in general, $\Gamma$-simplicity does not pass to subalgebras.
On the other hand, in the above setup, even in the more general case of a $C^*$-simple group $\Gamma$, if $\mathcal{A} = C(X)$ is commutative and $\Gamma$-simple (which is equivalent to minimality of $\Gamma\curvearrowright X$), since any invariant $C^*$-subalgebra $\mathcal{B}\subset \mathcal{A}$ is of the form $C(Y)$ where $Y$ is an equivariant factor of $X$, and since minimality passes to factors, it follows $\Gamma\ltimes_r\mathcal{B}$ is also simple by \cite[Theorem 7.1]{BKKO}.
Thus, we have observed that if $\Gamma$ is $C^*$-simple and $\Gamma\curvearrowright X$ is minimal, then any intermediate $C^*$-subalgebra $C^*_{r}(\Gamma)\subset\mathcal{B}\subset \Gamma\ltimes_r C(X)$ which itself is of the form $\mathcal{B}= \Gamma\ltimes_r C(Y)$, is simple.
To deal with general intermediate $C^*$-subalgebras, not necessarily of the crossed product type, we need to translate minimality in the non-commutative setting in a way that passes to subalgebras and does not require a crossed product structure to realize.
Inspired by the recent work \cite{HartKal} of Hartman and the second named author, we use the notion of stationary states to capture ``minimality'' of the intermediate $C^*$-subalgebras.\\
Before proceeding into the details of our results, we recall some definitions and basic facts, and fix some conventions and terminology. Unless otherwise stated, $\Gamma$ will be a countable discrete group, and all compact spaces are assumed Hausdorff.
We denote by $\lambda: \Gamma \to \mathcal{B}(\ell^2(\Gamma))$ the left regular representation of $\Gamma$, and by $C^*_{r}(\Gamma)$ the reduced $C^*$-algebra of $\Gamma$, i.e. the $C^*$-algebra generated by $\{\lambda_s: s\in \Gamma\}$. We denote by $\tau_0$ the canonical trace on $C^*_{r}(\Gamma)$, defined by $\tau_0(\lambda_e) = 1$ and $\tau_0(\lambda_s) = 0$ for all non-trivial elements $s\in \Gamma\backslash\{e\}$.
By \emph{$\Gamma$-$C^*$-algebra}, we mean a unital $C^*$-algebra on which $\Gamma$ acts by $*$-automorphisms. We say $\mathcal{A}$ is $\Gamma$-simple if it does not contain any non-trivial proper closed $\Gamma$-invariant ideals. If $\mathcal{A} = C(X)$ is commutative, then $\mathcal{A}$ is $\Gamma$-simple iff $\Gamma\curvearrowright X$ is minimal, i.e. the compact $\Gamma$-space $X$ does not have any non-empty proper closed $\Gamma$-invariant subsets.
Any action $\Gamma\curvearrowright \mathcal{A}$ induces an action of $\Gamma$ on the state space of $\mathcal{A}$ in a canonical way, $(s\tau)(a) = \tau(s^{-1}(a))$ for $s\in\Gamma$, $a\in \mathcal{A}$, and a state $\tau$ on $\mathcal{A}$. Let $\mu\in\operatorname{Prob}(\Gamma)$, a state $\tau$ on a $\Gamma$-$C^*$-algebra $\mathcal{A}$ is said to be \emph{$\mu$-stationary} if $\mu*\tau = \tau$, where $\mu*\tau = \sum_{s\in\Gamma} \mu(s)s\tau$ is the convolution of $\tau$ by $\mu$. The theory of stationary states was introduced and studied in \cite{HartKal}, where applications to several rigidity problems in ergodic theory and operator algebras were given. One of the main results there states that a countable group $\Gamma$ is $C^*$-simple if and only if there is a measure $\mu\in\operatorname{Prob}(\Gamma)$ such that the canonical trace $\tau_0$ is the unique $\mu$-stationary state on $C^*_{r}(\Gamma)$ (\cite[Theorem 5.1]{HartKal}); such a measure $\mu$ is called a \emph{$C^*$-simple measure} (\cite[Definition 5.2]{HartKal}).
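As a simple illustration of these notions, note that every $\Gamma$-invariant state $\tau$ on $\mathcal{A}$ is $\mu$-stationary for every $\mu\in\operatorname{Prob}(\Gamma)$, since
\[
\mu*\tau \;=\; \sum_{s\in\Gamma}\mu(s)\, s\tau \;=\; \Big(\sum_{s\in\Gamma}\mu(s)\Big)\tau \;=\; \tau ;
\]
stationarity is thus a genuine weakening of invariance, and the content of $C^*$-simplicity of a measure $\mu$ is that on $C^*_{r}(\Gamma)$ this weaker condition already singles out the canonical trace.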
\section{Powers' averaging property for crossed products}
In this section we prove the Powers' averaging property for the reduced crossed product $\Gamma\ltimes_r \mathcal{A}$, in the case of an action $\Gamma\curvearrowright \mathcal{A}$ of a $C^*$-simple group $\Gamma$ on a unital $C^*$-algebra $\mathcal{A}$. In the case of Powers' groups $\Gamma$, this result was proved by de la Harpe and Skandalis in \cite{DS86}, and it was used to prove simplicity of the reduced crossed product $\Gamma\ltimes_r \mathcal{A}$ when $\mathcal{A}$ is $\Gamma$-simple.
Recent developments of the subject include two independent work of Haagerup \cite{Haagerup} and Kennedy \cite{Kennedy}, where they prove that the reduced $C^*$-algebra $C^*_{r}(\Gamma)$ of any $C^*$-simple group $\Gamma$ has the Powers' averaging property. Below we prove the same averaging scheme can be lifted to the crossed product level as well.
First, let us quickly recall the construction of reduced crossed products in order to introduce our notation. Let $\mathcal{A}$ be a unital $\Gamma$-$C^*$-algebra. Fix a faithful $*$-representation $\pi: \mathcal{A} \to \mathbb{B}(H)$ of $\mathcal{A}$ into the space of bounded operators on the Hilbert space $H$. Denote by $\ell^2(\Gamma, H)$ the space of square summable $H$-valued functions on $\Gamma$. The group $\Gamma$ acts on $\ell^2(\Gamma, H)$ by left translation unitaries
\[
\tilde\lambda_s\xi(t):=\xi(s^{-1}t)\quad \left(s,t \in \Gamma, \,\xi \in\ell^2(\Gamma,H)\right).
\]
There is also a $*$-representation $\sigma:\mathcal{A} \to \mathcal{B}(\ell^2(\Gamma, H))$ defined by
\[
[\sigma(a)\xi](t):=\pi(t^{-1}a)(\xi(t))\quad \left(a \in \mathcal{A},\, \xi \in\ell^2(\Gamma,H),\, t \in \Gamma \right) .
\]
The reduced crossed product $\Gamma\ltimes_r\mathcal{A}$ is the $C^*$-algebra generated by unitaries $\{\tilde\lambda_s : s\in\Gamma\}$ and operators $\{\sigma(a) : a\in\mathcal{A}\}$ in $\mathcal{B}(\ell^2(\Gamma, H))$. Note that $\tilde\lambda_s\sigma(a)\tilde\lambda_{s^{-1}}=\sigma(sa)$ for all $s \in \Gamma$ and $a \in \mathcal{A}$. In particular, the group $\Gamma$ also acts on $\Gamma\ltimes_r\mathcal{A}$ by inner automorphisms.
We denote by ${\mathbb{E}}: \Gamma\ltimes_r\mathcal{A} \to \sigma(\mathcal{A})$ the canonical conditional expectation which is defined by ${\mathbb{E}}(\sigma(a)) = \sigma(a)$ and ${\mathbb{E}}(\sigma(a)\tilde\lambda_s) = 0$ for $a\in \mathcal{A}$ and $s \in \Gamma\backslash\{e\}$. The map ${\mathbb{E}}$ is $\Gamma$-equivariant and faithful.
The following lemma provides the estimation that will allow us to lift an averaging scheme from the reduced $C^*$-algebra to the reduced crossed product.
\begin{lemma}\label{ineq-av-1}
Let $\Gamma$ be a discrete group, and let $\mathcal{A}$ be a $\Gamma$-$C^*$-algebra. Then for any $t_0, s_1, \dots, s_m\in \Gamma$, $p_1, \dots, p_m \in {\mathbb{R}}^+$, and $a\in \mathcal{A}$ we have
\begin{equation}\label{ave-ineq-1}
\left\|\sum_{j=1}^m p_j\tilde\lambda_{s_j}\sigma(a)\tilde\lambda_{t_0}\tilde\lambda_{s_j^{-1}}\right\|_{{\mathbb{B}}(\ell^2(\Gamma, H))} \le \|a\|_\mathcal{A} \left\|\sum_{j=1}^m p_j\lambda_{s_jt_0s_j^{-1}}\right\|_{{\mathbb{B}}(\ell^2(\Gamma))} .
\end{equation}
\end{lemma}
\begin{proof}
For $\xi \in \ell^2(\Gamma, H)$, observe that for each $t\in \Gamma$ we have
\[\begin{split}
&\sum_{j=1}^m p_j[\tilde\lambda_{s_j}\sigma(a)\tilde\lambda_{t_0}\tilde\lambda_{s_j^{-1}}(\xi)](t)
=\sum_{j=1}^m p_j[\sigma(s_ja)\tilde\lambda_{s_jt_0s_j^{-1}}\xi](t)
\\&=\sum_{j=1}^m p_j[\sigma(s_ja)\xi](s_jt_0^{-1}s_j^{-1}t)
=\sum_{j=1}^m p_j\pi(t^{-1}s_jt_0a)[\xi(s_jt_0^{-1}s_j^{-1}t)] .
\end{split}\]
Define the function $\xi_1(t)=\|\xi(t)\|_H$, $t\in \Gamma$. Then $\xi_1\in\ell^2(\Gamma)$, and $\left\|\xi_1\right\|_{\ell^2(\Gamma)}=\left\|\xi\right\|_{\ell^2(\Gamma, H)}$. We have
\[\begin{split}
\left\|\sum_{j=1}^m p_j\tilde\lambda_{s_j}\sigma(a)\tilde\lambda_{t_0}\tilde\lambda_{s_j^{-1}}(\xi)\right\|_{\ell^2(\Gamma, H)}^2
&=\sum_{t \in \Gamma}\left\|\sum_{j=1}^m p_j\pi(t^{-1}s_jt_0a)[\xi(s_jt_0^{-1}s_j^{-1}t)]\right\|_H^2
\\&\le \|a\|_\mathcal{A}^2\sum_{t \in \Gamma}\left(\sum_{j=1}^m p_j\left\|\xi(s_jt_0^{-1}s_j^{-1}t)\right\|_H\right)^2
\\&=\|a\|_\mathcal{A}^2\left\|\sum_{j=1}^m p_j\lambda_{s_jt_0s_j^{-1}}(\xi_1)\right\|_{\ell^2(\Gamma)}^2
\\&\leq\|a\|_\mathcal{A}^2\left\|\sum_{j=1}^m p_j\lambda_{s_jt_0s_j^{-1}}\right\|_{{\mathbb{B}}(\ell^2(\Gamma))}^2\left\|\xi_1\right\|_{\ell^2(\Gamma)}^2 ,
\end{split}\]
and since $\left\|\xi_1\right\|_{\ell^2(\Gamma)}=\left\|\xi\right\|_{\ell^2(\Gamma, H)}$, the inequality \eqref{ave-ineq-1} follows.
\end{proof}
It follows, in particular, from the above Lemma \ref{ineq-av-1} that if $C^*_{r}(\Gamma)$ has the Powers' averaging property, then so does the reduced crossed product $\Gamma\ltimes_r \mathcal{A}$ for any action $\Gamma\curvearrowright \mathcal{A}$. But in order to prove our characterization of stationary states on the crossed product, we need a more precise averaging scheme.
We denote by $\mu^k$ the $k$-th convolution power of a measure $\mu\in\operatorname{Prob}(\Gamma)$. Also, for $a\in \mathcal{A}$ and a measure $\mu'\in\operatorname{Prob}(\Gamma)$, we write $\mu'*a = \sum_{t\in\Gamma} \mu'(t)\, ta$ for the convolution of $a$ by $\mu'$.
\begin{theorem}\label{Pow-av-red-cros-prod}
Let $\Gamma$ be a $C^*$-simple group, let $\mu \in \operatorname{Prob}(\Gamma)$ be a $C^*$-simple measure, and let $\mathcal{A}$ be a $\Gamma$-$C^*$-algebra. Then
\[
\left\|\, \frac1n\sum_{k=1}^n\mu^k*(a-{\mathbb{E}}(a)) \, \right\| \xrightarrow{n \to \infty} 0
\]
for every $a\in \Gamma\ltimes_r \mathcal{A}$.
\end{theorem}
\begin{proof}
Let $a\in \Gamma\ltimes_r \mathcal{A}$, and let $\varepsilon>0$ be given. Then, there are $t_1, \dots, t_m \in \Gamma\backslash \{e\}$ and $a_1, \dots, a_m\in \mathcal{A}$ such that for $b=\sum_{i=1}^m\sigma(a_{i})\tilde\lambda_{t_i}+{\mathbb{E}}(a)$ we have $\|b-a\|_{\Gamma\ltimes_r \mathcal{A}} < \frac{\varepsilon}{2}$. Since $\mu$ is $C^*$-simple, it follows from \cite[Proposition 4.7]{HartKal} that $\left\|\frac{1}{n}\sum_{k=1}^n\mu^k*\lambda_{t_i}\right\|_{C^*_{r}(\Gamma)} \to 0$, as $n\to\infty$, for all $i = 1, 2, \ldots, m$. Thus, Lemma \ref{ineq-av-1} implies
\[\begin{split}
&\left\|\frac{1}{n}\sum_{k=1}^n\mu^k*(b-{\mathbb{E}}(a))\right\|_{\Gamma\ltimes_r \mathcal{A}}
=
\left\|\frac{1}{n}\sum_{k=1}^n\sum_{i=1}^m\mu^k*(\sigma(a_{i})\tilde\lambda_{t_i})\right\|_{\Gamma\ltimes_r \mathcal{A}}
\\&\quad\quad\quad\quad\quad\leq
\sum_{i=1}^m \left(\|a_i\|_\mathcal{A} \left\|\frac{1}{n}\sum_{k=1}^n\mu^k*\lambda_{t_i}\right\|_{C^*_{r}(\Gamma)} \right)
\xrightarrow{n\to\infty} 0 .
\end{split}\]
Hence
\[
\limsup_n\left\|\, \frac1n\sum_{k=1}^n\mu^k*(a-{\mathbb{E}}(a)) \, \right\|_{\Gamma\ltimes_r \mathcal{A}} \leq \varepsilon,
\]
and since $\varepsilon$ was arbitrary, the theorem follows.
\end{proof}
\section{Stationary states on the reduced crossed product}
In this section we prove for an action $\Gamma\curvearrowright \mathcal{A}$ of a $C^*$-simple group, a one-to-one correspondence between stationary states on $\mathcal{A}$ and stationary states on the reduced crossed product $\Gamma\ltimes_r \mathcal{A}$.
This correspondence, together with the important feature of stationary states that for any action $\Gamma\curvearrowright \mathcal{A}$ and $\mu\in\operatorname{Prob}(\Gamma)$ there is a $\mu$-stationary state $\tau$ on $\mathcal{A}$ (\cite[Proposition 4.2]{HartKal}), are the main ingredients in proving our main result, Theorem \ref{main-general}.
\begin{theorem}\label{thm:charac-stationary-crsed-prod}
Let $\Gamma$ be a $C^*$-simple group, let $\mu \in \operatorname{Prob}(\Gamma)$ be a $C^*$-simple measure, and let $\mathcal{A}$ be a $\Gamma$-$C^*$-algebra. Then any $\mu$-stationary state $\tau$ on $\Gamma\ltimes_r \mathcal{A}$ is of the form $\tau=\nu\circ\sigma^{-1} \circ{\mathbb{E}}$ for some $\mu$-stationary state $\nu$ on $\mathcal{A}$.
\end{theorem}
\begin{proof}
Let $\mu \in \operatorname{Prob}(\Gamma)$ be a $C^*$-simple measure, and let $\tau$ be a $\mu$-stationary state on $\Gamma\ltimes_r \mathcal{A}$. Then, for any $a\in \Gamma\ltimes_r \mathcal{A}$, Theorem \ref{Pow-av-red-cros-prod} implies
\[\begin{split}
\left|\, \tau(a-{\mathbb{E}}(a)) \,\right| &= \left|\, \Big(\frac1n\sum_{k=1}^n\mu^k*\tau\Big) (a-{\mathbb{E}}(a)) \,\right|
=\left|\, \tau\Big(\frac1n\sum_{k=1}^n\mu^k*(a-{\mathbb{E}}(a))\Big) \,\right| \\&\le \left\|\, \frac1n\sum_{k=1}^n\mu^k*(a-{\mathbb{E}}(a)) \,\right\|\xrightarrow{n \to \infty} 0 ,
\end{split}\]
which implies $\tau = \tau\circ {\mathbb{E}}$. Thus, if we let $\nu=\tau|_{\sigma(\mathcal{A})}\circ\sigma$ be the state on $\mathcal{A}$ obtained from restriction of $\tau$ to $\sigma(\mathcal{A})\subset \Gamma\ltimes_r \mathcal{A}$, we see that $\nu$ is $\mu$-stationary and $\tau =\nu\circ\sigma^{-1} \circ{\mathbb{E}}$.
\end{proof}
\begin{remark}
A similar correspondence between invariant probabilities on $X$ and traces on the crossed product was proved by de la Harpe and Skandalis in \cite{DS86} in the case of minimal actions of Powers' groups.
\end{remark}
\begin{remark}
The conclusion of the above Theorem \ref{thm:charac-stationary-crsed-prod} in the case of trivial action $\mathcal{A} = {\mathbb{C}}$ translates to unique stationarity of the canonical trace on the reduced $C^*$-algebra $C^*_{r}(\Gamma)$. Thus, it generalizes one direction of \cite[Theorem 5.1]{HartKal}, and in fact, combined with the latter, they give a similar characterization of $C^*$-simplicity, which we record in the following theorem.
\begin{thm}
The following are equivalent for a countable group $\Gamma$.
\begin{enumerate}
\item
$\Gamma$ is $C^*$-simple;
\item
there is $\mu\in\operatorname{Prob}(\Gamma)$ such that for any action $\Gamma\curvearrowright \mathcal{A}$, any $\mu$-stationary state $\tau$ on $\Gamma\ltimes_r \mathcal{A}$ is of the form $\tau=\nu\circ\sigma^{-1} \circ{\mathbb{E}}$ for some $\mu$-stationary state $\nu$ on $\mathcal{A}$;
\item
there is an action $\Gamma\curvearrowright \mathcal{A}$ such that for some $\mu\in\operatorname{Prob}(\Gamma)$, every $\mu$-stationary state $\tau$ on $\Gamma\ltimes_r \mathcal{A}$ is of the form $\tau=\nu\circ\sigma^{-1} \circ{\mathbb{E}}$ for some $\mu$-stationary state $\nu$ on $\mathcal{A}$.
\end{enumerate}
\end{thm}
\begin{proof}
By \cite[Theorem 5.1]{HartKal} every $C^*$-simple group admits a $C^*$-simple measure, thus (1) $\implies$ (2) follows from Theorem \ref{thm:charac-stationary-crsed-prod}. The implication (2) $\implies$ (3) is trivial. Now suppose (3) holds. Then let $\eta$ be a $\mu$-stationary state on $C^*_{r}(\Gamma)$. By \cite[Proposition 4.2]{HartKal}, $\eta$ extends to a $\mu$-stationary state $\tau$ on $\Gamma\ltimes_r \mathcal{A}$. Let $\nu$ be the state on $\mathcal{A}$ such that $\tau=\nu\circ\sigma^{-1} \circ{\mathbb{E}}$. Then for $s\in\Gamma\backslash\{e\}$ we have $\eta(\lambda_s) = \nu \circ\sigma^{-1}({\mathbb{E}}(\lambda_s)) = 0$, hence $\eta= \tau_0$. This shows that $\tau_0$ is the unique $\mu$-stationary state on $C^*_{r}(\Gamma)$, and thus $\Gamma$ is $C^*$-simple by \cite[Theorem 5.1]{HartKal}.
\end{proof}
\end{remark}
\section{Proofs of the main results}
In this section we prove Theorems \ref{main-general} and \ref{main-commutative}.\\
\noindent
{\it Proof of Theorem \ref{main-general}.}\
Let $\Gamma$ be a countable discrete $C^*$-simple group, and let $\mathcal{A}$ be a $\Gamma$-$C^*$-algebra. Let $\mu\in\operatorname{Prob}(\Gamma)$ be such that all $\mu$-stationary states on $\mathcal{A}$ are faithful.
Let $\mathcal{B}$ be a unital $\Gamma$-invariant $C^*$-subalgebra of $\Gamma\ltimes_r\mathcal{A}$, and let $I$ be a proper closed two-sided $\Gamma$-invariant ideal of $\mathcal{B}$. Then the action $\Gamma \curvearrowright \mathcal{B}$ induces an action $\Gamma \curvearrowright \mathcal{B}/I$. By \cite[Proposition 4.2]{HartKal}, there exists a $\mu$-stationary state $\eta$ on $\mathcal{B}/I$. Composing $\eta$ with the canonical quotient map $\mathcal{B}\to \mathcal{B}/I$ we obtain a $\mu$-stationary state $\tilde\eta$ on $\mathcal{B}$ that vanishes on $I$. Now by the same \cite[Proposition 4.2]{HartKal}, this $\tilde\eta$ can be extended to a $\mu$-stationary state $\tau$ on $\Gamma\ltimes_r\mathcal{A}$. By Theorem \ref{thm:charac-stationary-crsed-prod}, there is a $\mu$-stationary state $\nu$ on $\mathcal{A}$ such that $\tau=\nu\circ\sigma^{-1} \circ{\mathbb{E}}$. By the assumptions, $\nu$ is faithful, and since ${\mathbb{E}}$ is also faithful, it follows $\tau$ is faithful. But $\tau$ vanishes on $I$, hence $I$ is trivial.\qed\\
In order to prove Theorem \ref{main-commutative} we need to work with a generating $C^*$-simple measure, existence of which for a $C^*$-simple group was not established formally in \cite{HartKal}. But we verify below that a simple tweak in the proof of \cite[Theorem 5.1]{HartKal} will do the job.
\begin{lem}{(cf. \cite[Theorem 5.1]{HartKal})}\label{lem:gen-C*-simple-measure}
Every countable $C^*$-simple group $\Gamma$ admits a generating $C^*$-simple measure.
\end{lem}
\begin{proof}
It was shown in the proof of \cite[Theorem 5.1]{HartKal} that if $\Gamma$ is a $C^*$-simple group then there is a sequence $(\mu_n)$ of probabilities on $\Gamma$ such that $\left\|\mu_n * a -\tau_0(a)1_{C^*_{r}(\Gamma)}\right\| \to 0$, as $n\to\infty$, for all $a\in C^*_{r}(\Gamma)$, and that any such sequence $(\mu_n)$ has a subsequence $(\mu_{n_k})$ such that $\mu := \sum_{k=1}^\infty \frac{1}{2^k} \mu_{n_k}$ is a $C^*$-simple measure.
Now, consider a sequence $(\mu_n)$ as above, and for a fixed $\omega\in \operatorname{Prob}(\Gamma)$ with full support, let $\tilde\mu_n := \omega*\mu_n$ for each $n\in {\mathbb{N}}$. Then every $\tilde\mu_n$ has full support, and
\[\begin{split}
\left\|\tilde\mu_n * a -\tau_0(a)1_{C^*_{r}(\Gamma)}\right\|
&= \left\|\omega*\mu_n * a -\tau_0(a)1_{C^*_{r}(\Gamma)}\right\|
\\&= \left\|\omega*[\mu_n * a -\tau_0(a)1_{C^*_{r}(\Gamma)}]\right\|
\\&\le \left\|\mu_n * a -\tau_0(a)1_{C^*_{r}(\Gamma)}\right\|
\xrightarrow{n\to\infty} 0
\end{split}\]
for all $a\in C^*_{r}(\Gamma)$, which implies, as commented above, that for an appropriately chosen subsequence, the measure $\tilde\mu := \sum_{k=1}^\infty \frac{1}{2^k} \tilde\mu_{n_k}$ is $C^*$-simple. Since the measures $\tilde\mu_{n_k}$ have full support, so does the $C^*$-simple measure $\tilde\mu$.
\end{proof}
\noindent
{\it Proof of Theorem \ref{main-commutative}.}\
Let $\Gamma$ be a countable discrete $C^*$-simple group, and let $\Gamma\curvearrowright X$ be a minimal action on the compact space $X$. By Lemma \ref{lem:gen-C*-simple-measure}, there is a generating $C^*$-simple measure $\mu$ on $\Gamma$. Let $\nu\in\operatorname{Prob}(X)$ be $\mu$-stationary. It is not hard to see that $\operatorname{Supp}(\nu)$ is invariant under the action of elements in $\operatorname{Supp}(\mu)$, and since $\mu$ is generating, $\operatorname{Supp}(\nu)$ is $\Gamma$-invariant. Therefore, by minimality of the action $\Gamma\curvearrowright X$, we conclude that $\operatorname{Supp}(\nu)= X$. This implies that every $\mu$-stationary state on $C(X)$ is faithful, hence the result follows from Theorem \ref{main-general}.\qed
\label{introduction}
Exoplanetary system architectures have revealed numerous surprises
since the first exoplanets were discovered. One of the earliest
surprises was the discovery of exoplanets in highly eccentric orbits,
for which there is no analog in the Solar System. These eccentric
orbits were discovered for giant planets, such as HD~114762b
\citep{lat89,kan11c} and 70~Vir~b \citep{mar96,kan15} with
eccentricities of 0.33 and 0.40 respectively. Since those early
discoveries, eccentric planets have presented a significant challenge
for formation theories, which must account for the combined effects of planet-planet
scattering \citep{cha08,pet14} and tidal circularization
\citep{pon11}. Such planets tend to be discovered with the radial
velocity (RV) technique since the observations are able to sample the
entire Keplerian planetary orbit. Subsequent investigations of the
eccentricity distribution of planetary orbits that take {\it Kepler}
transiting exoplanet discoveries into account show that small planets
in multi-planet systems are more likely to have low eccentricities
\citep{kan12c,van15}. The discovery and characterization of eccentric
orbits is an on-going effort to understand the evolutionary history of
these fascinating systems.
A particularly eccentric exoplanet was discovered by \citet{jon06}
orbiting the star HD~20782. With a minimum mass twice that of Jupiter
and an orbital period of 597~days, the planet is typical of
high-eccentricity planets. The orbit was further revised by
\citet{oto09} and shown to have an eccentricity as high as 0.97,
making it the highest eccentricity exoplanet yet discovered. However,
data during periastron passage is difficult to obtain for such systems
since the RV variation predominantly occurs during a very small
fraction of the orbital phase. The star continued to be monitored by
the Transit Ephemeris Refinement and Monitoring Survey (TERMS) to
improve the orbital parameters of the system \citep{kan09}. Such
orbital refinement may be used to predict and observe events that
occur during particular periods of the orbit, such as planetary
transits \citep{kan08} or phase variations \citep{kan10}.
Here we present new results for the HD~20782 system, including RVs
that sample several periastron passages and establish the planet as
the most eccentric known exoplanet. Follow-up photometry from both
ground-based and space-based telescopes rule out a transit of the
planet and show evidence of phase variations due to reflected light
from the planet close to periastron passage. Section~\ref{science}
provides background information and discusses the science motivation
for studying the system. Section~\ref{stellarprop} presents analysis
of new CHIRON spectra and the resulting fundamental parameters of the
host star as well as stellar abundances. New RV data are combined with
those published in Section~\ref{orbit} and a new Keplerian orbit for
the planet is produced. Section~\ref{astrometry} describes the use of
{\it Hipparcos} astrometry to constrain the orbital inclination of the
planet. Section~\ref{transit} discusses the transit prospects for the
system and the effects of both orbital eccentricity and
inclination. Section~\ref{photometry} presents the ground-based
photometry and an estimate of the stellar rotation period. Data from
MOST are used during the transit/periastron window to rule out a
transit and also reveal the potential presence of a reflected light
signature of the planet as it passes through periastron passage. We
discuss future observing opportunities and make concluding remarks in
Section~\ref{conclusions}.
\section{Science Motivation}
\label{science}
The eccentricity distribution of exoplanets has a well-defined shape
whereby the orbits diverge from circular beyond a semi-major axis of
$\sim 0.1$~AU \citep{but06,kan13}, inside of which tidal
circularization tends to force low eccentricity \citep{gol66,pon11}.
The observed eccentricity distribution is a clear indicator of
formation processes that are dependent upon initial system
architectures, in particular planet-planet scattering. Wide binaries
may inadvertently create a more suitable environment for the formation
of highly-eccentric planetary orbits through gravitational
perturbations from the companion star and the triggering of planetary
ejections \citep{kai13}.
\begin{figure}
\includegraphics[angle=270,width=8.5cm]{f01.ps}
\caption{A top-down view of the HD~20782 system based on data
described in this paper. The Keplerian orbit of the planet, shown
as a solid line, is depicted using the new parameters from
Table~\ref{planet}. The orbits of the Solar System planets (dashed
lines) are shown for comparison.}
\label{orbitfig}
\end{figure}
HD~20782 is part of a wide binary with HD~20781 having a projected
separation of 9,000~AU, recently described by \citet{mac14}. The known
planet orbiting HD~20782 lies at the very top of the exoplanet
eccentricity distribution, though RV measurements during the crucial
periastron passage were relatively rare. The extreme nature of the
planet's orbital eccentricity may be seen in Figure~\ref{orbitfig},
where the orbit is described using our expanded dataset (see
Section~\ref{orbit}).
Our further investigations of this system are primarily motivated by a
better characterization of the planetary orbit and performing
follow-up observations at key orbital phases that can help to
understand the nature of the planet. It is also important to establish
that the secondary object is indeed a planet since a face-on orbital
orientation would make it consistent with the eccentricity
distribution of spectroscopic binaries \citep{mei05,maz08}.
The orbital orientation depicted in Figure~\ref{orbitfig} shows that
the star--planet separation when the planet crosses the line of sight to the observer is
quite small, despite the $\sim$18 month orbital period. This yields a
relatively high transit probability equivalent to that of a hot
Jupiter (see Section~\ref{transit}). Thus a primary motivation for
follow-up observations is the possible detection of a planetary
transit for a long-period eccentric planet \citep{kan08}. A previous
example of such a system can be seen in the case of HD~80606b
\citep{nae01}, where the secondary eclipse of the 0.93 eccentricity
planet was detected by \citet{lau09} and later confirmed to also
exhibit a primary transit \citep{fos09,gar09,mou09}. An additional
motivation for obtaining high-precision photometry during the transit
window and periastron passage for HD~20782b is the possibility of
detecting reflected light from the planet since the small star--planet
separation will greatly increase the amplitude of the phase signature
\citep{kan10}. Such a detection would allow an estimate of the
geometric albedo of the planet and place constraints upon the
atmospheric properties and the atmosphere's radiative and advective
time scales \citep{sea05,for08}. Note that since the orbital period is
18 months, an observing opportunity for a particular point in the
orbit will only arise every 3 years since the star will be largely
inaccessible to ground-based observers for each alternate orbit.
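As a rough, order-of-magnitude check of the hot-Jupiter comparison above, the short sketch below (Python) evaluates the geometric transit probability of an eccentric orbit, $P_{\rm tr}\simeq(R_p+R_\star)(1+e\sin\omega)/[a(1-e^2)]$, using approximate orbital parameters for HD~20782b (anticipating the solution of Section~\ref{orbit}); the Jupiter-like planetary radius is an assumed value, since no radius has been measured.
\begin{verbatim}
import numpy as np

R_SUN_AU = 0.00465047          # solar radius in AU
R_JUP_AU = 0.000477895         # Jupiter radius in AU

def transit_probability(a_au, e, omega_deg, r_star_rsun, r_planet_rjup=1.0):
    """Geometric transit probability for an eccentric orbit."""
    r_sum = r_star_rsun * R_SUN_AU + r_planet_rjup * R_JUP_AU
    return r_sum * (1.0 + e * np.sin(np.radians(omega_deg))) \
           / (a_au * (1.0 - e ** 2))

# approximate HD 20782 parameters; the 1 R_Jup radius is an assumption
print(transit_probability(a_au=1.397, e=0.956, omega_deg=142.1,
                          r_star_rsun=1.09))   # a few percent
\end{verbatim}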
\section{Stellar Properties}
\label{stellarprop}
A critical step in quantifying the properties of the planet lies in
understanding the host star. Here we provide new fundamental
parameters and abundances for HD~20782.
\subsection{Fundamental Parameters}
\label{stellar:sme}
We acquired a high S/N (300 second integration) spectrum of HD~20782
on the night of July 6th, 2014. The data were acquired using CHIRON, a
fiber-fed Echelle spectrometer \citep{tok13,bre14}, installed at the
1.5m telescope at Cerro Tololo Inter-American Observatory
(CTIO). CHIRON operates at a fixed wavelength range of 415--880~nm and
a resolution of $R = 79,000$. The spectrum was modeled using the
Spectroscopy Made Easy (SME) package, described in more detail by
\citet{val96,val05}. SME uses an iterative technique that combines
model atmosphere analysis with Yonsei-Yale model isochrones
\citep{dem04} that utilize {\it Hipparcos} photometry and distances
\citep{van07a,van07b}. This approach produces a self-consistent
convergence with the measured surface gravity \citep{val09}.
The results of this analysis are shown in Table~\ref{stellar},
including values for the surface gravity $\log g$, rotational velocity
$v \sin i$, atmospheric abundance [Fe/H], effective temperature
$T_{\rm eff}$ and stellar isochrone solution (mass $M_\star$, radius
$R_\star$, and age). These parameters are consistent with previous
estimates of the stellar properties, such as those calculated by
\citet{tak07}. The revised parameters demonstrate that HD~20782 is
quite similar to the Sun, with the mass and radius being crucial
properties for the subsequent analysis of the planetary companion in
this paper.
\begin{deluxetable}{lc}
\tablecaption{\label{stellar} Stellar Parameters}
\tablehead{
\colhead{Parameter} &
\colhead{Value}
}
\startdata
$V$ & 7.4 \\
$B-V$ & 0.63 \\
Distance (pc) & $35.5 \pm 0.8$ \\
$T_\mathrm{eff}$ (K) & $5798 \pm 44$ \\
$\log g$ & $4.36 \pm 0.06$ \\
$v \sin i$ (km\,s$^{-1}$) & $1.7 \pm 0.5$ \\
$[$Fe/H$]$ (dex) & $0.01 \pm 0.03$ \\
$M_\star$ ($M_\odot$) & $1.02 \pm 0.02$ \\
$R_\star$ ($R_\odot$) & $1.09 \pm 0.04$ \\
Age (Gyrs) & $5.53 \pm 1.43$
\enddata
\end{deluxetable}
\subsection{Abundances}
\label{stellar:abund}
Both components of the wide binary system, namely HD~20781 and
HD~20782, have had their elemental abundances measured by a number of
different authors. However, due to the difference in size and spectral
type, the abundances within HD~20782 are easier to determine. While
half as many groups have measured HD~20781 as HD~20782, there does
remain some overlap between them, such as \citet{nev09}, \citet{del10}, and
\citet{mac14}, who did a more in-depth comparison of the two stars.
Per the analysis in the Hypatia Catalog \citep{hin14}, the individual
abundances within both stars were renormalized to the \citet{lod09}
solar scale. The largest measurement discrepancy between datasets,
known as the {\it spread}, was used to better quantify the uniformity,
or lack thereof, between measurements. This technique was implemented
in the Hypatia Catalog to better understand the variation in
abundances seen when employing different reduction techniques, due to
instances where the {\it spread} between groups was larger than
associated error. For the cases where variations between groups were
small, the median value was used as the ultimate abundance
measurement.
The overall median [Fe/H] content in HD~20781 was 0.1 dex, as compared
to 0.15 dex within HD~20782, where the spread was 0.03 dex and 0.17
dex, respectively. In other words, the groups that measured HD~20781,
while fewer in number, were in closer agreement regarding the iron
abundance than those that measured HD~20782. The [Fe/H] determinations
for both stars are disparate compared to the abundances determined by
\citet{mac14} (which are not part of the Hypatia Catalog), who measured
0.04$\pm$0.03 and $-0.02\pm0.02$ dex, respectively. These are consistent
with our new [Fe/H] determination shown in Table~\ref{stellar}.
A wide variety of $\alpha$-elements (carbon, magnesium, silicon, and
titanium), odd-Z elements (sodium, aluminum, and scandium), and
iron-peak elements (vanadium, chromium, cobalt, and nickel) have been
measured for both stars. For all elements except for [Na/Fe], the
abundance measurements for HD~20781 and HD~20782 were found to be
consistent to within error and markedly sub-solar, or $\sim$ -0.1
dex. The [Na/Fe] content in HD~20782 was found to be $\sim 2.5$ more
than in the companion HD~20781, where [Na/Fe] = -0.09$\pm$0.06 dex and
-0.22$\pm$0.04 dex, respectively.
\section{The Keplerian Orbit of the Planet}
\label{orbit}
The highly eccentric planet orbiting HD~20782 was first reported in
\citet{jon06} and updated in \citet{oto09}. We now present a further
six years of radial velocity data from the Anglo-Australian Planet
Search (AAPS). The AAPS is one of the world's longest-running planet
searches, with more than 40 planet discoveries in its 16 years of
operation \citep[e.g.][]{but01,jon10,vog10,wit12,tin11,wit14}.
HD~20782 has been observed on 52 epochs from 1998 Aug 9 to 2013 Sep 19
(Table~\ref{rvs_aat}). Precision Doppler measurements are obtained
with the UCLES echelle spectrograph \citep{die90} at the 3.9~m
Anglo-Australian Telescope (AAT). A 1-arcsecond slit delivers a
resolving power of $R\sim$45,000. Calibration of the spectrograph
point-spread function is achieved using an iodine absorption cell
temperature-controlled at 60.0$\pm$0.1$^{\rm{o}}$C. The iodine cell
superimposes a forest of narrow absorption lines from 5000 to
6200\,\AA, allowing simultaneous calibration of instrumental drifts as
well as a precise wavelength reference \citep{val95,but96}. The result
is a precise radial velocity shift measured relative to the epoch of
the iodine-free ``template'' spectrum. AAT velocities for HD~20782
span a total of 15 years and have a mean internal uncertainty of
2.4~m\,s$^{-1}$.
Orbital fits to the AAT data allowed predictions of the next
periastron passage of the planet, estimated to be 15 January 2015. We
were able to observe HD~20782 during that passage using the Physical
Research Laboratory (PRL) optical fiber-fed high-resolution
cross-dispersed echelle spectrograph (PARAS) with the Mount Abu 1.2~m
telescope in India. The PARAS spectrograph is temperature-controlled
at 25.55$\pm$0.01$^{\rm{o}}$C in an enclosure that is
pressure-controlled at 0.10$\pm$0.03~mbar. PARAS has a resolution of
$R\sim$67,000 and obtains RV data at a spectral range of 3800 to
6900\,\AA with simultaneous wavelength calibration with a
thorium-argon (ThAr) hollow cathode lamp. The uncertainties for the
PARAS measurements were derived based on the photon noise estimation
procedure explained by \citet{bou01}. Further details of the PARAS
instrument and the data reduction are described by
\citet{cha14}. PARAS observations were made under high air mass
conditions (1.7--1.9) with no Atmospheric Dispersion Corrector
(ADC). The five PARAS observations (see Table~\ref{rvs_paras})
complete our RV dataset and bring the total number of observations to
57.
\begin{deluxetable}{ccc}
\tablewidth{0pc}
\tablecaption{\label{rvs_aat} HD~20782 AAT Radial Velocities}
\tablehead{
\colhead{Date} &
\colhead{RV} &
\colhead{$\sigma$} \\
\colhead{(BJD -- 2440000)} &
\colhead{(m\,s$^{-1}$)} &
\colhead{(m\,s$^{-1}$)}
}
\startdata
11035.31946 & 21.90 & 2.33 \\
11236.93065 & -6.51 & 3.27 \\
11527.01731 & 7.32 & 3.39 \\
11630.88241 & 29.70 & 2.72 \\
11768.30885 & -6.64 & 2.62 \\
11828.11066 & -7.64 & 3.00 \\
11829.27449 & -6.64 & 3.82 \\
11856.13530 & -10.37 & 3.55 \\
11919.00660 & -3.62 & 2.92 \\
11919.99630 & -1.67 & 2.85 \\
11983.89009 & 4.16 & 3.32 \\
12092.30437 & 17.84 & 2.35 \\
12127.26814 & 17.70 & 2.79 \\
12152.16308 & 23.15 & 2.50 \\
12187.15965 & 22.78 & 2.53 \\
12511.20636 & -1.26 & 2.29 \\
12592.04809 & 17.40 & 2.30 \\
12654.96031 & 15.38 & 2.34 \\
12859.30551 & -202.48 & 1.90 \\
12946.13833 & -18.15 & 2.08 \\
12947.12246 & -14.27 & 1.77 \\
13004.00143 & -0.29 & 1.85 \\
13044.02367 & 0.76 & 2.25 \\
13045.96088 & -0.40 & 1.93 \\
13217.28800 & 9.01 & 1.71 \\
13282.22023 & 20.57 & 1.87 \\
13398.96924 & 22.14 & 1.39 \\
13403.96059 & 30.40 & 2.56 \\
13576.30688 & -9.14 & 1.60 \\
13632.28114 & -7.62 & 1.59 \\
13665.18659 & 6.38 & 1.72 \\
14013.21622 & 31.23 & 1.55 \\
14040.13171 & 22.12 & 1.96 \\
14153.97010 & -11.56 & 2.10 \\
14375.24693 & 13.32 & 1.70 \\
14544.89158 & 10.26 & 2.15 \\
14776.10092 & -7.55 & 1.85 \\
14843.02077 & 0.09 & 1.56 \\
14899.92440 & -0.65 & 2.07 \\
15107.24701 & 16.54 & 2.78 \\
15170.05453 & 17.31 & 2.37 \\
15204.97966 & 29.22 & 1.88 \\
15253.91188 & -78.17 & 2.35 \\
15399.32249 & -8.19 & 1.88 \\
15426.31459 & -6.89 & 1.71 \\
15461.23900 & -14.81 & 2.99 \\
15519.13309 & 8.36 & 2.00 \\
15844.13584 & -145.90 & 6.54 \\
15845.17956 & -185.60 & 2.28 \\
15846.13671 & -156.28 & 2.32 \\
15964.93095 & 7.77 & 2.87 \\
16499.33740 & -11.25 & 3.03
\enddata
\end{deluxetable}
\begin{deluxetable}{ccc}
\tablewidth{0pc}
\tablecaption{\label{rvs_paras} HD~20782 PARAS Radial Velocities}
\tablehead{
\colhead{Date} &
\colhead{RV} &
\colhead{$\sigma$} \\
\colhead{(BJD -- 2440000)} &
\colhead{(m\,s$^{-1}$)} &
\colhead{(m\,s$^{-1}$)}
}
\startdata
57036.16183 & 272.25 & 4.12 \\
57038.14436 & 127.25 & 4.09 \\
57039.13336 & 61.53 & 3.65 \\
57040.15494 & 149.14 & 3.86 \\
57042.12356 & 183.06 & 2.98
\enddata
\end{deluxetable}
The RV data shown in Tables \ref{rvs_aat} and \ref{rvs_paras} were
used to produce a revised Keplerian orbital solution. This was
performed using the RVLIN package; a partially linearized,
least-squares fitting procedure \citep{wri09}. The uncertainties for
the orbital and associated physical parameters were estimated using
the BOOTTRAN bootstrapping routines described in \citet{wan12}. To be
sure that known instabilities of the Levenberg-Marquardt-based RVLIN
algorithm did not prevent convergence at these high eccentricities, we
reduced the number of nonlinear parameters in the problem by fixing
the eccentricity at 100 values evenly spaced between 0.93 and 0.995
and selecting the value that produced the minimum $\chi^2$ fit.
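For readers wishing to reproduce this kind of grid scan, a schematic outline is given below (Python; this is not the RVLIN/BOOTTRAN code used for the published solution). The data arrays \texttt{t}, \texttt{rv}, and \texttt{sig} and the starting guess \texttt{p0} are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def solve_kepler(M, e, tol=1e-10, n_iter=100):
    """Newton solution of M = E - e sin E; E = pi is a safe starting point."""
    E = np.full_like(np.asarray(M, dtype=float), np.pi)
    for _ in range(n_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, P, Tp, e, omega, K, gamma):
    """Keplerian RV curve: v = gamma + K [cos(nu + omega) + e cos(omega)];
    omega is in radians."""
    M = 2.0 * np.pi * (((t - Tp) / P) % 1.0)
    E = solve_kepler(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

def chi2_fixed_e(t, rv, sig, e, p0):
    """Best chi^2 with eccentricity held fixed; p = (P, Tp, omega, K, gamma)."""
    resid = lambda p: (rv_model(t, p[0], p[1], e, p[2], p[3], p[4]) - rv) / sig
    fit = least_squares(resid, p0)
    return np.sum(fit.fun ** 2)

# grid scan in the spirit of the text (t, rv, sig, p0 are placeholder inputs)
# e_grid = np.linspace(0.93, 0.995, 100)
# best_e = e_grid[np.argmin([chi2_fixed_e(t, rv, sig, e, p0) for e in e_grid])]
\end{verbatim}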
A fit to the AAT and PARAS data with their instrumental uncertainties
was unsatisfactory. The rms residuals to the PARAS data are
17~m\,s$^{-1}$, with two excursions at and after RV minimum of over
20~m\,s$^{-1}$, inconsistent with typical instrumental uncertainties
of under 6~m\,s$^{-1}$. Further, the scatter about the fit to the AAT
data is 6.1~m\,s$^{-1}$, including three excursions larger than
15~m\,s$^{-1}$ (up to 17~m\,s$^{-1}$), both significantly larger than
the quoted errors of 2.3~m\,s$^{-1}$. Given that there are only 52 AAT
points, we do not expect to see 3 points ($\sim$5\%) with deviations
of 15~m\,s$^{-1}$ from Gaussian noise unless the errors are more like
6~m\,s$^{-1}$.
Some component of the scatter about the best fit is due to intrinsic
stellar variability, and some is due to the precision of the
measurements (due to both instrumental/algorithmic imprecision and
photon noise). The stellar noise should be the same for both
instruments, meaning that the large excursions seen in the PARAS data
indicate a problem with either the fit or the PARAS data.
Close examination of points near periastron reveal that the problem
must lie with the instrumental uncertainties, not the fit. PARAS and
AAT have two measurements (each) at very similar phases (the expected
change in RV between the points in each pair is $<
10$~m\,s$^{-1}$). However, in both cases the difference in velocities
is over 20~m\,s$^{-1}$, and in different directions. The combined
measurement uncertainties of the two instruments therefore must be of
order 20~m\,s$^{-1}$.
We attempted a second fit, but inflated both instrumental
uncertainties by adding, in quadrature, 5~m\,s$^{-1}$ and
19~m\,s$^{-1}$ to the AAT and PARAS velocities, respectively. These
inflations reflect a common stellar jitter component (likely to be
around 5~m\,s$^{-1}$) and an additional, instrument-dependent
component added in quadrature. This resulted in a much more
satisfactory fit: the residuals to the best fit for the two telescopes
have standard deviations of 5.75~m\,s$^{-1}$ and 19.85~m\,s$^{-1}$,
respectively, and $\chi^2$ values of 1.03 and 1.01,
respectively. There is still a significant outlier to the AAT fit (at
15~m\,s$^{-1}$), but at 2.5$\sigma$ (using the inflated measurement
uncertainties) this is not unexpected from 52 data points.
\begin{deluxetable*}{lcc}
\tablecaption{\label{planet} Keplerian Orbital Model}
\tablewidth{0pt}
\tablehead{
\colhead{Parameter} &
\colhead{Value (AAT)} &
\colhead{Value (AAT+PARAS)}
}
\startdata
\noalign{\vskip -3pt}
\sidehead{HD 20782 b}
~~~~$P$ (days) & $597.099 \pm 0.049$ & $597.065 \pm 0.043$ \\
~~~~$T_c\,^{a}$ (BJD -- 2,440,000) & $17037.788 \pm 0.145$ & $17037.794 \pm 0.100$ \\
~~~~$T_p\,^{b}$ (BJD -- 2,440,000) & $17038.510 \pm 0.108$ & $17038.458 \pm 0.094$ \\
~~~~$e$ & $0.953 \pm 0.005$ & $0.956 \pm 0.004$ \\
~~~~$\omega$ (deg) & $142.2 \pm 2.2$ & $142.1 \pm 2.1$ \\
~~~~$K$ (m\,s$^{-1}$) & $114.9 \pm 4.4$ & $116.0 \pm 4.2$ \\
~~~~$M_p$\,sin\,$i$ ($M_J$) & $1.46 \pm 0.03$ & $1.43 \pm 0.03$ \\
~~~~$a$ (AU) & $1.397 \pm 0.009$ & $1.397 \pm 0.009$ \\
\sidehead{System Properties}
~~~~$\gamma$ (m\,s$^{-1}$) & $1.95 \pm 0.82$ & $1.79 \pm 0.80$ \\
\sidehead{Measurements and Model}
~~~~$N_{\mathrm{obs}}$ & 52 & 57 \\
~~~~rms (m\,s$^{-1}$) & 5.91 & 8.06 \\
~~~~$\chi^2_{\mathrm{red}}$ & 1.0 & 1.14
\enddata
\tablenotetext{a}{Time of mid-transit.}
\tablenotetext{b}{Time of periastron passage.}
\end{deluxetable*}
\begin{figure}
\includegraphics[width=8.2cm]{f02a.ps} \\
\includegraphics[width=8.3cm]{f02b.ps} \\
\includegraphics[width=8.4cm]{f02c.ps}
\caption{Top: All 57 RV measurements from AAT/PARAS observations of
HD~20782 (see Tables \ref{rvs_aat} and \ref{rvs_paras}) along with
the best-fit orbital solution (Table \ref{planet}). RV offsets
between datasets have been accounted for in this figure. Middle:
The RV data phased on the orbital the solution from
Table~\ref{planet}, where phase zero corresponds to superior
conjunction. Bottom: A zoomed version of the phased middle plot
which shows the coverage during periastron passage.}
\label{rv}
\end{figure}
We conclude that there is significant instrumental/observational
systematic noise in the PARAS data due to air mass, of order
20~m\,s$^{-1}$. We also examined the inclusion of a linear RV trend in
our model but found that this does not improve the quality of the
fit. The final orbital solution from the data is shown in
Table~\ref{planet}, where we have included the solution that uses the
AAT data only for comparison. The AAT+PARAS orbital solution includes
an offset between the AAT and PARAS datasets as a free parameter,
found to be $276.5 \pm 8.7$~m\,s$^{-1}$. The $\gamma$ parameter shown
in Table~\ref{planet} is the systemic velocity of the system with
respect to the zero point of the extracted RVs (relative to the
template spectrum). Thus, there is an offset between the $\gamma$
value reported in Table~\ref{planet} and the true systemic velocity,
reported by \citet{val05} to be 40.7~km\,s$^{-1}$. Our final AAT+PARAS
orbital solution is depicted in Figure~\ref{rv}. The bottom panel of
Figure~\ref{rv} shows the quality of the combined data coverage during
periastron passage for this highly eccentric planet, particularly the
additional coverage provided by the PARAS data.
We note that the transit time we calculate is sensitive to the weights
assigned to the PARAS and AAT data. The PARAS data favor a transit
time that is 0.2--0.3 days later than the AAT data. Because we do not
fully understand the source of the very large scatter in the PARAS
data, we should not assume that our errors are Gaussian. Fortunately,
BOOTTRAN uses bootstrapping to determine its parameter uncertainties,
which is appropriate for non-normally distributed residuals (although
the underlying fitter minimizes $\chi^2$, and so does assume Gaussian
errors).
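As an illustration of the approach (not the BOOTTRAN implementation
itself), a minimal residual-bootstrap sketch is given below, where
\texttt{fit\_keplerian} is a hypothetical stand-in for any
$\chi^2$-minimizing Keplerian fitter:
\begin{verbatim}
import numpy as np

def bootstrap_uncertainties(t, rv, rv_err, fit_keplerian, n_boot=1000):
    """Residual bootstrap for Keplerian orbital parameters.

    fit_keplerian(t, rv, rv_err) is a hypothetical stand-in for any chi^2
    minimizer; it is assumed to return (params_dict, model_rv_at_t).
    """
    params, model = fit_keplerian(t, rv, rv_err)
    resid = rv - model
    rng = np.random.default_rng(0)
    samples = {key: [] for key in params}
    for _ in range(n_boot):
        # Resample residuals with replacement and add them back to the model;
        # no Gaussian assumption is made about the scatter.
        rv_boot = model + rng.choice(resid, size=resid.size, replace=True)
        p_boot, _ = fit_keplerian(t, rv_boot, rv_err)
        for key, val in p_boot.items():
            samples[key].append(val)
    # 68.3% intervals from the 15.85 and 84.15 percentiles of each parameter
    return {key: (np.median(vals), *np.percentile(vals, [15.85, 84.15]))
            for key, vals in samples.items()}
\end{verbatim}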
\section{Astrometric Constraints on the Orbit}
\label{astrometry}
To constrain the inclination of the system and possibly refine the
estimate of the companion mass, we combine Hipparcos astrometry of
HD~20782 with the orbital parameters obtained from the radial velocity
observations. We use the new reduction of the Hipparcos data
\citep{van07a}, which presents a significant improvement in the
overall reliability of astrometric information \citep{van07b} and
includes the Intermediate Astrometric Data (IAD) product in a format
that facilitates the quest for signatures of orbital motion. Following
the method prescribed by \citet{sah11}, we use the spectroscopic
elements derived from our RV solution (Table~\ref{planet}) to search
for an orbital signature.
The 5 standard astrometric parameters for the Hipparcos solution of
HD~20782 can be obtained from the VizieR Catalogue \citep{van07a};
these are right ascension (RA, $\alpha$=50.01$^{\circ}$), declination
(dec, $\delta$=-28.85$^{\circ}$), proper motion in RA
($\mu_{\alpha}$=349.33 mas/yr) and dec ($\mu_{\delta}$=-65.92 mas/yr),
and parallax ($\varpi$=28.15 mas). The 5 spectroscopic parameters
required from the radial velocity analysis are period ($P$),
eccentricity ($e$), semi-amplitude ($K$), time of periastron ($T_0$),
and argument of periastron ($\omega$). Each Hipparcos observation is
reconstructed from the IAD and fit with a comprehensive model based on
12 parameters: the 5 standard astrometric parameters, the 5
spectroscopically derived parameters, the inclination ($i$), and the
longitude of the ascending node ($\Omega$). In practice, the
spectroscopic parameters are treated as constants since they are
considered reliable, and we work with 7 free parameters. The details
of the procedure are carefully described by \citet{sah11}, and we
follow their methods to calculate inclination, a new orbit, and the
significance of the orbit via the permutation test.
We begin by constructing a two-dimensional $i-\Omega$ grid, where we
solve for the remaining five parameters of the 7-parameter model, and
the corresponding $\chi^2$. The parameter values identified by the
minimum $\chi^2$ value are used as the starting point for an $AMOEBA$
minimization, using the downhill simplex method, to supersede the
limitations imposed by the resolution of the $i-\Omega$ grid. We then
perform 100,000 Monte Carlo simulations, where we generate 1,000 sets
of Hipparcos measurements from the existing data. For each set of
Hipparcos abscissae, we execute 100 random draws from the
spectroscopic parameters, in order to propagate their uncertainties
into our result. Each spectroscopic parameter is assumed to follow a
Gaussian distribution, with the RV solution and its errors serving as
the mean and standard deviation respectively. The Monte Carlo models
are then solved as described above, to produce 100,000 sets of
solution parameters. The final parameters are defined as the median
of the associated distribution, while the errors are the interval
between the 15.85 percentile and the 84.15 percentile, encompassing
68.3\% of the values around the median.
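Schematically, the grid-plus-refinement stage and the percentile
summary described above can be written as below (a sketch only;
\texttt{chi2\_fn} is a hypothetical stand-in for the $\chi^2$ of the
7-parameter model, with the five remaining parameters solved
internally, and the real analysis uses the Hipparcos abscissa
weights):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def search_orbit(chi2_fn, n_i=90, n_Omega=180):
    """Coarse i-Omega grid search followed by a downhill-simplex refinement.

    chi2_fn(inc_deg, Omega_deg) is a stand-in for the chi^2 of the
    7-parameter model, with the remaining five parameters solved internally.
    """
    grid_i = np.linspace(0.5, 179.5, n_i)
    grid_O = np.linspace(0.0, 359.0, n_Omega)
    chi2 = np.array([[chi2_fn(i, O) for O in grid_O] for i in grid_i])
    i0, O0 = np.unravel_index(np.argmin(chi2), chi2.shape)
    # Nelder-Mead (AMOEBA-like) refinement beyond the grid resolution
    res = minimize(lambda p: chi2_fn(*p), x0=[grid_i[i0], grid_O[O0]],
                   method="Nelder-Mead")
    return res.x, res.fun

def summarize(samples):
    """Median and 68.3% interval (15.85-84.15 percentiles) of MC solutions."""
    samples = np.asarray(samples, dtype=float)
    med = np.median(samples, axis=0)
    lo, hi = np.percentile(samples, [15.85, 84.15], axis=0)
    return med, med - lo, hi - med
\end{verbatim}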
For completeness, we report here our full set of final parameters, as
offsets to the Hipparcos values. The changes in right ascension,
declination, parallax, and proper motion in RA and declination are:
$\Delta\alpha=0.3^{+1.4}_{-1.2}$ mas,
$\Delta\delta=1.0^{+1.3}_{-1.1}$ mas, $\Delta\varpi=0.2^{+0.7}_{-0.7}$
mas, $\Delta\mu_{\alpha^{\star}}=0.0^{+0.4}_{-0.4}$ mas/yr,
$\Delta\mu_{\delta}=0.1^{+0.6}_{-0.6}$ mas/yr. We find an inclination
of ${2.7^{+2.3}_{-1.2}}^{\circ}$ and a longitude of the ascending node
of ${202.5^{+59.3}_{-66.3}}^{\circ}$, but the solution is very poorly
constrained.
The astrometric data covers approximately two orbits for this system,
so phase coverage should not inhibit the recovery of any significant
orbital signatures. Unfortunately, the projected minimum semi-major
axis of our new solution is very small ($a \sin i = 0.05$~mas)
compared to the median Hipparcos single-measurement precision for this
target (3.5 mas), as shown in Figures \ref{fig:hist} and
\ref{fig:orbit}. We perform a permutation test to verify the
significance of our result by comparing the semimajor axis of the new
solution orbit with 1,000 pseudo orbits generated from random
permutations of the astrometric data, similar to \citet{sah11}. Figure
\ref{fig:hist} illustrates the calculation of a low significance orbit
(68.2\%, which is almost exactly at the 1$\sigma$ level of detection),
confirming that the Hipparcos data contains little or no orbital
signature. The new Hipparcos reduction has this target flagged as
having a good fit with just the 5 astrometric parameters, which is
consistent with the fact that adding the 7 RV parameters does not seem
to change the solution. \citet{sah11} emphasize that orbital solutions
at this significance level are prone to very large biases, and the
calculated values and their errors should be considered highly
suspect. We present our full set of orbital parameters here only to
facilitate future comparison of analytical methods, and not for direct
application.
\begin{figure}
\includegraphics[width=8.2cm]{f03.ps}
\caption{Histogram of the semi-major axes for 1000 randomly permuted
pseudo-orbits of HD~20782. The pseudo-orbits are used to calculate
the significance of the new orbit via the permutation test, as
described by \citet{sah11}. The solid black line shows the actual
best-fit solution and the dashed line represents the median
Hipparcos single-measurement precision for this system.}
\label{fig:hist}
\end{figure}
\begin{figure}
\includegraphics[width=8.2cm]{f04.ps}
\caption{The red line shows the orbital signature detected in the
Hipparcos data when combined with orbital parameters from the
radial velocity solution. As projected on sky, North is up and
East is left. Open circles mark the individual Hipparcos
measurements. Dashed lines with orientation along the scan angle
$\psi$ and length given by the residual of the orbital solution
connect the measurements with the predicted location from our
model. This illustrates the difficulty of detecting an orbit with
such a small projected semi-major axis, given the median Hipparcos
single-measurement precision on this target.}
\label{fig:orbit}
\end{figure}
On the other hand, simulations by \citet{sah11} show that orbits are
always detected at the 3$\sigma$ level when the semi-major axis is at
least 70\% of the Hipparcos precision on a target. Any orbital
signature above 2.45 mas would have been detectable in this Hipparcos
dataset, and this helps to set an upper limit on the companion
mass. Using this assertion, we get a lower limit on inclination
(1.22$^{\circ}$) and an upper limit on the companion mass
(66~$M_J$). Although the consideration of astrometric data does not
allow us to put tight constraints on the inclination of the system,
the non-detection of an orbit allows us to rule out a stellar binary
system. Verification of this could be achieved through high-contrast
adaptive-optics imaging of the system at predicted apastron
passage. Figure~\ref{sep} shows the projected and angular separation
of the planet and star for one complete face-on orbit, where phase
zero corresponds to superior conjunction as described by
\citet{kan13}. An additional consequence of our astrometric constraint
is that the transit probability is increased by a small amount since
inclinations below 1.22 degrees are ruled out.
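As a quick estimate of the imaging prospects mentioned above, the
maximum (apastron, face-on) separation follows directly from the
orbital elements and the Hipparcos parallax quoted earlier (a
back-of-the-envelope sketch; Figure~\ref{sep} shows the full phase
dependence):
\begin{verbatim}
a, e, parallax_mas = 1.397, 0.956, 28.15   # AU, -, mas
d_pc = 1000.0 / parallax_mas               # ~35.5 pc
sep_au = a * (1 + e)                       # ~2.73 AU at apastron
sep_arcsec = sep_au / d_pc                 # ~0.08 arcsec for a face-on orbit
print(d_pc, sep_au, sep_arcsec)
\end{verbatim}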
\begin{figure}
\includegraphics[angle=270,width=8.2cm]{f05.ps}
\caption{The projected (AU) and angular ($\arcsec$) separation of
HD~20782b from the host star as a function of orbital phase, where
phase zero corresponds to superior conjunction.}
\label{sep}
\end{figure}
\section{Planetary Transit Prospects}
\label{transit}
As described in Section~\ref{science}, one of the most interesting
aspects of HD~20782b is the relatively large transit probability
compared with the orbital period. The transit probability is a
function of the stellar and planetary radii and the star--planet
separation along the line of sight \citep{kan08}. For HD~20782, we use
the stellar radius shown in Table~\ref{stellar} and adopt a planetary
radius of $R_p = 1.0$~$R_J$ given the minimum mass of 1.41~$M_J$ (see
Table~\ref{planet}) and using the mass-radius relationship described
by \citet{kan12a}.
If the planet were in a circular orbit with the same semi-major axis,
the transit probability would be 0.4\%. The extreme eccentricity of
the orbit results in star--planet separation of 0.061~AU at periastron
and 0.076~AU at inferior conjunction where a transit is possible. Such
a separation is similar to that of a hot Jupiter in a circular
orbit. Inferior conjunction occurs when $\omega + f = 90\degr$; in
this case, the true anomaly is $f = 307.9\degr$ at the time of
mid-transit. This orbital orientation results in an enhanced transit
probability of 7.1\%.
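For orientation, these probabilities follow directly from the orbital
elements in Table~\ref{planet}; the short sketch below reproduces
them, with the stellar radius (taken here to be $\sim$1.1~$R_\odot$)
an assumed stand-in for the Table~\ref{stellar} value, which shifts
the result only at the level of a few tenths of a percent:
\begin{verbatim}
import numpy as np

# Orbital elements from the AAT+PARAS solution
a, e, omega = 1.397, 0.956, np.radians(142.1)    # AU, -, rad
R_jup, R_sun = 0.000477895, 0.00465047           # AU
R_p = 1.0 * R_jup                                 # adopted planetary radius
R_star = 1.1 * R_sun                              # assumed stellar radius (illustrative)

# True anomaly and star-planet separation at inferior conjunction (omega + f = 90 deg)
f_ic = np.radians(90.0) - omega                   # = 307.9 deg (mod 360)
r_ic = a * (1 - e**2) / (1 + e * np.cos(f_ic))    # ~0.076 AU

p_circ = (R_star + R_p) / a                       # ~0.4% for a circular orbit
p_ecc = (R_star + R_p) / r_ic                     # ~7% for this orientation
print(np.degrees(f_ic) % 360, r_ic, p_circ, p_ecc)
\end{verbatim}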
A further influence of the high eccentricity on the transit parameters
is the expected transit duration. Since the separation at inferior
conjunction is comparable to that of a hot Jupiter, the duration is
likewise reduced, to 0.13~days (3.1~hours) for a central
transit. The epoch of mid-transit shown in Table~\ref{planet} was
calculated using the same Monte-Carlo bootstrap method used to
calculate the orbital parameter uncertainties. The time of mid-transit
corresponds to a calendar date of 15 January 2015 and a UT of
7:02. The uncertainty on this time is 0.1~days which results in a
total 1$\sigma$ transit window of 0.33~days (7.6~hours). The estimated
depth of the transit is 0.96\% and so should be readily observable in
typical millimag photometry. However, the infrequent occurrence of
such events (see Section~\ref{science}) motivated observations from
both ground and space.
\section{Photometric Observations}
\label{photometry}
The derived physical and orbital properties of HD~20782b described in
previous sections motivated photometric monitoring of the host star
for stellar variability and planetary transit/phase signatures. Here
we describe our photometric observations and results in detail.
\subsection{APT Photometry}
We collected a total of 191 nightly photometric observations of
HD~20782 during its 2013--14, 2014--15, and 2015--16 observing seasons
to search for stellar variability. The observations were acquired with
the T8 0.80~m automatic photoelectric telescope (APT), one of several
automated telescopes operated by Tennessee State University (TSU)
located at Fairborn Observatory in southern Arizona. The T8 APT is
equipped with a two-channel precision photometer that uses a dichroic
filter and two EMI 9124QB bi-alkali photomultiplier tubes to measure
the Str\"omgren $b$ and $y$ pass bands simultaneously. We computed the
differential magnitudes of HD~20782 with respect to the mean
brightness of its three constant comparison stars. To improve the
precision further, we combined the differential $b$ and $y$
observations into a single $(b+y)/2$ ``passband.'' The TSU APTs,
their precision photometers, observing strategy, and data reduction
techniques are described in detail by \cite{hen99}. A summary of the
photometric data for HD~20782 is given in Table~\ref{aptsum}.
\begin{deluxetable}{ccccc}
\tablecaption{\label{aptsum} Summary of photometric observations for
HD~20782}
\tablewidth{0pt}
\tablehead{
\colhead{Observing} &
\colhead{} &
\colhead{Julian Date Range} &
\colhead{Mean} &
\colhead{Sigma} \\
\colhead{Season} &
\colhead{$N_{obs}$} &
\colhead{(HJD -- 2,400,000)} &
\colhead{(mag)} &
\colhead{(mag)}
}
\startdata
2013--14 & 43 & 56622--56701 & 1.01241 & 0.00183 \\
2014--15 & 89 & 56943--57045 & 1.01206 & 0.00228 \\
2015--16 & 59 & 57293--57390 & 1.01128 & 0.00171
\enddata
\end{deluxetable}
The nightly observations of HD~20782 are plotted in the top panel of
Figure~\ref{aptphot} in our $(b+y)/2$ passband. The observing seasons
are quite short from Fairborn Observatory, only three months in
length, because of the star's southerly declination of $-29\arcdeg$.
The observations scatter about their grand mean (indicated by the
horizontal dotted line) with a standard deviation of 0.00205 mag, as
given in the upper right corner of the top panel. This is essentially
the limit of precision for the HD~20782 observations because the
star's southerly declination results in all measurements being made at
air mass values between 2.0 and 2.4 (see \cite{hen99}, Figure~8).
\begin{figure}
\includegraphics[width=8.2cm]{f06.ps}
\caption{HD~20782 APT photometry acquired during three consecutive
observing seasons. Top: The relative photometry as a function of
Heliocentric Julian Date. Middle: The power spectrum from a Fourier
analysis of all three seasons of photometry. Bottom: Sinusoidal fit to
the most significant period found from the Fourier analysis. Our
analysis described in the text demonstrates that this period is
spurious.}
\label{aptphot}
\end{figure}
The middle and bottom panels of Figure~\ref{aptphot} show the
frequency spectrum of the data set and the phase curve computed with
the best frequency, respectively. Our frequency analysis is based on
least-squares sine fits with trial frequencies between 0.01 and 0.95
c/d, corresponding to periods between one and 100 days. The goodness
of fit at each frequency is measured as the reduction factor in the
variance of the original data, whose value lies between the extremes
of 0.0 and 1.0. A reduction factor of 0.0 corresponds to the case
where the variance in the residuals from a least-squares sine fit to
the observational data at some trial frequency have the same value as
the variance in the original data, i.e., no reduction in the variance
takes place at that particular frequency. A reduction factor of 1.0
corresponds to the extreme case where the variance in the residuals of
the sine fit is 0.0, i.e., the sine curve fits the data perfectly with
no residuals. The frequency spectrum in the middle panel shows several
peaks with reduction factors near 0.1, but no peak stands out above
the others to suggest a stellar rotation period. We ran simulations
adding computed sine curves to our data sets and found that coherent
variations with peak-to-peak amplitudes of $\sim0.004$ mag or larger
would be detectable in our light curves. This places an upper limit on
any periodic modulation for HD~20782, such as rotational modulation
caused by starspots. This is consistent with the low level of magnetic
activity ($\log R'_{HK} = -4.91$) given in the discovery paper of
\citet{jon06} and demonstrates that our best-fit period of 1.2619 days
in the bottom panel is spurious. In addition to the absence of
rotational modulation, we find no evidence for longer-term
variability; the three seasonal means in Table~\ref{aptsum} scatter
about their grand mean with a standard deviation of only 0.00058 mag.
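The reduction-factor statistic used above is simple to reproduce; the
following is a schematic version of the least-squares sine fits over
the trial frequencies (a sketch, not the code actually used for the
APT reductions):
\begin{verbatim}
import numpy as np

def reduction_factor_spectrum(t, y, freqs):
    """Fraction of variance removed by a sine+constant fit at each trial frequency.

    Values lie between 0 (no reduction) and 1 (a perfect sinusoidal fit).
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    var0 = np.var(y)
    out = []
    for f in freqs:
        # Linear least squares for y ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        out.append(1.0 - np.var(y - A @ coef) / var0)
    return np.array(out)

# Trial frequencies between 0.01 and 0.95 cycles/day, as in the text
freqs = np.linspace(0.01, 0.95, 2000)
\end{verbatim}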
\subsection{MOST Observations and Transit Window}
\label{most}
\begin{figure*}
\begin{center}
\includegraphics[angle=270,width=15.5cm]{f07.ps}
\end{center}
\caption{MOST photometry of HD~20782 acquired for $\sim$7~days
surrounding the predicted transit mid-point. The solid line
indicates the location and depth of a possible transit and the
vertical dashed lines are the boundaries of the 3$\sigma$ transit
window.}
\label{transitfig}
\end{figure*}
Given the size of the transit windows and the relatively infrequent
opportunities to observe them (see Sections \ref{science} and
\ref{transit}), we elected to make use of the Microvariability and
Oscillations of STars (MOST) satellite to observe HD~20782 during the
next scheduled transit window. MOST has an aperture of 15~cm and a
filter passband covering the range 375--675~nm, making it well suited
to obtain precision optical photometry of very bright stars
\citep{wal03,mat04}.
Observations of HD~20782 commenced at HJD 2457035.3 (2015 January 12
19:11 UT) and concluded 7 days later at HJD 2457042.3 (2015 January 19
19:11 UT). The predicted time of mid-transit (see Table~\ref{planet})
was BJD 2457037.794 (2015 January 15 07:02 UT). The star is outside of
MOST's Continuous Viewing Zone and so required observations outside of
normal operational parameters. For each 101~min orbit, MOST was able
to acquire the target field for 20~minutes. Exposure times were 0.6~s
to allow for both the brightness of the target and scattered light due
to the roll angle of the spacecraft with respect to the Sun. This
resulted in photometry with a 1$\sigma$ RMS precision of 0.07\%.
During the week of MOST observations, a total of 257 measurements of
HD~20782 were acquired. The resulting relative photometry
is shown in Figure~\ref{transitfig}, along with a solid line that
indicates the predicted location and depth of a possible transit. The
1$\sigma$ transit window (0.33~days) was described in
Section~\ref{transit}. We use the 3$\sigma$ transit window (0.43~days)
to draw vertical dashed lines in Figure~\ref{transitfig}. A central
transit of the planet (impact parameter of $b = 0$) is ruled out for
most locations within the transit window. The cadence of the
observations is such that a transit duration half that of a central
transit could have been missed within the 1$\sigma$ transit
window. Such a duration corresponds to an impact parameter of $b =
0.87$, above which transits cannot be ruled out by our photometry.
A further consideration is the detection of the Rossiter-McLaughlin
(R-M) effect during a possible transit. The amplitude of the R-M
effect is shown by \citet{gau07} to be
\begin{equation}
K_R = v \sin i \frac{(R_p/R_\star)^2}{1 - (R_p/R_\star)^2}
\label{rm}
\end{equation}
Using our stellar parameters from Table~\ref{stellar} and the transit
parameters described in Section~\ref{transit}, the amplitude of the
R-M effect for a transit of HD~20782b is predicted to be
$\sim$15.5~m\,s$^{-1}$. Two of our RV measurements (one each from AAT
and PARAS) are within the transit window, shown close to 0.5 phase in
the bottom panel of Figure~\ref{rv}. Neither of these measurements
show evidence of any significant deviation from our Keplerian
model. Thus the RV data are consistent with the MOST photometry
leading to the conclusion that the planet does not transit the host
star.
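For reference, the quoted amplitude follows directly from the
expression for $K_R$ above, using the $\sim$0.96\% transit depth of
Section~\ref{transit} for $(R_p/R_\star)^2$ and a projected rotational
velocity of $\sim$1.6~km\,s$^{-1}$ (an assumed value standing in for
the Table~\ref{stellar} entry):
\begin{verbatim}
vsini = 1.6e3            # m/s; assumed projected rotational velocity (illustrative)
depth = 0.0096           # (R_p/R_star)^2, the 0.96% transit depth estimated earlier
K_R = vsini * depth / (1.0 - depth)
print(K_R)               # ~15.5 m/s
\end{verbatim}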
\subsection{Evidence of Phase Variations}
\label{phase}
The phase variations of a planet as it orbits the host star have become
a detectable signature in the era of high-precision
photometry. Numerous examples of phase signatures have been detected
among the planets discovered with the {\it Kepler} mission
\citep{est13,est15}.
Exoplanets that are close to their host stars have generally been
found to have low geometric albedos, such as the low geometric albedo
of HAT-P-7b \citep{wel10} and the null detection of phase variations
from HD~209458b \citep{row08}. There are exceptions to the rule,
however, such as the case of Kepler-7b \citep{dem11}, and it is likely
that a greater understanding of atmospheric processes is needed to
explain this diversity \citep{dem14}. \citet{kan10} developed a
geometric albedo model that scales the geometric albedo with
star--planet separation. The implication of the model for planets in
eccentric orbits is that the geometric albedo is time dependent, with
an assumption that reflective/scattering condensates in the upper
atmosphere are removed during periastron passage by the increase in
radiative flux from the host star. The generalized expression for the
planet to host flux ratio is given by
\begin{equation}
\epsilon(\alpha,\lambda) \equiv
\frac{f_p(\alpha,\lambda)}{f_\star(\lambda)} = A_g(\lambda)
g(\alpha,\lambda) \frac{R_p^2}{r^2}
\label{fluxratio}
\end{equation}
where $\alpha$ is the phase angle, $A_g$ is the geometric albedo, $g$
is the phase function, $R_p$ is the planetary radius, and $r$ is the
star--planet separation. This separation is given by
\begin{equation}
r = \frac{a (1 - e^2)}{1 + e \cos f}
\label{separation}
\end{equation}
where $f$ is the true anomaly. The phase angle, $\alpha$, is defined
to be zero at superior conjunction. A model of geometric albedo
time-dependence assumes that the planetary atmosphere responds to the
change in incident flux on timescales comparable to the duration of
the periastron encounter. This effect has been modeled for the
eccentric planet HAT-P-2b at infrared wavelengths by
\citet{lew13}. Thus \citet{kan10} predicted that, although the largest
phase variations of eccentric planets occur during a relatively short
fraction of their orbital phase, the amplitude of the signature would
be lowered by the subsequent darkening of their atmospheres during
periastron.
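To make the time dependence explicit, the geometric dilution factor
$R_p^2/r^2$ in Equation~(\ref{fluxratio}) can be propagated through
Kepler's equation. The sketch below evaluates the upper envelope
$A_g\,(R_p/r)^2$ (i.e., setting $g=1$ and adopting an illustrative
constant albedo of 0.5), which already shows how sharply the
reflected-light signal is confined around periastron; the phase
function and the time-varying albedo of the full model further
modulate this envelope:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

P, a, e = 597.065, 1.397, 0.956        # days, AU
R_p = 0.000477895                      # 1 R_J in AU (adopted radius)
A_g = 0.5                              # illustrative constant geometric albedo

def true_anomaly(t, P, e, tp=0.0):
    """Solve Kepler's equation M = E - e*sin(E) and convert E to true anomaly."""
    f = np.empty_like(t)
    for j, tj in enumerate(t):
        M = 2.0 * np.pi * (tj - tp) / P
        E = brentq(lambda E: E - e * np.sin(E) - M, M - e, M + e)
        f[j] = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                                np.sqrt(1 - e) * np.cos(E / 2))
    return f

t = np.linspace(-0.5 * P, 0.5 * P, 2001)       # days from periastron
f = true_anomaly(t, P, e)
r = a * (1 - e**2) / (1 + e * np.cos(f))       # star-planet separation (AU)
envelope = A_g * (R_p / r) ** 2                # upper bound on planet/star flux ratio
print(envelope.max())                          # ~3e-5, reached at periastron
\end{verbatim}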
\begin{figure*}
\begin{center}
\includegraphics[angle=270,width=15.5cm]{f08a.ps} \\
\includegraphics[angle=270,width=15.5cm]{f08b.ps} \\
\includegraphics[angle=270,width=15.5cm]{f08c.ps} \\
\end{center}
\caption{Top: The predicted flux variations of the HD~20782 system
due to reflected light from the planet (dashed line), ellipsoidal
variations (dotted line), and Doppler boosting (dot-dashed
line). The sum of these three effects is shown as a solid
line. This assumes a time-varying geometric albedo, as formulated
by \citet{kan10}. The zoomed panel shows the maximum phase
variation along with the orbital phase location of periastron and
the predicted transit time described in Section
\ref{transit}. Middle: The MOST data with the running average
shown as a solid line. Bottom: The binned MOST data along with a
model of the phase variations.}
\label{phasefig}
\end{figure*}
For HD~20782b, we calculated the predicted phase variations of the
planet with respect to the inferior conjunction (transit) and
periastron times, shown as a dashed line in the top panel of
Figure~\ref{phasefig}. These orbital locations are very close to each
other (see Figure~\ref{orbitfig}), separated by only 0.66~days. The
location of superior conjunction where $\alpha = 0$ occurs 5.63 days
after the periastron passage. All three of these orbital locations are
covered by the MOST observations described in Section~\ref{most}. We
include the additional effects of ellipsoidal variations
\citep{dra03,zuc07,kan12b} and Doppler boosting \citep{loe03,fai11} in
the top panel of Figure~\ref{phasefig}, shown as dotted and dot-dashed
lines respectively. The combined effect of all three (phase
variations, ellipsoidal variations, and Doppler boosting) is shown as
a solid line. For the ellipsoidal component, we have assumed a gravity
darkening exponent of $\beta = 0.32$ \citep{luc67}. For the Doppler
boosting coefficient, we calculate a value of $\alpha_{\mathrm{beam}}
= -1.21$ using the stellar temperature from Table~\ref{stellar} and
the methodology of \citet{loe03}. Using the model of a
distance-dependent geometric albedo and Hilton phase function
\citep{kan10}, we determined that the amplitude of the phase
variations is comparable to the Doppler boosting, whereas the
ellipsoidal component is a minor contribution to the total flux
variations. Another point worth noting is that this model assumes an
orbit that is close to edge-on. The effect of orbital inclination on
the relative amplitudes of the three contributing components is minor
except for orbits close to face-on \citep{kan11a}.
As described in Section~\ref{most}, the original intent of acquiring
the MOST data was for the purpose of observing a potential transit
event. Evidence of phase variations was unexpected due to the low
predicted amplitude shown in the top panel of
Figure~\ref{phasefig}. To determine the overall trend in the MOST
data, we calculated a running mean of the data using 20 data points
either side of each measurement to calculate the running mean at that
location. The results of this calculation are shown as a solid line
along with the individual measurements (including error bars) in the
middle panel of Figure~\ref{phasefig}, where we have adjusted the
vertical scale of the plot to the range of the running mean values,
using the average of the running mean values as the zero-point. The
apparent brightening of the host star between the truncated dates of
38 and 39 on the plot is where the peak of the phase variations is
predicted to occur. We investigated this feature from an instrumental
point of view and determined that the change in brightness was not
caused by any aspect of the MOST instrumentation or by the data
reduction.
We tested whether this could be caused by a chance alignment
between intrinsic stellar variability and the expected periastron
passage, by conducting a Monte-Carlo simulation in which we treat the
observed data as representative of possible stellar variability and
randomly rearrange the data to see how often a similar chance
alignment can occur. Each random permutation of the observed flux
values to the times of observation resulted in a new dataset for which
the running mean was calculated and then analyzed for significant
peaks in the flux. The percentage of simulations for which a specific
criterion was met was taken as the probability that the criterion would
have been satisfied by chance. Based on 10,000 realizations of this
simulation, the probability of a peak occurring in the 38--39 date
range is $\sim$17\%, and the probability of that peak being of equal
or greater amplitude than the observed peak is $\sim$4\%. If indeed
the observed peak is related to the close passage of the planet to the
star, the flux variations may indicate that the assumption by
\citet{kan10} that the presence of reflective condensates in a
planetary atmosphere changes on timescales comparable with the
periastron passage is likely incorrect for highly eccentric orbits. In
fact, the larger the eccentricity, the more inconsistent the
assumption becomes with the radiative and advective timescales of the
atmosphere. Furthermore, the possible presence of phase variations
indicates the companion is not self-luminous, further supporting the
claim that the companion is planetary rather than stellar in nature
\citep{kan12b}.
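A schematic version of this permutation test is shown below (a sketch
of the procedure described above rather than the exact code used; the
20-point half-width of the running mean and the date window follow
the text):
\begin{verbatim}
import numpy as np

def running_mean(y, half_width=20):
    """Mean over the 20 points on either side of (and including) each point."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    return np.array([y[max(0, i - half_width): i + half_width + 1].mean()
                     for i in range(n)])

def peak_probability(t, flux, window=(38.0, 39.0), n_perm=10000, seed=1):
    """Chance probability of a running-mean peak inside `window`, and of one
    at least as large as the observed peak, from random permutations."""
    t = np.asarray(t, dtype=float)
    flux = np.asarray(flux, dtype=float)
    rng = np.random.default_rng(seed)
    in_win = (t >= window[0]) & (t <= window[1])
    obs = running_mean(flux)
    obs_peak = obs[in_win].max() - obs.mean()
    n_in_window, n_as_large = 0, 0
    for _ in range(n_perm):
        rm = running_mean(rng.permutation(flux))
        peak_loc = np.argmax(rm)
        if in_win[peak_loc]:
            n_in_window += 1
            if rm[peak_loc] - rm.mean() >= obs_peak:
                n_as_large += 1
    return n_in_window / n_perm, n_as_large / n_perm
\end{verbatim}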
To investigate this further, we binned the MOST photometry into 15
evenly spaced time intervals. The best-fit model to the binned data is
shown in the bottom panel of Figure~\ref{phasefig} where the model
includes ellipsoidal variations and Doppler boosting as well as phase
variations. The best-fit inclination of the planetary orbit is $i =
30\degr$. A fit of the data to both the described model and a constant
model resulted in $\Delta \chi^2 = 24$ which shows that the phase
model is quantitatively favored. However, the model uses a companion
radius of $\sim$5 Jupiter radii and a geometric albedo of unity, which
is a physically unlikely scenario. Thus there is either an additional
component missing in our model of the data, or the data may be
insufficient to fully characterize the flux variations, or some
combination of the two. As noted above, the most compelling aspects of
the variations described here are the timing of the variations with
those predicted by the phase model, combined with the extreme
eccentricity of the planet. This system is clearly highly unusual
amongst the known exoplanets, and we cannot exclude the possibility of
unaccounted-for physics occurring during the extreme conditions of
periastron.
\begin{figure}
\includegraphics[angle=270,width=8.2cm]{f09.ps}
\caption{The predicted blackbody flux of HD~20782b, assuming a
calculated temperature of $\sim$1400~K. The passband boundaries of
MOST are indicated by the vertical dashed lines. The blackbody
calculation assumes zero Bond albedo and zero heat redistribution
(hot dayside model) and thus represents a maximum flux
scenario. Of the integrated flux, 0.02\% falls within the MOST
passband.}
\label{blackbody}
\end{figure}
A possible missing factor is that of thermal emission. This has been
shown to be a significant component at the {\it Kepler} passband
\citep{dem11}. The {\it Kepler} passband however is significantly
broader than that used by MOST (see Section~\ref{most}). We calculated
this component for our observations by estimating the equilibrium
temperature of the planet. To do so, we assumed the most extreme case
of zero heat redistribution (hot dayside) and zero Bond albedo
\citep{kan11b}. This produces a peak equilibrium temperature at
periastron of $\sim$1400~K. The resulting blackbody spectrum is shown
in Figure~\ref{blackbody} along with the passband of MOST, depicted as
vertical dashed lines. Of the integrated flux from the thermal
emission, only 0.02\% of the total flux falls within the passband of
our observations. This corresponds to a flux ratio of planet to star
thermal emission in the MOST passband of $\sim 1.5 \times 10^{-6}$. We
conclude that any phase variations due to the planet are dominated by
the optical component. Further data with higher precision are needed
to confirm the presence of the variations and constrain the reflective
properties of this fascinating planet as it passes through periastron.
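The in-band fraction quoted above is a short numerical integration of
the Planck function; the following sketch reproduces the $\sim$0.02\%
figure:
\begin{verbatim}
import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23     # SI units

def planck(wl, T):
    """Blackbody spectral radiance B_lambda (W m^-3 sr^-1)."""
    return 2 * h * c**2 / wl**5 / (np.exp(h * c / (wl * k_B * T)) - 1.0)

T_eq = 1400.0                                  # peak equilibrium temperature (K)
wl_band = np.linspace(375e-9, 675e-9, 2000)    # MOST passband (m)
wl_all = np.linspace(50e-9, 500e-6, 200000)    # effectively the full spectrum (m)

frac = (np.trapz(planck(wl_band, T_eq), wl_band)
        / np.trapz(planck(wl_all, T_eq), wl_all))
print(frac)                                    # ~2e-4, i.e. ~0.02% of the emission
\end{verbatim}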
\section{Conclusions}
\label{conclusions}
Exoplanets in eccentric orbits remain some of the most intriguing
discoveries of recent decades. Although the semi-amplitude of their RV
variations is systematically higher, the orbits of highly eccentric
planets are difficult to characterize due the rapid variation at
periastron passage. We have refit the orbit of HD~20782b, consistent
with it being the most extreme of these eccentricity cases and have
provided new stellar and planetary parameters. Our RV measurements
acquired during the brief duration of periastron passage allow a
detailed orbital ephemeris to be constructed, despite the relatively
long period of $\sim$18 months. Our analysis of the {\it Hipparcos}
astrometry for HD~20782 constrains the inclination sufficiently such
that the companion is likely to be planetary rather than stellar. The
uncertainties associated with our astrometric analysis leave open the
possibility that the companion lies within the brown dwarf mass
regime. It is expected that further astrometric data from the Gaia
mission will significantly improve these constraints
\citep{per14}. Even with a relatively high transit probability of
$\sim$7\%, we have shown that the planet does not transit the host
star.
The possible phase variations soon after periastron might be induced
in part by stellar light reflected off the planet's
atmosphere. Although our modeling is incomplete, if this hypothesis is
true then it raises interesting questions regarding the conditions to
which such an extreme orbit exposes the planet. In particular, the
effects of rotation rate and radiative/advective timescales on
atmospheric dynamics may be overwhelmed by the short yet intense
conditions that occur at the closest approach of the planet to the
star. It has been further noted by several studies that the brightest
region of the planet is shifted westward of the substellar point,
caused by a relatively cloudy western hemisphere
\citep{dem11,est15,hu15,shp15}. Additionally, it seems likely that,
although planets in short-period orbits tend to have relatively low
geometric albedos, long-period planets in eccentric orbits retain a
high geometric albedo during the periastron passage since the
atmosphere does not have time to respond to the change in incident
flux. The result of this is a higher than expected flux ratio of the
planet to the star at optical wavelengths. Thus, eccentric planets
present a particularly lucrative observing opportunity for the study
of planetary atmospheres, provided one is able to accurately predict
when the peak flux variations are expected to occur.
Further observations of this system at times close to inferior
conjunction are highly encouraged. The next two times of inferior
conjunction predicted from our ephemeris are BJD $2457634.859 \pm
0.123$ (2016 September 3 8:36 UT) and BJD $2458231.924 \pm 0.153$
(2018 April 23 10:10 UT). In each case, the subsequent superior
conjunction occurs $\sim$6.29 days after the inferior
conjunction. Matching these times to those when the target is most
visible is not trivial and the timescale of the periastron passage is
best suited to continuous space-based observations. These upcoming
windows would be ideally matched to missions that are optimized for
bright star observations, such as the
CHaracterizing ExOPlanet Satellite (CHEOPS). A deeper understanding of
the orbits and atmospheres of eccentric planets is a key milestone
towards unlocking the origin and nature of these mysterious objects.
\section*{Acknowledgements}
The authors would like to thank the anonymous referee, whose comments
greatly improved the quality of the paper. S.R.K. and
N.R.H. acknowledge financial support from the National Science
Foundation through grant AST-1109662. G.W.H. acknowledges long-term
support from Tennessee State University and the State of Tennessee
through its Centers of Excellence program. H.R.A.J acknowledges
support from STFC via grants ST/M001008/1 and Leverhulme Trust
RPG-2014-281. The authors thank the Gurushikar, Mt. Abu Observatory
Staff and the PARAS technical staff for the observations with
PARAS. The PARAS program is supported completely by the Physical
Research Laboratory, Dept. of Space, Govt. of India. The results
reported herein benefited from collaborations and/or information
exchange within NASA's Nexus for Exoplanet System Science (NExSS)
research coordination network sponsored by NASA's Science Mission
Directorate.
\section{Introduction}
In this paper, we consider the problem of prescribed Weingarten
curvatures for closed, star-shaped hypersurfaces in the warped product
manifold.
Let $(M,g')$ be a compact Riemannian manifold and $I$ be an open interval in $\mathbb{R}$. The warped product manifold $\overline{M}=I\times_{\lambda}M$ is endowed with the metric
\begin{eqnarray}\label{metric}
\overline{g}^2=dr^2+\lambda^2(r) g',
\end{eqnarray}
where $\lambda:I\rightarrow\mathbb{R}^{+}$ is a positive $C^2$ differentiable function.
Let $\Sigma$ be a compact star-shaped hypersurface in $\overline{M}$; then $\Sigma$ can be parametrized as a radial graph over $M$. More precisely,
there exists a differentiable function
$r : M \rightarrow I$ such that the graph of $\Sigma$ can be represented by
\begin{equation*}
\Sigma=\{(r(u),u)\mid u\in{M}\}.
\end{equation*}
We consider the following prescribed Weingarten curvature equation
\begin{eqnarray}\label{Eq}
\frac{\sigma_{k}}{\sigma_l}(\mu(\eta))=f(V, \nu(V)), \quad \forall~V \in \overline{M},
\end{eqnarray}
where $2\leq k\leq n, 0\leq l \leq k-2$, $V=\lambda\frac{\partial}{\partial r}$ is the position vector field of the hypersurface $\Sigma$ in $\overline{M}$, $\sigma_k$ is the $k$-th elementary symmetric function, $\mu(\eta)$ denotes the eigenvalues of $g^{-1}\eta$, $f$ is a given smooth function and $\nu(V)$ is the unit outer normal vector at $V$. The $(0,2)$-tensor $\eta$ on $\Sigma$ is defined by
$$\eta_{ij}= Hg_{ij}- h_{ij},$$
where $g_{ij}$ and $h_{ij}$ are the first and second fundamental forms of $\Sigma$ respectively, $H(V)$ is the mean curvature at $V \in \Sigma$. In fact, $\eta$ is the first Newton transformation of $h$ with respect to $g$.
Given $r_1$, $r_2$ with $r_1<r_2$, we define the annulus domain $\{(r,u)\in \overline{M} \mid r_1\leq r \leq r_2\}$.
The main theorem is as follows.
\begin{theorem}\label{Main}
Let $M$ be a compact Riemannian manifold, $\overline{M}$ be the warped product manifold
with the metric (\ref{metric}) and $\Gamma$ be an open neighborhood of unit normal bundle of $M$ in $\overline{M}\times \mathbb{S}^n$. Assume that $\lambda$ is a positive $C^2$ differentiable function and $\lambda'>0$. Suppose that $f$ satisfies\par
\begin{eqnarray}\label{ASS1}
f(V,\nu)>\frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l},\quad \quad \forall~ r\leq r_{1},
\end{eqnarray}
\begin{eqnarray}\label{ASS2}
f(V,\nu)<\frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l},\quad \quad \forall ~r \geq r_{2}
\end{eqnarray}
and
\begin{eqnarray}\label{ASS3}
\frac{\partial}{\partial r}(\lambda^{k-l}f(V,\nu))\leq 0, \quad \quad \forall~ r_{1}<r<r_{2},
\end{eqnarray}
where $V=\lambda\frac{\partial}{\partial r}$ and $\zeta(r)=\lambda'(r)/\lambda(r)$.
Then for any $\alpha\in (0,1)$ there exists a $C^{4, \alpha}$, $(\eta,k)$-convex, star-shaped and closed hypersurface $\Sigma$ in $\{(r,u)\in \overline{M}\mid r_{1}\leq r \leq r_{2}\}$ satisfying the equation \eqref{Eq}.
\end{theorem}
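To see where the barrier conditions \eqref{ASS1} and \eqref{ASS2} come from, it is instructive to evaluate the left-hand side of \eqref{Eq} on the model slices; the computation below only uses the graph formulas recalled in Section 2 below.
For the slice $\Sigma_{r}=\{r=\mathrm{const}\}$ one has $g_{ij}=\lambda^{2}g'_{ij}$ and $h_{ij}=\lambda\lambda' g'_{ij}$, so that
\begin{eqnarray*}
h^{i}_{j}=\zeta(r)\,\delta^{i}_{j},\qquad H=n\,\zeta(r),\qquad \eta^{i}_{j}=H\delta^{i}_{j}-h^{i}_{j}=(n-1)\,\zeta(r)\,\delta^{i}_{j},
\end{eqnarray*}
and hence
\begin{eqnarray*}
\frac{\sigma_{k}}{\sigma_{l}}(\mu(\eta))=\frac{C_{n}^{k}}{C_{n}^{l}}\big((n-1)\zeta(r)\big)^{k-l}.
\end{eqnarray*}
Conditions \eqref{ASS1} and \eqref{ASS2} therefore compare $f$ with the exact value of the left-hand side of \eqref{Eq} on the slices $\{r=r_{1}\}$ and $\{r=r_{2}\}$, which is what produces the barriers in the $C^{0}$ estimate of Section 3, while \eqref{ASS3} is the monotonicity assumption used in the gradient estimate.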
\begin{remark}
The key to prove Theorem \ref{Main} is to obtain the curvature estimate for
the Hessian quotient equation (\ref{Eq}) in the warped product manifold, which is established in Theorem \ref{n-2-C2e}.
\end{remark}
This kind of Hessian quotient equation is motivated by many important geometric
problems.
When $k=n$, $l=0$ and $\lambda(r)=r$, the equation \eqref{Eq} becomes the following equation for an $(\eta, n)$-convex hypersurface
\begin{eqnarray}\label{ht-Eq-8}
\mbox{det} (\eta(V))=f(V, \nu),
\end{eqnarray}
which is studied intensively by Sha \cite{Sha1, Sha2}, Wu \cite{Wu} and Harvey-Lawson \cite{HL2}.
When the left-hand side of \eqref{ht-Eq-8} is replaced by $\sigma_k(\eta(V))$, Chu-Jiao
established curvature estimates for this kind of equation in \cite{CJ}.
Inspired by above results, the authors in \cite{CTX} considered the corresponding Hessian quotient type
prescribed curvature equations in Euclidean space. In this paper, we generalize the existence results in \cite{CTX} to the warped product manifold for the prescribed curvature problem. The remarkable fact is that Theorem \ref{Main} recovers the existence results in \cite{CJ, CTX}.
When $\mu(\eta)$ is replaced by $\kappa(V)$ and $l=0$, the equation \eqref{Eq} becomes the classical prescribed curvature equation
\begin{eqnarray}\label{ht-Eq-9}
\sigma_k(\kappa(V))=f(V, \nu),
\end{eqnarray}
which has been widely studied in the past two decades.
The key to this prescribed curvature equation is the curvature estimate.
In Euclidean space, Caffarelli-Nirenberg-Spruck established the curvature estimates for $k=n$ in \cite{Ca1}.
Guan-Ren-Wang proved the $C^2$ estimates of the equation \eqref{ht-Eq-9} for $k=2$ in \cite{Guan-Ren15}. Spruck-Xiao extended the $2$-convex case to space forms and gave a simple proof for the Euclidean case in \cite{Sp}. Ren-Wang proved the $C^2$ estimates for $k=n-1$ and $n-2$ in \cite{Ren, Ren1}. When $2<k<n$, the $C^2$ estimates for the equations of prescribed curvature
measures were also proved in \cite{Guan12, Guan09}, where $f(V, \nu) = \langle V, \nu \rangle \Tilde{f}(V)$.
Ivochkina considered the Dirichlet problem of the equation \eqref{ht-Eq-9} on domains in $\mathbb{R}^n$ and obtained the $C^2$ estimates according to the dependence of $f$ on $\nu$ under some extra conditions in \cite{Iv1,Iv2}.
Caffarelli-Nirenberg-Spruck \cite{Ca} and Guan-Guan \cite{Guan02} proved the $C^2$ estimates when $f$ was independent of $\nu$ and when $f$ depended only on $\nu$, respectively. Moreover, some results have been derived by Li-Oliker \cite{Li-Ol} on the unit sphere, Barbosa-de Lira-Oliker \cite{Ba-Li} on space forms, Jin-Li \cite{Jin} on hyperbolic space and Andrade-Barbosa-de Lira \cite{Al} on the warped product manifold. In particular, Chen-Li-Wang \cite{CLW} generalized the results in \cite{Guan-Ren15}, and the results of Ren-Wang \cite{Ren} were extended to $(n-2)$-convex hypersurfaces in the warped product manifold.
The organization of the paper is as follows.
In Sect. 2
we start with some preliminaries.
The $C^0$, $C^1$ and $C^2$ estimates are given in Sect. 3.
In Sect. 4 we finish the proof of Theorem \ref{Main}.
\section{Preliminaries}
\subsection{Star-shaped hypersurfaces in the warped product
manifold}
Let $M$ be a compact Riemannian manifold with the metric $g'$ and $I$ be an open interval in $\mathbb{R}$. Assume that $\lambda : I\rightarrow \mathbb{R}^{+}$ is a positive differentiable function and
$\lambda'>0$. For example, when $M=\mathbb{S}^n$,
\begin{eqnarray*}
\lambda(r)=\begin{cases}r \quad \quad \mbox{on}~[0,\infty)\\
\sin r\quad \mbox{on}~[0,\frac{\pi}{2})\\
\sinh r \quad \mbox{on}~ [0,\infty)
\end{cases} \Longrightarrow \overline{M}= \begin{cases}\mathbb{R}^{n+1}\\
\mathbb{S}^{n+1}\\
\mathbb{H}^{n+1}.
\end{cases}
\end{eqnarray*}
The manifold $\overline{M}=I\times_{\lambda}M$ is called the warped product if it is endowed with the metric
\begin{eqnarray*}
\overline{g}^{2}=dr^2+\lambda^2(r) g'.
\end{eqnarray*}
The metric in $\overline{M}$ is denoted by $\langle\cdot,\cdot\rangle$. The corresponding Riemannian connection in $\overline{M}$ will be denoted by $\overline{\nabla}$. The usual connection in $M$ will be denoted by $\nabla'$. The curvature tensors in $M$ and $\overline{M}$ will be denoted by $R$ and $\overline{R}$ respectively.
Let $\{e_1,\cdots,e_{n}\}$ be an orthonormal frame field in $M$ and let $\{\theta_1,\cdots,\theta_{n}\}$ be the associated dual frame.
The connection forms $\theta_{ij}$ and curvature forms $\Theta_{ij}$ in M satisfy the structural equations
\begin{align*}
&d\theta_i=\sum_j\theta_{ij}\wedge\theta_j,\quad \theta_{ij}=-\theta_{ji} ,\\
&d\theta_{ij}-\sum_k\theta_{ik}\wedge\theta_{kj}=\Theta_{ij}=-\frac{1}{2}\sum_{k,l}R_{ijkl}\theta_k
\wedge\theta_l.
\end{align*}
An orthonormal frame in $\overline{M}$ may be defined by $\overline{e}_i=\frac{1}{\lambda}e_i$, $1\leq i\leq n$, and $\overline{e}_0=\frac{\partial}{\partial r}$. The associated dual frame is given by $\overline{\theta}_i=\lambda\theta_i$, $1\leq i \leq n$, and $\overline{\theta}_0=dr$.
Then, we have the following lemma (See \cite{HLW}).
\begin{lemma}
Given a differentiable function $r : M \rightarrow I$, its graph is defined by the hypersurface
\begin{eqnarray*}
\Sigma=\{(r(u),u): u \in M\}.
\end{eqnarray*}
Then, the tangential vector takes the
form
$$V_i=\lambda\overline{e}_i+r_i \overline{e}_0,$$
where $r_i$ are the components of the differential $dr=r_i \theta^i$.
The induced metric on $\Sigma$ has
$$g_{ij}=\lambda^2(r)\delta_{ij}+r_i r_j,$$
and its inverse is given by
$$g^{ij}=\frac{1}{\lambda^2}(\delta_{ij}-\frac{r^i r^j}{v^2}).$$
We also have the outward unit normal vector of $\Sigma$
$$\nu=\frac{1}{v}\bigg(\lambda \overline{e}_0 -r^i\overline{e}_i\bigg),$$
where $v=\sqrt{\lambda^2+|\nabla'r|^2}$ with $\nabla'r=r^ie_i$.
Let $h_{ij}$ be the second fundamental form of $\Sigma$ in term of the tangential vector fields $\{X_1,
..., X_n\}$. Then,
$$h_{ij}=-\langle\overline{\nabla}_{X_j} X_i, \nu\rangle=
\frac{1}{v}\bigg(-\lambda r_{ij}+2\lambda'r_ir_j+\lambda^2\lambda' \delta_{ij}\bigg)$$
and
$$h^i_j=\frac{1}{\lambda^2 v}(\delta_{ik}-\frac{r^i r^k}{v^2})\bigg(-\lambda r_{kj}+2\lambda'r_kr_j+\lambda^2\lambda' \delta_{kj}\bigg),$$
where $r_{ij}$ are the components of the Hessian $\nabla'^{2} r=\nabla' dr $ of $r$ in $M$.
\end{lemma}
Let $\Gamma_k$ be the connected component of $\{\kappa\in \mathbb{R}^n \mid \sigma_m(\kappa)>0, m=1,\cdots, k\}$ containing the positive cone. The operator $\sigma_k(\kappa)$ for $\kappa=(\kappa_1, \cdots, \kappa_n)\in \Gamma_k$ is defined by
$$\sigma_k(\kappa)=\sum_{1\leq i_1<\cdots<i_k\leq n} \kappa_{i_1} \kappa_{i_2}\cdots \kappa_{i_k}.$$
A smooth closed hypersurface $\Sigma \subset \overline{M}$ is called
$(\eta, k)$-convex if $\mu(\eta) \in \Gamma_k$ at every point $V\in \Sigma$,
where $\Gamma_k$ is the Garding cone
\begin{eqnarray*}\label{cone}
\Gamma_{k}=\{\mu \in \mathbb{R} ^n: \sigma_{j}(\mu)>0, \forall ~ 1\leq j \leq k\}.
\end{eqnarray*}
For convenience, we introduce the following notations:
\begin{eqnarray*}
G(\eta):= \left(\frac{\sigma_k(\eta)}{\sigma_l(\eta)}\right)^{\frac{1}{k-l}},\quad G^{ij}:=\frac{\partial G}{ \partial \eta_{ij}}, \quad
G^{ij, rs}:= \frac{\partial^2 G}{\partial \eta_{ij} \partial \eta_{rs}}, \quad
F^{ii}:=\sum_{k\neq i} G^{kk}.
\end{eqnarray*}
Thus,
$$G^{ii}= \frac{1}{k-l} \left(\frac{\sigma_k(\eta)}{\sigma_l(\eta)}\right)^{\frac{1}{k-l}-1} \frac{\sigma_{k-1}(\eta| i)\sigma_l(\eta)-\sigma_k(\eta)\sigma_{l-1}(\eta| i)}{\sigma_l^2(\eta)}.$$
If $\eta=\mbox{diag}(\mu_1, \mu_2, \cdots, \mu_n)$ and $\mu_1 \leq \mu_2\leq \cdots \leq \mu_n$, then we have
$$G^{11}\geq G^{22} \geq \cdots \geq G^{nn}, \quad F^{11} \leq F^{22} \leq \cdots \leq F^{nn}.$$
Note that $\sum_iF^{ii} = (n-1) \sum_iG^{ii} \geq (n-1) \left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}$ and
\begin{equation}\label{Fi}
F^{ii} \geq F^{22} \geq \frac{1}{n(n-1)} \sum_j F^{jj}, \quad \forall~ i\geq 2.
\end{equation}
To handle the ellipticity of the equation \eqref{Eq}, we need the
following important propositions, whose proofs are the same as that of
Proposition 2.2.3 in \cite{CQ1}.
\begin{proposition}\label{th-lem-07}
Let $\eta$ be a diagonal matrix with $\mu(\eta)\in \Gamma_k$, $0\leq l \leq k-2$ and $k\geq 3$. Then
\begin{eqnarray*}
-G^{1i, i1}(\eta)=\frac{G^{11}-G^{ii}}{\eta_{ii}-\eta_{11}}, \quad \forall~ i\geq 2.
\end{eqnarray*}
\end{proposition}
\begin{proposition}\label{ellipticconcave}
Let $\Sigma$ be a smooth $(\eta, k)$-convex closed hypersurface in $\overline{M}$
and $0\leq l< k-1$. Then the operator
\begin{eqnarray*}
G(\eta_{ij}(V))=\left(\frac{\sigma_k(\mu(\eta))}{\sigma_{l}(\mu(\eta))}\right)^{\frac{1}{k-l}}
\end{eqnarray*}
is elliptic and concave with respect to $\eta_{ij}(V)$. Moreover we have
\begin{eqnarray*}
\sum G^{ii} \geq \left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}.
\end{eqnarray*}
\end{proposition}
The Codazzi equation is a commutation formula for the first order derivative of $h_{ij}$ given by
\begin{equation*}
h_{ijk}-h_{ikj}=\overline{R}_{0ijk}
\end{equation*}
and the Ricci identity is a commutation formula for the second order derivatives of $h_{ij}$ (see \cite[Lemma 2.2]{CLW}). From these, the following lemma can be derived.
\begin{lemma}
Let $\overline{X}$ be a point of $\Sigma$ and $\{E_{0}=\nu,E_1,\cdots,E_n\}$ be an adapted frame field such that each $E_i$ is a principal direction and $\omega_i^k=0$ at $\overline{X}$. Let $(h_{ij})$ be the second quadratic form of $\Sigma$. Then, at the point $\overline{X}$, we have
\begin{equation}\label{hii}
h_{ii11}-h_{11ii}=h_{11}h_{ii}^2-h_{11}^2h_{ii}+2(h_{ii}-h_{11})\overline{R}_{i1i1}+h_{11}\overline{R}
_{i0i0}-h_{ii}\overline{R}_{1010}+\overline{R}_{i1i0;1}-\overline{R}_{1i10;i}.
\end{equation}
\end{lemma}
Consider the functions
\begin{eqnarray*}
\tau=\langle V, \nu\rangle, \quad \quad \Lambda(r)=\int_{0}^{r}\lambda(s) d s
\end{eqnarray*}
with the position vector field
\begin{eqnarray*}
V=\lambda(r)\frac{\partial}{\partial_r}.
\end{eqnarray*}
Then, we need the following lemma for $\tau$ and $\Lambda$.
\begin{lemma}\label{supp}
Let $\tau$, $\Lambda$ be functions as above, then we have
\begin{eqnarray}\label{1d-lad}
\nabla_{E_i} \Lambda =\lambda \langle \overline{e}_0, E_i\rangle E_i,
\end{eqnarray}
\begin{eqnarray}\label{1d-tau}
\nabla_{E_i} \tau = \sum_j \left(\nabla_{E_j}\Lambda\right) h_{ij},
\end{eqnarray}
\begin{eqnarray}\label{2d-lad}
\nabla^2_{E_i, E_j} \Lambda=\lambda^{\prime}g_{ij}-\tau h_{ij}
\end{eqnarray}
and
\begin{eqnarray}\label{2d-tau}
\nabla^2_{E_i, E_j} \tau=-\tau\sum_k h_{ik}h_{kj}+ \lambda^{\prime}h_{ij}+\sum_k \left( h_{ijk}-\overline{R}_{0ijk}\right) \nabla_{E_k} \Lambda.
\end{eqnarray}
\end{lemma}
\begin{proof}
See Lemma 2.2, Lemma 2.6 and Lemma 2.3 in \cite{CLW}, \cite{Guan15} or \cite{Jin} for the details.
\end{proof}
\section{A priori estimates}
In order to prove Theorem \ref{Main}, we use the degree theory for the
nonlinear elliptic equation developed in \cite{Li89} and the proof
here is similar to those in \cite{Al,Jin,Li-Sh,Li-Ol}. First, we consider
the family of equations for $0\leq t\leq 1$
\begin{eqnarray}\label{Eq2}
\frac{\sigma_k(\mu(\eta))}{\sigma_l(\mu(\eta))}=f^t( V, \nu(V)),
\end{eqnarray}
where
$f^t=tf(r,u, \nu)+(1-t)\phi(r) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}$, $\zeta(r)=\frac{\lambda'}{\lambda}$ and $\phi$ is a positive function which satisfies the following conditions:
(a) $\phi(r)>0$,
(b) $\phi(r)\geq 1$ for $r\leq r_1$,
(c) $\phi(r)\leq 1$ for $r\geq r_2$,
(d) $\phi^{\prime}(r)<0$.
\subsection{$C^0$ Estimates}
Now, we can prove the following proposition which asserts that the
solution of the equation \eqref{Eq2} has uniform $C^0$ bounds.
\begin{proposition}\label{n-2-C^0}
Under the assumptions \eqref{ASS1} and \eqref{ASS2}, if the $(\eta,k)$-convex hypersurface ${\Sigma}=\{(r(u),u)\mid
u \in M \}\subset \overline{M}$ satisfies the equation
\eqref{Eq2} for a given $t \in (0, 1]$, then
\begin{eqnarray*}
r_1<r(u)<r_2, \quad \forall \ u \in M.
\end{eqnarray*}
\end{proposition}
\begin{proof}
Assume $r(u)$ attains its maximum at $u_0 \in M$ and
$r(u_0)\geq r_2$, then recall
\begin{eqnarray*}
h^i_j=\frac{1}{\lambda^2 v}(\delta_{ik}-\frac{r^i r^k}{v^2})\bigg(-\lambda r_{kj}+2\lambda'r_kr_j+\lambda^2\lambda' \delta_{kj}\bigg),
\end{eqnarray*}
which implies together with the fact that the matrix $r_{ij}$ is
non-positive definite at $u_0$
\begin{eqnarray*}
h^{i}_{j}(u_0)=\frac{1}{\lambda^3}\bigg(-\lambda r_{ij}+\lambda^2\lambda' \delta_{ij}\bigg)\geq \frac{\lambda'}{\lambda} \delta_{ij}.
\end{eqnarray*}
Then
\begin{equation*}
\eta^i_j(u_0)= H \delta^i_j-h^i_j\geq\frac{(n-1)\lambda'}{\lambda}\delta_{ij}.
\end{equation*}
Note that $\frac{\sigma_{k}}{\sigma_{l}}$ for $0\leq l\leq k-2$ is monotone increasing in each eigenvalue on
$\Gamma_{k}$ by Proposition \ref{ellipticconcave}. Thus
\begin{eqnarray*}
\frac{\sigma_k(\mu(\eta))}{\sigma_{l}(\mu(\eta))} \geq
\frac{\sigma_k(\frac{(n-1)\lambda'}{\lambda}\delta_{ij})}{\sigma_{l}(\frac{( n-1)\lambda'}{\lambda}\delta_{ij})}=\frac{C_n^k}{C_n^l}(\frac{( n-1)\lambda'}{\lambda})^{k-l}=\frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}.
\end{eqnarray*}
So, we arrive at $u_0$
\begin{eqnarray*}
tf(r,u, \nu)+(1-t)\phi(r) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}\geq
\frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}.
\end{eqnarray*}
Thus, we obtain at $u_0$
\begin{eqnarray*}
f(r,u, \nu)\geq \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l},
\end{eqnarray*}
which is in contradiction to \eqref{ASS2}. Thus, we have $r(u)<
r_2$ for $u \in M$. Similarly, we can obtain $r(u)> r_1$
for $u \in M$.
\end{proof}
Now, we prove the following uniqueness result.
\begin{proposition}\label{Uni}
There exists an unique $(\eta,k)$-convex solution to the
equation \eqref{Eq2} with $t=0$, namely $\Sigma_0=\{(r(u), u) \in \overline{M} \mid
r(u)=r_0\}$, where $r_0$ satisfies $\phi(r_0)=1$.
\end{proposition}
\begin{proof}
Let $\Sigma_0$ be a solution of \eqref{Eq2} for $t=0$, then
\begin{eqnarray*}
\frac{\sigma_k}{\sigma_l}(\mu(\eta))-\phi(r) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}=0.
\end{eqnarray*}
Assume $r(u)$ attains its maximum $r_{max}$ at $u_0 \in
M$, then we have at $u_0$
\begin{equation*}
h^{i}_{j}=\frac{1}{\lambda^3}\bigg(-\lambda r_{ij}+\lambda^2\lambda' \delta_{ij}\bigg),
\end{equation*}
which implies together with the fact that the matrix $r_{ij}$ is
non-positive definite at $u_0$
\begin{eqnarray*}
\frac{\sigma_k(\mu(\eta))}{\sigma_{l}(\mu(\eta))} \geq
\frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}.
\end{eqnarray*}
By the equation \eqref{Eq2}
\begin{eqnarray*}
\phi(r_{max})\geq 1.
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
\phi(r_{min})\leq 1.
\end{eqnarray*}
Thus, since $\phi$ is a decreasing function, we obtain
\begin{eqnarray*}
\phi(r_{min})=\phi(r_{max})=1.
\end{eqnarray*}
We conclude
\begin{equation*}
r(u)=r_0, \quad\forall~ u \in M,
\end{equation*}
where $r_0$ is the unique solution of
$\phi(r_0)=1$.
\end{proof}
\subsection{$C^1$ Estimates}
In this section, we establish the gradient estimates for the equation \eqref{Eq2}.
The treatment in this section follows that of \cite[Lemma 3.1]{CLW}.
We recall that a star-shaped hypersurface $\Sigma$ in $\overline{M}$ can be represented by
\begin{equation*}
\Sigma=\{V(u)=(r(u),u)\mid u\in{M}\},
\end{equation*}
where $V$ is the position vector field of hypersurface $\Sigma$ in $\overline{M}$.
We define a function $\tau=\langle V,\nu\rangle$. It is clear that
$$\tau=\frac{\lambda^2}{\sqrt{\lambda^2+|\nabla' r|^2}}=\frac{\lambda^2}{v}.$$
\begin{theorem}\label{n-2-C1e}
Under the assumption \eqref{ASS3}, if the closed star-shaped $(\eta,k)$-convex hypersurface $\Sigma=\{(r(u),u)\in \overline{M}\mid u \in M\}$ satisfies the curvature equation \eqref{Eq2}
and $r$ has positive upper and lower
bounds, then there exists a constant $C$ depending only on $n, k, l, \|\lambda\|_{C^1}, \inf_{\Sigma} r, \sup_{\Sigma} r, \inf_{\Sigma}f$ and $\|f\|_{C^1}$ such that
\begin{equation*}
|D r|\leq C.
\end{equation*}
\end{theorem}
\begin{proof}
It is sufficient to obtain a positive lower bound of $\tau$. We consider the function
\begin{equation*}
\Phi=-\ln\tau+\gamma(\Lambda),
\end{equation*}
where $\gamma(\Lambda)$ is a function which will be chosen later. Assume that $\Phi$ attains its maximum value at the point $u_0$. If $V$ is parallel to the normal direction $\nu$ at $u_0$, we have $\langle V,\nu\rangle=|V|$. Thus our result holds. So we assume $V$ is not parallel to the normal direction $\nu$ at $u_0$. We can choose the local orthonormal frame $\{E_1,\cdots,E_n\}$ on $\Sigma$ satisfying
\begin{equation*}
\langle V, E_1\rangle\neq 0, \quad \mbox{and} \quad \langle V,
E_i\rangle=0, \quad \forall ~ i\geq 2.
\end{equation*}
Obviously, $V=\langle V, E_1\rangle E_1+ \langle V, \nu\rangle \nu$.
Then, we arrive at $u_0$
\begin{eqnarray}\label{Par-1}
0=\Phi_i= - \frac{\nabla_{E_i}\tau}{\tau}+ \gamma^{\prime} \nabla_{E_i} \Lambda,
\end{eqnarray}
\begin{eqnarray}\label{Par-2}
0\geq \Phi_{ii}=- \frac{\nabla^2_{E_i, E_i}\tau}{\tau}+ \frac{|\nabla_{E_i}\tau|^2}{\tau^2}+ \gamma^{\prime} \nabla^2_{E_i,E_i} \Lambda+\gamma^{\prime\prime} |\nabla_{E_i} \Lambda|^2.
\end{eqnarray}
From Lemma \ref{supp}, \eqref{Par-1} and \eqref{Par-2}, we have
\begin{equation*}
\begin{split}
0\geq -\frac{1}{\tau}(-\tau h_{il}h_{li}+\lambda^{\prime} h_{ii}+(h_{iil}-\overline{R}_{0iil})\Lambda_l)
+(\gamma^{\prime\prime}+(\gamma^{\prime})^2)\Lambda_i^2+\gamma^{\prime}(\lambda^{\prime} g_{ii}-\tau h_{ii}).
\end{split}
\end{equation*}
By \eqref{1d-tau} and \eqref{Par-1}, we obtain
\begin{eqnarray}\label{Par-3}
h_{11}=\tau \gamma^{\prime}, \quad \quad h_{i1}=0, \quad \forall ~i\geq 2.
\end{eqnarray}
Therefore, it is possible to rotate the coordinate system such that $\{E_1, \cdots, E_n\}$ are the principal curvature directions of the second fundamental form $(h_{ij})$, i.e., $h_{ij}=h_{ii}\delta_{ij}.$ Thus, from \eqref{1d-tau}-\eqref{2d-tau}, \eqref{Par-2} and \eqref{Par-3}, we get
\begin{eqnarray}\label{ineq-1}
0 & \geq & F^{ii}h_{ii}^2-\frac{1}{\tau}\lambda^{\prime}F^{ii}h_{ii}-
\frac{1}{\tau}F^{ii}(h_{iil}-\overline{R}_{0iil})\Lambda_l\\
\nonumber&&+(\gamma^{\prime\prime}+(\gamma^{\prime})^2)
F^{ii}\Lambda_i^2-\gamma^{\prime}\tau F^{ii}h_{ii}+\gamma^{\prime}\lambda^{\prime}F^{ii}g_{ii} \\
\nonumber & = & F^{ii}h_{ii}^2-\frac{1}{\tau}F^{ii}h_{ii1}\Lambda_1+
\frac{1}{\tau}F^{ii}\overline{R}_{0ii1}\Lambda_1+(\gamma^{\prime\prime}+
(\gamma^{\prime})^2) F^{11}\Lambda_1^2\\
\nonumber && +\gamma^{\prime}\lambda^{\prime}
F^{ii}g_{ii}-\frac{1}{\tau}\lambda^{\prime}F^{ii}h_{ii}-\gamma^{\prime}\tau F^{ii}h_{ii}.
\end{eqnarray}
Since $\eta_{ii}= \sum_{j\neq i} h_{jj}$, then
\begin{equation}\label{ht-c2-03}
\begin{aligned}
\sum_i F^{ii} h_{ii} =&\sum_i \left( \sum_k G^{kk} -G^{ii}\right) \left(\frac{1}{n-1} \sum_l \eta_{ll} -\eta_{ii}\right)\\
=& \sum_i G^{ii} \eta_{ii}\\
=& \frac{1}{k-l} \left(\frac{\sigma_k(\eta)}{\sigma_l(\eta)}\right)^{\frac{1}{k-l}-1} \frac{\sum_i \eta_{ii}\sigma_{k-1}(\eta| i)\sigma_l(\eta)-\sigma_k(\eta) \sum_i \eta_{ii}\sigma_{l-1}(\eta| i)}{\sigma_l^2(\eta)}\\
=& \tilde{f},
\end{aligned}
\end{equation}
where $\tilde{f}=f^{\frac{1}{k-l}}$.
Note that the curvature equation \eqref{Eq} can be written as
\begin{equation}\label{ht-eq-2}
G(\eta)=\tilde{f}.
\end{equation}
Differentiating \eqref{ht-eq-2} with respect to $E_1$, we obtain
$$G^{ii} \eta_{ii1}= d_V\widetilde{f}(\nabla_{E_1}V)+h_{11}d_{\nu}\widetilde{f}(E_1).$$
In fact
\begin{eqnarray}\label{ht-c2-110}
\nonumber F^{ii}h_{ii1}&=&\sum_i\left (\sum_jG^{jj}-G^{ii}\right )h_{ii1}\\
\nonumber&=&\sum_iG^{ii}\eta_{ii1}\\
&=&d_V\widetilde{f}(\nabla_{E_1}V)+h_{11}d_{\nu}\widetilde{f}(E_1).
\end{eqnarray}
Putting \eqref{1d-lad}, \eqref{Par-3}, \eqref{ht-c2-03} and \eqref{ht-c2-110} into \eqref{ineq-1}, we derive
\begin{eqnarray*}
\nonumber0 &\geq & F^{ii}h_{ii}^2-\frac{1}{\tau}F^{ii}(h_{ii1}-\overline{R}_{0ii1})\Lambda_1
+(\gamma^{\prime\prime}+(\gamma^{\prime})^2)F^{11}\Lambda_1^2
+\gamma^{\prime}\lambda^{\prime}F^{ii}g_{ii}-\frac{\lambda^{\prime}}{\tau}\widetilde{f}
-\gamma^{\prime}\tau\widetilde{f}\\
\nonumber&=&F^{ii}h_{ii}^2 -\frac{1}{\tau}d_V\widetilde{f}(\nabla_{E_1}V)\Lambda_1-\gamma^{\prime}d_{\nu}\widetilde{f}(E_1)\Lambda_1
-\frac{1}{\tau} F^{ii}\overline{R}_{0i1i}\Lambda_1\\
\nonumber &&+(\gamma^{\prime\prime}+(\gamma^{\prime})^2)
F^{11}\Lambda_1^2+\gamma^{\prime}\lambda^{\prime}F^{ii}g_{ii}-\frac{\lambda^{\prime}}{\tau}\widetilde{f}
-\gamma^{\prime}\tau\widetilde{f}\\
\nonumber &=&F^{ii}h_{ii}^2 -\frac{1}{\tau}\left(\langle V,E_1\rangle d_V\widetilde{f}(\nabla_{E_1}V)+\lambda^{\prime}\widetilde{f}\right)-\gamma^{\prime}
d_{\nu}\widetilde{f}(E_1)\langle V,E_1\rangle\\
&&-\frac{1}{\tau} F^{ii}\overline{R}_{0i1i}\langle V,E_1\rangle
+(\gamma^{\prime\prime}+(\gamma^{\prime})^2)
F^{11}{\langle V,E_1\rangle}^2+\gamma^{\prime}\lambda^{\prime}F^{ii}g_{ii}
-\gamma^{\prime}\tau\widetilde{f}.
\end{eqnarray*}
Since $V=\langle V,E_1\rangle E_1+\langle V,\nu\rangle \nu$, we have
\begin{eqnarray*}
d_V\widetilde{f}(V,\nu)&=&\langle V,E_1\rangle d_V\widetilde{f}(\nabla_{E_1}V)+\langle V,\nu\rangle d_V\widetilde{f}(\nabla_{\nu}V).
\end{eqnarray*}
From \eqref{ASS3} and $V=\lambda\frac{\partial}{\partial r}$, we see that
\begin{eqnarray*}
0&\geq & \frac{\partial}{\partial r}\left(\lambda^{k-l}f\right)=\frac{\partial}{\partial r}\left(\lambda^{k-l}\widetilde{f}^{k-l}\right)\\
&=&(k-l)(\lambda\widetilde{f})^{k-l-1}\left(\lambda^{\prime}\widetilde{f}+d_V\widetilde{f}\right)\\
&=&(k-l)(\lambda\widetilde{f})^{k-l-1}\left(\lambda^{\prime}\widetilde{f}+\langle V,E_1\rangle d_V\widetilde{f}(\nabla_{E_1}V)+\langle V,\nu\rangle d_V\widetilde{f}(\nabla_{\nu}V)\right).
\end{eqnarray*}
It follows that
$$-\left(\lambda^{\prime}\widetilde{f}+\langle V,E_1\rangle d_V\widetilde{f}(\nabla_{E_1}V)\right)\geq
\langle V,\nu\rangle d_V\widetilde{f}(\nabla_{\nu}V),$$
which implies
\begin{eqnarray}\label{eqam}
\nonumber 0 &\geq& F^{ii}h_{ii}^2 +d_V\widetilde{f}(\nabla_{\nu}V)-\gamma^{\prime}
d_{\nu}\widetilde{f}(E_1)\langle V,E_1\rangle
-\frac{1}{\tau} F^{ii}\overline{R}_{0i1i}\langle V,E_1\rangle\\
&&+(\gamma^{\prime\prime}+(\gamma^{\prime})^2)
F^{11}{\langle V,E_1\rangle}^2+\gamma^{\prime}\lambda^{\prime}F^{ii}g_{ii}
-\gamma^{\prime}\tau\widetilde{f}.
\end{eqnarray}
Choosing the function $\gamma(r)=\frac{\alpha}{r}$ for a positive constant $\alpha$, we get
\begin{equation}\label{eqna-2}
\gamma^{\prime}(r)=-\frac{\alpha}{r^2},\quad\quad \gamma^{\prime\prime}(r)=\frac{2\alpha}{r^3}.
\end{equation}
By \eqref{Par-3} and the choice of function $\gamma(r)$, we have $h_{11}<0$ at $u_0$. From $H>0$ we know that
\begin{equation}\label{F11}
F^{11}=\sum_{j \neq 1} G^{jj} \geq \frac{1}{2} \sum_i G^{ii} =\frac{1}{2(n-1)}\sum_iF^{ii}\geq \frac{1}{2}\left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}}.
\end{equation}
Putting \eqref{eqna-2} into \eqref{eqam}, we have
\begin{eqnarray}\label{eqmm}
\nonumber 0&\geq& F^{ii}h_{ii}^2 +d_V\widetilde{f}(\nabla_{\nu}V)+\frac{\alpha}{r^2}
d_{\nu}\widetilde{f}(E_1)\langle V,E_1\rangle
-\frac{1}{\tau} F^{ii}\overline{R}_{0i1i}\langle V,E_1\rangle\\
\nonumber &&+(\frac{2\alpha}{r^3}+\frac{\alpha^2}{r^4})
F^{11}{\langle V,E_1\rangle}^2-\frac{\alpha}{r^2}\lambda^{\prime}F^{ii}g_{ii}
+\frac{\alpha}{r^2}\tau\widetilde{f}\\
\nonumber &\geq &\tau^2\frac{\alpha^2}{r^4}F^{11}+(\frac{2\alpha}{r^3}+\frac{\alpha^2}{r^4})
F^{11}{\langle V,E_1\rangle}^2 +\frac{\alpha}{r^2}
d_{\nu}\widetilde{f}(E_1)\langle V,E_1\rangle\\
\nonumber&&-\frac{1}{\tau} F^{ii}\overline{R}_{0i1i}\langle V,E_1\rangle
+d_V\widetilde{f}(\nabla_{\nu}V)
-\frac{\alpha}{r^2}\lambda^{\prime}F^{ii}g_{ii}
+\frac{\alpha}{r^2}\tau\widetilde{f}\\
\nonumber&=&\frac{\alpha^2}{r^4}F^{11}|V|^2+\frac{2\alpha}{r^3}F^{11}\langle V,E_1\rangle^2+\frac{\alpha}{r^2}
d_{\nu}\widetilde{f}(E_1)\langle V,E_1\rangle\\
&&-\frac{1}{\tau} F^{ii}\overline{R}_{0i1i}\langle V,E_1\rangle
+d_V\widetilde{f}(\nabla_{\nu}V)
-\frac{\alpha}{r^2}\lambda^{\prime}\sum_iF^{ii}
+\frac{\alpha}{r^2}\tau\widetilde{f},
\end{eqnarray}
where the last equality comes from $|V|^2=\langle V,E_1\rangle^2+\langle V,\nu\rangle^2$.
Since $V=\langle V,E_1\rangle E_1+\langle V,\nu\rangle\nu$, we see that $V \bot Span\{E_2,\cdots,E_n\}$. On the other hand, $E_1,\nu \bot Span\{E_2,\cdots,E_n\}$. It is possible to choose the coordinate system such that $\overline{e}_1\bot Span\{E_2,\cdots,E_n\}$, which
implies that the pair $\{V,\overline{e}_1\}$ and $\{\nu,E_1\}$ lie in the same plane and
$$Span\{E_2,\cdots,E_n\}=Span\{\overline{e}_2,\cdots,\overline{e}_n\}.$$
Therefore, we can choose $E_2 = \overline{e}_2 ,\cdots,E_n = \overline{e}_n$. The vector $\nu$ and $E_1$ can be decomposed into
\begin{equation}\label{R-1}
\nu=\langle\nu,\overline{e}_0\rangle \overline{e}_0+\langle\nu,\overline{e}_1\rangle \overline{e}_1
=\frac{\tau}{\lambda}\overline{e}_0+\langle\nu,\overline{e}_1\rangle \overline{e}_1,
\end{equation}
\begin{equation}\label{R-2}
E_1=\langle E_1,\overline{e}_0\rangle \overline{e}_0+\langle E_1,\overline{e}_1\rangle \overline{e}_1.
\end{equation}
By \eqref{R-1} and \eqref{R-2}, we obtain
\begin{eqnarray}\label{eqnt}
\overline{R}_{0i1i} &=& \overline{R}(\nu,E_i,E_1,E_i)\\
\nonumber&=& \frac{\tau}{\lambda}\langle E_1,\overline{e}_0\rangle\overline{R}(\overline{e}_0,\overline{e}_i,
\overline{e}_0,\overline{e}_i)+\langle\nu,\overline{e}_1\rangle\langle E_1,\overline{e}_1\rangle\overline{R}(\overline{e}_1,
\overline{e}_i,\overline{e}_1,\overline{e}_i)\\
\nonumber&=&\frac{\tau}{\lambda}\langle E_1,\overline{e}_0\rangle\overline{R}(\overline{e}_0,\overline{e}_i,
\overline{e}_0,\overline{e}_i)-\tau\frac{\langle\nu,\overline{e}_1\rangle^2}{\langle E_1,V\rangle}
\overline{R}(\overline{e}_1,\overline{e}_i,\overline{e}_1,\overline{e}_i)\\
\nonumber&=&\tau\left(\frac{1}{\lambda}\langle E_1,\overline{e}_0\rangle\overline{R}(\overline{e}_0,\overline{e}_i,
\overline{e}_0,\overline{e}_i)-\frac{\langle\nu,\overline{e}_1\rangle^2}{\langle E_1,V\rangle}
\overline{R}(\overline{e}_1,\overline{e}_i,\overline{e}_1,\overline{e}_i)\right),
\end{eqnarray}
where the second equality comes from $0=\overline{R}_{ijk0}$ (see \cite[Lemma 2.1]{CLW}) and the third equality comes from $0=\langle V,\overline{e}_1\rangle$.
From \eqref{F11} and \eqref{eqnt}, \eqref{eqmm} becomes
\begin{eqnarray*}
\nonumber 0 &\geq&
C_1F^{11}\frac{\alpha^2}{r_2^4}-C_2F^{11}\frac{\alpha}{r_2^3}-C_3\frac{\alpha}{r_1^2}|d_{\nu}\widetilde{f}
(E_1)|-C_4F^{11}-|d_V\widetilde{f}(\nabla_{\nu}V)|-C_5\\
&\geq& C\alpha^2F^{11}-C_2\alpha F^{11}-C\alpha|d_{\nu}\widetilde{f}
(E_1)|-CF^{11}-|d_V\widetilde{f}(\nabla_{\nu}V)|-C,
\end{eqnarray*}
where $r_1=\inf_{\Sigma}r$, $r_2=\sup_{\Sigma}r$, $C_1$, $C_2$, $C_3$, $C_4$, $C_5$, $C$ depend on $n$, $r_1$, $r_2$, $\inf_{\Sigma}f$, the $C^1$ bounds of $\lambda$ and curvature $\overline{R}$. Thus, we have a contradiction when $\alpha$ is large enough. Hence, $V$ is parallel to the
normal $\nu$ which implies the lower bound of $\tau$.
\end{proof}
\subsection{$C^2$ Estimates}
Under the assumptions \eqref{ASS1}-\eqref{ASS3}, from Theorem \ref{n-2-C^0} and Theorem \ref{n-2-C1e} we know that
there exists a positive constant $C$ depending on $\inf_{\Sigma} r$ and $\|r\|_{C^1}$ such that
$$\frac{1}{C} \leq \inf_{\Sigma} \tau \leq
\tau \leq \sup_{\Sigma} \tau \leq C.$$
\begin{theorem}\label{n-2-C2e}
Let $\Sigma$ be a closed star-shaped $(\eta,k)$-convex hypersurface satisfying the curvature equation \eqref{Eq2} and the assumption of Theorem \ref{Main}. Then, there exists a constant $C$ depending only on $n,k,l,\inf_{\Sigma}\lambda',\inf_{\Sigma}r,\inf_{\Sigma}f,\|r\|_{C^2}$, $\|f\|_{C^2}$ and the curvature $\overline{R}$ such that for $1\leq i\leq n$
\begin{equation*}
|\kappa_{i}(u)|\le C, \quad \forall ~ u \in M.
\end{equation*}
\end{theorem}
\begin{proof}
Since $\eta\in\Gamma_{k}\subset\Gamma_{1}$, we see that the mean curvature is positive. It suffices to prove that the largest curvature $\kappa_{\mbox{max}}$ is uniformly bounded from above. Take the auxiliary function
\begin{equation*}
P=\ln\kappa_{\mbox{max}}-\ln(\tau-a)+A\Lambda,
\end{equation*}
where $a=\frac{1}{2}\inf_{\Sigma}(\tau)$ and $A>1$ is a constant to be determined later. Assume that $P$ attains its maximum value at point $u_0$. We can choose a local orthonormal frame $\{E_{1}, E_{2}, \cdots, E_{n}\}$ near $u_0$ such that
$$h_{ij}=h_{ii}\delta_{ij}, \quad h_{11}\geq h_{22}\geq \cdots \geq h_{nn}$$
at $u_0$. Recalling that $\eta_{ii}=\sum_{k\neq i}h_{kk}$, we have
$$\eta_{11}\leq \eta_{22}\leq\cdots\leq\eta_{nn}.$$
It follows that
$$G^{11}\geq G^{22}\geq\cdots\geq G^{nn},\quad F^{11}\leq F^{22}\leq\dots\leq F^{nn}.$$
We define a new function $Q$ by
$$ Q=\ln h_{11}-\ln(\tau-a)+A\Lambda.$$
Since $h_{11}(u_0)=\kappa_{\mbox{max}}(u_0)$ and $h_{11}\leq\kappa_{\mbox{max}}$ near $u_0$, $Q$ achieves a maximum at $u_0$.
Hence
\begin{equation}\label{ht-c2-01}
0=Q_i=\frac{h_{11i}}{h_{11}}-\frac{\tau_i}{\tau-a}+A\Lambda_i,
\end{equation}
\begin{equation}\label{ht-c2-02}
0\geq F^{ii}Q_{ii}=F^{ii}(\ln h_{11})_{ii}-F^{ii}(\ln(\tau-a))_{ii}+AF^{ii}\Lambda_{ii}.
\end{equation}
We divide our proof into four steps.
\textbf{Step 1}: We claim that
\begin{eqnarray}\label{ht-c2-1}
0 &\geq& -\frac{2}{h_{11}}\sum_{i\geq2}G^{1i,i1}h_{1i1}^2 -\frac{F^{ii}h_{11i}^2}{h_{11}^2}
+\frac{aF^{ii}h_{ii}^2}{\tau-a}
+F^{ii}\frac{\tau_i^2}{(\tau-a)^2}\\
\nonumber&&+(A\lambda'-C_0)\sum_iF^{ii}-C_0h_{11}-\frac{C_0(1+\sum_iF^{ii})}{h_{11}} -AC_0,
\end{eqnarray}
where $C_0$ depends on $\inf_{\Sigma}r,\inf_{\Sigma}f,\|r\|_{C^2},\|f\|_{C^2}$ and the curvature $\overline{R}$.
Using a similar argument as in \eqref{ht-c2-110}, we obtain
\begin{equation}\label{eqe}
F^{ii}h_{iij}=d_V\widetilde{f}(\nabla_{E_j}V)+h_{jj}d_{\nu}\widetilde{f}(E_j).
\end{equation}
By Gauss formula and Weingarten formula,
\begin{equation}\label{tau}
\tau_i=h_{ii}\langle V,E_i\rangle, \quad \tau_{ii}=\sum_j h_{iji}\langle V, E_j\rangle-\tau h_{ii}^2+h_{ii}.
\end{equation}
Combining \eqref{eqe}, \eqref{tau} and the Codazzi formula, we have
\begin{eqnarray}\label{ht-c2-05}
\nonumber-F^{ii}(\ln(\tau-a))_{ii}&=&-F^{ii}(\frac{\tau_{ii}}{\tau-a}-\frac{\tau_i^2}{(\tau-a)^2})\\
\nonumber&=&-\frac{1}{\tau-a}\sum_jh_{jj}(d_{\nu}\widetilde{f})(E_j)\langle V,E_j\rangle
-\frac{1}{\tau-a}\sum_jd_V\widetilde{f}(\nabla_{E_j}V)\langle V,E_j\rangle\\
\nonumber&&-\frac{1}{\tau-a}\sum_{i,j}\overline{R}_{0iji}F^{ii}\langle V,E_j\rangle+\frac{\tau F^{ii}h_{ii}^2}{\tau-a}-\frac{1}{\tau-a}
\widetilde{f}+F^{ii}\frac{\tau_i^2}{(\tau-a)^2}\\
&\geq&-\frac{1}{\tau-a}\sum_jh_{jj}(d_{\nu}\widetilde{f})(E_j)\langle V,E_j\rangle+\frac{\tau F^{ii}h_{ii}^2}{\tau-a}
+F^{ii}\frac{\tau_i^2}{(\tau-a)^2}-C_1\sum_iF^{ii}-C_1,
\end{eqnarray}
where $C_1$ depends on $\inf_{\Sigma}r,\|r\|_{C^1}$, $\|f\|_{C^1}$ and the curvature $\overline{R}$.
Differentiating \eqref{ht-eq-2} with respect to $E_1$ twice, we obtain
$$G^{ij}\eta_{ij1}=d_V\widetilde{f}(\nabla_{E_1}V)+h_{1k}d_{\nu}\widetilde{f}(E_k)$$
and
\begin{eqnarray*}
\nonumber G^{ij,rs}\eta_{ij1}\eta_{rs1}+G^{ij}\eta_{ij11} &=&d_V^2\widetilde{f}(\nabla_{E_1}V,\nabla_{E_1}V)+d_V\widetilde{f}(\nabla_{E_1,E_1}^2V) \\
\nonumber &&+2d_Vd_{\nu}\widetilde{f}
(\nabla_{E_1}V,\nabla_{E_1}\nu)+d_{\nu}^2\widetilde{f}(\nabla_{E_1}\nu,\nabla_{E_1}\nu)+d_{\nu}\widetilde{f}(\nabla_{E_1,E_1}^2\nu)\\
&\geq& \sum_ih_{1i1}(d_{\nu}\widetilde{f})(E_i)-C_2h_{11}^2-C_2h_{11}-C_2.
\end{eqnarray*}
Applying the concavity of $G$, we derive
$$-G^{ij,rs} \eta_{ij1} \eta_{rs1} \geq -2 \sum_{i\geq2} G^{1i, i1} \eta_{1i1}^2=-2 \sum_{i\geq 2} G^{1i, i1} h_{1i1}^2.$$
It follows that
\begin{equation}\label{Fii}
F^{ii}h_{ii11}=G^{ii}\eta_{ii11}\geq -2\sum_{i\geq2}G^{1i,i1}h_{1i1}^2+ \sum_i h_{1i1} (d_{\nu} \widetilde{f})(E_i) -C_2h_{11}^2-C_2h_{11}-C_2,
\end{equation}
where $C_2$ depends on $\|f\|_{C^2}$, $\|r\|_{C^2}$.
Combining \eqref{hii}, \eqref{Fii} and the Codazzi formula, we have
\begin{eqnarray}\label{ht-c2-06}
\nonumber F^{ii}(\ln h_{11})_{ii}&=&\frac{F^{ii}h_{11ii}}{h_{11}}-\frac{F^{ii}h_{11i}^2}{h_{11}^2}\\
\nonumber&\geq& -\frac{2}{h_{11}}\sum_{i\geq2}G^{1i,i1}h_{1i1}^2 +\frac{1}{h_{11}}
\sum_i h_{11i} (d_{\nu}\widetilde{f})(E_i)+\frac{1}{h_{11}}\sum_i\overline{R}_{01i1}(d_{\nu}\widetilde{f})(E_i)\\
\nonumber&&+h_{11}\widetilde{f}-F_{ii}h_{ii}^2
+\frac{F^{ii}}{h_{11}}(\overline{R}_{1i10;i}-\overline{R}_{i1i0;1})+2F^{ii}\overline{R}
_{i1i1}-F^{ii}\overline{R}_{i0i0}\\
\nonumber&&
-C_2\sum_iF^{ii}+\frac{\widetilde{f}}{h_{11}}\overline{R}_{1010}
-\frac{F^{ii}h_{11i}^2}{h_{11}^2}-C_2h_{11}-\frac{C_2}{h_{11}}-C_2\\
&\geq&- \frac{2}{h_{11}}\sum_{i\geq 2}G^{1i, i1}h_{1i1}^2 +\frac{1}{h_{11}}
\sum_i h_{11i} (d_{\nu}\widetilde{f})(E_i)-F^{ii}h_{ii}^2\\
\nonumber&&-\frac{F^{ii}h_{11i}^2}{h_{11}^2}-C_3h_{11}-\frac{C_3}{h_{11}}-\frac{C_3\sum_iF^{ii}}{h_{11}}-C_3\sum_iF^{ii}-C_3,
\end{eqnarray}
where $C_3$ depends on $\inf_{\Sigma}f,\|f\|_{C^2}$, $\|r\|_{C^2}$ and the curvature $\overline{R}$.
By \eqref{2d-lad}, we derive
\begin{equation}\label{AFii}
AF^{ii}\Lambda_{ii}=A\lambda'F^{ii}g_{ii}-A\tau F^{ii}h_{ii}=A\lambda'\sum_iF^{ii}-A\tau\widetilde{f}.
\end{equation}
Substituting \eqref{ht-c2-05}, \eqref{ht-c2-06} and \eqref{AFii} into \eqref{ht-c2-02}, we get
\begin{eqnarray*}
0 &\geq& -\frac{2}{h_{11}}\sum_{i\geq2}G^{1i,i1}h_{1i1}^2 -\frac{F^{ii}h_{11i}^2}{h_{11}^2}
+\frac{1}{h_{11}}\sum_ih_{11i}(d_{\nu}\widetilde{f})(E_i)\\
\nonumber&& -\frac{1}{\tau-a} \sum_jh_{jj}(d_{\nu}\widetilde{f})(E_j)\langle V,E_j\rangle+\frac{aF^{ii}h_{ii}^2}{\tau-a}
+F^{ii}\frac{\tau_i^2}{(\tau-a)^2}\\
\nonumber&&+(A\lambda'-C_4)\sum_iF^{ii}-C_4h_{11}-\frac{C_4(1+\sum_iF^{ii})}{h_{11}}
-AC_4.
\end{eqnarray*}
By \eqref{ht-c2-01} and \eqref{tau}, we have
\begin{eqnarray*}
&&\frac{1}{h_{11}}\sum_ih_{11i}(d_{\nu}\widetilde{f})(E_i)-\frac{1}{\tau-a} \sum_jh_{jj}(d_{\nu}\widetilde{f})(E_j)\langle V,E_j\rangle \\
\nonumber&=&\sum_i\left(\frac{h_{11i}}{h_{11}}-\frac{\tau_i}{\tau-a}\right)(d_{\nu}\widetilde{f})(E_i)\\
\nonumber&=&-A\sum_i(d_{\nu}\widetilde{f})(E_i)\langle V,E_i\rangle \\
\nonumber&\geq&-AC_4,
\end{eqnarray*}
which implies the inequality \eqref{ht-c2-1}.
\textbf{Step 2}:
There exists a positive constant $\delta<\frac{1}{n-2}$ such that
$$\frac{C_{n-1}^{k-1} [1-(n-2)\delta]^{k-1} -(n-1) \delta C_{n-1}^{k-2} [1+(n-2)\delta]^{k-2} }{C_n^l [1+(n-2)\delta]^l } >\frac{C_{n-1}^{k-1}}{2C_n^l}.$$
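Such a $\delta$ exists, since the left-hand side is continuous in $\delta$ and, as $\delta\rightarrow 0^{+}$, it tends to $\frac{C_{n-1}^{k-1}}{C_n^l}>\frac{C_{n-1}^{k-1}}{2C_n^l}$; hence the inequality holds for every sufficiently small $\delta>0$.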
We claim that there exists a constant $B>1$ depending on $n ,k, l, \delta$, $\inf_{\Sigma} r$, $\inf_{\Sigma} f$, $\|r\|_{C^2}$, $\|f\|_{C^2}$ and the curvature $\overline{R}$, such that
\begin{equation*}
\frac{aF^{ii} h_{ii}^2}{2(\tau-a)}+\frac{A\lambda'-C_0}{2}\sum_i F^{ii}\geq C_0h_{11},
\end{equation*}
if $h_{11} \geq B$ and $A= \left( 4\|f\|_{C^0}^{1-\frac{1}{k-l}} \frac{k C_n^l}{(n-k+1) C_{n-1}^{k-1}\inf\lambda'} +\frac{27}{\inf\lambda'} \right) C_0$.
Case 1: $|h_{ii}|\leq \delta h_{11}$ for all $i\geq 2$.\\
In this case, we have
\begin{equation*}
|\eta_{11}| \leq (n-1) \delta h_{11}, \quad [1-(n-2)\delta]h_{11}\leq \eta_{22}\leq \cdots \leq \eta_{nn}\leq [1+(n-2)\delta]h_{11}.
\end{equation*}
By the definitions of $G^{ii}$ and $F^{ii}$, we obtain
\begin{equation*}
\begin{aligned}
\sum_i F^{ii}&=(n-1)\sum_i G^{ii}\\
&= \frac{n-1}{k-l} \left(\frac{\sigma_k(\eta)}{\sigma_l(\eta)}\right)^{\frac{1}{k-l}-1} \frac{(n-k+1)\sigma_{k-1}(\eta)\sigma_l(\eta)-(n-l+1)\sigma_k(\eta)\sigma_{l-1}(\eta)}{\sigma_l^2(\eta)}\\
&\geq \frac{C_n^k}{C_n^{k-1}} \left(\frac{\sigma_k(\eta)}{\sigma_l(\eta)}\right)^{\frac{1}{k-l}-1} \left(\frac{\sigma_{k-1}(\eta)}{\sigma_l(\eta)}\right) \\
&= \frac{C_n^k}{C_n^{k-1}} \left(\frac{\sigma_k(\eta)}{\sigma_l(\eta)}\right)^{\frac{1}{k-l}-1} \left(\frac{
\sigma_{k-1}(\eta|1)+\eta_{11}\sigma_{k-2}(\eta|1)}{\sigma_l(\eta)}\right) \\
&\geq \frac{n-k+1}{k} f^{\frac{1}{k-l}-1} \frac{C_{n-1}^{k-1} [1-(n-2)\delta]^{k-1} -(n-1) \delta C_{n-1}^{k-2} [1+(n-2)\delta]^{k-2} }{C_n^l [1+(n-2)\delta]^l } h_{11}^{k-1-l}\\
&\geq f^{\frac{1}{k-l}-1} \frac{ (n-k+1)C_{n-1}^{k-1}}{2kC_n^l } h_{11},
\end{aligned}
\end{equation*}
which implies that
$$C_0h_{11} \leq \frac{A\lambda'-27C_0}{2} \sum_i F^{ii}.$$
Case 2: $h_{22} > \delta h_{11}$ or $h_{nn} <- \delta h_{11}$.\\
In this case, we have
\begin{equation*}
\begin{aligned}
\frac{a F^{ii} h_{ii}^2}{2(\tau-a)}&\geq \frac{a}{2(\sup \tau-a)} \left(F^{22} h_{22}^2+F^{nn} h_{nn}^2\right)\\
&\geq \frac{a\delta^2}{2(\sup \tau-a)} F^{22} h_{11}^2\\
&\geq \frac{a\delta^2}{2n(\sup \tau-a)} \sum_i G^{ii} h^2_{11} \\
&\geq \left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}} \frac{a\delta^2 h_{11}}{2n(\sup \tau-a)} h_{11}.
\end{aligned}
\end{equation*}
Then, we conclude that
$$\frac{a F^{ii} h_{ii}^2}{2(\tau-a)}\geq C_0 h_{11},$$
if
$$h_{11} \geq \left(\left(\frac{C_n^k}{C_n^l}\right)^{\frac{1}{k-l}} \frac{a\delta^2}{2n(\sup \tau-a)} \right)^{-1}C_0.$$
\textbf{Step 3}: We claim that
$$|h_{ii}|\leq C_5A, \quad \forall~i\geq2,$$
if $h_{11} \geq B>1$, where $C_5$ is a constant depending on $n, k, l$, $\inf_{\Sigma} r$, $\inf_
{\Sigma}f$, $\|r\|_{C^2}$, $\|f\|_{C^2}$ and the curvature $\overline{R}$.
Combining Step 1 and Step 2, we obtain
\begin{eqnarray}\label{ht-c2-32}
\nonumber0&\geq& - \frac{2}{h_{11}} \sum_{i\geq 2}G^{1i, i1}h_{1i1}^2 -\frac{F^{ii}h_{11i}^2}{h_{11}^2}
+\frac{aF^{ii} h_{ii}^2}{2(\tau-a)}+\frac{F^{ii}{\tau}_i^2}{(\tau-a)^2}\\
&& + \frac{A\lambda'-C_0}{2}\sum_i F^{ii}-\frac{C_0(1+\sum_iF^{ii})}{h_{11}}-AC_0.
\end{eqnarray}
Using \eqref{ht-c2-01}, the concavity of $G$ and the Cauchy-Schwarz inequality, we have
\begin{eqnarray*}
0&\geq& -\frac{1+\epsilon}{(\tau-a)^2} F^{ii} \tau_i^2 - (1+ \frac{1}{\epsilon})A^2 F^{ii} \Lambda_i^2\\
&&+\frac{aF^{ii} h_{ii}^2}{2(\tau-a)}+\frac{F^{ii}\tau_i^2}{(\tau-a)^2} + \frac{A\lambda'-C_0}{2}\sum_i F^{ii}-C_0(1+\sum_iF^{ii})-AC_0\\
&\geq& \left(\frac{a}{2(\tau-a)}-\frac{C_0\epsilon}{(\tau-a)^2}\right) F^{ii}h^2_{ii}- \left((1+ \frac{1}{\epsilon})A^2C_0- \frac{A\lambda'-C_0}{2}\right) \sum_i F^{ii}- \left(\sum_iF^{ii}+A+1\right)C_0,
\end{eqnarray*}
where we used $\tau_i=h_{ii} \langle V, E_i\rangle$ in the second inequality.
Choosing $\epsilon=\frac{a(\tau-a)}{4C_0}$, we obtain
\begin{equation}\label{ht-c2-31}
\begin{aligned}
0&\geq \frac{a}{4(\tau-a)} F^{ii}h^2_{ii}- \left((1+ \frac{4C_0}{a(\tau-a)})A^2C_0- \frac{A\lambda'-C_0}{2}\right) \sum_i F^{ii}-\left(\sum_iF^{ii}+A+1\right) C_0\\
&\geq \frac{a}{4(\sup \tau-a)} F^{ii}h^2_{ii}-\left((1+ \frac{4C_0}{a^2})A^2C_0- \frac{A\lambda'-C_0}{2}\right) \sum_i F^{ii}-\left(\sum_iF^{ii}+A+1\right) C_0.
\end{aligned}
\end{equation}
By \eqref{Fi} and \eqref{ht-c2-31}, we have
\begin{eqnarray*}
0&\geq& \frac{a}{4(\sup \tau-a)n(n-1)} \left(\sum_{k\geq 2} h_{kk}^2\right) \sum_iF^{ii}\\
&&-\left((1+ \frac{4C_0}{a^2})A^2C_0- \frac{A\lambda'-C_0}{2}+C_0+ \frac{(A+1)C_0}{(n-1)}\left(\frac{C_n^k}{C_n^l}\right)^{-\frac{1}{k-l}}\right) \sum_i F^{ii},
\end{eqnarray*}
which implies that
$$\sum_{k\geq 2} h_{kk}^2 \leq C_5^2A^2,$$
where $C_5$ is a constant depending on $n, k, l$, $\inf_{\Sigma} r$, $\inf _{\Sigma}f$, $\|r\|_{C^2}$, $\|f\|_{C^2}$ and the curvature $\overline{R}$.
\textbf{Step 4}:
We claim that there exists a constant $C$ depending on $n, k, l$, $\inf_{\Sigma}\lambda'$, $\inf_{\Sigma} r$, $\inf _{\Sigma}f$, $\|r\|_{C^2}$, $\|f\|_{C^2}$ and the curvature $\overline{R}$ such that $$h_{11}\leq C.$$
Without loss of generality, we assume that
\begin{equation}\label{ab}
h_{11} \geq \max \left\{B, \left(\frac{32n C_0 A^2 (\sup \tau -a)}{\varepsilon a}\right)^{\frac{1}{2}}, \frac{C_5 A}{\beta}\right\},
\end{equation}
where $\beta<\frac{1}{2}$ will be determined later.
Recalling $\tau_1=h_{11} \langle V, E_1\rangle$, by \eqref{ht-c2-01} and Cauchy-Schwarz inequality, we have
\begin{equation*}
\begin{aligned}
\frac{F^{11} h_{111}^2}{h_{11}^2}&\leq \frac{1+\varepsilon}{(\tau-a)^2} F^{11} \tau_1^2+(1+\frac{1}{\varepsilon}) A^2 F^{11}\langle V, E_1\rangle^2\\
&\leq \frac{F^{11} \tau_1^2}{(\tau-a)^2} +\frac{C_0\varepsilon F^{11} h_{11}^2}{(\tau-a)^2} +(\frac{1+\varepsilon}{\varepsilon}) C_0A^2 F^{11}.
\end{aligned}
\end{equation*}
Choosing $\varepsilon\leq \min\{\frac{a(\tau-a)}{16C_0},1\}$, we have
$$\frac{F^{11} h_{111}^2}{h_{11}^2}\leq \frac{F^{11} \tau_1^2}{(\tau-a)^2} +\frac{a F^{ii} h_{ii}^2}{16(\tau-a)} +\frac{2C_0A^2 F^{11}}{\varepsilon}.$$
Hence according to Step 3 and \eqref{ab}, we know that
\begin{equation}\label{ht-c2-41}
\begin{aligned}
\frac{F^{11} h_{111}^2}{h_{11}^2}\leq \frac{F^{11} \tau_1^2}{(\tau-a)^2} +\frac{a F^{ii} h_{ii}^2}{8(\tau-a)}
\end{aligned}
\end{equation}
and
$$|h_{ii}|\leq \beta h_{11}, \quad \forall i\geq 2.$$
Thus
$$\frac{1-\beta}{h_{11}-h_{ii}}\leq\frac{1}{h_{11}}\leq \frac{1+\beta}{h_{11}-h_{ii}}.$$
Combining this with Proposition \ref{th-lem-07}, we obtain
\begin{eqnarray}\label{ht-c2-42}
\nonumber\sum_{i\geq 2} \frac{F^{ii}h_{11i}^2}{h_{11}^2}
&=&\sum_{i\geq 2} \frac{F^{ii}-F^{11}}{h_{11}^2} h_{11i}^2 +\sum_{i\geq 2} \frac{F^{11}h_{11i}^2}{h_{11}^2}\\
\nonumber&\leq& \frac{1+\beta}{h_{11}} \sum_{i\geq 2} \frac{F^{ii}-F^{11}}{h_{11}-h_{ii}} h_{11i}^2+\sum_{i\geq 2} \frac{F^{11}h_{11i}^2}{h_{11}^2}\\
\nonumber&=&\frac{1+\beta}{h_{11}} \sum_{i\geq 2} \frac{G^{11}-G^{ii}}{\eta_{ii}-\eta_{11}} h_{11i}^2+\sum_{i\geq 2} \frac{F^{11}h_{11i}^2}{h_{11}^2}\\
&=&-\frac{1+\beta}{h_{11}} \sum_{i\geq 2}G^{1i, i1} h_{11i}^2 +\sum_{i\geq 2} \frac{F^{11}h_{11i}^2}{h_{11}^2}.
\end{eqnarray}
Using \eqref{ht-c2-01}, \eqref{ab}, the Cauchy-Schwarz inequality and the fact that $\tau_i=h_{ii}\langle V, E_i \rangle$, we have
\begin{eqnarray}\label{ht-c2-43}
\nonumber\sum_{i\geq 2} \frac{F^{11}h_{11i}^2}{h_{11}^2}&\leq& 2\sum_{i\geq 2} \frac{F^{11}\tau_i^2}{(\tau-a)^2}+2A^2\sum_{i\geq 2} F^{11} \langle V, E_i \rangle^2\\
\nonumber&\leq& 2\frac{C_0}{a^2} \sum_{i\geq 2} \frac{aF^{11}h_{ii}^2}{\tau-a}+2n C_0A^2 F^{11} \\
&\leq& \beta^2 \frac{2nC_0}{a^2} \frac{aF^{11}h_{11}^2}{\tau-a}+
\frac{a}{16 (\tau-a)} F^{11}h_{11}^2.
\end{eqnarray}
By the Cauchy-Schwarz inequality and the Codazzi formula, we have
\begin{eqnarray}\label{CSC}
-\frac{2}{h_{11}} \sum_{i\geq2}G^{1i,i1}h_{1i1}^2&=& -\frac{2}{h_{11}} \sum_{i\geq2}G^{1i,i1}(h_{11i}
+\overline{R}_{01i1})^2 \\
\nonumber &\geq& -\frac{2}{h_{11}} \sum_{i\geq2}G^{1i,i1}\left(\frac{3}{4}h_{11i}^2-3\overline{R}_{01i1}^2\right).
\end{eqnarray}
When we choose $\beta$ sufficiently small such that $ \beta\leq \min \left\{\sqrt{\frac{a^2}{32n C_0}}, \frac{1}{2}\right\}$, by \eqref{ht-c2-42}, \eqref{ht-c2-43}, \eqref{CSC} and Proposition \ref{th-lem-07}, we have
\begin{eqnarray}\label{ht-c2-44}
\nonumber\sum_{i\geq 2} \frac{F^{ii}h_{11i}^2}{h_{11}^2}
&\leq&-\frac{3}{2h_{11}} \sum_{i\geq 2}G^{1i, i1} h_{11i}^2 +\frac{a F^{11}h_{11}^2}{8 (\tau-a)}\\
\nonumber&\leq&-\frac{2}{h_{11}} \sum_{i\geq 2}G^{1i, i1} h_{1i1}^2+\frac{6}{h_{11}}\sum_{i\geq2}-G^{1i,i1}\overline{R}_{01i1}^2
+\frac{a F^{11}h_{11}^2}{8 (\tau-a)}\\
\nonumber &\leq&-\frac{2}{h_{11}} \sum_{i\geq 2}G^{1i, i1} h_{1i1}^2+6C_0\sum_{i\geq2}\frac{G^{11}-G^{ii}}{h_{11}-h_{ii}}+\frac{a F^{11}h_{11}^2}{8 (\tau-a)}\\
&\leq&-\frac{2}{h_{11}} \sum_{i\geq 2}G^{1i, i1} h_{1i1}^2+\frac{6C_0}{1-\beta}\sum_{i\geq2}(G^{11}-G^{ii})+\frac{a F^{11}h_{11}^2}{8 (\tau-a)},
\end{eqnarray}
if $h_{11}\geq B>1$. The last inequality comes from $\frac{1-\beta}{h_{11}-h_{ii}}\leq\frac{1}{h_{11}}<1$.
Note that
$$\sum_{i\geq2}(G^{11}-G^{ii})=nG^{11}-\sum_iG^{ii}\leq(n-1)\sum_iG^{ii}=\sum_iF^{ii}.$$
Then \eqref{ht-c2-44} gives that
\begin{eqnarray}\label{Gt}
\nonumber-\frac{2}{h_{11}} \sum_{i\geq 2}G^{1i, i1} h_{1i1}^2-\sum_{i\geq 2} \frac{F^{ii}h_{11i}^2}{h_{11}^2}
&\geq&-\frac{6C_0}{1-\beta}\sum_{i\geq2}(G^{11}-G^{ii})-\frac{a F^{11}h_{11}^2}{8 (\tau-a)}\\
&\geq&-12C_0\sum_iF^{ii}-\frac{a F^{11}h_{11}^2}{8 (\tau-a)}.
\end{eqnarray}
Substituting \eqref{ht-c2-41} and \eqref{Gt} into \eqref{ht-c2-32}, we obtain
\begin{eqnarray*}
\nonumber0&\geq& -\frac{F^{11}\tau_1^2}{(\tau-a)^2}-\frac{aF^{ii}h_{ii}^2}{8(\tau-a)}-12C_0\sum_iF^{ii}
-\frac{aF^{11}h_{11}^2}{8(\tau-a)}+\frac{aF^{ii}h_{ii}^2}{2(\tau-a)}\\
\nonumber&&+\frac{F^{ii}\tau_i^2}{(\tau-a)^2}+\frac{A\lambda'-C_0}{2}\sum_iF^{ii}-C_0(1+\sum_iF^{ii})-AC_0\\
\nonumber&\geq&\frac{a F^{ii}h_{ii}^2}{4(\tau-a)}+ \frac{A\lambda'-27C_0}{2} \sum_i F^{ii} -C_0(A+1)\\
&\geq&\frac{C_0}{2}h_{11}-C_0(A+1),
\end{eqnarray*}
which implies that
$$h_{11}\leq 2(A+1).$$
\end{proof}
\section{The proof of Theorem \ref{Main}}
In this section, we use the degree theory for nonlinear elliptic
equations developed in \cite{Li89} to prove Theorem \ref{Main}. The
proof here is similar to those in \cite{Al, Jin, Li-Sh}, so only a sketch will
be given below.
Based on a priori estimates in Theorem \ref{n-2-C^0},
Theorem \ref{n-2-C1e} and Theorem \ref{n-2-C2e}, we know that the
equation \eqref{Eq2} is uniformly elliptic. From \cite{Eva82},
\cite{Kry83} and Schauder estimates, we have
\begin{eqnarray}\label{C2+}
|r|_{C^{4,\alpha}(M)}\leq C
\end{eqnarray}
for any $(\eta,k)$-convex solution $\Sigma$ to the equation \eqref{Eq2}, where
the position vector of $\Sigma$ is $V=(r(u), u)$ for $u \in M$.
We define
\begin{eqnarray*}
C_{0}^{4,\alpha}(M)=\{r \in
C^{4,\alpha}(M): \Sigma \ \mbox{is}
\ (\eta,k)-\mbox{convex}\}.
\end{eqnarray*}
Let us consider $$F(.; t): C_{0}^{4,\alpha}(M)\rightarrow
C^{2,\alpha}(M),$$ which is defined by
\begin{eqnarray*}
F(r, u; t)=\frac{\sigma_k(\mu(\eta))}{\sigma_{l}(\mu(\eta))} -tf(r,u, \nu)-(1-t)\phi(r) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}.
\end{eqnarray*}
Let $$\mathcal{O}_R=\{r \in C_{0}^{4,\alpha}(M):
|r|_{C^{4,\alpha}(M)}<R\},$$ which clearly is an open
set of $C_{0}^{4,\alpha}(M)$. Moreover, if $R$ is
sufficiently large, $F(r, u; t)=0$ has no solution on $\partial
\mathcal{O}_R$ by a priori estimate established in \eqref{C2+}.
Therefore the degree $\deg(F(.; t), \mathcal{O}_R, 0)$ is
well-defined for $0\leq t\leq 1$. Using the homotopic invariance of
the degree, we have
\begin{eqnarray*}
\deg(F(.; 1), \mathcal{O}_R, 0)=\deg(F(.; 0), \mathcal{O}_R, 0).
\end{eqnarray*}
Proposition \ref{Uni} shows that $r_0=1$ is the unique
solution to the above equation for $t=0$. Direct calculation shows
that
\begin{eqnarray*}
F(sr_0; 0)= (1-\phi(sr_0)) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}(sr_0).
\end{eqnarray*}
Then
\begin{eqnarray*}
\delta_{r_0}F(r_0, u; 0)=\frac{d}{d s}|_{s=1}F(sr_0, u;
0)=-\phi'(r_0) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}(r_0)>0,
\end{eqnarray*}
where $\delta F(r_0, u; 0)$ is the linearized operator of $F$ at
$r_0$. Clearly, $\delta F(r_0, u; 0)$ takes the form
\begin{eqnarray*}
\delta_{w}F(r_0, u; 0)=-a^{ij}w_{ij}+b^i
w_i-\phi'(r_0) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}(r_0),
\end{eqnarray*}
where $a^{ij}$ is a positive definite matrix. Since
$-\phi'(r_0) \frac{C_n^k}{C_n^l}((n-1)\zeta(r))^{k-l}(r_0)>0,$
the operator $\delta_{r_0} F(r_0, u; 0)$ is invertible. Therefore,
\begin{eqnarray*}
\deg(F(.; 1), \mathcal{O}_R; 0)=\deg(F(.; 0), \mathcal{O}_R, 0)=\pm
1.
\end{eqnarray*}
So, we obtain a solution at $t=1$. This completes the proof of
Theorem \ref{Main}.
\bigskip
\bigskip
|
1,116,691,498,610 | arxiv | \section{Introduction}
The magnetized plasma modes, like lower hybrid (LH) and other modes
(e.g. electron and ion cyclotron modes) have found an important place in the context of magnetic confinement fusion studies. They are traditionally employed for the purpose of current drive and heating of fusion plasma in magnetic confinement devices. Lower hybrid waves have been extensively studied in a variety of contexts in laboratory and space plasmas \cite{mcclements1997, bingham1997, fabio, southwood1978}. The LH waves can impart their energy to plasma species through various mechanisms, such as electron heating via breaking of the lower hybrid wave \cite{fabio} or by providing energy to electrons via wave-particle interaction. An application of the latter is found in toroidal tokamak devices, where
Landau damping of an externally launched lower hybrid wave helps in driving the plasma current required for plasma confinement \cite{lhcd, LHCD_NF}. Furthermore, it is very common to observe lower hybrid waves and several interesting phenomena due to their breaking or turbulence in space plasmas \cite{retterer_ion_suprauroral, kelley1990_geophysical}. Localised lower hybrid turbulence is observed in density depletion regions in the early phases of an intense magnetic storm \cite{malingre2008lightning}. Many studies in the auroral region report observation of lower hybrid emissions, giving rise to solitary structures and/or ion heating \cite{retterer_ion_suprauroral, berthelier2008lightning}. The LH mode is unique in the sense that the dynamical response of both electrons and ions is relevant for this mode. Basically, the magnetized electron response couples with the un-magnetized ion dynamics for this particular mode to get excited.
The linear modes for magnetized plasma have, however, not been explored in the context of laser plasma interaction studies. With recent technological progress, quasi-static
magnetic fields of the order of $1.2$ kilo Tesla \cite{Nakamura} can be produced in the laboratory. With rapid advancements in technology it is quite likely that the regime of magnetized plasma response at laser frequencies
can be within reach of laboratory experiments. In the context of $CO_2$ lasers the magnetic field requirement to observe magnetized electron response
(i.e. $\omega_{ce} > \omega_l$) turns out to be around $1.2$ kilo Tesla.
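(A quick check of this number: for a $10\,\mu m$ wavelength, $\omega_l=2\pi c/\lambda\simeq 1.9\times10^{14}$ rad/s, and the field for which $\omega_{ce}=\omega_l$ is $B=m_e\omega_l/e\simeq 1.1$ kilo Tesla.)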
In view of this our group has been engaged in investigating
this particular regime with the help of particle-in-cell (PIC) simulations. Recently, we illustrated a possible new mechanism of direct ion heating by a laser pulse \cite{ayushi_EXB}. For relativistically intense lasers we had also demonstrated the formation of magnetosonic solitons \cite{kumar_soliton}. In this paper we show the direct coupling of the laser energy to electrostatic lower hybrid fluctuations in plasma. In addition, magnetosonic perturbations are also observed in the simulations. A detailed parametric study for the frequency regime of the excitations of these modes has been carried out. This paper has been organized as follows. In Sec.II, we have provided the details of the simulation configuration. Sec.III contains our numerical observations and analysis. Finally, Sec.IV provides the summary of the work.
\section{ Simulation Details}
We have carried out two dimensional PIC simulations using OSIRIS-4.0 \cite{hemker,Fonseca2002,osiris}. A schematic of the simulation geometry has been shown in Fig.\ref{schematic}. A rectangular box in x-y plane of dimensions $ L_x= 3000 c/\omega_{pe} $ and $ L_y = 100 c/\omega_{pe} $ has been chosen for simulation. Here $\omega_{pe} = \sqrt{4\pi n_0 e^2/m_e}$ is the plasma frequency corresponding to the plasma density $n_0$. The left side of the box upto $x = 500 c/\omega_{pe}$ is vacuum. Thereafter,
a uniform density plasma has been placed.
A p-polarized plane laser pulse is incident normally from the left
side. We have chosen the parameters associated with $ CO_2 $ laser pulse having a wavelength of
$10 \mu m$ for our studies. This choice reduces the requirement on the applied
external magnetic field by typically $10$ times compared to the conventional lasers
with a wavelength close to $1 \mu m$. This is because the
external magnetic field has
been chosen so as to have the lighter electron species magnetized and the ions
to remain un-magnetized at the laser frequency. This translates into the requirement of
$ \omega_{ce} > \omega_{l} > \omega_{ci}$ where $\omega_l$ is the laser frequency. To satisfy this condition, we have chosen external magnetic field such that $ \omega_{ce} = 2.5 \omega_{pe} = 12.5 \omega_l$. The motion of both electrons and ions
are tracked in the simulation.
A reduced ion to electron mass ratio of $25$ has been considered. In some cases the mass ratio of $100$ has also been chosen, which has been explicitly mentioned while discussing those results. The small mass ratio has been chosen to expedite the simulations.
The boundary condition along $y$-axis for particles as well as fields has been taken to be periodic.
However, along x-axis the absorbing boundary condition has been implemented.
In Table \ref{simulation_parameters} we list out the main simulation parameters for a
quick reference. We have also varied some parameters like magnetic field and the laser frequency in a couple of our simulation runs for exploring the conditions of the excitation of these modes. These specific choices have been specified
explicitly
where the results concerning them are discussed.
\begin{table}
\caption{ Values of simulation parameters in normalised and corresponding standard units for $m_i$=25}
\begin{tabular}{|p{1.5cm}||p{2.5cm}||p{2.5cm}|}
\hline
\textcolor{red}{Parameter}& \textcolor{red}{Normalised Value}& \textcolor{red}{Value in standard unit}\\
\hline
\hline
\multicolumn{3}{|c|}{\textcolor{blue}{Plasma Parameters}} \\
\hline
$n_o$&1 &$3 \times 10^{20}$ $ cm^{-3}$\\
\hline
$\omega_{pe}$&1&$10^{15}$Hz\\
\hline
$\omega_{pi}$&0.2 (for M/m = 25) &$0.2 \times 10^{15}$Hz\\
\hline
\multicolumn{3}{|c|}{\textcolor{blue}{Laser Parameters}} \\
\hline
$\omega_l$&$0.2\,\omega_{pe}$&$0.2 \times 10^{15}$Hz\\
\hline
$\lambda_{l}$& $31.4 c/ \omega_{pe}$ &$9.42\mu m$\\
\hline
Intensity&$a_{0} =0.5$&$3.5\times 10^{15} W/cm^2$ \\
\hline
\multicolumn{3}{|c|}{\textcolor{blue}{External Magnetic Field Parameters}} \\
\hline
$B_{z}$&2.5 &14.14 kT\\
\hline
\hline
\end{tabular}
\label{simulation_parameters}
\end{table}
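The physical values quoted in Table~\ref{simulation_parameters} follow from the normalised ones through the standard definitions of the plasma and cyclotron frequencies. The short Python sketch below (our illustration, independent of the PIC code, using approximate SI constants) reproduces them for the quoted density $n_0 = 3\times 10^{20}$ $cm^{-3}$, up to rounding.
\begin{verbatim}
import math

# approximate SI constants
e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12

n0 = 3e20 * 1e6                              # quoted density, converted to m^-3
w_pe = math.sqrt(n0 * e**2 / (eps0 * m_e))   # electron plasma frequency ~ 1e15 rad/s

w_l = 0.2 * w_pe                             # laser frequency (normalised value 0.2)
lam = 2 * math.pi * c / w_l                  # wavelength ~ 9.4 micron (CO2 laser)

B_z = 2.5 * m_e * w_pe / e                   # field giving w_ce = 2.5 w_pe, ~ 1.4e4 T

print(w_pe, lam, B_z)
\end{verbatim}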
\section{Observations and analysis}
From the laser and plasma parameters of Table-\ref{simulation_parameters}, the laser frequency is
$\omega_l = 0.2 \omega_{pe}$. The plasma is overdense for this laser frequency. The laser
propagates along $\hat{x}$ axis and is incident normally on the plasma target. It is linearly polarised with electric field of the laser pointing along $\hat{y}$.
As expected, in the absence of any applied external magnetic field, the simulation
shows that the laser pulse gets reflected from the plasma target and no disturbance gets excited in the plasma medium. We then carried out simulations in the presence of an external magnetic
field pointing along $\hat{z}$ direction.
The magnetic field was chosen to satisfy the condition $\omega_{ce} > \omega_l > \omega_{ci}$. This ensures that at the laser oscillation time period, the electrons would exhibit a magnetized response and the ions on the contrary will remain
unmagnetized. The results of these simulations have been presented
in this section. We observe that with the addition of external magnetic field,
a part of laser energy gets absorbed by the plasma medium, despite
it being overdense. This is clearly evident from
Fig. \ref{n_colorplot} where the excited disturbances in the plasma medium have been shown at various times. The plasma surface starts from $x = 500$. The color plot shows the charge density in 2-D.
The blue and green lines denote the $y$ and $x$ component of electric field ($E_y$ and $E_x$) respectively. At $t=0$
the plasma is undisturbed and no charge density fluctuations can be seen in the medium. The $x$ component of the electric field ($E_x$) is also zero initially. One observes only the $y$ component of electric field ($E_y$) associated with the laser pulse in the vacuum region at $t = 0$.
As the laser hits the plasma surface, the plasma medium gets disturbed by it. The effect can be seen
in the subplots of Fig.\ref{n_colorplot} where the plots are shown at times $t = 500 $ and $t = 1000$.
The charge density disturbances are evident in the zoomed plots at these times.
Presence of both components of electric fields $E_x$ and $E_y$ in the plasma medium can be seen.
With time, these disturbances propagate inwards, towards the deeper region of the
plasma medium (comparison of plots at $t=500$ and $t = 1000$ illustrates this).
These disturbances are not random fluctuations but appear to be quite regular.
In Fig. \ref{div_curl}(a), we have plotted the evolution of $I_1 =\int \mid \nabla \cdot \vec{E} \mid dx dy$ and
$I_2 = \int \mid \nabla \times \vec{E} \mid dx dy$ (integrated over the bulk region of the plasma) to
determine the electrostatic/electromagnetic character of these disturbances. It can be observed that though both $I_1$ and $I_2$ start with zero initially, they
increase with time. However, $I_1$ increases much more rapidly and clearly dominates over $I_2$.
This suggests that the fluctuations have a dominating electrostatic
character. This is further borne out from Fig. \ref{div_curl}(b) where we show directional distribution of $\vec{E}$ in the plasma region. It demonstrates that the
electric field $\vec{E}$ is dominantly along $\hat{x}$. We have also evaluated the spatial FFT of
$E_x$ in bulk plasma region at $t=1000$ (when the laser has been reflected back from the system) and observe that the spectrum peaks at a particular value of $k_x = 0.73$ (fig. \ref{2d_fft}).
The scale length of the fluctuations appears to be longer in the deeper region of the target, as can be observed in Fig.\ref{n_colorplot}.
At a significantly later time (shown at $t = 4000$ and $t = 6000$ in Fig. \ref{soliton_heating}) formation of another structure
quite distinct from the oscillations discussed so far can be seen clearly in the plot. This structure is significantly longer compared to the oscillations observed at earlier times (Fig. \ref{soliton_heating}).
We now make an attempt at understanding the excitation and detail characterization of these two kind of structures in the subsections below.
\subsection{Identification of short scale fluctuation as lower hybrid mode}
We now show that the short scale electrostatic disturbance generated in the plasma is essentially the
lower hybrid mode. We also illustrate the physical mechanism and the condition that need to be satisfied for such excitations.
It should be noted that in the presence of the external magnetic field and the oscillating electric field of the laser, both ions and electrons experience an $\vec{E} \times \vec{B}$ drift along $\hat{x}$.
Here $\vec{E}$ is the electric field of the laser pulse along $\hat{y}$
and $\vec{B}$ is the applied external magnetic field. Since the electric field of the laser is oscillating in time the magnitude of this drift is dependent on the charge species and takes the following form \cite{Nishikawa}
\begin{equation}
\label{EXB}
\vec{V}_{s,\vec{E} \times \vec{B}}(t) = \frac{ \omega_{cs}^2}{ \omega_{cs}^2 - \omega_{l}^2} \frac{ \vec{E}(t) \times \vec{B}}{B^2}
\end{equation}
Here, the suffix $s = e, i $ stand for electron and ion species respectively. Thus, $\omega_{ce}$ and $\omega_{ci}$ denote the electron and ion cyclotron frequency respectively and $\omega_l$ is the laser frequency.
We have maintained the condition of $ \omega_{ce} > \omega_{l} > \omega_{ci}$ in all our simulation studies here.
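To make the species asymmetry contained in Eq.~(\ref{EXB}) explicit, the drift prefactor $\omega_{cs}^2/(\omega_{cs}^2-\omega_l^2)$ can be evaluated for the normalised parameters of Table~\ref{simulation_parameters}, i.e. $\omega_{ce}=2.5$, $\omega_{ci}=\omega_{ce}m_e/m_i=0.1$ and $\omega_l=0.2$ (in units of $\omega_{pe}$, for $m_i/m_e=25$). A short Python sketch (our illustration, not part of the PIC runs):
\begin{verbatim}
w_l  = 0.2                     # laser frequency, units of w_pe
w_ce = 2.5                     # electron cyclotron frequency
w_ci = w_ce / 25.0             # ion cyclotron frequency, m_i/m_e = 25

def drift_factor(w_cs):
    # prefactor of the oscillating E x B drift in Eq. (1)
    return w_cs**2 / (w_cs**2 - w_l**2)

print(drift_factor(w_ce))      # electrons: ~ +1.006 (essentially E x B / B^2)
print(drift_factor(w_ci))      # ions:      ~ -0.333 (reduced and reversed)
\end{verbatim}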
The difference in this drift speed between the two species
gives rise to a current in the plasma medium, viz.,
$\vec{J} = e n_0 (\vec{V}_{i,\vec{E} \times \vec{B}}(t) - \vec{V}_{e,\vec{E} \times \vec{B}}(t) ) $.
This current is directed along $\hat{x}$. The spatial variation of laser electric field along $x$ at laser wavelength makes the
$\vec{\nabla} \cdot \vec{J} (= \partial J_x/\partial x)$ finite. The
finite divergence of the current
leads to space charge fluctuation from the
continuity equation. This charge density fluctuation is responsible for the creation
of the electrostatic field $E_x$. Once $E_x$ is set up, the two charge species also respond to the
force ($q E_x$) along $\hat{x}$.
However, the electrons being magnetized, they do not accelerate
freely by $E_x$. On the other hand ions being un-magnetized get directly accelerated by this field.
It is observed in the simulations that the dominant $x$ component of the electron velocity
is given by the $\vec{E} \times \vec{B}$ drift expression of Eq.(\ref{EXB})
in the laser electric field, whereas for the ions the velocity gained by
direct acceleration through $E_x$ dominates. The plot in Fig.\ref{vx} shows the spatial variation of the
$x$ component of velocity of the two species obtained from simulations.
The two species move together, however, the amplitude of their velocity differs, with the ions showing
higher amplitude.
The frequency spectrum of the observed oscillation has been shown in the subplot alongside. The spectrum peaks
at the frequency of $\omega = 0.176$ which differs from the laser frequency of $0.2$. It matches closely with the
lower hybrid frequency of $0.186$ evaluated from the analytical expression of
\begin{equation}
\label{lh_dispersion}
\omega_{LH} = {\left({\frac{1}{\omega_{pi}^{2}} + \frac{1}{\omega_{ce}\omega_{ci}}} \right)}^{-1/2}
\end{equation}
It is thus clear that the laser excites an electrostatic mode in the medium in which both electrons and ions have significant roles to play.
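As a cross-check of the quoted value, Eq.~(\ref{lh_dispersion}) can be evaluated directly for the normalised parameters of Table~\ref{simulation_parameters} ($\omega_{pi}=0.2$, $\omega_{ce}=2.5$, $\omega_{ci}=0.1$ for $m_i/m_e=25$); a one-line Python check (our illustration):
\begin{verbatim}
w_pi, w_ce, w_ci = 0.2, 2.5, 0.1             # units of w_pe, m_i/m_e = 25
w_LH = (1.0 / w_pi**2 + 1.0 / (w_ce * w_ci)) ** -0.5
print(w_LH)                                  # ~ 0.186, close to the observed peak at 0.176
\end{verbatim}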
\subsection{Necessary condition for the generation of LH mode}
We have shown that the generation of the electrostatic field is linked to the difference in the $\vec{E} \times \vec{B}$ drift of the two species in the presence of the external
magnetic field and the oscillating transverse electric field of the laser. Clearly, this requires
that the laser field should penetrate the plasma. It should be noted that we have chosen
the plasma medium to be overdense for the chosen laser frequency in our simulations here.
Thus the laser field in the normal course would not have penetrated the plasma region.
However, the laser radiation is able to penetrate the plasma despite having a frequency lower than the electron plasma frequency, due to the presence of the applied external magnetic field.
The oscillatory electric field of the electromagnetic field is directed
orthogonal to the applied magnetic field. This suggests
that the $X$ mode is the relevant mode for our simulation geometry.
The dispersion curve of the $X$ mode has the
characteristics shown in Fig.\ref{dispersion} (adapted from Boyd and Sanderson \cite{boyd_sanderson_book}).
The figure shows
the presence of a stop band between $\omega_{LH}$ and $\omega_L$ indicated by
the shaded region, which has been denoted as Region II in the figure.
Region I ranges in frequency from $0$ to $\omega_{LH}$ and is a pass band, as is
Region III from $\omega_L$ to $\omega_{UH}$, for the incoming electromagnetic wave.
Here, $\omega_{LH}$ is the lower hybrid frequency defined in Eq.~(\ref{lh_dispersion}), and $\omega_{UH}=\sqrt{\omega_{pe}^2 + \omega_{ce}^2}$ is the upper hybrid oscillation frequency.
The frequencies $\omega_L$ and $\omega_R$ shown in Fig. \ref{dispersion} corresponds to the left and right hand cut off defined by the following expressions
\begin{equation}
\label{wl}
\omega_L= [\omega_{pe}^2+\omega_{pi}^2+(\omega_{ci}+\omega_{ce})^2/4]^{1/2} -(\omega_{ce}-\omega_{ci})/2
\end{equation}
\begin{equation}
\label{wr}
\omega_R= [\omega_{pe}^2+\omega_{pi}^2+(\omega_{ci}+\omega_{ce})^2/4]^{1/2} +(\omega_{ce}-\omega_{ci})/2
\end{equation}
On the basis of this distinction between the three regions we now identify the necessary condition for the excitation of the lower hybrid mode. We choose three different
values of the laser frequency corresponding to these three regions. To have a better distinction between
the three regions for these simulation runs, we have chosen the ratio of ion to electron mass to be $100$. This helps in having the three frequencies
$\omega_{l}$, $\omega_{ce}$ and $\omega_{ci}$ to be well separated.
Since the physics that needs to be addressed can be explored in a simple 1-D geometry,
we have chosen the same for this set of runs. We have already, in all our earlier studies,
shown that 2-D effects do not alter any physics associated with the main theme of the investigation
\cite{ayushi_EXB}. The simulation box in 1-D geometry is chosen to be larger with $L_x=4000c/\omega_{pe}$ and the simulation duration was also increased. We chose to give simulation runs for three different values of laser frequency corresponding to the three different regions in the $X$ mode dispersion curve (marked as region I, II and III in fig. \ref{dispersion}). The three values of frequencies are $\omega_{l1}=0.08 \omega_{pe}$, $\omega_{l2}=0.16 \omega_{pe}$ and $\omega_{l3}=0.5 \omega_{pe}$ falling in region I, II and III respectively. For our given simulation parameters $\omega_L$, $\omega_{LH}$ and $\omega_{pi}$ are $0.376$, $0.093$ and $0.1$ respectively. The stop-band for propagation of the $X$ mode electromagnetic wave in plasma lies between $\omega_L$ and $\omega_{LH}$. Thus we expect that while the laser would propagate inside plasma for runs with frequency $\omega_{l1}$(pass band) and $\omega_{l3}$(pass band), there should be total reflection of the laser for frequency lying in $\omega_{l2}$(stop band). This is indeed
what we observe in the simulations. Since there is no propagation of the laser field at $\omega_{l2}$
the laser does not interact with plasma and it is not possible to excite lower hybrid oscillations in this particular case. On the other hand, the other two frequencies,
$\omega_{l1}$ and $\omega_{l3}$ lie in the pass-band and hence are expected to interact with
the plasma.
We, however, observe that the laser energy gets coupled into plasma only when the laser frequency
lies in region I, where the electrostatic lower hybrid wave gets excited by the mechanism discussed above.
This is because the lower hybrid frequency is nearby, so the mode is readily excited. In region III,
where the laser frequency is much higher than the lower hybrid frequency, the coupling of the laser energy with the plasma is found to be very weak. The laser simply gets transmitted in this particular case.
This can be clearly seen from the evolution of
kinetic and field energies shown in Fig. \ref{energy} for frequencies lying in the three regions.
It thus appears that to excite the lower hybrid wave we need to choose the frequency of the
driving laser pulse to be in region I, i.e. lower than the lower hybrid resonance to satisfy the pass
band criterion. Furthermore, it should also not be in Region III, which satisfies the pass band criterion but where the laser frequency is far too high above the lower hybrid wave frequency to couple with it.
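The region boundaries for the $m_i/m_e=100$ runs quoted above follow from Eqs.~(\ref{lh_dispersion}) and (\ref{wl}). The short Python sketch below (our illustration) reproduces $\omega_{LH}\simeq0.093$ and $\omega_L\simeq0.376$ and classifies the three chosen laser frequencies:
\begin{verbatim}
w_pe, w_pi = 1.0, 0.1                        # m_i/m_e = 100
w_ce, w_ci = 2.5, 0.025

w_LH = (1.0 / w_pi**2 + 1.0 / (w_ce * w_ci)) ** -0.5           # ~ 0.093
w_L  = ((w_pe**2 + w_pi**2 + (w_ce + w_ci)**2 / 4) ** 0.5
        - (w_ce - w_ci) / 2)                                   # ~ 0.376

for w_laser in (0.08, 0.16, 0.5):
    if w_laser < w_LH:
        region = "I   (pass band, LH excitation)"
    elif w_laser < w_L:
        region = "II  (stop band)"
    else:
        region = "III (pass band, weak coupling)"
    print(w_laser, region)
\end{verbatim}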
\begin{table}
\caption{Frequencies and their respective normalised values for $m_i=100$ used in the study of the three dispersion regions.}
\begin{tabular}{|p{1.6cm}||p{1.6cm}||p{2.6cm}|}
\hline
{Frequency}& {Normalised value} &{Observation}\\
\hline
\hline
$\omega_{l1}$ (Region I)&0.08& absorption via excitation of LH\\
\hline
$\omega_{l2}$ (Region II)&0.16&stop-band, hence, no absorption\\
\hline
$\omega_{l3}$ (Region III)&0.5&pass band but no coupling of laser energy into plasma\\
\hline
\hline
\end{tabular}
\label{table_regions}
\end{table}
\subsection{Identification of long scale disturbances as Magnetosonic excitation}
We now discuss the other long scale disturbances that get generated in the plasma by the laser, which become evident very clearly at a later time (see Fig. \ref{soliton_heating}). These disturbances have an electromagnetic character, as perturbations in the $\hat{z}$ component of the magnetic field have also been observed. It can be observed from Fig. \ref{soliton} that while it is observed in the
perturbed magnetic field $B_z$ (the applied field has been subtracted) and $E_y$, no formation of such a structure is observed in $E_x$.
Furthermore, the ion and electron density perturbations both show this structure.
These structures appear as a result of the ponderomotive force that acts on the plasma due to the
finite longitudinal laser pulse width. The envelope of the laser pulse pushes the surface of the plasma target, creating a magnetosonic perturbation which propagates inside the plasma.
The frequency associated with the envelope of the laser pulse is low, so that both ions and electrons display a magnetized response, and hence it excites a magnetosonic perturbation. The single hump of the
disturbance testifies that it has been excited by the laser envelope. We
carried out simulations with different pulse widths of the laser and observe that the width of these structures scales
linearly with the laser pulse duration, as can be seen from Fig. \ref{pulse_width}. We also made
two successive laser pulses of different durations fall on the plasma target. In this case, as expected, two distinct
long scale structures get formed, indicating clearly that the temporal profile of the laser pulse is responsible
for this.
It is also interesting to note that since these structures are essentially excited by the envelope
of the laser pulse, they are independent of the laser frequency. Thus even when the
laser frequency lies in the stop band of Region II, we observe the formation of these structures.
This has been shown in Fig. \ref{soliton_heating} where we compare the formation of these long scale structures
for both the cases of laser frequencies lying in region I and region II and having the same pulse width. It should be noted that
in the former case the structures form along with the short scale lower hybrid (LH) excitations. However, when the laser frequency lies in region II, the laser is unable to penetrate the plasma; the short scale LH fluctuations are therefore absent, but the long scale
structure continues to be present. It is yet another demonstration of the fact that the ponderomotive
force due to the finite laser pulse induces the formation of these long scale structures.
The velocity of this long scale structure is found to be $0.24c$ which matches closely with the Alfv\'en speed ($v_A =(m_e/m_i)^{1/2} \times \omega_{ce}/ \omega_{pe}$) of the medium, which is $0.25c$ for our choice of parameters.
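For completeness, the quoted Alfv\'en speed follows directly from the normalised parameters; a one-line check for the reduced mass ratio $m_i/m_e=100$ (the value for which the quoted $0.25c$ is recovered):
\begin{verbatim}
v_A = (1.0 / 100.0) ** 0.5 * 2.5    # (m_e/m_i)^(1/2) * w_ce/w_pe, in units of c
print(v_A)                          # 0.25
\end{verbatim}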
At higher intensity of the laser field the ponderomotive force is higher and increases the amplitude
of these magnetosonic perturbations, driving them into the nonlinear regime. Such a disturbance then
forms magnetosonic solitons.
The properties of this structure then matches with KdV magnetosonic soliton as
has been reported earlier \cite{kumar_soliton}. This study can help in estimating the electromagnetic wave frequency in astrophysical observations where solitary structures and/or ion heating are observed.
\section{Conclusion}
To summarize, we have shown with the help of PIC simulations
the possibility of exciting electrostatic lower hybrid perturbations in a plasma using a laser in the presence of an external magnetic field. The plasma profile
was chosen to be sharp and the external magnetic field was chosen to be normal to both the laser
propagation direction and the oscillatory electric field. It was shown that the necessary condition for the LH excitation is that the laser frequency should lie in the lower pass
band of the $X$ mode. For this case the electromagnetic field of the laser gets
partially transmitted inside the plasma and then drives the lower hybrid oscillations
by an interesting mechanism arising due to the difference between the $\vec{E} \times \vec{B}$ drifts of ions and electrons in the oscillatory laser electric field and the applied external magnetic field. We have also demonstrated that the finite extent of the laser pulse excites long scale magnetosonic perturbations in the medium. These excitations are independent of the laser frequency and depend merely on the pulse width, as they get driven by the ponderomotive pressure of the laser pulse.
Recently, there has been a lot of technological progress leading to the development of low frequency short pulse lasers such as $CO_2$ lasers, and magnetic fields of the order of $1.2$ kilo Tesla have already been produced in the laboratory. This would, therefore, soon open up the possibility of investigating magnetized plasma response in laser plasma related experiments, a regime we have considered in this paper. The importance of the LH mode is well known in the context of magnetic confinement fusion studies, where it is used for the purpose of current drive and heating. Here, we have shown the possibility of exciting this particular mode in the context of laser plasma interaction. The laser experiments with magnetized response may thus
have important implications for several frontier experiments which rely on laser plasma interactions. For instance, it opens up a possibility of designing smarter fusion experiments which could use the best of both inertial and magnetic confinement principles.
\section*{Acknowledgements}
The authors would like to acknowledge the OSIRIS Consortium, consisting of UCLA and IST (Lisbon, Portugal), for providing access to the OSIRIS 4.0 framework, which is work supported by NSF ACI-1339893. AD would like to acknowledge her J. C. Bose fellowship grant JCB/2017/000055 and the CRG/2018/000624 grant of DST for the work. The simulations for the work described in this paper were performed on Uday, an IPR Linux cluster. AV and DM would like to thank Mr. Omstavan Samant for constructive discussions and ideas.\\
\paragraph*{\bf{{References:}}}
|
1,116,691,498,611 | arxiv | \section{Introduction}
\label{s:intro}
On a monoid $\mathscr{M}$ (=semigroup with unit $1_{\mathscr{M}}$) there is naturally defined a
reflexive and transitive relation ``$\preceq$'', i.e., for $\omega,\tau\in\mathscr{M}$ one defines
$\omega\preceq\tau$
if, and only if, there exists $\sigma\in\mathscr{M}$ satisfying
$\omega=\tau\cdot\sigma$. In particular, one may consider $(\mathscr{M},\preceq)$ as a
{\emph{partially ordered set}}. Moreover,
if $\mathscr{M}$ is $\mathbb{N}_0$-graded,
then $(\mathscr{M},\preceq)$ is a (noetherian) partially ordered set (see Corollary~\ref{cor:poset}).
Such a poset has a poset completion $i_{\mathscr{M}}\colon \mathscr{M}\longrightarrow\bar{\moM}$
(see \S~\ref{ss:compl}), and one defines the {\emph{universal boundary}}
$\partial\mathscr{M}$ of $\mathscr{M}$ by
\begin{equation}
\label{eq:tilpadef}
\partial\mathscr{M}=(\bar{\moM}\setminus\mathrm{im}(i_\mathscr{M}))/\!\approx,
\end{equation}
where $\approx$ is the equivalence relation induced by "$\preceq$"
on $\bar{\moM}\setminus\mathrm{im}(i_\mathscr{M})$ (see \S~\ref{ss:compl}).
For several reasons (cf. Theorem A, Theorem B, Theorem C)
one may consider $\partial\mathscr{M}$ as the natural boundary associated to the monoid $\mathscr{M}$.
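To illustrate the definition in the simplest case (treated in detail in \S~\ref{ss:boundT}), let $\mathscr{F}_2$ denote the monoid freely generated by $\{a,b\}$. A strictly decreasing chain such as $a\succeq ab\succeq aba\succeq\cdots$ does not stabilize in $\mathscr{F}_2$ and gives rise to a boundary point of the completion, which may be thought of as the infinite word $abab\cdots$; in this way the elements of $\partial\mathscr{F}_2$ can be identified with infinite words in $a$ and $b$, i.e., $\partial\mathscr{F}_2\cong\{a,b\}^{\mathbb{N}}$.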
However, it is less clear what topology one should consider. Apart from the cone topology $\mathscr{T}_c(\bar{\moM})$ there is another
potentially finer topology $\mathscr{T}_f(\bar{\moM})$ which will be called the {\emph{fine topology}}
on $\partial\mathscr{M}$ (cf. \S~\ref{ss:finetop}), i.e., the identity
\begin{equation}
\label{eq:idtop}
\mathrm{id}_{\partial\mathscr{M}}\colon (\partial\mathscr{M}, \mathscr{T}_f(\bar{\moM}))\longrightarrow
(\partial\mathscr{M}, \mathscr{T}_c(\bar{\moM}))
\end{equation}
is a continuous map. The monoid $\mathscr{M}$ will be said to be {\emph{$\mathscr{T}$-regular}}, if
\eqref{eq:idtop} is a homeomorphism. E.g., finitely generated free monoids are
$\mathscr{T}$-regular (cf. Proposition~\ref{prop:T2}, \S~\ref{ss:boundT}). The universal boundary
$\eth\mathscr{M}=(\partial\mathscr{M},\mathscr{T}_f(\bar{\moM}))$ with the fine topology can be identified with the
Laca boundary $\widehat{E}(\mathscr{M})$ of the monoid $\mathscr{M}$.
This topological space plays an essential role for defining
boundary quotients of $C^\ast$-algebras associated to monoids (cf. \cite[\S~7]{Li:sgrp},
\cite{Lietal:Cstar}). Indeed one has the following (cf. Theorem~\ref{thm:princ}).
\begin{thmA}
The map $\overline{\chi}_{\cdot}\colon (\partial \mathscr{M},\mathscr{T}_f(\bar{\moM}))\longrightarrow \widehat{E}(\mathscr{M})$
defined by \eqref{eq:map2} is a homeomorphism.
\end{thmA}
\begin{rem}
\label{rem:funct}
(a) By Theorem A, the topological space $(\partial\mathscr{M},\mathscr{T}_f(\bar{\moM}))$ is
totally-disconnected and compact. On the other hand, in general one can only show
that $(\partial\mathscr{M},\mathscr{T}_c(\bar{\moM}))$ is a $T_0$-space. Indeed, if
$(\partial\mathscr{M},\mathscr{T}_c(\bar{\moM}))$ happens to be Hausdorff, then \eqref{eq:idtop}
is necessarily a homeomorphism and $\mathscr{M}$ is $\mathscr{T}$-regular (cf. Proposition~\ref{prop:T2}).
So from this perspective considering $\mathscr{T}_f(\bar{\moM})$ as the natural topology would have
many advantages. However, there are also indications supporting the idea
to consider $\mathscr{T}_c(\bar{\moM})$ as the natural topology.
\noindent
(b) If $\phi\colon\mathscr{Q}\longrightarrow\mathscr{M}$ is a surjective graded homomorphism of finitely $1$-generated monoids,
then, by construction, $\phi$ induces a surjective, continuous and open map
\begin{equation}
\label{eq:funct1}
\partial\bar{\phi}\colon (\partial\mathscr{Q},\tau_c(\bar{\moQ}))\longrightarrow(\partial\mathscr{M},\tau_c(\bar{\moM}))
\end{equation}
(cf. Proposition~\ref{prop:homs}). This property can be used to establish the following.
\begin{theoB}
Let $\mathscr{M}$ be a finitely $1$-generated $\mathbb{N}_0$-graded monoid.
Then $\partial\mathscr{M}$ carries naturally a Borel probability meausure
\begin{equation}
\label{eq:bor}
\mu_\mathscr{M}\colon\mathrm{Bor}(\partial\mathscr{M})\longrightarrow\mathbb{R}^+\cup\{\infty\}.
\end{equation}
\end{theoB}
On the other hand, the induced mapping $\phi_{\widehat{E}}$ is a map in the opposite direction,
\begin{equation}
\label{eq:funct2}
\phi_{\widehat{E}}\colon \widehat{E}(\mathscr{M})\longrightarrow\widehat{E}(\mathscr{Q}),
\end{equation}
(cf. Proposition~\ref{prop:hom}). Hence for the purpose of constructing Borel measures the fine topology seems to be inappropriate.
\end{rem}
Theorem B can be used to define the $C^\ast$-algebra
\begin{equation}
\label{eq:Cstarmon}
C^\ast(\mathscr{M},\mu_\mathscr{M})=\langle\, \beta_\omega,\,\beta_\omega^\ast\mid \omega\in\mathscr{M}\,
\rangle
\subseteq\mathcal{B}(L^2(\partial\mathscr{M},\mathbb{C},\mu_{\mathscr{M}}))
\end{equation}
for every finitely $1$-generated $\mathbb{N}_0$-graded monoid $\mathscr{M}$,
where $\beta_\omega$ is the mapping induced by left multiplication with $\omega$
(cf. \S~\ref{ss:caT}). We will show by explicit calculation that for the monoid $\mathscr{F}_n$, freely generated
by a set of cardinality $n$, the $C^\ast$-algebra
$C^\ast(\mathscr{F}_n,\mu_{\mathscr{F}_n})$ coincides with the Cuntz algebra $\mathcal{O}_n$
(cf. Proposition~\ref{prop:cuntz}),
while for the right-angled Artin monoid $\mathscr{M}^\Gamma$ associated to the
finite graph $\Gamma$, $C^\ast(\mathscr{M}^\Gamma,\mu_{\mathscr{M}^\Gamma})$
coincides with the boundary quotients introduced by Crisp and Laca
in \cite{CL07} (cf. \S~\ref{ss:raam}).
Nevertheless the following more general question remains unanswered.
\begin{ques}
Let $\mathscr{M}$ be a finitely $1$-generated $\mathbb{N}_0$-graded monoid with the left cancellation property. Does
$C^\ast(\mathscr{M},\mu_\mathscr{M})$ coincide with the boundary quotient $\partial C_\lambda(\mathscr{M})$
defined by X.~Li in \cite[\S 7]{Li:sgrp}?
\end{ques}
From now on we will assume that the $\mathbb{N}_0$-graded monoid $\mathscr{M}=\bigcup_{k\in\mathbb{N}_0}\mathscr{M}_k$ is finitely $1$-generated.
In the context of self-similar fractals in the sense of
John E. Hutchinson (cf. \cite{hutch:frac}) it will be more natural
to endow $\partial\mathscr{M}$ with the cone topology.
Let $(X,d)$ be a complete metric space with a left $\mathscr{M}$-action
$\alpha\colon\mathscr{M}\longrightarrow C(X,X)$
by continuous maps. Such a representation will be said to be {\em{contracting}},
if there exists a positive real number $\delta<1$ such that
\begin{equation}
\label{eq:cont}
d(\alpha(s)(x),\alpha(s)(y))\leq \delta\cdot d(x,y),
\end{equation}
for all $x,y\in X$, $s\in\mathscr{M}_1$ (cf. \cite[\S~2.2]{hutch:frac}).
For such a metric space $(X,d)$
there exists a unique compact subset $K\subseteq X$ such that
\begin{itemize}
\item[(1)] $K=\bigcup_{s\in\mathscr{M}_1} \alpha(s)(K)$,
\item[(2)] $K=\mathrm{cl}\big(\bigcup_{t\in\mathscr{M}\setminus\{1\}}\mathrm{Fix}(\alpha(t))\big)\subseteq X$.
\end{itemize}
Note that, by definition, every map $\alpha(t)\in C(X,X)$, $t\in\mathscr{M}\setminus\{1\}$, is contracting, and thus
has a unique fixed point $x_t\in X$. For short we call $K=K(\alpha)\subset X$ the {\em{attractor}}
of the representation $\alpha$. One has the following (cf. Proposition~\ref{prop:uniprop}).
\begin{theoC}
Let $\mathscr{M}=\bigcup_{k\in\mathbb{N}_0}\mathscr{M}_k$ be a finitely $1$-generated $\mathbb{N}_0$-graded
monoid, let $(X,d)$ be a complete metric space and let
$\alpha\colon\mathscr{M}\longrightarrow C(X,X)$ be a contracting representation of $\mathscr{M}$.
Then for any point $x\in X$, $\alpha$ induces a continuous map
\begin{equation*}
\label{eq:attmap}
\kappa_x\colon\partial\mathscr{M}\longrightarrow K(\alpha).
\end{equation*}
Moreover, if $\mathscr{M}$ is $\mathscr{T}$-regular, then $\kappa_x$ is surjective.
\end{theoC}
Under the general hypothesis of Theorem C we do not know whether the topological space $(\partial\mathscr{M}, \mathscr{T}_c(\bar{\moM}))$
is necessarily compact (see Question~\ref{ques:comp}).
However, if it is, then
Theorem~C offers the interpretation that
$(\partial\mathscr{M}, \mathscr{T}_c(\bar{\moM}))$ may be considered as a kind of
{\em{universal attractor of the
finitely $1$-generated $\mathbb{N}_0$-graded $\mathscr{T}$-regular monoid $\mathscr{M}$}}.
\begin{rem}
\label{rem:fracCstar}
Let $\mathscr{M}$ be a finitely $1$-generated monoid.
Then $\partial\mathscr{M}$ carries canonically a probability measure $\mu_\mathscr{M}$
(cf. \S~\ref{ss:caT}).
Thus, by Theorem C, the attractor of the $\mathscr{M}$-fractal $((X,d),\alpha)$
carries a type of \emph{contact probability measure} $\mu_x=(\kappa_x)_\ast\mu_{\mathscr{M}}$
for every point $x\in X$, which is given by
\begin{equation}
\label{eq:kont}
\mu_x(B)=\mu_{\mathscr{M}}(\kappa_x^{-1}(B)),\qquad B\in\mathrm{Bor}(K).
\end{equation}
By (1), the monoid $\mathscr{M}$ acts on $K$, and thus also on
$L^2(K,\mathbb{C},\mu_x)$ by bounded linear operators $\gamma(\omega)$, $\omega\in\mathscr{M}$
(cf. \S~\ref{ss:Mfrac}).
This defines a $C^\ast$-algebra (cf. \S~\ref{ss:Mfrac})
\begin{equation}
\label{eq:Cstarfrac}
C^\ast(\mathscr{M},X,d,\mu_x)=\langle\,\gamma(\omega),\,\gamma(\omega)^\ast\mid\omega\in\mathscr{M}
\rangle\subseteq\mathcal{B}(L^2(K,\mathbb{C},\mu_x)).
\end{equation}
\end{rem}
The action of $\partial\mathscr{M}$ on $K$ factors through an $\widetilde{\partial}\mathscr{M}$-action,
and $\widetilde{\partial}\mathscr{M}$ is in general different from $(\partial\mathscr{M},\mathscr{T}_c(\bar{\moM}))$
(cf. Remark~\ref{rem:tilno}). Hence it seems plausible that the $C^\ast$-algebras
$C^\ast(\mathscr{M},X,d,\mu_x)$ are in general different from $C^\ast(\mathscr{M},\mu_\mathscr{M})$.
However further investigations seem to be necessary in order to clarify this
point in more detail.
\section{Posets and their boundaries}
\label{s:poset}
A poset (or partially ordered set) is a set $X$ together with a
reflexive and transitive relation $\preceq\colon X\times X\to\{t,f\}$
with the property that for all $x,y\in X$
satisfying $x\preceq y$ and $y\preceq x$ one has $x=y$.
By $\mathbb{N}=\{1,2,\ldots\}$ we denote the set of positive integers,
and by $\mathbb{N}_0=\{0,1,2,\ldots\}$ the set of non-negative integers, which,
together with addition, is a commutative monoid.
\subsection{Cones, cocones and intervals}
\label{ss:cones}
For a poset $(X,\preceq)$ and $\tau, \omega\in X$ the set
\begin{align}
\mathrm{C}_\omega&=\{\,x\in X\mid x\preceq\omega\,\}\label{eq:cone}\\
\intertext{will be called the {\em cone} defined by $\omega$, and}
\mbox{\reflectbox{\textup{C}}}_\tau&=\{\,y\in X\mid y\succeq\tau\,\}\label{eq:cocone}\\
\intertext{the {\em cocone} defined by $\tau$.
For $\tau\preceq\omega$ the set}
[\tau,\omega]&=\mbox{\reflectbox{\textup{C}}}_\tau\cap\mathrm{C}_\omega=\{\,x\in X\mid
\tau\preceq x\preceq\omega\,\}
\end{align}
is called the {\em closed interval} from $\tau$ to $\omega$,
i.e., $[\omega,\omega]=\{\omega\}$.
The poset $(X,\preceq)$ is said to be
{\em noetherian},
if $\mathrm{card}(\mbox{\reflectbox{\textup{C}}}_\tau)<\infty$ for all $\tau\in X$.
\subsection{Complete posets}
\label{ss:compo}
For a poset $(X,\preceq)$ let
\begin{equation}
\label{eq:dec}
\mathscr{D}(\mathbb{N},X,\preceq)=\{\,f\in\mathscr{F}(\mathbb{N},X)\mid \forall n,m\in\mathbb{N}:\ n\leq m\,\Longrightarrow\
f(n)\succeq f(m)\,\}
\end{equation}
denote the set of decreasing functions which we will - if necessary - identify with the set of decreasing sequences. A poset $(X,\preceq)$ is said to be {\it complete}, if for all
$f\in\mathscr{D}(\mathbb{N},X,\preceq)$ there exists an element $z\in X$ such that
\begin{itemize}
\item[(CP$_1$)] $f(n)\succeq z$ for all $n\in\mathbb{N}$, and
\item[(CP$_2$)] if $y\in X$ satisfies $f(n)\succeq y$ for all $n\in\mathbb{N}$, then $z\succeq y$.
\end{itemize}
Note that, if it exists, $z\in X$ is the unique element satisfying (CP$_1$) and (CP$_2$) for
$f\in\mathscr{D}(\mathbb{N},X,\preceq)$.
As usual, $z=\min(f)$ is called
the {\em minimum} of $f\in\mathscr{D}(\mathbb{N},X,\preceq)$.
\subsection{The poset completion of a poset}
\label{ss:compl}
Let $(X,\preceq)$ be a poset.
For $u,v \in \mathscr{D}(\mathbb{N},X,\preceq)$ put
\begin{equation}
\label{eq:rel}
u\preceq v\ \Longleftrightarrow \forall n\in\mathbb{N}\ \exists k_n\in\mathbb{N}\colon\ u(k_n)\preceq v(n),
\end{equation}
and put
\begin{equation}
\label{eq:rel2}
u\sim v \Longleftrightarrow \,\, \big[\,(u\preceq v \wedge v\preceq u)
\vee \big(v\preceq u \wedge v=c_m,\ m=\min(u)\big)\,\big],
\end{equation}
where $c_z\in \mathscr{D}(\mathbb{N},X,\preceq)$, $z\in X$, is given by $c_z(n)=z$ for all $n\in\mathbb{N}$.
Let $\approx$ be the equivalence relation generated by $\sim$ and put $\bar{X}=\mathscr{D}(\mathbb{N},X,\preceq)/\!\approx$.
Then the following properties hold for $(\bar{X},\preceq)$.
\begin{prop}
\label{prop:comp}
Let $(X,\preceq)$ be a poset.
\begin{itemize}
\item[(a)] The relation $\preceq$ defined by \eqref{eq:rel} is reflexive and transitive.
\item[(b)] For any strictly increasing function $\alpha\colon\mathbb{N}\to\mathbb{N}$ and $u\in\mathscr{D}(\mathbb{N},X,\preceq)$
one has $u\approx u\circ\alpha$.
\item[(c)] Define for $[u], [v]\in \bar{X}$ that $[u]\preceq[v]$ if,
and only if, $u\preceq v$. Then $(\bar{X},\preceq)$ is a poset.
\item[(d)] $(\bar{X},\preceq)$ is complete.
\end{itemize}
\end{prop}
\begin{proof}
(a) The relation $\preceq$ is obviously reflexive.
Let $u,v,w\in\mathscr{D}(\mathbb{N},X,\preceq)$, $u\preceq v$, $v\preceq w$.
Then for all $n\in\mathbb{N}$ there exist $h_n, k_n\in\mathbb{N}$ such that
$u(h_n)\preceq v(k_n)\preceq w(n)$. Thus, $u \preceq w$.
\noindent
(b) Let $u\in\mathscr{D}(\mathbb{N},X,\preceq)$ and let $\alpha\colon\mathbb{N}\to\mathbb{N}$
be a strictly increasing function. Then $\alpha(n)\geq n$ for all $n\in\mathbb{N}$.
Hence, as $u$ is decreasing, $(u\circ\alpha)(n)=u(\alpha(n))\preceq u(n)$ for all $n\in\mathbb{N}$,
which shows $u\circ\alpha\preceq u$ (choosing $k_n=n$ in \eqref{eq:rel}).
Conversely, choosing $k_n=\alpha(n)$ one obtains $u(k_n)=u(\alpha(n))\preceq(u\circ\alpha)(n)$, i.e.,
$u\preceq u\circ\alpha$. Thus $u\approx u\circ\alpha$.
\noindent
(c) Let $[u], [v]\in\bar{X}$, $[u]\preceq [v]$ and $[v]\preceq [u]$.
Then, by definition, $u\preceq v$ and $v\preceq u$, and thus $u\approx v$, i.e.,
$[u]=[v]$.
\noindent
(d) Let $\{u^k\}_{k\in\mathbb{N}}\in\mathscr{D}(\mathbb{N},\bar{X},\preceq)$ be a decreasing sequence in $\bar{X}$, i.e.,
$u^1\succeq u^2\succeq\dots$, and choose for every $k\in\mathbb{N}$ a representative in
$\mathscr{D}(\mathbb{N},X,\preceq)$, again denoted by $u^k$.
Then one has $u^k(n)\succeq u^k(m)$ for all $n\le m$, $m,n\in\mathbb{N}$.
We define $v\in\mathscr{D}(\mathbb{N},X,\preceq)$ by $v(n)=u^n(n)$, $n\in\mathbb{N}$.
Then $[v]\in\bar{X}$ is the minimum of $\{u^k\}_{k\in\mathbb{N}}$.
This yields the claim.
\end{proof}
Assigning every element $x\in X$ the equivalence class containing the
constant function $c_x\in\mathscr{D}(\mathbb{N},X,\preceq)$ satisfying
$c_x(n)=x$ for all $n\in\mathbb{N}$ yields a strictly increasing mapping of posets
$\iota_X\colon X\to\bar{X}$.
From now on $(X,\preceq)$ will be considered as a sub-poset of $(\bar{X},\preceq)$.
The poset $(\bar{X},\preceq)$ will be called the
{\em poset completion of $(X,\preceq)$}.
The following fact is straightforward.
\begin{fact}
\label{fact:compl}
The map $\iota_X$ is a bijection if, and only if, $(X,\preceq)$ is complete.
\end{fact}
\begin{example}
Let $X=\mathbb{N}\sqcup\{\infty\}$ and define $n\preceq m$ if, and only if, $n\geq m$,
where ``$\geq$'' denotes the natural order relation.
Then the poset $(X,\preceq)$ is complete and $\bar{X}=X$.
\end{example}
\subsection{The universal boundary of a poset}
\label{ss:bound}
For a poset $(X,\preceq)$ the poset
$\partial X=\bar{X}\setminus \mathrm{im}(\iota_X)$ will be called
the {\em universal boundary} of the poset $(X,\preceq)$.
From now on we use the notation $x\succ y$ as a short form for
$x\succeq y$ and $x\not= y$. A function $f\colon\mathbb{N}\to X$
will be said to be strictly decreasing, if $f(n+1)\prec f(n)$ for all $n\in\mathbb{N}$.
The following fact will turn out to be useful.
\begin{fact}
\label{fact:strict}
Let $f\in\mathscr{D}(\mathbb{N},X,\preceq)$ be a decreasing function such that $[f]\in\partial X$.
Then there exists a strictly decreasing function $h\in\mathscr{D}(\mathbb{N},X,\preceq)$
such that $f\approx h$, i.e., $[f]=[h]$.
\end{fact}
\begin{proof}
By hypothesis, $J=\mathrm{im}(f)$ is an infinite set. In particular, the set
$\Omega=\{\,\min(f^{-1}(\{j\}))\mid j\in J\,\}$ is an infinite and unbounded subset of $\mathbb{N}$.
Let $e\colon\mathbb{N}\to\Omega$ be the enumeration function of $\Omega$, i.e.,
$e(1)=\min(\Omega)$ and, recursively,
$e(k+1)=\min(\Omega\setminus\{e(1),\ldots, e(k)\})$.
Then, by construction, $h=f\circ e$ is strictly decreasing, and, by
Proposition~\ref{prop:comp}(b),
one has $f\approx h$, and hence the claim.
\end{proof}
\begin{fact}
\label{fact:noeth}
Let $(X,\preceq)$ be a noetherian poset, and let $(\bar{X},\preceq)$
be its completion. Then for all $\tau\in X$ one has $\mbox{\reflectbox{\textup{C}}}_\tau(\bar{X})\subseteq X$.
In particular, $\mbox{\reflectbox{\textup{C}}}_\tau(\bar{X})=\mbox{\reflectbox{\textup{C}}}_\tau(X)$,
where the cocones are taken in the respective posets.
\end{fact}
\begin{example}
Let $X=A\sqcup B$, where $A,B=\mathbb{Z}$ and define
\begin{equation}
n\preceq m\Longleftrightarrow\, \Big(((n,m\in A \vee n,m \in B) \wedge n\le m) \vee (n\in A \wedge m\in B)\Big),
\end{equation}
where ``$\leq$'' denotes the natural order relation on $\mathbb{Z}$.
Then $(X,\preceq)$ is a poset, and its completion is given by
$\bar{X}=\big(A\sqcup\{-\infty_A\}\big)\sqcup\big(B\sqcup\{-\infty_B\}\big)$,
where $-\infty_A$ and $-\infty_B$ denote the classes of the unbounded decreasing sequences in $A$ and in $B$, respectively.
For $n\in A$, one has $\mbox{\reflectbox{\textup{C}}}_n(X)\neq \mbox{\reflectbox{\textup{C}}}_n(\bar{X})$,
since $-\infty_B$ is contained in $\mbox{\reflectbox{\textup{C}}}_n(\bar{X})$, but not in $\mbox{\reflectbox{\textup{C}}}_n(X)$.
Note that $(X,\preceq)$ is not noetherian.
\end{example}
\subsection{The cone topology}
\label{ss:cone}
Let $(X,\preceq)$ be a poset, and let
$(\bar{X},\preceq)$ denote its completion.
For $\tau,\omega\in X$ let
\begin{equation}
\label{eq:sect}
S(\tau,\omega)=\{\,x\in X\mid x\preceq\tau\ \wedge\ x\preceq\omega\,\}.
\end{equation}
By transitivity,
\begin{equation}
\label{eq:cone1}
\mathrm{C}_\tau(\bar{X})\cap\mathrm{C}_\omega(\bar{X})=\bigcup_{z\in S(\tau,\omega)}\mathrm{C}_z(\bar{X}).
\end{equation}
In particular,
\begin{equation}
\label{eq:cone2}
\mathscr{B}_c(\bar{X})=\big\{\,\{x\}\mid x\in X\,\big\}\cup\big\{\,\mathrm{C}_\omega(\bar{X})\mid
\omega\in X\,\big\}
\end{equation}
is a base of a topology $\mathscr{T}_c(\bar{X})$ - the {\em cone topology} - on $\bar{X}$. By construction,
the subspace $X$ is discrete and open, and the subspace $\partial X$ is closed.
For $\omega\in\bar{X}$ let $\mathscr{N}_c(\omega)$ denote the set of all open
neighborhoods of $\omega$ with respect to the cone-topology, and put
$\mathscr{S}(\omega)=\bigcap_{U\in\mathscr{N}_c(\omega)} U$. Then, by construction, one has
$\mathscr{S}(\omega)=\{\omega\}$ for $\omega\in X$,
and $\mathscr{S}(\omega)=\mathrm{C}_\omega(\bar{X})$ for $\omega\in\partial X$.
This implies the following.
\begin{prop}
\label{prop:coneT}
Let $(X,\preceq)$ be a poset, and let $(\bar{X},\preceq)$ denote its completion.
Then $(\bar{X},\mathscr{T}_c(\bar{X}))$ is a $T_0$-space (or Kolmogorov space).
\end{prop}
\begin{proof}
Let $\tau,\omega\in\bar{X}$, $\tau\not=\omega$.
If either $\tau\in X$ or $\omega\in X$, then either $\{\tau\}$ or
$\{\omega\}$ is an open set. So we may assume that $\tau,\omega\in\partial X$.
As $\mathscr{S}(\omega)=\mathrm{C}_\omega(\bar{X})$, either there exists $U\in\mathscr{N}_c(\omega)$,
$\tau\not\in U$, or $\tau\preceq\omega$. By changing the role of $\omega$ and $\tau$,
either there exists $V\in\mathscr{N}_c(\tau)$,
$\omega\not\in V$, or $\omega\preceq\tau$. Since $\tau\preceq\omega$
and $\omega\preceq\tau$ cannot both hold (as $\tau\neq\omega$), this yields the claim.
\end{proof}
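The following elementary example may serve as an illustration.
\begin{example}
Let $X=\mathbb{N}$ and define $n\preceq m$ if, and only if, $n\geq m$, where ``$\geq$'' denotes the
natural order relation. All functions $f\in\mathscr{D}(\mathbb{N},X,\preceq)$ with unbounded image represent
one and the same class $\infty\in\partial X$, so that $\bar{X}=\mathbb{N}\sqcup\{\infty\}$ and
$\partial X=\{\infty\}$.
The cones are given by $\mathrm{C}_m(\bar{X})=\{\,n\in\mathbb{N}\mid n\geq m\,\}\cup\{\infty\}$, $m\in\mathbb{N}$,
and hence $(\bar{X},\mathscr{T}_c(\bar{X}))$ is the one-point compactification of the discrete space $\mathbb{N}$.
Moreover, as the complements $\mathrm{C}_m(\bar{X})^C=\{1,\ldots,m-1\}$ are finite and thus open,
the fine topology (cf. \S~\ref{ss:finetop} below) coincides with $\mathscr{T}_c(\bar{X})$ in this example.
\end{example}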
\subsection{The fine topology}
\label{ss:finetop}
For a partially ordered set $(X,\preceq)$ let
\begin{equation}
\label{eq:finetop1}
\mathscr{S}=\{\,\{\tau\},\,C_\tau(\bar{X}),\,C_\tau(\bar{X})^C\mid\tau\in X\,\}
\end{equation}
denote the set of all singleton subsets $\{\tau\}$ of $X$, together with all cones and their complements in $\bar{X}$. Then $\mathscr{S}$ is a subbasis of a topology $\mathscr{T}_f(\bar{X})$ on $\bar{X}$, which we will call the {\em{fine topology}} on $\bar{X}$.
In particular, the set of all finite intersections $\bigcap_{1\leq j\leq r}Y_j$, $Y_1,\ldots,Y_r\in\mathscr{S}$,
is a base of the topology $\mathscr{T}_f(\bar{X})$. By definition, this topology has the following
properties.
\begin{fact}
\label{fact:fintop}
Let $(X,\preceq)$ be a partially ordered set. Then
\begin{itemize}
\item[(a)] $(\bar{X},\mathscr{T}_f(\bar{X}))$ is a $T_2$-space (or Hausdorff space).
\item[(b)] $\mathscr{T}_c(\bar{X})\subseteq\mathscr{T}_f(\bar{X})$.
\end{itemize}
\end{fact}
\subsection{The $\sim$-boundary}
\label{ss:sbound}
There is another type of boundary for a poset, the $\sim$-boundary, which
seems to be relevant for the study of fractals.
Let $(X,\preceq)$ be a noetherian poset, and let $(\bar{X},\preceq)$ denote its completion.
Put
\begin{equation}
\label{eq:ome}
\Omega=\Delta(X)\sqcup\{\,(\varepsilon,\eta)\in\partial X\times \partial X\mid \varepsilon\preceq\eta\,\},
\end{equation}
where $\Delta(X)=\{\,(x,x)\mid x\in X\,\}$, and let $\sim$ denote the
equivalence relation on $\bar{X}$ generated by the relation $\Omega$.
Then one has a canonical map
\begin{equation}
\label{eq:projtilde}
\pi\colon\bar{X}\to\widetilde{X},
\end{equation}
where $\widetilde{X}=\bar{X}/\!\!\sim$.
By construction, $\pi\vert_X$ is injective.
The set $\widetilde{\der} X=\widetilde{X}\setminus \pi(X)$ will be called
the {\em $\sim$-boundary} of the poset $(X, \preceq)$.
We put
\begin{equation}
I(\sim)=\{\,(\omega,\tau)\in \bar{X}\times\bar{X} \mid \omega\sim\tau \,\}\subseteq \bar{X}\times\bar{X}.
\end{equation}
The set $\widetilde{X}$ carries the {\em quotient topology} $\mathscr{T}_q(\widetilde{X})$ with respect to the mapping
$\pi$ and the topological space $(\bar{X},\mathscr{T}_c(\bar{X}))$.
In particular, the subspace $\pi(X)\subseteq\widetilde{X}$ is discrete and open, and
$\widetilde{\der} X\subseteq\widetilde{X}$ is closed.
For $\omega\in\bar{X}$ we put $\widetilde{\mathrm{C}}_\omega=\pi(\mathrm{C}_\omega(\bar{X}))$.
The space $\widetilde{X}$ will be considered merely as a topological space.
It has the following property.
\begin{prop}
\label{prop:frech}
The topological space $(\widetilde{X},\mathscr{T}_q(\widetilde{X}))$ is a $T_1$-space.
\end{prop}
\begin{proof}
For $\omega\in\bar{X}$ one has
\begin{equation}
\label{eq:propT1}
\mathscr{S}(\pi(\omega))=\pi(\bigcap_{\tau\sim\omega}\mathscr{S}(\tau))=
\pi(\bigcap_{\tau\sim\omega} \mathrm{C}_\tau(\bar{X}))=\{\pi(\omega)\}.
\end{equation}
This yields the claim.
\end{proof}
\section{Monoids and their boundaries}
\label{s:monoids}
A {\em monoid} (or semigroup with unit) $\mathscr{M}$ is a set with
an associative multiplication $\hbox to 7truept{\hrulefill}\cdot\hbox to 7truept{\hrulefill}\colon \mathscr{M}\times \mathscr{M}\to \mathscr{M}$
and a distinguished element $1\in \mathscr{M}$ satisfying
$1\cdot x=x\cdot1=x$ for all $x\in \mathscr{M}$. For a monoid $\mathscr{M}$ we denote by
\begin{equation}
\label{eq:maxgr}
\mathscr{M}^{\times}=\{\,x\in \mathscr{M}\mid \exists y\in \mathscr{M}\colon x\cdot y=y\cdot x=1\,\}
\end{equation}
the {\em maximal subgroup} contained in $\mathscr{M}$.
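For instance, for the additive monoid $\mathbb{N}_0$ one has $\mathbb{N}_0^\times=\{0\}$, while for any group $G$ one has $G^\times=G$.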
\subsection{$\mathbb{N}_0$-graded monoids}
\label{ss:gradmon}
The set of non-negative integers $\mathbb{N}_0=\{0,1,2,\ldots\}$
together with addition is a monoid.
A monoid $\mathscr{M}$ together with a homomorphism of monoids
$|\hbox to 7truept{\hrulefill}|\colon \mathscr{M}\to\mathbb{N}_0$ is called an {\em $\mathbb{N}_0$-graded monoid}.
For $k\in\mathbb{N}_0$ one defines $\mathscr{M}_k=\{\,x\in \mathscr{M}\mid |x|=k\,\}$.
The $\mathbb{N}_0$-graded monoid $\mathscr{M}$ is said to be {\em connected}, if $\mathscr{M}_0=\{1\}$.
One has the following straightforward fact.
\begin{fact}
\label{fact:monvon}
For a connected $\mathbb{N}_0$-graded monoid $\mathscr{M}$ one has
$\mathscr{M}^\times=\{1\}$.
\end{fact}
If $\mathscr{Q}$ and $\mathscr{M}$ are $\mathbb{N}_0$-graded monoids, a homomorphism $\phi\colon \mathscr{Q}\to \mathscr{M}$ is a
homomorphism of $\mathbb{N}_0$-graded monoids, if $\phi(\mathscr{Q}_k)\subseteq \mathscr{M}_k$
for all $k\in\mathbb{N}_0$. The following property is straightforward.
\begin{prop}
\label{prop:homs}
Let $\phi\colon\mathscr{Q}\to\mathscr{M}$ be a homomorphism of $\mathbb{N}_0$-graded monoids.
Then $\phi$ is monotone, i.e., for $x,y\in\mathscr{Q}$, $x\preceq y$ implies
$\phi(x)\preceq\phi(y)$, and thus induces a monotone map
\begin{equation}
\label{eq:bphi}
\mathscr{D}\phi\colon\mathscr{D}(\mathbb{N},\mathscr{Q},\preceq)\longrightarrow\mathscr{D}(\mathbb{N},\mathscr{M},\preceq).
\end{equation}
Let $\bar{\phi}\colon\bar{\moQ}\to\bar{\moM}$ denote the induced map.
Let $\partial\bar{\phi}\colon\partial\mathscr{Q}\longrightarrow\partial\mathscr{M}$ be the map induced by $\bar{\phi}$.
Then $\partial\bar{\phi}$ is continuous with respect to the cone topology.
\end{prop}
\begin{proof}
Let $\tau\in\mathscr{M}$. Then the monotonicity of $\mathscr{D}\phi$ implies that
\begin{equation}
\label{eq:conthom}
\bar{\phi}^{-1}(C_\tau(\bar{\moM}))=\bigcup_{y\in \mathscr{S}} C_y(\bar{\moQ}),
\end{equation}
where $\mathscr{S}=\{\,q\in\bar{\moQ}\mid \bar{\phi}(q)\in C_\tau(\bar{\moM})\,\}$. Thus $\bar{\phi}$
and $\partial\bar{\phi}$ are continuous.
\end{proof}
\subsection{$1$-generated monoids}
\label{ss:1gen}
For any set $Y$ there exists a {\em free monoid} $\mathscr{F}\langle Y\rangle$
which is naturally $\mathbb{N}_0$-graded. Moreover, $\mathscr{F}\langle Y\rangle$ is connected
and $\mathscr{F}\langle Y\rangle_1=Y$. For an $\mathbb{N}_0$-graded monoid $\mathscr{M}$ there exists
a canonical homomorphism of $\mathbb{N}_0$-graded monoids
\begin{equation}
\label{eq:canmon}
\phi_\mathscr{M}\colon \mathscr{F}\langle \mathscr{M}_1\rangle\longrightarrow \mathscr{M}
\end{equation}
satisfying $\phi_{\mathscr{M},1}=\mathrm{id}_{\mathscr{M}_1}$. The $\mathbb{N}_0$-graded monoid $\mathscr{M}$ is said to be
{\em $1$-generated}, if $\phi_\mathscr{M}$ is surjective. In particular,
such a monoid is connected. By definition, free monoids are $1$-generated.
Moreover, $\mathscr{M}$ is said to be {\em finitely $1$-generated}, if it is $1$-generated
and $\mathscr{M}_1$ is a finite set. The following important question remains unanswered in this paper.
\begin{ques}
\label{ques:comp} Does there exist a finitely $1$-generated monoid $\mathscr{M}$ satisfying
$\mathscr{T}_c(\bar{\moM})\not=\mathscr{T}_f(\bar{\moM})$?
\end{ques}
\subsection{Monoids as posets}
\label{ss:monpos}
Let $\mathscr{M}$ be a monoid.
For $x\in \mathscr{M}$, put
\begin{align}
\mathscr{M} x&=\{\,yx\mid y\in \mathscr{M} \,\};\\
x\mathscr{M}&=\{\,xy\mid y\in \mathscr{M} \,\}.
\end{align}
For $x,y\in \mathscr{M}$ we define
\begin{equation}
x\preceq y \Longleftrightarrow x\mathscr{M}\subseteq y\mathscr{M},
\end{equation}
i.e., $x\preceq y$ if, and only if, there exists $z\in \mathscr{M}$ such that $x=yz$.
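For instance, in the free monoid $\mathscr{F}_2=\mathscr{F}\langle x,y\rangle$ one has $xy\preceq x$, since $xy=x\cdot y$,
whereas $x$ and $y$ are incomparable. More generally, $u\preceq v$ holds in $\mathscr{F}_2$ if, and only if, $v$ is a
prefix of the word $u$; in particular, the cocone $\mbox{\reflectbox{\textup{C}}}_u$ (cf. \S~\ref{ss:cones}) consists of
the $|u|+1$ prefixes of $u$.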
\subsection{Left cancellative monoids}
\label{ss:lcmon}
A monoid $\mathscr{M}$ is said to be {\it left cancellative} if $xy=xz$ implies $y=z$
for all $x,y,z\in \mathscr{M}$; and {\it right cancellative} if $yx=zx$ implies $y=z$
for all $x,y,z\in \mathscr{M}$.
\begin{prop}
Let $\mathscr{M}$ be a left-cancellative monoid.
For $x,y\in \mathscr{M}$ one has $x\mathscr{M}=y\mathscr{M}$ if, and only if,
there exists $z\in \mathscr{M}^\times$ such that $y=xz$.
\end{prop}
\begin{proof}
For $z\in \mathscr{M}^{\times}$ one has $z\mathscr{M}=\mathscr{M}$.
Thus for $x\in\mathscr{M}$ and $y=xz$ one obtains $y\mathscr{M}=xz\mathscr{M}=x\mathscr{M}$.
Conversely, suppose $x\mathscr{M}=y\mathscr{M}$ for $x,y$ in $\mathscr{M}$.
Then there exist $z,w\in \mathscr{M}$ such that $y=xz$ and $x=yw$.
Hence $y=ywz$ and $x=xzw$.
Thus left cancellation implies $zw=1=wz$, showing that $z,w\in \mathscr{M}^\times$.
\end{proof}
\begin{cor}
\label{cor:canmon}
Let $\mathscr{M}$ be a left-cancellative monoid. Then $(\mathscr{M}/\mathscr{M}^\times,\preceq)$ is a poset.
\end{cor}
\begin{rem}
If left cancellation is replaced by right cancellation, then one has
$\mathscr{M} x=\mathscr{M} y$ if, and only if, there exists $z\in \mathscr{M}^\times$ such that $y=zx$.
\end{rem}
\subsection{$1$-generated monoids as posets}
\label{ss:1genpos}
\begin{prop}
\label{prop:poset}
Let $\mathscr{M}$ be a connected $\mathbb{N}_0$-graded monoid.
For $x,y\in \mathscr{M}$ one has $x\mathscr{M}=y\mathscr{M}$ if, and only if, $x=y$.
\end{prop}
\begin{proof}
Suppose $x\mathscr{M}=y\mathscr{M}$, for $x,y\in \mathscr{M}$.
Then there exist $z,w\in \mathscr{M}$ such that $x=yz$ and $y=xw$,
so $|x|=|y|+|z|$ and $|y|=|x|+|w|$.
Thus $|z|=0=|w|$.
Since $\mathscr{M}$ is connected, this implies $z=1=w$.
\end{proof}
As a consequence one obtains the following.
\begin{cor}
\label{cor:poset}
Let $\mathscr{M}$ be a $1$-generated $\mathbb{N}_0$-graded monoid.
Then $(\mathscr{M},\preceq)$ is a poset. If $\mathscr{M}$ is finitely $1$-generated,
then $(\mathscr{M},\preceq)$ is a noetherian poset.
\end{cor}
\begin{rem}
\label{rem:tilno}
The following example shows that the universal boundary $\partial\mathscr{M}$
is in general different from the $\sim$-boundary $\widetilde{\partial}\mathscr{M}$.
\label{ex:stup}
Let $\mathscr{M}=\langle x,y,z \mid xz=zx\,\rangle^+$. Consider
\begin{equation}
\label{eq:ex1}
\begin{aligned}
f_1\colon& \mathbb{N}\to \mathscr{M},\qquad f_1(n)=(xz)^n,\\
f_2\colon& \mathbb{N}\to \mathscr{M},\qquad f_2(n)=x^n,\\
f_3\colon& \mathbb{N}\to \mathscr{M},\qquad f_3(n)=z^n.
\end{aligned}
\end{equation}
Then $f_2\succeq f_1\preceq f_3$, while $[f_2]\not=[f_3]$.
Hence $\pi([f_1])=\pi([f_2])=\pi([f_3])\in\widetilde{\partial}\mathscr{M}$,
and $\pi\colon\partial\mathscr{M}\to\widetilde{\partial}\mathscr{M}$ is not injective.
\end{rem}
\subsection{Abelian semigroups generated by idempotents}
\label{ss:sgrpid}
Let $E$ be an abelian semigroup generated by a set of
elements $\Sigma\subseteq E$ satisfying $\sigma^2=\sigma$ for
all $\sigma\in\Sigma$, i.e., all elements of $\Sigma$ are idempotents.
Then every element $u\in E$ is an idempotent, and one may define
a partial order "$\preceq$" on $E$ by
\begin{equation}
\label{eq:pord}
u\preceq v \qquad \Longleftrightarrow \qquad u\cdot v=v,
\end{equation}
for $u,v\in E$. Let $\mathscr{R}=\{\,(u,v)\in \Sigma\times \Sigma\mid u\preceq v\,\}$.
By definition, one has
\begin{equation}
\label{eq:E1}
E=\{\,u=\sigma_1\cdots\sigma_r\mid \sigma_i\in\Sigma\,\}.
\end{equation}
Hence
\begin{equation}
\label{eq:E2}
E\simeq \mathscr{F}^{\mathrm{ab}}(\Sigma)/R,
\end{equation}
where $\mathscr{F}^{\mathrm{ab}}(\Sigma)$ is the free abelian semigroup over the set $\Sigma$,
and $R$ is the relation
\begin{equation}
\label{eq:E3}
R=\{\, (uv,v) \mid (u,v)\in\mathscr{R}\,\}\subseteq \mathscr{F}^{\mathrm{ab}}(\Sigma)\times \mathscr{F}^{\mathrm{ab}}(\Sigma),
\end{equation}
i.e., $E=\mathscr{F}^{\mathrm{ab}}(\Sigma)/R^\sim$, where $R^\sim$ is the equivalence relation on
$\mathscr{F}^{\mathrm{ab}}(\Sigma)$ generated by the set $R$.
Let
\begin{equation}
\widehat{E}=\{\,\chi\colon E\to\{0,1\}\mid \text{$\chi$ a semigroup homomorphism, $\chi(0)=0$, $\chi\not\equiv 0$}
\,\}
\end{equation}
Then $\widehat{E}$ coincides with the set of characters of the $C^\ast$-algebra $C^\ast(E)$
generated by $E$ (satisfying $e^\ast=e$ for all $e\in E$), and hence carries naturally
the structure of a compact topological space. By construction, $\widehat{E}$ can be identified with
a subset of $\mathcal{F}(\Sigma,\{0,1\})$ - the set of functions from $\Sigma$ to $\{0,1\}$. In more detail,
\begin{equation}
\label{eq:ident1}
\widehat{E}=\{\,\phi\in\mathcal{F}(\Sigma,\{0,1\})\mid \forall (u,v)\in\mathscr{R}:\ \phi(v)=\phi(u)\cdot\phi(v)\,\}
\end{equation}
Thus identifying $\mathscr{F}(\Sigma,\{0,1\})$ with $\{0,1\}^{\Sigma}$, one obtains that
\begin{equation}
\label{eq:ident2}
\widehat{E}=\{\,(\eta_\sigma)_{\sigma\in\Sigma}\in\{0,1\}^{\Sigma}\mid\forall (u,v)\in\mathscr{R}:
\eta_v=\eta_u\cdot\eta_v\,\}.
\end{equation}
\subsection{The semigroup of idempotents generated by a set of subsets of a set}
\label{ss:semiset}
Let $X$ be a set, and let $S\subseteq\mathscr{P}(X)$ be a set of subsets of $X$.
Then $S$ generates an algebra of sets $\mathscr{A}(S)\subseteq\mathscr{P}(X)$, i.e., the elements of $\mathscr{A}(S)$
are the finite intersections of sets in $S$. Then
\begin{equation}
\label{eq:semiset}
E(S)=\langle\,I_A\mid A\in\mathscr{A}(S)\,\rangle\subseteq\mathscr{F}(X,\{0,1\})
\end{equation}
where $I_A$ denotes the indicator function of $A\subseteq X$, is an abelian semigroup
(with respect to pointwise multiplication) generated by the set of idempotents
\begin{equation}
\label{eq:semiset2}
\Sigma=\{\,I_Y\mid Y\in S\,\}.
\end{equation}
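Note that $I_A\cdot I_B=I_{A\cap B}$ for all $A,B\in\mathscr{A}(S)$. Hence, with respect to the partial
order \eqref{eq:pord}, one has $I_U\preceq I_V$ if, and only if, $V\subseteq U$, i.e., the partial order
on $\Sigma$ corresponds to reverse inclusion of the sets in $S$.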
Moreover, by \eqref{eq:ident1}, one has
\begin{equation}
\label{eq:semiset3}
\widehat{E}(S)=\{\,\phi\in\mathcal{F}(S,\{0,1\})\mid \forall U,V\in S, V\subseteq U: \phi(V)=
\phi(U)\cdot \phi(V)\,\}
\end{equation}
\subsection{The Laca-boundary of a monoid}
\label{ss:lacab}
Let $\mathscr{M}$ be a $1$-generated monoid. Then one chooses
\begin{equation}
\label{eq:laca1}
S=\{\, \omega\cdot \mathscr{M}\mid\omega\in \mathscr{M}\,\}
\end{equation}
to consist of all principal right ideals.
For short we call the compact set $\eth \mathscr{M}=\widehat{E}(S)$ for $S$ as in \eqref{eq:laca1}
the {\emph{Laca boundary}} of $\mathscr{M}$.
For an infinite word $\underline{\omega}=(\omega_k)\in\mathscr{D}(\mathbb{N},\mathscr{M},\preceq)$ one defines the element
$\chi_{\underline{\omega}}\in\widehat{E}(S)$ by $\chi_{\underline{\omega}}(\tau \mathscr{M})=1$ if, and only if,
there exists $k\in\mathbb{N}$ such that $\omega_k\in\tau \mathscr{M}$, i.e., $\tau\succeq\omega_k$, and thus
$\tau\succeq\underline{\omega}$. This yields a map
\begin{equation}
\label{eq:map1}
\chi_{\cdot}\colon \mathscr{D}(\mathbb{N},\mathscr{M},\preceq)\longrightarrow \widehat{E}(S)
\end{equation}
(cf. \cite[\S~2.2]{Lietal:Cstar}). By definition, it has the following property:
\begin{prop}
\label{prop:val}
For $\underline{\omega}=(\omega_k)\in\mathscr{D}(\mathbb{N},\mathscr{M},\preceq), \tau\in \mathscr{M}$, one has
$\chi_{\underline{\omega}}(\tau \mathscr{M})=1$ if, and only if, $\tau\succeq\underline{\omega}$.
In particular, one has $\chi_{\underline{\eta}}=\chi_{\underline{\omega}}$ if, and only if, $\underline{\eta}\approx\underline{\omega}$, and hence
$\chi_\cdot$ induces an injective map
\begin{equation}
\label{eq:map2}
\overline{\chi}_\cdot\colon \partial \mathscr{M}\longrightarrow\eth \mathscr{M}.
\end{equation}
\end{prop}
\begin{proof}
The first part has already been established before.
Let $\underline{\eta}=(\eta_k)$. Then by the first part, $\underline{\omega}\succeq\underline{\eta}$ implies that
for all $\tau\in\mathscr{M}$ one has
\begin{equation}
\label{eq:map3}
\chi_{\underline{\omega}}(\tau\mathscr{M})=1\Longrightarrow \chi_{\underline{\eta}}(\tau\mathscr{M})=1.
\end{equation}
Thus, as $\mathrm{im}(\chi_{\underline{\omega}})\subseteq\{0,1\}$, one concludes that
$\underline{\omega}\succeq\underline{\eta}$ and $\underline{\omega}\preceq\underline{\eta}$ implies that $\chi_{\underline{\omega}}=\chi_{\underline{\eta}}$.
On the other hand $\chi_{\underline{\eta}}=\chi_{\underline{\omega}}$ implies that $1=\chi_{\underline{\eta}}(\eta_k \mathscr{M})=\chi_{\underline{\omega}}(\eta_k \mathscr{M})$ for all $k\in\mathbb{N}$.
In particular, $\underline{\eta}\succeq\underline{\omega}$. Interchanging the roles of
$\underline{\eta}$ and $\underline{\omega}$ yields $\underline{\omega}\succeq\underline{\eta}$, and thus $\underline{\eta}\approx\underline{\omega}$ (cf. section~\ref{ss:compl}).
The last part is a direct consequence of the definition of $\partial \mathscr{M}$.
\end{proof}
The following theorem shows that for a $1$-generated $\mathbb{N}_0$-graded monoid $\mathscr{M}$
its universal boundary
$\partial\mathscr{M}$ with the fine topology is a totally-disconnected compact space.
\begin{thm}
\label{thm:princ}
The map $\overline{\chi}_{\cdot}\colon (\partial \mathscr{M},\mathscr{T}_f(\bar{\moM}))\longrightarrow \eth \mathscr{M}$ is a homeomorphism.
\end{thm}
\begin{proof}
It is well known that $\chi$ is surjective (cf. \cite[Lemma~2.3]{Lietal:Cstar}), and thus $\overline{\chi}$ is surjective. By Proposition~\ref{prop:val}, $\overline{\chi}$ is injective, and hence $\overline{\chi}$ is a bijection. The sets
\begin{equation}
\label{eq:subbase}
U_\tau^{\varepsilon}=\{\,\eta\in\widehat{E}(\mathscr{M})\mid\eta(\tau\mathscr{M})=\varepsilon\,\},\qquad \tau\in\mathscr{M},\qquad\varepsilon\in\{0,1\}
\end{equation}
form a subbasis of the topology of $\widehat{E}(\mathscr{M})$, and
\begin{equation}
\label{eq:map4}
\overline{\chi}^{-1}(U_\tau^1)=C_\tau(\bar{\mathscr{M}})\cap\partial\mathscr{M}
\end{equation}
by \eqref{eq:map3}. Hence
$\overline{\chi}^{-1}(U_\tau^0)=C_\tau(\bar{\mathscr{M}})^C\cap\partial\mathscr{M}$ and this yields the claim.
\end{proof}
The proof of Theorem~\ref{thm:princ} has also shown that
\begin{equation}
\label{eq:chiback}
\overline{\chi}^{-1}\colon\eth\mathscr{M}\longrightarrow (\partial\mathscr{M},\mathscr{T}_c(\bar{\moM}))
\end{equation}
is a bijective and continuous map. This has the following consequence
(cf. \cite[\S~ 9.4, Corollary 2]{bou:top}).
\begin{prop}
\label{prop:T2}
Let $\mathscr{M}$ be a $1$-generated $\mathbb{N}_0$-graded monoid such that
$(\partial\mathscr{M},\mathscr{T}_c(\bar{\moM}))$ is Hausdorff. Then $\mathscr{M}$ is $\mathscr{T}$-regular.
\end{prop}
In contrast to Proposition~\ref{prop:homs} one has the following property for the
Laca boundary of monoids.
\begin{prop}
\label{prop:hom}
Let $\phi\colon\mathscr{Q}\longrightarrow\mathscr{M}$ be a surjective homomorphism of connected
$\mathbb{N}_0$-graded monoids. Then $\phi$ induces an injective continuous map
$\phi_{\eth}\colon\eth\mathscr{M}\longrightarrow\eth\mathscr{Q}$.
\end{prop}
\begin{proof}
By Proposition~\ref{prop:poset}, $\phi$ induces a map $\phi_{\Sigma}\colon\Sigma(\mathscr{Q})\to\Sigma(\mathscr{M})$ given by
\begin{equation}
\label{eq:phiS}
\phi_{\Sigma}(\omega\mathscr{Q})=\phi(\omega)\mathscr{M}.
\end{equation}
Moreover, for $x,y\in\mathscr{Q}$ one has $x\preceq y$, if and only if, $x\mathscr{Q}\subseteq y\mathscr{Q}$,
if and only if there exists $z\in\mathscr{Q}$ such that $x=y\cdot z$.
From the last statement one concludes that $\phi_\Sigma(x\mathscr{Q})\subseteq
\phi_{\Sigma}(y\mathscr{Q})$. Thus, by \eqref{eq:E2}, $\phi_\Sigma$ induces a homomorphism of
semigroups
\begin{equation}
\label{eq:phiE}
\phi_E\colon E(\mathscr{Q})\longrightarrow E(\mathscr{M}),
\end{equation}
and thus a map
\begin{equation}
\label{eq:hEphio}
\phi_{\widehat{E}}^\circ\colon \widehat{E}(\mathscr{M})\cup\{0\}\longrightarrow \widehat{E}(\mathscr{Q})\cup\{0\}.
\end{equation}
If $\phi$ is surjective, then $\phi_E$ is surjective, and $\phi_{\widehat{E}}^\circ$ restricts
to a map
\begin{equation}
\label{eq:hEphi}
\phi_{\widehat{E}}\colon\widehat{E}(\mathscr{M})\longrightarrow\widehat{E}(\mathscr{Q}).
\end{equation}
It is straightforward to verify that $\phi_{\widehat{E}}$ is continuous and injective.
\end{proof}
\section{Free monoids and trees}
\label{ss:freetree}
Let $\mathscr{F}_n=\mathscr{F}\langle x_1,\dots,x_n \rangle$ be the free monoid on $n$ generators.
Let $S=\{ x_1,\dots,x_n \}$ be the set of generators, and let $|\hbox to 7truept{\hrulefill}|\colon\mathscr{F}_n\to\mathbb{N}_0$
be the grading morphism, i.e., $|y|=1$ if and only if $y\in S$.
Then the Cayley graph $\Gamma(\mathscr{F}_n, S)$ of $\mathscr{F}_n$ is the graph defined by
\begin{align}
V&= \{\, x\mid x\in \mathscr{F}_n\,\}\\
E&=\big\{\, (x,xx_i)\in V\times V\mid x\in \mathscr{F}_n, x_i\in S \,\big\}.
\end{align}
One has two maps $o,t\colon E \to V$ defined by
\begin{align}
o\big((x,xx_i)\big)&=x;\\
t\big((x,xx_i)\big)&=xx_i.
\end{align}
Then $\Gamma(\mathscr{F}_n, S)$ is an $n$-regular tree with root $1$ and all edges
pointing away from $1$. The graph $\Gamma(\mathscr{F}_n,S)$ is not a graph in the sense
of \cite[\S~2.1]{ser:trees}, but coincides with an orientation of the $n$-regular tree $T_n$.
\subsection{The boundary of the $n$-regular tree}
\label{ss:boundT}
The boundary $\partial T_n$ of $T_n$ is the set of equivalence classes
of infinite paths without backtracking under the relation $\sim$
defined by the shift, i.e.
\begin{equation}
v_0 v_1 v_2\dots\sim v_1 v_2 v_3\dots
\end{equation}
We denote by $[v,\omega)$ the unique path starting at $v$ in the class $\omega$
and define
\begin{equation}
I_v=\{\,\omega\in\partial T_n\mid v\in[1,\omega) \,\}
\end{equation}
the {\it interval} of $\partial T_n$ starting at $v$.
Then $\partial T_n$ is compact with respect to the topology $\mathscr{T}_I$ generated by $\{ I_v\}_{v\in V}$
(cf. \cite[p.~20, Exercise~1]{ser:trees}).
For any $[\rho]\in\partial T_n$ there exists a unique ray
$\rho=(e_k)_{k\in\mathbb{N}}$, $o(\rho)=o(e_1)=1$.
One can assign to $\rho$ the decreasing function
$\omega_\rho\in \mathscr{D}(\mathbb{N},\mathscr{F}_n,\preceq)$ given by
$\omega_\rho(k)=t(e_k)$.
The map $\varphi\colon\partial T_n\to\partial\mathscr{F}_n$ given by
\begin{equation}
\label{boundtree}
\varphi ([\rho])= [\omega_\rho]
\end{equation}
is a bijection. Hence one can identify $\partial T_n$ with $\partial\mathscr{F}_n$.
\subsection{The space $(\bar{\euF}_n,\mathscr{T}_c(\bar{\euF}_n))$}
\label{ss:comp}
Every cone $C_\tau(\bar{\euF}_n)$ defines a rooted subtree $ T_\tau$ of $T_n$ satisfying
$\partial T_\tau=\partial\mathscr{F}_n\cap C_\tau(\bar{\euF}_n)$. Thus every covering
$\bigcup_{\tau\in U} C_\tau(\bar{\euF}_n)\cap\partial\mathscr{F}_n$ of the boundary
$\partial\mathscr{F}_n$ by cones defines a forest $F=\bigcup_{\tau\in U} T_\tau$.
Let $F=\bigcup_{i\in I} F_i$ be the decomposition of $F$ in connected components.
Then $\partial T_n=\partial F=\bigsqcup_{i\in I} \partial F_i$, where $\sqcup$ denotes disjoint union.
Hence the compactness of $\partial T_n$ implies $|I|<\infty$.
As $\partial F_i\subseteq \partial T_n$ is closed, and hence compact, a similar argument shows that there exist finitely many cones $C_{\tau_{i,j}}$, $1\leq j\leq r_i$, such that
$F_i=\bigcup_{1\leq j\leq r_i} T_{\tau_{i,j}}$.
Thus, if $\mathscr{V}$ is an open covering of $\bar{\euF}_n$, it can be refined
to a covering $\mathscr{U}$ each of whose elements is either a cone $C_\tau(\bar{\euF}_n)$ or a
singleton set $\{\omega\}$, $\omega\in \mathscr{F}_n$.
Let $\Lambda\subseteq T_n$ be the subtree generated by the vertices $\tau_{i,j}$.
Then $\Lambda$ is a finite subtree, and the only vertices of $T_n$ not covered by
$\bigcup_{i,j} C_{\tau_{i,j}}(\bar{\euF}_n)$ are contained in $V(\Lambda)$. This shows that
$(\bar{\euF}_n,\mathscr{T}_c(\bar{\euF}_n))$ is a compact space.
\subsection{The space $(\bar{\moM},\mathscr{T}_c(\bar{\moM}))$}
Let $\mathscr{M}$ be a $\mathscr{T}$-regular finitely $1$-generated monoid. Then, by definition,
$(\partial\mathscr{M},\mathscr{T}_c(\bar{\moM}))$ is a Hausdorff space, and hence
$(\bar{\moM},\mathscr{T}_c(\bar{\moM}))$ is a Hausdorff space. By Proposition \ref{prop:homs},
the canonical mapping $\phi_\mathscr{M}\colon\mathscr{F}\to\mathscr{M}$ (cf. \eqref{eq:canmon}) induces
a continuous surjective map $\bar{\phi}_{\mathscr{M}}\colon\bar{\euF}\to\bar{\moM}$. This shows the following.
\begin{prop}
\label{prop:Treg}
Let $\mathscr{M}$ be a finitely $1$-generated $\mathbb{N}_0$-graded $\mathscr{T}$-regular monoid.
Then $(\bar{\moM},\mathscr{T}_c(\bar{\moM}))$ is a compact space.
\end{prop}
\subsection{The canonical probability measure on the boundary of a regular tree}
\label{ss:cammu}
By Carath\'eodory's extension theorem the assignment
\begin{equation}
\label{eq:mudTn}
\mu(I_v)=n^{-|v|}
\end{equation}
defines a unique probability measure $\mu\colon\mathrm{Bor}(\partial T_n)\to\mathbb{R}^+_0$.
Hence the corresponding probability measure $\mu\colon\mathrm{Bor}(\partial{\mathscr{F}}_n)\to\mathbb{R}^+_0$
satisfies
\begin{equation}
\label{eq:mupaM}
\mu(\partial\mathscr{F}_n\cap C_\tau(\bar{\mathscr{F}}_n))= n^{-|\tau|}\ \text{for $\tau\in\mathscr{F}_n$}.
\end{equation}
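Note that the assignment \eqref{eq:mudTn} is consistent: for every $v\in V$ one has
$I_v=\bigsqcup_{i=1}^n I_{vx_i}$, and hence
\begin{equation*}
\sum_{i=1}^n\mu(I_{vx_i})=n\cdot n^{-(|v|+1)}=n^{-|v|}=\mu(I_v),
\end{equation*}
while $\mu(\partial T_n)=\mu(I_1)=1$, so that $\mu$ is indeed a probability measure.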
\begin{defi}
Let $\hbox to 7truept{\hrulefill}\cdot\hbox to 7truept{\hrulefill}\colon \mathscr{F}_n\times\partial \mathscr{F}_n\to\partial \mathscr{F}_n$ be the map given by
\begin{equation}
x\cdot[\omega]=[x\omega],
\end{equation}
where $x\omega\colon\mathbb{N}\to \mathscr{F}_n$ is given by $(x\omega)(k)=x\cdot\omega(k)$, $k\in\mathbb{N}$.
\end{defi}
Note that this action is well defined, since $\omega\sim\omega^\prime$ implies that
$x\omega\sim x\omega^\prime$.
\begin{defi}
Let $\hbox to 7truept{\hrulefill}\cdot\hbox to 7truept{\hrulefill}\colon L^2(\partial \mathscr{F}_n,\mathbb{C},\mu)\times \mathscr{F}_n\to L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$
be the map given by
\begin{equation}
\label{eq:highx}
f.x={}^xf,
\end{equation}
where
\begin{equation}
({}^xf)([\omega])=f([x\omega]).
\end{equation}
\end{defi}
Note that for $f\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$ one has ${}^xf\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$,
since
\begin{align}
\|\ {}^xf \|_2^2
&=\int_{\partial T_n} |\ {}^xf|^2\, d\mu\\
&=\int_{x\partial T_n} |f|^2\, d\mu\\
\label{eqmeas}
&\le\int_{\partial T_n} |f|^2\, d\mu\\
&=\| f\|_2^2,
\end{align}
where \eqref{eqmeas} follows
since $x \partial\mathscr{F}_n\subseteq \partial \mathscr{F}_n$.
\begin{defi}
For $z\in \mathscr{F}_n$ we define the map
$T_z\colon L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)\to L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$ by
\begin{equation}
\label{eq:defTx}
T_z(f)= {}^zf.
\end{equation}
\end{defi}
\begin{fact}
\label{fact:fact1}
$\mathscr{F}_n$ acts via $T_\cdot$ on $L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$ by bounded linear operators.
\end{fact}
\begin{proof}
Let $z\in \mathscr{F}_n$. For $f,g\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$, $[\omega]\in\partial\mathscr{F}_n$, one has
\begin{align}
\big(T_z(f+g)\big)([\omega])
&=(f+g)([z\omega])\\
&=f([z\omega])+g([z\omega])\notag\\
&=( {}^zf)([\omega])+( {}^zg)([\omega])\notag\\
&=\big(T_z (f)\big) ([\omega])+ \big(T_z (g)\big) ([\omega]).\notag
\end{align}
Thus $T_z$ is linear.
It is also bounded, since
\begin{equation}
\| T_z\|_\infty
= \sup_{\| f\|_2=1} \|T_z (f)\|_2
\le \sup_{\| f\|_2=1} \|f\|_2\le1.
\end{equation}
\end{proof}
Hence $T_z\in\mathcal{B}\big(L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)\big)$ for all $z\in \mathscr{F}_n$.
As $\mathcal{B}\big(L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)\big)$ is a $C^\ast$-algebra,
$T_z$ has an adjoint operator $T_z^\ast$,
which is the bounded operator satisfying
\begin{equation}
\langle\, T_z f,g\,\rangle =\langle\, f,T_z^\ast g\,\rangle,
\end{equation}
for all $f,g\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$.
\begin{fact}
\label{fact:adj}
The bounded operator $T_z^\ast$, for $z\in \mathscr{F}_n$, is given by
\begin{equation}
\label{eq:star}
(T_z^\ast f) ([\omega])=
\begin{cases}
0 &\mbox{ if } [\omega]\notin z\partial\mathscr{F}_n,\\
f([\omega^\prime]) &\mbox{ if } [\omega]=z[\omega^\prime].
\end{cases}
\end{equation}
\end{fact}
\begin{proof}
Note that $T_z^\ast f\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$, since
\begin{align}
\| T_z^\ast f\|_2^2
&=\int_{\partial\mathscr{F}_n} |T_z^\ast f|^2\, d\mu\\
&= \int_{z\partial\mathscr{F}_n} |T_z^\ast f|^2\, d\mu\\
&\le \int_{\partial T_n} |f|^2\, d\mu.
\end{align}
Let $f,g\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$. Then one has
\begin{align}
\langle\,f, T_z^\ast g \,\rangle
&=\int_{\partial T_n} f(\overline{T_z^\ast g}) \,d\mu\\
&=\int_{z\partial T_n} f(\overline{T_z^\ast g}) \,d\mu\\
\label{adj}
&=\int_{z\partial T_n} (T_z f)\overline{g} \,d\mu\\
&\le\int_{\partial T_n} (T_z f)\overline{g} \,d\mu\\
&=\langle\,T_z f, g \,\rangle.
\end{align}
where equality \eqref{adj} holds by
\begin{equation}
f([z\omega^\prime])\,\overline{T_z^\ast g}([z\omega^\prime])
=(T_z f)([\omega^\prime])\,\bar{g}([\omega^\prime]).
\end{equation}
\end{proof}
\begin{prop}
\label{prop:cuntz}
The following identities hold for all $x,y\in S\subseteq\mathscr{F}_n$:
\begin{align}
\label{eqdelta}
T_x^\ast T_y&=\delta_{xy};\\
\label{eqsum}
\sum_{i=1}^n T_{x_i} T_{x_i}^\ast &=1.
\end{align}
In particular, the $C^\ast$-algebra
$C^\ast(\mathscr{F}_n,\mu)\subseteq\mathscr{B}(L^2(\partial\mathscr{F}_n,\mathbb{C},\mu))$
generated by $\mathscr{F}_n$ is isomorphic to the Cuntz algebra $\mathcal{O}_n$.
\end{prop}
\begin{proof}
Let $x,y\in S\subseteq \mathscr{F}_n$ and let $f\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$.
For any $[\omega]\in\partial\mathscr{F}_n$ one has
\begin{equation}
T_x^\ast T_y(f)([\omega])= T_x^\ast f([y\omega]) = \delta_{xy}\, f([\omega])
\end{equation}
by Fact \ref{fact:adj}. This proves identity \eqref{eqdelta}.
Let $[\omega]\in\partial\mathscr{F}_n$.
Then there exists $x_j\in S$ such that $[\omega]\in x_j\partial\mathscr{F}_n$.
Hence one has
\begin{equation}
T_{x_i}T_{x_i}^\ast f ([\omega])=\delta_{ij} f([\omega])
\end{equation}
for any $f\in L^2(\partial\mathscr{F}_n,\mathbb{C},\mu)$.
This yields the identity \eqref{eqsum}.
\end{proof}
\subsection{Finitely $1$-generated monoids}
\label{ss:caT}
Let $\mathscr{M}$ be a finitely $1$-generated $\mathbb{N}_0$-graded monoid.
Then one has a canonical surjective graded homomorphism $\phi_\mathscr{M}\colon\mathscr{F}\to\mathscr{M}$,
where $\mathscr{F}$ is a finitely generated free monoid (cf. \eqref{eq:canmon}),
which induces a continuous map $\partial\phi_\mathscr{M}\colon\partial\mathscr{F}\longrightarrow\partial\mathscr{M}$
(cf. Proposition~\ref{prop:homs}).
In particular,
\begin{equation}
\mu_\mathscr{M}\colon\mathrm{Bor}(\partial\mathscr{M})\longrightarrow\mathbb{R}^+_0
\end{equation}
given by $\mu_\mathscr{M}(A)=\mu\big((\partial\phi_\mathscr{M})^{-1}(A)\big)$
is a Borel probability measure on $\partial\mathscr{M}$.
For $s\in\mathscr{M}$, define the map $\beta_s\colon\partial\mathscr{M}\to\partial\mathscr{M}$ by
\begin{equation}
\label{eq:act1}
\beta_s([f])=[sf],\quad [f]\in\partial\mathscr{M},
\end{equation}
where $(sf)(n)=s\cdot f(n)$ for all $n\in\mathbb{N}$, $f\in\mathscr{D}(\mathbb{N},\mathscr{M},\preceq)$.
Then, as $\beta_s$ is mapping cones to cones, $\beta_s$ is continuous.
Hence one has a representation
\begin{equation}
\label{eq:act2}
\beta\colon \mathscr{M}\to C(\partial \mathscr{M},\partial \mathscr{M}).
\end{equation}
For $s\in\mathscr{M}$, let $\beta_{\ast,s}\colon L^2(\partial\mathscr{M},\mathbb{C},\mu_\mathscr{M})\to L^2(\partial\mathscr{M},\mathbb{C},\mu_\mathscr{M})$
be the map defined by
\begin{equation}
\label{eq:act3}
\beta_{\ast,s}(g)([f])=g(\beta_s([f]))=g([sf]),\quad\,g\in L^2(\partial\mathscr{M},\mathbb{C},\mu_\mathscr{M}),\,[f]\in\partial\mathscr{M}.
\end{equation}
Then one has
\begin{align}
\|\beta_{\ast,s}(g)\|_2^2
&=\int_{\partial\mathscr{M}} \big|g\big(\beta_s([f]\big)\big|^2\, d\mu_{\mathscr{M}}\\
&=\int_{\partial\mathscr{M}} \big|g\big([sf]\big)\big|^2 \, d\mu_{\mathscr{M}}\\
&=\int_{s\partial\mathscr{M}} \big|g\big([f]\big)\big|^2 \, d\mu_{\mathscr{M}}\\
&\le\int_{\partial\mathscr{M}}\big| g \big([f]\big) \big|^2 \,d\mu_{\mathscr{M}}\\
&=\|g\|_2^2,
\end{align}
for all $g\in L^2(\partial\mathscr{M},\mathbb{C},\mu_\mathscr{M})$, $s\in \mathscr{M}$.
Thus,
\begin{equation}
\label{eq:act4}
\|\beta_{\ast,s}\|=\sup_{\| g\|_2=1} \|\beta_{\ast,s}(g)\|_2\le 1
\end{equation}
for all $s\in\mathscr{M}$, i.e., $\beta_{\ast,s}$ is a bounded operator on $L^2(\partial\mathscr{M},\mathbb{C},\mu_\mathscr{M})$.
By an argument similar to the one used in the proof of Fact~\ref{fact:fact1} one can show that
it is also linear.
In particular, there exists a representation
\begin{equation}
\label{eq:act5}
\beta_\ast\colon\mathscr{M}\to\mathcal{B}\big(L^2(\partial\mathscr{M},\mathbb{C},\mu_\mathscr{M})\big).
\end{equation}
\subsection{Right-angled Artin monoids}
\label{ss:raam}
Let $\Gamma=(V,E)$ be a finite undirected graph, i.e., $|V|=n<\infty$
and $E\subseteq \mathscr{P}_2(V)$ is a set of subsets of $V$ of cardinality $2$.
The right-angled Artin monoid associated to $\Gamma$ is the monoid $\mathscr{M}^\Gamma$
defined by
\begin{equation}
\label{eq:raam1}
\mathscr{M}^\Gamma=\langle\,x\in V \mid xy=yx\mbox{ if } \{x,y\}\in E\,\rangle^+.
\end{equation}
Clearly, $\mathscr{M}^\Gamma$ is $\mathbb{N}_0$-graded and finitely $1$-generated.
By a theorem of Luis Paris (cf. \cite{paris:artin}), $\mathscr{M}^\Gamma$ embeds into the right-angled Artin group $G_\Gamma$. Thus $\mathscr{M}^\Gamma$ has the left-cancellation
property as well as the right-cancellation property.
The canonical homomorphism $\phi_\Gamma\colon \mathscr{F}\langle V\rangle\longrightarrow
\mathscr{M}^\Gamma$ is surjective and induces a continuous surjective map
\begin{equation}
\label{eq:canraam}
\partial\phi_\Gamma\colon\partial\mathscr{F}\langle V\rangle\longrightarrow\partial\mathscr{M}^\Gamma.
\end{equation}
(cf. Proposition~\ref{prop:homs}).
We denote by $\mu_\Gamma\colon\mathrm{Bor}(\partial\mathscr{M}^\Gamma)\longrightarrow\mathbb{R}^+_0$
the Borel probability measure induced by $\partial\phi_\Gamma$, i.e., for
$A\in\mathrm{Bor}(\partial\mathscr{M}^\Gamma)$ one has $\mu_\Gamma(A)=
\mu(\partial\phi_\Gamma^{-1}(A))$, where $\mu$ is the measure defined
on $\partial\mathscr{F}\langle V\rangle$ by \eqref{eq:mupaM}.
\begin{defi}
Let $\Gamma=(V,E)$ be a graph with unoriented edges,
and let $\Gamma_1=(V_1,E_1)$ and $\Gamma_2=(V_2,E_2)$ be subgraphs
of $\Gamma$. We say that $\Gamma$ is {\it bipartitely decomposed} by $\Gamma_1$
and $\Gamma_2$, if $V=V_1\sqcup V_2$ and
\begin{equation}
\label{eq:bipar1}
E=E_1\sqcup E_2\sqcup \big\{\,\{v_1,v_2\}\mid v_1\in V_1,\,v_2\in V_2\,\big\}.
\end{equation}
In this case we will write $\Gamma=\Gamma_1\vee\Gamma_2$.
If no such decomposition exists, $\Gamma$ will be called {\it coconnected}.
\end{defi}
If $\Gamma$ is any graph which is not coconnected,
we can decompose it into coconnected components.
We find these coconnected components by considering its
{\it opposite graph} $\,\Gamma^{\mathrm{op}}=(V,\mathscr{P}_2(V)\setminus E)$ and finding its connected components;
the coconnected components of $\Gamma$ correspond to
the connected components of $\Gamma^\mathrm{op}$.
One has the following property.
\begin{fact}
\label{fact:graph}
Let $\Gamma=(V,E)$ be a graph with unoriented edges. Then $\Gamma$ is
coconnected if, and only if, $\Gamma^{\mathrm{op}}$ is connected. In particular, if
$\,\Gamma^{\mathrm{op}}=\bigsqcup_{i\in I} \Lambda_i$ is the decomposition of $\Gamma^{\mathrm{op}}$
in its connected components, then one has
\begin{equation}
\label{eq:bipar2}
\Gamma=\textstyle{\bigvee_{i\in I} \Lambda_i^{\mathrm{op}}},
\end{equation}
where $\Lambda_i^{\mathrm{op}}$ are coconnected subgraphs of $\Gamma$.
\end{fact}
\begin{proof}
Obviously, $\Gamma$ admits a bipartite decomposition $\Gamma=\Gamma_1\vee\Gamma_2$
if, and only if, $\Gamma^{\mathrm{op}}=\Gamma_1^\mathrm{op}\sqcup\Gamma_2^\mathrm{op}$ is not connected.
This yields the claim.
\end{proof}
In analogy to the decomposition in connected components we will call \eqref{eq:bipar2}
the {\it decomposition in coconnected components}.
Note that the decomposition in coconnected components
implies that any two vertices in different components
must be connected by an edge. From this property one concludes
the following straightforward fact.
\begin{fact}
\label{fact:coco1}
Let $\Gamma=(V,E)$ be a finite graph with unoriented edges, and let
$\Gamma=\bigvee_{1\leq i\leq r} \Gamma_i$ be its decomposition in coconnected
components, $\Gamma_i=(V_i,E_i)$. Then
\begin{equation}
\label{eq:coco1}
\mathscr{M}^\Gamma=\mathscr{M}^{\Gamma_1}\times\cdots\times \mathscr{M}^{\Gamma_r},
\end{equation}
where $\mathscr{M}^{\Gamma_i}=\langle\, v\in V_i\,\rangle$ is the submonoid generated by $V_i$. In particular,
$\partial\mathscr{M}^\Gamma=\times_{1\leq i\leq r} \partial\mathscr{M}^{\Gamma_i}$ and
\begin{equation}
\label{eq:prodL2}
L^2(\partial\mathscr{M}^\Gamma,\mathbb{C},\mu_\Gamma)=L^2(\partial\mathscr{M}^{\Gamma_1},\mathbb{C},
\mu_{\Gamma_1})\hotimes\cdots\hotimes
L^2(\partial\mathscr{M}^{\Gamma_r},\mathbb{C},\mu_{\Gamma_r}).
\end{equation}
\end{fact}
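The following elementary example may illustrate the decomposition \eqref{eq:coco1}.
\begin{example}
Let $\Gamma$ be the path with vertex set $V=\{a,b,c\}$ and edge set $E=\{\{a,b\},\{b,c\}\}$.
Then $\Gamma^{\mathrm{op}}$ consists of the single edge $\{a,c\}$ and the isolated vertex $b$,
so the coconnected components of $\Gamma$ are $\Gamma_1=(\{a,c\},\emptyset)$ and
$\Gamma_2=(\{b\},\emptyset)$, i.e., $\Gamma=\Gamma_1\vee\Gamma_2$.
Accordingly, $b$ commutes with $a$ and $c$, and
\begin{equation*}
\mathscr{M}^\Gamma=\langle\,a,b,c\mid ab=ba,\ bc=cb\,\rangle^+\simeq
\mathscr{F}\langle a,c\rangle\times\mathscr{F}\langle b\rangle,
\end{equation*}
in accordance with \eqref{eq:coco1}. Similarly, for the $4$-cycle $\Gamma=C_4$ one obtains
$\mathscr{M}^{C_4}\simeq\mathscr{F}_2\times\mathscr{F}_2$.
\end{example}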
In \cite{CL07}, J.~Crisp and M.~Laca have shown the following.
\begin{thm}[\cite{CL07}, Theorem 6.7]
\label{th:CL07}
Let $\Gamma=(V,E)$ be a finite unoriented graph such that $\Gamma^\mathrm{op}$ has no isolated vertices, and let $\Gamma=\bigvee_{i=1}^r \Gamma_i$ be the decomposition of $\Gamma$ in coconnected components, $\Gamma_i=(V_i,E_i)$.
Then the universal $C^\ast$-algebra
with generators $\{S_x\mid x\in V\}$ subject to the relations
\begin{itemize}
\item[(i)] $S_x^\ast S_x=1$ for each $x\in V$;
\item[(ii)] $S_x S_y=S_y S_x$ and $S_x^\ast S_y=S_y S_x^\ast$ if $x$ and $y$ are adjacent in $\Gamma$;
\item[(iii)] $S_x^\ast S_y=0$ if $x$ and $y$ are distinct and not adjacent in $\Gamma$;
\item[(iv)] $\prod_{x\in V_i}(1-S_x S_x^\ast)=0$ for each $i\in\{1,\dots,r\}$;
\end{itemize}
is canonically isomorphic to the boundary quotient $\partial C_\lambda(\mathscr{M}^\Gamma)$ for
$\mathscr{M}^\Gamma$ and it is a simple $C^\ast$-algebra.
\end{thm}
Hence, one has the following proposition.
\begin{prop}
The $C^\ast$-algebra $C^\ast(\mathscr{M}^\Gamma,\mu_\Gamma)$ (cf. \eqref{eq:Cstarmon})
of a right-angled Artin monoid $\mathscr{M}^\Gamma$
is isomorphic to the boundary quotient $\partial C_\lambda(\mathscr{M}^\Gamma)$ of Theorem \ref{th:CL07}.
\end{prop}
\begin{proof}
Let $\Gamma=(V,E)$ be a finite unoriented graph such that $|V|=n$
and let $\Gamma=\bigvee_{i=1}^r \Gamma_i$ be its decomposition in coconnected components.
It is straightforward to verify that for the set of operators $\{T_x\mid x\in V\}$, where the
operator $T_x$ is defined as in \eqref{eq:defTx}, the adjoint operators are given by \eqref{eq:star},
and thus $\{T_x\mid x\in V\}$ satisfies the relations (i)-(iii).
It remains to prove that it also satisfies (iv). Let
\begin{equation}
\label{eq:defidem}
\mathbf{e}_i=\prod_{x\in V_i}(1-T_x T_x^\ast).
\end{equation}
In order to show that $\mathbf{e}_i(f)=0$ for all $f\in L^2(\partial\mathscr{M}^\Gamma,\mathbb{C},
\mu_{\Gamma})$ it suffices to show that
$\mathbf{e}_i(f)=0$ for $f=f_1\otimes\cdots\otimes f_r$, $f_j\in
L^2(\partial\mathscr{M}^{\Gamma_j},\mathbb{C},\mu_{\Gamma_j})$ (cf. \eqref{eq:prodL2}).
Note that
\begin{equation}
\label{eq:valprod}
\big(1-T_xT_x^\ast\big)(f)([u])=
\begin{cases}
0 &\mbox{ if } [u]\in x\partial \mathscr{M}^\Gamma,\\
f([u]) &\mbox{ otherwise.}
\end{cases}
\end{equation}
Let $u=u_1\cdots u_r$, $u_j\in\partial\mathscr{M}^{\Gamma_j}$. Then there
exists $y\in V_i$ such that $u_i\in y\partial\mathscr{M}^{\Gamma_i}$. Hence, by
\eqref{eq:valprod}
\begin{equation}
\big(1-T_y T_y^\ast\big)(f)([u])=0.
\end{equation}
Hence $\mathbf{e}_i(f)=0$ and this yields the claim.
\end{proof}
\section{Fractals}
\label{ss:fract}
Let $\mathscr{M}$ be a finitely $1$-generated monoid.
By an {\emph{$\mathscr{M}$-fractal}} we will understand a complete metric space $(X,d)$
with a contracting left $\mathscr{M}$-action $\alpha\colon \mathscr{M}\longrightarrow C(X,X)$,
i.e., there exists a positive real number $\delta<1$ such that for all $x,y\in X$ and all $\omega\in
\mathscr{M}\setminus\{1\}$ one has
\begin{equation}
\label{eq:cont1}
d(\alpha(\omega)(x),\alpha(\omega)(y))\leq\delta\cdot d(x,y).
\end{equation}
The real number $\delta$ will be called the contraction constant.
To the authors' knowledge, the following important question has not yet been discussed in the literature.
\begin{ques}
\label{ques:fract} For which finitely $1$-generated monoids $\mathscr{M}$ does there exist an
$\mathscr{M}$-fractal $((X,d),\alpha)$?
\end{ques}
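The following classical construction in the sense of \cite{hutch:frac} shows that Question~\ref{ques:fract}
has a positive answer for finitely generated free monoids.
\begin{example}
\label{ex:cantor}
Let $\mathscr{M}=\mathscr{F}_2=\mathscr{F}\langle x_1,x_2\rangle$, let $X=[0,1]$ with the usual metric, and let
$\alpha\colon\mathscr{F}_2\longrightarrow C(X,X)$ be the homomorphism of monoids determined by
\begin{equation*}
\alpha(x_1)(t)=\tfrac{t}{3},\qquad \alpha(x_2)(t)=\tfrac{t+2}{3},\qquad t\in[0,1].
\end{equation*}
Then \eqref{eq:cont1} is satisfied with contraction constant $\delta=\tfrac{1}{3}$, i.e.,
$((X,d),\alpha)$ is an $\mathscr{F}_2$-fractal. Its attractor $K(\alpha)$ is the classical middle-thirds
Cantor set, as $K(\alpha)=\alpha(x_1)(K(\alpha))\cup\alpha(x_2)(K(\alpha))$.
\end{example}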
\subsection{The action of the universal boundary on an $\mathscr{M}$-fractal}
\label{ss:actuni}
For a strictly decreasing sequence $f\in\mathscr{D}(\mathbb{N},\mathscr{M},\preceq)$ and for $n,m\in\mathbb{N}$,
$m> n$,
there exists $\tau_{m,n}\in \mathscr{M}\setminus\{1\}$ such that $f(m)=f(n)\cdot\tau_{m,n}$.
By induction, one concludes
that $|f(n)|\geq n$. If $[f]\in\partial \mathscr{M}$, then the class $[f]$ can be represented by a strictly
decreasing sequence (cf. Fact~\ref{fact:strict}).
As $\alpha$ is contracting, one concludes that $(\alpha(f(n))(x))_{n\in\mathbb{N}}$ is a Cauchy sequence
for every strictly decreasing sequence $f\in\mathscr{D}(\mathbb{N},\mathscr{M},\preceq)$
and thus has a limit $\alpha(f)(x)=\lim_{n\to\infty}\alpha(f(n))(x)$. In more detail,
if $\alpha$ has contracting constant $\delta<1$,
one has
for $n,m\in\mathbb{N}$, $m> n$, that
\begin{equation}
\label{eq:contt2}
d(\alpha(f(m))(x),\alpha(f(n))(x))<\delta^{|f(n)|}\cdot d(\alpha(\tau_{m,n})(x),x)\leq
\delta^{|f(n)|}\cdot\mathrm{diam}(X),
\end{equation}
where $\mathrm{diam}(X)=\sup\{\,d(y,z)\mid y,z\in X\,\}$.
Thus one has a map
\begin{equation}
\label{eq:baMact}
\hbox to 7truept{\hrulefill}\cdot\hbox to 7truept{\hrulefill}\colon\mathscr{D}(\mathbb{N},\mathscr{M},\prec)\times X\longrightarrow X
\end{equation}
given by $f\cdot x=\alpha(f)(x)$. This map has the following property.
\begin{prop}
\label{prop:frac}
Let $\mathscr{M}$ be a finitely $1$-generated monoid, and let $((X,d),\alpha)$ be an
$\mathscr{M}$-fractal with attractor $K\subseteq X$. Then the map \eqref{eq:baMact} is continuous and $f\cdot x\in K$ for all $f\in\mathscr{D}(\mathbb{N},\mathscr{M},\prec)$
and $x\in X$.
\end{prop}
\begin{proof}
Let $f\in\mathscr{D}(\mathbb{N},\mathscr{M},\prec)$ be a
strictly decreasing function.
For $A=\{x\}$ and $\mathscr{S}(A)=\bigcup_{\sigma\in \mathscr{M}_1} \alpha(\sigma)(A)$, the sequence
$\mathscr{S}^k(A)$
converges to $K$ in the Hausdorff metric (cf. \cite[Statement (1)]{hutch:frac}). Thus
for all $\varepsilon>0$ there exists $N(\varepsilon)\in\mathbb{N}$ such that for all $n>N(\varepsilon)$
$\mathfrak{d}(\mathscr{S}^n(A),K)<\varepsilon$, where $\mathfrak{d}$ denotes the Hausdorff metric
(cf. \cite[(2.4)]{hutch:frac}). Hence $d(\alpha(f(n))(x),K)<\varepsilon$ for all $n>N(\varepsilon)$, and thus
$\alpha(f)(x)$ is an adherent point
of $K$. As $K$ is closed, this implies $\alpha(f)(x)\in K$.
The map \eqref{eq:baMact} is obviously continuous in the second argument. Moreover, let $f,h\in\mathscr{D}(\mathbb{N},\mathscr{M},\prec)$ with $f,h\preceq\tau$ for some $\tau\in\mathscr{M}$. Then
\begin{equation}
\label{eq:stetig1}
d(\alpha(f)(x),\alpha(h)(x))\leq 2\cdot\delta^{|\tau|}\cdot\mathrm{diam}(X).
\end{equation}
Thus \eqref{eq:baMact} is continuous.
\end{proof}
\begin{prop}
\label{prop:leq}
Let $f,h\in\mathscr{D}(\mathbb{N},\mathscr{M},\prec)$ satisfy $f\preceq h$. Then
$\alpha(f)(x)=\alpha(h)(x)$ for all $x\in X$.
\end{prop}
\begin{proof}
We may assume that $f(n)\preceq h(n)$ for all $n\in\mathbb{N}$, i.e., there exists $y_n\in \mathscr{M}$
such that $f(n)=h(n)\cdot y_n$. Then, by the same argument which was used for \eqref{eq:contt2}, one concludes that
\begin{equation}
\label{eq:cont3}
d(\alpha(f(n))(x),\alpha(h(n))(x))\leq \delta^{|h(n)|}\cdot\mathrm{diam}(X)\leq\delta^n\cdot\mathrm{diam}(X).
\end{equation}
This yields the claim.
\end{proof}
From Proposition~\ref{prop:leq} one concludes that the map
\eqref{eq:baMact} induces an action
\begin{equation}
\label{eq:boact}
\hbox to 7truept{\hrulefill}\cdot\hbox to 7truept{\hrulefill}\colon\partial\mathscr{M}\times X\longrightarrow X.
\end{equation}
\begin{prop}
\label{prop:uniprop}
Let $x\in X$, and let $K\subset X$ be the attractor of the $\mathscr{M}$-fractal
$((X,d),\alpha)$. Then the induced map
\begin{equation}
\label{eq:contt4}
\kappa_x\colon\partial\mathscr{M}\longrightarrow K
\end{equation}
given by $\kappa_x([f])=\alpha(f)(x)$ is surjective.
\end{prop}
\begin{proof}
Let $z\in K$, and let $A=\{x\}$. By \cite[(2.4)]{hutch:frac}, for all $\varepsilon>0$ there exists $N(\varepsilon)\in\mathbb{N}$ such that for all $n>N(\varepsilon)$
$\mathfrak{d}(\mathscr{S}^n(A),z)<\varepsilon$, i.e., there exists a sequence $(f_n)_{n\in\mathbb{N}}$, $f_n\in\mathscr{M}_n$,
$f_{n+1}\in\bigcup_{\sigma\in\mathscr{M}_1} \{\sigma\cdot f_n\}$, such that
$d(\alpha(f_n)(x),z)<\varepsilon$ for all $n>N(\varepsilon)$.
If $\mathscr{M}$ is $\mathscr{T}$-regular, then $(\bar{\moM},\mathscr{T}_c(\bar{\moM}))$ is compact (cf. Proposition~\ref{prop:Treg}). Hence $(f_n)_{n\in\mathbb{N}}$ has a cluster point $f\in\bar{\moM}$.
As $|f_n|=n$, one has $f\not\in\mathscr{M}$ and thus $f\in\partial\mathscr{M}$. It is straightforward to verify that $[f]\cdot x=z$, showing that $\kappa_x$ is surjective.
\end{proof}
From Proposition~\ref{prop:leq} one concludes that the $\partial \mathscr{M}$-action on $X$ (cf. \eqref{eq:boact}) induces a $\widetilde{\partial}\mathscr{M}$-action
\begin{equation}
\label{eq:wtMact}
\hbox to 7truept{\hrulefill}\cdot\hbox to 7truept{\hrulefill}\colon\widetilde{\partial}\mathscr{M}\times X\longrightarrow X
\end{equation}
given by $\pi([f])\cdot x=\alpha(f)(x)$ (cf. \eqref{eq:projtilde}).
\subsection{The $C^\ast$-algebra associated to an $\mathscr{M}$-fractal for a finitely $1$-generated monoid $\mathscr{M}$}
\label{ss:Mfrac}
Let $\mathscr{M}$ be a finitely $1$-generated monoid, and let $((X,d),\alpha)$
be an $\mathscr{M}$-fractal with attractor $K$. For $x\in X$ there exists a continuous
mapping $\kappa_x\colon\partial\mathscr{M}\to K$ (cf. Theorem C). Let
$\mu_x\colon\mathrm{Bor}(K)\to\mathbb{R}^+_0$ be the probability measure given by
\eqref{eq:kont}. Then $\mathscr{M}$ acts on $K$, and thus also on $L^2(K,\mathbb{C},\mu_x)$.
For $t\in\mathscr{M}$ let $\gamma_t\colon L^2(K,\mathbb{C},\mu_x)\to L^2(K,\mathbb{C},\mu_x)$ be given by
\begin{equation}
\gamma_t(g)(z)=g(\alpha_t(z))
\end{equation}
where $g\in L^2(K,\mathbb{C},\mu_x)$. Since
\begin{equation*}
\|\gamma_t(g)\|_2^2=\int_K |\gamma_t(g)(z)|^2\,d\mu_x
=\int_K|g(\alpha_t(z))|^2\,d\mu_x\le\|g\|_2^2
\end{equation*}
(cf. \S~\ref{ss:boundT}), the monoid $\mathscr{M}$ acts on the Hilbert space
$L^2(K,\mathbb{C},\mu_x)$ by bounded linear operators. One defines the $C^\ast$-algebra generated
by the $\mathscr{M}$-fractal $((X,d),\alpha)$ by
\begin{equation}
\label{eq:defCstarfrak}
C^\ast(\mathscr{M},X,d,\mu_x)=\langle\,\gamma_t,\,\gamma_t^\ast\mid t\in\mathscr{M}\,\rangle\subseteq
\mathscr{B}(L^2(K,\mathbb{C},\mu_x)).
\end{equation}
\section{Introduction}
Stellar winds have profound implications for the evolution of massive stars, on the chemical evolution of the Universe and as a source of energy and momentum in the interstellar medium.
Indirect measures of the structure of massive-star winds are possible in X-ray binaries through the analysis of the interaction between the compact companion and the stellar wind.
In this report we summarize the constraints obtained on wind clumping in HMXBs using the hard X-ray variability observed by the IBIS/ISGRI instrument
on board INTEGRAL \citep{walter:winkler03AA}.
Classical wind-fed, Roche-lobe underflow, super-giant HMXBs (sgHMXB) consist of a compact object orbiting within a few (1.5 to 2.5) stellar radii of a super-giant companion. Recently INTEGRAL almost tripled the number of sgHMXB systems known in the Galaxy and revealed a much more complex picture with two additional families of sources:
(1) the highly-absorbed systems which have orbital and spin periods similar to those of classical sgHMXB but much higher absorbing column densities on average \citep{walter:walter2006} and (2) the fast transient systems which are characterized by fast outbursts and by a very low quiescent luminosity \citep{walter:Sguera2006,walter:Negueruela2007}.
Several sources have now been proposed as candidate super-giant fast X-ray transients based on their hard X-ray variability characteristics and, for a subset of them, on the spectral type of the optical counterpart. Contrasting statements have, however, been made about the persistent or transient nature of specific sources. In the frame of the current study \citep{walter:WalterZurita2007} we have considered all SFXT candidates together with several persistent and absorbed super-giant HMXB for comparison.
We analyzed the available INTEGRAL data for 12 candidate SFXT
that have large variability factors and compared them with the classical and absorbed sgHMXB systems that have a typical variability factor $\simless20$.
The sources have been separated into two categories. The SFXT include systems featuring hard X-ray variability by a factor $\mathrel{\copy\simgreatbox} 100$. ``Intermediate'' systems are candidate SFXT with smaller variability factors that could be compared with those of classical systems.
\vspace{-0.3cm}
\section{Hard X-ray Flares and Clumpy Winds}
The average count rate observed during SFXT flares lies between 3 and 60 ct/s which translates to hard X-ray luminosities of $(0.2-4)\times 10^{36}~\rm{erg/s}$. Such luminosities are not exceptional for sgHMXB but very significantly larger than the typical X-ray luminosity of single massive stars of $10^{30-33}~\rm{erg/s}$ at soft X-rays \citep{walter:Cassinelli1981}.
As the sources are flaring at most once per day, their average hard X-ray luminosity is very low.
It is therefore very unlikely that those systems have an average orbital radius lower than $10^{13}~\rm{cm}$, i.e. $\sim 10~R_*$. One expects orbital periods larger than 15 days and underflow Roche lobe systems (note that no orbital period has yet been derived in any of these systems).
The interaction of a compact object with a dense clump formed in the wind of a massive companion leads to increased accretion rate and hard X-ray emission.
The free-fall time from the accretion radius $R_a = 2\times 10^{10}~ \rm{cm}$ towards the compact object is of the order of $(2-3)\times10^2~\rm{sec}$.
The infall is mostly radial (down to the Compton radius) and proceeds at the Bondi-Hoyle accretion rate.
With a duration of $t_{fl}=2-10$ ksec, the observed short hard X-ray flares are significantly longer than the free-fall time. The flare duration is therefore very probably linked with the thickness of the clumps which, for a clump radial velocity $V_{cl}=10^8 ~\rm{cm/s}$, is $h_{cl} = V_{cl} \times t_{fl} \sim (2-10) \times 10^{11}~\rm{cm}$.
The average hard X-ray luminosity resulting from an interaction between the compact object and the clump can be evaluated as $L_X = \epsilon~M_{acc}c^2/t_{fl}$ (where $\epsilon\sim0.1$) and the mass of a clump can then be estimated as
$ M_{cl} = ~ (R_{cl}/R_{a})^2 ~M_{acc}= (R_{cl}/R_{a})^2~L_X~t_{fl}/(\epsilon~ c^2) $
where $R_{cl}$ is the radius of the clump perpendicular to the radial distance.
In the case of a spherical clump,
$M_{cl} =
\left(\frac{L_X}{10^{36}~\rm{erg/s}}\right) \left(\frac{t_{fl}}{3~\rm{ks}}\right)^3
~7.5\times 10^{21} ~\rm{g}.$
If $\dot{N}$ is the rate of clumps emitted by the star, the observed hard X-ray flare rate is given by $T^{-1} = \dot{N}(R_{cl}^2/4R_{orb}^2).$
The rate of mass-loss in the form of wind clumps can then be estimated as
$\dot{M}_{cl} =
\left(\frac{10\rm{d}}{T}\frac{L_X}{10^{36}\rm{erg/s}}\frac{t_{fl}}{3\rm{ks}}\right)\left(\frac{R_{orb}}{10^{13}\rm{cm}}\right)^2 ~3\times 10^{-6}~\rm{M_{\odot}/y}.$
For a $\beta=1$ velocity law and spherical clumps, the number of clumps located between $1.05R_*$ and $R_{orb}$ can be evaluated as
$N=
\left(\frac{10~\rm{d}}{T}\right)\left(\frac{3~\rm{ks}}{t_{fl}}\right)^2\left( \frac{R_{orb}}{10^{13}~\rm{cm}}\right)^3~3.8\times 10^3$.
Assuming spherical clumps, the clump density at the orbital radius is $\rho_{cl}=\left(\frac{L_X}{10^{36}~\rm{erg/s}}\right) ~7\times 10^{-14} ~\rm{g~cm}^{-3}$ and the corresponding homogeneous wind density is $\rho_h=\dot{M}_{cl}/(4\pi~R_{orb}^2~V_{cl})=
\left(\frac{10~\rm{d}}{T}\frac{L_X}{10^{36}~\rm{erg/s}}\frac{t_{fl}}{3~\rm{ks}}\right)
~1.5\times 10^{-15}~\rm{g~cm}^{-3}$. The clump volume filling factor at the orbital radius is $
f_V = \frac{\rho_h}{\rho_{cl}} =
\left(\frac{10~\rm{d}}{T}\frac{t_{fl}}{3~\rm{ks}}\right)
~0.02$ and the corresponding porosity length is
$h=\frac{R_{cl}}{f_V}=
\left(\frac{T}{10~\rm{d}}\right)
~15\times 10^{12} ~\rm{cm}$.
If the density of a clump decreases with radius as $r^{-2\beta}$ and its mass remains constant, the averaged homogeneous wind density within $R_{orb}$ is $\overline{\rho_{h}}=N M_{cl}/(\frac{4}{3}\pi
R_{orb}^3
) =
\left(\frac{10~\rm{d}}{T}\frac{L_X}{10^{36}~\rm{erg/s}}\frac{t_{fl}}{3~\rm{ks}}\right)
~7\times 10^{-15} ~\rm{g~cm}^{-3}$ and the average clump volume filling factor and porosity length could be estimated as 0.1 and $3\times10^{12} ~\rm{cm}$, respectively.
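As an illustration of the estimates above, the short Python sketch below (not part of the original analysis; it simply evaluates the formulae quoted in this section for the fiducial values $L_X=10^{36}~\rm{erg/s}$, $t_{fl}=3$~ks, $V_{cl}=10^8~\rm{cm/s}$, $R_a=2\times 10^{10}$~cm and $\epsilon=0.1$, taking the clump radius equal to the thickness $h_{cl}$ and assuming, for the free-fall time only, a $1.4~M_{\odot}$ compact object) reproduces the clump thickness, free-fall time, clump mass and clump density quoted above.
\begin{verbatim}
import math

# Fiducial values quoted in the text (cgs units); the 1.4 Msun compact
# object mass is an assumption made for this illustration only.
c, G, Msun = 3e10, 6.67e-8, 2e33
eps, L_X, t_fl = 0.1, 1e36, 3e3
V_cl, R_a, M_x = 1e8, 2e10, 1.4 * Msun

t_ff  = (math.pi / 2) * math.sqrt(R_a**3 / (2 * G * M_x))  # ~2.3e2 s
h_cl  = V_cl * t_fl                                        # ~3e11 cm
M_acc = L_X * t_fl / (eps * c**2)                          # accreted mass per flare
M_cl  = (h_cl / R_a)**2 * M_acc                            # ~7.5e21 g (R_cl ~ h_cl)
rho_cl = M_cl / (4.0 / 3.0 * math.pi * h_cl**3)            # ~7e-14 g/cm^3

print(t_ff, h_cl, M_cl, rho_cl)
\end{verbatim}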
The variety of $t_{fl}$, $T$ and $F_{fl}$ that are observed probably reflects a range of clump parameters and orbital radii. Several of the average clump parameters estimated above, in particular the clump density, filling factor and porosity length do not depend on the orbital radius, which is unknown, and only slowly depend on the observed quantities.
\section{Discussion}
The average clump parameters derived above match the macro-clumping scenario of \cite{walter:OskinovaHamannFeldmeier2007}
to reconcile clumping and mass-loss rates.
The number of clumps derived above is also comparable to evaluations by \cite{walter:Lepine1999, walter:OskinovaFeldmeierHamann2006}. The volume filling factor, porosity length and the clump mass-loss rate are also similar to those derived by \cite{walter:Bouret2005} from the study of ultraviolet and optical line profiles in two super-giant stars.
The variation of the observed X-ray flux between flares and quiescence provides in principle a direct measure of the density contrast between the wind clumps and the inter-clump medium.
Density contrasts of $>10^{2-4}$ and 15--50 have been observed in SFXT and ``Intermediate'' sources, respectively. The density contrast is larger in SFXT than in ``Intermediate'' and, of course, classical systems. Density contrasts are probably stronger when clumping is very effective.
Numerical simulations of the line driven instability \citep{walter:Runacres2005} predict density contrasts as large as $10^{3-5}$ in the wind up to large radii. At a distance of $10~R_*$, the simulated density can vary between $10^{-18}$ and $10^{-13}~\rm{g~cm^{-3}}$ and the separation between the density peaks are of the order of $R_*$. These characteristics are comparable to the values we have derived.
Classical sgHMXB are characterized by small orbital radii $R_{orb}=(1.5-2.5)~R_*$, and by flux variability of a factor $\mathrel{\copy\simlessbox} 10$. Such variabilities were modeled in terms of wind inhomogeneities largely triggered by the hydrodynamic and photo-ionisation effects of the accreting object on the companion and inner stellar wind \citep{walter:blondin91, walter:blondin94}. At small orbital radii, the companion is close to fill its Roche lobe, which triggers tidal streams. In addition the X-ray source ionizes the wind acceleration zone, prevents wind acceleration and generates slower velocities, denser winds, larger accretion radius and finally larger X-ray luminosities. Whether or not the stellar wind is intrinsically clumpy at low radius, the effect of the compact object on the wind is expected to be important.
The main difference between SFXT and classical sgHMXB could therefore be their orbital radius. At very low orbital radius $(<1.5~R_*)$ tidal accretion will take place through an accretion disk and the system will soon evolve to a common envelope stage. At low orbital radius $(\sim 2~R_*)$ the wind will be perturbed in any case and efficient wind accretion will lead to copious and persistent X-ray emission $(10^{36-37}~\rm{erg/s})$. At larger orbital radius $(\sim 10~R_*)$ and if the wind is clumpy, the SFXT behavior is expected as described above. If the wind clumps do not form for any reason, the average accretion rate will remain too low and the sources will remain mostly undetected by the current hard X-ray survey instruments.
\section{Introduction}
The primordial power spectrum of density fluctuations underpins much of modern cosmology. On large scales, it
has been measured with high precision by cosmic microwave background (CMB) experiments (e.g. \citealt{WMAP7} and references within).
In order to improve
our knowledge of its scale-dependence, we turn to smaller scales, and astrophysical measurements probing later epochs in
the evolution of the Universe. In this paper, we shall examine constraints from the data set which has probed the smallest
scales to date: the Lyman-$\alpha\;$ forest.
The Lyman-$\alpha\;$ forest consists of a series of features in quasar spectra due to scattering of quasar photons with neutral hydrogen.
Since hydrogen makes up most of the baryonic density of the Universe, the Lyman-$\alpha\;$ forest
traces the intergalactic medium (IGM), and thus the baryonic power spectrum, on scales from
a few up to tens of Mpc. This makes it the only currently
available probe of fluctuations at small scales in a regime when the corresponding density fluctuations were
still only mildly non-linear, thereby simplifying cosmological inferences. A number of authors have examined
the cosmological constraints from the Lyman-$\alpha\;$ forest in the past
(\citealt{Croft:1997, Theuns:1998, McDonald:2000, Hui:2001, Viel:2001, Gnedin:2002, McDonald:2004pk, Viel:2005,Lidz:2006}),
while \cite{Seljak:2005, Seljak:2006} examined constraints combined with other data sets.
For a review of the physics of the IGM and its potential for cosmology, see \cite{Meiksin:2009}.
Previous analyses assumed that the primordial power spectrum on Lyman-$\alpha\;$ scales is described by a
nearly scale-invariant power law -- a strong prior -- and proceeded with parameter estimation under this assumption.
In contrast, in this work we attempt to constrain the shape and amplitude of the primordial power spectrum at these scales using
minimal prior assumptions about its scale-dependence.
In view of the observational effort dedicated to the Lyman-$\alpha\;$ forest, and its promise as a probe
of the primordial power spectrum, in this work we shall explore the possibilities of going beyond parameter fitting.
Parameter estimation by itself cannot give insight into the underlying model for the power spectrum shape; our present application to Lyman-$\alpha\;$ data should therefore ideally assume full shape freedom throughout the analysis. As a nearly scale-invariant primordial power spectrum
is a generic prediction of the simplest models of inflation, a minimally parametric reconstruction can be a powerful test of inflationary models.
Lyman-$\alpha\;$ constrains the smallest cosmological scales; thus, it provides the longest lever-arm when combined with the statistical power and robustness of CMB data, yielding the best opportunity presently available to understand the overall shape of the power spectrum.
The main Lyman-$\alpha\;$ observable, the flux power spectrum, does not have a simple algebraic
relationship to the matter power spectrum. By $z\sim3$, the absorbing structures are weakly nonlinear,
and are also affected by baryonic physics. Hence, to establish the relationship between the primordial power spectrum and
the flux power spectrum, we must resort to hydrodynamical simulations.
The initial conditions used in our simulations allow for considerable freedom in the shape of the primordial power spectrum, and this allows us to recreate the Lyman-$\alpha\;$ forest resulting from generic power spectrum shapes. Using an ensemble of simulations which sample the parameter space required to describe the flux power spectrum, we construct a likelihood function which can be used to perform minimally parametric reconstruction of the primordial power spectrum, while simultaneously constraining parameters describing IGM physics.
A statistical technique called cross-validation (CV) is used to robustly reconstruct the primordial power spectrum and Markov Chain Monte Carlo (MCMC) techniques are used to obtain the final constraints. The statistical approach parallels \cite{Sealfon:2005} and \cite{Verde:2008}, who applied the same method to data from the CMB and galaxy surveys. \cite{Peiris:2009} added the current Lyman-$\alpha\;$ forest data to the joint analysis with larger scale data, via the derived constraints on the small-scale matter power spectrum from \cite{McDonald:2004pk}. However, these latter constraints were derived assuming a tight prior on the shape of the primordial power spectrum at Lyman-$\alpha\;$ scales -- an assumption which we drop in this work. In our analysis we consider both the flux power spectrum determined by \cite{McDonald:2004data} from low-resolution quasar spectra obtained during the SDSS, and simulated data for the upcoming Baryon Oscillation Sky Survey (BOSS: \citealt{Schlegel:2009}).
This paper is organized as follows. In Section \ref{sec:methods} we review the framework for
power spectrum reconstruction and describe the details of the simulations and parameter estimation setup.
Section \ref{sec:datasets} describes the data,
and results are presented in Section \ref{sec:results}. We conclude in Section \ref{sec:discussion}.
Technical details of our calculations are relegated to Appendices \ref{ap:absorption} and \ref{ap:converge}.
\section{Methods}
\label{sec:methods}
In this section, we describe the statistical technique used in this paper, and how we built the likelihood function for minimally parametric reconstruction from Lyman-$\alpha\;$ data. Section \ref{sec:pkrecon} describes the framework for power spectrum reconstruction in general terms, while Section \ref{sec:knots} gives further details of our specific implementation of this framework. Sections \ref{sec:simulations} - \ref{sec:fluxpow} detail numerical methods used to extract a flux power spectrum from a given primordial power spectrum. Finally, Section \ref{sec:parameter} describes the parameter estimation implementation.
\subsection{Power spectrum reconstruction}
\label{sec:pkrecon}
Previous analyses of the Lyman-$\alpha$ forest (\citealt{McDonald:2004pk, Viel:2004}) have assumed
that the primordial power spectrum is a nearly scale-invariant power law, of the form
\begin{align}
P(k) = A_\mathrm{s}\left(\frac{k}{k_0}\right)^{n_\mathrm{s}-1 + \alpha_\mathrm{s} \ln k},
\label{eq:pk}
\end{align}
and then constrained $A_\mathrm{s}$, $n_\mathrm{s}$ and $\alpha_\mathrm{s}$. In this work we
will follow the same spirit as \cite{Sealfon:2005, Verde:2008}, going
beyond parameter estimation in an attempt to deduce what the Lyman-$\alpha\;$ forest data can tell us
about the shape of the power spectrum under minimal prior assumptions. A major challenge involved in all such reconstructions is to avoid
over-fitting the data; it is of little use to produce a complex function that fits the data set extremely well if
we are simply fitting statistical noise. Equally, an overly prescriptive function which is a poor
fit to the data should be rejected. To achieve this balance, we add an extra term, $\mathcal{L}_P$, to the likelihood function
which penalizes superfluous fluctuations. Schematically, the likelihood function is:
\begin{equation}
\log \mathcal{L} = \log \mathcal{L}(\mathrm{Data} | P(k)) + \lambda \log \mathcal{L}_{\mathrm{P}}\,,
\label{eq:likelihoodschematic}
\end{equation}
where the form of $\mathcal{L}_\mathrm{P}$ will be discussed shortly. Equation \ref{eq:likelihoodschematic}
now contains an extra free parameter which measures the magnitude of the smoothing required: the penalty weight $\lambda$.
As $\lambda \to \infty$ the reconstruction is forced towards a linear fit in the spline variable. For particularly clean data, carrying no evidence for any feature in $P(k)$, $\lambda$ should be large.
Data carrying strong evidence for $P(k)$ features would be best analysed with a small value of $\lambda$.
We need a method of determining, from the data, the optimal penalty weight. Our chosen technique is called CV
(\citealt{Green:1994}), which quantifies the idea that a correct reconstruction of the underlying information should accurately
predict new, independent data.
The variant of CV used in this paper splits the data into three sets. The function is reconstructed using two of these sets (training sets).
The likelihood (excluding the penalty term) of this reconstruction, given only the data in the remaining set (validation set), is calculated.
This step is called validation; because the data in each set are assumed to be independent,
we now have a measure of the predictivity of the reconstruction.
Validation is repeated using each set in turn and the total CV score is the sum of all three validation likelihoods.
The optimal penalty is the one which maximizes the CV score.
More generally, CV splits the data into $k$ independent sets, with
$2 \leq k \leq n$, where $n$ is the number of data points. $k-1$ sets are used for training, and the remaining set for validation.
Larger $k$ allows for a bigger training set, and thus better estimation of the function to be validated against, but for most practical problems large $k$
is computationally intractable. We have chosen $k=3$ as a compromise. We verified that using
$k=2$, following \cite{Verde:2008}, made a negligible difference to our results despite the smaller training set size.
CV assumes that each set is uncorrelated;
a mild violation of this assumption will lead to an underestimation of errors, but not a systematic bias
in the derived parameters (\citealt{Carmack:2009}). Our data include a full covariance matrix, and so we are able
to verify that correlations between the sets are weak.
The minimally parametric framework applied in this paper follows that of \cite{Sealfon:2005, Verde:2008} and \cite{Peiris:2009}.
It uses cubic splines to reconstruct a function $f(x)$
from measurements at a series of points, $x_i$, called the knots. The function value between each pair of knots
is interpolated using a piecewise cubic polynomial. The spline is fully specified by the knots,
continuity of the first and second derivatives, and boundary conditions on the second derivatives at the exterior knots
(the knots at either end of the spline). The splines have vanishing second derivative at the exterior knots.
If the power spectrum is given by smoothed splines, the form of the likelihood function given above is
\begin{align}
\label{eq:penalty}
\log \mathcal{L} = \log \mathcal{L}(\mathrm{Data} | P(k)) &- \lambda \int d \ln k\, (P''(k))^2,\\
\mathrm{where}\; P''(k) &= \frac{d^2 P}{d(\ln k)^2} \nonumber.
\end{align}
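As a concrete illustration of this penalty term (a minimal sketch in Python, not the code used in our analysis; the knot amplitudes and penalty weight below are placeholders), one can build a natural cubic spline through the knots in $\ln k$ --- i.e. with vanishing second derivative at the exterior knots, as described above --- and integrate the squared second derivative numerically:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder knots: positions (Mpc^-1) and P(k) amplitudes (units of 1e-9).
k_knots = np.array([0.475, 0.75, 1.19, 1.89])
P_knots = np.array([2.3, 2.2, 2.4, 2.1])

# Natural spline: vanishing second derivative at the exterior knots.
spline = CubicSpline(np.log(k_knots), P_knots, bc_type='natural')

# Roughness penalty: integral of (d^2 P / d(ln k)^2)^2 over the knot range.
x = np.linspace(np.log(k_knots[0]), np.log(k_knots[-1]), 2000)
penalty = np.trapz(spline(x, 2) ** 2, x)

lam = 10.0                              # placeholder penalty weight
log_L_penalty = -lam * penalty          # term added to log L(Data | P(k))
\end{verbatim}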
\subsection{Knot placement}
\label{sec:knots}
The number and placement of the knots is chosen initially and kept fixed throughout the analysis. Once
there are sufficient knots to allow a good fit to the data, adding more will not alter the shape of the
reconstructed function significantly. In choosing the number of knots, we seek to find a balance between
allowing sufficient freedom in the power spectrum, and having few enough parameters that the data are still able to provide
meaningful constraints on two sets out of three when subdividing the data into the training and validation sets, as described above.
Available computing resources limit us in any case to considering only a few knots.
We fit the primordial power spectrum with a four-knot spline for the Lyman-$\alpha\;$ forest $k$-range.
The flux power spectrum is available in twelve $k$-bins, so there are
three bins per knot, which should allow sufficient freedom. By comparison, \cite{Peiris:2009} used
seven knots to cover the $k$-range spanned by CMB, galaxy surveys and Lyman-$\alpha\;$ data, with a
single knot for the Lyman-$\alpha\;$ forest.
The SDSS flux power spectrum covers the range of scales, in velocity units, of $k_v = 1.41\times 10^{-3} - 0.018\, \mathrm{s/km}$.
Dividing by a factor of $H(z)/(1+z)$ converts to comoving distance coordinates, so the constraints on
the matter power spectrum are on scales of roughly $k = 0.4 - 3\ \mathrm{Mpc}^{-1}$.
In this range of scales we place four knots (A--D, from large to small scales) evenly in log space. Numerical details of the knots
are shown in Table \ref{tab:knots}. The maximum and minimum values of $P(k)$ given there for each knot are simply the extremal
values covered by our simulations. Simulation coverage of $P(k)$ has been expanded where necessary to fully cover the range
allowed by the data.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Knot & Position &\multicolumn{2}{|c|}{$P(k)$ ($/ 10^{-9}$)} \\
& ($\mathrm{Mpc}^{-1}$) & Minimum & Maximum \\
\hline
A & 0.475 & $0.83$ & $3.25$ \\% & 2.28
B & 0.75 & $0.60$ & $3.23$ \\% & 2.26
C & 1.19 & $0.60$ & $3.67$ \\% & 2.21
D & 1.89 & $0.53$ & $4.16$ \\% & 2.19
\hline
\end{tabular}
\end{center}
\caption{Positions of the knots. The maximum and minimum values of $P(k)$ are
the extremal values covered by our simulations.
Fixed knots are not shown, but are discussed in the text.}
\label{tab:knots}
\end{table}
We must specify the primordial power spectrum on scales well outside the range probed by data, even though they have no effect
on the Lyman-$\alpha\;$ forest. This is for two reasons. The first is that
when running a simulation we must have a well-defined way to perturb the initial particle grid for all scales included in the simulation.
In order to ensure that the scales on which we have data are properly resolved, we also need to simulate larger and smaller scales,
and these require a defined power spectrum. The second reason is that our interpolation scheme works best when
the perturbations induced by altering one of the knots are reasonably local. Adding extra end knots helps to prevent large secondary
boundary effects, which would make interpolation far more difficult.
For numerical stability reasons, we would like the amplitude of fluctuations on
these scales to be reasonably constant, but do not wish to make strong assumptions about the amplitude of the power spectrum there.
Therefore we add two ``follower'' knots at each end of the spline. The amplitude is fixed to follow the nearest parameter knot, assuming
that between follower and followed, the shape is a power law with $n_s = 0.97$.
\footnote{Hence, if the amplitude of the power spectrum at the D-knot is $P^D$, the power spectrum at the follower knot
has the amplitude $P^D (k^D/k^{D+1})^{0.03}$, where $k^D$ is the position of the D knot and $k^{D+1}$ the position of the follower.}
The two follower knots are at scales of $k= (0.15, 4)\, \mathrm{Mpc}^{-1}$.
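For example, with the D knot at $k^D = 1.89\, \mathrm{Mpc}^{-1}$ and its follower at $k^{D+1} = 4\, \mathrm{Mpc}^{-1}$, the follower amplitude is $P^D\,(1.89/4)^{0.03} \approx 0.98\, P^D$.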
We also add a few knots, even more distant from the scales probed by the data, with completely fixed amplitudes consistent
with the WMAP best-fitting power spectrum. The amplitude of the primordial power spectrum on these scales does not significantly
affect results; we have added knots here so that the initial density field is well-defined on a larger range of scales than probed by the
simulation. This allows us to avoid any boundary effects associated with the ends of the spline.
These fixed knots are at $k = (0.07, 25,40)\ \mathrm{Mpc}^{-1}$, with amplitudes of $(2.43,2.03, 2.01)\ \times 10^{-9}$.
Fig. \ref{fig:powerspec} shows the effect of altering the amplitude of the D knot on the flux power spectrum.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{deltaDknot.pdf}
\caption{Effect on the flux power spectrum of varying the D knot at $z=3$. On a scale where the best-fitting amplitude is $0.9$, the amplitudes of the D knot are, from the lowest line upwards, $0.5$, $0.7$, $1.1$, $1.3$ and $1.7$. Non-linear growth tends to erase dependence on the initial conditions, so the effect is smaller at lower redshifts.}
\label{fig:powerspec}
\end{figure}
\subsection{Simulations}
\label{sec:simulations}
In this study, full hydrodynamical simulations were run using the parallel TreePM code
{\small GADGET}-$2$ (\citealt{Springel:2005}). {\small GADGET} computes long-range gravitational forces using a particle grid,
while the short-range physics are calculated using a particle tree.
Hydrodynamical effects are included by having two separate particle types:
dark matter, affected only by gravity, and baryons, modelled using
smoothed particle hydrodynamics (SPH), in which particles represent density elements of the matter fluid.
The rest of this section gives technical details of our simulations and the included astrophysics,
and may be skipped by the reader interested only in the cosmological implications.
{\small GADGET} has been modified to compute the ionisation state of the gas using radiative cooling and ionisation physics as originally
described by \cite{Katz:1995}, and used in \cite{Viel:2005}. Star formation is included via a simplified prescription
which greatly increases the speed of the simulations, where all baryonic particles with overdensity $\rho/\rho_0 > 10^3$
and temperature $T < 10^5 K$ are immediately made collisionless. \cite{Viel:2004} compared simulations using this prescription
with identical simulations using a multi-phase model, and found negligible difference in the Lyman-$\alpha\;$ statistics. Additionally, all
feedback options have been disabled, and galactic winds neglected; \cite{Bolton:2008} found
that winds have a small effect on the Lyman-$\alpha\;$ forest.
The gravitational softening length was set to $1/30$ of the mean linear inter-particle spacing.
The gas is assumed to be ionised by an externally specified, spatially homogeneous UV background,
based on the galaxy and quasar emission model of \cite{HaardtMadau}. We follow previous analyses in assuming
that the gas temperature is initially in equilibrium with the CMB, that the gas is in ionisation equilibrium, optically thin,
and that we can neglect metals and evolution of elemental abundances. Lyman-$\alpha\;$ absorption arises largely from near mean-density
hydrogen,
which should undergo little chemical evolution, so using a simplified star formation criterion and neglecting metals is
physically well-motivated. Assuming that the gas is optically thin and in ionisation equilibrium will break down during reionisation,
but at the redshifts we are interested in, we can model the effect of non-instantaneous reionisation by increasing the
photo-heating rate, as described in \cite{Viel:2005}.
The fiducial simulation for this paper has a box size of $60\ \mathrm{Mpc} \,h^{-1}$ and $2 \times 400^3$ gas and dark matter particles,
[which we will write as ($60$, $400$) in future], and runs from
$z=199$ to $z=2$. Snapshots are output at regular intervals between redshift $4.2$ and $2.0$. Initial conditions were generated using N-GenICs,
modified to specify the primordial power spectrum by a spline, and use separate transfer functions for baryons and dark matter,
as calculated using CAMB (\citealt{camb}).
For knots B and C, we used the above fiducial parameters for box size and particle resolution. For the D knot, we slightly compromised
on box size in favour of particle resolution, and used simulations of ($48$, $400$), since we found that the D knot had a negligible effect
on the largest scales. To fully capture the behaviour of the A knot, we used larger simulations with ($120$, $400$).
We have used different sized simulations to ensure that for each knot, the characteristic scales representing it have very good numerical
convergence; this issue is addressed in full in Appendix \ref{ap:converge}.
Our ability to do this is one technical advantage of our approach compared with previous studies; if we were to alter the amplitude of the
whole power spectrum, we would need to achieve convergence over all the relevant scales at once. In our approach, each simulation only
needs strict convergence over the narrow range of scales probed by a single knot.
\subsection{IGM thermodynamics}
\label{sec:thermo}
Constraints on the thermal history of the IGM are given in terms of the parameters of a polytropic temperature-density relation
\begin{equation}
T = T_0 \left(\frac{\rho}{\rho_0}\right)^{\gamma-1}\,,
\label{eq:igmeos}
\end{equation}
where a given SPH particle has temperature $T$ and density $\rho$; $\rho_0$ is the mean density of
the simulation snapshot and $T_0$ is the temperature at mean density.
To determine $T_0$ and $\gamma$ from a simulation box, a least-squares fit is performed from low-density
particles satisfying
\begin{equation}
-1.0 < \log\left( \frac{\rho}{\rho_0} \right) < 0\, .
\label{eq:T0}
\end{equation}
Regions that are less dense than the lower limit above are ignored because they are poorly resolved in SPH simulations (\citealt{Bolton:2009}).
The simplified star formation criterion
means that many overdensities have been turned into stars, and their baryonic evolution not followed; hence they are also neglected.
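Schematically, and assuming that per-particle temperature and density arrays are available for a snapshot, the fit can be written as the following short Python sketch (illustrative only, not our pipeline code; base-$10$ logarithms are assumed in Eq.~\ref{eq:T0}):
\begin{verbatim}
import numpy as np

def fit_eos(T, rho, rho0):
    """Least-squares fit of log10 T = log10 T0 + (gamma-1) log10(rho/rho0),
    using only particles with -1 < log10(rho/rho0) < 0."""
    x = np.log10(rho / rho0)
    mask = (x > -1.0) & (x < 0.0)
    slope, intercept = np.polyfit(x[mask], np.log10(T[mask]), 1)
    return 10.0 ** intercept, 1.0 + slope   # T0, gamma
\end{verbatim}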
Both $\gamma$ and $T_0$ are assumed to follow a power law broken at $z=3$ by HeII reionisation (\citealt{Schaye:1999}), so that they are given by:
\begin{align}
\gamma =& \begin{cases}
\gamma^A \left[(1+z)/4\right]^{d\gamma^S}& \text{if $z<3$}, \\
\gamma^A \left[ (1+z)/4 \right]^{d\gamma^R}& \text{if $z>3$}.
\end{cases} \\
T_0 =& \begin{cases}
T_0^A \left[(1+z)/4\right]^{dT_0^S}& \text{if $z<3$}, \\
T_0^A \left[ (1+z)/4 \right]^{dT_0^R}& \text{if $z>3$}.
\end{cases}
\label{eq:thermalhistory}
\end{align}
When performing parameter estimation, we marginalize over $\gamma^A,\, T_0^A$ and $d\gamma^{S,R},\, dT_0^{S,R}$. The different thermal histories
were constructed by modifying the fiducial simulation's photo-heating rate as described in Section $2.2$ of \cite{Bolton:2008}.
The effective optical depth is described by a power law, with parameters:
\begin{equation}
\tau_\mathrm{eff} = \tau^A_\mathrm{eff} \left[ (1+z)/4 \right]^{\tau^S_\mathrm{eff}}\, .
\label{eq:taueff}
\end{equation}
Previous studies (\citealt{McDonald:2004pk, Viel:2005}) used the same transfer function for both dark matter and baryon particles;
we have used different transfer functions for baryon and
dark matter species. At our starting redshifts, the transfer functions for the baryons are about $10\%$
lower than for the dark matter on these scales, because baryon fluctuations have not grown as fast
during tight coupling. Once they have decoupled from the photons, the baryons fall
into the potential wells of the dark matter, and by $z=1$, the linear transfer functions
are almost identical.
At redshifts $2-3$, however, the effect is small but noticeable, and accounts for a $2\%$ scale-independent
drop in the power spectrum. This is too small to affect current data, but could be potentially important for analysing BOSS data.
\subsection{The flux power spectrum}
\label{sec:fluxpow}
In the case of Lyman-$\alpha$, the observable is not a direct measurement of the
clustering properties of tracer objects, as in galaxy clustering, but the
statistics of absorption along a number of quasar sightlines.
Therefore we define the flux, $\mathcal{F}$, as
\begin{equation}
\mathcal{F} = \exp (-\tau),
\label{eq:flux}
\end{equation}
where $\tau$ is the optical depth. We define the flux power spectrum as
\begin{align}
P_F(k) &= | \tilde{\delta_F}(k) |^2, \nonumber\\
\delta_F &= \frac{\mathcal{F}}{\bar{\mathcal{F}}} - 1 \,.
\label{eq:fluxpk}
\end{align}
Here $\bar{\mathcal{F}}$ is the mean flux.
The tilde denotes a Fourier transformed quantity, where our Fourier conventions, used throughout, are:
\begin{equation}
\tilde{f}(k) = \int f(x) e^{i kx} \mathrm{dx} \,.
\label{eq:fourier}
\end{equation}
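For concreteness, the estimator for a single simulated sightline can be sketched in Python as follows (a minimal illustration using a discrete approximation to the Fourier convention above; the normalisation and the mean-flux rescaling differ in detail from the pipeline described below):
\begin{verbatim}
import numpy as np

def flux_power_1d(tau, dv):
    """1D flux power spectrum of one sightline.
    tau : optical depth in N pixels;  dv : pixel width in km/s."""
    flux = np.exp(-tau)
    delta = flux / flux.mean() - 1.0
    dft = np.fft.rfft(delta) * dv                        # approximate continuous FT
    k = 2.0 * np.pi * np.fft.rfftfreq(delta.size, d=dv)  # wavenumber in s/km
    power = np.abs(dft) ** 2 / (delta.size * dv)         # P_F(k), units of km/s
    return k, power
\end{verbatim}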
To aid the eventual understanding of our results, we digress slightly here to review the physical effects of the various thermal parameters on
the flux power spectrum.
The mean flux, essentially a measure of the average density of neutral hydrogen, has a large impact on the amplitude of the flux power spectrum.
Cosmological information from the Lyman-$\alpha\;$ forest is obtained through examining the power spectrum shape and its redshift dependence.
The effect of a higher temperature, as preferred by the flux power spectrum, is to suppress power predominantly on small scales,
as a higher temperature wipes out small-scale structure in the baryons. The exponent of the temperature-density relation,
$\gamma$, controls the temperature difference between voids and overdensities. A higher $\gamma$ makes voids cooler and overdensities
hotter. At high redshifts, where much of the Lyman-$\alpha\;$ absorption comes
from voids, the effect of an increased $\gamma$ is to decrease the temperature of the Lyman-$\alpha\;$ emitting regions, so there is relatively more
small-scale structure. At low redshifts, however, most of the Lyman-$\alpha\;$ absorption comes from near mean density material, and so an increased
$\gamma$ increases the temperature, decreasing the amount of small-scale structure. For further details of the physical effects of the
various parameters, see Section 4.2.1 and Fig. 3 of \cite{Viel:2005}, as well as Fig. 11-13 of \cite{McDonald:2004pk}.
Current constraints on $P_F$ are given by \cite{McDonald:2004data}, determined
from $\sim 3000$ SDSS quasar spectra at $z=2-4$.
Each simulation snapshot was processed to
generate an averaged flux power spectrum as follows.
First, $8000$ randomly placed simulated quasar sightlines were drawn through
the simulation box. For a $60\ \mathrm{Mpc} \,h^{-1}$ box, this constitutes an average spacing between
sightlines of $670\ h^{-1}\mathrm{kpc}$, corresponding to scales of roughly $k=10\ h\,\mathrm{Mpc}^{-1}$,
far smaller than the scales probed by the Lyman-$\alpha\;$ forest. We verified that doubling the number of sightlines
to $16000$ made a negligible difference to the resulting power spectra.
When calculating absorption, particle peculiar velocities were included,
which increases the (non-rescaled) magnitude of the power spectrum by approximately $10\%$.
To generate the flux power spectrum, the absorption due to each SPH particle near the sightline is calculated, giving us a number of simulated quasar
spectra, which are smoothed with a simple boxcar average. Each spectrum is rescaled by a constant so that the mean
flux across all spectra and absorption bins matches that observed by \cite{Kim:2007}. This rescaling hides our ignorance
of the amplitude of the photo-ionizing UV background.
The mean over all the rescaled spectra is then used as the extracted flux power spectrum for the box. For further details of how we computed the
absorption, see Appendix \ref{ap:absorption}.
We follow previous work in not attempting to model continuum fitting errors.
The Si III contamination found by \cite{McDonald:2004data} is modelled by assuming a linear bias correction
of the form $P_F' = \left[(1+a^2)+2a \cos ( v k)\right] P_F$, with
$a = f_{\mathrm{SiIII}}/(1-\bar{\mathcal{F}})$, $f_{\mathrm{SiIII}} = 0.011$, and $v=2271\, \mathrm{km}/\mathrm{s}$.
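This form of the correction follows from treating the Si III absorption as a weak, scaled copy of the Lyman-$\alpha\;$ absorption field offset by a fixed velocity: if $\delta_F'(x) = \delta_F(x) + a\,\delta_F(x+v)$, then with the convention of Eq.~\ref{eq:fourier},
\begin{equation*}
|\tilde{\delta}_F'(k)|^2 = \left|1 + a\, e^{-ikv}\right|^2 |\tilde{\delta}_F(k)|^2
= \left[(1+a^2) + 2a\cos(kv)\right] |\tilde{\delta}_F(k)|^2 .
\end{equation*}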
Finally, since high-density, damped Lyman-$\alpha\;$ systems (DLAs) are not modelled by our simulations,
we add a correction to the flux power spectrum to account for them, of the form calculated by \cite{McDonald:2004dla}.
The amplitude of this correction is a free parameter, and will be discussed further in Section \ref{sec:mcmcparam}.
We checked the convergence of our simulations with respect to box size and particle resolution.
Here we give only a brief summary of the results;
further details may be found in Appendix \ref{ap:converge}.
For the highest redshift bins at $z=4.2$, $4.0$ and $3.8$, increasing the particle resolution had a large effect on the
flux power spectrum. Achieving numerical convergence for the Lyman-$\alpha\;$ forest at high redshift
is challenging, because most of the signal for the Lyman-$\alpha\;$ forest is coming from poorly resolved underdense regions.
In addition, current data at high redshifts are much more noisy than at low redshifts,
and future surveys will not probe these redshifts at all. Accordingly, we follow \cite{Viel:2005}
and do not use the three highest redshift bins in our analysis.
At lower redshifts, and except in the smallest and largest $k$-bins, the change with increased particle resolution was small.
On the smallest scales, however, there was a change of around $5\%$ in each bin. This increase is systematic, and so we correct for it as described
in Appendix \ref{ap:converge}. The larger box increased power on the largest scales by around $5\%$,
due to sample variance in the simulation box. The methodology we
used to correct for this effect is again detailed in Appendix \ref{ap:converge}.
The above figures were the dominant errors in our modelling of the flux power spectrum. Uncorrected modelling errors
are therefore $\lesssim 2\%$ of the flux power spectrum in each bin, far below the current measurement error of $\sim 12\%$ in each bin of the flux power spectrum, and on the order of the expected statistical errors for the BOSS survey, which are $\sim 1.5\%$. A significant decrease
in modelling errors would require the use of simulations with improved particle resolution, which are beyond
the computational resources available to us.
\subsection{Parameter estimation}
\label{sec:parameter}
So far we have given a formula for the primordial power spectrum, and described how we use it to extract
a flux power spectrum to compare with observational data. In this section, we shall describe how we
actually performed that comparison. First we describe a scheme for robustly interpolating the parameter space to obtain flux power spectra corresponding to parameter combinations which we have not simulated, following \cite{Viel:2005}. Secondly, we
describe the parameters of the MCMC runs we used for parameter estimation. For more details of MCMC,
see, for example, \cite{cosmomc}.
\subsubsection{Parameter interpolation}
\label{sec:interp}
Directly calculating a flux power spectrum from a given set of primordial fluctuations
requires a hydrodynamical simulation. This makes it impractical to directly calculate
$P_F$ for every possible set of input parameters.
Instead, simulations are run for a representative sample and other results are obtained from these via interpolation.
We assume that the flux power spectrum varies smoothly around the best-fitting model, parametrize this variation
with a quadratic polynomial for each data point, and then check that this accurately predicts new points.
If we have some simulation with a parameter vector which differs from a `best-guess' simulation by $\delta p_i$,
the corresponding change in the flux power spectrum, $\delta P_F$, is given by
\begin{equation}
\delta P_F = \sum_j \left( \alpha_j\, \delta p_j + \beta_j\, \delta p_j^2 \right) \,.
\label{eq:fluxpower}
\end{equation}
The coefficients of this polynomial are constrained by performing a least-squares fit
to flux power spectra generated by numerical simulations. We experimented with
including cross-terms (of the form $p_i p_j$), but found that this did not significantly
improve the accuracy of the interpolation.
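In practice the coefficients can be obtained bin by bin with an ordinary least-squares solve. The following Python sketch (illustrative only; the parameter offsets and responses shown are hypothetical numbers, not taken from our simulations) shows the idea for a single flux power spectrum bin and a single parameter:
\begin{verbatim}
import numpy as np

def fit_quadratic_response(dp, dPF):
    """Fit dPF = alpha*dp + beta*dp**2 for one flux-power bin and one parameter."""
    A = np.column_stack([dp, dp ** 2])
    coeffs, *_ = np.linalg.lstsq(A, dPF, rcond=None)
    return coeffs[0], coeffs[1]          # alpha, beta

# Hypothetical offsets of one knot amplitude and the simulated responses.
dp  = np.array([-0.4, -0.2, 0.2, 0.4, 0.8])
dPF = np.array([-0.031, -0.016, 0.017, 0.035, 0.074])
alpha, beta = fit_quadratic_response(dp, dPF)
\end{verbatim}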
To estimate the interpolation coefficients, we used seven simulations for each of our four power spectrum parameters, one of which
was used to test the accuracy of the interpolation.
To check for correlation between parameters, we simulated varying two neighbouring knots at once.
As the greatest effect of each knot on the flux power spectrum is over a localized range of scales, our interpolation errors should be maximal here.
We needed only four simulations per thermal history parameter, and checked that we could accurately predict
$\delta P_F$ for a very different thermal history.
As a final interpolation verification, we performed a simulation where all six parameters were changed simultaneously.
Fig. \ref{con:intp} shows the interpolation errors for one of our tests, which are around $1\%$ of the total change for each bin.
This is smaller than the expected statistical errors for BOSS, and was replicated by our other test simulations.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{convergence-intpC12D065.pdf}
\caption{The difference between the flux power spectrum as obtained from interpolation, and directly by simulation. Here only the C and D knots have been changed from their initial values.
Each line represents simulation output at a different redshift bin, between $z=2.0$ and $z=4.2$. The grey band shows $1\%$ error bars.
}\label{con:intp}
\end{figure}
\subsubsection{MCMC methodology}
\label{sec:mcmcparam}
To perform parameter estimation, we use a version of the publicly available CosmoMC \cite{cosmomc} code, with a modified likelihood function
as described in Section \ref{sec:pkrecon}.
We marginalize over four parameters for the four knots, with priors as specified in Table \ref{tab:knots}, and over eight parameters
of the thermal history, as described in Section \ref{sec:simulations}.
We follow the advice of \cite{McDonald:2004data}, and add a number of nuisance parameters to the SDSS data, all with Gaussian priors.
To parametrize uncertainty in the resolution of the spectra, we add a parameter $\alpha^2$ with prior $0\pm 49$, and multiply the
flux power spectrum by $\exp \left( -k^2 \alpha^2 \right)$. The effect of an increased $\alpha^2$ is
therefore to damp power on the very smallest scales.
Each redshift bin has one parameter, $f_i$, to describe uncertainty in the subtraction of background noise, with a prior of $0 \pm 0.05$.
To marginalize over the uncertainty in the effect of DLAs, we add $A_{\mathrm{damp}}$, with a prior of $1 \pm 0.3$.
The effect of this correction is to increase the slope of the flux power spectrum.
We also marginalize over residual uncertainties in the Hubble parameter, $h$ and $\Omega_M$, using flat priors of
$0.2 < \Omega_M < 0.4$, and $0.5 < h < 0.9$. For the rest of our background cosmology,
we assume parameters in agreement with those preferred by WMAP 7, including negligible
gravitational waves and spatial curvature. The priors on $h$ and $\Omega_M$ make a negligible difference to our results, because
both these parameters only weakly affect the Lyman-$\alpha\;$ forest. We assume $T_0 < 50000$ K and $0 < \gamma < 5/3$ on physical grounds;
the temperature-density relation of the IGM cannot be steeper than the perfect gas law,
and very high temperatures would
contradict independent measurements of the IGM temperature by \cite{Schaye:1999}.
\subsubsection{Cross-validation methodology}
\label{sec:crossvalid}
Cross-validation (CV) requires the splitting of the data set into $n$ independent sets. For best results,
these sets should be as uncorrelated as possible. We choose to use alternating bins in $k$ for each set.
For data with $n$ $k$-bins, the first set would consist of bins $1,4,7\dots$, the second bins $2,5,8\dots$ and
the third similarly.
To calculate the CV score, we estimate the best-fit from the two training sets, using an MCMC. The CV score
for the remaining, validation, set is the likelihood of this best-fit. The total CV score for a given penalty
is the sum of the CV scores for each set.
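The splitting and scoring can be summarised by the following Python sketch (schematic only; \texttt{best\_fit\_log\_like} stands for the MCMC training-and-validation step described above and is not an actual function of our pipeline):
\begin{verbatim}
import numpy as np

def cv_score(n_bins, penalty, best_fit_log_like):
    """Three-fold CV over k-bins: sets 1,4,7,... / 2,5,8,... / 3,6,9,...
    best_fit_log_like(train, valid, penalty) must return the validation
    log-likelihood of the best fit obtained from the training bins."""
    bins = np.arange(1, n_bins + 1)
    score = 0.0
    for i in range(3):
        valid = bins[(bins - 1) % 3 == i]
        train = bins[(bins - 1) % 3 != i]
        score += best_fit_log_like(train, valid, penalty)
    return score

# The optimal penalty weight maximises cv_score over a grid of trial values.
\end{verbatim}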
\section{Datasets}
\label{sec:datasets}
\subsection{Current data from SDSS}
\label{sec:sdssdata}
The SDSS data used in this study consist of a best-fitting flux power spectrum
in $12$ $k$-bins and $11$ redshift bins, together with a covariance matrix and a set of vectors describing the
foreground noise subtraction. It was analysed by \cite{McDonald:2004data}, and comes from $3000$ quasar spectra.
Of these, $\sim 2000$ are at redshift $2.2-3$, and $\sim 1000$ above that. We use the $8$ redshift bins at $z<3.8$ only.
We have chosen not to include any additional small-scale information based on high-resolution quasar spectra.
In principle, this can help break degeneracies and should be included in future analyses.
Currently, however, systematic error from such data sets is hard to quantify, and the optimal method for extracting the
thermal state of the IGM is not yet clear. Our focus in this work has been robustness, and so we have
limited ourselves to a single data set, whose properties have been extensively studied and
are relatively well-understood.
\subsection{Simulated data from BOSS}
\label{sec:bosssimulate}
In this section we will describe our simulated data for forecasting constraints from BOSS,
an ongoing survey which will acquire $1.6\times 10^5$ quasar spectra (\citealt{Schlegel:2009}) between $z = 2.2 - 3.0$.
We need to simulate both a covariance matrix and a flux power spectrum.
We have assumed that the noise per spectrum of the BOSS data
will be approximately the same as it was for SDSS. This is a simple assumption, but broadly justified because both
surveys use similar instruments (\citealt{Schlegel:2009}). Truly accurate modelling of the covariance matrix is
impossible until the release of the final data; however, we expect our modelling of the BOSS covariance matrix
to be completely adequate for a forecast.
Our simulated BOSS covariance matrix is simply the SDSS covariance matrix scaled to account for the increase in
statistical power resulting from the much greater number of quasar sightlines. There are roughly $2000$ quasar sightlines
in the SDSS sample below $z=3$, so the scale factor is $2000/160000 = 1/80$.
To generate the flux power spectrum, we used cosmological parameters consistent with the best-fitting
results from WMAP 7, and thermal parameters consistent with theoretical expectations:
$\gamma \sim 1.45$ and $T_0 = 2.3\times10^4 \left[(1+z)/4\right]^{0.2}\,$K.
The effective optical depth was $\tau = 0.36 \left[(1+z)/4\right]^{3.65}$.
The power spectrum amplitude was selected to match a spectrum with $\sigma_8 =0.8$ and $n_s =0.96$.
We then added uncorrelated Gaussian noise with a variance given by the diagonal elements of the simulated BOSS covariance matrix.
As BOSS will only take data at $z\leq 3$, we dispense with the thermal parameters for higher redshifts.
The foreground noise properties of the BOSS data are expected to be similar to those of the SDSS data; we therefore
leave the priors on the parameters measuring uncertainty in the noise subtraction and the parameter measuring resolution
uncertainty, $\alpha^2$, unchanged.
BOSS is also expected to determine the transverse flux power spectrum. Simulating the larger
scales needed to properly model the effect of this is beyond the scope of this paper, and we refer the interested reader to
\cite{Slosar:2009, White:2010}.
\section{Results}
\label{sec:results}
\subsection{Current constraints}
\label{sec:sdssresults}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{pk_lya_sdss.pdf}
\includegraphics[width=0.45\textwidth]{pk_lya_sdss_high.pdf}
\caption{Constraints on the primordial power spectrum from SDSS data from CV,
for low (left) and high (right) penalties.
Black circles show the positions of the knots, with arbitrary normalisation.
The light blue regions show the top $68\%$ of likelihoods for SDSS data,
while the dark blue regions show the $95\%$ likelihood range.
The black error bar shows the results of previous analyses (\protect\citealt{Viel:2009}) assuming a power-law power spectrum at $k=1\, \mathrm{Mpc}^{-1}$.
The dashed lines show limits on the slope from that work.}\label{fig:current}
\end{figure*}
Fig. \ref{fig:current} shows the CV-reconstruction of the primordial power spectrum from SDSS data. We have emulated
confidence limits by plotting the envelope of samples which have a likelihood in the top $68\%$ and the top $95\%$. At the $95\%$ level,
the power spectrum is allowed to oscillate more within the allowed envelope, but the size of the overall constraint on the amplitude does not
greatly change, as found by \cite{Verde:2008}.
We have shown plots for two penalties: one high, one low. This is because we were unable to determine an optimal penalty from current data;
the CV score shows no significant variation, even when the penalty is having negligible impact on the likelihood.
We interpret this to mean that the shape constraints on the primordial power spectrum from current Lyman-$\alpha$ data are very weak.
Previous analyses assumed a power-law prior for the shape of the primordial power spectrum, and constrained this slope and the overall
normalisation from the same data used above. While such parameter estimation leads to tight constraints from the data (assuming the underlying
shape prior is correct), relaxing this tight prior leads to the loss of ability to constrain the scale-dependent shape of the power spectrum.
The current data can still be used as part of a minimally parametric primordial power spectrum if one exploits the extended range in scales
that can be probed in combination with other data sets (\citealt{Peiris:2009}).
The black error bar in Fig. \ref{fig:current} shows a comparison with \cite{Viel:2009}. Our method gives results for the amplitude of the
primordial power spectrum at Lyman-$\alpha\;$ scales which are completely consistent with that work, but somewhat weaker. This is to be expected;
we are removing a tight prior on the shape of the power spectrum. For a very high penalty, i.e. the limit at which the implicit prior
in our analysis approaches a power-law spectrum, we can reproduce the error bars of \cite{Viel:2009}. We are also in agreement with the results
of an earlier analysis of the Lyman-$\alpha\;$ forest, \cite{McDonald:2004pk}, which constrained $\sigma_8 = 0.85 \pm 0.13$.
The corresponding constraints on $n_s$ from our reconstruction are extremely weak, especially for the low penalty: $n_s \sim 0.2 - 1.2$.
The constraints on $n_s$ in \cite{Viel:2009}, in addition to the power-law prior, were greatly assisted by the
fact that the pivot scale $k_0$ in Eq.~\ref{eq:pk} was chosen to be $k_0=0.002\, \mathrm{Mpc}^{-1}$; a small change
in the slope of the power spectrum at $k_0$ leads to a large change in power spectrum amplitude
by $k=1\, \mathrm{Mpc}^{-1}$. Here, we are trying to constrain a scale-dependent $n_s(k) = 1+\frac{d \ln P }{d \ln k}$ using only
the interval of scales sampled by Lyman-$\alpha\;$ forest. We find that, while the current Lyman-$\alpha\;$ data are able to constrain the
amplitude of the power spectrum at these scales, they are
not powerful enough on their own to significantly constrain the shape of the spectrum in a robust manner.
At no penalty do we see any evidence in the current data against a scale-invariant power spectrum.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{pk_lya_nodata.pdf}
\caption{Constraints on the primordial power spectrum from the penalty term alone, using the value in the ``low penalty'' plot of
Fig. \protect\ref{fig:current} above. Dashed lines show the power spectrum range sampled by the simulations.
}\label{fig:prior}
\end{figure}
We can explicitly demonstrate that the current Lyman-$\alpha\;$ data add little information to a weak prior on the shape of the power spectrum in the following way.
Fig. \ref{fig:prior} shows a minimally parametric reconstruction assuming the penalty designated ``low''. These constraints were
generated {\sl without using any data whatsoever}, and are similar to those obtained with the Lyman-$\alpha\;$ forest data. This figure shows clearly
that our SDSS constraints are affected by the prior even for the low penalty.
Since the penalty depends only on $P''(k)$, it cannot determine the power spectrum amplitude. Instead, the allowed
power spectrum amplitude simply spans the range probed by our simulations.
The CV part of our method involves reconstructing the optimal penalty, and thus the strength of the shape prior justified by the data.
CV is essentially a method to reconstruct the most favoured prior correlation between knots; since the prior is reconstructed
from the data, prior-driven constraints would not necessarily be a problem. However, here we are finding that no
particular prior is favoured over any other. Thus, the widths of the envelopes in Fig. \ref{fig:current} are actually arbitrary
and should not be used to draw conclusions about the amplitude of primordial fluctuations at Lyman-$\alpha\;$ scales.
We performed a number of checks to determine the cause of our failure to find an optimal penalty.
Changing our methodology for splitting the data into CV bins did not affect the results.
A flux power spectrum simulated in the same way as our BOSS data, and using the same parameters, but
with error bars of the same magnitude as the current data, showed no preference for a particular penalty, despite, as we shall see,
there being a well-defined optimal penalty for BOSS simulations.
Fixing the thermal history parameters $\gamma$ and $T_0$ to fiducial values was also sufficient to allow us to reconstruct a penalty.
Therefore statistical error and systematic uncertainty in the thermal history are the significant factors preventing us from
robustly reconstructing a minimally parametric power spectrum shape from current data.
Constraints on the thermal history parameters are as follows. For the low penalty we found $0.8 < \gamma < 1.7$ at
$1\sigma$ (recall that this upper limit is imposed as a physical prior), while for the high penalty $0.2 < \gamma < 1.7$.
The corresponding constraint from \cite{Viel:2009} is $\gamma = 0.63 \pm 0.5$. There is a noticeable decrease in the best-fitting
value of $\gamma$ with an increased penalty (i.e. a stronger shape prior).
We find it intriguing that we prefer an inverted temperature-density relation with $\gamma < 1.0$ only for a high penalty,
but the constraints are
so weak that we cannot draw any solid conclusions from them.
Constraints on the other parameters at $1\sigma$ were similar for both penalties. Those for the low penalty were:
$35000\,\mathrm{K} < T_0 < 50000\,\mathrm{K}$, $\tau_\mathrm{eff} = 0.33 \pm 0.03$, a slope of $\tau^S_\mathrm{eff} = 3.3 \pm 0.3$, $h = 0.7 \pm 0.15$ and $\Omega_M = 0.25 \pm 0.04$.
Finally, constraints on the noise parameters largely reproduce the priors (listed in Sec \ref{sec:mcmcparam}).
Our results mirror those of \cite{Viel:2009}; we have therefore verified that those results are not biased by a shape prior
on the power spectrum. The constraints of this work and \cite{Viel:2005} on the IGM temperature, $T_0$, prefer a larger
central value than that obtained by \cite{McDonald:2004pk}. However, \cite{McDonald:2004pk} imposed a prior of
$T_0 = 20000 \pm 2000$\,K, derived from analysis of the flux probability distribution function of high-resolution quasar spectra, so a direct
comparison is not possible.
For further discussion of this intriguing result, we refer the interested reader to Section 5 of \cite{Viel:2009}.
\subsection{Simulated constraints from BOSS}
\label{sec:bossresults}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{pk_lya_boss_wmap.pdf}
\caption{Constraints on the power spectrum for simulated BOSS data.
Black circles show the positions of the knots, normalized to match
the input power spectra (black line).
The orange region shows the top $68\%$ of likelihoods from BOSS-quality Lyman-$\alpha\;$ data.
The grey region shows an extrapolation of the $1\sigma$ results from WMAP data to these scales, and the grey dashed line
shows its lower extent.
}\label{fig:boss}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{thermal-params-boss.pdf}
\caption{Joint 2D posterior constraints on the thermal history using forecast BOSS data. Input parameters are marked by black dots.
Contours are drawn at $68$ and $95$ percent CL.
See Section \protect\ref{sec:simulations} for definitions of the thermal parameters.}\label{fig:bosstherm}
\end{figure*}
Unlike current data, our simulated BOSS data show a well-defined maximum in the CV score.
In Fig. \ref{fig:boss} we show the constraints using this optimal penalty, together with our
input power spectrum. The input data is reconstructed very well, within an envelope of roughly $0.4 \times 10^{-9}$;
a precision comparable to that of a CV reconstruction from WMAP data \citep{Verde:2008}.
Even though our simulated power spectrum is nearly scale-invariant, we do not recover a very high optimal penalty. This is a feature
of our approach; unless the data are noiseless, not all oscillations in the power spectrum will be ruled out, and the optimal penalty
is one which allows for them while being consistent with experimental noise.
Our method was designed to extract $P(k)$, and so the penalty may not be entirely optimal for the derivative.
Even given this, our constraints of $0.7 < n_\mathrm{s} < 1.2$ are still comparatively weak.
However, even this constraint could be useful to test for potential systematics, or in combination with other data sets.
One other important data set will be the power spectrum of the cross-correlation of the
flux (\citealt{Viel:2002,McDonald:2007, Slosar:2009}), which BOSS is expected to measure for the first time.
Estimating the power of combined constraints is beyond the scope of this paper, but it could be considerable.
Fig. \ref{fig:bosstherm} shows the thermal parameters as reconstructed from BOSS data. We have correctly reconstructed our input, as marked by
the black dots. The reconstructed $h$ and $\Omega_M$ were also consistent with their input values; $\Omega_M = 0.27 \pm 0.02$ (input: $0.267$),
$h = 0.74 \pm 0.05$ (input: $0.72$).
Marginalized constraints on the thermal and noise parameters are almost a factor of two better for BOSS than for
current data. We have assessed the impact that further information about the thermal history of the
IGM would have on cosmological constraints, imposing priors corresponding to present and reasonable near-future measurements:
\begin{align}
\tau_\mathrm{eff} &= 0.36 \pm 0.11\,, & \tau^S_\mathrm{eff} &= 3.65 \pm 0.25\,, \nonumber \\
T_0 &= 23000 \pm 3000\,\mathrm{K}\,, & \gamma &= 1.45 \pm 0.2\,. \nonumber
\end{align}
Constraints on the mean optical depth are from \cite{Kim:2007}. For the temperature of the IGM, we follow \cite{Becker:2010}
and assume a future IGM study has determined $\gamma$ to the required precision.
The effect of the primordial power spectrum evolves with redshift in a different way to $T_0$ and $\gamma$. Hence,
sufficiently accurate data
can break degeneracies between them. For $\tau_\mathrm{eff}$, the constraints from BOSS are already much tighter
than our prior from \cite{Kim:2007}, so this prior provides no additional information.
Overall, therefore, the extra information provided by our thermal priors has no significant effect on our reconstruction of
the primordial power spectrum.
\section{Discussion}
\label{sec:discussion}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{pk_lya_wmap.pdf}
\caption{Comparison of our constraints. Blue is from current data; orange is our BOSS forecast.
The grey region shows part of a reconstruction using both the CMB data and galaxy clustering measured
by SDSS (\protect\citealt{Peiris:2009}).
The black squares show two knots used in the earlier reconstruction,
while the black error bar shows $1\sigma$ constraints on power spectrum amplitude from parameter estimation (\protect\citealt{Viel:2009}).
The dashed line shows the extrapolated WMAP best-fitting power spectrum.
}\label{fig:all}
\end{figure*}
In this work, we have performed a minimally parametric reconstruction of the primordial power spectrum, using Lyman-$\alpha$ data.
This is an extension of \cite{McDonald:2004pk,Viel:2005}, who used Lyman-$\alpha$ data to measure the amplitude
and slope of the primordial power spectrum on small scales, assuming that it had a power-law shape.
Using a highly prescriptive model to fit data, even if it is physically motivated, can hide systematic effects, which
may bias the recovered parameters in a manner which is hard to detect unless the bias is extremely large.
Further, it is vital to go beyond parameter estimation and test the underlying model of the primordial power spectrum. This can in
principle be achieved with a minimally parametric reconstruction framework coupled with a scheme for avoiding over-fitting the data.
\cite{Peiris:2009}, who attempted such a reconstruction including Lyman-$\alpha\;$ data, assumed that
the power spectrum could be well approximated by an amplitude, a power-law slope and its running across the scales probed by the Lyman-$\alpha\;$ forest. In their analysis this assumption was
justified because the Lyman-$\alpha\;$ data were treated as a single point and combined with CMB and galaxy survey data to reconstruct the power spectrum
over a wide range of scales.
However, the only likelihood function available up to now contained a power-law assumption
about the primordial power spectrum shape, making it impossible to treat the Lyman-$\alpha\;$ data in a fully
minimally parametric manner. We remedy this, performing a
large suite of numerical simulations to construct a new likelihood function.
The primordial power spectra thus emulated have considerable freedom in their shapes, specified by cubic smoothing splines.
This provides the first ingredient for a minimally-parametric reconstruction scheme.
The second ingredient, as mentioned above, is to avoid fitting the noise structure of the data with superfluous oscillations.
To this end, our method uses cross-validation (CV) to reconstruct the level of freedom allowed by the data.
CV is a statistical technique which quantifies the notion that a good fit should be predictive. Schematically,
it is a method of jackknifing the data as a function of a ``roughness'' penalty. A small penalty thus allows considerable
oscillatory structure in the power spectrum shape, while a larger penalty specifies a smoother shape.
This penalty term thus performs the same function as a prior on the smoothness of the power spectrum.
Jackknifing the data then tests the predictivity of the smoothing prior, choosing as the optimal penalty the one that maximizes
predictivity. For technical details see Section \ref{sec:pkrecon}.
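To make the structure of this procedure concrete, the following schematic sketch (in Python) fits a toy set of band powers with a penalized spline and selects the penalty by jackknifed predictivity. We stress that this is an illustration only: the real analysis uses the simulation-based flux power spectrum likelihood and cubic smoothing splines described earlier, whereas the sketch below uses a hypothetical Gaussian toy data set and a piecewise-linear knot basis purely to exhibit the structure of the CV loop; all numbers and variable names in it are hypothetical.
\begin{verbatim}
# Schematic CV of a roughness penalty for a spline-like power spectrum fit.
# Toy illustration only: Gaussian chi^2 on mock band powers, piecewise-linear
# basis between knots, penalty on squared second differences of knot values.
import numpy as np

rng = np.random.default_rng(1)

# mock "data": log band powers y at wavenumbers k, with errors sigma
k = np.linspace(0.1, 2.0, 20)                      # Mpc^-1 (illustrative)
sigma = 0.1 * np.ones_like(k)
y = np.log(2.4e-9 * (k / 0.05) ** (0.96 - 1.0)) + sigma * rng.normal(size=k.size)

knots = np.linspace(k.min(), k.max(), 8)           # spline knots
D = np.diff(np.eye(knots.size), n=2, axis=0)       # second-difference operator

def design(kvals):
    # piecewise-linear design matrix: model(k) = A(k) @ c, c = knot values
    A = np.zeros((kvals.size, knots.size))
    for i, kk in enumerate(kvals):
        j = np.clip(np.searchsorted(knots, kk) - 1, 0, knots.size - 2)
        t = (kk - knots[j]) / (knots[j + 1] - knots[j])
        A[i, j], A[i, j + 1] = 1.0 - t, t
    return A

def fit(idx, alpha):
    # penalized least squares on the data subset idx:
    # minimize chi^2 + alpha * ||D c||^2 over the knot values c
    A, W = design(k[idx]), np.diag(1.0 / sigma[idx] ** 2)
    return np.linalg.solve(A.T @ W @ A + alpha * D.T @ D, A.T @ W @ y[idx])

folds = np.array_split(rng.permutation(k.size), 4)  # fixed CV bins

def cv_score(alpha):
    # held-out chi^2 summed over folds; smaller means more predictive
    score = 0.0
    for hold in folds:
        keep = np.setdiff1d(np.arange(k.size), hold)
        c = fit(keep, alpha)
        score += np.sum(((design(k[hold]) @ c - y[hold]) / sigma[hold]) ** 2)
    return score

alphas = 10.0 ** np.arange(-3, 4)
print("optimal penalty:", min(alphas, key=cv_score))
\end{verbatim}
The essential logic of the method (scan the penalty, refit on the retained data, and keep the value that best predicts the omitted data) is the same in the full analysis.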
For the Lyman-$\alpha\;$ current data from SDSS (\citealt{McDonald:2004data}), CV yields no significant preference for any
particular penalty. In the context of CV, this indicates that no penalty is more predictive or favoured over any other;
in other words, the data are not sufficiently powerful to accurately reconstruct the strength of the shape prior.
The minimally parametric method thus provides no evidence for features in the power spectrum in the current data, and our results are
fully consistent with a scale-invariant power spectrum. The best-fitting amplitude
of the power spectrum is, as in previous work, slightly higher than that extrapolated from WMAP (\citealt{WMAP7}).
However, because the data do not contain sufficient statistical power to reconstruct the power spectrum shape, our error
bars are extremely large. An analysis that uses different statistical
techniques, such as Bayesian evidence (\citealt{Jeffreys:1961}), could provide further insight, but is beyond the scope of this paper.
In the not-so-distant future, the first data from a new Lyman-$\alpha$ survey, BOSS (\citealt{Schlegel:2009}),
will be made available. We simulate a flux power spectrum and covariance matrix for BOSS, with an 80-fold
increase in statistical power over the current data.
In this case we successfully reconstruct the power spectrum, using CV to find an optimal penalty.
The parameters we extract using CV are completely consistent with the inputs to the simulation, and the resulting constraints are comparable
to those achieved by performing CV reconstruction using WMAP data (\citealt{Verde:2008}). We verify that statistical error is the
factor preventing us from finding an optimal penalty for current data by simulating a power spectrum identical to BOSS, but
with wider error bars, again failing to find an optimal penalty.
Finally, we show that adding plausible future data on the small-scale thermodynamics of the IGM to BOSS does not significantly
improve constraints on the primordial power spectrum. The simulated BOSS data are sufficiently powerful on their own to break degeneracies
between the IGM and cosmological parameters, and are limited by statistical error rather than systematic uncertainty.
We have not considered the impact of the information BOSS is expected to provide on the transverse flux power spectrum.
This will probe larger scales than our current work, offering a longer baseline and thus better sensitivity to the overall
shape of the power spectrum. However, applying the present technique to the improved data set would require simulations
probing much larger scales, hence greatly increasing the numerical requirements.
Fig. \ref{fig:all} shows the constraints from BOSS in comparison to those \cite{Peiris:2009} obtained by reconstructing the power
spectrum using the CMB and the matter power spectrum from SDSS. By combining BOSS data with other probes
(\citealt{Seljak:2005, Seljak:2006}), such as galaxy clustering, the CMB,
and the transverse flux power spectrum, we will be able to accurately reconstruct the
shape of the power spectrum on scales of $k=0.001 - 3 \,\mathrm{Mpc}^{-1}$, probing ten $e$-folds
of inflation.
\section*{Acknowledgments}
This work was performed using the Darwin Supercomputer of the University of Cambridge High
Performance Computing Service (http://www.hpc.cam.ac.uk/), provided by Dell Inc.
using Strategic Research Infrastructure Funding from the Higher Education
Funding Council for England.
We thank Volker Springel for writing and making public the {\small GADGET}-2 code, and
for giving us permission to use his initial conditions code N-GenICs.
SB would like to thank Martin Haehnelt, Antony Lewis, Debora Sijacki, and Christian Wagner
for help and useful discussions.
SB is supported by STFC. HVP is supported by Marie Curie grant MIRG-CT-2007-203314 from the
European Commission, the Leverhulme Trust, and by STFC.
MV is partly supported by: ASI/AAE theory grant, INFN-PD51 grant, PRIN-MIUR
and a PRIN-INAF and an ERC starting grant.
LV acknowledges support of MICINN grant AYA2008-03531 and FP7-IDEAS-Phys.LSS 240117
and thanks IoA Cambridge for hospitality.
\section{Introduction}
This work falls within the study of multisummable formal solutions of a certain family of PDEs. Multisummability of formal solutions of functional equations has been observed in recent studies by several research groups working in different directions, and the topic has attracted growing interest in the scientific community. The present work belongs to this line of research, of which we provide a brief overview.\smallskip
Borel-Laplace summability procedures have recently been applied to solve partial differential equations. In the seminal work~\cite{lumisch}, the authors obtain positive results on the linear complex heat equation with constant coefficients. This construction was extended to more general linear PDEs by W. Balser in~\cite{ba3}, under the assumption that the initial data admit an adequate extension to an infinite sector. More recently, M. Hibino~\cite{hibino} has made some advances in the study of linear first order PDEs. Subsequently, several authors have studied complex heat-like equations with variable coefficients (see~\cite{balo,copata,ly2}). The second author~\cite{ma1}, both authors~\cite{lama}, and the two authors and J. Sanz~\cite{lamasa1} have also contributed to this theory.\smallskip
Recently, multisummability of formal solutions of PDEs has also been put forward in several works. W. Balser~\cite{ba4} described a multisummability phenomenon in certain PDEs with constant coefficients. S. Ouchi~\cite{ou} constructed multisummable formal solutions of nonlinear PDEs coming from perturbations of ordinary differential equations. H. Tahara and H. Yamazawa~\cite{taya} have made progress on general linear PDEs with non-constant coefficients and entire initial data. In~\cite{ly1}, G. Lysik constructs summable formal solutions of the one-dimensional Burgers equation by means of the Cole-Hopf transform. O. Costin and S. Tanveer~\cite{cota2} construct summable formal power series in the time variable for the 3D Navier-Stokes equations. The authors have obtained results in this direction in~\cite{lama1,lama2}.\smallskip
A recent overview of summability and multisummability techniques from different points of view is given in~\cite{loday}.\smallskip
The purpose of the present work is to study the solutions of a family of singularly perturbed partial differential equations from the asymptotic point of view. More precisely, we consider a problem of the form
\begin{multline}\label{e1}
Q(\partial_z)\partial_{t_2}u(t_1,t_2,z,\epsilon)=(P_1(\partial_z,\epsilon)u(t_1,t_2,z,\epsilon))(P_2(\partial_z,\epsilon)u(t_1,t_2,z,\epsilon))+ P(t_1,t_2,\epsilon,\partial_{t_1},\partial_{t_2},\partial_z)\\+f(t_1,t_2,z,\epsilon),
\end{multline}
under initial conditions $u(t_1,0,z,\epsilon)\equiv u(0,t_2,z,\epsilon)\equiv 0$, and where $Q(X)\in\mathbb{C}[X]$. The elements $P_1,P_2$ which constitute the nonlinear part are polynomials in $\partial_z$ whose coefficients are holomorphic functions on some neighborhood of the origin, say $D(0,\epsilon_0)$, continuous up to its boundary.\smallskip
Here, $D(0,\epsilon_0)$ stands for the open disc in the complex plane centered at 0, and with positive radius $\epsilon_0>0$. We write $\overline{D}(0,\epsilon_0)$ for its closure.\smallskip
Moreover, $P$ stands for some polynomial of six variables, with complex coefficients, and the forcing term $f(t_1,t_2,z,\epsilon)$ is a holomorphic and bounded function in $D(0,\rho)^2\times H_{\beta'}\times D(0,\epsilon_0)$, for some $\rho>0$, and where $H_{\beta'}$ stands for the horizontal strip
$$H_{\beta'}:=\{z\in\mathbb{C}:|\hbox{Im}(z)|<\beta'\},$$
for some $\beta'>0$.\smallskip
The precise configuration of the elements involved in the problem is stated and described in Section~\ref{seclayout}.\smallskip
This paper provides a further step in the study of the asymptotic behavior of the solutions of a subfamily of singularly perturbed partial differential equations of the form (\ref{e1}). We first recall some previous advances made in this respect, which motivate the present framework.\smallskip
In~\cite{lama}, we studied, from the asymptotic point of view, the solutions of a certain family of PDEs of the form
$$Q(\partial_z)\partial_tu(t,z,\epsilon)=(P_1(\partial_z,\epsilon)u(t,z,\epsilon))(P_2(\partial_z,\epsilon)u(t,z,\epsilon))+P(t,\epsilon,\partial_t,\partial_z)u(t,z,\epsilon)+f(t,z,\epsilon),$$
where the elements involved in the problem depend only on one time variable $t$. Our next aim was to check whether the asymptotic properties of the solutions of this equation can be extended to functions of several time variables, as in (\ref{e1}). \smallskip
It is worth mentioning that, in the previous work~\cite{lama}, the linear part of the equation, governed by $P(t,\epsilon,\partial_t,\partial_z)u(t,z,\epsilon)$, was assumed to be more general than in the present configuration, admitting an additional term of the form $c_0(t,z,\epsilon)R(\partial_z)u(t,z,\epsilon)$, where $c_0(t,z,\epsilon)$ is a certain holomorphic function defined on a product $D(0,\rho)\times H_{\beta'}\times D(0,\epsilon_0)$. \smallskip
We decided not to incorporate this term in the present study for the sake of simplicity. However, the results can be written with no additional theoretical difficulties by adding the analog of such terms to the equation. As a matter of fact, the decision not to consider this term in the present work is meant to emphasize another fact: a remarkable phenomenon occurs when dealing with two complex time variables, leading to substantially and qualitatively different asymptotic properties of the solutions obtained.\smallskip
In~\cite{family1}, we described a study of a family of equations of the form (\ref{e1}) whose analytic solutions display a symmetric asymptotic behaviour with respect to both time variables, as initially expected from the generalization of the one-time-variable case. More precisely, we proved the following result: given a good covering of $\mathbb{C}^{\star}$, $\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ (see Definition~\ref{goodcovering}) involving sectors of opening larger than $\pi/k_2$, there exist sectors with vertex at the origin in $\mathbb{C}$ and finite radius, say $\mathcal{T}_1$ and $\mathcal{T}_2$, such that a family of solutions $\{ u_{p_1,p_2}(t_1,t_2,z,\epsilon) \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ of (\ref{e1}) is constructed. The function $u_{p_1,p_2}(t_1,t_2,z,\epsilon)$ turns out to be holomorphic in $\mathcal{T}_1\times\mathcal{T}_2\times H_{\beta'}\times \mathcal{E}_{p_1,p_2}$, for every $0\le p_1\le \varsigma_1-1$ and $0\le p_2\le \varsigma_2-1$. In addition to this, we obtained in that previous work that the difference of two consecutive solutions (in the sense that they are associated with consecutive sectors in the good covering) $u_{p_1,p_2}$ and $u_{p'_1,p'_2}$ of (\ref{e1}) can be classified into two categories:
\begin{enumerate}
\item Those pairs $((p_1,p_2),(p'_1,p'_2))\in\mathcal{U}_{k_1}$ such that
$$\sup_{(t_1,t_2,z)\in \mathcal{T}_1\times\mathcal{T}_2\times H_{\beta'}}|u_{p_1,p_2}(t_1,t_2,z,\epsilon)-u_{p'_1,p'_2}(t_1,t_2,z,\epsilon)|\le K_{p}e^{-\frac{M_{p}}{|\epsilon|^{k_1}}},\quad \epsilon\in \mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2};$$
\item and those pairs $((p_1,p_2),(p'_1,p'_2))\in\mathcal{U}_{k_2}$ such that
$$\sup_{(t_1,t_2,z)\in \mathcal{T}_1\times\mathcal{T}_2\times H_{\beta'}}|u_{p_1,p_2}(t_1,t_2,z,\epsilon)-u_{p'_1,p'_2}(t_1,t_2,z,\epsilon)|\le K_{p}e^{-\frac{M_{p}}{|\epsilon|^{k_2}}},\quad \epsilon\in \mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}.$$
\end{enumerate}
Here, $k_1$ and $k_2$ are different positive integers involved in the definition of the polynomials appearing in the main equation, and $K_p,M_p$ are positive constants.\smallskip
The application of a two-level Ramis-Sibuya type result entails the existence of a formal power series $\hat{u}(t_1,t_2,z,\epsilon)\in\mathbb{F}[[\epsilon]]$, where $\mathbb{F}$ stands for the Banach space of holomorphic and bounded functions in the domain $\mathcal{T}_1\times\mathcal{T}_2\times H_{\beta'}$, with the supremum norm. Such formal power series is a formal solution of (\ref{e1}) and can be split in the form
$$\hat{u}(t_1,t_2,z,\epsilon)=a(t_1,t_2,z,\epsilon)+\hat{u}_1(t_1,t_2,z,\epsilon)+\hat{u}_2(t_1,t_2,z,\epsilon),$$
where $a(t_1,t_2,z,\epsilon)$ belongs to $\mathbb{F}\{\epsilon\}$, and $\hat{u}_1,\hat{u}_2\in\mathbb{F}[[\epsilon]]$. Moreover, for all $p_1\in\{0,\ldots,\varsigma_1-1\}$ and $p_2\in\{0,\ldots,\varsigma_2-1\}$, the function $u_{p_1,p_2}(t_1,t_2,z,\epsilon)$ can be split analogously:
$$u_{p_1,p_2}(t_1,t_2,z,\epsilon)=a(t_1,t_2,z,\epsilon)+u^1_{p_1,p_2}(t_1,t_2,z,\epsilon)+u^2_{p_1,p_2}(t_1,t_2,z,\epsilon),$$
where $\epsilon\mapsto u^j_{p_1,p_2}(t_1,t_2,z,\epsilon)$ is an $\mathbb{F}$-valued function which admits $\hat{u}_{j}(t_1,t_2,z,\epsilon)$ as its $k_j$-Gevrey asymptotic expansion on $\mathcal{E}_{p_1,p_2}$, for $j=1,2$, seeing $\hat{u}_{j}$ as a formal power series in $\epsilon$, with coefficients in $\mathbb{F}$. In addition to this, and under the assumption that $k_1<k_2$, a multisummability result is also attained. Under the assumption that
$$\{((p_1^0,p_2^0),(p_1^1,p_2^1)), ((p_1^1,p_2^1),(p_1^2,p_2^2)),\ldots, ((p_1^{2y-1},p_2^{2y-1}),(p_1^{2y},p_2^{2y}))\}\subseteq\mathcal{U}_{k_2}$$
for some $y\in\mathbb{N}:=\{1,2,\ldots\}$, and
$$\mathcal{E}_{p_1^y,p_2^y}\subseteq S_{\pi/k_1}\subseteq\bigcup_{0\le j\le 2y}\mathcal{E}_{p_1^j,p_2^j},$$
for some sector $S_{\pi/k_1}$ with opening larger than $\pi/k_1$, then it holds that $\hat{u}(t_1,t_2,z,\epsilon)$ is indeed $(k_2,k_1)$-summable on $\mathcal{E}_{p_1^y,p_2^y}$, its $(k_2,k_1)$-sum being given by $u_{p_1^y,p_2^y}$ on $\mathcal{E}_{p_1^y,p_2^y}$.
The role played by $k_1$ and $k_2$ in the previous framework is completely symmetric. The assumption $k_1<k_2$ is harmless, and symmetric results are reached in the case $k_2<k_1$. In that study, the principal part of each equation in the family is factorisable as a product of two operators involving a single time variable, yielding a multisummability phenomenon in the perturbation parameter $\epsilon$.\smallskip
On the other hand, in the present study, the sign of $k_1-k_2$ is crucial when studying the asymptotic behavior of the analytic solutions. In fact, a negative sign provides less information on the asymptotic behavior, entailing only Gevrey estimates, whilst a positive sign furnishes more precise information, namely multisummability. This is where the strength of the present results lies. More precisely, we find a family of analytic solutions $\{ u_{p_1,p_2}(t_1,t_2,z,\epsilon) \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ of the main problem under study, which are holomorphic in $\mathcal{T}_1\times\mathcal{T}_2\times H_{\beta'}\times \mathcal{E}_{p_1,p_2}$, and such that one of the following holds:
\begin{enumerate}
\item In case $k_2>k_1$, a formal power series $\hat{u}(t_1,t_2,z,\epsilon)\in\mathbb{F}[[\epsilon]]$, formal solution of (\ref{e1}), exists such that for every $(p_1,p_2)\in \{0,\ldots,\varsigma_1-1\}\times\{0,\ldots,\varsigma_2-1\}$, the function $u_{p_1,p_2}(t_1,t_2,z,\epsilon)$ admits $\hat{u}(t_1,t_2,z,\epsilon)$ as its asymptotic expansion of Gevrey order $1/k_1$ in $\mathcal{E}_{p_1,p_2}$ (see Theorem~\ref{teo2}).
\item In case $k_1>k_2$, a formal power series $\hat{u}(t_1,t_2,z,\epsilon)\in\mathbb{F}[[\epsilon]]$ exists which is a formal solution of (\ref{e1}) and shows properties analogous to those described for the family of equations in~\cite{family1}, i.e. multisummability of the formal solution with Gevrey levels $k_1$ and $k_2$ (see Theorem~\ref{teo3}).
\end{enumerate}
The present study is based on the following approach. We first establish the main problem under study:
\begin{multline}
\left(Q(\partial_{z})\partial_{t_2}+\epsilon^{\Delta_1}t_1^{d_1}\partial_{t_1}^{\delta_{D_1}}
\epsilon^{\tilde{\Delta}_2}t_2^{\tilde{d}_2}\partial_{t_2}^{\tilde{\delta}_{D_2}}R_{D_1,D_2}(\partial_z)+\epsilon^{\tilde{\Delta}_3}t_2^{\tilde{d}_3}\partial_{t_2}^{\tilde{\delta}_{D_3}}R_{D_3}(\partial_z)\right)u(\boldsymbol{t},z,\epsilon)\\
= (P_1(\partial_z,\epsilon)u(\boldsymbol{t},z,\epsilon))(P_2(\partial_z,\epsilon)u(\boldsymbol{t},z,\epsilon)) + \sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon^{\Delta_{l_1,l_2}}t_1^{d_{l_1}}t_2^{\tilde{d}_{l_2}}\partial_{t_1}^{\delta_{l_1}}\partial_{t_2}^{\tilde{\delta}_{l_2}}R_{l_1,l_2}(\partial_z)u(\boldsymbol{t},z,\epsilon)\\
+ f(\boldsymbol{t},z,\epsilon), \label{ICP_main00}
\end{multline}
where $k_1,k_2\ge1$, $D_1,D_2\ge 2$, $\Delta_1,d_1,\delta_{D_1},\tilde{\Delta}_{2},\tilde{d}_2,\tilde{\delta}_{D_2},\tilde{\Delta}_{3},\tilde{d}_3,\tilde{\delta}_{D_3}$ are integer numbers, and for all $0\le l_1\le D_1-1$ and $0\le l_2\le D_2-1$, we take nonnegative integers $d_{l_1},\tilde{d}_{l_2},\delta_{l_1},\tilde{\delta}_{l_2}$, and $\Delta_{l_1,l_2}$, under the assumptions (\ref{e120})-(\ref{e331}). Moreover, $Q, R_{D_1,D_2},R_{D_3}$ and $R_{l_1,l_2}$ are polynomials with complex coefficients, for all $0\le l_1\le D_1-1$ and $0\le l_2\le D_2-1$. The polynomials $P_1,P_2$ have coefficients which are holomorphic functions with respect to the perturbation parameter on some neighborhood of the origin, under assumptions (\ref{raicesgrandes})-(\ref{e165b}). The forcing term $f(t_1,t_2,z,\epsilon)$ is given by a holomorphic and bounded function on a neighborhood of the origin with respect to both time variables and the perturbation parameter $\epsilon$, and on some horizontal strip with respect to the variable $z$.\smallskip
We search for analytic solutions of (\ref{ICP_main00}) given as a Laplace and Fourier transform of certain function to be determined:
\begin{equation}\label{e2}
u(t_1,t_2,z,\epsilon)=\frac{k_1k_2}{(2\pi)^{1/2}}\int_{-\infty}^{\infty}\int_{L_{d_1}}\int_{L_{d_2}}\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(u_1,u_2,m,\epsilon)e^{-\left(\frac{u_1}{\epsilon t_1}\right)^{k_1}-\left(\frac{u_2}{\epsilon t_2}\right)^{k_2}}\frac{du_2}{u_2}\frac{du_1}{u_1},
\end{equation}
where $L_{\gamma_{j}}=\mathbb{R}_{+}e^{i\gamma_j}$, for some appropriate direction $\gamma_j\in\mathbb{R}$, for $j=1,2$. The problem of finding such a function is equivalent (in view of Lemma~\ref{lema257}) to solving an auxiliary convolution equation in the Borel plane. More precisely, there is a one-to-one correspondence between functions $u(t_1,t_2,z,\epsilon)$ of the form (\ref{e2}), which solve (\ref{ICP_main00}), and functions $\omega(\tau_1,\tau_2,m,\epsilon)$ admitting Laplace transform with respect to the first two variables along directions $d_1$ and $d_2$, respectively, and Fourier transform with respect to the variable $m$, which turn out to be solutions of a convolution equation (see (\ref{e310})).\smallskip
For every fixed value of the perturbation parameter $\epsilon$, $(\tau_1,\tau_2,m)\mapsto \omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\tau_1,\tau_2,m,\epsilon)$ is obtained as the fixed point of the contractive operator $\mathcal{H}_\epsilon$ (see~(\ref{e623}) for its definition), acting on a Banach space of functions with exponential decay at infinity in the Fourier variable, defined with respect to $(\tau_1,\tau_2)$ on the product of a neighborhood of the origin together with an infinite sector of bisecting direction $d_1$, and an infinite sector of bisecting direction $d_2$, and subject to a certain exponential growth of monomial type at infinity. More precisely, $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\tau_1,\tau_2,m,\epsilon)$ is a continuous function in $(\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}\times \mathbb{R}\times D(0,\epsilon_0)\setminus\{0\}$, holomorphic with respect to $(\tau_1,\tau_2)$ in $(D(0,\rho)\cup S_{d_1})\times S_{d_2}$, and holomorphic on $D(0,\epsilon_0)\setminus\{0\}$ with respect to the perturbation parameter. In addition to this, there exist constants $\varpi,\mu,\beta,\nu_1,\nu_2>0$ such that
$$|\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\tau_1,\tau_2,m,\epsilon)|\le\varpi (1+|m|)^{-\mu}\frac{\left|\frac{\tau_1}{\epsilon}\right|}{1+\left|\frac{\tau_1}{\epsilon}\right|^{2k_1}}\frac{\left|\frac{\tau_2}{\epsilon}\right|}{1+\left|\frac{\tau_2}{\epsilon}\right|^{2k_2}}\exp\left(-\beta|m|+\nu_1\left|\frac{\tau_1}{\epsilon}\right|^{k_1}+\nu_2\left|\frac{\tau_2}{\epsilon}\right|^{k_2}\right),$$
for every $(\tau_1,\tau_2,m,\epsilon)\in (\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}\times \mathbb{R}\times D(0,\epsilon_0)\setminus\{0\}$. These bounds guarantee that the Laplace and Fourier transforms in (\ref{e2}) make sense. At this point, we are able to construct a family of solutions $\{ u_{p_1,p_2}(t_1,t_2,z,\epsilon) \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ of (\ref{ICP_main00}), where $u_{p_1,p_2}(t_1,t_2,z,\epsilon)$ is a holomorphic function defined in $\mathcal{T}_1\times\mathcal{T}_2\times H_{\beta'}\times \mathcal{E}_{p_1,p_2}$, with $\mathcal{T}_1$ and $\mathcal{T}_2$ being finite sectors in $\mathbb{C}^{\star}$ with vertex at the origin, and where $\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ forms a good covering in $\mathbb{C}^{\star}$ (see Definition~\ref{goodcovering}).\smallskip
The distinction between $k_1<k_2$ and $k_2<k_1$ provides Gevrey asymptotics or multisummability results in Theorem~\ref{teo2}, resp. Theorem~\ref{teo3}. It is worth mentioning that these results lean on the application of a cohomological criterion known as the Ramis-Sibuya Theorem, resp. a multilevel version of that result. \smallskip
The fact that a different behavior is observed with respect to the two time variables is due to the domain of definition of $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}$ with respect to these variables: with respect to the first variable it is defined on a neighborhood of the origin which can only be prolonged to that neighborhood together with an infinite sector, whereas with respect to the second time variable it cannot be defined on any neighborhood of the origin, but only on some infinite sector. This makes it impossible to apply a deformation path when estimating the difference of two consecutive solutions in order to apply the multilevel version of the Ramis-Sibuya Theorem. With respect to the study of the main problem in~\cite{family1}, the main difficulty at this point comes from the fact that Case 1 in Theorem 1 of~\cite{family1} is no longer available.\smallskip
We also find it necessary to justify the fact that $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}$ cannot be defined with respect to $(\tau_1,\tau_2)$ on sets of the form
\begin{equation}\label{e145}
S_1\times (S_2\cup D(0,\rho_2)),
\end{equation}
for some infinite sectors $S_1$ and $S_2$, and for some $\rho_2>0$. In order to solve the main equation, one needs to divide by $P_m(\tau_1,\tau_2)$ (see (\ref{e326}) for its definition). However, as stated in Section~\ref{secnodef}, the roots of this polynomial lie on sets of the form (\ref{e145}), for any $\rho_2>0$. Therefore, a small divisor phenomenon is observed, which does not allow a summability procedure. This occurrence has already been noticed in other contexts in previous works: in the framework of $q$-difference-differential equations~\cite{lama3}, in the context of multilevel Gevrey solutions of PDEs in the complex domain in~\cite{lama2}, etc.\smallskip
The layout of the paper is as follows.\smallskip
After recalling the definition and the action of the Fourier transform in the first part of Section~\ref{sec2}, we describe the main problem under study in Section~\ref{seclayout}, and reduce it to the search for a solution of an auxiliary convolution equation. Such a solution is obtained by a fixed point argument in appropriate Banach spaces (see Section~\ref{fixed}), whose main properties are provided in Section~\ref{subsecespbanach}.
Section~\ref{secnodef} is devoted to motivating the domain of definition of the solution, in contrast to that studied in~\cite{family1}.
The first main result of our work is Theorem~\ref{teo1}, where the existence of a family of analytic solutions of the main problem is obtained. In Section~\ref{secborellaplace} we recall the Borel summability procedure and two cohomological criteria: the Ramis-Sibuya Theorem, and a multilevel version of the Ramis-Sibuya Theorem. We conclude the present work with the existence of a formal solution of the problem, and two asymptotic results which connect the formal and the analytic solutions: Theorem~\ref{teo2} states a result on Gevrey asymptotics for a subfamily of equations; Theorem~\ref{teo3} states a multisummability result for a different subfamily of the equations under study.
\section{Layout of the main and auxiliary problems}\label{sec2}
This section is devoted to describe the main problem under study. We first recall some facts on the action of Fourier transform on certain Banach spaces of functions.
\subsection{Fourier transform on exponentially decreasing function spaces}
In order to transform the main problem under study into an auxiliary one, easier to handle, we first describe the action of Fourier transform in certain Banach spaces of rapidly decreasing functions.
\begin{defin} Let $\beta, \mu \in \mathbb{R}$. $E_{(\beta,\mu)}$ stands for the vector space of continuous functions $h : \mathbb{R} \rightarrow \mathbb{C}$ such that
$$ \left\|h(m)\right\|_{(\beta,\mu)} = \sup_{m \in \mathbb{R}} (1+|m|)^{\mu} \exp( \beta |m|) |h(m)| $$
is finite. $E_{(\beta,\mu)}$ turns out to be a Banach space when endowed with the norm $\left\|.\right\|_{(\beta,\mu)}$.
\end{defin}
The following result is stated without proof; the proof can be found in~\cite{lama}, Proposition 7.
\begin{prop}\label{prop359b}
Let $f \in E_{(\beta,\mu)}$ with $\beta > 0$, $\mu > 1$. The inverse Fourier transform of $f$
$$ \mathcal{F}^{-1}(f)(x) = \frac{1}{ (2\pi)^{1/2} } \int_{-\infty}^{+\infty} f(m) \exp( ixm ) dm,\quad x\in\mathbb{R},$$
can be extended to an analytic function on the strip
$$H_{\beta} := \{ z \in \mathbb{C} / |\mathrm{Im}(z)| < \beta \}.
$$
Let $\phi(m) = im f(m) \in E_{(\beta,\mu - 1)}$. Then, it holds that $ \partial_{z} \mathcal{F}^{-1}(f)(z) = \mathcal{F}^{-1}(\phi)(z)$, for $z \in H_{\beta}$.
Let $g \in E_{(\beta,\mu)}$ and put $\psi(m) = \frac{1}{(2\pi)^{1/2}} f \ast g(m)$, the convolution product of $f$ and $g$, for all $m \in \mathbb{R}$. $\psi$ belongs to $E_{(\beta,\mu)}$. Moreover, we have $\mathcal{F}^{-1}(f)(z)\mathcal{F}^{-1}(g)(z) = \mathcal{F}^{-1}(\psi)(z)$, for $z\in H_{\beta}$.
\end{prop}
\subsection{Layout of the main problem}\label{seclayout}
Let $k_1,k_2 \geq 1$ and $D_1,D_2 \geq 2$ be integer numbers. We also consider nonnegative integer numbers $d_1,\tilde{d}_j,\Delta_1,\tilde{\Delta}_j,\delta_{D_1},\tilde{\delta}_{D_j}$, for $j\in\{2,3\}$. For all $0\le l_1\le D_1-1$ and $0\le l_2\le D_2-1$, let $d_{l_1},\tilde{d}_{l_2},\delta_{l_1},\tilde{\delta}_{l_2},\Delta_{l_1,l_2}$ be nonnegative integers.
We assume the previous elements satisfy the next identities:
\begin{equation}\label{e120}
\frac{2}{k_2}<\tilde{\delta}_{D_2}\le \tilde{\delta}_{D_3}
\end{equation}
and $\delta_{l_1} < \delta_{l_1+1}$, $\tilde{\delta}_{l_2} < \tilde{\delta}_{l_2+1}$
for all $0 \leq l_1 \leq D_1-1$ and $0 \leq l_2 \leq D_2-1$,
\begin{multline}
\Delta_1+\tilde{\Delta}_2-d_1-\tilde{d}_2-1+\delta_{D_1}+\tilde{\delta}_{D_2}=0\qquad \tilde{\Delta}_3-\tilde{d}_3+\tilde{\delta}_{D_3}-1=0\\
d_1=\delta_{D_1}(k_1+1),\qquad k_2+1+\tilde{d}_j=\tilde{\delta}_{D_j}(k_2+1)\quad (j=2,3)
\label{e232}
\end{multline}
Moreover, for every $0\le l_1\le D_1-1$ and $0\le l_2\le D_2-1$, we assume
\begin{equation}\label{e331}
d_{l_1}>\delta_{l_1}(k_1+1),\qquad \tilde{d}_{l_2}>(\tilde{\delta}_{l_2}-1)(k_2+1),
\end{equation}
$$ \Delta_{l_1,l_2}>\delta_{D_1}k_1+(\tilde{\delta}_{D_2}-1)k_2.$$
Let $Q(X), R_{D_1,D_2},R_{D_3}\in\mathbb{C}[X]$, and for $0\le l_1\le D_1-1$ and $0\le l_2\le D_2-1$ we take $R_{l_1,l_2}(X)\in\mathbb{C}[X]$. We consider polynomials $P_1,P_2$ with coefficients belonging to $\mathcal{O}(\overline{D}(0,\epsilon_0))$, such that
\begin{equation}\label{raicesgrandes}
\hbox{deg}(Q)\ge \hbox{deg}(R_{l_1,l_2}),
\end{equation}
for $0\le l_1\le D_1-1$ and $0\le l_2\le D_2-1$. Moreover, we choose these polynomials satisfying
\begin{equation}
\mathrm{deg}(Q) \geq \mathrm{deg}(P_{j}),\quad j=1,2\qquad Q(im)\neq0,\quad m\in\mathbb{R},\label{assum_deg_Q_R}
\end{equation}
$$\mathrm{deg}(Q)= \mathrm{deg}(R_{D_3})= \mathrm{deg}(R_{D_1,D_2}).$$
More precisely, we assume there exist sectorial annuli $E_{D_3,Q}$ and $E_{D_1,D_2,Q}$ such that \begin{equation}\label{e165b}
\frac{R_{D_3}(im)}{Q(im)}\in E_{D_3,Q}, \qquad \frac{R_{D_1,D_2}(im)}{Q(im)}\in E_{D_1,D_2,Q},
\end{equation}
for every $m\in\mathbb{R}$. In other words, there exist real numbers $0<r_{j}<R_j$ and $\alpha_j<\beta_j$, for $j=1,2,$ such that
\begin{multline}
E_{D_3,Q}:=\{x\in\mathbb{C}:r_1<|x|<R_1, \hbox{arg}(x)\in(\alpha_1,\beta_1)\},\\
E_{D_1,D_2,Q}:=\{x\in\mathbb{C}:r_2<|x|<R_2, \hbox{arg}(x)\in(\alpha_2,\beta_2)\}.\label{eannulus}
\end{multline}
Throughout the whole work, we denote the pairs of variables in bold letters: $\boldsymbol{t}:=(t_1,t_2)$, $\boldsymbol{T}:=(T_1,T_2)$, $\boldsymbol{\tau}:=(\tau_1,\tau_2)$, etc.
We consider the following nonlinear initial value problem
\begin{multline}
\left(Q(\partial_{z})\partial_{t_2}+\epsilon^{\Delta_1}t_1^{d_1}\partial_{t_1}^{\delta_{D_1}}
\epsilon^{\tilde{\Delta}_2}t_2^{\tilde{d}_2}\partial_{t_2}^{\tilde{\delta}_{D_2}}R_{D_1,D_2}(\partial_z)+\epsilon^{\tilde{\Delta}_3}t_2^{\tilde{d}_3}\partial_{t_2}^{\tilde{\delta}_{D_3}}R_{D_3}(\partial_z)\right)u(\boldsymbol{t},z,\epsilon)\\
= (P_1(\partial_z,\epsilon)u(\boldsymbol{t},z,\epsilon))(P_2(\partial_z,\epsilon)u(\boldsymbol{t},z,\epsilon)) + \sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon^{\Delta_{l_1,l_2}}t_1^{d_{l_1}}t_2^{\tilde{d}_{l_2}}\partial_{t_1}^{\delta_{l_1}}\partial_{t_2}^{\tilde{\delta}_{l_2}}R_{l_1,l_2}(\partial_z)u(\boldsymbol{t},z,\epsilon)\\
+ f(\boldsymbol{t},z,\epsilon) \label{ICP_main0}
\end{multline}
with null initial data $u(t_1,0,z,\epsilon)\equiv u(0,t_2,z,\epsilon) \equiv 0$.\smallskip
The forcing term $f(\boldsymbol{t},z,\epsilon)$ is constructed as follows. For $n_1,n_2 \geq 1$, let $m \mapsto F_{n_1,n_2}(m,\epsilon)$ be a family of functions belonging to the Banach space
$E_{(\beta,\mu)}$ for some $\beta > 0$, $\mu > \max( \mathrm{deg}(P_{1})+1, \mathrm{deg}(P_{2})+1)$ and which
depend holomorphically on $\epsilon \in D(0,\epsilon_{0})$. We assume there exist constants $K_{0},T_{0}>0$
such that
\begin{equation}\label{e165}
\left\|F_{n_1,n_2}(m,\epsilon)\right\|_{(\beta,\mu)} \leq K_{0} (\frac{1}{T_{0}})^{n_1+n_2},
\end{equation}
for all $n_1,n_2 \geq 1$, and $\epsilon \in D(0,\epsilon_{0})$. We deduce that
$$\mathbf{F}(\boldsymbol{T},z,\epsilon) = \sum_{n_1,n_2 \geq 1} \mathcal{F}^{-1}(m \mapsto F_{n_1,n_2}(m,\epsilon))(z) T_1^{n_1}T_2^{n_2} $$
represents a bounded and holomorphic function on $D(0,T_{0}/2)^2 \times H_{\beta'} \times D(0,\epsilon_{0})$ for any $0 < \beta' < \beta$. We define
\begin{equation}
f(\boldsymbol{t},z,\epsilon) = \mathbf{F}(\epsilon t_1,\epsilon t_2 , z,\epsilon).
\label{defin_c_0_f}
\end{equation}
Observe that the function $f$ is holomorphic and bounded on $D(0,\rho)^2 \times H_{\beta'} \times D(0,\epsilon_{0})$, where $\rho \epsilon_{0} < T_{0}/2$.
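For the reader's convenience, let us briefly justify the boundedness of $\mathbf{F}$ stated above. For $|\mathrm{Im}(z)|\le\beta'<\beta$ and $\mu>1$ one has, directly from the definition of the inverse Fourier transform and of the norm $\left\|\cdot\right\|_{(\beta,\mu)}$,
$$|\mathcal{F}^{-1}(m \mapsto F_{n_1,n_2}(m,\epsilon))(z)|\le \frac{\left\|F_{n_1,n_2}(m,\epsilon)\right\|_{(\beta,\mu)}}{(2\pi)^{1/2}}\int_{-\infty}^{+\infty}(1+|m|)^{-\mu}e^{-(\beta-\beta')|m|}dm=:C_{\mu,\beta,\beta'}\left\|F_{n_1,n_2}(m,\epsilon)\right\|_{(\beta,\mu)},$$
so, in view of (\ref{e165}), the series defining $\mathbf{F}$ is dominated on $D(0,T_{0}/2)^2\times H_{\beta'}\times D(0,\epsilon_0)$ by
$$K_0C_{\mu,\beta,\beta'}\sum_{n_1,n_2\ge1}\left(\frac{|T_1|}{T_0}\right)^{n_1}\left(\frac{|T_2|}{T_0}\right)^{n_2}\le K_0C_{\mu,\beta,\beta'},$$
which yields the stated holomorphy and boundedness.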
We search for solutions of the main problem (\ref{ICP_main0}), which are time scaled and expressed as a Fourier transform with respect to $z$ variable, in the form
$$u(\boldsymbol{t},z,\epsilon)=\mathbb{U}(\epsilon t_1,\epsilon t_2,z,\epsilon)=\frac{1}{(2\pi)^{1/2}}\int_{-\infty}^{\infty}U(\epsilon t_1,\epsilon t_2,m,\epsilon)\exp(izm)dm.$$
The symbol $U(\boldsymbol{T},m,\epsilon)$ then satisfies the following equation.
\begin{multline}
\left(Q(im)\epsilon\partial_{T_2}+ \epsilon^{\Delta_1+\tilde{\Delta}_2-d_1-\tilde{d}_2}T_1^{d_1}T_2^{\tilde{d}_2}\epsilon^{\delta_{D_1}+\tilde{\delta}_{D_2}}\partial_{T_1}^{\delta_{D_1}}\partial_{T_2}^{\tilde{\delta}_{D_2}}R_{D_1,D_2}(im)\right.\\
\left.+\epsilon^{\tilde{\Delta}_3-\tilde{d}_3}T_2^{\tilde{d}_3}\epsilon^{\tilde{\delta}_{D_3}}\partial_{T_2}^{\tilde{\delta}_{D_3}}R_{D_3}(im)\right)U(\boldsymbol{T},m,\epsilon)\\
=\frac{1}{(2\pi)^{1/2}}\int_{-\infty}^{\infty}(P_1(i(m-m_1),\epsilon)U(\boldsymbol{T},m-m_1,\epsilon))(P_2(im_1,\epsilon)U(\boldsymbol{T},m_1,\epsilon))dm_1\\
+\sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon^{\Delta_{l_1,l_2}-d_{l_1}-\tilde{d}_{l_2}+\delta_{l_1}+\tilde{\delta}_{l_2}}T_1^{d_{l_1}}T_2^{\tilde{d}_{l_2}}\partial_{T_1}^{\delta_{l_1}}\partial_{T_2}^{\tilde{\delta}_{l_2}}R_{l_1,l_2}(im)U(\boldsymbol{T},m,\epsilon)\\
+ \mathcal{F}(z\mapsto \mathbf{F}(\boldsymbol{T},z,\epsilon))(m)\label{e32b}
\end{multline}
Our goal is to provide solutions of (\ref{e32b}) in the form of a Laplace transform. Namely, we search for solutions of the form
\begin{equation}\label{e215}
U(\boldsymbol{T},m, \epsilon)=k_1k_2 \int_{L_{\gamma_1}}\int_{L_{\gamma_2}} \omega_{\boldsymbol{k}}^{\boldsymbol{d}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_1}{T_1})^{k_1}-(\frac{u_2}{T_2})^{k_2}} \frac{du_2}{u_2}\frac{du_1}{u_1},
\end{equation}
where $L_{\gamma_j}=\mathbb{R}_{+}e^{i\gamma_j}$, for some appropriate direction $\gamma_j\in\mathbb{R}$ depending on $T_j$, for $j=1,2$. The function $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)$ is constructed in the forthcoming sections as the fixed point of a map defined on certain Banach spaces. For $j=1,2,$ let $S_{d_j}$ be infinite sectors with vertex at the origin and bisecting direction $d_j$, such that $L_{\gamma_j}\subseteq S_{d_j}$. We fix a positive real number $\rho>0$.
In the present section, we start from a function $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)$ continuous on $(\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}\times \mathbb{R}\times D(0,\epsilon_0)\setminus\{0\}$, holomorphic with respect to $(\boldsymbol{\tau},\epsilon)$ in $(D(0,\rho)\cup S_{d_1})\times S_{d_2}\times (D(0,\epsilon_0)\setminus\{0\})$, and such that
\begin{equation}\label{e209}
|\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)| \leq \varpi_{\boldsymbol{d}}(1+ |m|)^{-\mu} e^{-\beta|m|}
\frac{ |\frac{\tau_1}{\epsilon}|}{1 + |\frac{\tau_1}{\epsilon}|^{2k_1}}\frac{ |\frac{\tau_2}{\epsilon}|}{1 + |\frac{\tau_2}{\epsilon}|^{2k_2}} \exp( \nu_1 |\frac{\tau_1}{\epsilon}|^{k_1}+\nu_2 |\frac{\tau_2}{\epsilon}|^{k_2})
\end{equation}
for all $\boldsymbol{\tau}\in (\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}$, every $m\in\mathbb{R}$ and $\epsilon\in D(0,\epsilon_0)\setminus\{0\}$.
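Let us briefly indicate why the directions $\gamma_j$ in (\ref{e215}) need to be chosen in accordance with $T_j$, and how the bound (\ref{e209}) restricts the size of $T_j$. Along $L_{\gamma_j}$ one has
$$\mathrm{Re}\left(\left(\frac{u_j}{T_j}\right)^{k_j}\right)=\left|\frac{u_j}{T_j}\right|^{k_j}\cos\left(k_j(\gamma_j-\mathrm{arg}(T_j))\right),\qquad j=1,2,$$
so, combining this with (\ref{e209}), the integrand in (\ref{e215}) is, roughly speaking, exponentially flat at infinity provided that $\cos(k_j(\gamma_j-\mathrm{arg}(T_j)))\ge\delta_j$ for some $\delta_j>0$ and $|T_j|<(\delta_j/\nu_j)^{1/k_j}|\epsilon|$, for $j=1,2$. Since $T_j=\epsilon t_j$, this is consistent with the fact that the analytic solutions constructed later on are defined on finite sectors $\mathcal{T}_1,\mathcal{T}_2$ with respect to the time variables.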
In order to construct the solution, we present a refined form of the problem. For that purpose, we need some preliminary results. We make use of the following relations, which can be found in~\cite{taya}, p. 40:
\begin{multline}
T_1^{\delta_{D_1}(k_1+1)} \partial_{T_1}^{\delta_{D_1}} = (T_1^{k_1+1}\partial_{T_1})^{\delta_{D_1}} +
\sum_{1 \leq p_1 \leq \delta_{D_1}-1} A_{\delta_{D_1},p_1} T_1^{k_1(\delta_{D_1}-p_1)} (T_1^{k_1+1}\partial_{T_1})^{p_1}\\
=(T_1^{k_1+1}\partial_{T_1})^{\delta_{D_1}}+A_{\delta_{D_1}}(T_1,\partial_{T_1})
\label{expand_op_diff}
\end{multline}
\begin{multline}
T_2^{\tilde{\delta}_{D_j}(k_2+1)} \partial_{T_2}^{\tilde{\delta}_{D_j}} = (T_2^{k_2+1}\partial_{T_2})^{\tilde{\delta}_{D_j}} +
\sum_{1 \leq p_j \leq \tilde{\delta}_{D_j}-1} \tilde{A}_{\tilde{\delta}_{D_j},p_j} T_2^{k_2(\tilde{\delta}_{D_j}-p_j)} (T_2^{k_2+1}\partial_{T_2})^{p_j}\\
=(T_2^{k_2+1}\partial_{T_2})^{\tilde{\delta}_{D_j}}+\tilde{A}_{\tilde{\delta}_{D_j}}(T_2,\partial_{T_2}) \label{expand_op_diff2}
\end{multline}
for some real numbers $A_{\delta_{D_1},p_1}$, $p_1=1,\ldots,\delta_{D_1}-1$ and $\tilde{A}_{\tilde{\delta}_{D_j},p_j}$, $p_j=1,\ldots,\tilde{\delta}_{D_j}-1$, for $j=2,3$. We write $A_{D_1}$ (resp. $\tilde{A}_{D_j}$, for $j=2,3$,) in the place of $A_{\delta_{D_1}}$ (resp. $\tilde{A}_{\tilde{\delta}_{D_j}}$) for the sake of simplicity.
We divide both sides of (\ref{e32b}) by $\epsilon$ and multiply them by $T_2^{k_2+1}$. Under the assumptions displayed in (\ref{e232}) one may apply (\ref{expand_op_diff}) and (\ref{expand_op_diff2}) in order to rewrite equation (\ref{e32b}). This step is important in order to exhibit the equation as an expression involving operators which are algebraically well behaved with respect to the Laplace transform. The resulting equation is as follows:
\begin{multline}
\left(Q(im)T_2^{k_2+1}\partial_{T_2}+ (T_1^{k_1+1}\partial_{T_1})^{\delta_{D_1}}(T_2^{k_2+1}\partial_{T_2})^{\tilde{\delta}_{D_2}} R_{D_1,D_2}(im)+(T_2^{k_2+1}\partial_{T_2})^{\tilde{\delta}_{D_3}}R_{D_3}(im)\right)U(\boldsymbol{T},m,\epsilon)\\
=\left[-(T_1^{k_1+1}\partial_{T_1})^{\delta_{D_1}}\tilde{A}_{D_2}(T_2,\partial_{T_2})R_{D_1,D_2}(im)-(T_2^{k_2+1}\partial_{T_2})^{\tilde{\delta}_{D_2}}A_{D_1}(T_1,\partial_{T_1})R_{D_1,D_2}(im)\right.\\
\left.-A_{D_1}(T_1,\partial_{T_1})\tilde{A}_{D_2}(T_2,\partial_{T_2})R_{D_1,D_2}(im)- \tilde{A}_{D_3}(T_2,\partial_{T_2})R_{D_3}(im)\right] U(\boldsymbol{T},m,\epsilon)\\
+\frac{T_2^{k_2+1}\epsilon^{-1}}{(2\pi)^{1/2}}\int_{-\infty}^{\infty}(P_1(i(m-m_1),\epsilon)U(\boldsymbol{T},m-m_1,\epsilon))(P_2(im_1,\epsilon)U(\boldsymbol{T},m_1,\epsilon))dm_1\\
+\sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon^{\Delta_{l_1,l_2}-d_{l_1}-\tilde{d}_{l_2}+\delta_{l_1}+\tilde{\delta}_{l_2}-1}T_1^{d_{l_1}}T_2^{\tilde{d}_{l_2}}\partial_{T_1}^{\delta_{l_1}}\partial_{T_2}^{\tilde{\delta}_{l_2}}R_{l_1,l_2}(im)U(\boldsymbol{T},m,\epsilon)\\
+ T_2^{k_2+1}\epsilon^{-1}\mathcal{F}(z\mapsto \mathbf{F}(\boldsymbol{T},z,\epsilon))(m).\label{e32c}
\end{multline}
The following result allows us to establish a one-to-one correspondence between solutions of equation (\ref{e32c}) and solutions of an auxiliary equation in the Borel plane, (\ref{e310}). The latter equation will be presented afterwards, in this same section.
\begin{lemma}\label{lema257}
Let $U(\boldsymbol{T},m,\epsilon)$ be the function constructed in (\ref{e215}). Then, the following statements hold:
$$T_j^{k_j+1}\partial_{T_j}U(\boldsymbol{T},m,\epsilon)=k_1k_2\int_{L_{\gamma_1}}\int_{L_{\gamma_2}}(k_ju_j^{k_j})\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{u},m,\epsilon)e^{-\left(\frac{u_1}{T_1}\right)^{k_1}-\left(\frac{u_2}{T_2}\right)^{k_2}}\frac{du_2}{u_2}\frac{du_1}{u_1},\quad j=1,2.$$
\begin{multline*}T_1^{m_1}U(\boldsymbol{T},m,\epsilon)=k_1k_2\int_{L_{\gamma_1}}\int_{L_{\gamma_2}}\left(\frac{u_1^{k_1}}{\Gamma\left(\frac{m_1}{k_1}\right)}\int_0^{u_1^{k_1}}(u_1^{k_1}-s_1)^{\frac{m_1}{k_1}-1}\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(s_1^{1/k_1},u_2,m,\epsilon)\frac{ds_1}{s_1}\right)\\
\times e^{-\left(\frac{u_1}{T_1}\right)^{k_1}-\left(\frac{u_2}{T_2}\right)^{k_2}}\frac{du_2}{u_2}\frac{du_1}{u_1},\qquad m_1\in\mathbb{N}.
\end{multline*}
\begin{multline*}
T_2^{m_2}U(\boldsymbol{T},m,\epsilon)=k_1k_2\int_{L_{\gamma_1}}\int_{L_{\gamma_2}}\left(\frac{u_2^{k_2}}{\Gamma\left(\frac{m_2}{k_2}\right)}\int_0^{u_2^{k_2}}(u_2^{k_2}-s_2)^{\frac{m_2}{k_2}-1}\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(u_1,s_2^{1/k_2},m,\epsilon)\frac{ds_2}{s_2}\right)\\
\times e^{-\left(\frac{u_1}{T_1}\right)^{k_1}-\left(\frac{u_2}{T_2}\right)^{k_2}}\frac{du_2}{u_2}\frac{du_1}{u_1},\qquad m_2\in\mathbb{N}.
\end{multline*}
\begin{multline*}
\int_{-\infty}^{\infty}U(\boldsymbol{T},m-m_1,\epsilon)U(\boldsymbol{T},m_1,\epsilon)dm_1\\
=k_1k_2\int_{L_{\gamma_1}}\int_{L_{\gamma_2}}\left( u_1^{k_1}u_2^{k_2}\int_{-\infty}^{\infty}\int_{0}^{u_1^{k_1}}\int_0^{u_2^{k_2}}\omega_{\boldsymbol{k}}^{\boldsymbol{d}}((u_1^{k_1}-s_1)^{\frac{1}{k_1}},(u_2^{k_2}-s_2)^{\frac{1}{k_2}},m-m_1,\epsilon)\right.\\
\left.\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m_1,\epsilon)\frac{1}{(u_1^{k_1}-s_1)s_1}\frac{1}{(u_2^{k_2}-s_2)s_2}ds_1ds_2\right)e^{-\left(\frac{u_1}{T_1}\right)^{k_1}-\left(\frac{u_2}{T_2}\right)^{k_2}}\frac{du_2}{u_2}\frac{du_1}{u_1}
\end{multline*}
\end{lemma}
\begin{proof}
The first statement is a direct application of differentiation under the integral sign. The second and third statements are analogous, so we only give details for the second one.
In order to prove the second statement, we first apply Fubini's theorem to the inner and outer integrals. The right-hand side of the second identity can be rewritten in the following form:
\begin{multline*}
A:=\int_{L_{k_1\gamma_1}}k_1k_2\int_{L_{\gamma_2}}\left(\int_{L_{s_1^{1/k_1},\gamma_1}}\frac{u_1^{k_1-1}}{\Gamma\left(\frac{m_1}{k_1}\right)}(u_1^{k_1}-s_1)^{\frac{m_1}{k_1}-1}e^{-\left(\frac{u_1}{T_1}\right)^{k_1}}\frac{du_1}{u_1}\right)\\
\times \omega_{\boldsymbol{k}}^{\boldsymbol{d}}(s_1^{1/k_1},u_2,m,\epsilon)e^{-\left(\frac{u_2}{T_2}\right)^{k_2}}\frac{du_2}{u_2}\frac{ds_1}{s_1},
\end{multline*}
where $L_{k_1\gamma_1}:=\{re^{ik_1\gamma_1}:r\ge0\}$ and $L_{s_1^{1/k_1},\gamma_1}=\{re^{i\gamma_1}:r\ge |s_1|^{1/k_1}\}$. We proceed by applying two consecutive changes of variable in the inner integral of the previous expression: first $h_1=u_1^{k_1}$, and then $h_1-s_1=h_{11}$. We arrive at
\begin{multline*}
A=k_1k_2\int_{L_{k_1\gamma_1}}\frac{1}{\Gamma\left(\frac{m_1}{k_1}\right)}h_{11}^{\frac{m_1}{k_1}-1}e^{-\frac{h_{11}}{T_1^{k_1}}}dh_{11}\\
\times \int_{L_{k_1\gamma_1}}\int_{L_{\gamma_2}} \omega_{\boldsymbol{k}}^{\boldsymbol{d}}(s_1^{1/k_1},u_2,m,\epsilon)e^{-\frac{s_1}{T_1^{k_1}}-\left(\frac{u_2}{T_2}\right)^{k_2}}\frac{du_2}{u_2}\frac{ds_1}{s_1}
\end{multline*}
The change of variable $\tilde{u}_1=s_1^{1/k_1}$ in the second factor, followed by $h_{12}=\frac{h_{11}}{T_1^{k_1}}$ in the first one, yields
$$A=U(\boldsymbol{T},m,\epsilon)\frac{1}{\Gamma\left(\frac{m_1}{k_1}\right)}T_1^{m_1}\int_{L_{k_1\gamma_1}-k_1\hbox{arg}(T_1)}h_{12}^{\frac{m_1}{k_1}-1}e^{-h_{12}}dh_{12}.
$$
A path deformation and the definition of the Gamma function allow us to conclude that $A=T_1^{m_1}U(\boldsymbol{T},m,\epsilon)$, which is the desired identity.
The proof of the last formula follows the same lines of argument, involving Fubini's theorem, and it is omitted for the sake of brevity.
\end{proof}
\textbf{Remark:} Lemma~\ref{lema257} provides the equivalence between the existence of solutions of the two equations (\ref{e32c}) and (\ref{e310}), which are related by Laplace transform.
We define the operators
\begin{equation}\label{e558b}
\mathcal{A}_{\delta_{D_1}}\omega_{\boldsymbol{k}}(\boldsymbol{\tau},m,\epsilon)=\sum_{1\le p_1\le \delta_{D_1}-1}\frac{A_{\delta_{D_1},p_1} \tau_1^{k_1} }{\Gamma(\delta_{D_1}-p_1)}\int_{0}^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\delta_{D_1}-p_1-1}k_1s_1^{p_1}\omega_{\boldsymbol{k}}(s_1^{1/k_1},\tau_2,m,\epsilon)\frac{ds_1}{s_1},
\end{equation}
$$\tilde{\mathcal{A}}_{\tilde{\delta}_{D_j}}\omega_{\boldsymbol{k}}(\boldsymbol{\tau},m,\epsilon)=\sum_{1\le p_j\le \tilde{\delta}_{D_j}-1}\frac{\tilde{A}_{\tilde{\delta}_{D_j},p_j}\tau_2^{k_2} }{\Gamma(\tilde{\delta}_{D_j}-p_j)}\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\tilde{\delta}_{D_j}-p_j-1}k_2s_2^{p_j}\omega_{\boldsymbol{k}}(\tau_1,s_2^{1/k_2},m,\epsilon)\frac{ds_2}{s_2},$$
for $j=2,3$. Observe that they turn out to be the $m_{k_1}$ (resp. $m_{k_2}$) Borel transform of the operator $A_{D_1}(T_1,\partial_{T_1})$ (resp. $\tilde{A}_{D_j}(T_2,\partial_{T_2})$, for $j=2,3$) (see Section~\ref{secborellaplace} for more details on this).
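For the reader's convenience, we recall the formal rules behind these constructions (the analytic counterparts are discussed in Section~\ref{secborellaplace}): writing $\hat{\mathcal{B}}_{m_{k}}\big(\sum_{n\ge1}a_nT^n\big)(\tau)=\sum_{n\ge1}a_n\frac{\tau^n}{\Gamma(n/k)}$ for the formal $m_k$-Borel transform, one has
$$\hat{\mathcal{B}}_{m_{k}}\left(T^{k+1}\partial_{T}\hat{U}\right)(\tau)=k\tau^{k}\,\hat{\mathcal{B}}_{m_{k}}(\hat{U})(\tau),\qquad
\hat{\mathcal{B}}_{m_{k}}\left(T^{N}\hat{U}\right)(\tau)=\frac{\tau^{k}}{\Gamma\left(\frac{N}{k}\right)}\int_{0}^{\tau^{k}}(\tau^{k}-s)^{\frac{N}{k}-1}\hat{\mathcal{B}}_{m_{k}}(\hat{U})(s^{1/k})\frac{ds}{s},\qquad N\in\mathbb{N}.$$
These identities correspond, on the Laplace side, to the first and second statements of Lemma~\ref{lema257}.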
In view of the assumptions described in (\ref{e331}), we define the natural numbers $d_{l_1,k_1}$ and $d_{l_2,k_2}$ by
\begin{equation}\label{e332}
d_{l_1}=\delta_{l_1}(k_1+1)+d_{l_1,k_1},\quad \tilde{d}_{l_2}=(\tilde{\delta}_{l_2}-1)(k_2+1)+d_{l_2,k_2},
\end{equation}
for all $0\le l_1\le D_1-1$ and $0\le l_2\le D_2-1$.
Taking into account Lemma~\ref{lema257}, we see that $U(\boldsymbol{T},m,\epsilon)$ satisfies (\ref{e32c}) if and only if $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)$ is a solution of the following equation.
\begin{multline}
\left(Q(im)+ R_{D_1,D_2}(im)(k_1\tau_1^{k_1})^{\delta_{D_1}}(k_2\tau_2^{k_2})^{\tilde{\delta}_{D_2}-1}+R_{D_3}(im)(k_2\tau_2^{k_2})^{\tilde{\delta}_{D_3}-1}\right)\omega(\boldsymbol{\tau},m,\epsilon)\\
=-(k_1\tau_1^{k_1})^{\delta_{D_1}}\frac{\tilde{\mathcal{A}}_{D_2}(\tau_2)}{k_2\tau_2^{k_2}}R_{D_1,D_2}(im)\omega(\boldsymbol{\tau},m,\epsilon)- (k_2\tau_2^{k_2})^{\tilde{\delta}_{D_2}-1}\mathcal{A}_{D_1}(\tau_1)R_{D_1,D_2}(im)\omega(\boldsymbol{\tau},m,\epsilon)\\
-\mathcal{A}_{D_1}(\tau_1)\frac{\tilde{\mathcal{A}}_{D_2}(\tau_2)}{k_2\tau_2^{k_2}}R_{D_1,D_2}(im)\omega(\boldsymbol{\tau},m,\epsilon)-\frac{\tilde{\mathcal{A}}_{D_3}(\tau_2)}{k_2\tau_2^{k_2}}R_{D_3}(im)\omega(\boldsymbol{\tau},m,\epsilon)\\
+\frac{\epsilon^{-1}}{(2\pi)^{\frac{1}{2}}}\frac{\tau_1^{k_1}}{k_2\Gamma\left(1+\frac{1}{k_2}\right)}
\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\int_{-\infty}^{\infty}\int_0^{\tau_1^{k_1}}\int_0^{s_2}P_1(i(m-m_1),\epsilon)\\
\times \omega((\tau_1^{k_1}-s_1)^{\frac{1}{k_1}},(s_2-x_2)^{\frac{1}{k_2}},m-m_1,\epsilon)P_2(im_1,\epsilon)\omega(s_1^{\frac{1}{k_1}},x_2^{\frac{1}{k_2}},m_1,\epsilon)\frac{dx_2ds_1dm_1ds_2}{(\tau_1^{k_1}-s_1)s_1(s_2-x_2)x_2}\\
+\sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon^{\Delta_{l_1,l_2}-d_{l_1}-\tilde{d}_{l_2}+\delta_{l_1}+\tilde{\delta}_{l_2}-1}R_{l_1,l_2}(im)\frac{\tau_1^{k_1}}{k_2\Gamma\left(\frac{d_{l_1,k_1}}{k_1}\right)\Gamma\left(\frac{d_{l_2,k_2}}{k_2}\right)}\\
\times
\int_0^{\tau_2^{k_2}}\int_0^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\frac{d_{l_1,k_1}}{k_1}-1}(\tau_2^{k_2}-s_2)^{\frac{d_{l_2,k_2}}{k_2}-1}k_1^{\delta_{l_1}}s_1^{\delta_{l_1}}k_2^{\tilde{\delta}_{l_2}}s_2^{\tilde{\delta}_{l_2}}\omega(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m,\epsilon)\frac{ds_1}{s_1}\frac{ds_2}{s_2}\\
+\frac{\epsilon^{-1}}{k_2\Gamma\left(1+\frac{1}{k_2}\right)}\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\psi_{\boldsymbol{k}}(\tau_1,s_2^{\frac{1}{k_2}},m,\epsilon)\frac{ds_2}{s_2},\label{e310}
\end{multline}
where $\psi_{\boldsymbol{k}}$ is the formal $m_{k_1}$-Borel transform with respect to $T_1$ and the formal $m_{k_2}$-Borel transform with respect to $T_2$ of $F(\boldsymbol{T},m,\epsilon)$, i.e.
$$\psi_{\boldsymbol{k}}(\boldsymbol{\tau},m,\epsilon) = \sum_{n_1,n_2 \geq 1} F_{n_1,n_2}(m,\epsilon) \frac{\tau_1^{n_1}}{\Gamma(\frac{n_1}{k_1})}\frac{\tau_2^{n_2}}{\Gamma(\frac{n_2}{k_2})}.$$
Observe that $\psi_{\boldsymbol{k}}$ is an entire function with respect to $\boldsymbol{\tau}$. Moreover, regarding the construction of $\psi_{\boldsymbol{k}}$ and (\ref{e165}), one has
\begin{multline*}
\left\|\psi_{\boldsymbol{k}}(\boldsymbol{\tau},m,\epsilon)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)} \leq \sum_{n_1,n_2 \geq 1}
\left\|F_{n_1,n_2}(m,\epsilon)\right\|_{(\beta,\mu)}\\
\times (\sup_{\boldsymbol{\tau} \in (\bar{D}(0,\rho) \cup S_{d_1})\times S_{d_2}}
\frac{1 + |\frac{\tau_1}{\epsilon}|^{2k_1}}{|\frac{\tau_1}{\epsilon}|}\frac{1 + |\frac{\tau_2}{\epsilon}|^{2k_2}}{|\frac{\tau_2}{\epsilon}|} \exp(-\nu_1 |\frac{\tau_1}{\epsilon}|^{k_1}-\nu_2 |\frac{\tau_2}{\epsilon}|^{k_2})
\frac{|\tau_1|^{n_1}|\tau_2|^{n_2}}{\Gamma(\frac{n_1}{k_1})\Gamma(\frac{n_2}{k_2})})
\end{multline*}
for all $\epsilon \in D(0,\epsilon_{0}) \setminus \{ 0 \}$, any unbounded sectors $S_{d_1}$ and $S_{d_2}$ centered at 0 with bisecting directions $d_1 \in \mathbb{R}$ and $d_2\in\mathbb{R}$, respectively, and some $\boldsymbol{\nu}=(\nu_1,\nu_2)\in(0,+\infty)^2$.
\textbf{Remark:} According to classical estimates and the Stirling formula, we observe that $\psi_{\boldsymbol{k}}(\boldsymbol{\tau},m,\epsilon) \in F_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}^{\boldsymbol{d}}$. See Definition~\ref{def2}.
We write
\begin{equation}\label{e326}P_m(\boldsymbol{\tau})=Q(im)+ R_{D_1,D_2}(im)(k_1\tau_1^{k_1})^{\delta_{D_1}}(k_2\tau_2^{k_2})^{\tilde{\delta}_{D_2}-1}+R_{D_3}(im)(k_2\tau_2^{k_2})^{\tilde{\delta}_{D_3}-1}.
\end{equation}
\section{Construction of the solution for a convolution equation}\label{seccons}
The main aim of this section is to provide a solution of (\ref{e310}) which belongs to a certain Banach space of functions satisfying bounds of the form (\ref{e209}). Such a function is obtained as a fixed point of an operator acting on Banach spaces, which is introduced and studied in the next subsection.
\subsection{Banach spaces of exponential growth}\label{subsecespbanach}
We consider the open disc $D(0,\rho)$ for some $\rho>0$. Let $S_{d_j}$ be open unbounded sectors with bisecting directions $d_j \in \mathbb{R}$, for $j=1,2$, and let $\mathcal{E}$ be an open sector with finite radius $r_{\mathcal{E}}$, all with vertex at $0$ in $\mathbb{C}$.
The following norm is inspired by that considered by the authors in~\cite{family1}. It is an adequate modification of the norm described in~\cite{lama}, adapted to the framework of two complex time variables.
\begin{defin}\label{def2} Let $\nu_1,\nu_2,\beta,\mu,\rho>0$ be positive real numbers. Let $k_1,k_2 \geq 1$ be integers and let $\epsilon \in \mathcal{E}$. We put $\boldsymbol{\nu}=(\nu_1,\nu_2)$, $\boldsymbol{k}=(k_1,k_2)$, $\boldsymbol{d}=(d_1,d_2)$, and denote by
$F_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}^{\boldsymbol{d}}$ the vector space of continuous functions $(\boldsymbol{\tau},m) \mapsto h(\boldsymbol{\tau},m)$ on the set
$(\bar{D}(0,\rho) \cup S_{d_1})\times S_{d_2} \times \mathbb{R}$, which are holomorphic with respect to $\boldsymbol{\tau}$ on $(D(0,\rho) \cup S_{d_1})\times S_{d_2} $ and such that
\begin{multline}
||h(\boldsymbol{\tau},m)||_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
=
\sup_{\stackrel{\boldsymbol{\tau} \in (\bar{D}(0,\rho) \cup S_{d_1})\times S_{d_2}}{m \in \mathbb{R}}} (1+|m|)^{\mu}
\frac{1 + |\frac{\tau_1}{\epsilon}|^{2k_1}}{|\frac{\tau_1}{\epsilon}|}\frac{1 + |\frac{\tau_2}{\epsilon}|^{2k_2}}{|\frac{\tau_2}{\epsilon}|}\exp( \beta|m| - \nu_1|\frac{\tau_1}{\epsilon}|^{k_1}-\nu_2|\frac{\tau_2}{\epsilon}|^{k_2} ) |h(\boldsymbol{\tau},m)|
\end{multline}
is finite. The normed space
$(F_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}^{\boldsymbol{d}},||.||_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)})$ is a Banach space.
\end{defin}
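\textbf{Example:} As a simple illustration (this particular function plays no role in the sequel), the function $h(\boldsymbol{\tau},m)=\tau_1\tau_2(1+|m|)^{-\mu}e^{-\beta|m|}$ belongs to $F_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}^{\boldsymbol{d}}$, with
$$||h(\boldsymbol{\tau},m)||_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le|\epsilon|^2\prod_{j=1,2}\sup_{x\ge0}(1+x^{2})e^{-\nu_j x}<\infty,$$
after the change of variable $x=|\tau_j/\epsilon|^{k_j}$ in each factor. Observe also that the weight $\frac{1 + |\tau_j/\epsilon|^{2k_j}}{|\tau_j/\epsilon|}$ forces any element of the space to vanish at $\tau_1=0$ and to tend to $0$ as $\tau_2\to0$ in $S_{d_2}$.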
We fix $\epsilon \in \mathcal{E}$ and $\mu,\beta>0$ in the whole subsection. We also choose $\boldsymbol{\nu}=(\nu_1,\nu_2)\in (0,\infty)^2$, $\boldsymbol{d}=(d_1,d_2)\in\mathbb{R}^2$, and $\boldsymbol{k}=(k_1,k_2)\in\mathbb{N}^2$.
We first state some technical results. The first one follows directly from the definition of the norm of the Banach space.
\begin{lemma}\label{lema1} Let $(\boldsymbol{\tau},m) \mapsto a(\boldsymbol{\tau},m)$ be a bounded continuous function on
$(\bar{D}(0,\rho) \cup S_{d_1})\times S_{d_2} \times \mathbb{R}$, holomorphic with respect to $\boldsymbol{\tau}$ on $(D(0,\rho) \cup S_{d_1})\times S_{d_2}$. Then,
$$
|| a(\boldsymbol{\tau},m) h(\boldsymbol{\tau},m) ||_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)} \leq
\left( \sup_{\boldsymbol{\tau} \in (\bar{D}(0,\rho) \cup S_{d_1})\times S_{d_2},m \in \mathbb{R}} |a(\boldsymbol{\tau},m)| \right)
||h(\boldsymbol{\tau},m)||_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}
$$
for all $h(\boldsymbol{\tau},m) \in F_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}^{\boldsymbol{d}}$.
\end{lemma}
\begin{lemma}\label{lema2}
Let $\boldsymbol{\sigma}=(\sigma_1,\sigma_2)\in(0,\infty)^2$, and assume that $a_{\boldsymbol{\sigma},\boldsymbol{k}}$ is a holomorphic function on $(D(0,\rho)\cup S_{d_1})\times S_{d_2}$, continuous up to $(\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}$, such that
$$|a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})|\le \frac{1}{(1+|\tau_1|^{k_1})^{\sigma_1}(1+|\tau_2|^{k_2})^{\sigma_2}},$$
for every $\boldsymbol{\tau}\in (\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}$. We take $0\le \tilde{\sigma}_j\le\sigma_j$ for $j=1,2$, and real numbers $\sigma_3,\sigma_4$. Assume that one of the following holds:
\begin{itemize}
\item $\sigma_3\ge0$ and $\sigma_3+\sigma_4\le \sigma_2-\tilde{\sigma}_2$,
\item $\sigma_3=\frac{\xi}{k_2}-1$ and $\sigma_3+\frac{1}{k_2}\le \sigma_2-\tilde{\sigma}_2$,
\end{itemize}
where $\xi>1$. Then, there exists $C_1>0$, depending on $\boldsymbol{k},\nu_2,\tilde{\sigma}_j,\sigma_\ell,$ $j=1,2$, $\ell=1,\ldots,4$, such that
\begin{multline*}
\left\|a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})\tau_1^{\tilde{\sigma}_1k_1}\tau_2^{\tilde{\sigma}_2k_2}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\sigma_3}s_2^{\sigma_4}f(\tau_1,s_2^{\frac{1}{k_2}},m)ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le C_1|\epsilon|^{k_2(1+\sigma_3+\sigma_4-\sigma_2+\tilde{\sigma}_2)}\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},
\end{multline*}
for every $f\in F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$.
\end{lemma}
\begin{proof}
There exists $C_{1.1}>0$ only depending on $\sigma_1,\sigma_2,k_1,k_2$ such that
$$|a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})\tau_1^{\tilde{\sigma}_1k_1}\tau_2^{\tilde{\sigma}_2k_2}|\le \frac{C_{1.1}}{(1+|\tau_2|^{k_2})^{\sigma_2-\tilde{\sigma}_2}},$$
for every $\boldsymbol{\tau}\in (\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}$. We apply the definition of the norm of $F_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}^{\boldsymbol{d}}$ to arrive at
\begin{multline*}
\left\|a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})\tau_1^{\tilde{\sigma}_1k_1}\tau_2^{\tilde{\sigma}_2k_2}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\sigma_3}s_2^{\sigma_4}f(\tau_1,s_2^{\frac{1}{k_2}},m)ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le C_{1.1}\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\sup_{\tau_2\in S_{d_2}}\frac{1+\left|\frac{\tau_2}{\epsilon}\right|^{2k_2}}{\left|\frac{\tau_2}{\epsilon}\right|}\exp\left(-\nu_2\left|\frac{\tau_2}{\epsilon}\right|^{k_2}\right)\frac{1}{(1+|\tau_2|^{k_2})^{\sigma_2-\tilde{\sigma}_2}}\\
\times \int_0^{|\tau_2|^{k_2}}(|\tau_2|^{k_2}-h)^{\sigma_3}h^{\sigma_4}\frac{\frac{h^{\frac{1}{k_2}}}{|\epsilon|}}{1+\frac{h^2}{|\epsilon|^{2k_2}}}\exp\left(\nu_2\frac{h}{|\epsilon|^{k_2}}\right)dh.
\end{multline*}
The proof concludes with the steps providing a bound for $C_2(\epsilon)$ in the proof of Proposition 2 in~\cite{lama}.
\end{proof}
An analogous result holds by interchanging the role of the time variables.
\begin{lemma}\label{lema22}
Under the same hypotheses as in Lemma~\ref{lema2}, assume that one of the following holds:
\begin{itemize}
\item $\sigma_3\ge0$ and $\sigma_3+\sigma_4\le \sigma_1-\tilde{\sigma}_1$,
\item $\sigma_3=\frac{\xi}{k_1}-1$ and $\sigma_3+\frac{1}{k_1}\le \sigma_1-\tilde{\sigma}_1$,
\end{itemize}
where $\xi>1$. Then, there exists $C_1>0$, depending on $\boldsymbol{k},\nu_1,\tilde{\sigma}_j,\sigma_\ell$, for $j=1,2$ and $\ell=1,\ldots,4$, such that
\begin{multline*}
\left\|a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})\tau_1^{\tilde{\sigma}_1k_1}\tau_2^{\tilde{\sigma}_2k_2}\int_0^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\sigma_3}s_1^{\sigma_4}f(s_1^{\frac{1}{k_1}},\tau_2,m)ds_1\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le C_1|\epsilon|^{k_1(1+\sigma_3+\sigma_4-\sigma_1+\tilde{\sigma}_1)}\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},
\end{multline*}
for every $f\in F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$.
\end{lemma}
Grouping the integral operators in Lemma~\ref{lema2} and Lemma~\ref{lema22}, the following result is obtained.
\begin{lemma}\label{lema23}
Let $\boldsymbol{\sigma}\in (0,\infty)^{2}$. Assume that $a_{\boldsymbol{\sigma},\boldsymbol{k}}$ is given as in Lemma~\ref{lema2}. Let $0\le\tilde{\sigma}_j\le\sigma_j$ for $j=1,2$, and let $\sigma_{31},\sigma_{32},\sigma_{41},\sigma_{42}$ be real numbers such that, for each $j=1,2$, one of the following holds:
\begin{itemize}
\item $\sigma_{3j}\ge0$ and $\sigma_{3j}+\sigma_{4j}\le \sigma_j-\tilde{\sigma}_j$,
\item $\sigma_{3j}=\frac{\xi_j}{k_j}-1$ and $\sigma_{3j}+\frac{1}{k_j}\le \sigma_j-\tilde{\sigma}_j$,
\end{itemize}
where $\xi_j>1$. Then, there exists $C_1>0$ depending on $\boldsymbol{k},\boldsymbol{\nu},\sigma_j,\tilde{\sigma}_j,\sigma_{3j},\sigma_{4j}$ for $j=1,2$, such that
\begin{multline*}
\left\|a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})\tau_1^{\tilde{\sigma}_1k_1}\tau_2^{\tilde{\sigma}_2k_2}\int_0^{\tau_1^{k_1}}\int_0^{\tau_2^{k_2}}(\tau_1^{k_1}-s_1)^{\sigma_{31}}s_1^{\sigma_{41}}(\tau_2^{k_2}-s_2)^{\sigma_{32}}s_2^{\sigma_{42}}f(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m)ds_2ds_1\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le C_1^2|\epsilon|^{k_1(1+\sigma_{31}+\sigma_{41}-\sigma_1+\tilde{\sigma}_1)+k_2(1+\sigma_{32}+\sigma_{42}-\sigma_2+\tilde{\sigma}_2)}\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},
\end{multline*}
for every $f\in F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$.
\end{lemma}
The proof of Proposition 1 in~\cite{lama} can be adapted with minor modifications to the Banach spaces under study, yielding the following result.
\begin{lemma}\label{lemaaux}
Let $\gamma_2>0$. Assume that $1/k_2\le \gamma_2\le 1$. Then, there exists $C_2>0$ (depending on $\boldsymbol{\nu},\boldsymbol{k},\gamma_2$) such that
$$\left\|\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\gamma_2}f(\tau_1,s_2^{\frac{1}{k_2}},m)\frac{ds_2}{s_2}\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le C_2|\epsilon|^{k_2\gamma_2}\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},$$
for every $f(\boldsymbol{\tau},m)\in F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$.
\end{lemma}
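\textbf{Remark:} As a worked instance of Lemma~\ref{lemaaux}, take $\gamma_2=1/k_2$, which satisfies $1/k_2\le\gamma_2\le1$. The lemma then gives
$$\left\|\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}f(\tau_1,s_2^{\frac{1}{k_2}},m)\frac{ds_2}{s_2}\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le C_2|\epsilon|\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},$$
which is precisely the situation encountered in (\ref{e664}) below and explains the factor $|\epsilon|$ appearing there.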
The symmetric statement of Lemma~\ref{lemaaux}, obtained by interchanging the roles of $\tau_2$ and $\tau_1$, is derived in an analogous manner. We finally state the following auxiliary lemma.
\begin{lemma}\label{lema6}
Let $\boldsymbol{\sigma}$ and $a_{\boldsymbol{\sigma},\boldsymbol{k}}$ be as in Lemma~\ref{lema2}. Assume that $P_1,P_2,R\in\mathbb{C}[X]$ are such that
$$\hbox{deg}(R)\ge \hbox{deg}(P_1),\quad \hbox{deg}(R)\ge \hbox{deg}(P_2),\quad R(im)\neq 0$$
for every $m\in\mathbb{R}$. Assume that $\mu>\max\{\hbox{deg}(P_1)+1,\hbox{deg}(P_2)+1\}$. We take $\tilde{\sigma}_{j}\le \sigma_j$ for $j=1,2$. Then, there exists a constant $C_3>0$ (depending on $P_1,P_2,R,\mu,\boldsymbol{k},\boldsymbol{\nu}$) such that
\begin{multline*}
\left\|\frac{a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})}{R(im)}\tau_1^{\tilde{\sigma}_1k_1}\tau_2^{\tilde{\sigma}_2k_2}\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\int_{-\infty}^{\infty}\int_0^{\tau_1^{k_1}}\int_0^{s_2}P_1(i(m-m_1))\right.\\
\left.f((\tau_1^{k_1}-s_1)^{\frac{1}{k_1}},(s_2-x_2)^{\frac{1}{k_2}},m-m_1)P_2(im_1) g(s_1^{\frac{1}{k_1}},x_2^{\frac{1}{k_2}},m_1)\frac{dx_2ds_1dm_1ds_2}{(\tau_1^{k_1}-s_1)s_1(s_2-x_2)x_2}\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le C_3|\epsilon|\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\left\|g(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},
\end{multline*}
for every $f(\boldsymbol{\tau},m),g(\boldsymbol{\tau},m)\in F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$.
\end{lemma}
\begin{proof}
We follow analogous estimates as in the proof of Proposition 3 in~\cite{lama} to arrive at
\begin{multline*}
\left\|\frac{a_{\boldsymbol{\sigma},\boldsymbol{k}}(\boldsymbol{\tau})}{R(im)}\tau_1^{\tilde{\sigma}_1k_1}\tau_2^{\tilde{\sigma}_2k_2}\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\int_{-\infty}^{\infty}\int_0^{\tau_1^{k_1}}\int_0^{s_2}P_1(i(m-m_1))\right.\\
\left.\times f((\tau_1^{k_1}-s_1)^{\frac{1}{k_1}},(s_2-x_2)^{\frac{1}{k_2}},m-m_1)P_2(im_1)g(s_1^{\frac{1}{k_1}},x_2^{\frac{1}{k_2}},m_1)\frac{dx_2ds_1dm_1ds_2}{(\tau_1^{k_1}-s_1)s_1(s_2-x_2)x_2}\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le \left(\sup_{\tau_1\in (D(0,\rho)\cup S_{d_1})}\frac{|\tau_1|^{\tilde{\sigma}_1k_1}}{(1+|\tau_1|^{k_1})^{\sigma_1}}\frac{1+\left|\frac{\tau_1}{\epsilon}\right|^{2k_1}}{\left|\frac{\tau_1}{\epsilon}\right|}\int_0^{|\tau_1|^{k_1}}\frac{\frac{(|\tau_1|^{k_1}-h_1)^{\frac{1}{k_1}}}{|\epsilon|}}{1+\frac{(|\tau_1|^{k_1}-h_1)^2}{|\epsilon|^{2k_1}}}\frac{\frac{h_1^{\frac{1}{k_1}}}{|\epsilon|}}{1+\frac{h_1^2}{|\epsilon|^{2k_1}}}\frac{dh_1}{(|\tau_1|^{k_1}-h_1) h_1}\right)\\
\times \left(\sup_{\tau_2\in S_{d_2}}\frac{|\tau_2|^{\tilde{\sigma}_2k_2}}{(1+|\tau_2|^{k_2})^{\sigma_2}}\frac{1+\left|\frac{\tau_2}{\epsilon}\right|^{2k_2}}{\left|\frac{\tau_2}{\epsilon}\right|}\exp\left(-\nu_2\left|\frac{\tau_2}{\epsilon}\right|^{k_2}\right)\int_0^{|\tau_2|^{k_2}}(|\tau_2|^{k_2}-h_2)^{\frac{1}{k_2}}\right.\\
\left.\int_0^{|s_2|}\frac{\frac{(h_2-x_2)^{\frac{1}{k_2}}}{|\epsilon|}}{1+\frac{(h_2-x_2)^2}{|\epsilon|^{2k_2}}}\exp\left(\nu_2\frac{h_2}{|\epsilon|^{k_2}}\right)\frac{\frac{x_2^{\frac{1}{k_2}}}{|\epsilon|}}{1+\frac{x_2^2}{|\epsilon|^{2k_2}}}\frac{dx_2dh_2}{(h_2-x_2) x_2}\right)\left\|f(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\left\|g(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}.
\end{multline*}
On the one hand, the expression $|\tau_2|^{\tilde{\sigma}_2k_2}/(1+|\tau_2|^{k_2})^{\sigma_2}$ is bounded. Moreover,
\begin{multline*}
A_2:=\sup_{\tau_2\in S_{d_2}}\frac{1+\left|\frac{\tau_2}{\epsilon}\right|^{2k_2}}{\left|\frac{\tau_2}{\epsilon}\right|}\exp\left(-\nu_2\left|\frac{\tau_2}{\epsilon}\right|^{k_2}\right)\int_0^{|\tau_2|^{k_2}}(|\tau_2|^{k_2}-h_2)^{\frac{1}{k_2}}\\
\int_0^{|s_2|}\frac{\frac{(h_2-x_2)^{\frac{1}{k_2}}}{|\epsilon|}}{1+\frac{(h_2-x_2)^2}{|\epsilon|^{2k_2}}}\exp\left(\nu_2\frac{h_2}{|\epsilon|^{k_2}}\right)\frac{\frac{x_2^{\frac{1}{k_2}}}{|\epsilon|}}{1+\frac{x_2^2}{|\epsilon|^{2k_2}}}\frac{dx_2dh_2}{(h_2-x_2) x_2}
\end{multline*}
can be estimated following the same steps as in the study of upper bounds for $C_{3.2}$ in formula (35) of~\cite{lama}. We get the existence of $C_{3.2}>0$ such that $A_2\le C_{3.2}|\epsilon|$. It only remains to prove that $A_1$ is bounded from above, where
\begin{align*}
A_1:=&\sup_{\tau_1\in (D(0,\rho)\cup S_{d_1})}\frac{|\tau_1|^{\tilde{\sigma}_1k_1}}{(1+|\tau_1|^{k_1})^{\sigma_1}}\frac{1+\left|\frac{\tau_1}{\epsilon}\right|^{2k_1}}{\left|\frac{\tau_1}{\epsilon}\right|}\int_0^{|\tau_1|^{k_1}}\frac{\frac{(|\tau_1|^{k_1}-h_1)^{\frac{1}{k_1}}}{|\epsilon|}}{1+\frac{(|\tau_1|^{k_1}-h_1)^2}{|\epsilon|^{2k_1}}}\frac{\frac{h_1^{\frac{1}{k_1}}}{|\epsilon|}}{1+\frac{h_1^2}{|\epsilon|^{2k_1}}}\frac{dh_1}{(|\tau_1|^{k_1}-h_1) h_1}\\
=&\sup_{\tau_1\in (D(0,\rho)\cup S_{d_1})}\tilde{A}_1.
\end{align*}
We distinguish two cases. First, we assume that $|\tau_1|\ge C$, for some $C>0$. Then, it holds that
$$\frac{|\tau_1|^{\tilde{\sigma}_1k_1}}{(1+|\tau_1|^{k_1})^{\sigma_1}}$$ is upper bounded, and by putting $x=(|\tau_1|/|\epsilon|)^{k_1}$ one can estimate $\tilde{A}_1$ from above by
$$\sup_{x\ge\tilde{C}}\frac{1+x^2}{x^{1/k_1}}\int_0^{\infty}\frac{dh}{(1+(x-h)^2)(1+h^2)},$$
for some $\tilde{C}>0$. We apply Corollary 4.9 in~\cite{cota} to conclude that
$$\tilde{A}_1\le \sup_{x\ge\tilde{C}}\frac{1+x^2}{x^{1/k_1}}\frac{j_1}{x^2+1},$$
for some $j_1>0$. The previous expression is upper bounded by a positive constant. Second, in the case that $|\tau_1|<C$, we have $(1+|\tau_1|^{k_1})^{\sigma_1}\ge 1$. We put $x=(|\tau_1|/|\epsilon|)^{k_1}$ to get that
$$\sup_{\tau_1\in (D(0,\rho)\cup S_{d_1}),|\tau_1|\le C}\tilde{A}_1\le \sup_{x\ge0}x\frac{1+x^2}{x^{1/k_1}}\int_0^x\frac{(x-h_1)^{\frac{1}{k_1}}}{1+(x-h_1)^2}\frac{h_1^{\frac{1}{k_1}}}{1+h_1^{2}}\frac{dh_1}{h_1(x-h_1)}.$$
A partial fraction decomposition yields
$$\int_0^x\frac{(x-h_1)^{\frac{1}{k_1}}}{1+(x-h_1)^2}\frac{h_1^{\frac{1}{k_1}}}{1+h_1^{2}}\frac{dh_1}{h_1(x-h_1)}\le \frac{j_{k_1}}{x^{1-\frac{2}{k_1}}(x^2+4)},\qquad x\ge0,$$
for some $j_{k_1}>0$, valid for $k_1\ge 2$. This concludes the existence of a positive upper bound for $A_1$, and the proof follows from this point.
\end{proof}
\subsubsection{Domain of existence for the solution}\label{secnodef}
The purpose of this section is twofold. On the one hand, we motivate the fact that any actual holomorphic solution $\omega(\boldsymbol{\tau},m,\epsilon)$ of (\ref{e310}) is not well defined on sets of the form $S_{d_1}\times (S_{d_2}\cup D(0,\rho_2))$, for $d_1,d_2\in\mathbb{R}$ and any choice of $\rho_2>0$. This is due to a small divisor phenomenon, which prevents us from proceeding with a summability procedure. On the other hand, we aim to display geometric conditions on the natural domains in which the solution is defined.
In order to motivate that the natural domains of definition of a solution cannot be of the form $S_{d_1}\times (S_{d_2}\cup D(0,\rho_2))$, for $d_1,d_2\in\mathbb{R}$, let $\rho_2>0$ be fixed. We rewrite the equation $P_{m}(\boldsymbol{\tau})=0$ (see (\ref{e326}) for the definition of $P_{m}$) in the form
\begin{equation}\label{e494}
\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}=\frac{-Q(im)}{R_{D_1,D_2}(im)k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}\tau_1^{k_1\delta_{D_1}}+R_{D_3}(im)k_2^{\tilde{\delta}_{D_3}-1}\tau_2^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}}.
\end{equation}
We put $T_2=\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}$, and write (\ref{e494}) in the form $\Psi(T_2)=T_2$, where
\begin{equation}\label{e495}
\Psi(T_2):=-Q(im)\left(R_{D_1,D_2}(im)k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}\tau_1^{k_1\delta_{D_1}}+R_{D_3}(im)k_2^{\tilde{\delta}_{D_3}-1}T_2^{\frac{\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2}}{\tilde{\delta}_{D_2}-1}}\right)^{-1}.
\end{equation}
\begin{lemma}\label{lema502}
Let $d_1,d_2\in\mathbb{R}$. Under the assumption that $\frac{\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2}}{\tilde{\delta}_{D_2}-1}\in\mathbb{N}\setminus\{0\}$, there exists $\tau_1\in S_{d_1}$ such that the following statements hold:
\begin{enumerate}
\item $\Psi$ is a map from $E:=\overline{D}(0,\left(\frac{\rho_2}{2}\right)^{k_2(\tilde{\delta}_{D_{2}}-1)})$ into itself.
\item $\Psi:E\to E$ is a contractive map.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\tau_1\in S_{d_1}$ with large enough modulus in such a way that
\begin{equation}\label{e511}
\left|R_{D_1,D_2}(im)k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}+R_{D_3}(im)k_2^{\tilde{\delta}_{D_3}-1}\frac{T_2^{\frac{\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2}}{\tilde{\delta}_{D_2}-1}}}{\tau_1^{k_1\delta_{D_1}}}\right|\ge |R_{D_3}(im)|,
\end{equation}
for every $m\in\mathbb{R}$ and all $T_2\in \hat{S}_{d_2}\cup D(0,\hat{\rho}_2)$. Here, $\hat{S}_{d_2}$ stands for the set $\hat{S}_{d_2}:=\{T_2=\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}:\ \tau_2\in S_{d_2}\}$, and $\hat{\rho}_2=\rho_2^{k_2(\tilde{\delta}_{D_2}-1)}$. The assumption (\ref{e165b}) on the geometry of the problem and (\ref{e511}) yield
\begin{multline*}
|\Psi(T_2)|\le\frac{|Q(im)|}{|\tau_1|^{k_1\delta_{D_1}}}\left|R_{D_1,D_2}(im)k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}+R_{D_3}(im)k_2^{\tilde{\delta}_{D_3}-1}\frac{T_2^{\frac{\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2}}{\tilde{\delta}_{D_2}-1}}}{\tau_1^{k_1\delta_{D_1}}}\right|^{-1}\\
\le \frac{|Q(im)|}{|\tau_1|^{k_1\delta_{D_1}}|R_{D_3}(im)|}\le \left(\frac{\rho_2}{2}\right)^{k_2(\tilde{\delta}_{D_2}-1)},
\end{multline*}
for large enough $|\tau_1|$. As a result, we get the first statement. We have
\begin{multline*}
|\Psi'(z)|\le \frac{|R_{D_3}(im)|k_2^{\tilde{\delta}_{D_3}-1}\frac{\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2}}{\tilde{\delta}_{D_2}-1}\left(\frac{\rho_2}{2}\right)^{k_2(\tilde{\delta}_{D_2}-1)\frac{\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2}}{\tilde{\delta}_{D_2}-1}-1}|Q(im)|}{(|\tau_1|^{k_1\delta_{D_1}}|R_{D_3}(im)|)^2}\\
\le k_2^{\tilde{\delta}_{D_3}-1}\frac{\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2}}{\tilde{\delta}_{D_2}-1}\left(\frac{\rho_2}{2}\right)^{k_2(\tilde{\delta}_{D_3}-2\tilde{\delta}_{D_2}+1)}\left|\frac{Q(im)}{R_{D_3}(im)}\right|\frac{1}{|\tau_1|^{2k_1\delta_{D_1}}}\le\frac{1}{2},
\end{multline*}
for every $z\in D(0,\left(\frac{\rho_2}{2}\right)^{k_2(\tilde{\delta}_{D_2}-1)})$, $m\in\mathbb{R}$, and large enough $|\tau_1|$. We get that
$$|\Psi(T_2)-\Psi(T'_2)|\le\sup_{z\in[T_2,T'_2]}|\Psi'(z)||T_2-T'_2|\le \frac{1}{2}|T_2-T'_2|,$$
for every $T_2,T'_2\in \overline{D}(0,\left(\frac{\rho_2}{2}\right)^{k_2(\tilde{\delta}_{D_2}-1)})$. The application of the mean value theorem entails the second statement of the result.
\end{proof}
As a consequence of Lemma~\ref{lema502}, we deduce that $\Psi$ has a unique fixed point in $E$; hence, there exists a unique solution of $\Psi(T_2)=T_2$ for $T_2\in E$, say $T_0$. The solutions of (\ref{e494}) are the solutions of $\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}=T_0$. As a matter of fact, the $k_2(\tilde{\delta}_{D_2}-1)$ roots of $T_0$ belong to the disc $D(0,\frac{\rho_2}{2})$.
\textbf{Remark:} Observe that, in the case that $\tilde{\delta}_{D_2}=\tilde{\delta}_{D_3}$, the equation $\Psi(T_2)=T_2$ can be solved directly in terms of $\tau_1$. In this case, the $k_2(\tilde{\delta}_{D_2}-1)$ roots of $P_{m}(\boldsymbol{\tau})=0$ lie in $D(0,\frac{\rho_2}{2})$, and we cannot define $\omega(\boldsymbol{\tau},m,\epsilon)$ on any set of the form $S_{d_1}\times (S_{d_2}\cup D(0,\rho_2))$.
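\textbf{Remark:} The previous bounds give a rough quantitative picture of the small divisor phenomenon announced above; the following estimate is only illustrative and rests on (\ref{e165b}). From the proof of Lemma~\ref{lema502} one has $|T_0|=|\Psi(T_0)|\le |Q(im)|/(|\tau_1|^{k_1\delta_{D_1}}|R_{D_3}(im)|)$, so every root $\tau_2$ of $P_m(\boldsymbol{\tau})=0$ lying in $D(0,\frac{\rho_2}{2})$ satisfies
$$|\tau_2|\le\left(\frac{|Q(im)|}{|R_{D_3}(im)|\,|\tau_1|^{k_1\delta_{D_1}}}\right)^{\frac{1}{k_2(\tilde{\delta}_{D_2}-1)}}.$$
These roots therefore accumulate at $\tau_2=0$ as $|\tau_1|\to\infty$ along $S_{d_1}$, which is why no common disc $D(0,\rho_2)$ in the variable $\tau_2$ can be free of zeros of $P_m$ for all $\tau_1\in S_{d_1}$.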
\vspace{0.3cm}
In the next paragraphs we display geometric conditions on the problem, which allow us to attain lower estimates on $P_m(\boldsymbol{\tau})$, defined in (\ref{e326}). Along the way, the choice of directions $d_1$ and $d_2$ is made in accordance with the geometry of the problem.
We write
$$\frac{P_m(\boldsymbol{\tau})}{Q(im)}=1+\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}\left(\frac{R_{D_1,D_2}(im)}{Q(im)}k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}\tau_1^{k_1\delta_{D_1}}+\frac{R_{D_3}(im)}{Q(im)}k_2^{\tilde{\delta}_{D_3}-1}\tau_2^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}\right).$$
We distinguish different cases.
\begin{enumerate}
\item[1.] Assume that $\tau_1\in D(0,\rho_1)$, for some small enough $\rho_1>0$.
\begin{enumerate}
\item[1.1.] Assume that $\tau_2\in D(0,\rho_2)$, for small enough $\rho_2>0$. Regarding (\ref{e165b}), there exist $r^1_{D_1,D_2,Q},r^1_{D_3,Q}>0$ such that
\end{enumerate}
\end{enumerate}
\begin{multline*}
\left|\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}\left(\frac{R_{D_1,D_2}(im)}{Q(im)}k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}\tau_1^{k_1\delta_{D_1}}+\frac{R_{D_3}(im)}{Q(im)}k_2^{\tilde{\delta}_{D_3}-1}\tau_2^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}\right)\right|\\
\le \rho_2^{k_2(\tilde{\delta}_{D_2}-1)}\left(r^{1}_{D_1,D_2,Q}k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}\rho_1^{k_1\delta_{D_1}}+r^1_{D_3,Q}k_2^{\tilde{\delta}_{D_3}-1}\rho_2^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}\right)\le \frac{1}{4}
\end{multline*}
for every $m\in\mathbb{R}$, every $\tau_1\in D(0,\rho_1)$, and $\tau_2\in D(0,\rho_2)$. We conclude that
\begin{equation}\label{e479}
\left|\frac{P_{m}(\boldsymbol{\tau})}{Q(im)}\right|\ge C_1,
\end{equation}
for some positive constant $C_1$, common for all $m\in\mathbb{R}$, $\tau_1\in D(0,\rho_1)$, and $\tau_2\in D(0,\rho_2)$.
\begin{enumerate}
\item[1.2.] Assume that $\tau_2\in S_{d_2}$, with $|\tau_2|\ge \rho_0$, for some fixed $\rho_0>0$. We write
\end{enumerate}
\begin{multline*}
\frac{R_{D_1,D_2}(im)}{Q(im)}k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}\tau_1^{k_1\delta_{D_1}}+\frac{R_{D_3}(im)}{Q(im)}k_2^{\tilde{\delta}_{D_3}-1}\tau_2^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}\\
=\frac{R_{D_3}(im)}{Q(im)}k_2^{\tilde{\delta}_{D_3}-1}\tau_2^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}(1+A(m,\boldsymbol{\tau})),
\end{multline*}
where
$$A(m,\boldsymbol{\tau}):=\frac{R_{D_1,D_2}(im)}{R_{D_3}(im)}\frac{k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}\tau_1^{k_1\delta_{D_1}}}{k_2^{\tilde{\delta}_{D_3}-1}\tau_2^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}}.$$
From the assumptions made in (\ref{e165b}), we get that
$$|A(m,\boldsymbol{\tau})|\le r_{D_1,D_2,D_3}^1\frac{k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}}{k_2^{\tilde{\delta}_{D_3}-1}}\frac{\rho_1^{k_1\delta_{D_1}}}{\rho_0^{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}},$$
for some $r_{D_1,D_2,D_3}^1>0$. Taking small enough $\rho_1>0$, we can write
$$1+A(m,\boldsymbol{\tau})=\rho_{m,\boldsymbol{\tau}}e^{i\theta_{m,\boldsymbol{\tau}}},$$
with $\rho_{m,\boldsymbol{\tau}}$ close to 1 and $\theta_{m,\boldsymbol{\tau}}$ close to 0, uniformly for every $m\in\mathbb{R}$ and all $\tau_2\in S_{d_2}$, with $|\tau_2|>\rho_0$ and $\tau_1\in D(0,\rho_1)$.
Therefore, we have
$$\frac{P_m(\boldsymbol{\tau})}{Q(im)}=1+k_2^{\tilde{\delta}_{D_3}-1}\frac{R_{D_3}(im)}{Q(im)}\tau_2^{k_2(\tilde{\delta}_{D_3}-1)}\rho_{m,\boldsymbol{\tau}}e^{i\theta_{m,\boldsymbol{\tau}}}.$$
Let $\tau_{2,k}$ be the roots satisfying
$$\tau_{2,k}^{k_2(\tilde{\delta}_{D_3}-1)}=\frac{-1}{k_{2}^{\tilde{\delta}_{D_3}-1}\rho_{m,\boldsymbol{\tau}}}\frac{Q(im)}{R_{D_3}(im)}e^{-i\theta_{m,\boldsymbol{\tau}}},$$
for $k=0,\ldots,k_2(\tilde{\delta}_{D_3}-1)-1$. We select the sector $S_{d_2}$ in such a way that if $\tau_2\in S_{d_2}$, then it can be expressed as $\tau_2=\rho e^{i\theta}\tau_{2,k}$ for some fixed $k$, some $\theta$ close to 0, $\theta\neq 0$, and any $\rho>0$. We get
$$\frac{P_m(\boldsymbol{\tau})}{Q(im)}=1-\rho^{k_2(\tilde{\delta}_{D_3}-1)}e^{i\theta k_2(\tilde{\delta}_{D_3}-1)}=\rho^{k_2(\tilde{\delta}_{D_3}-1)}\left(-e^{i\theta k_2(\tilde{\delta}_{D_3}-1)}+\frac{1}{\rho^{k_2(\tilde{\delta}_{D_3}-1)}}\right).$$
Now, there exists $C_1>0$ such that
$$\left|-e^{i\theta k_2(\tilde{\delta}_{D_3}-1)}+\frac{1}{\rho^{k_2(\tilde{\delta}_{D_3}-1)}}\right|\ge C_1$$
for every $\rho\ge 0$. By construction, we also have $\rho^{k_2(\tilde{\delta}_{D_3}-1)}=|\tau_2|^{k_2(\tilde{\delta}_{D_3}-1)}/|\tau_{2,k}|^{k_2(\tilde{\delta}_{D_3}-1)}.$ We deduce the existence of $C_2>0$ such that $\rho^{k_2(\tilde{\delta}_{D_3}-1)}\ge C_2|\tau_2|^{k_2(\tilde{\delta}_{D_3}-1)}.$
As a result, we see that
\begin{equation}\label{e525}
\left|\frac{P_{m}(\boldsymbol{\tau})}{Q(im)}\right|\ge C_1C_2|\tau_2|^{k_2(\tilde{\delta}_{D_3}-1)},
\end{equation}
for every $\tau_2\in S_{d_2}$ with $|\tau_2|\ge \rho_0$ and $\tau_1\in D(0,\rho_1)$, for some small enough $\rho_1>0$.
\begin{enumerate}
\item[2.] Assume that $\tau_1\in S_{d_1}$ with $|\tau_1|\ge \rho_1$ for some fixed $\rho_1>0$, and $\tau_2\in S_{d_2}$.
\end{enumerate}
We select $S_{d_1}$ in such a way that for $\tau_1\in S_{d_1}$ one can write
$$\tau_1=\xi_1 e^{i\theta_1}\tau_{2}^{\frac{k_2(\tilde{\delta}_{D_3}-\tilde{\delta}_{D_2})}{k_1\delta_{D_1}}}\left(\frac{R_{D_3}(im)}{R_{D_1,D_2}(im)}\right)^{\frac{1}{k_1\delta_{D_1}}},$$
(here, we have chosen any particular branch of the $(k_1\delta_{D_1})$-th root), for some $\xi_1>0$ and $\theta_1$ close to 0, when $\tau_2\in S_{d_2}$. Since $|\tau_1|\ge \rho_1$, we have that $\xi_1>\nu_1>0$ for some fixed $\nu_1>0$.
\textbf{Remark:} This factorization is a particular case of a so-called blow up in the desingularization procedure. We refer to the excellent textbook of Y. Ilyashenko and S. Yakovenko~\cite{iy}, Chapter 1, Section 8, for an introduction to the geometric aspects.
We write
$$\frac{P_m(\boldsymbol{\tau})}{Q(im)}=1+\tau_2^{k_2(\tilde{\delta}_{D_3}-1)}\xi_1^{k_1\delta_{D_1}}\frac{R_{D_3}(im)}{Q(im)}\left(k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}e^{i\theta_1 k_1\delta_{D_1}}+\frac{k_2^{\tilde{\delta}_{D_3}-1}}{\xi_1^{k_1\delta_{D_1}}}\right).$$
Again, taking into account (\ref{e165b}) one can select a sector $S_{d_2}$ which additionally satisfies
$$\left|\frac{1}{\tau_2^{k_2(\tilde{\delta}_{D_3}-1)}\xi_1^{k_1\delta_{D_1}}}+\frac{R_{D_3}(im)}{Q(im)}(k_1^{\delta_{D_1}}k_2^{\tilde{\delta}_{D_2}-1}e^{i\theta_1k_1\delta_{D_1}}+\frac{k_2^{\tilde{\delta}_{D_3}-1}}{\xi_1^{k_1\delta_{D_1}}})\right|\ge C>0,$$
for some $C>0$, valid for every $\tau_2\in S_{d_2}$, and $\xi_1>\nu_1>0$.
As a result, we get
\begin{multline}\label{e547}
\left|\frac{P_m(\boldsymbol{\tau})}{Q(im)}\right|\ge C|\xi_1|^{k_1\delta_{D_1}}|\tau_2|^{k_2(\tilde{\delta}_{D_3}-1)}\\
= C|\tau_1|^{k_1\delta_{D_1}}|\tau_2|^{k_2(\tilde{\delta}_{D_2}-1)}\left|\frac{R_{D_1,D_2}(im)}{R_{D_3}(im)}\right|
\ge \tilde{C}|\tau_1|^{k_1\delta_{D_1}}|\tau_2|^{k_2(\tilde{\delta}_{D_2}-1)},
\end{multline}
for some $\tilde{C}>0$ and all $m\in\mathbb{R}$.
In summary, we have established the following result.
\begin{prop}\label{prop614}
There exist $d_1,d_2\in\mathbb{R}$ and $\rho_1>0$ such that for every $m\in\mathbb{R}$ and all $\tau_1\in D(0,\rho_1)\cup S_{d_1}$, $\tau_2\in S_{d_2}$ one has
\begin{equation}\label{e557}
\left|\frac{P_m(\boldsymbol{\tau})}{Q(im)}\right|\ge C(1+|\tau_1|^{k_1})^{\delta_{D_1}}f(\boldsymbol{\tau}),
\end{equation}
for some $C>0$, and where $f(\boldsymbol{\tau})$ is defined by
$$f(\boldsymbol{\tau})=\left\{ \begin{array}{lcc}
(1+|\tau_2|^{k_2})^{\tilde{\delta}_{D_3}-1} & \hbox{if} & |\tau_1|\le \rho_1, \\
(1+|\tau_2|^{k_2})^{\tilde{\delta}_{D_2}-1} & \hbox{if} & |\tau_1|> \rho_1.
\end{array}
\right.
$$
\end{prop}
\textbf{Remark:} Without loss of generality, we may assume that $\rho_1\le \rho$, where $\rho>0$ is the radius of the disc of holomorphy with respect to the first time variable appearing in Section~\ref{seclayout}.
\subsection{Fixed point of a convolution operator in Banach spaces}\label{fixed}
The main purpose of this section is to obtain a fixed point of a certain operator defined on a Banach space. It will allow us to construct the analytic solution of the main problem under study, (\ref{ICP_main0}).
For every $\epsilon\in D(0,\epsilon_0)\setminus\{0\}$, we consider the operator $\mathcal{H}_\epsilon$ defined by
\begin{multline}
\mathcal{H}_{\epsilon}(\omega(\boldsymbol{\tau},m))\\
=\frac{-(k_1\tau_1^{k_1})^{\delta_{D_1}}}{P_m(\boldsymbol{\tau})}\frac{\tilde{\mathcal{A}}_{D_2}(\tau_2)}{k_2\tau_2^{k_2}}R_{D_1,D_2}(im)\omega(\boldsymbol{\tau},m)- \frac{(k_2\tau_2^{k_2})^{\tilde{\delta}_{D_2}-1}}{P_m(\boldsymbol{\tau})}\mathcal{A}_{D_1}(\tau_1)R_{D_1,D_2}(im)\omega(\boldsymbol{\tau},m)\\
-\frac{1}{P_m(\boldsymbol{\tau})}\mathcal{A}_{D_1}(\tau_1)\frac{\tilde{\mathcal{A}}_{D_2}(\tau_2)}{k_2\tau_2^{k_2}}R_{D_1,D_2}(im)\omega(\boldsymbol{\tau},m)-\frac{1}{P_m(\boldsymbol{\tau})}\frac{\tilde{\mathcal{A}}_{D_3}(\tau_2)}{k_2\tau_2^{k_2}}R_{D_3}(im)\omega(\boldsymbol{\tau},m)\\
+\frac{1}{P_m(\boldsymbol{\tau})}\frac{\epsilon^{-1}}{(2\pi)^{\frac{1}{2}}}\frac{\tau_1^{k_1}}{k_2\Gamma\left(1+\frac{1}{k_2}\right)}
\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\int_{-\infty}^{\infty}\int_0^{\tau_1^{k_1}}\int_0^{s_2}P_1(i(m-m_1),\epsilon)\\
\times \omega((\tau_1^{k_1}-s_1)^{\frac{1}{k_1}},(s_2-x_2)^{\frac{1}{k_2}},m-m_1)P_2(im_1,\epsilon)\omega(s_1^{\frac{1}{k_1}},x_2^{\frac{1}{k_2}},m_1)\frac{dx_2ds_1dm_1ds_2}{(\tau_1^{k_1}-s_1)s_1(s_2-x_2)x_2}\\
+\frac{1}{P_m(\boldsymbol{\tau})}\sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon^{\Delta_{l_1,l_2}-d_{l_1}-\tilde{d}_{l_2}+\delta_{l_1}+\tilde{\delta}_{l_2}-1}R_{l_1,l_2}(im)\frac{\tau_1^{k_1}}{k_2\Gamma\left(\frac{d_{l_1,k_1}}{k_1}\right)\Gamma\left(\frac{d_{l_2,k_2}}{k_2}\right)}\\
\times
\int_0^{\tau_2^{k_2}}\int_0^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\frac{d_{l_1,k_1}}{k_1}-1}(\tau_2^{k_2}-s_2)^{\frac{d_{l_2,k_2}}{k_2}-1}k_1^{\delta_{l_1}}s_1^{\delta_{l_1}}k_2^{\tilde{\delta}_{l_2}}s_2^{\tilde{\delta}_{l_2}}\omega(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m)\frac{ds_1}{s_1}\frac{ds_2}{s_2}\\
+\frac{\epsilon^{-1}}{k_2P_m(\boldsymbol{\tau})\Gamma\left(1+\frac{1}{k_2}\right)}\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\psi_{\boldsymbol{k}}(\tau_1,s_2^{\frac{1}{k_2}},m,\epsilon)\frac{ds_2}{s_2}.\label{e623}
\end{multline}
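\textbf{Remark:} For orientation, note that $\mathcal{H}_{\epsilon}$ is nothing but the right-hand side of (\ref{e310}) divided by $P_m(\boldsymbol{\tau})$, which, in view of Proposition~\ref{prop614}, does not vanish on the domains under consideration. Hence
$$\omega(\boldsymbol{\tau},m,\epsilon)=\mathcal{H}_{\epsilon}(\omega(\boldsymbol{\tau},m,\epsilon))\quad\Longleftrightarrow\quad \omega(\boldsymbol{\tau},m,\epsilon)\ \hbox{solves (\ref{e310})},$$
so any fixed point of $\mathcal{H}_{\epsilon}$ provides a solution of the convolution equation.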
\begin{prop}\label{prop653}
Assume that the hypotheses (\ref{e120})-(\ref{e165b}) hold. There exist $\varpi,\xi,R>0$ such that if
$$\left\|\psi_{\boldsymbol{k}}(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le \xi\qquad\hbox{and}\qquad \max\{R_1,R_2\}\le R,$$
for all $\epsilon\in D(0,\epsilon_0)\setminus\{0\}$, where $R_1,R_2$ are the quantities appearing in the geometric conditions (\ref{eannulus}), then the operator $\mathcal{H}_{\epsilon}$ defined in (\ref{e623}) admits a unique fixed point $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)\in F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$ such that $\left\|\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le \varpi$, for all $\epsilon\in D(0,\epsilon_0)\setminus\{0\}$.
\end{prop}
\begin{proof}
Take $d_1,d_2\in\mathbb{R}$ and $\rho_1>0$ determined in Proposition~\ref{prop614}. First, we apply Lemma~\ref{lema1} and Lemma~\ref{lema2} to get that
\begin{multline}
\left\|\frac{\tau_1^{k_1\delta_{D_1}}R_{D_1,D_2}(im)}{P_m(\boldsymbol{\tau})}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\tilde{\delta}_{D_2}-p_2-1}s_2^{p_2-1}\omega(\tau_1,s_2^{\frac{1}{k_2}},m)ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le \frac{C_1}{C}\sup_{m\in\mathbb{R}}\left|\frac{R_{D_1,D_2}(im)}{Q(im)}\right| \left\|\omega(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e658}
\end{multline}
for every $1\le p_2\le \tilde{\delta}_{D_2}-1$.
In view of Lemma~\ref{lema1} and Lemma~\ref{lema22} we have
\begin{multline}
\left\|\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}\frac{R_{D_1,D_2}(im)}{P_m(\boldsymbol{\tau})}\tau_1^{k_1}
\int_0^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\delta_{D_1}-p_1-1}s_1^{p_1-1}\omega(s_1^{\frac{1}{k_1}},\tau_2,m)ds_1\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le \frac{C_1}{C}\sup_{\tau_2\in S_{d_2}}\frac{|\tau_2|^{k_2(\tilde{\delta}_{D_2}-1)}}{(1+|\tau_2|^{k_2})^{\tilde{\delta}_{D_2}-1}}\sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}\left\|\omega(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e659}
\end{multline}
for every $1\le p_1\le \delta_{D_1}-1$.
Moreover, from Lemma~\ref{lema1} and Lemma~\ref{lema23} we have
\begin{multline}
\left\|\tau_1^{k_1}\frac{R_{D_1,D_2}(im)}{P_m(\boldsymbol{\tau})}\int_0^{\tau_2^{k_2}}\int_0^{\tau_1^{k_1}}(\tau_2^{k_2}-s_2)^{\tilde{\delta}_{D_2}-p_2-1}s_2^{p_2-1}(\tau_1^{k_1}-s_1)^{\delta_{D_1}-p_1-1}s_1^{p_1-1}\right.\\
\left.\times \omega(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m)ds_1ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le\frac{C_1}{C}\sup_{m\in\mathbb{R}}\left|\frac{R_{D_1,D_2}(im)}{Q(im)}\right| \left\|\omega(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e660}
\end{multline}
for every $1\le p_1\le \delta_{D_1}-1$ and $1\le p_2\le \tilde{\delta}_{D_2}-1$.
We apply Lemma~\ref{lema1} and Lemma~\ref{lema2} to get
\begin{multline}
\left\|\frac{1}{P_m(\boldsymbol{\tau})}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\tilde{\delta}_{D_3}-p_3-1}s_2^{p_3-1}\omega(\tau_1,s_2^{\frac{1}{k_2}},m)ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le \frac{C_1}{C}\sup_{m\in\mathbb{R}}\left|\frac{1}{Q(im)}\right| \left\|\omega(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e661}
\end{multline}
for every $1\le p_3\le \tilde{\delta}_{D_3}-1$.
Regarding Lemma~\ref{lema1} and Lemma~\ref{lema6}, we deduce that
\begin{multline}
\left\|\frac{\epsilon^{-1}}{P_m(\boldsymbol{\tau})}\tau_1^{k_1}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\int_{-\infty}^{\infty}\int_0^{\tau_1^{k_1}}\int_0^{s_2}P_1(i(m-m_1),\epsilon)\right.\\
\left.\times \omega((\tau_1^{k_1}-s_1)^{\frac{1}{k_1}},(s_2-x_2)^{\frac{1}{k_2}},m-m_1)P_2(im_1,\epsilon)\omega(s_1^{\frac{1}{k_1}},x_2^{\frac{1}{k_2}},m_1)\frac{dx_2ds_1dm_1ds_2}{(\tau_1^{k_1}-s_1)s_1(s_2-x_2)x_2}\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le |\epsilon|C_3\sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}\left\|\omega(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}^2.\label{e662}
\end{multline}
We apply Lemma~\ref{lema1} and Lemma~\ref{lema23} to get
\begin{multline}
\left\|\frac{R_{l_1,l_2}(im)}{P_m(\boldsymbol{\tau})}\tau_1^{k_1}\int_0^{\tau_2^{k_2}}\int_0^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\frac{d_{l_1,k_1}}{k_1}-1}(\tau_2^{k_2}-s_2)^{\frac{d_{l_2,k_2}}{k_2}-1}s_1^{\delta_{l_1}-1}s_2^{\tilde{\delta}_{l_2}-1}\right.\\
\left.\times \omega(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m)ds_1 ds_2 \right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)} \le\frac{C_1}{C} \sup_{m\in\mathbb{R}}\left|\frac{R_{l_1,l_2}(im)}{Q(im)}\right|\left\|\omega(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}
\label{e663}\end{multline}
for every $0\le l_j\le D_j-1$, for $j=1,2$.
Finally, the application of Lemma~\ref{lema1} and Lemma~\ref{lemaaux} yield
\begin{equation}\label{e664}
\left\|\frac{1}{P_m(\boldsymbol{\tau})}\int_{0}^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\psi_{\boldsymbol{k}}(\tau_1,s_2^{\frac{1}{k_2}},m,\epsilon)\frac{ds_2}{s_2}\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le\frac{C_1}{C}\sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}|\epsilon|\left\|\psi_{\boldsymbol{k}}(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}
\end{equation}
Take small enough $\varpi,\xi,\epsilon_0>0$ and assume that
$$\sup_{m\in\mathbb{R}}\left|\frac{R_{D_1,D_2}(im)}{Q(im)}\right|\le R \quad \hbox{ and } \sup_{m\in\mathbb{R}}\left|\frac{R_{D_3}(im)}{Q(im)}\right|\le R,$$
in such a way that
\begin{multline}
k_1^{\delta_{D_1}}\frac{C_1 R}{C}\sum_{1\le p_2\le\tilde{\delta}_{D_2}-1}\frac{|A_{\tilde{\delta}_{D_2},p_2}|}{\Gamma(\tilde{\delta}_{D_2}-p_2)}\varpi+k_2^{\tilde{\delta}_{D_2}-1}\frac{C_1R}{C}\sum_{1\le p_1\le\delta_{D_1}-1}\frac{|A_{\delta_{D_1},p_1}|}{\Gamma(\delta_{D_1}-p_1)}\varpi\\
+\frac{C_1R}{C}\sum_{1\le p_1\le\delta_{D_1}-1}\sum_{1\le p_2\le\tilde{\delta}_{D_2}-1}\frac{|A_{\delta_{D_1},p_1}|}{\Gamma(\delta_{D_1}-p_1)}\frac{|A_{\tilde{\delta}_{D_2},p_2}|}{\Gamma(\tilde{\delta}_{D_2}-p_2)}\varpi\\
+\frac{C_1R}{C}\sum_{1\le p_3\le\tilde{\delta}_{D_3}-1}\frac{|A_{\tilde{\delta}_{D_3},p_3}|}{\Gamma(\tilde{\delta}_{D_3}-p_3)}\varpi+C_3\sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}\frac{1}{(2\pi)^{\frac{1}{2}}}\frac{1}{k_2\Gamma(1+\frac{1}{k_2})}\varpi^2\\
+\frac{C_1}{C}\sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon_0^{\Delta_{l_1,l_2}-\delta_{D_1}k_1-\tilde{\delta}_{D_2}k_2+k_2-1}\sup_{m\in\mathbb{R}}\left|\frac{R_{l_1,l_2}(im)}{Q(im)}\right|\frac{k_1^{\delta_{l_1}}k_2^{\tilde{\delta}_{l_2}-1}}{\Gamma\left(\frac{d_{l_1,k_1}}{k_1}\right)\Gamma\left(\frac{d_{l_2,k_2}}{k_2}\right)}\varpi\\
+\frac{C_1}{C k_2\Gamma\left(1+\frac{1}{k_2}\right)} \sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}\xi\le\varpi.\label{e677}
\end{multline}
Taking into account (\ref{e658}-\ref{e664}) and (\ref{e677}), we get that $\mathcal{H}_{\epsilon}(\overline{D}(0,\varpi))\subseteq \overline{D}(0,\varpi)$. Here, $\overline{D}(0,\varpi)$ stands for the closed disc of radius $\varpi$ centered at the origin in the Banach space $F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$. Now, let $\omega_1,\omega_2\in F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$, with $\left\|\omega_j(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le\varpi$. We now prove that
\begin{equation}\label{e741b}
\left\|\mathcal{H}_\epsilon(\omega_1)-\mathcal{H}_\epsilon(\omega_2)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\le\frac{1}{2}\left\|\omega_1-\omega_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}.
\end{equation}
At this point, the classical contraction mapping theorem applied to the complete metric space $\overline{D}(0,\varpi)\subseteq F^{\boldsymbol{d}}_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}$ guarantees the existence of a fixed point for $\mathcal{H}_\epsilon$. Let us check (\ref{e741b}).
Analogous estimates as in the first part of the proof yield
\begin{multline}
\left\|\frac{\tau_1^{k_1\delta_{D_1}}R_{D_1,D_2}(im)}{P_m(\boldsymbol{\tau})}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\tilde{\delta}_{D_2}-p_2-1}s_2^{p_2-1}(\omega_1(\tau_1,s_2^{\frac{1}{k_2}},m)-\omega_2(\tau_1,s_2^{\frac{1}{k_2}},m))ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le \frac{C_1}{C}\sup_{m\in\mathbb{R}}\left|\frac{R_{D_1,D_2}(im)}{Q(im)}\right| \left\|\omega_1(\boldsymbol{\tau},m)-\omega_2(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e658b}
\end{multline}
for every $1\le p_2\le \tilde{\delta}_{D_2}-1$. Also Lemma~\ref{lema1} and Lemma~\ref{lema22} yield
\begin{multline}
\left\|\tau_2^{k_2(\tilde{\delta}_{D_2}-1)}\frac{R_{D_1,D_2}(im)}{P_m(\boldsymbol{\tau})}\tau_1^{k_1}
\int_0^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\delta_{D_1}-p_1-1}s_1^{p_1-1}(\omega_1(s_1^{\frac{1}{k_1}},\tau_2,m)-\omega_2(s_1^{\frac{1}{k_1}},\tau_2,m))ds_1\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le \frac{C_1}{C}\sup_{\tau_2\in S_{d_2}}\frac{|\tau_2|^{k_2(\tilde{\delta}_{D_2}-1)}}{(1+|\tau_2|^{k_2})^{\tilde{\delta}_{D_2}-1}}\sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}\left\|\omega_1(\boldsymbol{\tau},m)-\omega_2(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e659b}
\end{multline}
for every $1\le p_1\le \delta_{D_1}-1$. Lemma~\ref{lema1} and Lemma~\ref{lema23} guarantee that
\begin{multline}
\left\|\tau_1^{k_1}\frac{R_{D_1,D_2}(im)}{P_m(\boldsymbol{\tau})}\int_0^{\tau_2^{k_2}}\int_0^{\tau_1^{k_1}}(\tau_2^{k_2}-s_2)^{\tilde{\delta}_{D_2}-p_2-1}s_2^{p_2-1}(\tau_1^{k_1}-s_1)^{\delta_{D_1}-p_1-1}s_1^{p_1-1}\right.\\
\left.\times (\omega_1(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m)-\omega_2(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m))ds_1ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le\frac{C_1^2}{C}\sup_{m\in\mathbb{R}}\left|\frac{R_{D_1,D_2}(im)}{Q(im)}\right| \left\|\omega_1(\boldsymbol{\tau},m)-\omega_2(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e660b}
\end{multline}
for every $1\le p_1\le \delta_{D_1}-1$ and $1\le p_2\le \tilde{\delta}_{D_2}-1$.
We apply Lemma~\ref{lema1} and Lemma~\ref{lema2} to get
\begin{multline}
\left\|\frac{1}{P_m(\boldsymbol{\tau})}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\tilde{\delta}_{D_3}-p_3-1}s_2^{p_3-1}(\omega_1(\tau_1,s_2^{\frac{1}{k_2}},m)-\omega_2(\tau_1,s_2^{\frac{1}{k_2}},m))ds_2\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le \frac{C_1}{C}\sup_{m\in\mathbb{R}}\left|\frac{1}{Q(im)}\right| \left\|\omega_1(\boldsymbol{\tau},m)-\omega_2(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e661b}
\end{multline}
for every $1\le p_3\le \tilde{\delta}_{D_3}-1$.
In order to study the convolution operator, we need to give some details on the procedure. Put
$$W_1:=\omega_{1}((\tau_1^{k_1}-s_1)^{1/k_1},(s_2-x_2)^{1/k_2},m-m_{1}) - \omega_{2}((\tau_1^{k_1}-s_1)^{1/k_1},(s_2-x_2)^{1/k_2},m-m_{1}),$$
and $W_2:=\omega_{1}(s_1^{1/k_1},x_2^{1/k_2},m_{1}) - \omega_{2}(s_1^{1/k_1},x_2^{1/k_2},m_{1}).$
Then, taking into account that
\begin{multline}
P_{1}(i(m-m_{1}),\epsilon)\omega_{1}((\tau_1^{k_1}-s_1)^{1/k_1},(s_2-x_2)^{1/k_2},m-m_{1})P_{2}(im_{1},\epsilon)\omega_{1}(s_1^{1/k_1},x_2^{1/k_2},m_{1})\\
-P_{1}(i(m-m_{1}),\epsilon)\omega_{2}((\tau_1^{k_1}-s_1)^{1/k_1},(s_2-x_2)^{1/k_2},m-m_{1})P_{2}(im_{1},\epsilon)\omega_{2}(s_1^{1/k_1},x_2^{1/k_2},m_{1})\\
= P_{1}(i(m-m_{1}),\epsilon)W_1 P_2(im_1,\epsilon)\omega_{1}(s_1^{1/k_1},x_2^{1/k_2},m_{1})\\
+ P_{1}(i(m-m_{1}),\epsilon)\omega_{2}((\tau_1^{k_1}-s_1)^{1/k_1},(s_2-x_2)^{1/k_2},m-m_{1})P_2(im_1,\epsilon)W_2,
\end{multline}
and due to Lemma~\ref{lema1} and Lemma~\ref{lema6}, we proceed with analogous estimates as in (\ref{e662}) to get that
\begin{multline}
\left\|\frac{\epsilon^{-1}}{P_m(\boldsymbol{\tau})}\tau_1^{k_1}\int_0^{\tau_2^{k_2}}(\tau_2^{k_2}-s_2)^{\frac{1}{k_2}}\int_{-\infty}^{\infty}\int_0^{\tau_1^{k_1}}\int_0^{s_2}\Big(P_1(i(m-m_1),\epsilon)\right.\\
\times \omega_1((\tau_1^{k_1}-s_1)^{\frac{1}{k_1}},(s_2-x_2)^{\frac{1}{k_2}},m-m_1)P_2(im_1,\epsilon)\omega_1(s_1^{\frac{1}{k_1}},x_2^{\frac{1}{k_2}},m_1)\\
-P_1(i(m-m_1),\epsilon)\omega_2((\tau_1^{k_1}-s_1)^{\frac{1}{k_1}},(s_2-x_2)^{\frac{1}{k_2}},m-m_1)P_2(im_1,\epsilon)\omega_2(s_1^{\frac{1}{k_1}},x_2^{\frac{1}{k_2}},m_1)\Big)\\
\left.\times\frac{dx_2ds_1dm_1ds_2}{(\tau_1^{k_1}-s_1)s_1(s_2-x_2)x_2}\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\\
\le |\epsilon|C_3\sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}\left(\left\|\omega_1(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}+\left\|\omega_2(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}\right)\left\|\omega_1(\boldsymbol{\tau},m)-\omega_2(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)},\label{e662b}
\end{multline}
Finally, we apply Lemma~\ref{lema1} and Lemma~\ref{lema23} to get
\begin{multline}
\left\|\frac{R_{l_1,l_2}(im)}{P_m(\boldsymbol{\tau})}\tau_1^{k_1}\int_0^{\tau_2^{k_2}}\int_0^{\tau_1^{k_1}}(\tau_1^{k_1}-s_1)^{\frac{d_{l_1,k_1}}{k_1}-1}(\tau_2^{k_2}-s_2)^{\frac{d_{l_2,k_2}}{k_2}-1}s_1^{\delta_{l_1}-1}s_2^{\tilde{\delta}_{l_2}-1}\right.\\
\left.\times (\omega_1(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m)-\omega_2(s_1^{\frac{1}{k_1}},s_2^{\frac{1}{k_2}},m))ds_1 ds_2 \right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)} \\
\le\frac{C_1}{C} \sup_{m\in\mathbb{R}}\left|\frac{R_{l_1,l_2}(im)}{Q(im)}\right|\left\|\omega_1(\boldsymbol{\tau},m)-\omega_2(\boldsymbol{\tau},m)\right\|_{(\boldsymbol{\nu},\beta,\mu,\boldsymbol{k},\epsilon)}
\label{e663b}\end{multline}
for every $0\le l_j\le D_j-1$, for $j=1,2$.
We choose small enough $\varpi,\epsilon_0>0$ and assume that
$$\sup_{m\in\mathbb{R}}\left|\frac{R_{D_1,D_2}(im)}{Q(im)}\right|\le R\quad \hbox{ and } \sup_{m\in\mathbb{R}}\left|\frac{R_{D_3}(im)}{Q(im)}\right|\le R,$$
to satisfy
\begin{multline*}
k_1^{\delta_{D_1}}\frac{C_1R}{C}\sum_{1\le p_2\le\tilde{\delta}_{D_2}-1}\frac{|A_{\tilde{\delta}_{D_2},p_2}|}{\Gamma(\tilde{\delta}_{D_2}-p_2)}+k_2^{\tilde{\delta}_{D_2}-1}\frac{C_1R}{C}\sum_{1\le p_1\le\delta_{D_1}-1}\frac{|A_{\delta_{D_1},p_1}|}{\Gamma(\delta_{D_1}-p_1)}\\
+\frac{C_1R}{C}\sum_{1\le p_1\le\delta_{D_1}-1}\sum_{1\le p_2\le\tilde{\delta}_{D_2}-1}\frac{|A_{\delta_{D_1},p_1}|}{\Gamma(\delta_{D_1}-p_1)}\frac{|A_{\tilde{\delta}_{D_2},p_2}|}{\Gamma(\tilde{\delta}_{D_2}-p_2)}\\
+\frac{C_1R}{C}\sum_{1\le p_3\le\tilde{\delta}_{D_3}-1}\frac{|A_{\tilde{\delta}_{D_3},p_3}|}{\Gamma(\tilde{\delta}_{D_3}-p_3)}+2C_3\sup_{m\in\mathbb{R}}\frac{1}{|Q(im)|}\frac{1}{(2\pi)^{\frac{1}{2}}}\frac{1}{k_2\Gamma(1+\frac{1}{k_2})}\varpi\\
+\frac{C_1}{C}\sum_{\stackrel{0\le l_j\le D_j-1}{j=1,2}} \epsilon_0^{\Delta_{l_1,l_2}-\delta_{D_1}k_1-\tilde{\delta}_{D_2}k_2+k_2-1}\sup_{m\in\mathbb{R}}\left|\frac{R_{l_1,l_2}(im)}{Q(im)}\right|\frac{k_1^{\delta_{l_1}}k_2^{\tilde{\delta}_{l_2}-1}}{\Gamma\left(\frac{d_{l_1,k_1}}{k_1}\right)\Gamma\left(\frac{d_{l_2,k_2}}{k_2}\right)}\\
\le\frac{1}{2}.
\end{multline*}
Then, (\ref{e741b}) holds, and the proof is complete.
\end{proof}
The following is a direct consequence of the previous result.
\begin{corol}
The function $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)$ obtained in Proposition~\ref{prop653} is a continuous function on $(\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}\times \mathbb{R}\times (D(0,\epsilon_0)\setminus\{0\})$, holomorphic with respect to $\boldsymbol{\tau}$ on $(D(0,\rho)\cup S_{d_1})\times S_{d_2}$ and with respect to the perturbation parameter $\epsilon$ on $D(0,\epsilon_0)\setminus\{0\}$. Moreover, it is a solution of (\ref{e310}), and there exists $\varpi>0$ such that
\begin{equation}\label{e807}
|\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)|\le \varpi(1+|m|)^{-\mu}
\frac{|\frac{\tau_1}{\epsilon}|}{1 + |\frac{\tau_1}{\epsilon}|^{2k_1}}\frac{|\frac{\tau_2}{\epsilon}|}{1 + |\frac{\tau_2}{\epsilon}|^{2k_2}}\exp( -\beta|m| + \nu_1|\frac{\tau_1}{\epsilon}|^{k_1}+\nu_2|\frac{\tau_2}{\epsilon}|^{k_2} ),
\end{equation}
for every $(\boldsymbol{\tau},m,\epsilon)\in (\overline{D}(0,\rho)\cup S_{d_1})\times S_{d_2}\times \mathbb{R}\times (D(0,\epsilon_0)\setminus\{0\})$.
\end{corol}
\section{Family of analytic solutions of the main problem}
In this section, we consider the main problem under study, namely (\ref{ICP_main0}), under the conditions (\ref{e120})-(\ref{e331}) on the parameters involved, and also on the geometry of the problem, (\ref{raicesgrandes})-(\ref{e165b}). In order to construct the analytic solution of the problem, we recall the definition of a good covering in $\mathbb{C}^{\star}$.
\begin{defin}\label{goodcovering} Let $\varsigma_1,\varsigma_2 \geq 2$ be integer numbers. Let $\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ be a finite family of open sectors with vertex at $0$, and radius $\epsilon_{0}$. In addition to this, we assume the opening of every sector is chosen to be slightly larger than $\pi/k_2$ in the case that $k_1<k_2$, and slightly larger than $\pi/k_1$ in case $k_2<k_1$.
We assume that the intersection of any three different sectors of the family is empty, and that
$\cup_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}} \mathcal{E}_{p_1,p_2} = \mathcal{U} \setminus \{ 0 \}$,
for some neighborhood $\mathcal{U}\subseteq\mathbb{C}$ of 0. Such a family of sectors is called a good covering in $\mathbb{C}^{\ast}$.
\end{defin}
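\textbf{Example:} For illustration only (the values below are not taken from the actual problem), assume $k_1<k_2$ with $k_2=2$, and take $\varsigma_1=\varsigma_2=2$. One admissible choice consists of the four sectors
$$\mathcal{E}_{p_1,p_2}=\left\{\epsilon\in\mathbb{C}^{\star}:\ |\epsilon|<\epsilon_0,\ \left|\arg(\epsilon)-\frac{\pi}{2}(2p_1+p_2)\right|<\frac{\pi}{4}+\delta\right\},\qquad p_1,p_2\in\{0,1\},$$
for small $\delta>0$. Each sector has opening $\frac{\pi}{2}+2\delta$, slightly larger than $\pi/k_2$; their union is $D(0,\epsilon_0)\setminus\{0\}$, and any three different sectors have empty intersection provided $\delta<\pi/4$.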
\begin{defin}\label{defgood2} Let $\varsigma_1,\varsigma_2\ge 2$ and $\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ be a good covering in $\mathbb{C}^{\ast}$. Let
$\mathcal{T}_j$ be open bounded sectors centered at 0 with radius $r_{\mathcal{T}_j}$ for $j\in\{1,2\}$, and consider two families of sectors as follows: let
$$ S_{\mathfrak{d}_{p_1},\theta_1,\epsilon_{0}r_{\mathcal{T}_1}} =
\{ T_1 \in \mathbb{C}^{\ast} / |T_1| < \epsilon_{0}r_{\mathcal{T}_1} \ \ , \ \ |\mathfrak{d}_{p_1} - \mathrm{arg}(T_1)| < \theta_1/2 \}, $$
$$ S_{\tilde{\mathfrak{d}}_{p_2},\theta_2,\epsilon_{0}r_{\mathcal{T}_2}} =
\{ T_2 \in \mathbb{C}^{\ast} / |T_2| < \epsilon_{0}r_{\mathcal{T}_2} \ \ , \ \ |\tilde{\mathfrak{d}}_{p_2} - \mathrm{arg}(T_2)| < \theta_2/2 \},$$
with openings $\theta_j > \pi/k_j$, where $\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2} \in \mathbb{R}$, for all $0 \leq p_1 \leq \varsigma_1-1$ and $0 \leq p_2 \leq \varsigma_2-1$, form the couple of directions $d_1,d_2\in\mathbb{R}$ mentioned in Proposition~\ref{prop614}, whenever $\mathcal{E}_{p_1,p_2}$ is the domain of definition of the perturbation parameter $\epsilon$.
In addition to that, the sectors $S_{\mathfrak{d}_{p_1},\theta_1,\epsilon_{0}r_{\mathcal{T}_1}}$ and $S_{\tilde{\mathfrak{d}}_{p_2},\theta_2,\epsilon_{0}r_{\mathcal{T}_2}}$ are such that for all $0 \leq p_1 \leq \varsigma_1 - 1$, $0 \leq p_2 \leq \varsigma_2 - 1$, $\boldsymbol{t} \in \mathcal{T}_1\times \mathcal{T}_2$, and $\epsilon \in \mathcal{E}_{p_1,p_2}$, one has
$$\epsilon t_1 \in S_{\mathfrak{d}_{p_1},\theta_1,\epsilon_{0}r_{\mathcal{T}_1}}\hbox{ and }\epsilon t_2 \in S_{\tilde{\mathfrak{d}}_{p_2},\theta_2,\epsilon_{0}r_{\mathcal{T}_2}}.$$
\noindent We say that the family
$\{ (S_{\mathfrak{d}_{p_1},\theta_1,\epsilon_{0}r_{\mathcal{T}_1}})_{0 \leq p_1 \leq \varsigma_1-1}, (S_{\tilde{\mathfrak{d}}_{p_2},\theta_2,\epsilon_{0}r_{\mathcal{T}_2}})_{0 \leq p_2 \leq \varsigma_2-1} ,\mathcal{T}_1\times \mathcal{T}_2 \}$
is associated to the good covering $\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l}0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$.
\end{defin}
Let $\varsigma_1,\varsigma_2\ge 2$ and $\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ be a good covering in $\mathbb{C}^{\ast}$. We assume the family $\{ (S_{\mathfrak{d}_{p_1},\theta_1,\epsilon_{0}r_{\mathcal{T}_1}})_{0 \leq p_1 \leq \varsigma_1-1}, (S_{\tilde{\mathfrak{d}}_{p_2},\theta_2,\epsilon_{0}r_{\mathcal{T}_2}})_{0 \leq p_2 \leq \varsigma_2-1} ,\mathcal{T}_1\times \mathcal{T}_2 \}$ is associated to the previous good covering.
The existence of a solution $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)$ of the auxiliary problem (\ref{e310}) provides an actual solution of the main problem via Laplace and Fourier transforms, in view of the bounds satisfied by $\omega_{\boldsymbol{k}}^{\boldsymbol{d}}(\boldsymbol{\tau},m,\epsilon)$, see (\ref{e807}). More precisely, for every $0\le p_1\le \varsigma_1-1$ and $0\le p_2\le \varsigma_2-1$, the function
\begin{equation}\label{e817}
u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)=\frac{k_1k_2}{(2\pi)^{1/2}}\int_{-\infty}^{+\infty}
\int_{L_{\gamma_{p_1}}}\int_{L_{\gamma_{p_2}}}
\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_1}{\epsilon t_1})^{k_1}-(\frac{u_2}{\epsilon t_2})^{k_2}} e^{izm} \frac{du_2}{u_2}\frac{du_1}{u_1} dm,
\end{equation}
is holomorphic on the domain $(\mathcal{T}_1\cap D(0,h'))\times(\mathcal{T}_2\cap D(0,h'))\times H_{\beta'}\times \mathcal{E}_{p_1,p_2}$, for any $0<\beta'<\beta$ and some $h'>0$.
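The convergence of the integrals in (\ref{e817}) can be justified heuristically as follows; the constants $\delta_1,\delta_2$ below are introduced only for this sketch. Along $L_{\gamma_{p_j}}$ one has
$$\left|e^{-(\frac{u_j}{\epsilon t_j})^{k_j}}\right|=\exp\left(-\frac{|u_j|^{k_j}}{|\epsilon t_j|^{k_j}}\cos\left(k_j(\gamma_{p_j}-\arg(\epsilon t_j))\right)\right),$$
and the directions $\gamma_{p_j}$ are chosen so that $\cos(k_j(\gamma_{p_j}-\arg(\epsilon t_j)))\ge\delta_j>0$ for all $\epsilon\in\mathcal{E}_{p_1,p_2}$ and $t_j\in\mathcal{T}_j$. Since $|e^{izm}|\le e^{\beta'|m|}$ for $z\in H_{\beta'}$, the bound (\ref{e807}) shows that the modulus of the integrand is bounded, up to a constant depending on $\epsilon$, by
$$(1+|m|)^{-\mu}e^{-(\beta-\beta')|m|}\prod_{j=1,2}\exp\left(\left(\nu_j-\frac{\delta_j}{|t_j|^{k_j}}\right)\left|\frac{u_j}{\epsilon}\right|^{k_j}\right),$$
which is integrable on $\mathbb{R}\times L_{\gamma_{p_1}}\times L_{\gamma_{p_2}}$ whenever $|t_j|^{k_j}<\delta_j/\nu_j$ for $j=1,2$. This is the origin of the radius $h'$ in the statement above.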
The first main result of the present work is devoted to the construction of a family of actual holomorphic solutions to the equation (\ref{ICP_main0}) for null initial data. Each of the elements in the family of solutions is associated to an element of a good covering with respect to the complex parameter $\epsilon$. The strategy leans on the control of the difference of two solutions defined in domains with nonempty intersection with respect to the perturbation parameter $\epsilon$. The construction of each analytic solution in terms of two Laplace transforms in different time variables requires us to distinguish different cases, depending on whether the integration paths coincide or not.
\begin{theo}\label{teo1}
Let the hypotheses of Proposition~\ref{prop653} hold. Then, for every element $\mathcal{E}_{p_1,p_2}$ in the good covering in $\mathbb{C}^{\star}$, there exists a solution $u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)$ of the main problem under study (\ref{ICP_main0}), defined and holomorphic on $(\mathcal{T}_1\cap D(0,h'))\times(\mathcal{T}_2\cap D(0,h'))\times H_{\beta'}\times \mathcal{E}_{p_1,p_2}$, for any $0<\beta'<\beta$ and some $h'>0$.
Moreover, for every two different multi-indices $(p_1,p_2),(p'_1,p'_2)\in\{0,\ldots,\varsigma_1-1\}\times\{0,\ldots,\varsigma_2-1\}$, one of the following situations holds:
\begin{itemize}
\item Case 1: $\mathcal{E}_{p_1,p_2}\cap \mathcal{E}_{p'_1,p'_2}=\emptyset$.
\item Case 2: $\mathcal{E}_{p_1,p_2}\cap \mathcal{E}_{p'_1,p'_2}\neq\emptyset$. The path $L_{\gamma_{p_2}}$ coincides with $L_{\gamma_{p'_2}}$ but $L_{\gamma_{p_1}}$ does not coincide with $L_{\gamma_{p'_1}}$. Then, it holds that
\begin{equation}
\sup_{\boldsymbol{t} \in (\mathcal{T}_1 \cap D(0,h''))\times (\mathcal{T}_2 \cap D(0,h'')), z \in H_{\beta'}}
|u_{p_1,p_2}(\boldsymbol{t},z,\epsilon) - u_{p'_1,p'_2}(\boldsymbol{t},z,\epsilon)| \leq K_{p}e^{-\frac{M_p}{|\epsilon|^{k_1}}},
\label{exp_small_difference_u_p11}
\end{equation}
for every $\epsilon\in\mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}$. In that case, we say that $((p_1,p_2),(p'_1,p'_2))$ belongs to the subset $\mathcal{U}_{k_1}$ of $(\{0,\ldots,\varsigma_1-1\}\times\{0,\ldots,\varsigma_2-1\})^2$.
\item Case 3: $\mathcal{E}_{p_1,p_2}\cap \mathcal{E}_{p'_1,p'_2}\neq\emptyset$. Neither does the path $L_{\gamma_{p_2}}$ coincide with $L_{\gamma_{p'_2}}$, nor does $L_{\gamma_{p_1}}$ coincide with $L_{\gamma_{p'_1}}$. Then, it holds that
\begin{equation}
\sup_{\boldsymbol{t} \in (\mathcal{T}_1 \cap D(0,h''))\times (\mathcal{T}_2 \cap D(0,h'')), z \in H_{\beta'}}
|u_{p_1,p_2}(\boldsymbol{t},z,\epsilon) - u_{p'_1,p'_2}(\boldsymbol{t},z,\epsilon)| \leq K_{p}\max \left\{ e^{-\frac{M_p}{|\epsilon|^{k_1}}}, e^{-\frac{M_p}{|\epsilon|^{k_2}}} \right\},
\label{exp_small_difference_u_p12}
\end{equation}
for every $\epsilon\in\mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}$.
\end{itemize}
\end{theo}
\begin{proof}
The existence of the solution $u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)$, for every $0\le p_1\le \varsigma_1-1$ and $0\le p_2\le \varsigma_2-1$, is guaranteed by the construction described previously.
We now prove the second statement of the result, namely the exponential decay to 0, with respect to the perturbation parameter, of the difference of two solutions associated to overlapping sectors of the good covering, uniformly with respect to $(\boldsymbol{t},z)$.
The proof is close to that of Theorem 1 in~\cite{family1}, but for the sake of clarity, we give a complete description.\par
\textbf{Case 2:} Assume that the path $L_{\gamma_{p_2}}$ coincides with $L_{\gamma_{p'_2}}$, and $L_{\gamma_{p_1}}$ does not coincide with $L_{\gamma_{p'_1}}$. Then, using that
$u_1 \mapsto \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) \exp( -(\frac{u_1}{\epsilon t_1})^{k_1} )/u_1$ is holomorphic on $D(0,\rho)$ for all
$(m,\epsilon) \in \mathbb{R} \times (D(0,\epsilon_{0}) \setminus \{ 0 \})$, and every $u_2\in L_{\gamma_{p_2}}$, one can deform one of the integration paths and write
$$I=\int_{L_{\gamma_{p_1}}}\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon)e^{-\left(\frac{u_1}{\epsilon t_1}\right)^{k_1}}\frac{du_1}{u_1}-\int_{L_{\gamma_{p'_1}}}\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon)e^{-\left(\frac{u_1}{\epsilon t_1}\right)^{k_1}}\frac{du_1}{u_1}$$
in the form
\begin{multline*}
\int_{L_{\rho_1/2,\gamma_{p_1}}}
\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_1}{\epsilon t_1})^{k_1}} \frac{du_1}{u_1} \\
-\int_{L_{\rho_1/2,\gamma_{p'_1}}}
\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_1}{\epsilon t_1})^{k_1}} \frac{du_1}{u_1}\\
+ \int_{C_{\rho_1/2,\gamma_{p'_1},\gamma_{p_1}}}
\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_1}{\epsilon t_1})^{k_1}} \frac{du_1}{u_1},
\end{multline*}
where $L_{\rho_1/2,\gamma_{p_1}} = [\rho_1/2,+\infty)e^{i\gamma_{p_1}}$,
$L_{\rho_1/2,\gamma_{p'_1}} = [\rho_1/2,+\infty)e^{i\gamma_{p'_1}}$ and
$C_{\rho_1/2,\gamma_{p'_1},\gamma_{p_1}}$ is an arc of circle connecting
$(\rho_1/2)e^{i\gamma_{p'_1}}$ and $(\rho_1/2)e^{i\gamma_{p_1}}$ with the appropriate orientation. The positive real number $\rho_1$ is determined in Proposition~\ref{prop614}.\medskip
We get the existence of constants $C_{p_1,p'_1},M_{p_1,p'_1}>0$ such that
$$|I|\le C_{p_1,p'_1}\varpi_{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(1+ |m|)^{-\mu} e^{-\beta|m|} \frac{ |\frac{u_2}{\epsilon}|}{1 + |\frac{u_2}{\epsilon}|^{2k_2}}
\exp( \nu_2 |\frac{u_2}{\epsilon}|^{k_2})e^{-\frac{M_{p_1,p'_1}}{|\epsilon|^{k_1}}},$$
for $t_1\in\mathcal{T}_1\cap D(0,h')$ and $\epsilon\in\mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}$ and $u_2\in L_{\gamma_{p_2}}$. We have
\begin{multline}
|u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)-u_{p'_1,p'_2}(\boldsymbol{t},z,\epsilon)|\\
\le \frac{k_1k_2}{(2\pi)^{1/2}}C_{p_1,p'_1}\left(\int_{-\infty}^{\infty}(1+|m|)^{-\mu}e^{-\beta|m|}e^{-m|\hbox{Im}(z)|}dm\right)\\
\times \int_{L_{\gamma_{p_2}}}\frac{ |\frac{u_2}{\epsilon}|}{1 + |\frac{u_2}{\epsilon}|^{2k_2}}
\exp( \nu_2 |\frac{u_2}{\epsilon}|^{k_2})\exp(-\left(\frac{u_2}{\epsilon t_2}\right)^{k_2})\left|\frac{du_2}{u_2}\right| e^{-\frac{M_{p_1,p'_1}}{|\epsilon|^{k_1}}}.
\end{multline}
The last integral is estimated via the reparametrization $u_2=re^{\gamma_{p_2}\sqrt{-1}}$ and the change of variable $r=|\epsilon|s$ by
$$\int_0^{\infty}\frac{1}{1+s^2}e^{-\delta_{2}s^{k_2}}ds,$$
for some $\delta_{2}>0$, whenever $t_2\in\mathcal{T}_2\cap D(0,h')$.
The estimate stated in Case 2 follows from here.
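Let us briefly indicate how the constant $\delta_2$ may be obtained. This is a routine computation which rests on the assumption, standard in this type of construction of the directions $\gamma_{p_2}$ and of the good covering (and not restated here), that $\cos(k_2(\gamma_{p_2}-\mathrm{arg}(\epsilon t_2)))\ge\delta_1$ for some $\delta_1>0$ and all $\epsilon$, $t_2$ under consideration. Writing $u_2=re^{i\gamma_{p_2}}$, one has
$$\exp\Big(\nu_2\Big|\frac{u_2}{\epsilon}\Big|^{k_2}\Big)\Big|e^{-\left(\frac{u_2}{\epsilon t_2}\right)^{k_2}}\Big|\le \exp\Big(\Big(\nu_2-\frac{\delta_1}{|t_2|^{k_2}}\Big)\frac{r^{k_2}}{|\epsilon|^{k_2}}\Big),$$
so that, after the change of variable $r=|\epsilon|s$, one may take $\delta_2=\frac{\delta_1}{(h')^{k_2}}-\nu_2$, which is positive provided $h'$ is chosen with $\nu_2(h')^{k_2}<\delta_1$; indeed $\frac{\delta_1}{|t_2|^{k_2}}\ge\frac{\delta_1}{(h')^{k_2}}$ for $t_2\in\mathcal{T}_2\cap D(0,h')$.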
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{t1c1}
\caption{Path deformation in Case 2}
\end{figure}
\textbf{Case 3:} Assume that $L_{\gamma_{p_1}}$ does not coincide with $L_{\gamma_{p'_1}}$, and that $L_{\gamma_{p_2}}$ does not coincide with $L_{\gamma_{p'_2}}$.
Owing to the fact that $u_1 \mapsto \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) \exp( -(\frac{u_1}{\epsilon t_1})^{k_1} )/u_1$ is holomorphic on $D(0,\rho)$ for all
$(m,\epsilon) \in \mathbb{R} \times (D(0,\epsilon_{0}) \setminus \{ 0 \})$, and every $u_2\in L_{\gamma_{p_2}}$ we deform the integration paths with respect to the first time variable and write
$$u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)-u_{p'_1,p'_2}(\boldsymbol{t},z,\epsilon)=J_1-J_2+J_3,$$
where
$$J_1=\frac{k_1k_2}{(2\pi)^{1/2}} \int_{L_{\gamma_{p_1},1}}\int_{L_{\gamma_{p_2}}}\int_{-\infty}^{\infty} \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon)
e^{-(\frac{u_1}{\epsilon t_1})^{k_1}-(\frac{u_2}{\epsilon t_2})^{k_2}}e^{izm} dm\frac{du_2}{u_2}\frac{du_1}{u_1}.$$
$$J_2=\frac{k_1k_2}{(2\pi)^{1/2}} \int_{L_{\gamma_{p'_1},1}}\int_{L_{\gamma_{p'_2}}}\int_{-\infty}^{\infty} \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p'_1},\tilde{\mathfrak{d}}_{p'_2}}(u_1,u_2,m,\epsilon)
e^{-(\frac{u_1}{\epsilon t_1})^{k_1}-(\frac{u_2}{\epsilon t_2})^{k_2}}e^{izm} dm\frac{du_2}{u_2}\frac{du_1}{u_1}.$$
\begin{multline*}
J_3=\frac{k_1k_2}{(2\pi)^{1/2}} \int_{0}^{\frac{\rho_1}{2}e^{i\theta}}\left(\int_{-\infty}^{\infty}\left(\int_{L_{\gamma_{p_2}}} \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon)
e^{-(\frac{u_2}{\epsilon t_2})^{k_2}} \frac{du_2}{u_2}\right.\right.\\
\left.\left.-\int_{L_{\gamma_{p'_2}}} \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p'_1},\tilde{\mathfrak{d}}_{p'_2}}(u_1,u_2,m,\epsilon)
e^{-(\frac{u_2}{\epsilon t_2})^{k_2}} \frac{du_2}{u_2}\right)e^{izm}dm\right)e^{-(\frac{u_1}{\epsilon t_1})^{k_1}}\frac{du_1}{u_1},
\end{multline*}
where $\frac{\rho_1}{2}e^{i\theta}$ is such that $\theta$ is an argument between $\gamma_{p_1}$ and $\gamma_{p'_1}$. The path $L_{\gamma_{p_1},1}$ (resp. $L_{\gamma_{p'_1},1}$) consists of the concatenation of the arc of circle connecting $\frac{\rho_1}{2}e^{i\theta}$ with $\frac{\rho_1}{2}e^{i\gamma_{p_1}}$ (resp. with $\frac{\rho_1}{2}e^{i\gamma_{p'_1}}$) and the half line $[\frac{\rho_1}{2}e^{i\gamma_{p_1}},\infty)$ (resp. $[\frac{\rho_1}{2}e^{i\gamma_{p'_1}},\infty)$).
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{t22} \vline\vline \includegraphics[width=0.3\textwidth]{t21} \vline\vline \includegraphics[width=0.3\textwidth]{t23}\\
\includegraphics[width=0.3\textwidth]{t31b} \vline\vline \includegraphics[width=0.3\textwidth]{t33b} \vline\vline \includegraphics[width=0.3\textwidth]{t32b}
\caption{Path deformation in Case 3}
\end{figure}
We first give estimates for $|J_1|$. We have
\begin{multline*}
\left|\int_{L_{\gamma_{p_2}}} \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon)
e^{-(\frac{u_2}{\epsilon t_2})^{k_2}}\frac{du_2}{u_2}\right|\le \varpi_{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(1+ |m|)^{-\mu} e^{-\beta|m|} \frac{ |\frac{u_1}{\epsilon}|}{1 + |\frac{u_1}{\epsilon}|^{2k_1}}
\exp( \nu_1 |\frac{u_1}{\epsilon}|^{k_1})\\
\times \int_{L_{\gamma_{p_2}}}\frac{ |\frac{u_2}{\epsilon}|}{1 + |\frac{u_2}{\epsilon}|^{2k_2}}\exp\left(\nu_2\left|\frac{u_2}{\epsilon}\right|^{k_2}\right)\left|e^{-\left(\frac{u_2}{\epsilon t_2}\right)^{k_2}}\right|\left|\frac{du_2}{u_2}\right|\\
\le \varpi_{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}C_{p_2}(1+ |m|)^{-\mu} e^{-\beta|m|} \frac{ |\frac{u_1}{\epsilon}|}{1 + |\frac{u_1}{\epsilon}|^{2k_1}}
\exp( \nu_1 |\frac{u_1}{\epsilon}|^{k_1}),
\end{multline*}
for some $C_{p_2}>0$ and $t_2\in\mathcal{T}_2\cap D(0,h')$, where we have used the parametrization $u_2=re^{\gamma_{p_2}\sqrt{-1}}$ and the change of variable $r=|\epsilon|s$. Using analogous estimates as in Case 2, we arrive at
$$|J_1|\le C_{p,1}e^{-\frac{M_{p,1}}{|\epsilon|^{k_1}}},$$
for some $C_{p,1},M_{p,1}>0$, for all $\epsilon\in\mathcal{E}_{p_1,p_2}\cap \mathcal{E}_{p'_1,p'_2}$, where $t_1\in\mathcal{T}_{1}\cap D(0,h')$ and $t_2\in\mathcal{T}_{2}\cap D(0,h')$, $z\in H_{\beta'}$.
Analogous calculations lead to
$$|J_2|\le C_{p,2}e^{-\frac{M_{p,2}}{|\epsilon|^{k_1}}},$$
for some $C_{p,2},M_{p,2}>0$, for all $\epsilon\in\mathcal{E}_{p_1,p_2}\cap \mathcal{E}_{p'_1,p'_2}$, where $t_1\in\mathcal{T}_{1}\cap D(0,h')$ and $t_2\in\mathcal{T}_{2}\cap D(0,h')$, $z\in H_{\beta'}$.
In order to give upper bounds for $|J_3|$, we consider
$$\left|\int_{L_{\gamma_{p_2}}} \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon)
e^{-(\frac{u_2}{\epsilon t_2})^{k_2}} \frac{du_2}{u_2}-\int_{L_{\gamma_{p'_2}}} \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p'_1},\tilde{\mathfrak{d}}_{p'_2}}(u_1,u_2,m,\epsilon)
e^{-(\frac{u_2}{\epsilon t_2})^{k_2}} \frac{du_2}{u_2}\right|.$$
Since $u_1$ belongs to the disc $D(0,\rho_1)$, we know that the function
$$u_2\mapsto \omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_2}{\epsilon t_2})^{k_2}}\frac{1}{u_2}$$
is holomorphic on the disc $D(0,\rho)$. In this framework, one is able to deform the integration path in order to write the difference as the following sum
\begin{multline*}
\int_{L_{\rho_1/2,\gamma_{p_2}}}
\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_2}{\epsilon t_2})^{k_2}} \frac{du_2}{u_2} \\
-\int_{L_{\rho_1/2,\gamma_{p'_2}}}
\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_2}{\epsilon t_2})^{k_2}} \frac{du_2}{u_2}\\
+ \int_{C_{\rho_1/2,\gamma_{p'_2},\gamma_{p_2}}}
\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(u_1,u_2,m,\epsilon) e^{-(\frac{u_2}{\epsilon t_2})^{k_2}} \frac{du_2}{u_2}.
\end{multline*}
We deduce that the previous expression is bounded from above by
$$\varpi_{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}C_{p_2,p'_2}(1+ |m|)^{-\mu} e^{-\beta|m|} \frac{ |\frac{u_1}{\epsilon}|}{1 + |\frac{u_1}{\epsilon}|^{2k_1}}
\exp( \nu_1 |\frac{u_1}{\epsilon}|^{k_1})\exp\left(-\frac{M_{p_2,p'_2}}{|\epsilon|^{k_2}}\right),$$
for $\epsilon\in\mathcal{E}_{p_1,p_2}\cap \mathcal{E}_{p'_1,p'_2}$, $t_2\in\mathcal{T}_{2}\cap D(0,h')$, and $u_1$ in the segment $[0,\frac{\rho_1}{2}e^{i\theta}]$.
We finally get
\begin{multline*}
|J_3|\le\frac{k_1k_2}{(2\pi)^{1/2}} C_{p_2,p'_2}\varpi_{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}} \left(\int_{-\infty}^{\infty}(1+|m|)^{-\mu}e^{-\beta|m|}e^{-m|\hbox{Im}(z)|}dm\right)\\
\times\left(\int_{0}^{\rho_1/2e^{i\theta}}\frac{ |\frac{u_1}{\epsilon}|}{1 + |\frac{u_1}{\epsilon}|^{2k_1}}\exp(\nu_1|\frac{u_1}{\epsilon}|^{k_1})|e^{-\left(\frac{u_1}{\epsilon t_1}\right)^{k_1}}|\left|\frac{d u_1}{u_1}\right|\right)\exp\left(-\frac{M_{p_2,p'_2}}{|\epsilon|^{k_2}}\right).
\end{multline*}
We conclude that
$$|J_3|\le K_{p,3}e^{-\frac{M_{p,3}}{|\epsilon|^{k_2}}},$$
uniformly for $(t_1,t_2)\in (\mathcal{T}_1\cap D(0,h''))\times (\mathcal{T}_2\cap D(0,h''))$ for some $h''>0$, and $z\in H_{\beta'}$ for any fixed $\beta'<\beta$, where $K_{p,3},M_{p,3}$ are positive constants. The estimates (\ref{exp_small_difference_u_p12}) now follow from the bounds obtained for $|J_1|$, $|J_2|$ and $|J_3|$.
\end{proof}
\textbf{Remark:} Observe that, in case the path $L_{\gamma_{p_1}}$ coincides with $L_{\gamma_{p'_1}}$, but $L_{\gamma_{p_2}}$ does not coincide with $L_{\gamma_{p'_2}}$, it is not possible to obtain estimates on the difference of two solutions of the form $\exp(-M/|\epsilon|^{k_2})$, analogous to those obtained in Case 2. The reason is that we cannot deform the path $L_{\gamma_{p_2}}-L_{\gamma_{p'_2}}$, since the functions $\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p_1},\tilde{\mathfrak{d}}_{p_2}}(\boldsymbol{\tau},m,\epsilon)$ and $\omega_{\boldsymbol{k}}^{\mathfrak{d}_{p'_1},\tilde{\mathfrak{d}}_{p'_2}}(\boldsymbol{\tau},m,\epsilon)$ are not holomorphic on a disc centered at 0 with respect to $\tau_2$.\par
\section{Asymptotics of the problem in the perturbation parameter}
\subsection{$k-$Summable formal series and Ramis-Sibuya Theorem}\label{secborellaplace}
For the sake of completeness, we recall the definition of $k-$Borel summability of formal series with coefficients in a Banach space, and Ramis-Sibuya Theorem. A reference for the details on the first part is~\cite{ba}, whilst the second part of this section can be found in~\cite{ba2}, p. 121, and~\cite{hssi}, Lemma XI-2-6.
\begin{defin} Let $k \geq 1$ be an integer. A formal series
$$\hat{X}(\epsilon) = \sum_{j=0}^{\infty} \frac{ a_{j} }{ j! } \epsilon^{j} \in \mathbb{F}[[\epsilon]]$$
with coefficients in a Banach space $( \mathbb{F}, ||.||_{\mathbb{F}} )$ is said to be $k-$summable
with respect to $\epsilon$ in the direction $d \in \mathbb{R}$ if \medskip
{\bf i)} there exists $\rho \in \mathbb{R}_{+}$ such that the following formal series, called formal
Borel transform of $\hat{X}$ of order $k$
$$ \mathcal{B}_{k}(\hat{X})(\tau) = \sum_{j=0}^{\infty} \frac{ a_{j} \tau^{j} }{ j!\Gamma(1 + \frac{j}{k}) } \in \mathbb{F}[[\tau]],$$
is absolutely convergent for $|\tau| < \rho$, \medskip
{\bf ii)} there exists $\delta > 0$ such that the series $\mathcal{B}_{k}(\hat{X})(\tau)$ can be analytically continued with
respect to $\tau$ in a sector
$S_{d,\delta} = \{ \tau \in \mathbb{C}^{\ast} : |d - \mathrm{arg}(\tau) | < \delta \} $. Moreover, there exist $C >0$, and $K >0$
such that
$$ ||\mathcal{B}_{k}(\hat{X})(\tau)||_{\mathbb{F}}
\leq C e^{ K|\tau|^{k}} $$
for all $\tau \in S_{d, \delta}$.
\end{defin}
If this is so, the vector valued Laplace transform of order $k$ of $\mathcal{B}_{k}(\hat{X})(\tau)$ in the direction $d$ is defined by
$$ \mathcal{L}^{d}_{k}(\mathcal{B}_{k}(\hat{X}))(\epsilon) = \epsilon^{-k} \int_{L_{\gamma}}
\mathcal{B}_{k}(\hat{X})(u) e^{ - ( u/\epsilon )^{k} } ku^{k-1}du,$$
along a half-line $L_{\gamma} = \mathbb{R}_{+}e^{i\gamma} \subset S_{d,\delta} \cup \{ 0 \}$, where $\gamma$ depends on
$\epsilon$ and is chosen in such a way that
$\cos(k(\gamma - \mathrm{arg}(\epsilon))) \geq \delta_{1} > 0$, for some fixed $\delta_{1}$, for all
$\epsilon$ in a sector
$$ S_{d,\theta,R^{1/k}} = \{ \epsilon \in \mathbb{C}^{\ast} : |\epsilon| < R^{1/k} \ \ , \ \ |d - \mathrm{arg}(\epsilon) |
< \theta/2 \},$$
where $\frac{\pi}{k} < \theta < \frac{\pi}{k} + 2\delta$ and $0 < R < \delta_{1}/K$. The
function $\mathcal{L}^{d}_{k}(\mathcal{B}_{k}(\hat{X}))(\epsilon)$
is called the $k-$sum of the formal series $\hat{X}(\epsilon)$ in the direction $d$. It is bounded and holomorphic on the sector
$S_{d,\theta,R^{1/k}}$ and has the formal series $\hat{X}(\epsilon)$ as Gevrey asymptotic
expansion of order $1/k$ with respect to $\epsilon$ on $S_{d,\theta,R^{1/k}}$. This means that for all
$\frac{\pi}{k} < \theta_{1} < \theta$, there exist $C,M > 0$ such that
$$ ||\mathcal{L}^{d}_{k}(\mathcal{B}_{k}(\hat{X}))(\epsilon) - \sum_{p=0}^{n-1}
\frac{a_p}{p!} \epsilon^{p}||_{\mathbb{F}} \leq CM^{n}\Gamma(1+ \frac{n}{k})|\epsilon|^{n} $$
for all $n \geq 1$, all $\epsilon \in S_{d,\theta_{1},R^{1/k}}$.\medskip
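As an illustration of the previous notions (a classical example, independent of the problem under study), one may consider the formal series $\hat{E}(\epsilon)=\sum_{j\ge0}(-1)^{j}j!\,\epsilon^{j+1}\in\mathbb{C}[[\epsilon]]$. Its formal Borel transform of order 1, in the sense of the definition above, is
$$\mathcal{B}_{1}(\hat{E})(\tau)=\sum_{j\ge0}\frac{(-1)^{j}j!}{(j+1)!}\tau^{j+1}=\log(1+\tau),$$
which converges on $D(0,1)$, admits an analytic continuation to $\mathbb{C}\setminus(-\infty,-1]$ and satisfies bounds of the form $Ce^{K|\tau|}$ on every sector $S_{d,\delta}$ with $d\neq\pi$ and $\delta$ small enough. Hence $\hat{E}$ is $1-$summable in every direction $d\neq\pi$, and an integration by parts shows that its $1-$sum is
$$\mathcal{L}^{d}_{1}(\mathcal{B}_{1}(\hat{E}))(\epsilon)=\epsilon^{-1}\int_{L_{\gamma}}\log(1+u)e^{-u/\epsilon}\,du=\int_{L_{\gamma}}\frac{e^{-u/\epsilon}}{1+u}\,du,$$
the classical Euler function, which admits $\hat{E}(\epsilon)$ as Gevrey asymptotic expansion of order 1.\medskip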
Multisummability of a formal power series is a recursive process that allows one to compute the sum of a formal power series in which several Gevrey orders are involved. One of the approaches to multisummability is that stated by W. Balser, which can be found in~\cite{ba}, Theorem 1, p.57. Roughly speaking, if a formal power series $\hat{f}$ can be decomposed into a sum $\hat{f}(z)=\hat{f}_1(z)+\ldots+\hat{f}_m(z)$ such that each of the terms $\hat{f}_j(z)$ is $k_j$-summable, with sum given by $f_j$, then $\hat{f}$ turns out to be multisummable, and its multisum is given by $f_1(z)+\ldots+f_m(z)$. More precisely, one has the following definition.
\begin{defin} Let $(\mathbb{F},\left\|\cdot\right\|_{\mathbb{F}})$ be a complex Banach space and let $0<k_2<k_1$. Let $\mathcal{E}$ be a bounded open sector with vertex at 0, and opening $\frac{\pi}{k_1}+\delta_1$ for some $\delta_1>0$, and let $\mathcal{F}$ be a bounded open sector with vertex at the origin in $\mathbb{C}$, with opening $\frac{\pi}{k_2}+\delta_2$, for some $\delta_2>0$ and such that $\mathcal{E}\subseteq\mathcal{F}$ holds.\smallskip
A formal power series $\hat{f}(\epsilon)\in\mathbb{F}[[\epsilon]]$ is said to be $(k_1,k_2)-$summable on $\mathcal{E}$ if there exist $\hat{f}_2(\epsilon)\in\mathbb{F}[[\epsilon]]$ which is $k_2-$summable on $\mathcal{F}$, with $k_2$-sum given by $f_2:\mathcal{F}\to\mathbb{F}$, and $\hat{f}_1(\epsilon)\in\mathbb{F}[[\epsilon]]$ which is $k_1-$summable on $\mathcal{E}$, with $k_1$-sum given by $f_1:\mathcal{E}\to\mathbb{F}$, such that $\hat{f}=\hat{f}_1+\hat{f}_2$. Furthermore, the holomorphic function $f(\epsilon)=f_1(\epsilon)+f_2(\epsilon)$ on $\mathcal{E}$ is called the $(k_1,k_2)-$sum of $\hat{f}$ on $\mathcal{E}$. In that situation, $f(\epsilon)$ can be obtained from the analytic continuation of the $k_2-$Borel transform of $\hat{f}$ by the successive application of accelerator operators and Laplace transform of order $k_1$, see Section 6.1 in~\cite{ba}.
\end{defin}
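With the previous Euler example in mind, a standard illustration of $(k_1,k_2)-$summability (again unrelated to the problem under study) is obtained for $k_1=2$, $k_2=1$ by taking
$$\hat{f}(\epsilon)=\hat{f}_1(\epsilon)+\hat{f}_2(\epsilon),\qquad \hat{f}_1(\epsilon)=\sum_{j\ge0}(-1)^{j}\Gamma\Big(1+\frac{j}{2}\Big)\epsilon^{j},\qquad \hat{f}_2(\epsilon)=\sum_{j\ge0}(-1)^{j}j!\,\epsilon^{j+1}.$$
Indeed, $\mathcal{B}_{2}(\hat{f}_1)(\tau)=\sum_{j\ge0}(-1)^{j}\tau^{j}=\frac{1}{1+\tau}$, so $\hat{f}_1$ is $2-$summable in every direction $d\neq\pi$, while $\hat{f}_2$ is $1-$summable in every direction $d\neq\pi$, as seen above. Therefore $\hat{f}$ is $(2,1)-$summable on sectors $\mathcal{E}\subseteq\mathcal{F}$ as in the previous definition, bisected by any direction $d\neq\pi$, although it is neither $1-$ nor $2-$summable as a whole (recall that a formal power series which is $k-$summable for two different values of $k$ is necessarily convergent).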
We recall the classical version of Ramis-Sibuya Theorem for Gevrey asymptotics, as stated in~\cite{hssi}, in the framework of our good covering $\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$, given above in Definition 3.
\noindent {\bf Theorem (RS)} {\it Let $0<k_1<k_2$ be integer numbers. Let $(\mathbb{F},||.||_{\mathbb{F}})$ be a Banach space over $\mathbb{C}$ and
$\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ be a good covering in $\mathbb{C}^{\ast}$, such that the aperture of every sector is slightly larger than $\pi/k_2$. For all
$0 \leq p_1 \leq \varsigma_1 - 1$, $0\le p_2\le \varsigma_2-1$, let $G_{p_1,p_2}$ be a holomorphic function from $\mathcal{E}_{p_1,p_2}$ into
the Banach space $(\mathbb{F},||.||_{\mathbb{F}})$ and let the cocycle $\Theta_{(p_1,p_2)(p'_1,p'_2)}(\epsilon) = G_{p_1,p_2}(\epsilon) - G_{p'_1,p'_2}(\epsilon)$
be a holomorphic function from the sector $Z_{(p_1,p_2),(p'_1,p'_2)} = \mathcal{E}_{p_1,p_2} \cap \mathcal{E}_{p'_1,p'_2}\neq\emptyset$ into $\mathbb{F}$. We make the following assumptions.\medskip
\noindent {\bf 1)} The functions $G_{p_1,p_2}(\epsilon)$ are bounded as $\epsilon \in \mathcal{E}_{p_1,p_2}$ tends to the origin
in $\mathbb{C}$, for all $0 \leq p_1 \leq \varsigma_1 - 1$ and all $0\le p_2\le \varsigma_2-1$.\medskip
\noindent {\bf 2)} The functions $\Theta_{(p_1,p_2)(p'_1,p'_2)}(\epsilon)$ are exponentially flat of order $1/k_1$ on $Z_{(p_1,p_2)(p'_1,p'_2)}$, for all
$0 \leq p_1,p'_1 \leq \varsigma_1-1$, and $0\le p_2,p'_2 \le \varsigma_2-1$. This means that there exist constants $C_{p_1,p_2,p'_1,p'_2},A_{p_1,p_2,p'_1,p'_2}>0$ such that
$$ ||\Theta_{(p_1,p_2)(p'_1,p'_2)}(\epsilon)||_{\mathbb{F}} \leq C_{p_1,p_2,p'_1,p'_2}e^{-A_{p_1,p_2,p'_1,p'_2}/|\epsilon|^{k_1}} $$
for all $\epsilon \in Z_{(p_1,p_2),(p'_1,p'_2)}$, all $0 \leq p_1,p'_1 \leq \varsigma_1-1$ and $0\le p_2,p'_2\le \varsigma_2-1$.\medskip
Then, for all $0 \leq p_1 \leq \varsigma_1 - 1$ and $0\le p_2\le\varsigma_2-1$, the functions $G_{p_1,p_2}(\epsilon)$ admit a common formal power series $\hat{G}(\epsilon) \in \mathbb{F}[[\epsilon]]$ as asymptotic expansion of Gevrey order $1/k_1$.}
A novel version of Ramis-Sibuya Theorem has been developed in~\cite{takei}, and has provided successful results in previous works by the authors, see~\cite{lama1},~\cite{lama2,family1}. A version of the result involving two different levels, which fits our needs, is now stated without proof; the proof can be found in~\cite{lama1},~\cite{lama2}.
\vspace{0.3cm}
\noindent {\bf Theorem (multilevel-RS)} {\it Assume that $0<k_2<k_1$ are integer numbers. Let $(\mathbb{F},||.||_{\mathbb{F}})$ be a Banach space over $\mathbb{C}$ and
$\{ \mathcal{E}_{p_1,p_2} \}_{\begin{subarray}{l} 0 \leq p_1 \leq \varsigma_1 - 1\\0 \leq p_2 \leq \varsigma_2 - 1\end{subarray}}$ be a good covering in $\mathbb{C}^{\ast}$, where all the sectors have an opening slightly larger than $\pi/k_1$. For all
$0 \leq p_1 \leq \varsigma_1 - 1$ and $0\le p_2\le \varsigma_2-1$, let $G_{p_1,p_2}$ be a holomorphic function from $\mathcal{E}_{p_1,p_2}$ into
the Banach space $(\mathbb{F},||.||_{\mathbb{F}})$ and for every $(p_1,p_2),(p'_1,p'_2)\in\{0,\ldots,\varsigma_1-1\}\times\{0,\ldots,\varsigma_2-1\}$ such that $\mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}\neq\emptyset$ we define $\Theta_{(p_1,p_2)(p'_1,p'_2)}(\epsilon) = G_{p_1,p_2}(\epsilon) - G_{p'_1,p'_2}(\epsilon)$
be a holomorphic function from the sector $Z_{(p_1,p_2),(p'_1,p'_2)} = \mathcal{E}_{p_1,p_2} \cap \mathcal{E}_{p'_1,p'_2}$ into $\mathbb{F}$.
We make the following assumptions.\medskip
\noindent {\bf 1)} The functions $G_{p_1,p_2}(\epsilon)$ are bounded as $\epsilon \in \mathcal{E}_{p_1,p_2}$ tends to the origin
in $\mathbb{C}$, for all $0 \leq p_1 \leq \varsigma_1 - 1$ and $0\le p_2\le \varsigma_2-1$.\medskip
\noindent {\bf 2)} $(\{0,\ldots,\varsigma_1-1\}\times\{0,\ldots,\varsigma_2-1\})^2=\mathcal{U}_0\cup\mathcal{U}_{k_1}\cup\mathcal{U}_{k_2}$, where
$((p_1,p_2),(p'_1,p'_2))\in\mathcal{U}_0$ iff $\mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}=\emptyset$,
$((p_1,p_2),(p'_1,p'_2))\in\mathcal{U}_{k_1}$ iff $\mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}\neq\emptyset$ and
$$ ||\Theta_{(p_1,p_2),(p'_1,p'_2)}(\epsilon)||_{\mathbb{F}} \leq C_{p_1,p_2,p'_1,p'_2}e^{-A_{p_1,p_2,p'_1,p'_2}/|\epsilon|^{k_1}} $$
for all $\epsilon \in Z_{(p_1,p_2),(p'_1,p'_2)}$.
$((p_1,p_2),(p'_1,p'_2))\in\mathcal{U}_{k_2}$ iff $\mathcal{E}_{p_1,p_2}\cap\mathcal{E}_{p'_1,p'_2}\neq\emptyset$ and
$$ ||\Theta_{(p_1,p_2),(p'_1,p'_2)}(\epsilon)||_{\mathbb{F}} \leq C_{p_1,p_2,p'_1,p'_2}e^{-A_{p_1,p_2,p'_1,p'_2}/|\epsilon|^{k_2}} $$
for all $\epsilon \in Z_{(p_1,p_2),(p'_1,p'_2)}$.
Then, there exist a convergent power series $a(\epsilon)\in \mathbb{F}\{\epsilon\}$ and two formal power series $\hat{G}^1(\epsilon),\hat{G}^2(\epsilon)\in\mathbb{F}[[\epsilon]]$ such that $G_{p_1,p_2}(\epsilon)$ can be split in the form
$$G_{p_1,p_2}(\epsilon)=a(\epsilon)+G^{1}_{p_1,p_2}(\epsilon)+G^{2}_{p_1,p_2}(\epsilon),$$
where $G^{j}_{p_1,p_2}(\epsilon)\in\mathcal{O}(\mathcal{E}_{p_1,p_2},\mathbb{F})$, and admits $\hat{G}^j(\epsilon)$ as its asymptotic expansion of Gevrey order $1/k_j$ on $\mathcal{E}_{p_1,p_2}$, for $j\in\{1,2\}$.\smallskip
Moreover, assume that
$$\{((p_1^0,p_2^0),(p_1^1,p_2^1)), ((p_1^1,p_2^1),(p_1^2,p_2^2)), \ldots, ((p_1^{2y-1},p_2^{2y-1}),(p_1^{2y},p_2^{2y})) \}$$
is a subset of $\mathcal{U}_{k_1}$, for some positive integer $y$, and
$$\mathcal{E}_{p_1^{y},p_2^y}\subseteq S_{\pi/k_2}\subseteq\bigcup_{0\le j\le 2y}\mathcal{E}_{p_1^{j},p_2^{j}},$$
for some sector $S_{\pi/k_2}$ with opening larger than $\pi/k_2$. Then, the formal power series $\hat{G}(\epsilon)$ is $(k_1,k_2)-$summable on $\mathcal{E}_{p_1^y,p_2^y}$ and its $(k_1,k_2)-$sum is $G_{p_1^y,p_2^y}(\epsilon)$ on $\mathcal{E}_{p_1^y,p_2^y}$.}
\subsection{Formal solution and asymptotic behavior in the complex parameter}
The second and third main results state the existence of a formal power series in the perturbation parameter $\epsilon$, with coefficients in the Banach space $\mathbb{F}$ of holomorphic and bounded functions on $(\mathcal{T}_1\cap D(0,h''))\times (\mathcal{T}_2\cap D(0,h''))\times H_{\beta'}$, endowed with the supremum norm. Here $h''$, $\mathcal{T}_1$ and $\mathcal{T}_2$ are determined in Theorem~\ref{teo1}.
It is worth observing the different asymptotic behavior of the analytic solutions of the problem depending on $k_1$ and $k_2$. More precisely, in case that $k_1<k_2$, Theorem~\ref{teo2} provides Gevrey estimates, whilst in the case $k_2<k_1$ a multisummability phenomenon is displayed, in contrast to the results in~\cite{family1}, where multisummability is always observed.
\begin{theo}\label{teo2}
Let $k_2>k_1$. Under the assumptions of Theorem~\ref{teo1}, a formal power series
$$\hat{u}(\boldsymbol{t},z,\epsilon)=\sum_{m\ge0}H_m(\boldsymbol{t},z)\epsilon^m/m!\in\mathbb{F}[[\epsilon]]$$
exists, with the following properties. $\hat{u}$ is a formal solution of (\ref{ICP_main0}). Moreover, for every $p_1\in\{0,\ldots,\varsigma_1-1\}$ and $p_2\in\{0,\ldots,\varsigma_2-1\}$, the function $u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)$ admits $\hat{u}(\boldsymbol{t},z,\epsilon)$ as asymptotic expansion of Gevrey order $1/k_1$. This means that
$$\sup_{\boldsymbol{t}\in (\mathcal{T}_1\cap D(0,h''))\times (\mathcal{T}_2\cap D(0,h'')),z\in H_{\beta'}}\left|u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)-\sum_{m=0}^{N-1}H_m(\boldsymbol{t},z)\frac{\epsilon^m}{m!}\right|\le CM^{N}\Gamma(1+\frac{N}{k_1})|\epsilon|^{N},$$
for every $\epsilon\in\mathcal{E}_{p_1,p_2}$ and all integer $N\ge0$.\smallskip
\end{theo}
\begin{proof}
Let $(u_{p_1,p_2}(\boldsymbol{t},z,\epsilon))_{\begin{subarray}{l}0\le p_1\le \varsigma_1-1\\0\le p_2\le \varsigma_2-1\end{subarray}}$ be the family constructed in Theorem~\ref{teo1}. We recall that $(\mathcal{E}_{p_1,p_2})_{\begin{subarray}{l}0\le p_1\le \varsigma_1-1\\0\le p_2\le \varsigma_2-1\end{subarray}}$ is a good covering in $\mathbb{C}^{\star}$, with all its components being finite sectors of opening slightly larger than $\pi/k_2$.
The function $G_{p_1,p_2}(\epsilon):=(t_1,t_2,z)\mapsto u_{p_1,p_2}(t_1,t_2,z,\epsilon)$ belongs to $\mathcal{O}(\mathcal{E}_{p_1,p_2},\mathbb{F})$. We consider $(p_1,p_2)$ and $(p'_1,p'_2)$ in $\{0,\ldots,\varsigma_1-1\}\times \{0,\ldots,\varsigma_2-1\}$ such that $\mathcal{E}_{p_1,p_2}$ and $\mathcal{E}_{p'_1,p'_2}$ are consecutive sectors in the good covering, so their intersection is not empty. In view of (\ref{exp_small_difference_u_p11}) and (\ref{exp_small_difference_u_p12}), one has that $\Delta_{(p_1,p_2),(p'_1,p'_2)}(\epsilon):=G_{p_1,p_2}(\epsilon)- G_{p'_1,p'_2}(\epsilon)$ satisfies exponentially flat bounds of Gevrey order $k_1$: indeed, since $k_1<k_2$, one has $\max\{e^{-M_{p}/|\epsilon|^{k_1}},e^{-M_{p}/|\epsilon|^{k_2}}\}=e^{-M_{p}/|\epsilon|^{k_1}}$ for $|\epsilon|$ small enough, so the bounds obtained in Case 2 and Case 3 of Theorem~\ref{teo1} are both of this form. Theorem (RS) guarantees the existence of a formal power series $\hat{G}(\epsilon)\in\mathbb{F}[[\epsilon]]$ such that $G_{p_1,p_2}$ admits $\hat{G}(\epsilon)$ as its Gevrey asymptotic expansion of order $1/k_1$, say
$$\hat{G}(\epsilon)=:\hat{u}(\boldsymbol{t},z,\epsilon)=\sum_{m\ge0}H_m(\boldsymbol{t},z)\frac{\epsilon^{m}}{m!}.$$
Let us check that $\hat{u}(\boldsymbol{t},z,\epsilon)$ is a formal solution of (\ref{ICP_main0}). For every $0\le p_1\le \varsigma_1-1$, $0\le p_2\le \varsigma_2-1$, the existence of an asymptotic expansion concerning $G_{p_1,p_2}(\epsilon)$ and $\hat{G}(\epsilon)$ implies that
\begin{equation}\label{e1363}
\lim_{\epsilon\to 0,\epsilon\in\mathcal{E}_{p_1,p_2}}\sup_{(\boldsymbol{t},z)\in(\mathcal{T}_1\cap D(0,h''))\times (\mathcal{T}_2\cap D(0,h''))\times H_{\beta'}}|\partial_{\epsilon}^{\ell}u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)-H_{\ell}(\boldsymbol{t},z)|=0,
\end{equation}
for every $\ell\in\mathbb{N}$. By construction, the function $u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)$ is a solution of (\ref{ICP_main0}). Taking derivatives of order $m\ge0$ with respect to $\epsilon$ in that equation yields
\begin{multline}\label{e1367}
Q_1(\partial_{z})Q_2(\partial_{z})\partial_{t_1}\partial_{t_2}\partial_{\epsilon}^{m}u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)+\sum_{m_1+m_2=m}\frac{m!}{m_1!m_2!}\partial_\epsilon^{m_1}(\epsilon^{\tilde{\Delta}_2})t_2^{\tilde{d}_2}\partial_{t_2}^{\tilde{\delta}_{D_2}}R_{D_1,D_2}(\partial_z)\partial_\epsilon^{m_2}u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)\\
+\sum_{m_1+m_2=m}\frac{m!}{m_1!m_2!}\partial_\epsilon^{m_1}(\epsilon^{\tilde{\Delta}_3})t_2^{\tilde{d}_3}\partial_{t_2}^{\tilde{\delta}_{D_3}}R_{D_3}(\partial_z)\partial_\epsilon^{m_2}u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)\\
=\sum_{m_1+m_2=m}\frac{m!}{m_1!m_2!}\left(\sum_{m_{11}+m_{12}=m_1}\frac{m_1!}{m_{11}!m_{12}!}\partial_\epsilon^{m_{11}}P_1(\partial_z,\epsilon)\partial_{\epsilon}^{m_{12}}u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)\right)\\
\times \left(\sum_{m_{21}+m_{22}=m_2}\frac{m_2!}{m_{21}!m_{22}!}\partial_\epsilon^{m_{21}}P_2(\partial_z,\epsilon)\partial_{\epsilon}^{m_{22}}u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)\right)\\
+\sum_{\stackrel{0\le l_j\le D_j}{j=1,2}}\left(\sum_{m_{1}+m_2=m}\frac{m!}{m_1!m_2!}\partial_\epsilon^{m_1}(\epsilon^{\Delta_{l_1,l_2}})t_1^{d_{l_1}}t_2^{\tilde{d}_{l_2}}\partial_{t_1}^{\delta_{l_1}}\partial_{t_2}^{\tilde{\delta}_{l_2}}R_{l_1,l_2}(\partial_z)\partial_\epsilon^{m_2}u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)\right)\\
+ \partial_{\epsilon}^{m}f(\boldsymbol{t},z,\epsilon),
\end{multline}
for every $m\ge 0$ and $(\boldsymbol{t},z,\epsilon)\in (\mathcal{T}_1\cap D(0,h''))\times (\mathcal{T}_2\cap D(0,h''))\times H_{\beta'}\times\mathcal{E}_{p_1,p_2}$. Letting $\epsilon\to 0$ in (\ref{e1367}) and taking (\ref{e1363}) into account, we obtain a recursion formula for the coefficients of the formal solution:
\begin{multline}\label{e1368}
Q_1(\partial_{z})Q_2(\partial_z)\partial_{t_1}\partial_{t_2}H_{m}(\boldsymbol{t},z)+\frac{m!}{(m-\tilde{\Delta}_2)!}t_2^{\tilde{d}_2}\partial_{t_2}^{\tilde{\delta}_{D_2}}R_{D_1,D_2}(\partial_z)H_{m-\tilde{\Delta}_2}(\boldsymbol{t},z)\\
\hfill+\frac{m!}{(m-\tilde{\Delta}_3)!}t_2^{\tilde{d}_3}\partial_{t_2}^{\tilde{\delta}_{D_3}}R_{D_3}(\partial_z)H_{m-\tilde{\Delta}_3}(\boldsymbol{t},z)\\
=\sum_{m_{1}+m_{2}=m} \frac{m!}{m_{1}!m_{2}!}\left(\sum_{m_{11}+m_{12}=m_1}\frac{m_1!}{m_{11}!m_{12}!}\partial_\epsilon^{m_{11}}P_1(\partial_z,0)H_{m_{12}}(\boldsymbol{t},z)\right)\hfill\\
\times \left(\sum_{m_{21}+m_{22}=m_2}\frac{m_2!}{m_{21}!m_{22}!}\partial_\epsilon^{m_{21}}P_2(\partial_z,0)H_{m_{22}}(\boldsymbol{t},z)\right)\\
+\sum_{0\le l_1\le D_1,0\le l_2\le D_2}\frac{m!}{(m-\Delta_{l_1,l_2})!}t_1^{d_{l_1}}t_2^{\tilde{d}_{l_2}}\partial_{t_1}^{\delta_{l_1}}\partial_{t_2}^{\tilde{\delta}_{l_2}}R_{l_1,l_2}(\partial_z)H_{m-\Delta_{l_1,l_2}}(\boldsymbol{t},z)\\
+ \partial_{\epsilon}^{m}f(\boldsymbol{t},z,0),
\end{multline}
for every $m\ge \max\{\max_{1\le l_1\le D_1,1\le l_2\le D_2}\Delta_{l_1,l_2},\tilde{\Delta}_2,\tilde{\Delta}_3\}$, and $(\boldsymbol{t},z)\in (\mathcal{T}_1\cap D(0,h''))\times (\mathcal{T}_2\cap D(0,h''))\times H_{\beta'}$. From the analyticity of $f$ with respect to $\epsilon$ in a vicinity of the origin we get
\begin{equation}
f(\boldsymbol{t},z,\epsilon) = \sum_{m \geq 0} \frac{(\partial_{\epsilon}^{m}f)(\boldsymbol{t},z,0)}{m!}\epsilon^{m}, \label{Taylor_f}
\end{equation}
for every $\epsilon\in D(0,\epsilon_0)$ and $(\boldsymbol{t},z)$ as above. On the other hand, a direct inspection of the recursion formula (\ref{e1368}) together with (\ref{Taylor_f}) allows us to conclude that the formal power series $\hat{u}(\boldsymbol{t},z,\epsilon) = \sum_{m \geq 0} H_{m}(\boldsymbol{t},z)\epsilon^{m}/m!$ solves the equation (\ref{ICP_main0}).
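Let us also point out how the coefficients in (\ref{e1368}) arise from (\ref{e1367}): since
$$\partial_\epsilon^{m_1}(\epsilon^{\tilde{\Delta}_2})\Big|_{\epsilon=0}=\begin{cases}\tilde{\Delta}_2! & \hbox{if } m_1=\tilde{\Delta}_2,\\ 0 & \hbox{otherwise},\end{cases}$$
only the term $m_1=\tilde{\Delta}_2$ (hence $m_2=m-\tilde{\Delta}_2$) survives in the corresponding sum when $\epsilon\to0$, and the Leibniz factor $\frac{m!}{m_1!m_2!}$ combines with $\tilde{\Delta}_2!$ to produce $\frac{m!}{(m-\tilde{\Delta}_2)!}$; the same observation applies to the terms involving $\tilde{\Delta}_3$ and $\Delta_{l_1,l_2}$.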
\end{proof}
\begin{theo}\label{teo3}
Let $k_1>k_2$. Under the assumptions of Theorem~\ref{teo1}, a formal power series
$$\hat{u}(\boldsymbol{t},z,\epsilon)=\sum_{m\ge0}H_m(\boldsymbol{t},z)\epsilon^m/m!\in\mathbb{F}[[\epsilon]]$$
exists, with the following properties. $\hat{u}$ is a formal solution of (\ref{ICP_main0}). In addition to that, $\hat{u}$ can be split in the form
$$\hat{u}(\boldsymbol{t},z,\epsilon)=a(\boldsymbol{t},z,\epsilon)+\hat{u}_{1}(\boldsymbol{t},z,\epsilon)+\hat{u}_{2}(\boldsymbol{t},z,\epsilon),$$
where $a(\boldsymbol{t},z,\epsilon)\in\mathbb{F}\{\epsilon\}$, and $\hat{u}_{1},\hat{u}_{2}\in\mathbb{F}[[\epsilon]]$. Moreover, for every $p_1\in\{0,\ldots,\varsigma_1-1\}$ and $p_2\in\{0,\ldots,\varsigma_2-1\}$, the function $u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)$ can be written as
$$u_{p_1,p_2}(\boldsymbol{t},z,\epsilon)=a(\boldsymbol{t},z,\epsilon)+u^1_{p_1,p_2}(\boldsymbol{t},z,\epsilon)+u^2_{p_1,p_2}(\boldsymbol{t},z,\epsilon),$$
where $\epsilon\mapsto u^j_{p_1,p_2}(\boldsymbol{t},z,\epsilon)$ is an $\mathbb{F}-$valued function which admits $\hat{u}_{j}(\boldsymbol{t},z,\epsilon)$ as its $k_j-$Gevrey asymptotic expansion on $\mathcal{E}_{p_1,p_2}$, for $j=1,2$.
Moreover, assume that
$$\{((p_1^0,p_2^0),(p_1^1,p_2^1)), ((p_1^1,p_2^1),(p_1^2,p_2^2)), \ldots, ((p_1^{2y-1},p_2^{2y-1}),(p_1^{2y},p_2^{2y})) \}$$
is a subset of $\mathcal{U}_{k_1}$, for some positive integer $y$, and
$$\mathcal{E}_{p_1^{y},p_2^y}\subseteq S_{\pi/k_2}\subseteq\bigcup_{0\le j\le 2y}\mathcal{E}_{p_1^{j},p_2^{j}},$$
for some sector $S_{\pi/k_2}$ with opening larger than $\pi/k_2$. Then, $\hat{u}(\boldsymbol{t},z,\epsilon)$ is $(k_1,k_2)-$summable on $\mathcal{E}_{p_1^y,p_2^y}$ and its $(k_1,k_2)-$sum is $u_{p_1^y,p_2^y}(\boldsymbol{t},z,\epsilon)$ on $\mathcal{E}_{p_1^y,p_2^y}$.
\end{theo}
\begin{proof}
Let $(u_{p_1,p_2}(\boldsymbol{t},z,\epsilon))_{\begin{subarray}{l}0\le p_1\le \varsigma_1-1\\0\le p_2\le \varsigma_2-1\end{subarray}}$ be the family constructed in Theorem~\ref{teo1}. In this case, we have
$$\emptyset\neq \mathcal{U}_{k_2}:=\left(\{0,\ldots, \varsigma_1-1\}\times \{0,\ldots, \varsigma_2-1\}\right)^2\setminus\left(\mathcal{U}_{0}\cup\mathcal{U}_{k_1}\right),$$
and the sectors in the good covering have opening slightly larger than $\pi/k_1$.
The function $G_{p_1,p_2}(\epsilon):=(t_1,t_2,z)\mapsto u_{p_1,p_2}(t_1,t_2,z,\epsilon)$ belongs to $\mathcal{O}(\mathcal{E}_{p_1,p_2},\mathbb{F})$. We consider $(p_1,p_2)$ and $(p'_1,p'_2)$ in $\{0,\ldots,\varsigma_1-1\}\times \{0,\ldots,\varsigma_2-1\}$ such that $\mathcal{E}_{p_1,p_2}$ and $\mathcal{E}_{p'_1,p'_2}$ are consecutive sectors in the good covering, so their intersection is not empty. In view of (\ref{exp_small_difference_u_p11}) and (\ref{exp_small_difference_u_p12}), one has that $\Delta_{(p_1,p_2),(p'_1,p'_2)}(\epsilon):=G_{p_1,p_2}(\epsilon)- G_{p'_1,p'_2}(\epsilon)$ satisfies exponentially flat bounds of a certain Gevrey order, which is $k_1$ in the case that $((p_1,p_2),(p'_1,p'_2))\in\mathcal{U}_{k_1}$ and $k_2$ if $((p_1,p_2),(p'_1,p'_2))\in\mathcal{U}_{k_2}$. Multilevel-RS Theorem guarantees the existence of formal power series $\hat{G}(\epsilon),\hat{G}_1(\epsilon),\hat{G}_2(\epsilon)\in\mathbb{F}[[\epsilon]]$ such that
$$\hat{G}(\epsilon)=a(\epsilon)+\hat{G}_1(\epsilon)+\hat{G}_{2}(\epsilon),$$ and the splitting
$$G_{p_1,p_2}(\epsilon)=a(\epsilon)+G^1_{p_1,p_2}(\epsilon)+G^2_{p_1,p_2}(\epsilon),$$
for some $a\in\mathbb{F}\{\epsilon\}$, such that for every $(p_1,p_2)\in\{0,\ldots,\varsigma_1-1\}\times \{0,\ldots,\varsigma_2-1\}$, one has that $G^1_{p_1,p_2}(\epsilon)$ admits $\hat{G}_{1}(\epsilon)$ as its Gevrey asymptotic expansion of order $1/k_1$, and $G^2_{p_1,p_2}(\epsilon)$ admits $\hat{G}_{2}(\epsilon)$ as its Gevrey asymptotic expansion of order $1/k_2$. We define
$$\hat{G}(\epsilon)=:\hat{u}(\boldsymbol{t},z,\epsilon)=\sum_{m\ge0}H_m(\boldsymbol{t},z)\frac{\epsilon^{m}}{m!}.$$
Following analogous arguments as in the second part of the proof of Theorem~\ref{teo2}, we conclude that $\hat{u}(\boldsymbol{t},z,\epsilon)$ is a formal solution of (\ref{ICP_main0}).
\end{proof}